Merge remote-tracking branch 'origin/release-v1.61' into matrix-org-hotfixes

commit e3b00708bd

CHANGES.md | 88
@@ -1,3 +1,91 @@
(The entire Synapse 1.61.0rc1 section below is newly added above the existing 1.60.0 entry.)

Synapse 1.61.0rc1 (2022-06-07)
==============================

This release removes support for the non-standard feature known both as 'groups' and as 'communities', which has been superseded by *Spaces*.


Features
--------

- Add new `media_retention` options to the homeserver config for routinely cleaning up non-recently accessed media. ([\#12732](https://github.com/matrix-org/synapse/issues/12732), [\#12972](https://github.com/matrix-org/synapse/issues/12972), [\#12977](https://github.com/matrix-org/synapse/issues/12977))
- Experimental support for [MSC3772](https://github.com/matrix-org/matrix-spec-proposals/pull/3772): Push rule for mutually related events. ([\#12740](https://github.com/matrix-org/synapse/issues/12740), [\#12859](https://github.com/matrix-org/synapse/issues/12859))
- Update to the `check_event_for_spam` module callback: deprecate the current callback signature and replace it with a new signature that is both less ambiguous (replacing booleans with explicit allow/block) and more powerful (ability to return explicit error codes). ([\#12808](https://github.com/matrix-org/synapse/issues/12808))
- Add storage and module API methods to get monthly active users (and their corresponding appservices) within an optionally specified time range. ([\#12838](https://github.com/matrix-org/synapse/issues/12838), [\#12917](https://github.com/matrix-org/synapse/issues/12917))
- Support the new error code `ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED` from [MSC3823](https://github.com/matrix-org/matrix-spec-proposals/pull/3823). ([\#12845](https://github.com/matrix-org/synapse/issues/12845), [\#12923](https://github.com/matrix-org/synapse/issues/12923))
- Add a configurable background job to delete stale devices. ([\#12855](https://github.com/matrix-org/synapse/issues/12855))
- Improve URL previews for pages with empty elements. ([\#12951](https://github.com/matrix-org/synapse/issues/12951))
- Allow updating a user's password using the admin API without logging out their devices (see the sketch just below this list). Contributed by @jcgruenhage. ([\#12952](https://github.com/matrix-org/synapse/issues/12952))
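As a hedged illustration of that last feature (not part of the changelog itself): the password change goes through the existing user admin API and the new behaviour is selected with a `logout_devices` flag. The endpoint path and field names below are assumptions based on the Synapse admin API documentation, not on this diff.

```python
# Sketch only: update a user's password without logging out their devices.
# Assumes the v2 user admin endpoint (`PUT /_synapse/admin/v2/users/<user_id>`)
# and an admin access token; adjust names if the deployed API differs.
import requests

resp = requests.put(
    "https://homeserver.example.org/_synapse/admin/v2/users/@alice:example.org",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={"password": "new-password", "logout_devices": False},
)
resp.raise_for_status()
```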

Bugfixes
--------

- Always send an `access_token` in `/thirdparty/` requests to appservices, as required by the [Application Service API specification](https://spec.matrix.org/v1.1/application-service-api/#third-party-networks). ([\#12746](https://github.com/matrix-org/synapse/issues/12746))
- Implement [MSC3816](https://github.com/matrix-org/matrix-spec-proposals/pull/3816): sending the root event in a thread should count as having 'participated' in it. ([\#12766](https://github.com/matrix-org/synapse/issues/12766))
- Delete events from the `federation_inbound_events_staging` table when a room is purged through the admin API. ([\#12784](https://github.com/matrix-org/synapse/issues/12784))
- Fix a bug where we did not correctly handle invalid device list updates over federation. Contributed by Carl Bordum Hansen. ([\#12829](https://github.com/matrix-org/synapse/issues/12829))
- Fix a bug which allowed multiple async operations to access database locks concurrently. Contributed by @sumnerevans @ Beeper. ([\#12832](https://github.com/matrix-org/synapse/issues/12832))
- Fix an issue introduced in Synapse 0.34 where the `/notifications` endpoint would only return notifications if a user registered at least one pusher. Contributed by Famedly. ([\#12840](https://github.com/matrix-org/synapse/issues/12840))
- Fix a bug where servers using a Postgres database would fail to backfill from an insertion event when MSC2716 is enabled (`experimental_features.msc2716_enabled`). ([\#12843](https://github.com/matrix-org/synapse/issues/12843))
- Fix [MSC3787](https://github.com/matrix-org/matrix-spec-proposals/pull/3787) rooms being omitted from room directory, room summary and space hierarchy responses. ([\#12858](https://github.com/matrix-org/synapse/issues/12858))
- Fix a bug introduced in Synapse 1.54.0 which could sometimes cause exceptions when handling federated traffic. ([\#12877](https://github.com/matrix-org/synapse/issues/12877))
- Fix a bug introduced in Synapse 1.59.0 which caused room deletion to fail with a foreign key violation error. ([\#12889](https://github.com/matrix-org/synapse/issues/12889))
- Fix a long-standing bug which caused the `/messages` endpoint to return an incorrect `end` attribute when there were no more events. Contributed by @Vetchu. ([\#12903](https://github.com/matrix-org/synapse/issues/12903))
- Fix a bug introduced in Synapse 1.58.0 where `/sync` would fail if the most recent event in a room was a redaction of an event that has since been purged. ([\#12905](https://github.com/matrix-org/synapse/issues/12905))
- Fix a potential memory leak when generating thumbnails. ([\#12932](https://github.com/matrix-org/synapse/issues/12932))
- Fix a long-standing bug where a URL preview would break if the image failed to download. ([\#12950](https://github.com/matrix-org/synapse/issues/12950))


Improved Documentation
----------------------

- Fix typographical errors in documentation. ([\#12863](https://github.com/matrix-org/synapse/issues/12863))
- Fix documentation incorrectly stating the `sendToDevice` endpoint can be directed at generic workers. Contributed by Nick @ Beeper. ([\#12867](https://github.com/matrix-org/synapse/issues/12867))


Deprecations and Removals
-------------------------

- Remove support for the non-standard groups/communities feature from Synapse. ([\#12553](https://github.com/matrix-org/synapse/issues/12553), [\#12558](https://github.com/matrix-org/synapse/issues/12558), [\#12563](https://github.com/matrix-org/synapse/issues/12563), [\#12895](https://github.com/matrix-org/synapse/issues/12895), [\#12897](https://github.com/matrix-org/synapse/issues/12897), [\#12899](https://github.com/matrix-org/synapse/issues/12899), [\#12900](https://github.com/matrix-org/synapse/issues/12900), [\#12936](https://github.com/matrix-org/synapse/issues/12936), [\#12966](https://github.com/matrix-org/synapse/issues/12966))
- Remove the contributed `kick_users.py` script. It is broken under Python 3 and is not added to the environment when `pip install`ing Synapse. ([\#12908](https://github.com/matrix-org/synapse/issues/12908))
- Remove `contrib/jitsimeetbridge`. This was an unused experiment that hasn't been meaningfully changed since 2014. ([\#12909](https://github.com/matrix-org/synapse/issues/12909))
- Remove the unused `contrib/experiements/cursesio.py` script, which fails to run under Python 3. ([\#12910](https://github.com/matrix-org/synapse/issues/12910))
- Remove the unused `contrib/experiements/test_messaging.py` script, which also fails to run under Python 3. ([\#12911](https://github.com/matrix-org/synapse/issues/12911))


Internal Changes
----------------

- Test Synapse against Complement with workers. ([\#12810](https://github.com/matrix-org/synapse/issues/12810), [\#12933](https://github.com/matrix-org/synapse/issues/12933))
- Reduce the amount of state we pull from the DB. ([\#12811](https://github.com/matrix-org/synapse/issues/12811), [\#12964](https://github.com/matrix-org/synapse/issues/12964))
- Try other homeservers when re-syncing state for rooms with partial state. ([\#12812](https://github.com/matrix-org/synapse/issues/12812))
- Resume state re-syncing for rooms with partial state after a Synapse restart. ([\#12813](https://github.com/matrix-org/synapse/issues/12813))
- Remove the Mutual Rooms ([MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666)) endpoint's dependency on the User Directory. ([\#12836](https://github.com/matrix-org/synapse/issues/12836))
- Experimental: expand `check_event_for_spam` with the ability to return additional fields. This enables spam-checker implementations to experiment with mechanisms to give users more information about why they are blocked and whether any action is needed from them to be unblocked. ([\#12846](https://github.com/matrix-org/synapse/issues/12846))
- Remove `dont_notify` from the `.m.rule.room.server_acl` rule. ([\#12849](https://github.com/matrix-org/synapse/issues/12849))
- Remove the unstable `/hierarchy` endpoint from [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946). ([\#12851](https://github.com/matrix-org/synapse/issues/12851))
- Pull out less state when handling gaps in the room DAG. ([\#12852](https://github.com/matrix-org/synapse/issues/12852), [\#12904](https://github.com/matrix-org/synapse/issues/12904))
- Clean up the push rules datastore. ([\#12856](https://github.com/matrix-org/synapse/issues/12856))
- Correct a type annotation in the URL preview source code. ([\#12860](https://github.com/matrix-org/synapse/issues/12860))
- Update the `pyjwt` dependency to [2.4.0](https://github.com/jpadilla/pyjwt/releases/tag/2.4.0). ([\#12865](https://github.com/matrix-org/synapse/issues/12865))
- Enable the `/account/whoami` endpoint on Synapse worker processes. Contributed by Nick @ Beeper. ([\#12866](https://github.com/matrix-org/synapse/issues/12866))
- Enable the `batch_send` endpoint on Synapse worker processes. Contributed by Nick @ Beeper. ([\#12868](https://github.com/matrix-org/synapse/issues/12868))
- Don't generate empty AS transactions when the AS is flagged as down. Contributed by Nick @ Beeper. ([\#12869](https://github.com/matrix-org/synapse/issues/12869))
- Fix up the `state_store` variable naming. ([\#12871](https://github.com/matrix-org/synapse/issues/12871))
- Faster room joins: when querying the current state of the room, wait for state to be populated. ([\#12872](https://github.com/matrix-org/synapse/issues/12872))
- Avoid running queries which will never result in deletions. ([\#12879](https://github.com/matrix-org/synapse/issues/12879))
- Use constants for EDU types. ([\#12884](https://github.com/matrix-org/synapse/issues/12884))
- Reduce database load of `/sync` when presence is enabled. ([\#12885](https://github.com/matrix-org/synapse/issues/12885))
- Refactor `have_seen_events` to reduce memory consumed when processing federation traffic. ([\#12886](https://github.com/matrix-org/synapse/issues/12886))
- Refactor receipt linearization code. ([\#12888](https://github.com/matrix-org/synapse/issues/12888))
- Add type annotations to `synapse.logging.opentracing`. ([\#12894](https://github.com/matrix-org/synapse/issues/12894))
- Remove PyNaCl occurrences directly used in Synapse code. ([\#12902](https://github.com/matrix-org/synapse/issues/12902))
- Bump types-jsonschema from 4.4.1 to 4.4.6. ([\#12912](https://github.com/matrix-org/synapse/issues/12912))
- Rename storage classes. ([\#12913](https://github.com/matrix-org/synapse/issues/12913))
- Preparation for database schema simplifications: stop reading from `event_edges.room_id`. ([\#12914](https://github.com/matrix-org/synapse/issues/12914))
- Check if we are in a virtual environment before overriding the `PYTHONPATH` environment variable in the demo script. ([\#12916](https://github.com/matrix-org/synapse/issues/12916))
- Improve the logging when signature checks on events fail. ([\#12925](https://github.com/matrix-org/synapse/issues/12925))


Synapse 1.60.0 (2022-05-31)
===========================
Also removed in this commit: the 70 per-change newsfragment files under changelog.d/ that were folded into the changelog above. Each was a one-line file deleted with a `@@ -1 +0,0 @@` hunk (file names are not shown in this view). Their contents were:

- Remove support for the non-standard groups/communities feature from Synapse. (8 newsfragments)
- Add new `media_retention` options to the homeserver config for routinely cleaning up non-recently accessed media.
- Experimental support for [MSC3772](https://github.com/matrix-org/matrix-spec-proposals/pull/3772): Push rule for mutually related events. (2 newsfragments)
- Always send an `access_token` in `/thirdparty/` requests to appservices, as required by the [Matrix specification](https://spec.matrix.org/v1.1/application-service-api/#third-party-networks).
- Implement [MSC3816](https://github.com/matrix-org/matrix-spec-proposals/pull/3816): sending the root event in a thread should count as "participated" in it.
- Delete events from the `federation_inbound_events_staging` table when a room is purged through the admin API.
- Update to `check_event_for_spam`. Deprecate the current callback signature, replace it with a new signature that is both less ambiguous (replacing booleans with explicit allow/block) and more powerful (ability to return explicit error codes).
- Test Synapse against Complement with workers. (2 newsfragments)
- Reduce the amount of state we pull from the DB.
- Try other homeservers when re-syncing state for rooms with partial state.
- Resume state re-syncing for rooms with partial state after a Synapse restart.
- Fix a bug where we did not correctly handle invalid device list updates over federation. Contributed by Carl Bordum Hansen.
- Fixed a bug which allowed multiple async operations to access database locks concurrently. Contributed by @sumnerevans @ Beeper.
- Remove Mutual Rooms ([MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666)) endpoint dependency on the User Directory.
- Add storage and module API methods to get monthly active users (and their corresponding appservices) within an optionally specified time range. (2 newsfragments)
- Fix an issue introduced in Synapse 0.34 where the `/notifications` endpoint would only return notifications if a user registered at least one pusher. Contributed by Famedly.
- Fix bug where servers using a Postgres database would fail to backfill from an insertion event when MSC2716 is enabled (`experimental_features.msc2716_enabled`).
- Support the new error code "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED" from [MSC3823](https://github.com/matrix-org/matrix-spec-proposals/pull/3823). (2 newsfragments)
- Experimental: expand `check_event_for_spam` with ability to return additional fields. This enables spam-checker implementations to experiment with mechanisms to give users more information about why they are blocked and whether any action is needed from them to be unblocked.
- Remove `dont_notify` from the `.m.rule.room.server_acl` rule.
- Remove the unstable `/hierarchy` endpoint from [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946).
- Pull out less state when handling gaps in room DAG. (2 newsfragments)
- Add a configurable background job to delete stale devices.
- Clean-up the push rules datastore.
- Fix [MSC3787](https://github.com/matrix-org/matrix-spec-proposals/pull/3787) rooms being omitted from room directory, room summary and space hierarchy responses.
- Correct a type annotation in the URL preview source code.
- Fix typos in documentation.
- Update `pyjwt` dependency to [2.4.0](https://github.com/jpadilla/pyjwt/releases/tag/2.4.0).
- Enable the `/account/whoami` endpoint on synapse worker processes. Contributed by Nick @ Beeper.
- Fix documentation incorrectly stating the `sendToDevice` endpoint can be directed at generic workers. Contributed by Nick @ Beeper.
- Enable the `batch_send` endpoint on synapse worker processes. Contributed by Nick @ Beeper.
- Don't generate empty AS transactions when the AS is flagged as down. Contributed by Nick @ Beeper.
- Fix up the variable `state_store` naming.
- Faster room joins: when querying the current state of the room, wait for state to be populated.
- Fix a bug introduced in Synapse 1.54 which could sometimes cause exceptions when handling federated traffic.
- Avoid running queries which will never result in deletions.
- Use constants for EDU types.
- Reduce database load of `/sync` when presence is enabled.
- Refactor `have_seen_events` to reduce memory consumed when processing federation traffic.
- Refactor receipt linearization code.
- Fix a bug introduced in Synapse 1.59.0 which caused room deletion to fail with a foreign key violation.
- Add type annotations to `synapse.logging.opentracing`.
- Remove PyNaCl occurrences directly used in Synapse code.
- Fix a long-standing bug which caused the `/messages` endpoint to return an incorrect `end` attribute when there were no more events. Contributed by @Vetchu.
- Fix a bug introduced in Synapse 1.58.0 where `/sync` would fail if the most recent event in a room was a redaction of an event that has since been purged.
- Remove contributed `kick_users.py` script. This is broken under Python 3, and is not added to the environment when `pip install`ing Synapse.
- Remove `contrib/jitsimeetbridge`. This was an unused experiment that hasn't been meaningfully changed since 2014.
- Remove unused `contrib/experiements/cursesio.py` script, which fails to run under Python 3.
- Remove unused `contrib/experiements/test_messaging.py` script. This fails to run on Python 3.
- Bump types-jsonschema from 4.4.1 to 4.4.6.
- Rename storage classes.
- Preparation for database schema simplifications: stop reading from `event_edges.room_id`.
- Check if we are in a virtual environment before overriding the `PYTHONPATH` environment variable in the demo script.
- Improve the logging when signature checks on events fail.
- Fix potential memory leak when generating thumbnails.
- Fix a long-standing bug where a URL preview would break if the image failed to download.
- Improve URL previews for pages with empty elements.
- Allow updating a user's password using the admin API without logging out their devices. Contributed by @jcgruenhage.
@@ -1,9 +1,9 @@
-matrix-synapse-py3 (1.61.0~rc1+nmu1) UNRELEASED; urgency=medium
+matrix-synapse-py3 (1.61.0~rc1) stable; urgency=medium
 
-  * Non-maintainer upload.
   * Remove unused `jitsimeetbridge` experiment from `contrib` directory.
+  * New Synapse release 1.61.0rc1.
 
- -- Synapse Packaging team <packages@matrix.org>  Sun, 29 May 2022 14:44:45 +0100
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 07 Jun 2022 12:42:31 +0100
 
 matrix-synapse-py3 (1.60.0) stable; urgency=medium
@@ -1583,6 +1583,12 @@ been accessed, the media's creation time is used instead. Both thumbnails
 and the original media will be removed. If either of these options are unset,
 then media of that type will not be purged.
 
+Local or cached remote media that has been
+[quarantined](../../admin_api/media_admin_api.md#quarantining-media-in-a-room)
+will not be deleted. Similarly, local media that has been marked as
+[protected from quarantine](../../admin_api/media_admin_api.md#protecting-media-from-being-quarantined)
+will not be deleted.
+
 Example configuration:
 ```yaml
 media_retention:
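   # (Editor's sketch, not part of the diff: the example is cut off by the hunk
   # boundary above and presumably continues with per-media-type lifetimes. The
   # option names `local_media_lifetime`/`remote_media_lifetime` and the values
   # below are assumptions; check the rendered configuration docs for the
   # authoritative spelling.)
   local_media_lifetime: 90d
   remote_media_lifetime: 14d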
@@ -54,7 +54,7 @@ skip_gitignore = true
 
 [tool.poetry]
 name = "matrix-synapse"
-version = "1.60.0"
+version = "1.61.0rc1"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "Apache-2.0"
@@ -182,6 +182,7 @@ IGNORED_TABLES = {
     "groups",
     "local_group_membership",
     "local_group_updates",
+    "remote_profile_cache",
 }
@@ -245,6 +245,8 @@ class FederationSender(AbstractFederationSender):
         self.store = hs.get_datastores().main
         self.state = hs.get_state_handler()
 
+        self._storage_controllers = hs.get_storage_controllers()
+
         self.clock = hs.get_clock()
         self.is_mine_id = hs.is_mine_id

@@ -602,7 +604,9 @@ class FederationSender(AbstractFederationSender):
         room_id = receipt.room_id
 
         # Work out which remote servers should be poked and poke them.
-        domains_set = await self.state.get_current_hosts_in_room(room_id)
+        domains_set = await self._storage_controllers.state.get_current_hosts_in_room(
+            room_id
+        )
         domains = [
             d
             for d in domains_set
@@ -23,14 +23,7 @@ from synapse.api.errors import (
     StoreError,
     SynapseError,
 )
-from synapse.metrics.background_process_metrics import wrap_as_background_process
-from synapse.types import (
-    JsonDict,
-    Requester,
-    UserID,
-    create_requester,
-    get_domain_from_id,
-)
+from synapse.types import JsonDict, Requester, UserID, create_requester
 from synapse.util.caches.descriptors import cached
 from synapse.util.stringutils import parse_and_validate_mxc_uri

@@ -50,9 +43,6 @@ class ProfileHandler:
     delegate to master when necessary.
     """
 
-    PROFILE_UPDATE_MS = 60 * 1000
-    PROFILE_UPDATE_EVERY_MS = 24 * 60 * 60 * 1000
-
     def __init__(self, hs: "HomeServer"):
         self.store = hs.get_datastores().main
         self.clock = hs.get_clock()

@@ -73,11 +63,6 @@ class ProfileHandler:
 
         self._third_party_rules = hs.get_third_party_event_rules()
 
-        if hs.config.worker.run_background_tasks:
-            self.clock.looping_call(
-                self._update_remote_profile_cache, self.PROFILE_UPDATE_MS
-            )
-
     async def get_profile(self, user_id: str) -> JsonDict:
         target_user = UserID.from_string(user_id)

@@ -116,30 +101,6 @@ class ProfileHandler:
                 raise SynapseError(502, "Failed to fetch profile")
             raise e.to_synapse_error()
 
-    async def get_profile_from_cache(self, user_id: str) -> JsonDict:
-        """Get the profile information from our local cache. If the user is
-        ours then the profile information will always be correct. Otherwise,
-        it may be out of date/missing.
-        """
-        target_user = UserID.from_string(user_id)
-        if self.hs.is_mine(target_user):
-            try:
-                displayname = await self.store.get_profile_displayname(
-                    target_user.localpart
-                )
-                avatar_url = await self.store.get_profile_avatar_url(
-                    target_user.localpart
-                )
-            except StoreError as e:
-                if e.code == 404:
-                    raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
-                raise
-
-            return {"displayname": displayname, "avatar_url": avatar_url}
-        else:
-            profile = await self.store.get_from_remote_profile_cache(user_id)
-            return profile or {}
-
     async def get_displayname(self, target_user: UserID) -> Optional[str]:
         if self.hs.is_mine(target_user):
             try:

@@ -509,45 +470,3 @@ class ProfileHandler:
             # so we act as if we couldn't find the profile.
             raise SynapseError(403, "Profile isn't available", Codes.FORBIDDEN)
         raise
-
-    @wrap_as_background_process("Update remote profile")
-    async def _update_remote_profile_cache(self) -> None:
-        """Called periodically to check profiles of remote users we haven't
-        checked in a while.
-        """
-        entries = await self.store.get_remote_profile_cache_entries_that_expire(
-            last_checked=self.clock.time_msec() - self.PROFILE_UPDATE_EVERY_MS
-        )
-
-        for user_id, displayname, avatar_url in entries:
-            is_subscribed = await self.store.is_subscribed_remote_profile_for_user(
-                user_id
-            )
-            if not is_subscribed:
-                await self.store.maybe_delete_remote_profile_cache(user_id)
-                continue
-
-            try:
-                profile = await self.federation.make_query(
-                    destination=get_domain_from_id(user_id),
-                    query_type="profile",
-                    args={"user_id": user_id},
-                    ignore_backoff=True,
-                )
-            except Exception:
-                logger.exception("Failed to get avatar_url")
-
-                await self.store.update_remote_profile_cache(
-                    user_id, displayname, avatar_url
-                )
-                continue
-
-            new_name = profile.get("displayname")
-            if not isinstance(new_name, str):
-                new_name = None
-            new_avatar = profile.get("avatar_url")
-            if not isinstance(new_avatar, str):
-                new_avatar = None
-
-            # We always hit update to update the last_check timestamp
-            await self.store.update_remote_profile_cache(user_id, new_name, new_avatar)
@@ -59,6 +59,7 @@ class FollowerTypingHandler:
 
     def __init__(self, hs: "HomeServer"):
         self.store = hs.get_datastores().main
+        self._storage_controllers = hs.get_storage_controllers()
         self.server_name = hs.config.server.server_name
         self.clock = hs.get_clock()
         self.is_mine_id = hs.is_mine_id

@@ -131,7 +132,6 @@ class FollowerTypingHandler:
             return
 
         try:
-            users = await self.store.get_users_in_room(member.room_id)
             self._member_last_federation_poke[member] = self.clock.time_msec()
 
             now = self.clock.time_msec()

@@ -139,7 +139,10 @@ class FollowerTypingHandler:
                 now=now, obj=member, then=now + FEDERATION_PING_INTERVAL
             )
 
-            for domain in {get_domain_from_id(u) for u in users}:
+            hosts = await self._storage_controllers.state.get_current_hosts_in_room(
+                member.room_id
+            )
+            for domain in hosts:
                 if domain != self.server_name:
                     logger.debug("sending typing update to %s", domain)
                     self.federation.build_and_send_edu(
@@ -83,7 +83,7 @@ class QuarantineMediaByUser(RestServlet):
         requester = await self.auth.get_user_by_req(request)
         await assert_user_is_admin(self.auth, requester.user)
 
-        logging.info("Quarantining local media by user: %s", user_id)
+        logging.info("Quarantining media by user: %s", user_id)
 
         # Quarantine all media this user has uploaded
         num_quarantined = await self.store.quarantine_media_ids_by_user(

@@ -112,7 +112,7 @@ class QuarantineMediaByID(RestServlet):
         requester = await self.auth.get_user_by_req(request)
         await assert_user_is_admin(self.auth, requester.user)
 
-        logging.info("Quarantining local media by ID: %s/%s", server_name, media_id)
+        logging.info("Quarantining media by ID: %s/%s", server_name, media_id)
 
         # Quarantine this media id
         await self.store.quarantine_media_by_id(

@@ -140,9 +140,7 @@ class UnquarantineMediaByID(RestServlet):
     ) -> Tuple[int, JsonDict]:
         await assert_requester_is_admin(self.auth, request)
 
-        logging.info(
-            "Remove from quarantine local media by ID: %s/%s", server_name, media_id
-        )
+        logging.info("Remove from quarantine media by ID: %s/%s", server_name, media_id)
 
         # Remove from quarantine this media id
         await self.store.quarantine_media_by_id(server_name, media_id, None)
@@ -919,10 +919,14 @@ class MediaRepository:
         await self.delete_old_local_media(
             before_ts=local_media_threshold_timestamp_ms,
             keep_profiles=True,
+            delete_quarantined_media=False,
+            delete_protected_media=False,
         )
 
     async def delete_old_remote_media(self, before_ts: int) -> Dict[str, int]:
-        old_media = await self.store.get_remote_media_before(before_ts)
+        old_media = await self.store.get_remote_media_ids(
+            before_ts, include_quarantined_media=False
+        )
 
         deleted = 0
 

@@ -975,6 +979,8 @@ class MediaRepository:
         before_ts: int,
         size_gt: int = 0,
         keep_profiles: bool = True,
+        delete_quarantined_media: bool = False,
+        delete_protected_media: bool = False,
     ) -> Tuple[List[str], int]:
         """
         Delete local or remote media from this server by size and timestamp. Removes

@@ -982,18 +988,22 @@
 
         Args:
             before_ts: Unix timestamp in ms.
-                Files that were last used before this timestamp will be deleted
-            size_gt: Size of the media in bytes. Files that are larger will be deleted
+                Files that were last used before this timestamp will be deleted.
+            size_gt: Size of the media in bytes. Files that are larger will be deleted.
             keep_profiles: Switch to delete also files that are still used in image data
-                (e.g user profile, room avatar)
-                If false these files will be deleted
+                (e.g user profile, room avatar). If false these files will be deleted.
+            delete_quarantined_media: If True, media marked as quarantined will be deleted.
+            delete_protected_media: If True, media marked as protected will be deleted.
 
         Returns:
             A tuple of (list of deleted media IDs, total deleted media IDs).
         """
-        old_media = await self.store.get_local_media_before(
+        old_media = await self.store.get_local_media_ids(
             before_ts,
             size_gt,
             keep_profiles,
+            include_quarantined_media=delete_quarantined_media,
+            include_protected_media=delete_protected_media,
         )
         return await self._remove_local_media_from_disk(old_media)
@@ -172,10 +172,6 @@ class StateHandler:
         entry = await self.resolve_state_groups_for_events(room_id, latest_event_ids)
         return await self.store.get_joined_users_from_state(room_id, entry)
 
-    async def get_current_hosts_in_room(self, room_id: str) -> FrozenSet[str]:
-        event_ids = await self.store.get_latest_event_ids_in_room(room_id)
-        return await self.get_hosts_in_room_at_events(room_id, event_ids)
-
     async def get_hosts_in_room_at_events(
         self, room_id: str, event_ids: Collection[str]
     ) -> FrozenSet[str]:
@@ -71,6 +71,7 @@ class SQLBaseStore(metaclass=ABCMeta):
             self._attempt_to_invalidate_cache("is_host_joined", (room_id, host))
         if members_changed:
             self._attempt_to_invalidate_cache("get_users_in_room", (room_id,))
+            self._attempt_to_invalidate_cache("get_current_hosts_in_room", (room_id,))
             self._attempt_to_invalidate_cache(
                 "get_users_in_room_with_profiles", (room_id,)
             )
@@ -23,6 +23,7 @@ from typing import (
     List,
     Mapping,
     Optional,
+    Set,
     Tuple,
 )
 

@@ -482,3 +483,10 @@ class StateStorageController:
             room_id, StateFilter.from_types((key,))
         )
         return state_map.get(key)
+
+    async def get_current_hosts_in_room(self, room_id: str) -> Set[str]:
+        """Get current hosts in room based on current state."""
+
+        await self._partial_state_room_tracker.await_full_state(room_id)
+
+        return await self.stores.main.get_current_hosts_in_room(room_id)
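For context (not part of the diff): callers now reach the host list through the storage controllers rather than the state handler, as the `FederationSender` and typing-handler hunks above do. A minimal usage sketch, assuming `hs` is a `HomeServer`:

```python
from typing import Set

# Sketch: resolve the remote servers currently joined to a room via the new
# storage-controller method (it waits for any partial room state to be filled in).
async def remote_hosts_for_room(hs, room_id: str, my_server_name: str) -> Set[str]:
    hosts = await hs.get_storage_controllers().state.get_current_hosts_in_room(room_id)
    return {host for host in hosts if host != my_server_name}
```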
@@ -151,10 +151,6 @@ class DataStore(
             ],
         )
 
-        self._group_updates_id_gen = StreamIdGenerator(
-            db_conn, "local_group_updates", "stream_id"
-        )
-
         self._cache_id_gen: Optional[MultiWriterIdGenerator]
         if isinstance(self.database_engine, PostgresEngine):
             # We set the `writers` to an empty list here as we don't care about

@@ -197,20 +193,6 @@ class DataStore(
             prefilled_cache=curr_state_delta_prefill,
         )
 
-        _group_updates_prefill, min_group_updates_id = self.db_pool.get_cache_dict(
-            db_conn,
-            "local_group_updates",
-            entity_column="user_id",
-            stream_column="stream_id",
-            max_value=self._group_updates_id_gen.get_current_token(),
-            limit=1000,
-        )
-        self._group_updates_stream_cache = StreamChangeCache(
-            "_group_updates_stream_cache",
-            min_group_updates_id,
-            prefilled_cache=_group_updates_prefill,
-        )
-
         self._stream_order_on_start = self.get_room_max_stream_ordering()
         self._min_stream_order_on_start = self.get_room_min_stream_ordering()
@@ -251,12 +251,36 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
             "get_local_media_by_user_paginate_txn", get_local_media_by_user_paginate_txn
         )
 
-    async def get_local_media_before(
+    async def get_local_media_ids(
         self,
         before_ts: int,
         size_gt: int,
         keep_profiles: bool,
+        include_quarantined_media: bool,
+        include_protected_media: bool,
     ) -> List[str]:
+        """
+        Retrieve a list of media IDs from the local media store.
+
+        Args:
+            before_ts: Only retrieve IDs from media that was either last accessed
+                (or if never accessed, created) before the given UNIX timestamp in ms.
+            size_gt: Only retrieve IDs from media that has a size (in bytes) greater than
+                the given integer.
+            keep_profiles: If True, exclude media IDs from the results that are used in the
+                following situations:
+                    * global profile user avatar
+                    * per-room profile user avatar
+                    * room avatar
+                    * a user's avatar in the user directory
+            include_quarantined_media: If False, exclude media IDs from the results that have
+                been marked as quarantined.
+            include_protected_media: If False, exclude media IDs from the results that have
+                been marked as protected from quarantine.
+
+        Returns:
+            A list of local media IDs.
+        """
+
         # to find files that have never been accessed (last_access_ts IS NULL)
         # compare with `created_ts`

@@ -294,12 +318,24 @@
             )
             sql += sql_keep
 
-        def _get_local_media_before_txn(txn: LoggingTransaction) -> List[str]:
+        if include_quarantined_media is False:
+            # Do not include media that has been quarantined
+            sql += """
+            AND quarantined_by IS NULL
+            """
+
+        if include_protected_media is False:
+            # Do not include media that has been protected from quarantine
+            sql += """
+            AND NOT safe_from_quarantine
+            """
+
+        def _get_local_media_ids_txn(txn: LoggingTransaction) -> List[str]:
             txn.execute(sql, (before_ts, before_ts, size_gt))
             return [row[0] for row in txn]
 
         return await self.db_pool.runInteraction(
-            "get_local_media_before", _get_local_media_before_txn
+            "get_local_media_ids", _get_local_media_ids_txn
        )
 
     async def store_local_media(

@@ -599,15 +635,37 @@
             desc="store_remote_media_thumbnail",
         )
 
-    async def get_remote_media_before(self, before_ts: int) -> List[Dict[str, str]]:
+    async def get_remote_media_ids(
+        self, before_ts: int, include_quarantined_media: bool
+    ) -> List[Dict[str, str]]:
+        """
+        Retrieve a list of server name, media ID tuples from the remote media cache.
+
+        Args:
+            before_ts: Only retrieve IDs from media that was either last accessed
+                (or if never accessed, created) before the given UNIX timestamp in ms.
+            include_quarantined_media: If False, exclude media IDs from the results that have
+                been marked as quarantined.
+
+        Returns:
+            A list of tuples containing:
+                * The server name of homeserver where the media originates from,
+                * The ID of the media.
+        """
         sql = (
             "SELECT media_origin, media_id, filesystem_id"
             " FROM remote_media_cache"
             " WHERE last_access_ts < ?"
         )
 
+        if include_quarantined_media is False:
+            # Only include media that has not been quarantined
+            sql += """
+            AND quarantined_by IS NULL
+            """
+
         return await self.db_pool.execute(
-            "get_remote_media_before", self.db_pool.cursor_to_dict, sql, before_ts
+            "get_remote_media_ids", self.db_pool.cursor_to_dict, sql, before_ts
         )
 
     async def delete_remote_media(self, media_origin: str, media_id: str) -> None:
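To tie the two renamed store methods together, here is an illustrative usage sketch mirroring the call sites shown in the `MediaRepository` hunks earlier; the surrounding retention-job and threshold plumbing is assumed, not part of this diff:

```python
from typing import List, Tuple

async def collect_expired_media(store, local_threshold_ms: int, remote_threshold_ms: int) -> Tuple[List[str], list]:
    """Sketch: gather expired media via the renamed store methods.

    `store` is assumed to be the homeserver's main datastore; the thresholds are
    UNIX timestamps in milliseconds.
    """
    local_ids: List[str] = await store.get_local_media_ids(
        before_ts=local_threshold_ms,
        size_gt=0,
        keep_profiles=True,
        include_quarantined_media=False,  # keep quarantined media
        include_protected_media=False,    # keep protected media
    )
    remote_refs = await store.get_remote_media_ids(
        remote_threshold_ms, include_quarantined_media=False
    )
    return local_ids, remote_refs
```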
@@ -11,11 +11,10 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from typing import Any, Dict, List, Optional
+from typing import Optional
 
 from synapse.api.errors import StoreError
 from synapse.storage._base import SQLBaseStore
-from synapse.storage.database import LoggingTransaction
 from synapse.storage.databases.main.roommember import ProfileInfo

@@ -55,17 +54,6 @@ class ProfileWorkerStore(SQLBaseStore):
             desc="get_profile_avatar_url",
         )
 
-    async def get_from_remote_profile_cache(
-        self, user_id: str
-    ) -> Optional[Dict[str, Any]]:
-        return await self.db_pool.simple_select_one(
-            table="remote_profile_cache",
-            keyvalues={"user_id": user_id},
-            retcols=("displayname", "avatar_url"),
-            allow_none=True,
-            desc="get_from_remote_profile_cache",
-        )
-
     async def create_profile(self, user_localpart: str) -> None:
         await self.db_pool.simple_insert(
             table="profiles", values={"user_id": user_localpart}, desc="create_profile"

@@ -91,97 +79,6 @@ class ProfileWorkerStore(SQLBaseStore):
             desc="set_profile_avatar_url",
         )
 
-    async def update_remote_profile_cache(
-        self, user_id: str, displayname: Optional[str], avatar_url: Optional[str]
-    ) -> int:
-        return await self.db_pool.simple_update(
-            table="remote_profile_cache",
-            keyvalues={"user_id": user_id},
-            updatevalues={
-                "displayname": displayname,
-                "avatar_url": avatar_url,
-                "last_check": self._clock.time_msec(),
-            },
-            desc="update_remote_profile_cache",
-        )
-
-    async def maybe_delete_remote_profile_cache(self, user_id: str) -> None:
-        """Check if we still care about the remote user's profile, and if we
-        don't then remove their profile from the cache
-        """
-        subscribed = await self.is_subscribed_remote_profile_for_user(user_id)
-        if not subscribed:
-            await self.db_pool.simple_delete(
-                table="remote_profile_cache",
-                keyvalues={"user_id": user_id},
-                desc="delete_remote_profile_cache",
-            )
-
-    async def is_subscribed_remote_profile_for_user(self, user_id: str) -> bool:
-        """Check whether we are interested in a remote user's profile."""
-        res: Optional[str] = await self.db_pool.simple_select_one_onecol(
-            table="group_users",
-            keyvalues={"user_id": user_id},
-            retcol="user_id",
-            allow_none=True,
-            desc="should_update_remote_profile_cache_for_user",
-        )
-
-        if res:
-            return True
-
-        res = await self.db_pool.simple_select_one_onecol(
-            table="group_invites",
-            keyvalues={"user_id": user_id},
-            retcol="user_id",
-            allow_none=True,
-            desc="should_update_remote_profile_cache_for_user",
-        )
-
-        if res:
-            return True
-        return False
-
-    async def get_remote_profile_cache_entries_that_expire(
-        self, last_checked: int
-    ) -> List[Dict[str, str]]:
-        """Get all users who haven't been checked since `last_checked`"""
-
-        def _get_remote_profile_cache_entries_that_expire_txn(
-            txn: LoggingTransaction,
-        ) -> List[Dict[str, str]]:
-            sql = """
-                SELECT user_id, displayname, avatar_url
-                FROM remote_profile_cache
-                WHERE last_check < ?
-            """
-
-            txn.execute(sql, (last_checked,))
-
-            return self.db_pool.cursor_to_dict(txn)
-
-        return await self.db_pool.runInteraction(
-            "get_remote_profile_cache_entries_that_expire",
-            _get_remote_profile_cache_entries_that_expire_txn,
-        )
-
 
 class ProfileStore(ProfileWorkerStore):
-    async def add_remote_profile_cache(
-        self, user_id: str, displayname: str, avatar_url: str
-    ) -> None:
-        """Ensure we are caching the remote user's profiles.
-
-        This should only be called when `is_subscribed_remote_profile_for_user`
-        would return true for the user.
-        """
-        await self.db_pool.simple_upsert(
-            table="remote_profile_cache",
-            keyvalues={"user_id": user_id},
-            values={
-                "displayname": displayname,
-                "avatar_url": avatar_url,
-                "last_check": self._clock.time_msec(),
-            },
-            desc="add_remote_profile_cache",
-        )
+    pass
@@ -393,7 +393,6 @@ class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore):
             "partial_state_events",
             "events",
             "federation_inbound_events_staging",
-            "group_rooms",
             "local_current_membership",
             "partial_state_rooms_servers",
             "partial_state_rooms",

@@ -413,7 +412,6 @@ class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore):
             "e2e_room_keys",
             "event_push_summary",
             "pusher_throttle",
-            "group_summary_rooms",
             "room_account_data",
             "room_tags",
             # "rooms" happens last, to keep the foreign keys in the other tables
@@ -893,6 +893,43 @@ class RoomMemberWorkerStore(EventsWorkerStore):
 
         return True
 
+    @cached(iterable=True, max_entries=10000)
+    async def get_current_hosts_in_room(self, room_id: str) -> Set[str]:
+        """Get current hosts in room based on current state."""
+
+        # First we check if we already have `get_users_in_room` in the cache, as
+        # we can just calculate result from that
+        users = self.get_users_in_room.cache.get_immediate(
+            (room_id,), None, update_metrics=False
+        )
+        if users is not None:
+            return {get_domain_from_id(u) for u in users}
+
+        if isinstance(self.database_engine, Sqlite3Engine):
+            # If we're using SQLite then let's just always use
+            # `get_users_in_room` rather than funky SQL.
+            users = await self.get_users_in_room(room_id)
+            return {get_domain_from_id(u) for u in users}
+
+        # For PostgreSQL we can use a regex to pull out the domains from the
+        # joined users in `current_state_events` via regex.
+
+        def get_current_hosts_in_room_txn(txn: LoggingTransaction) -> Set[str]:
+            sql = """
+                SELECT DISTINCT substring(state_key FROM '@[^:]*:(.*)$')
+                FROM current_state_events
+                WHERE
+                    type = 'm.room.member'
+                    AND membership = 'join'
+                    AND room_id = ?
+            """
+            txn.execute(sql, (room_id,))
+            return {d for d, in txn}
+
+        return await self.db_pool.runInteraction(
+            "get_current_hosts_in_room", get_current_hosts_in_room_txn
+        )
+
     async def get_joined_hosts(
         self, room_id: str, state_entry: "_StateCacheEntry"
     ) -> FrozenSet[str]:
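A small illustration of what both branches of the new `get_current_hosts_in_room` compute (not part of the diff): the cache/SQLite paths map each joined user ID to its server name in Python, while the Postgres path extracts the same suffix directly in SQL with the `'@[^:]*:(.*)$'` regex.

```python
# get_domain_from_id is the helper used in the Python fallback above.
from synapse.types import get_domain_from_id

assert get_domain_from_id("@alice:example.org") == "example.org"
```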
@@ -70,6 +70,7 @@ Changes in SCHEMA_VERSION = 70:
 
 Changes in SCHEMA_VERSION = 71:
     - event_edges.room_id is no longer read from.
+    - Tables related to groups are no longer accessed.
 """
@@ -30,16 +30,16 @@ from tests.unittest import HomeserverTestCase, override_config
 
 class FederationSenderReceiptsTestCases(HomeserverTestCase):
     def make_homeserver(self, reactor, clock):
-        mock_state_handler = Mock(spec=["get_current_hosts_in_room"])
-        # Ensure a new Awaitable is created for each call.
-        mock_state_handler.get_current_hosts_in_room.return_value = make_awaitable(
-            ["test", "host2"]
-        )
-        return self.setup_test_homeserver(
-            state_handler=mock_state_handler,
+        hs = self.setup_test_homeserver(
             federation_transport_client=Mock(spec=["send_transaction"]),
         )
 
+        hs.get_storage_controllers().state.get_current_hosts_in_room = Mock(
+            return_value=make_awaitable({"test", "host2"})
+        )
+
+        return hs
+
     @override_config({"send_federation": True})
     def test_send_receipts(self):
         mock_send_transaction = (
@@ -129,10 +129,12 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):

         hs.get_event_auth_handler().check_host_in_room = check_host_in_room

-        def get_joined_hosts_for_room(room_id: str):
+        async def get_current_hosts_in_room(room_id: str):
             return {member.domain for member in self.room_members}

-        self.datastore.get_joined_hosts_for_room = get_joined_hosts_for_room
+        hs.get_storage_controllers().state.get_current_hosts_in_room = (
+            get_current_hosts_in_room
+        )

         async def get_users_in_room(room_id: str):
             return {str(u) for u in self.room_members}
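The stub above switches from a plain `def` to an `async def` because the storage-controller method it replaces is awaited by the code under test. A minimal, self-contained sketch of that pattern follows; it is illustrative only, and the module-level `room_members` list and the colon-splitting logic are assumptions made for the example.

import asyncio
from typing import Set

room_members = ["@alice:one.test", "@bob:two.test"]


# The replacement must itself be a coroutine function, since callers do
# `await ...get_current_hosts_in_room(room_id)`.
async def get_current_hosts_in_room(room_id: str) -> Set[str]:
    return {user_id.split(":", 1)[1] for user_id in room_members}


async def caller() -> None:
    hosts = await get_current_hosts_in_room("!room:test")
    assert hosts == {"one.test", "two.test"}


asyncio.run(caller())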
@@ -2467,7 +2467,6 @@ PURGE_TABLES = [
     "event_push_actions",
     "event_search",
     "events",
-    "group_rooms",
     "receipts_graph",
     "receipts_linearized",
     "room_aliases",
@@ -2484,7 +2483,6 @@ PURGE_TABLES = [
     "e2e_room_keys",
     "event_push_summary",
     "pusher_throttle",
-    "group_summary_rooms",
     "room_account_data",
     "room_tags",
     # "state_groups", # Current impl leaves orphaned state groups around.
@@ -53,13 +53,16 @@ class MediaRetentionTestCase(unittest.HomeserverTestCase):
         # Create a user to upload media with
         test_user_id = self.register_user("alice", "password")

-        # Inject media (3 images each; recently accessed, old access, never accessed)
-        # into both the local store and the remote cache
+        # Inject media (recently accessed, old access, never accessed, old access
+        # quarantined media) into both the local store and the remote cache, plus
+        # one additional local media that is marked as protected from quarantine.
         media_repository = hs.get_media_repository()
         test_media_content = b"example string"

-        def _create_media_and_set_last_accessed(
+        def _create_media_and_set_attributes(
             last_accessed_ms: Optional[int],
+            is_quarantined: Optional[bool] = False,
+            is_protected: Optional[bool] = False,
         ) -> str:
             # "Upload" some media to the local media store
             mxc_uri = self.get_success(
@@ -84,10 +87,31 @@ class MediaRetentionTestCase(unittest.HomeserverTestCase):
                 )
             )

+            if is_quarantined:
+                # Mark this media as quarantined
+                self.get_success(
+                    self.store.quarantine_media_by_id(
+                        server_name=self.hs.config.server.server_name,
+                        media_id=media_id,
+                        quarantined_by="@theadmin:test",
+                    )
+                )
+
+            if is_protected:
+                # Mark this media as protected from quarantine
+                self.get_success(
+                    self.store.mark_local_media_as_safe(
+                        media_id=media_id,
+                        safe=True,
+                    )
+                )
+
             return media_id

-        def _cache_remote_media_and_set_last_accessed(
-            media_id: str, last_accessed_ms: Optional[int]
+        def _cache_remote_media_and_set_attributes(
+            media_id: str,
+            last_accessed_ms: Optional[int],
+            is_quarantined: Optional[bool] = False,
         ) -> str:
             # Pretend to cache some remote media
             self.get_success(
@@ -112,23 +136,58 @@ class MediaRetentionTestCase(unittest.HomeserverTestCase):
                 )
             )

+            if is_quarantined:
+                # Mark this media as quarantined
+                self.get_success(
+                    self.store.quarantine_media_by_id(
+                        server_name=self.remote_server_name,
+                        media_id=media_id,
+                        quarantined_by="@theadmin:test",
+                    )
+                )
+
             return media_id

         # Start with the local media store
-        self.local_recently_accessed_media = _create_media_and_set_last_accessed(
-            self.THIRTY_DAYS_IN_MS
-        )
-        self.local_not_recently_accessed_media = _create_media_and_set_last_accessed(
-            self.ONE_DAY_IN_MS
-        )
-        self.local_never_accessed_media = _create_media_and_set_last_accessed(None)
+        self.local_recently_accessed_media = _create_media_and_set_attributes(
+            last_accessed_ms=self.THIRTY_DAYS_IN_MS,
+        )
+        self.local_not_recently_accessed_media = _create_media_and_set_attributes(
+            last_accessed_ms=self.ONE_DAY_IN_MS,
+        )
+        self.local_not_recently_accessed_quarantined_media = (
+            _create_media_and_set_attributes(
+                last_accessed_ms=self.ONE_DAY_IN_MS,
+                is_quarantined=True,
+            )
+        )
+        self.local_not_recently_accessed_protected_media = (
+            _create_media_and_set_attributes(
+                last_accessed_ms=self.ONE_DAY_IN_MS,
+                is_protected=True,
+            )
+        )
+        self.local_never_accessed_media = _create_media_and_set_attributes(
+            last_accessed_ms=None,
+        )

         # And now the remote media store
-        self.remote_recently_accessed_media = _cache_remote_media_and_set_last_accessed(
-            "a", self.THIRTY_DAYS_IN_MS
-        )
+        self.remote_recently_accessed_media = _cache_remote_media_and_set_attributes(
+            media_id="a",
+            last_accessed_ms=self.THIRTY_DAYS_IN_MS,
+        )
         self.remote_not_recently_accessed_media = (
-            _cache_remote_media_and_set_last_accessed("b", self.ONE_DAY_IN_MS)
+            _cache_remote_media_and_set_attributes(
+                media_id="b",
+                last_accessed_ms=self.ONE_DAY_IN_MS,
+            )
+        )
+        self.remote_not_recently_accessed_quarantined_media = (
+            _cache_remote_media_and_set_attributes(
+                media_id="c",
+                last_accessed_ms=self.ONE_DAY_IN_MS,
+                is_quarantined=True,
+            )
         )
         # Remote media will always have a "last accessed" attribute, as it would not
         # be fetched from the remote homeserver unless instigated by a user.
@@ -163,8 +222,20 @@ class MediaRetentionTestCase(unittest.HomeserverTestCase):
             ],
             not_purged=[
                 (self.hs.config.server.server_name, self.local_recently_accessed_media),
+                (
+                    self.hs.config.server.server_name,
+                    self.local_not_recently_accessed_quarantined_media,
+                ),
+                (
+                    self.hs.config.server.server_name,
+                    self.local_not_recently_accessed_protected_media,
+                ),
                 (self.remote_server_name, self.remote_recently_accessed_media),
                 (self.remote_server_name, self.remote_not_recently_accessed_media),
+                (
+                    self.remote_server_name,
+                    self.remote_not_recently_accessed_quarantined_media,
+                ),
             ],
         )

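The `not_purged` expectations above encode the behaviour these new test cases are after: media that is quarantined, or explicitly marked as safe from quarantine, should survive the retention job regardless of when it was last accessed. The rough predicate below captures that rule for the reader; it is an approximation, not Synapse's actual purge logic, and `retention_would_purge` and its parameters are hypothetical.

from typing import Optional


def retention_would_purge(
    *,
    last_access_ts_ms: Optional[int],
    now_ms: int,
    lifetime_ms: int,
    is_quarantined: bool,
    is_protected: bool,
) -> bool:
    # Quarantined media and media marked as safe from quarantine are kept.
    if is_quarantined or is_protected:
        return False
    # Media accessed within the configured lifetime is kept.
    if last_access_ts_ms is not None and now_ms - last_access_ts_ms < lifetime_ms:
        return False
    # Everything else (stale media) is eligible for purging.
    return True


assert not retention_would_purge(
    last_access_ts_ms=1,
    now_ms=100,
    lifetime_ms=10,
    is_quarantined=True,
    is_protected=False,
)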
@@ -199,6 +270,18 @@ class MediaRetentionTestCase(unittest.HomeserverTestCase):
                     self.hs.config.server.server_name,
                     self.local_not_recently_accessed_media,
                 ),
+                (
+                    self.hs.config.server.server_name,
+                    self.local_not_recently_accessed_quarantined_media,
+                ),
+                (
+                    self.hs.config.server.server_name,
+                    self.local_not_recently_accessed_protected_media,
+                ),
+                (
+                    self.remote_server_name,
+                    self.remote_not_recently_accessed_quarantined_media,
+                ),
                 (self.hs.config.server.server_name, self.local_never_accessed_media),
             ],
         )