Merge branch 'release-v1.55' into matrix-org-hotfixes

commit 2207fa50b4

CHANGES.md (84 additions)
@@ -1,3 +1,87 @@
+Synapse 1.55.0rc1 (2022-03-15)
+==============================
+
+This release removes a workaround introduced in Synapse 1.50.0 for Mjolnir compatibility. **This breaks compatibility with Mjolnir 1.3.1 and earlier ([\#11700](https://github.com/matrix-org/synapse/issues/11700)).** Mjolnir users should upgrade Mjolnir before upgrading Synapse to this version.
+
+
+Features
+--------
+
+- Add third-party rules callbacks `check_can_shutdown_room` and `check_can_deactivate_user`. ([\#12028](https://github.com/matrix-org/synapse/issues/12028))
+- Improve performance of logging in for large accounts. ([\#12132](https://github.com/matrix-org/synapse/issues/12132))
+- Add experimental env var `SYNAPSE_ASYNC_IO_REACTOR` that causes Synapse to use the asyncio reactor for Twisted. ([\#12135](https://github.com/matrix-org/synapse/issues/12135))
+- Support the stable identifiers from [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440): threads. ([\#12151](https://github.com/matrix-org/synapse/issues/12151))
+- Add a new Jinja2 template filter to extract the local part of an email address. ([\#12212](https://github.com/matrix-org/synapse/issues/12212))
+
+
+Bugfixes
+--------
+
+- Use the proper serialization format for bundled thread aggregations. The bug has existed since Synapse v1.48.0. ([\#12090](https://github.com/matrix-org/synapse/issues/12090))
+- Fix a long-standing bug when redacting events with relations. ([\#12113](https://github.com/matrix-org/synapse/issues/12113), [\#12121](https://github.com/matrix-org/synapse/issues/12121), [\#12130](https://github.com/matrix-org/synapse/issues/12130), [\#12189](https://github.com/matrix-org/synapse/issues/12189))
+- Fix a bug introduced in Synapse 1.7.2 whereby background updates are never run with the default background batch size. ([\#12157](https://github.com/matrix-org/synapse/issues/12157))
+- Fix a bug where non-standard information was returned from the `/hierarchy` API. Introduced in Synapse v1.41.0. ([\#12175](https://github.com/matrix-org/synapse/issues/12175))
+- Fix a bug introduced in Synapse 1.54.0 that broke background updates on sqlite homeservers while search was disabled. ([\#12215](https://github.com/matrix-org/synapse/issues/12215))
+
+
+Improved Documentation
+----------------------
+
+- Fix complexity checking config example in [Resource Constrained Devices](https://matrix-org.github.io/synapse/v1.54/other/running_synapse_on_single_board_computers.html) docs page. ([\#11998](https://github.com/matrix-org/synapse/issues/11998))
+- Improve documentation for demo scripts. ([\#12143](https://github.com/matrix-org/synapse/issues/12143))
+- Updates to the Room DAG concepts development document. ([\#12179](https://github.com/matrix-org/synapse/issues/12179))
+- Document that the `typing`, `to_device`, `account_data`, `receipts`, and `presence` stream writers can each only be used on a single worker. ([\#12196](https://github.com/matrix-org/synapse/issues/12196))
+- Document that contributors can sign off privately by email. ([\#12204](https://github.com/matrix-org/synapse/issues/12204))
+
+
+Deprecations and Removals
+-------------------------
+
+- **Remove workaround introduced in Synapse 1.50.0 for Mjolnir compatibility. Breaks compatibility with Mjolnir 1.3.1 and earlier. ([\#11700](https://github.com/matrix-org/synapse/issues/11700))**
+- Remove backwards compatibility with pagination tokens from the `/relations` and `/aggregations` endpoints generated from Synapse < v1.52.0. ([\#12138](https://github.com/matrix-org/synapse/issues/12138))
+- The groups/communities feature in Synapse has been deprecated. ([\#12200](https://github.com/matrix-org/synapse/issues/12200))
+
+
+Internal Changes
+----------------
+
+- Simplify the `ApplicationService` class' set of public methods related to interest checking. ([\#11915](https://github.com/matrix-org/synapse/issues/11915))
+- Add config settings for background update parameters. ([\#11980](https://github.com/matrix-org/synapse/issues/11980))
+- Correct type hints for txredis. ([\#12042](https://github.com/matrix-org/synapse/issues/12042))
+- Limit the size of `aggregation_key` on annotations. ([\#12101](https://github.com/matrix-org/synapse/issues/12101))
+- Add type hints to tests files. ([\#12108](https://github.com/matrix-org/synapse/issues/12108), [\#12146](https://github.com/matrix-org/synapse/issues/12146), [\#12207](https://github.com/matrix-org/synapse/issues/12207), [\#12208](https://github.com/matrix-org/synapse/issues/12208))
+- Move scripts to Synapse package and expose as setuptools entry points. ([\#12118](https://github.com/matrix-org/synapse/issues/12118))
+- Add support for cancellation to `ReadWriteLock`. ([\#12120](https://github.com/matrix-org/synapse/issues/12120))
+- Fix data validation to compare to lists, not sequences. ([\#12128](https://github.com/matrix-org/synapse/issues/12128))
+- Fix CI not attaching source distributions and wheels to the GitHub releases. ([\#12131](https://github.com/matrix-org/synapse/issues/12131))
+- Remove unused mocks from `test_typing`. ([\#12136](https://github.com/matrix-org/synapse/issues/12136))
+- Give `scripts-dev` scripts suffixes for neater CI config. ([\#12137](https://github.com/matrix-org/synapse/issues/12137))
+- Move `synctl` into `synapse._scripts` and expose as an entry point. ([\#12140](https://github.com/matrix-org/synapse/issues/12140))
+- Move the snapcraft configuration file to `contrib`. ([\#12142](https://github.com/matrix-org/synapse/issues/12142))
+- Enable [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) Complement tests in CI. ([\#12144](https://github.com/matrix-org/synapse/issues/12144))
+- Enable [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) Complement tests in CI. ([\#12145](https://github.com/matrix-org/synapse/issues/12145))
+- Add test for `ObservableDeferred`'s cancellation behaviour. ([\#12149](https://github.com/matrix-org/synapse/issues/12149))
+- Use `ParamSpec` in type hints for `synapse.logging.context`. ([\#12150](https://github.com/matrix-org/synapse/issues/12150))
+- Prune unused jobs from `tox` config. ([\#12152](https://github.com/matrix-org/synapse/issues/12152))
+- Move CI checks out of tox, to facilitate a move to using poetry. ([\#12153](https://github.com/matrix-org/synapse/issues/12153))
+- Avoid generating state groups for local out-of-band leaves. ([\#12154](https://github.com/matrix-org/synapse/issues/12154))
+- Avoid trying to calculate the state at outlier events. ([\#12155](https://github.com/matrix-org/synapse/issues/12155), [\#12173](https://github.com/matrix-org/synapse/issues/12173), [\#12202](https://github.com/matrix-org/synapse/issues/12202))
+- Fix some type annotations. ([\#12156](https://github.com/matrix-org/synapse/issues/12156))
+- Add type hints for `ObservableDeferred` attributes. ([\#12159](https://github.com/matrix-org/synapse/issues/12159))
+- Use a prebuilt Action for the `tests-done` CI job. ([\#12161](https://github.com/matrix-org/synapse/issues/12161))
+- Reduce number of DB queries made during processing of `/sync`. ([\#12163](https://github.com/matrix-org/synapse/issues/12163))
+- Add `delay_cancellation` utility function, which behaves like `stop_cancellation` but waits until the original `Deferred` resolves before raising a `CancelledError`. ([\#12180](https://github.com/matrix-org/synapse/issues/12180))
+- Retry HTTP replication failures; this should prevent 502 errors when restarting stateful workers (main, event persisters, stream writers). Contributed by Nick @ Beeper. ([\#12182](https://github.com/matrix-org/synapse/issues/12182))
+- Add cancellation support to `@cached` and `@cachedList` decorators. ([\#12183](https://github.com/matrix-org/synapse/issues/12183))
+- Remove unused variables. ([\#12187](https://github.com/matrix-org/synapse/issues/12187))
+- Add combined test for HTTP pusher and push rule. Contributed by Nick @ Beeper. ([\#12188](https://github.com/matrix-org/synapse/issues/12188))
+- Rename `HomeServer.get_tcp_replication` to `get_replication_command_handler`. ([\#12192](https://github.com/matrix-org/synapse/issues/12192))
+- Remove some dead code. ([\#12197](https://github.com/matrix-org/synapse/issues/12197))
+- Fix a misleading comment in the function `check_event_for_spam`. ([\#12203](https://github.com/matrix-org/synapse/issues/12203))
+- Remove unnecessary `pass` statements. ([\#12206](https://github.com/matrix-org/synapse/issues/12206))
+- Update the SSO username picker template to comply with SIWA guidelines. ([\#12210](https://github.com/matrix-org/synapse/issues/12210))
+- Improve code documentation for the typing stream over replication. ([\#12211](https://github.com/matrix-org/synapse/issues/12211))
+
+
 Synapse 1.54.0 (2022-03-08)
 ===========================
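One note on the `SYNAPSE_ASYNC_IO_REACTOR` feature above: Twisted's asyncio reactor has to be installed before anything imports `twisted.internet.reactor`. A minimal sketch of that mechanism (illustrative only, not Synapse's actual startup code):

```python
# Illustrative only: how an env var can switch Twisted onto the asyncio
# reactor. The install must happen before twisted.internet.reactor is imported.
import os

if os.environ.get("SYNAPSE_ASYNC_IO_REACTOR"):
    import asyncio

    from twisted.internet import asyncioreactor

    asyncioreactor.install(asyncio.new_event_loop())

from twisted.internet import reactor  # the asyncio-backed reactor, if installed

print(type(reactor))
```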
The following one-line hunks delete the `changelog.d` entries that were folded into CHANGES.md above.

@@ -1 +0,0 @@
-Simplify the `ApplicationService` class' set of public methods related to interest checking.

@@ -1 +0,0 @@
-Fix complexity checking config example in [Resource Constrained Devices](https://matrix-org.github.io/synapse/v1.54/other/running_synapse_on_single_board_computers.html) docs page.

@@ -1 +0,0 @@
-Add third-party rules callbacks `check_can_shutdown_room` and `check_can_deactivate_user`.

@@ -1 +0,0 @@
-Correct type hints for txredis.

@@ -1 +0,0 @@
-Use the proper serialization format for bundled thread aggregations. The bug has existed since Synapse v1.48.0.

@@ -1 +0,0 @@
-Limit the size of `aggregation_key` on annotations.

@@ -1 +0,0 @@
-Add type hints to `tests/rest/client`.

@@ -1 +0,0 @@
-Fix a long-standing bug when redacting events with relations.

@@ -1 +0,0 @@
-Move scripts to Synapse package and expose as setuptools entry points.

@@ -1 +0,0 @@
-Fix a long-standing bug when redacting events with relations.

@@ -1 +0,0 @@
-Fix data validation to compare to lists, not sequences.

@@ -1 +0,0 @@
-Fix a long-standing bug when redacting events with relations.

@@ -1 +0,0 @@
-Fix CI not attaching source distributions and wheels to the GitHub releases.

@@ -1 +0,0 @@
-Improve performance of logging in for large accounts.

@@ -1 +0,0 @@
-Add experimental env var `SYNAPSE_ASYNC_IO_REACTOR` that causes Synapse to use the asyncio reactor for Twisted.

@@ -1 +0,0 @@
-Remove unused mocks from `test_typing`.

@@ -1 +0,0 @@
-Give `scripts-dev` scripts suffixes for neater CI config.

@@ -1 +0,0 @@
-Remove backwards compatibility with pagination tokens from the `/relations` and `/aggregations` endpoints generated from Synapse < v1.52.0.

@@ -1 +0,0 @@
-Move `synctl` into `synapse._scripts` and expose as an entry point.

@@ -1 +0,0 @@
-Move the snapcraft configuration file to `contrib`.

@@ -1 +0,0 @@
-Improve documentation for demo scripts.

@@ -1 +0,0 @@
-Enable [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) Complement tests in CI.

@@ -1 +0,0 @@
-Enable [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) Complement tests in CI.

@@ -1 +0,0 @@
-Add type hints to `tests/rest`.

@@ -1 +0,0 @@
-Add test for `ObservableDeferred`'s cancellation behaviour.

@@ -1 +0,0 @@
-Use `ParamSpec` in type hints for `synapse.logging.context`.

@@ -1 +0,0 @@
-Support the stable identifiers from [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440): threads.

@@ -1 +0,0 @@
-Prune unused jobs from `tox` config.

@@ -1 +0,0 @@
-Move CI checks out of tox, to facilitate a move to using poetry.

@@ -1 +0,0 @@
-Avoid generating state groups for local out-of-band leaves.

@@ -1 +0,0 @@
-Avoid trying to calculate the state at outlier events.

@@ -1 +0,0 @@
-Fix some type annotations.

@@ -1 +0,0 @@
-Fix a bug introduced in #4864 whereby background updates are never run with the default background batch size.

@@ -1 +0,0 @@
-Add type hints for `ObservableDeferred` attributes.

@@ -1 +0,0 @@
-Use a prebuilt Action for the `tests-done` CI job.

@@ -1 +0,0 @@
-Reduce number of DB queries made during processing of `/sync`.

@@ -1 +0,0 @@
-Avoid trying to calculate the state at outlier events.

@@ -1 +0,0 @@
-Fix a bug where non-standard information was returned from the `/hierarchy` API. Introduced in Synapse v1.41.0.

@@ -1 +0,0 @@
-Updates to the Room DAG concepts development document.

@@ -1 +0,0 @@
-Retry HTTP replication failures; this should prevent 502 errors when restarting stateful workers (main, event persisters, stream writers). Contributed by Nick @ Beeper.

@@ -1 +0,0 @@
-Remove unused variables.

@@ -1 +0,0 @@
-Fix a long-standing bug when redacting events with relations.

@@ -1 +0,0 @@
-Rename `HomeServer.get_tcp_replication` to `get_replication_command_handler`.

@@ -1 +0,0 @@
-Remove some dead code.
@@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.55.0~rc1) stable; urgency=medium
+
+  * New synapse release 1.55.0~rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 15 Mar 2022 10:59:31 +0000
+
 matrix-synapse-py3 (1.54.0) stable; urgency=medium
 
   * New synapse release 1.54.0.
@@ -458,6 +458,17 @@ Git allows you to add this signoff automatically when using the `-s`
 flag to `git commit`, which uses the name and email set in your
 `user.name` and `user.email` git configs.
 
+### Private Sign off
+
+If you would like to provide your legal name privately to the Matrix.org
+Foundation (instead of in a public commit or comment), you can do so
+by emailing your legal name and a link to the pull request to
+[dco@matrix.org](mailto:dco@matrix.org?subject=Private%20sign%20off).
+It helps to include "sign off" or similar in the subject line. You will then
+be instructed further.
+
+Once private sign off is complete, doing so for future contributions will not
+be required.
+
 # 10. Turn feedback into better code.
@@ -1947,8 +1947,14 @@ saml2_config:
 #
 # localpart_template: Jinja2 template for the localpart of the MXID.
 #    If this is not set, the user will be prompted to choose their
-#    own username (see 'sso_auth_account_details.html' in the 'sso'
-#    section of this file).
+#    own username (see the documentation for the
+#    'sso_auth_account_details.html' template). This template can
+#    use the 'localpart_from_email' filter.
+#
+# confirm_localpart: Whether to prompt the user to validate (or
+#    change) the generated localpart (see the documentation for the
+#    'sso_auth_account_details.html' template), instead of
+#    registering the account right away.
 #
 # display_name_template: Jinja2 template for the display name to set
 #    on first login. If unset, no displayname will be set.
@@ -2729,3 +2735,35 @@ redis:
   # Optional password if configured on the Redis instance
   #
   #password: <secret_password>
+
+
+## Background Updates ##
+
+# Background updates are database updates that are run in the background in batches.
+# The duration, minimum batch size, default batch size, whether to sleep between batches and if so, how long to
+# sleep can all be configured. This is helpful to speed up or slow down the updates.
+#
+background_updates:
+    # How long in milliseconds to run a batch of background updates for. Defaults to 100. Uncomment and set
+    # a time to change the default.
+    #
+    #background_update_duration_ms: 500
+
+    # Whether to sleep between updates. Defaults to True. Uncomment to change the default.
+    #
+    #sleep_enabled: false
+
+    # If sleeping between updates, how long in milliseconds to sleep for. Defaults to 1000. Uncomment
+    # and set a duration to change the default.
+    #
+    #sleep_duration_ms: 300
+
+    # Minimum size a batch of background updates can be. Must be greater than 0. Defaults to 1. Uncomment and
+    # set a size to change the default.
+    #
+    #min_batch_size: 10
+
+    # The batch size to use for the first iteration of a new background update. The default is 100.
+    # Uncomment and set a size to change the default.
+    #
+    #default_batch_size: 50
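A rough sketch of how these knobs could interact in a batch loop. The function below is hypothetical, for illustration only; it is not Synapse's real updater:

```python
import time
from typing import Callable

def run_background_update(
    do_batch: Callable[[int], int],  # processes up to n items, returns count done
    duration_ms: int = 100,
    sleep_enabled: bool = True,
    sleep_duration_ms: int = 1000,
    min_batch_size: int = 1,
    batch_size: int = 100,  # the default_batch_size used for the first iteration
) -> None:
    while True:
        start = time.monotonic()
        done = do_batch(batch_size)
        if done == 0:  # nothing left to update
            return
        elapsed_ms = (time.monotonic() - start) * 1000 or 1
        # Rescale the batch so the next one takes roughly `duration_ms`.
        batch_size = max(min_batch_size, int(done * duration_ms / elapsed_ms))
        if sleep_enabled:
            time.sleep(sleep_duration_ms / 1000)
```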
@@ -36,6 +36,13 @@ Turns a `mxc://` URL for media content into an HTTP(S) one using the homeserver's
 
 Example: `message.sender_avatar_url|mxc_to_http(32,32)`
 
+```python
+localpart_from_email(address: str) -> str
+```
+
+Returns the local part of an email address (e.g. `alice` in `alice@example.com`).
+
+Example: `user.email_address|localpart_from_email`
+
 ## Email templates
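The filter is easy to try with plain Jinja2. The registration below illustrates the behaviour documented above; it is not Synapse's internal wiring:

```python
from jinja2 import Environment

def localpart_from_email(address: str) -> str:
    """Return the part of an email address before the final '@'."""
    return address.rsplit("@", 1)[0]

env = Environment()
env.filters["localpart_from_email"] = localpart_from_email

template = env.from_string("{{ user_email | localpart_from_email }}")
assert template.render(user_email="alice@example.com") == "alice"
```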
@@ -176,8 +183,11 @@ Below are the templates Synapse will look for when generating pages related to SSO:
       for the brand of the IdP
     * `user_attributes`: an object containing details about the user that
       we received from the IdP. May have the following attributes:
-        * display_name: the user's display_name
-        * emails: a list of email addresses
+        * `display_name`: the user's display name
+        * `emails`: a list of email addresses
+        * `localpart`: the local part of the Matrix user ID to register,
+          if `localpart_template` is set in the mapping provider configuration (empty
+          string if not)
   The template should render a form which submits the following fields:
   * `username`: the localpart of the user's chosen user id
 * `sso_new_user_consent.html`: HTML page allowing the user to consent to the
@@ -85,6 +85,20 @@ process, for example:
     dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
 ```
 
+# Upgrading to v1.56.0
+
+## Groups/communities feature has been deprecated
+
+The non-standard groups/communities feature in Synapse has been deprecated and will
+be disabled by default in Synapse v1.58.0.
+
+You can test disabling it by adding the following to your homeserver configuration:
+
+```yaml
+experimental_features:
+  groups_enabled: false
+```
+
 # Upgrading to v1.55.0
 
 ## `synctl` script has been moved
@@ -106,6 +120,14 @@ You will need to ensure `synctl` is on your `PATH`.
   automatically, though you might need to activate a virtual environment
   depending on how you installed Synapse.
 
+
+## Compatibility dropped for Mjolnir 1.3.1 and earlier
+
+Synapse v1.55.0 drops support for Mjolnir 1.3.1 and earlier.
+If you use the Mjolnir module to moderate your homeserver,
+please upgrade Mjolnir to version 1.3.2 or later before upgrading Synapse.
+
+
 # Upgrading to v1.54.0
 
 ## Legacy structured logging configuration removal
@@ -351,8 +351,11 @@ is only supported with Redis-based replication.)
 
 To enable this, the worker must have a HTTP replication listener configured,
 have a `worker_name` and be listed in the `instance_map` config. The same worker
-can handle multiple streams. For example, to move event persistence off to a
-dedicated worker, the shared configuration would include:
+can handle multiple streams, but unless otherwise documented, each stream can only
+have a single writer.
+
+For example, to move event persistence off to a dedicated worker, the shared
+configuration would include:
 
 ```yaml
 instance_map:
@@ -370,8 +373,8 @@ streams and the endpoints associated with them:
 
 ##### The `events` stream
 
-The `events` stream also experimentally supports having multiple writers, where
-work is sharded between them by room ID. Note that you *must* restart all worker
+The `events` stream experimentally supports having multiple writers, where work
+is sharded between them by room ID. Note that you *must* restart all worker
 instances when adding or removing event persisters. An example `stream_writers`
 configuration with multiple writers:
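To make the room-ID sharding concrete, here is a toy illustration of picking a writer per room; Synapse's real hashing scheme may differ:

```python
import zlib

EVENT_PERSISTERS = ["event_persister1", "event_persister2"]

def writer_for_room(room_id: str) -> str:
    # Stable choice: the same room always maps to the same persister,
    # which is why all workers must restart when the list changes.
    index = zlib.crc32(room_id.encode("utf-8")) % len(EVENT_PERSISTERS)
    return EVENT_PERSISTERS[index]

assert writer_for_room("!abc:example.com") == writer_for_room("!abc:example.com")
```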
@@ -384,38 +387,38 @@ stream_writers:
 
 ##### The `typing` stream
 
-The following endpoints should be routed directly to the workers configured as
-stream writers for the `typing` stream:
+The following endpoints should be routed directly to the worker configured as
+the stream writer for the `typing` stream:
 
     ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing
 
 ##### The `to_device` stream
 
-The following endpoints should be routed directly to the workers configured as
-stream writers for the `to_device` stream:
+The following endpoints should be routed directly to the worker configured as
+the stream writer for the `to_device` stream:
 
     ^/_matrix/client/(api/v1|r0|v3|unstable)/sendToDevice/
 
 ##### The `account_data` stream
 
-The following endpoints should be routed directly to the workers configured as
-stream writers for the `account_data` stream:
+The following endpoints should be routed directly to the worker configured as
+the stream writer for the `account_data` stream:
 
     ^/_matrix/client/(api/v1|r0|v3|unstable)/.*/tags
     ^/_matrix/client/(api/v1|r0|v3|unstable)/.*/account_data
 
 ##### The `receipts` stream
 
-The following endpoints should be routed directly to the workers configured as
-stream writers for the `receipts` stream:
+The following endpoints should be routed directly to the worker configured as
+the stream writer for the `receipts` stream:
 
     ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/receipt
     ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/read_markers
 
 ##### The `presence` stream
 
-The following endpoints should be routed directly to the workers configured as
-stream writers for the `presence` stream:
+The following endpoints should be routed directly to the worker configured as
+the stream writer for the `presence` stream:
 
     ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
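A quick way to sanity-check a reverse-proxy rule against these patterns (illustrative Python, not part of Synapse):

```python
import re

# The documented routing pattern for the `typing` stream writer.
TYPING_PATH = re.compile(r"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing")

assert TYPING_PATH.match("/_matrix/client/v3/rooms/!room:example.com/typing/@user:example.com")
assert not TYPING_PATH.match("/_matrix/client/v3/sync")
```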
mypy.ini (1 change)
@@ -90,7 +90,6 @@ exclude = (?x)
   |tests/push/test_push_rule_evaluator.py
   |tests/rest/client/test_transactions.py
   |tests/rest/media/v1/test_media_storage.py
-  |tests/rest/media/v1/test_url_preview.py
   |tests/scripts/test_new_matrix_user.py
   |tests/server.py
   |tests/server_notices/test_resource_limits_server_notices.py
@@ -68,7 +68,7 @@ try:
 except ImportError:
     pass
 
-__version__ = "1.54.0"
+__version__ = "1.55.0rc1"
 
 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when
@@ -322,7 +322,8 @@ class GenericWorkerServer(HomeServer):
 
                     presence.register_servlets(self, resource)
 
-                    groups.register_servlets(self, resource)
+                    if self.config.experimental.groups_enabled:
+                        groups.register_servlets(self, resource)
 
                     resources.update({CLIENT_API_PREFIX: resource})
@@ -19,6 +19,7 @@ from synapse.config import (
     api,
     appservice,
     auth,
+    background_updates,
     cache,
     captcha,
     cas,
@@ -113,6 +114,7 @@ class RootConfig:
     caches: cache.CacheConfig
     federation: federation.FederationConfig
     retention: retention.RetentionConfig
+    background_updates: background_updates.BackgroundUpdateConfig
 
     config_classes: List[Type["Config"]] = ...
     def __init__(self) -> None: ...
@@ -0,0 +1,68 @@
+# Copyright 2022 Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ._base import Config
+
+
+class BackgroundUpdateConfig(Config):
+    section = "background_updates"
+
+    def generate_config_section(self, **kwargs) -> str:
+        return """\
+        ## Background Updates ##
+
+        # Background updates are database updates that are run in the background in batches.
+        # The duration, minimum batch size, default batch size, whether to sleep between batches and if so, how long to
+        # sleep can all be configured. This is helpful to speed up or slow down the updates.
+        #
+        background_updates:
+            # How long in milliseconds to run a batch of background updates for. Defaults to 100. Uncomment and set
+            # a time to change the default.
+            #
+            #background_update_duration_ms: 500
+
+            # Whether to sleep between updates. Defaults to True. Uncomment to change the default.
+            #
+            #sleep_enabled: false
+
+            # If sleeping between updates, how long in milliseconds to sleep for. Defaults to 1000. Uncomment
+            # and set a duration to change the default.
+            #
+            #sleep_duration_ms: 300
+
+            # Minimum size a batch of background updates can be. Must be greater than 0. Defaults to 1. Uncomment and
+            # set a size to change the default.
+            #
+            #min_batch_size: 10
+
+            # The batch size to use for the first iteration of a new background update. The default is 100.
+            # Uncomment and set a size to change the default.
+            #
+            #default_batch_size: 50
+        """
+
+    def read_config(self, config, **kwargs) -> None:
+        bg_update_config = config.get("background_updates") or {}
+
+        self.update_duration_ms = bg_update_config.get(
+            "background_update_duration_ms", 100
+        )
+
+        self.sleep_enabled = bg_update_config.get("sleep_enabled", True)
+
+        self.sleep_duration_ms = bg_update_config.get("sleep_duration_ms", 1000)
+
+        self.min_batch_size = bg_update_config.get("min_batch_size", 1)
+
+        self.default_batch_size = bg_update_config.get("default_batch_size", 100)
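For reference, the parsing in `read_config` above boils down to dictionary lookups with defaults; a standalone restatement (illustrative, without the `Config` machinery):

```python
def parse_background_updates(config: dict) -> dict:
    bg = config.get("background_updates") or {}
    return {
        "update_duration_ms": bg.get("background_update_duration_ms", 100),
        "sleep_enabled": bg.get("sleep_enabled", True),
        "sleep_duration_ms": bg.get("sleep_duration_ms", 1000),
        "min_batch_size": bg.get("min_batch_size", 1),
        "default_batch_size": bg.get("default_batch_size", 100),
    }

assert parse_background_updates({})["min_batch_size"] == 1
assert parse_background_updates({"background_updates": {"sleep_enabled": False}})["sleep_enabled"] is False
```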
@@ -74,3 +74,6 @@ class ExperimentalConfig(Config):
 
         # MSC3720 (Account status endpoint)
         self.msc3720_enabled: bool = experimental.get("msc3720_enabled", False)
+
+        # The deprecated groups feature.
+        self.groups_enabled: bool = experimental.get("groups_enabled", True)
@@ -16,6 +16,7 @@ from .account_validity import AccountValidityConfig
 from .api import ApiConfig
 from .appservice import AppServiceConfig
 from .auth import AuthConfig
+from .background_updates import BackgroundUpdateConfig
 from .cache import CacheConfig
 from .captcha import CaptchaConfig
 from .cas import CasConfig
@@ -99,4 +100,5 @@ class HomeServerConfig(RootConfig):
         WorkerConfig,
         RedisConfig,
         ExperimentalConfig,
+        BackgroundUpdateConfig,
     ]
@@ -182,8 +182,14 @@ class OIDCConfig(Config):
         #
         # localpart_template: Jinja2 template for the localpart of the MXID.
         #    If this is not set, the user will be prompted to choose their
-        #    own username (see 'sso_auth_account_details.html' in the 'sso'
-        #    section of this file).
+        #    own username (see the documentation for the
+        #    'sso_auth_account_details.html' template). This template can
+        #    use the 'localpart_from_email' filter.
+        #
+        # confirm_localpart: Whether to prompt the user to validate (or
+        #    change) the generated localpart (see the documentation for the
+        #    'sso_auth_account_details.html' template), instead of
+        #    registering the account right away.
         #
         # display_name_template: Jinja2 template for the display name to set
         #    on first login. If unset, no displayname will be set.
@@ -245,8 +245,8 @@ class SpamChecker:
         """Checks if a given event is considered "spammy" by this server.
 
         If the server considers an event spammy, then it will be rejected if
-        sent by a local user. If it is sent by a user on another server, then
-        users receive a blank event.
+        sent by a local user. If it is sent by a user on another server, the
+        event is soft-failed.
 
         Args:
             event: the event to be checked
@@ -289,7 +289,7 @@ class OpenIdUserInfo(BaseFederationServlet):
         return 200, {"sub": user_id}
 
 
-DEFAULT_SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
+SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
     "federation": FEDERATION_SERVLET_CLASSES,
     "room_list": (PublicRoomList,),
     "group_server": GROUP_SERVER_SERVLET_CLASSES,
@@ -298,6 +298,10 @@ SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
     "openid": (OpenIdUserInfo,),
 }
 
+DEFAULT_SERVLET_GROUPS = ("federation", "room_list", "openid")
+
+GROUP_SERVLET_GROUPS = ("group_server", "group_local", "group_attestation")
+
 
 def register_servlets(
     hs: "HomeServer",
@@ -320,16 +324,19 @@ def register_servlets(
             Defaults to ``DEFAULT_SERVLET_GROUPS``.
     """
     if not servlet_groups:
-        servlet_groups = DEFAULT_SERVLET_GROUPS.keys()
+        servlet_groups = DEFAULT_SERVLET_GROUPS
+        # Only allow the groups servlets if the deprecated groups feature is enabled.
+        if hs.config.experimental.groups_enabled:
+            servlet_groups = servlet_groups + GROUP_SERVLET_GROUPS
 
     for servlet_group in servlet_groups:
         # Skip unknown servlet groups.
-        if servlet_group not in DEFAULT_SERVLET_GROUPS:
+        if servlet_group not in SERVLET_GROUPS:
             raise RuntimeError(
                 f"Attempting to register unknown federation servlet: '{servlet_group}'"
             )
 
-        for servletclass in DEFAULT_SERVLET_GROUPS[servlet_group]:
+        for servletclass in SERVLET_GROUPS[servlet_group]:
             # Only allow the `/timestamp_to_event` servlet if msc3030 is enabled
             if (
                 servletclass == FederationTimestampLookupServlet
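The effect of the change above, in miniature (a hypothetical helper mirroring the logic, not Synapse's actual function):

```python
DEFAULT_SERVLET_GROUPS = ("federation", "room_list", "openid")
GROUP_SERVLET_GROUPS = ("group_server", "group_local", "group_attestation")

def groups_to_register(requested, groups_enabled: bool):
    if not requested:
        requested = DEFAULT_SERVLET_GROUPS
        # Group servlets are only added when the deprecated feature is enabled.
        if groups_enabled:
            requested = requested + GROUP_SERVLET_GROUPS
    return requested

assert "group_server" not in groups_to_register(None, groups_enabled=False)
assert "group_server" in groups_to_register(None, groups_enabled=True)
```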
@@ -371,7 +371,6 @@ class DeviceHandler(DeviceWorkerHandler):
                 log_kv(
                     {"reason": "User doesn't have device id.", "device_id": device_id}
                 )
-                pass
             else:
                 raise
 
@@ -414,7 +413,6 @@ class DeviceHandler(DeviceWorkerHandler):
                 # no match
                 set_tag("error", True)
                 set_tag("reason", "User doesn't have that device id.")
-                pass
             else:
                 raise
@@ -45,6 +45,7 @@ from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
 from synapse.util import Clock, json_decoder
 from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
 from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
+from synapse.util.templates import _localpart_from_email_filter
 
 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -1228,6 +1229,7 @@ class OidcSessionData:
 
 class UserAttributeDict(TypedDict):
     localpart: Optional[str]
+    confirm_localpart: bool
     display_name: Optional[str]
     emails: List[str]
@@ -1307,6 +1309,11 @@ def jinja_finalize(thing: Any) -> Any:
 
 
 env = Environment(finalize=jinja_finalize)
+env.filters.update(
+    {
+        "localpart_from_email": _localpart_from_email_filter,
+    }
+)
 
 
 @attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -1316,6 +1323,7 @@ class JinjaOidcMappingConfig:
     display_name_template: Optional[Template]
     email_template: Optional[Template]
     extra_attributes: Dict[str, Template]
+    confirm_localpart: bool = False
 
 
 class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
@@ -1357,12 +1365,17 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
                     "invalid jinja template", path=["extra_attributes", key]
                 ) from e
 
+        confirm_localpart = config.get("confirm_localpart") or False
+        if not isinstance(confirm_localpart, bool):
+            raise ConfigError("must be a bool", path=["confirm_localpart"])
+
         return JinjaOidcMappingConfig(
             subject_claim=subject_claim,
             localpart_template=localpart_template,
             display_name_template=display_name_template,
             email_template=email_template,
             extra_attributes=extra_attributes,
+            confirm_localpart=confirm_localpart,
         )
 
     def get_remote_user_id(self, userinfo: UserInfo) -> str:
@@ -1398,7 +1411,10 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
             emails.append(email)
 
         return UserAttributeDict(
-            localpart=localpart, display_name=display_name, emails=emails
+            localpart=localpart,
+            display_name=display_name,
+            emails=emails,
+            confirm_localpart=self._config.confirm_localpart,
         )
 
     async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict:
@@ -350,7 +350,7 @@ class PaginationHandler:
         """
         self._purges_in_progress_by_room.add(room_id)
         try:
-            with await self.pagination_lock.write(room_id):
+            async with self.pagination_lock.write(room_id):
                 await self.storage.purge_events.purge_history(
                     room_id, token, delete_local_events
                 )
@@ -406,7 +406,7 @@ class PaginationHandler:
             room_id: room to be purged
             force: set true to skip checking for joined users.
         """
-        with await self.pagination_lock.write(room_id):
+        async with self.pagination_lock.write(room_id):
            # first check that we have no users in this room
            if not force:
                joined = await self.store.is_host_joined(room_id, self._server_name)
@@ -448,7 +448,7 @@ class PaginationHandler:
 
         room_token = from_token.room_key
 
-        with await self.pagination_lock.read(room_id):
+        async with self.pagination_lock.read(room_id):
             (
                 membership,
                 member_event_id,
@@ -615,7 +615,7 @@ class PaginationHandler:
 
         self._purges_in_progress_by_room.add(room_id)
         try:
-            with await self.pagination_lock.write(room_id):
+            async with self.pagination_lock.write(room_id):
                 self._delete_by_id[delete_id].status = DeleteStatus.STATUS_SHUTTING_DOWN
                 self._delete_by_id[
                     delete_id
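The hunks above switch from `with await lock.write(room_id):` to `async with lock.write(room_id):`, i.e. the lock now hands back an async context manager (which pairs with the cancellation support added to `ReadWriteLock`). A toy sketch of the calling pattern, using hypothetical names:

```python
import asyncio

class ToyLock:
    """Stand-in for a per-key read/write lock; one global asyncio.Lock here."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()

    def write(self, key: str) -> asyncio.Lock:
        # asyncio.Lock supports `async with`, like the real lock's handle.
        return self._lock

async def purge(lock: ToyLock, room_id: str) -> None:
    async with lock.write(room_id):
        ...  # critical section: purge history for room_id

asyncio.run(purge(ToyLock(), "!room:example.com"))
```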
@@ -267,7 +267,6 @@ class BasePresenceHandler(abc.ABC):
             is_syncing: Whether or not the user is now syncing
             sync_time_msec: Time in ms when the user was last syncing
         """
-        pass
 
     async def update_external_syncs_clear(self, process_id: str) -> None:
         """Marks all users that had been marked as syncing by a given process
@@ -277,7 +276,6 @@ class BasePresenceHandler(abc.ABC):
 
         This is a no-op when presence is handled by a different worker.
         """
-        pass
 
     async def process_replication_rows(
         self, stream_name: str, instance_name: str, token: int, rows: list
@@ -132,6 +132,7 @@ class UserAttributes:
     # if `None`, the mapper has not picked a userid, and the user should be prompted to
     # enter one.
     localpart: Optional[str]
+    confirm_localpart: bool = False
    display_name: Optional[str] = None
    emails: Collection[str] = attr.Factory(list)
 
@@ -561,9 +562,10 @@ class SsoHandler:
         # Must provide either attributes or session, not both
         assert (attributes is not None) != (session is not None)
 
-        if (attributes and attributes.localpart is None) or (
-            session and session.chosen_localpart is None
-        ):
+        if (
+            attributes
+            and (attributes.localpart is None or attributes.confirm_localpart is True)
+        ) or (session and session.chosen_localpart is None):
             return b"/_synapse/client/pick_username/account_details"
         elif self._consent_at_registration and not (
             session and session.terms_accepted_version
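Restated as a tiny predicate (illustrative only), the new branch sends the user to the account-details page either when no localpart was mapped or when the mapper asked for confirmation:

```python
from typing import Optional

def needs_account_details(localpart: Optional[str], confirm_localpart: bool) -> bool:
    return localpart is None or confirm_localpart

assert needs_account_details(None, False)          # no localpart mapped
assert needs_account_details("alice", True)        # mapper wants confirmation
assert not needs_account_details("alice", False)   # register right away
```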
@@ -160,8 +160,9 @@ class FollowerTypingHandler:
         """Should be called whenever we receive updates for typing stream."""
 
         if self._latest_room_serial > token:
-            # The master has gone backwards. To prevent inconsistent data, just
-            # clear everything.
+            # The typing worker has gone backwards (e.g. it may have restarted).
+            # To prevent inconsistent data, just clear everything.
+            logger.info("Typing handler stream went backwards; resetting")
             self._reset()
 
         # Set the latest serial token to whatever the server gave us.
@@ -120,7 +120,6 @@ class ByteParser(ByteWriteable, Generic[T], abc.ABC):
         """Called when response has finished streaming and the parser should
         return the final result (or error).
         """
-        pass
 
 
 @attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -601,7 +600,6 @@ class MatrixFederationHttpClient:
                     response.code,
                     response_phrase,
                 )
-                pass
             else:
                 logger.info(
                     "{%s} [%s] Got response headers: %d %s",
@@ -233,7 +233,6 @@ class HttpServer(Protocol):
             servlet_classname (str): The name of the handler to be used in prometheus
                 and opentracing logs.
         """
-        pass
 
 
 class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
@@ -169,7 +169,7 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "content.msgtype",
                 "pattern": "m.notice",
-                "_id": "_suppress_notices",
+                "_cache_key": "_suppress_notices",
             }
         ],
         "actions": ["dont_notify"],
@@ -183,13 +183,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "m.room.member",
-                "_id": "_member",
+                "_cache_key": "_member",
             },
             {
                 "kind": "event_match",
                 "key": "content.membership",
                 "pattern": "invite",
-                "_id": "_invite_member",
+                "_cache_key": "_invite_member",
             },
             {"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
         ],
@@ -212,7 +212,7 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "m.room.member",
-                "_id": "_member",
+                "_cache_key": "_member",
             }
         ],
         "actions": ["dont_notify"],
@@ -237,12 +237,12 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "content.body",
                 "pattern": "@room",
-                "_id": "_roomnotif_content",
+                "_cache_key": "_roomnotif_content",
             },
             {
                 "kind": "sender_notification_permission",
                 "key": "room",
-                "_id": "_roomnotif_pl",
+                "_cache_key": "_roomnotif_pl",
             },
         ],
         "actions": ["notify", {"set_tweak": "highlight", "value": True}],
@@ -254,13 +254,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "m.room.tombstone",
-                "_id": "_tombstone",
+                "_cache_key": "_tombstone",
             },
             {
                 "kind": "event_match",
                 "key": "state_key",
                 "pattern": "",
-                "_id": "_tombstone_statekey",
+                "_cache_key": "_tombstone_statekey",
             },
         ],
         "actions": ["notify", {"set_tweak": "highlight", "value": True}],
@@ -272,7 +272,7 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "m.reaction",
-                "_id": "_reaction",
+                "_cache_key": "_reaction",
             }
         ],
         "actions": ["dont_notify"],
@@ -288,7 +288,7 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "m.call.invite",
-                "_id": "_call",
+                "_cache_key": "_call",
             }
         ],
         "actions": [
@@ -302,12 +302,12 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
     {
         "rule_id": "global/underride/.m.rule.room_one_to_one",
         "conditions": [
-            {"kind": "room_member_count", "is": "2", "_id": "member_count"},
+            {"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
             {
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "m.room.message",
-                "_id": "_message",
+                "_cache_key": "_message",
             },
         ],
         "actions": [
@@ -321,12 +321,12 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
     {
         "rule_id": "global/underride/.m.rule.encrypted_room_one_to_one",
         "conditions": [
-            {"kind": "room_member_count", "is": "2", "_id": "member_count"},
+            {"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
             {
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "m.room.encrypted",
-                "_id": "_encrypted",
+                "_cache_key": "_encrypted",
             },
         ],
         "actions": [
@@ -342,7 +342,7 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "m.room.message",
-                "_id": "_message",
+                "_cache_key": "_message",
             }
         ],
         "actions": ["notify", {"set_tweak": "highlight", "value": False}],
@@ -356,7 +356,7 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "m.room.encrypted",
-                "_id": "_encrypted",
+                "_cache_key": "_encrypted",
             }
         ],
         "actions": ["notify", {"set_tweak": "highlight", "value": False}],
@@ -368,19 +368,19 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "kind": "event_match",
                 "key": "type",
                 "pattern": "im.vector.modular.widgets",
-                "_id": "_type_modular_widgets",
+                "_cache_key": "_type_modular_widgets",
             },
             {
                 "kind": "event_match",
                 "key": "content.type",
                 "pattern": "jitsi",
-                "_id": "_content_type_jitsi",
+                "_cache_key": "_content_type_jitsi",
             },
             {
                 "kind": "event_match",
                 "key": "state_key",
                 "pattern": "*",
-                "_id": "_is_state_event",
+                "_cache_key": "_is_state_event",
             },
         ],
         "actions": ["notify", {"set_tweak": "highlight", "value": False}],
@@ -274,17 +274,17 @@ def _condition_checker(
    cache: Dict[str, bool],
) -> bool:
    for cond in conditions:
-        _id = cond.get("_id", None)
-        if _id:
-            res = cache.get(_id, None)
+        _cache_key = cond.get("_cache_key", None)
+        if _cache_key:
+            res = cache.get(_cache_key, None)
            if res is False:
                return False
            elif res is True:
                continue

        res = evaluator.matches(cond, uid, display_name)
-        if _id:
-            cache[_id] = bool(res)
+        if _cache_key:
+            cache[_cache_key] = bool(res)

        if not res:
            return False
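For context on the rename above: the cache key lets `_condition_checker` memoise condition results across the many push rules evaluated for a single event. A minimal standalone sketch of that idea, with hypothetical helper names (this is not Synapse's actual evaluator API):

```python
from typing import Any, Callable, Dict, List

Condition = Dict[str, Any]


def all_conditions_match(
    conditions: List[Condition],
    matches: Callable[[Condition], bool],  # stand-in for the (expensive) evaluator
    cache: Dict[str, bool],                # one cache per event being evaluated
) -> bool:
    """Evaluate a rule's conditions, memoising results shared via `_cache_key`."""
    for cond in conditions:
        cache_key = cond.get("_cache_key")
        if cache_key is not None and cache_key in cache:
            if not cache[cache_key]:
                return False
            continue

        res = matches(cond)
        if cache_key is not None:
            cache[cache_key] = res
        if not res:
            return False
    return True
```

Rules that share a key such as `member_count` then only pay for the check once per event.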
@@ -40,7 +40,7 @@ def format_push_rules_for_user(

            # Remove internal stuff.
            for c in r["conditions"]:
-                c.pop("_id", None)
+                c.pop("_cache_key", None)

                pattern_type = c.pop("pattern_type", None)
                if pattern_type == "user_id":
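The upshot of popping the key in `format_push_rules_for_user` is that `_cache_key` stays server-internal. A rule returned to clients carries only standard fields, roughly like the following (illustrative values; the actions were elided in the hunks above):

```python
rule_as_seen_by_clients = {
    "rule_id": ".m.rule.room_one_to_one",
    "default": True,
    "enabled": True,
    "conditions": [
        # note: no "_cache_key" here; it was popped above
        {"kind": "room_member_count", "is": "2"},
        {"kind": "event_match", "key": "type", "pattern": "m.room.message"},
    ],
    "actions": ["notify"],  # illustrative; the real rule also sets tweaks
}
```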
@@ -709,7 +709,7 @@ class ReplicationCommandHandler:
        self.send_command(RemoteServerUpCommand(server))

    def stream_update(self, stream_name: str, token: Optional[int], data: Any) -> None:
-        """Called when a new update is available to stream to clients.
+        """Called when a new update is available to stream to Redis subscribers.

        We need to check if the client is interested in the stream or not
        """
@@ -67,8 +67,8 @@ class ReplicationStreamProtocolFactory(ServerFactory):
class ReplicationStreamer:
    """Handles replication connections.

-    This needs to be poked when new replication data may be available. When new
-    data is available it will propagate to all connected clients.
+    This needs to be poked when new replication data may be available.
+    When new data is available it will propagate to all Redis subscribers.
    """

    def __init__(self, hs: "HomeServer"):
@@ -109,7 +109,7 @@ class ReplicationStreamer:

    def on_notifier_poke(self) -> None:
        """Checks if there is actually any new data and sends it to the
-        connections if there are.
+        Redis subscribers if there are.

        This should get called each time new data is available, even if it
        is currently being executed, so that nothing gets missed
@@ -316,7 +316,19 @@ class PresenceFederationStream(Stream):
class TypingStream(Stream):
    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class TypingStreamRow:
+        """
+        An entry in the typing stream.
+        Describes all the users that are 'typing' right now in one room.
+
+        When a user stops typing, it will be streamed as a new update with that
+        user absent; you can think of the `user_ids` list as overwriting the
+        entire list that was there previously.
+        """
+
+        # The room that this update is for.
        room_id: str
+
+        # All the users that are 'typing' right now in the specified room.
        user_ids: List[str]

    NAME = "typing"
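A small illustration of the "overwriting" semantics documented in the new `TypingStreamRow` docstring (made-up room and user IDs):

```python
updates = [
    {"room_id": "!room:example.org", "user_ids": ["@a:example.org", "@b:example.org"]},
    {"room_id": "!room:example.org", "user_ids": ["@a:example.org"]},  # @b stopped
]

currently_typing: dict = {}
for row in updates:
    # Each update replaces the room's list wholesale; absent users have stopped.
    currently_typing[row["room_id"]] = row["user_ids"]

assert currently_typing["!room:example.org"] == ["@a:example.org"]
```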
@@ -130,15 +130,15 @@
  </head>
  <body>
    <header>
-      <h1>Your account is nearly ready</h1>
-      <p>Check your details before creating an account on {{ server_name }}</p>
+      <h1>Choose your user name</h1>
+      <p>This is required to create your account on {{ server_name }}, and you can't change this later.</p>
    </header>
    <main>
      <form method="post" class="form__input" id="form">
        <div class="username_input" id="username_input">
          <label for="field-username">Username</label>
          <div class="prefix">@</div>
-          <input type="text" name="username" id="field-username" autofocus>
+          <input type="text" name="username" id="field-username" value="{{ user_attributes.localpart }}" autofocus>
          <div class="postfix">:{{ server_name }}</div>
        </div>
        <output for="username_input" id="field-username-output"></output>
@@ -118,7 +118,8 @@ class ClientRestResource(JsonResource):
        thirdparty.register_servlets(hs, client_resource)
        sendtodevice.register_servlets(hs, client_resource)
        user_directory.register_servlets(hs, client_resource)
-        groups.register_servlets(hs, client_resource)
+        if hs.config.experimental.groups_enabled:
+            groups.register_servlets(hs, client_resource)
        room_upgrade_rest_servlet.register_servlets(hs, client_resource)
        room_batch.register_servlets(hs, client_resource)
        capabilities.register_servlets(hs, client_resource)
@@ -293,7 +293,8 @@ def register_servlets_for_client_rest_resource(
    ResetPasswordRestServlet(hs).register(http_server)
    SearchUsersRestServlet(hs).register(http_server)
    UserRegisterServlet(hs).register(http_server)
-    DeleteGroupAdminRestServlet(hs).register(http_server)
+    if hs.config.experimental.groups_enabled:
+        DeleteGroupAdminRestServlet(hs).register(http_server)
    AccountValidityRenewServlet(hs).register(http_server)

    # Load the media repo ones if we're using them. Otherwise load the servlets which
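Both registration sites now consult `hs.config.experimental.groups_enabled`. Assuming the option is read from the `experimental_features` section of the homeserver config (inferred from the attribute path, not confirmed by this diff), opting out of the groups/communities endpoints would look roughly like:

```yaml
# homeserver.yaml — sketch; key location inferred from hs.config.experimental.groups_enabled
experimental_features:
  groups_enabled: false
```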
@@ -298,7 +298,6 @@ class Responder:
        Returns:
            Resolves once the response has finished being written
        """
-        pass

    def __enter__(self) -> None:
        pass
@@ -92,12 +92,20 @@ class AccountDetailsResource(DirectServeHtmlResource):
            self._sso_handler.render_error(request, "bad_session", e.msg, code=e.code)
            return

+        # The configuration might mandate going through this step to validate an
+        # automatically generated localpart, so session.chosen_localpart might already
+        # be set.
+        localpart = ""
+        if session.chosen_localpart is not None:
+            localpart = session.chosen_localpart
+
        idp_id = session.auth_provider_id
        template_params = {
            "idp": self._sso_handler.get_identity_providers()[idp_id],
            "user_attributes": {
                "display_name": session.display_name,
                "emails": session.emails,
+                "localpart": localpart,
            },
        }
@@ -328,7 +328,6 @@ class HomeServer(metaclass=abc.ABCMeta):
        Does nothing in this base class; overridden in derived classes to start the
        appropriate listeners.
        """
-        pass

    def setup_background_tasks(self) -> None:
        """
@@ -60,18 +60,19 @@ class _BackgroundUpdateHandler:


class _BackgroundUpdateContextManager:
-    BACKGROUND_UPDATE_INTERVAL_MS = 1000
-    BACKGROUND_UPDATE_DURATION_MS = 100
-
-    def __init__(self, sleep: bool, clock: Clock):
+    def __init__(
+        self, sleep: bool, clock: Clock, sleep_duration_ms: int, update_duration: int
+    ):
        self._sleep = sleep
        self._clock = clock
+        self._sleep_duration_ms = sleep_duration_ms
+        self._update_duration_ms = update_duration

    async def __aenter__(self) -> int:
        if self._sleep:
-            await self._clock.sleep(self.BACKGROUND_UPDATE_INTERVAL_MS / 1000)
+            await self._clock.sleep(self._sleep_duration_ms / 1000)

-        return self.BACKGROUND_UPDATE_DURATION_MS
+        return self._update_duration_ms

    async def __aexit__(self, *exc) -> None:
        pass
@@ -133,9 +134,6 @@ class BackgroundUpdater:
    process and autotuning the batch size.
    """

-    MINIMUM_BACKGROUND_BATCH_SIZE = 1
-    DEFAULT_BACKGROUND_BATCH_SIZE = 100
-
    def __init__(self, hs: "HomeServer", database: "DatabasePool"):
        self._clock = hs.get_clock()
        self.db_pool = database
@@ -160,6 +158,14 @@ class BackgroundUpdater:
        # enable/disable background updates via the admin API.
        self.enabled = True

+        self.minimum_background_batch_size = hs.config.background_updates.min_batch_size
+        self.default_background_batch_size = (
+            hs.config.background_updates.default_batch_size
+        )
+        self.update_duration_ms = hs.config.background_updates.update_duration_ms
+        self.sleep_duration_ms = hs.config.background_updates.sleep_duration_ms
+        self.sleep_enabled = hs.config.background_updates.sleep_enabled
+
    def register_update_controller_callbacks(
        self,
        on_update: ON_UPDATE_CALLBACK,
@@ -216,7 +222,9 @@ class BackgroundUpdater:
        if self._on_update_callback is not None:
            return self._on_update_callback(update_name, database_name, oneshot)

-        return _BackgroundUpdateContextManager(sleep, self._clock)
+        return _BackgroundUpdateContextManager(
+            sleep, self._clock, self.sleep_duration_ms, self.update_duration_ms
+        )

    async def _default_batch_size(self, update_name: str, database_name: str) -> int:
        """The batch size to use for the first iteration of a new background
@@ -225,7 +233,7 @@ class BackgroundUpdater:
        if self._default_batch_size_callback is not None:
            return await self._default_batch_size_callback(update_name, database_name)

-        return self.DEFAULT_BACKGROUND_BATCH_SIZE
+        return self.default_background_batch_size

    async def _min_batch_size(self, update_name: str, database_name: str) -> int:
        """A lower bound on the batch size of a new background update.
@@ -235,7 +243,7 @@ class BackgroundUpdater:
        if self._min_batch_size_callback is not None:
            return await self._min_batch_size_callback(update_name, database_name)

-        return self.MINIMUM_BACKGROUND_BATCH_SIZE
+        return self.minimum_background_batch_size

    def get_current_update(self) -> Optional[BackgroundUpdatePerformance]:
        """Returns the current background update, if any."""
@@ -254,9 +262,12 @@ class BackgroundUpdater:
        if self.enabled:
            # if we start a new background update, not all updates are done.
            self._all_done = False
-            run_as_background_process("background_updates", self.run_background_updates)
+            sleep = self.sleep_enabled
+            run_as_background_process(
+                "background_updates", self.run_background_updates, sleep
+            )

-    async def run_background_updates(self, sleep: bool = True) -> None:
+    async def run_background_updates(self, sleep: bool) -> None:
        if self._running or not self.enabled:
            return
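The five attributes read in `__init__` above correspond to a new `background_updates` config section. The key names below are taken from the test file added later in this diff; the values shown are the defaults asserted there:

```yaml
background_updates:
  background_update_duration_ms: 100  # target time to spend per batch
  sleep_enabled: true                 # whether to sleep between batches
  sleep_duration_ms: 1000             # how long to sleep between batches
  min_batch_size: 1                   # lower bound on rows per batch
  default_batch_size: 100             # batch size for a new update's first iteration
```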
@@ -48,8 +48,6 @@ class ExternalIDReuseException(Exception):
    """Exception if writing an external id for a user fails,
    because this external id is given to an other user."""

-    pass
-

@attr.s(frozen=True, slots=True, auto_attribs=True)
class TokenLookupResult:
@@ -125,9 +125,6 @@ class SearchBackgroundUpdateStore(SearchWorkerStore):
    ):
        super().__init__(database, db_conn, hs)

-        if not hs.config.server.enable_search:
-            return
-
        self.db_pool.updates.register_background_update_handler(
            self.EVENT_SEARCH_UPDATE_NAME, self._background_reindex_search
        )
@@ -243,9 +240,13 @@ class SearchBackgroundUpdateStore(SearchWorkerStore):

            return len(event_search_rows)

-        result = await self.db_pool.runInteraction(
-            self.EVENT_SEARCH_UPDATE_NAME, reindex_search_txn
-        )
+        if self.hs.config.server.enable_search:
+            result = await self.db_pool.runInteraction(
+                self.EVENT_SEARCH_UPDATE_NAME, reindex_search_txn
+            )
+        else:
+            # Don't index anything if search is not enabled.
+            result = 0

        if not result:
            await self.db_pool.updates._end_background_update(
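For reference, the disabled path corresponds to turning off search in the homeserver config; assuming the top-level key matches the attribute `hs.config.server.enable_search` (not confirmed by this diff):

```yaml
# homeserver.yaml — sketch
enable_search: false
```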
@@ -36,7 +36,6 @@ def run_upgrade(cur, database_engine, config, *args, **kwargs):
        config_files = config.appservice.app_service_config_files
    except AttributeError:
        logger.warning("Could not get app_service_config_files from config")
-        pass

    appservices = load_appservices(config.server.server_name, config_files)
@@ -31,13 +31,6 @@ from synapse.logging import context
if typing.TYPE_CHECKING:
    pass

-# FIXME Mjolnir imports glob_to_regex from this file, but it was moved to
-# matrix_common.
-# As a temporary workaround, we import glob_to_regex here for
-# compatibility with current versions of Mjolnir.
-# See https://github.com/matrix-org/mjolnir/pull/174
-from matrix_common.regex import glob_to_regex  # noqa
-
logger = logging.getLogger(__name__)
@@ -18,9 +18,10 @@ import collections
import inspect
import itertools
import logging
-from contextlib import contextmanager
+from contextlib import asynccontextmanager, contextmanager
from typing import (
    Any,
+    AsyncIterator,
    Awaitable,
    Callable,
    Collection,
@@ -40,7 +41,7 @@ from typing import (
)

import attr
-from typing_extensions import ContextManager, Literal
+from typing_extensions import AsyncContextManager, Literal

from twisted.internet import defer
from twisted.internet.defer import CancelledError
@@ -491,7 +492,7 @@ class ReadWriteLock:

    Example:

-        with await read_write_lock.read("test_key"):
+        async with read_write_lock.read("test_key"):
            # do some work
    """
@@ -514,22 +515,24 @@ class ReadWriteLock:
        # Latest writer queued
        self.key_to_current_writer: Dict[str, defer.Deferred] = {}

-    async def read(self, key: str) -> ContextManager:
-        new_defer: "defer.Deferred[None]" = defer.Deferred()
-
-        curr_readers = self.key_to_current_readers.setdefault(key, set())
-        curr_writer = self.key_to_current_writer.get(key, None)
-
-        curr_readers.add(new_defer)
-
-        # We wait for the latest writer to finish writing. We can safely ignore
-        # any existing readers... as they're readers.
-        if curr_writer:
-            await make_deferred_yieldable(curr_writer)
-
-        @contextmanager
-        def _ctx_manager() -> Iterator[None]:
+    def read(self, key: str) -> AsyncContextManager:
+        @asynccontextmanager
+        async def _ctx_manager() -> AsyncIterator[None]:
+            new_defer: "defer.Deferred[None]" = defer.Deferred()
+
+            curr_readers = self.key_to_current_readers.setdefault(key, set())
+            curr_writer = self.key_to_current_writer.get(key, None)
+
+            curr_readers.add(new_defer)
+
            try:
+                # We wait for the latest writer to finish writing. We can safely ignore
+                # any existing readers... as they're readers.
+                # May raise a `CancelledError` if the `Deferred` wrapping us is
+                # cancelled. The `Deferred` we are waiting on must not be cancelled,
+                # since we do not own it.
+                if curr_writer:
+                    await make_deferred_yieldable(stop_cancellation(curr_writer))
                yield
            finally:
                with PreserveLoggingContext():
@@ -538,29 +541,35 @@ class ReadWriteLock:

        return _ctx_manager()

-    async def write(self, key: str) -> ContextManager:
-        new_defer: "defer.Deferred[None]" = defer.Deferred()
-
-        curr_readers = self.key_to_current_readers.get(key, set())
-        curr_writer = self.key_to_current_writer.get(key, None)
-
-        # We wait on all latest readers and writer.
-        to_wait_on = list(curr_readers)
-        if curr_writer:
-            to_wait_on.append(curr_writer)
-
-        # We can clear the list of current readers since the new writer waits
-        # for them to finish.
-        curr_readers.clear()
-        self.key_to_current_writer[key] = new_defer
-
-        await make_deferred_yieldable(defer.gatherResults(to_wait_on))
-
-        @contextmanager
-        def _ctx_manager() -> Iterator[None]:
+    def write(self, key: str) -> AsyncContextManager:
+        @asynccontextmanager
+        async def _ctx_manager() -> AsyncIterator[None]:
+            new_defer: "defer.Deferred[None]" = defer.Deferred()
+
+            curr_readers = self.key_to_current_readers.get(key, set())
+            curr_writer = self.key_to_current_writer.get(key, None)
+
+            # We wait on all latest readers and writer.
+            to_wait_on = list(curr_readers)
+            if curr_writer:
+                to_wait_on.append(curr_writer)
+
+            # We can clear the list of current readers since `new_defer` waits
+            # for them to finish.
+            curr_readers.clear()
+            self.key_to_current_writer[key] = new_defer
+
+            to_wait_on_defer = defer.gatherResults(to_wait_on)
            try:
+                # Wait for all current readers and the latest writer to finish.
+                # May raise a `CancelledError` immediately after the wait if the
+                # `Deferred` wrapping us is cancelled. We must only release the lock
+                # once we have acquired it, hence the use of `delay_cancellation`
+                # rather than `stop_cancellation`.
+                await make_deferred_yieldable(delay_cancellation(to_wait_on_defer))
                yield
            finally:
+                # Release the lock.
                with PreserveLoggingContext():
                    new_defer.callback(None)
                # `self.key_to_current_writer[key]` may be missing if there was another
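A minimal usage sketch of the reworked lock (the lock instance and key are made up). Readers of one key may overlap; a writer waits for earlier readers and writers, and excludes new ones until it releases the lock:

```python
async def refresh_cached_state(read_write_lock, key: str) -> None:
    async with read_write_lock.read(key):
        ...  # several readers may hold the lock for `key` at once

    async with read_write_lock.write(key):
        ...  # exclusive: runs only after earlier readers/writers finish
```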
@@ -686,12 +695,48 @@ def stop_cancellation(deferred: "defer.Deferred[T]") -> "defer.Deferred[T]":
    Synapse logcontext rules.

    Returns:
-        A new `Deferred`, which will contain the result of the original `Deferred`,
-        but will not propagate cancellation through to the original. When cancelled,
-        the new `Deferred` will fail with a `CancelledError` and will not follow the
-        Synapse logcontext rules. `make_deferred_yieldable` should be used to wrap
-        the new `Deferred`.
+        A new `Deferred`, which will contain the result of the original `Deferred`.
+        The new `Deferred` will not propagate cancellation through to the original.
+        When cancelled, the new `Deferred` will fail with a `CancelledError`.
+
+        The new `Deferred` will not follow the Synapse logcontext rules and should be
+        wrapped with `make_deferred_yieldable`.
    """
-    new_deferred: defer.Deferred[T] = defer.Deferred()
+    new_deferred: "defer.Deferred[T]" = defer.Deferred()
+    deferred.chainDeferred(new_deferred)
+    return new_deferred
+
+
+def delay_cancellation(deferred: "defer.Deferred[T]") -> "defer.Deferred[T]":
+    """Delay cancellation of a `Deferred` until it resolves.
+
+    Has the same effect as `stop_cancellation`, but the returned `Deferred` will not
+    resolve with a `CancelledError` until the original `Deferred` resolves.
+
+    Args:
+        deferred: The `Deferred` to protect against cancellation. May optionally follow
+            the Synapse logcontext rules.
+
+    Returns:
+        A new `Deferred`, which will contain the result of the original `Deferred`.
+        The new `Deferred` will not propagate cancellation through to the original.
+        When cancelled, the new `Deferred` will wait until the original `Deferred`
+        resolves before failing with a `CancelledError`.
+
+        The new `Deferred` will follow the Synapse logcontext rules if `deferred`
+        follows the Synapse logcontext rules. Otherwise the new `Deferred` should be
+        wrapped with `make_deferred_yieldable`.
+    """
+
+    def handle_cancel(new_deferred: "defer.Deferred[T]") -> None:
+        # before the new deferred is cancelled, we `pause` it to stop the cancellation
+        # propagating. we then `unpause` it once the wrapped deferred completes, to
+        # propagate the exception.
+        new_deferred.pause()
+        new_deferred.errback(Failure(CancelledError()))
+
+        deferred.addBoth(lambda _: new_deferred.unpause())
+
+    new_deferred: "defer.Deferred[T]" = defer.Deferred(handle_cancel)
    deferred.chainDeferred(new_deferred)
    return new_deferred
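To see the difference in behaviour, a standalone sketch using plain Twisted `Deferred`s and the `delay_cancellation` helper added above (import path as used elsewhere in this diff):

```python
from twisted.internet import defer

from synapse.util.async_helpers import delay_cancellation

underlying: "defer.Deferred[str]" = defer.Deferred()
wrapper = delay_cancellation(underlying)

failures = []
wrapper.addErrback(lambda f: failures.append(f))

wrapper.cancel()             # the underlying work is NOT interrupted...
assert not underlying.called

underlying.callback("done")  # ...and only once it completes does the
assert failures              # wrapper fail with the queued CancelledError.
```

With plain `stop_cancellation`, the wrapper would instead fail immediately on `cancel()`; `delay_cancellation` additionally holds the failure back until the wrapped `Deferred` resolves, which is what the cache descriptors below rely on.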
@@ -41,6 +41,7 @@ from twisted.python.failure import Failure

from synapse.logging.context import make_deferred_yieldable, preserve_fn
from synapse.util import unwrapFirstError
+from synapse.util.async_helpers import delay_cancellation
from synapse.util.caches.deferred_cache import DeferredCache
from synapse.util.caches.lrucache import LruCache
@@ -350,6 +351,11 @@ class DeferredCacheDescriptor(_CacheDescriptorBase):
            ret = defer.maybeDeferred(preserve_fn(self.orig), obj, *args, **kwargs)
            ret = cache.set(cache_key, ret, callback=invalidate_callback)

+            # We started a new call to `self.orig`, so we must always wait for it to
+            # complete. Otherwise we might mark our current logging context as
+            # finished while `self.orig` is still using it in the background.
+            ret = delay_cancellation(ret)
+
        return make_deferred_yieldable(ret)

    wrapped = cast(_CachedFunction, _wrapped)
@@ -510,6 +516,11 @@ class DeferredCacheListDescriptor(_CacheDescriptorBase):
            d = defer.gatherResults(cached_defers, consumeErrors=True).addCallbacks(
                lambda _: results, unwrapFirstError
            )
+            if missing:
+                # We started a new call to `self.orig`, so we must always wait for it to
+                # complete. Otherwise we might mark our current logging context as
+                # finished while `self.orig` is still using it in the background.
+                d = delay_cancellation(d)
            return make_deferred_yieldable(d)
        else:
            return defer.succeed(results)
@@ -22,8 +22,6 @@ class TreeCacheNode(dict):
    leaves.
    """

-    pass
-

class TreeCache:
    """
@@ -64,6 +64,7 @@ def build_jinja_env(
        {
            "format_ts": _format_ts_filter,
            "mxc_to_http": _create_mxc_to_http_filter(config.server.public_baseurl),
+            "localpart_from_email": _localpart_from_email_filter,
        }
    )
@@ -112,3 +113,7 @@ def _create_mxc_to_http_filter(

def _format_ts_filter(value: int, format: str) -> str:
    return time.strftime(format, time.localtime(value / 1000))
+
+
+def _localpart_from_email_filter(address: str) -> str:
+    return address.rsplit("@", 1)[0]
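A quick standalone check of how the new filter behaves (plain Jinja2 environment rather than Synapse's `build_jinja_env`; the address is made up):

```python
import jinja2

env = jinja2.Environment()
# Same implementation as _localpart_from_email_filter above.
env.filters["localpart_from_email"] = lambda address: address.rsplit("@", 1)[0]

rendered = env.from_string("{{ email | localpart_from_email }}").render(
    email="alice@example.com"
)
assert rendered == "alice"
```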
@@ -0,0 +1,58 @@
+# Copyright 2022 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import yaml
+
+from synapse.storage.background_updates import BackgroundUpdater
+
+from tests.unittest import HomeserverTestCase, override_config
+
+
+class BackgroundUpdateConfigTestCase(HomeserverTestCase):
+    # Tests that the default values in the config are correctly loaded. Note that the default
+    # values are loaded when the corresponding config options are commented out, which is why there isn't
+    # a config specified here.
+    def test_default_configuration(self):
+        background_updater = BackgroundUpdater(
+            self.hs, self.hs.get_datastores().main.db_pool
+        )
+
+        self.assertEqual(background_updater.minimum_background_batch_size, 1)
+        self.assertEqual(background_updater.default_background_batch_size, 100)
+        self.assertEqual(background_updater.sleep_enabled, True)
+        self.assertEqual(background_updater.sleep_duration_ms, 1000)
+        self.assertEqual(background_updater.update_duration_ms, 100)
+
+    # Tests that non-default values for the config options are properly picked up and passed on.
+    @override_config(
+        yaml.safe_load(
+            """
+            background_updates:
+                background_update_duration_ms: 1000
+                sleep_enabled: false
+                sleep_duration_ms: 600
+                min_batch_size: 5
+                default_batch_size: 50
+            """
+        )
+    )
+    def test_custom_configuration(self):
+        background_updater = BackgroundUpdater(
+            self.hs, self.hs.get_datastores().main.db_pool
+        )
+
+        self.assertEqual(background_updater.minimum_background_batch_size, 5)
+        self.assertEqual(background_updater.default_background_batch_size, 50)
+        self.assertEqual(background_updater.sleep_enabled, False)
+        self.assertEqual(background_updater.sleep_duration_ms, 600)
+        self.assertEqual(background_updater.update_duration_ms, 1000)
@@ -15,11 +15,15 @@
from collections import Counter
from unittest.mock import Mock

+from twisted.test.proto_helpers import MemoryReactor
+
import synapse.rest.admin
import synapse.storage
from synapse.api.constants import EventTypes, JoinRules
from synapse.api.room_versions import RoomVersions
from synapse.rest.client import knock, login, room
+from synapse.server import HomeServer
+from synapse.util import Clock

from tests import unittest
@@ -32,7 +36,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        knock.register_servlets,
    ]

-    def prepare(self, reactor, clock, hs):
+    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.admin_handler = hs.get_admin_handler()

        self.user1 = self.register_user("user1", "password")
@@ -41,7 +45,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.user2 = self.register_user("user2", "password")
        self.token2 = self.login("user2", "password")

-    def test_single_public_joined_room(self):
+    def test_single_public_joined_room(self) -> None:
        """Test that we write *all* events for a public room"""
        room_id = self.helper.create_room_as(
            self.user1, tok=self.token1, is_public=True
@@ -74,7 +78,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 1)

-    def test_single_private_joined_room(self):
+    def test_single_private_joined_room(self) -> None:
        """Tests that we correctly write state when we can't see all events in
        a room.
        """
@@ -112,7 +116,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 1)

-    def test_single_left_room(self):
+    def test_single_left_room(self) -> None:
        """Tests that we don't see events in the room after we leave."""
        room_id = self.helper.create_room_as(self.user1, tok=self.token1)
        self.helper.send(room_id, body="Hello!", tok=self.token1)
@@ -144,7 +148,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 2)

-    def test_single_left_rejoined_private_room(self):
+    def test_single_left_rejoined_private_room(self) -> None:
        """Tests that see the correct events in private rooms when we
        repeatedly join and leave.
        """
@@ -185,7 +189,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 3)

-    def test_invite(self):
+    def test_invite(self) -> None:
        """Tests that pending invites get handled correctly."""
        room_id = self.helper.create_room_as(self.user1, tok=self.token1)
        self.helper.send(room_id, body="Hello!", tok=self.token1)
@@ -204,7 +208,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(args[1].content["membership"], "invite")
        self.assertTrue(args[2])  # Assert there is at least one bit of state

-    def test_knock(self):
+    def test_knock(self) -> None:
        """Tests that knock get handled correctly."""
        # create a knockable v7 room
        room_id = self.helper.create_room_as(
@@ -15,8 +15,12 @@ from unittest.mock import Mock

import pymacaroons

+from twisted.test.proto_helpers import MemoryReactor
+
from synapse.api.errors import AuthError, ResourceLimitError
from synapse.rest import admin
+from synapse.server import HomeServer
+from synapse.util import Clock

from tests import unittest
from tests.test_utils import make_awaitable
@@ -27,7 +31,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
        admin.register_servlets,
    ]

-    def prepare(self, reactor, clock, hs):
+    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.auth_handler = hs.get_auth_handler()
        self.macaroon_generator = hs.get_macaroon_generator()
@@ -42,23 +46,23 @@ class AuthTestCase(unittest.HomeserverTestCase):

        self.user1 = self.register_user("a_user", "pass")

-    def test_macaroon_caveats(self):
+    def test_macaroon_caveats(self) -> None:
        token = self.macaroon_generator.generate_guest_access_token("a_user")
        macaroon = pymacaroons.Macaroon.deserialize(token)

-        def verify_gen(caveat):
+        def verify_gen(caveat: str) -> bool:
            return caveat == "gen = 1"

-        def verify_user(caveat):
+        def verify_user(caveat: str) -> bool:
            return caveat == "user_id = a_user"

-        def verify_type(caveat):
+        def verify_type(caveat: str) -> bool:
            return caveat == "type = access"

-        def verify_nonce(caveat):
+        def verify_nonce(caveat: str) -> bool:
            return caveat.startswith("nonce =")

-        def verify_guest(caveat):
+        def verify_guest(caveat: str) -> bool:
            return caveat == "guest = true"

        v = pymacaroons.Verifier()
@@ -69,7 +73,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
        v.satisfy_general(verify_guest)
        v.verify(macaroon, self.hs.config.key.macaroon_secret_key)

-    def test_short_term_login_token_gives_user_id(self):
+    def test_short_term_login_token_gives_user_id(self) -> None:
        token = self.macaroon_generator.generate_short_term_login_token(
            self.user1, "", duration_in_ms=5000
        )
@@ -84,7 +88,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            AuthError,
        )

-    def test_short_term_login_token_gives_auth_provider(self):
+    def test_short_term_login_token_gives_auth_provider(self) -> None:
        token = self.macaroon_generator.generate_short_term_login_token(
            self.user1, auth_provider_id="my_idp"
        )
@@ -92,7 +96,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
        self.assertEqual(self.user1, res.user_id)
        self.assertEqual("my_idp", res.auth_provider_id)

-    def test_short_term_login_token_cannot_replace_user_id(self):
+    def test_short_term_login_token_cannot_replace_user_id(self) -> None:
        token = self.macaroon_generator.generate_short_term_login_token(
            self.user1, "", duration_in_ms=5000
        )
@@ -112,7 +116,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            AuthError,
        )

-    def test_mau_limits_disabled(self):
+    def test_mau_limits_disabled(self) -> None:
        self.auth_blocking._limit_usage_by_mau = False
        # Ensure does not throw exception
        self.get_success(
@@ -127,7 +131,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            )
        )

-    def test_mau_limits_exceeded_large(self):
+    def test_mau_limits_exceeded_large(self) -> None:
        self.auth_blocking._limit_usage_by_mau = True
        self.hs.get_datastores().main.get_monthly_active_count = Mock(
            return_value=make_awaitable(self.large_number_of_users)
@@ -150,7 +154,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            ResourceLimitError,
        )

-    def test_mau_limits_parity(self):
+    def test_mau_limits_parity(self) -> None:
        # Ensure we're not at the unix epoch.
        self.reactor.advance(1)
        self.auth_blocking._limit_usage_by_mau = True
@@ -189,7 +193,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            )
        )

-    def test_mau_limits_not_exceeded(self):
+    def test_mau_limits_not_exceeded(self) -> None:
        self.auth_blocking._limit_usage_by_mau = True

        self.hs.get_datastores().main.get_monthly_active_count = Mock(
@@ -211,7 +215,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            )
        )

-    def _get_macaroon(self):
+    def _get_macaroon(self) -> pymacaroons.Macaroon:
        token = self.macaroon_generator.generate_short_term_login_token(
            self.user1, "", duration_in_ms=5000
        )
@@ -39,7 +39,7 @@ class DeactivateAccountTestCase(HomeserverTestCase):
        self.user = self.register_user("user", "pass")
        self.token = self.login("user", "pass")

-    def _deactivate_my_account(self):
+    def _deactivate_my_account(self) -> None:
        """
        Deactivates the account `self.user` using `self.token` and asserts
        that it returns a 200 success code.
@ -14,9 +14,14 @@
|
||||||
# See the License for the specific language governing permissions and
|
# See the License for the specific language governing permissions and
|
||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
|
|
||||||
import synapse.api.errors
|
from typing import Optional
|
||||||
import synapse.handlers.device
|
|
||||||
import synapse.storage
|
from twisted.test.proto_helpers import MemoryReactor
|
||||||
|
|
||||||
|
from synapse.api.errors import NotFoundError, SynapseError
|
||||||
|
from synapse.handlers.device import MAX_DEVICE_DISPLAY_NAME_LEN
|
||||||
|
from synapse.server import HomeServer
|
||||||
|
from synapse.util import Clock
|
||||||
|
|
||||||
from tests import unittest
|
from tests import unittest
|
||||||
|
|
||||||
|
@ -25,28 +30,27 @@ user2 = "@theresa:bbb"
|
||||||
|
|
||||||
|
|
||||||
class DeviceTestCase(unittest.HomeserverTestCase):
|
class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
def make_homeserver(self, reactor, clock):
|
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
|
||||||
hs = self.setup_test_homeserver("server", federation_http_client=None)
|
hs = self.setup_test_homeserver("server", federation_http_client=None)
|
||||||
self.handler = hs.get_device_handler()
|
self.handler = hs.get_device_handler()
|
||||||
self.store = hs.get_datastores().main
|
self.store = hs.get_datastores().main
|
||||||
return hs
|
return hs
|
||||||
|
|
||||||
def prepare(self, reactor, clock, hs):
|
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||||
# These tests assume that it starts 1000 seconds in.
|
# These tests assume that it starts 1000 seconds in.
|
||||||
self.reactor.advance(1000)
|
self.reactor.advance(1000)
|
||||||
|
|
||||||
def test_device_is_created_with_invalid_name(self):
|
def test_device_is_created_with_invalid_name(self) -> None:
|
||||||
self.get_failure(
|
self.get_failure(
|
||||||
self.handler.check_device_registered(
|
self.handler.check_device_registered(
|
||||||
user_id="@boris:foo",
|
user_id="@boris:foo",
|
||||||
device_id="foo",
|
device_id="foo",
|
||||||
initial_device_display_name="a"
|
initial_device_display_name="a" * (MAX_DEVICE_DISPLAY_NAME_LEN + 1),
|
||||||
* (synapse.handlers.device.MAX_DEVICE_DISPLAY_NAME_LEN + 1),
|
|
||||||
),
|
),
|
||||||
synapse.api.errors.SynapseError,
|
SynapseError,
|
||||||
)
|
)
|
||||||
|
|
||||||
def test_device_is_created_if_doesnt_exist(self):
|
def test_device_is_created_if_doesnt_exist(self) -> None:
|
||||||
res = self.get_success(
|
res = self.get_success(
|
||||||
self.handler.check_device_registered(
|
self.handler.check_device_registered(
|
||||||
user_id="@boris:foo",
|
user_id="@boris:foo",
|
||||||
|
@ -59,7 +63,7 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
dev = self.get_success(self.handler.store.get_device("@boris:foo", "fco"))
|
dev = self.get_success(self.handler.store.get_device("@boris:foo", "fco"))
|
||||||
self.assertEqual(dev["display_name"], "display name")
|
self.assertEqual(dev["display_name"], "display name")
|
||||||
|
|
||||||
def test_device_is_preserved_if_exists(self):
|
def test_device_is_preserved_if_exists(self) -> None:
|
||||||
res1 = self.get_success(
|
res1 = self.get_success(
|
||||||
self.handler.check_device_registered(
|
self.handler.check_device_registered(
|
||||||
user_id="@boris:foo",
|
user_id="@boris:foo",
|
||||||
|
@ -81,7 +85,7 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
dev = self.get_success(self.handler.store.get_device("@boris:foo", "fco"))
|
dev = self.get_success(self.handler.store.get_device("@boris:foo", "fco"))
|
||||||
self.assertEqual(dev["display_name"], "display name")
|
self.assertEqual(dev["display_name"], "display name")
|
||||||
|
|
||||||
def test_device_id_is_made_up_if_unspecified(self):
|
def test_device_id_is_made_up_if_unspecified(self) -> None:
|
||||||
device_id = self.get_success(
|
device_id = self.get_success(
|
||||||
self.handler.check_device_registered(
|
self.handler.check_device_registered(
|
||||||
user_id="@theresa:foo",
|
user_id="@theresa:foo",
|
||||||
|
@ -93,7 +97,7 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
dev = self.get_success(self.handler.store.get_device("@theresa:foo", device_id))
|
dev = self.get_success(self.handler.store.get_device("@theresa:foo", device_id))
|
||||||
self.assertEqual(dev["display_name"], "display")
|
self.assertEqual(dev["display_name"], "display")
|
||||||
|
|
||||||
def test_get_devices_by_user(self):
|
def test_get_devices_by_user(self) -> None:
|
||||||
self._record_users()
|
self._record_users()
|
||||||
|
|
||||||
res = self.get_success(self.handler.get_devices_by_user(user1))
|
res = self.get_success(self.handler.get_devices_by_user(user1))
|
||||||
|
@ -131,7 +135,7 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
device_map["abc"],
|
device_map["abc"],
|
||||||
)
|
)
|
||||||
|
|
||||||
def test_get_device(self):
|
def test_get_device(self) -> None:
|
||||||
self._record_users()
|
self._record_users()
|
||||||
|
|
||||||
res = self.get_success(self.handler.get_device(user1, "abc"))
|
res = self.get_success(self.handler.get_device(user1, "abc"))
|
||||||
|
@ -146,21 +150,19 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
res,
|
res,
|
||||||
)
|
)
|
||||||
|
|
||||||
def test_delete_device(self):
|
def test_delete_device(self) -> None:
|
||||||
self._record_users()
|
self._record_users()
|
||||||
|
|
||||||
# delete the device
|
# delete the device
|
||||||
self.get_success(self.handler.delete_device(user1, "abc"))
|
self.get_success(self.handler.delete_device(user1, "abc"))
|
||||||
|
|
||||||
# check the device was deleted
|
# check the device was deleted
|
||||||
self.get_failure(
|
self.get_failure(self.handler.get_device(user1, "abc"), NotFoundError)
|
||||||
self.handler.get_device(user1, "abc"), synapse.api.errors.NotFoundError
|
|
||||||
)
|
|
||||||
|
|
||||||
# we'd like to check the access token was invalidated, but that's a
|
# we'd like to check the access token was invalidated, but that's a
|
||||||
# bit of a PITA.
|
# bit of a PITA.
|
||||||
|
|
||||||
def test_delete_device_and_device_inbox(self):
|
def test_delete_device_and_device_inbox(self) -> None:
|
||||||
self._record_users()
|
self._record_users()
|
||||||
|
|
||||||
# add an device_inbox
|
# add an device_inbox
|
||||||
|
@ -191,7 +193,7 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
)
|
)
|
||||||
self.assertIsNone(res)
|
self.assertIsNone(res)
|
||||||
|
|
||||||
def test_update_device(self):
|
def test_update_device(self) -> None:
|
||||||
self._record_users()
|
self._record_users()
|
||||||
|
|
||||||
update = {"display_name": "new display"}
|
update = {"display_name": "new display"}
|
||||||
|
@ -200,32 +202,29 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
res = self.get_success(self.handler.get_device(user1, "abc"))
|
res = self.get_success(self.handler.get_device(user1, "abc"))
|
||||||
self.assertEqual(res["display_name"], "new display")
|
self.assertEqual(res["display_name"], "new display")
|
||||||
|
|
||||||
def test_update_device_too_long_display_name(self):
|
def test_update_device_too_long_display_name(self) -> None:
|
||||||
"""Update a device with a display name that is invalid (too long)."""
|
"""Update a device with a display name that is invalid (too long)."""
|
||||||
self._record_users()
|
self._record_users()
|
||||||
|
|
||||||
# Request to update a device display name with a new value that is longer than allowed.
|
# Request to update a device display name with a new value that is longer than allowed.
|
||||||
update = {
|
update = {"display_name": "a" * (MAX_DEVICE_DISPLAY_NAME_LEN + 1)}
|
||||||
"display_name": "a"
|
|
||||||
* (synapse.handlers.device.MAX_DEVICE_DISPLAY_NAME_LEN + 1)
|
|
||||||
}
|
|
||||||
self.get_failure(
|
self.get_failure(
|
||||||
self.handler.update_device(user1, "abc", update),
|
self.handler.update_device(user1, "abc", update),
|
||||||
synapse.api.errors.SynapseError,
|
SynapseError,
|
||||||
)
|
)
|
||||||
|
|
||||||
# Ensure the display name was not updated.
|
# Ensure the display name was not updated.
|
||||||
res = self.get_success(self.handler.get_device(user1, "abc"))
|
res = self.get_success(self.handler.get_device(user1, "abc"))
|
||||||
self.assertEqual(res["display_name"], "display 2")
|
self.assertEqual(res["display_name"], "display 2")
|
||||||
|
|
||||||
def test_update_unknown_device(self):
|
def test_update_unknown_device(self) -> None:
|
||||||
update = {"display_name": "new_display"}
|
update = {"display_name": "new_display"}
|
||||||
self.get_failure(
|
self.get_failure(
|
||||||
self.handler.update_device("user_id", "unknown_device_id", update),
|
self.handler.update_device("user_id", "unknown_device_id", update),
|
||||||
synapse.api.errors.NotFoundError,
|
NotFoundError,
|
||||||
)
|
)
|
||||||
|
|
||||||
def _record_users(self):
|
def _record_users(self) -> None:
|
||||||
# check this works for both devices which have a recorded client_ip,
|
# check this works for both devices which have a recorded client_ip,
|
||||||
# and those which don't.
|
# and those which don't.
|
||||||
self._record_user(user1, "xyz", "display 0")
|
self._record_user(user1, "xyz", "display 0")
|
||||||
|
@ -238,8 +237,13 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
self.reactor.advance(10000)
|
self.reactor.advance(10000)
|
||||||
|
|
||||||
def _record_user(
|
def _record_user(
|
||||||
self, user_id, device_id, display_name, access_token=None, ip=None
|
self,
|
||||||
):
|
user_id: str,
|
||||||
|
device_id: str,
|
||||||
|
display_name: str,
|
||||||
|
access_token: Optional[str] = None,
|
||||||
|
ip: Optional[str] = None,
|
||||||
|
) -> None:
|
||||||
device_id = self.get_success(
|
device_id = self.get_success(
|
||||||
self.handler.check_device_registered(
|
self.handler.check_device_registered(
|
||||||
user_id=user_id,
|
user_id=user_id,
|
||||||
|
@ -248,7 +252,7 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
|
|
||||||
if ip is not None:
|
if access_token is not None and ip is not None:
|
||||||
self.get_success(
|
self.get_success(
|
||||||
self.store.insert_client_ip(
|
self.store.insert_client_ip(
|
||||||
user_id, access_token, ip, "user_agent", device_id
|
user_id, access_token, ip, "user_agent", device_id
|
||||||
|
@ -258,7 +262,7 @@ class DeviceTestCase(unittest.HomeserverTestCase):
|
||||||
|
|
||||||
|
|
||||||
class DehydrationTestCase(unittest.HomeserverTestCase):
|
class DehydrationTestCase(unittest.HomeserverTestCase):
|
||||||
def make_homeserver(self, reactor, clock):
|
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
|
||||||
hs = self.setup_test_homeserver("server", federation_http_client=None)
|
hs = self.setup_test_homeserver("server", federation_http_client=None)
|
||||||
self.handler = hs.get_device_handler()
|
self.handler = hs.get_device_handler()
|
||||||
self.registration = hs.get_registration_handler()
|
self.registration = hs.get_registration_handler()
|
||||||
|
@ -266,7 +270,7 @@ class DehydrationTestCase(unittest.HomeserverTestCase):
|
||||||
self.store = hs.get_datastores().main
|
self.store = hs.get_datastores().main
|
||||||
return hs
|
return hs
|
||||||
|
|
||||||
def test_dehydrate_and_rehydrate_device(self):
|
def test_dehydrate_and_rehydrate_device(self) -> None:
|
||||||
user_id = "@boris:dehydration"
|
user_id = "@boris:dehydration"
|
||||||
|
|
||||||
self.get_success(self.store.register_user(user_id, "foobar"))
|
self.get_success(self.store.register_user(user_id, "foobar"))
|
||||||
|
@ -303,7 +307,7 @@ class DehydrationTestCase(unittest.HomeserverTestCase):
|
||||||
access_token=access_token,
|
access_token=access_token,
|
||||||
device_id="not the right device ID",
|
device_id="not the right device ID",
|
||||||
),
|
),
|
||||||
synapse.api.errors.NotFoundError,
|
NotFoundError,
|
||||||
)
|
)
|
||||||
|
|
||||||
# dehydrating the right devices should succeed and change our device ID
|
# dehydrating the right devices should succeed and change our device ID
|
||||||
|
@ -331,7 +335,7 @@ class DehydrationTestCase(unittest.HomeserverTestCase):
|
||||||
# make sure that the device ID that we were initially assigned no longer exists
|
# make sure that the device ID that we were initially assigned no longer exists
|
||||||
self.get_failure(
|
self.get_failure(
|
||||||
self.handler.get_device(user_id, device_id),
|
self.handler.get_device(user_id, device_id),
|
||||||
synapse.api.errors.NotFoundError,
|
NotFoundError,
|
||||||
)
|
)
|
||||||
|
|
||||||
# make sure that there's no device available for dehydrating now
|
# make sure that there's no device available for dehydrating now
|
||||||
|
|
|
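
Note: the hunks above appear to come from the device-handler tests (the hunk headers name DeviceTestCase and DehydrationTestCase, which live in tests/handlers/test_device.py). Besides adding type annotations, the change switches fully-qualified references such as synapse.api.errors.SynapseError to bare names, which implies matching imports near the top of the file. A minimal sketch of what those imports would look like (the import hunk itself is not shown in this diff):

    from typing import Optional

    from synapse.api.errors import NotFoundError, SynapseError
    from synapse.handlers.device import MAX_DEVICE_DISPLAY_NAME_LEN

The get_success/get_failure calls are HomeserverTestCase helpers: get_success runs an awaitable to completion on the test reactor and returns its result, while get_failure asserts that it fails with the given exception type.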
@@ -124,7 +124,6 @@ class PasswordCustomAuthProvider:
                 ("m.login.password", ("password",)): self.check_auth,
             }
         )
-        pass
 
     def check_auth(self, *args):
         return mock_password_provider.check_auth(*args)
@@ -11,6 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+from typing import List, Tuple
 from unittest.mock import Mock
 
 from twisted.internet.defer import Deferred
@@ -18,7 +19,7 @@ from twisted.internet.defer import Deferred
 import synapse.rest.admin
 from synapse.logging.context import make_deferred_yieldable
 from synapse.push import PusherConfigException
-from synapse.rest.client import login, receipts, room
+from synapse.rest.client import login, push_rule, receipts, room
 
 from tests.unittest import HomeserverTestCase, override_config
 
@@ -29,6 +30,7 @@ class HTTPPusherTests(HomeserverTestCase):
         room.register_servlets,
         login.register_servlets,
         receipts.register_servlets,
+        push_rule.register_servlets,
     ]
     user_id = True
     hijack_auth = False
@@ -39,12 +41,12 @@ class HTTPPusherTests(HomeserverTestCase):
         return config
 
     def make_homeserver(self, reactor, clock):
-        self.push_attempts = []
+        self.push_attempts: List[Tuple[Deferred, str, dict]] = []
 
         m = Mock()
 
         def post_json_get_json(url, body):
-            d = Deferred()
+            d: Deferred = Deferred()
             self.push_attempts.append((d, url, body))
             return make_deferred_yieldable(d)
 
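
Note: make_homeserver above (in what appears to be tests/push/test_http.py, inferred from the class name) replaces the pusher's HTTP client with a Mock whose post_json_get_json returns an unresolved Deferred and records each attempted push in self.push_attempts. Tests can then assert on what was pushed and, when needed, fire the Deferred to let the pusher carry on. A sketch of the assumed usage pattern inside a test method, with names taken from the diff:

    # inspect the first captured push and complete its fake HTTP request
    d, url, body = self.push_attempts[0]
    self.assertEqual(url, "http://example.com/_matrix/push/v1/notify")
    d.callback({})  # resolving the Deferred unblocks the pusher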
@@ -719,3 +721,67 @@ class HTTPPusherTests(HomeserverTestCase):
             access_token=access_token,
         )
         self.assertEqual(channel.code, 200, channel.json_body)
 
+    def _make_user_with_pusher(self, username: str) -> Tuple[str, str]:
+        user_id = self.register_user(username, "pass")
+        access_token = self.login(username, "pass")
+
+        # Register the pusher
+        user_tuple = self.get_success(
+            self.hs.get_datastores().main.get_user_by_access_token(access_token)
+        )
+        token_id = user_tuple.token_id
+
+        self.get_success(
+            self.hs.get_pusherpool().add_pusher(
+                user_id=user_id,
+                access_token=token_id,
+                kind="http",
+                app_id="m.http",
+                app_display_name="HTTP Push Notifications",
+                device_display_name="pushy push",
+                pushkey="a@example.com",
+                lang=None,
+                data={"url": "http://example.com/_matrix/push/v1/notify"},
+            )
+        )
+
+        return user_id, access_token
+
+    def test_dont_notify_rule_overrides_message(self):
+        """
+        The override push rule will suppress notification
+        """
+
+        user_id, access_token = self._make_user_with_pusher("user")
+        other_user_id, other_access_token = self._make_user_with_pusher("otheruser")
+
+        # Create a room
+        room = self.helper.create_room_as(user_id, tok=access_token)
+
+        # Disable user notifications for this room -> user
+        body = {
+            "conditions": [{"kind": "event_match", "key": "room_id", "pattern": room}],
+            "actions": ["dont_notify"],
+        }
+        channel = self.make_request(
+            "PUT",
+            "/pushrules/global/override/best.friend",
+            body,
+            access_token=access_token,
+        )
+        self.assertEqual(channel.code, 200)
+
+        # Check we start with no pushes
+        self.assertEqual(len(self.push_attempts), 0)
+
+        # The other user joins
+        self.helper.join(room=room, user=other_user_id, tok=other_access_token)
+
+        # The other user sends a message (ignored by dont_notify push rule set above)
+        self.helper.send(room, body="Hi!", tok=other_access_token)
+        self.assertEqual(len(self.push_attempts), 0)
+
+        # The user sends a message back (sends a notification)
+        self.helper.send(room, body="Hello", tok=access_token)
+        self.assertEqual(len(self.push_attempts), 1)
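
Note: the new test installs an override push rule, the highest-priority kind of push rule, so it wins over the default rules that would otherwise notify for a message. The rule body it PUTs is a plain condition/action pair; a sketch with a hypothetical room ID ("best.friend" is just the arbitrary rule ID the test picked):

    rule = {
        "conditions": [
            # match every event sent in the given room
            {"kind": "event_match", "key": "room_id", "pattern": "!abc:example.com"}
        ],
        "actions": ["dont_notify"],  # suppress notifications for matched events
    }
    # PUT /pushrules/global/override/best.friend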
@@ -39,6 +39,7 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
         self.store = hs.get_datastores().main
         self.admin_user = self.register_user("admin", "pass", admin=True)
         self.admin_user_tok = self.login("admin", "pass")
+        self.updater = BackgroundUpdater(hs, self.store.db_pool)
 
     @parameterized.expand(
         [
@@ -135,10 +136,10 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
         """Test the status API works with a background update."""
 
         # Create a new background update
 
         self._register_bg_update()
 
         self.store.db_pool.updates.start_doing_background_updates()
 
         self.reactor.pump([1.0, 1.0, 1.0])
 
         channel = self.make_request(
@@ -158,7 +159,7 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
                     "average_items_per_ms": 0.1,
                     "total_duration_ms": 1000.0,
                     "total_item_count": (
-                        BackgroundUpdater.DEFAULT_BACKGROUND_BATCH_SIZE
+                        self.updater.default_background_batch_size
                     ),
                 }
             },
@@ -213,7 +214,7 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
                     "average_items_per_ms": 0.1,
                     "total_duration_ms": 1000.0,
                     "total_item_count": (
-                        BackgroundUpdater.DEFAULT_BACKGROUND_BATCH_SIZE
+                        self.updater.default_background_batch_size
                    ),
                 }
             },
@@ -242,7 +243,7 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
                     "average_items_per_ms": 0.1,
                     "total_duration_ms": 1000.0,
                     "total_item_count": (
-                        BackgroundUpdater.DEFAULT_BACKGROUND_BATCH_SIZE
+                        self.updater.default_background_batch_size
                     ),
                 }
             },
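
Note: the expected "total_item_count" now reads default_background_batch_size from a BackgroundUpdater instance rather than the DEFAULT_BACKGROUND_BATCH_SIZE class constant. The apparent point (an inference from the diff, not stated in it) is that a class-level constant cannot reflect per-homeserver configuration, while the instance attribute can. A sketch, reusing the constructor call shown in the diff and assuming hs and store are in scope as in the test:

    # the instance attribute may differ from the class-level default if the
    # homeserver config overrides the background-update batch size
    updater = BackgroundUpdater(hs, store.db_pool)
    expected_count = updater.default_background_batch_size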
@@ -24,6 +24,7 @@ from synapse.util import Clock
 from synapse.visibility import filter_events_for_client
 
 from tests import unittest
+from tests.unittest import override_config
 
 one_hour_ms = 3600000
 one_day_ms = one_hour_ms * 24
@@ -38,7 +39,10 @@ class RetentionTestCase(unittest.HomeserverTestCase):
 
     def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
         config = self.default_config()
-        config["retention"] = {
+
+        # merge this default retention config with anything that was specified in
+        # @override_config
+        retention_config = {
             "enabled": True,
             "default_policy": {
                 "min_lifetime": one_day_ms,
@@ -47,6 +51,8 @@ class RetentionTestCase(unittest.HomeserverTestCase):
             "allowed_lifetime_min": one_day_ms,
             "allowed_lifetime_max": one_day_ms * 3,
         }
+        retention_config.update(config.get("retention", {}))
+        config["retention"] = retention_config
 
         self.hs = self.setup_test_homeserver(config=config)
 
@@ -115,22 +121,20 @@ class RetentionTestCase(unittest.HomeserverTestCase):
 
         self._test_retention_event_purged(room_id, one_day_ms * 2)
 
+    @override_config({"retention": {"purge_jobs": [{"interval": "5d"}]}})
     def test_visibility(self) -> None:
         """Tests that synapse.visibility.filter_events_for_client correctly filters out
-        outdated events
+        outdated events, even if the purge job hasn't got to them yet.
+
+        We do this by setting a very long time between purge jobs.
         """
         store = self.hs.get_datastores().main
         storage = self.hs.get_storage()
         room_id = self.helper.create_room_as(self.user_id, tok=self.token)
-        events = []
 
         # Send a first event, which should be filtered out at the end of the test.
         resp = self.helper.send(room_id=room_id, body="1", tok=self.token)
-        # Get the event from the store so that we end up with a FrozenEvent that we can
-        # give to filter_events_for_client. We need to do this now because the event won't
-        # be in the database anymore after it has expired.
-        events.append(self.get_success(store.get_event(resp.get("event_id"))))
+        first_event_id = resp.get("event_id")
 
         # Advance the time by 2 days. We're using the default retention policy, therefore
         # after this the first event will still be valid.
@@ -138,16 +142,17 @@ class RetentionTestCase(unittest.HomeserverTestCase):
 
         # Send another event, which shouldn't get filtered out.
         resp = self.helper.send(room_id=room_id, body="2", tok=self.token)
 
         valid_event_id = resp.get("event_id")
 
-        events.append(self.get_success(store.get_event(valid_event_id)))
 
         # Advance the time by another 2 days. After this, the first event should be
         # outdated but not the second one.
         self.reactor.advance(one_day_ms * 2 / 1000)
 
-        # Run filter_events_for_client with our list of FrozenEvents.
+        # Fetch the events, and run filter_events_for_client on them
+        events = self.get_success(
+            store.get_events_as_list([first_event_id, valid_event_id])
+        )
+        self.assertEqual(2, len(events), "events retrieved from database")
         filtered_events = self.get_success(
             filter_events_for_client(storage, self.user_id, events)
         )
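
Note: the retention-config change merges the test-case defaults with anything a test supplied via @override_config, and dict.update is a shallow, top-level merge: an override like {"purge_jobs": [...]} adds a key without touching the defaults, but an override that redefined "default_policy" would replace that nested dict wholesale. A quick illustration with hypothetical values:

    defaults = {"enabled": True, "default_policy": {"min_lifetime": 1}}
    defaults.update({"purge_jobs": [{"interval": "5d"}]})
    # -> {"enabled": True, "default_policy": {"min_lifetime": 1},
    #     "purge_jobs": [{"interval": "5d"}]}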
@@ -1,3 +1,18 @@
+# Copyright 2018-2021 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from http import HTTPStatus
 from unittest.mock import Mock, call
 
 from twisted.internet import defer, reactor
@@ -11,14 +26,14 @@ from tests.utils import MockClock
 
 
 class HttpTransactionCacheTestCase(unittest.TestCase):
-    def setUp(self):
+    def setUp(self) -> None:
         self.clock = MockClock()
         self.hs = Mock()
         self.hs.get_clock = Mock(return_value=self.clock)
         self.hs.get_auth = Mock()
         self.cache = HttpTransactionCache(self.hs)
 
-        self.mock_http_response = (200, "GOOD JOB!")
+        self.mock_http_response = (HTTPStatus.OK, "GOOD JOB!")
         self.mock_key = "foo"
 
     @defer.inlineCallbacks
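
Note: swapping the bare 200 for HTTPStatus.OK changes nothing at runtime, because HTTPStatus members are IntEnums that compare equal to their integer values; the gain is readability. A quick check:

    from http import HTTPStatus

    assert HTTPStatus.OK == 200          # IntEnum member, equal to the bare int
    assert HTTPStatus.OK.phrase == "OK"  # carries a human-readable phrase too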
Some files were not shown because too many files have changed in this diff.