Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes

pull/13598/head
Erik Johnston 2022-08-17 10:54:17 +01:00
commit 7cd167f607
114 changed files with 4968 additions and 3115 deletions


@ -1,3 +1,9 @@
Synapse 1.65.0 (2022-08-16)
===========================
No significant changes since 1.65.0rc2.
Synapse 1.65.0rc2 (2022-08-11)
==============================
@ -25,7 +31,7 @@ Bugfixes
--------
- Update the version of the LDAP3 auth provider module included in the `matrixdotorg/synapse` DockerHub images and the Debian packages hosted on packages.matrix.org to 0.2.2. This version fixes a regression in the module. ([\#13470](https://github.com/matrix-org/synapse/issues/13470))
- Fix a bug introduced in Synapse v1.41.0 where the `/hierarchy` API returned non-standard information (a `room_id` field under each entry in `children_state`). ([\#13365](https://github.com/matrix-org/synapse/issues/13365))
- Fix a bug introduced in Synapse v1.41.0 where the `/hierarchy` API returned non-standard information (a `room_id` field under each entry in `children_state`) (this was reverted in v1.65.0rc2, see changelog notes above). ([\#13365](https://github.com/matrix-org/synapse/issues/13365))
- Fix a bug introduced in Synapse 0.24.0 that would respond with the wrong error status code to `/joined_members` requests when the requester is not a current member of the room. Contributed by @andrewdoh. ([\#13374](https://github.com/matrix-org/synapse/issues/13374))
- Fix bug in handling of typing events for appservices. Contributed by Nick @ Beeper (@fizzadar). ([\#13392](https://github.com/matrix-org/synapse/issues/13392))
- Fix a bug introduced in Synapse 1.57.0 where rooms listed in `exclude_rooms_from_sync` in the configuration file would not be properly excluded from incremental syncs. ([\#13408](https://github.com/matrix-org/synapse/issues/13408))


@ -0,0 +1 @@
Improve validation of request bodies for the following client-server API endpoints: [`/account/password`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3accountpassword), [`/account/password/email/requestToken`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3accountpasswordemailrequesttoken), [`/account/deactivate`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3accountdeactivate) and [`/account/3pid/email/requestToken`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3account3pidemailrequesttoken).

changelog.d/13453.misc

@ -0,0 +1 @@
Allow use of both `@trace` and `@tag_args` stacked on the same function (tracing).

changelog.d/13459.misc

@ -0,0 +1 @@
Faster joins: update the rejected state of events during de-partial-stating.

changelog.d/13471.misc

@ -0,0 +1 @@
Clean-up tests for notifications.

changelog.d/13472.doc

@ -0,0 +1 @@
Add `openssl` example for generating registration HMAC digest.

changelog.d/13474.misc

@ -0,0 +1 @@
Add some miscellaneous comments to document sync, especially around `compute_state_delta`.

changelog.d/13479.misc

@ -0,0 +1 @@
Use literals in place of `HTTPStatus` constants in tests.

changelog.d/13485.misc

@ -0,0 +1 @@
Add comments about how event push actions are rotated.

changelog.d/13488.misc

@ -0,0 +1 @@
Use literals in place of `HTTPStatus` constants in tests.

changelog.d/13489.misc

@ -0,0 +1 @@
Instrument the federation/backfill part of `/messages` for understandable traces in Jaeger.

changelog.d/13492.doc

@ -0,0 +1 @@
Document that event purging related to the `redaction_retention_period` config option is executed only every 5 minutes.

changelog.d/13493.misc

@ -0,0 +1 @@
Modify HTML template content to better support mobile devices' screen sizes.

changelog.d/13497.doc

@ -0,0 +1,2 @@
Add a warning to retention documentation regarding the possibility of database corruption.

changelog.d/13499.misc

@ -0,0 +1 @@
Instrument `FederationStateIdsServlet` (`/state_ids`) for understandable traces in Jaeger.


@ -0,0 +1 @@
Add forgotten status to Room Details API.

changelog.d/13514.bugfix

@ -0,0 +1 @@
Faster room joins: make `/joined_members` block whilst the room is partial stated.

changelog.d/13515.doc

@ -0,0 +1 @@
Document that the `DOCKER_BUILDKIT=1` flag is needed to build the docker image.

changelog.d/13522.misc

@ -0,0 +1 @@
Improve performance of sending messages in rooms with thousands of local users.

changelog.d/13531.misc

@ -0,0 +1 @@
Faster room joins: Refuse to start when faster joins is enabled on a deployment with workers, since worker configurations are not currently supported.

changelog.d/13533.misc

@ -0,0 +1 @@
Track HTTP response times over 10 seconds from `/messages` (`synapse_room_message_list_rest_servlet_response_time_seconds`).

changelog.d/13535.misc

@ -0,0 +1 @@
Add metrics to time how long it takes us to do backfill processing (`synapse_federation_backfill_processing_before_time_seconds`, `synapse_federation_backfill_processing_after_time_seconds`).

changelog.d/13536.doc

@ -0,0 +1 @@
Add missing links in `user_consent` section of configuration manual.

changelog.d/13544.misc

@ -0,0 +1 @@
Add metrics to track rate limiter queue timing (`synapse_rate_limit_queue_wait_time_seconds`).

File diff suppressed because it is too large.

debian/changelog

@ -1,3 +1,9 @@
matrix-synapse-py3 (1.65.0) stable; urgency=medium
* New Synapse release 1.65.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 16 Aug 2022 16:51:26 +0100
matrix-synapse-py3 (1.65.0~rc2) stable; urgency=medium
* New Synapse release 1.65.0rc2.


@ -191,7 +191,7 @@ If you need to build the image from a Synapse checkout, use the following `docke
build` command from the repo's root:
```
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
DOCKER_BUILDKIT=1 docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
You can choose to build a different docker image by changing the value of the `-f` flag to


@ -46,7 +46,24 @@ As an example:
The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being
the shared secret and the content being the nonce, user, password, either the
string "admin" or "notadmin", and optionally the user_type
each separated by NULs. For an example of generation in Python:
each separated by NULs.
Here is an easy way to generate the HMAC digest if you have Bash and OpenSSL:
```bash
# Update these values and then paste this code block into a bash terminal
nonce='thisisanonce'
username='pepper_roni'
password='pizza'
admin='admin'
secret='shared_secret'
printf '%s\0%s\0%s\0%s' "$nonce" "$username" "$password" "$admin" |
openssl sha1 -hmac "$secret" |
awk '{print $2}'
```
For an example of generation in Python:
```python
import hmac, hashlib
@ -70,4 +87,4 @@ def generate_mac(nonce, user, password, admin=False, user_type=None):
mac.update(user_type.encode('utf8'))
return mac.hexdigest()
```
```
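As a rough end-to-end sketch, the same digest can be computed and submitted to the shared-secret registration endpoint from Python. The homeserver URL, credentials and use of the `requests` library below are placeholders:

```python
# Illustrative only: fetch a nonce, compute the MAC and register a user.
import hashlib
import hmac

import requests

server = "http://localhost:8008"  # assumed local homeserver
secret = "shared_secret"          # registration_shared_secret from the homeserver config
username = "pepper_roni"
password = "pizza"

# A fresh nonce must be requested from the server first.
nonce = requests.get(f"{server}/_synapse/admin/v1/register").json()["nonce"]

# nonce, user, password and "admin"/"notadmin", separated by NULs, as described above.
mac = hmac.new(key=secret.encode("utf8"), digestmod=hashlib.sha1)
mac.update(f"{nonce}\0{username}\0{password}\0admin".encode("utf8"))

resp = requests.post(
    f"{server}/_synapse/admin/v1/register",
    json={
        "nonce": nonce,
        "username": username,
        "password": password,
        "admin": True,
        "mac": mac.hexdigest(),
    },
)
print(resp.json())
```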


@ -302,6 +302,8 @@ The following fields are possible in the JSON response body:
* `state_events` - Total number of state_events of a room. Complexity of the room.
* `room_type` - The type of the room taken from the room's creation event; for example "m.space" if the room is a space.
If the room does not define a type, the value will be `null`.
* `forgotten` - Whether all local users have
[forgotten](https://spec.matrix.org/latest/client-server-api/#leaving-rooms) the room.
The API is:
@ -330,7 +332,8 @@ A response body like the following is returned:
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534,
"room_type": "m.space"
"room_type": "m.space",
"forgotten": false
}
```
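A hedged sketch of reading the new `forgotten` flag via this API; the homeserver URL, room ID and access token are placeholders:

```python
import requests

server = "http://localhost:8008"
room_id = "!mscSGdLiBkmfqHJXQX:matrix.org"
headers = {"Authorization": "Bearer <admin_access_token>"}

# Room Details API: GET /_synapse/admin/v1/rooms/<room_id>
details = requests.get(
    f"{server}/_synapse/admin/v1/rooms/{room_id}", headers=headers
).json()

if details["forgotten"]:
    print("All local users have forgotten this room.")
```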


@ -8,7 +8,8 @@ and allow server and room admins to configure how long messages should
be kept in a homeserver's database before being purged from it.
**Please note that, as this feature isn't part of the Matrix
specification yet, this implementation is to be considered as
experimental.**
experimental. There are known bugs which may cause database corruption.
Proceed with caution.**
A message retention policy is mainly defined by its `max_lifetime`
parameter, which defines how long a message can be kept around after


@ -759,6 +759,10 @@ allowed_avatar_mimetypes: ["image/png", "image/jpeg", "image/gif"]
How long to keep redacted events in unredacted form in the database. After
this period redacted events get replaced with their redacted form in the DB.
Synapse will check whether the retention period has concluded for redacted
events every 5 minutes. Thus, even if this option is set to `0`, Synapse may
still take up to 5 minutes to purge redacted events from the database.
Defaults to `7d`. Set to `null` to disable.
Example configuration:
@ -845,7 +849,11 @@ which are older than the room's maximum retention period. Synapse will also
filter events received over federation so that events that should have been
purged are ignored and not stored again.
The message retention policies feature is disabled by default.
The message retention policies feature is disabled by default. Please be advised
that enabling this feature carries some risk. There are known bugs with the implementation
which can cause database corruption. Setting retention to delete older history
is less risky than deleting newer history, but in general caution is advised when enabling this
experimental feature. You can read more about this feature [here](../../message_retention_policies.md).
This setting has the following sub-options:
* `default_policy`: Default retention policy. If set, Synapse will apply it to rooms that lack the
@ -3344,7 +3352,7 @@ user_directory:
For detailed instructions on user consent configuration, see [here](../../consent_tracking.md).
Parts of this section are required if enabling the `consent` resource under
`listeners`, in particular `template_dir` and `version`. # TODO: link `listeners`
[`listeners`](#listeners), in particular `template_dir` and `version`.
* `template_dir`: gives the location of the templates for the HTML forms.
This directory should contain one subdirectory per language (eg, `en`, `fr`),
@ -3356,7 +3364,7 @@ Parts of this section are required if enabling the `consent` resource under
parameter.
* `server_notice_content`: if enabled, will send a user a "Server Notice"
asking them to consent to the privacy policy. The `server_notices` section ##TODO: link
asking them to consent to the privacy policy. The [`server_notices` section](#server_notices)
must also be configured for this to work. Notices will *not* be sent to
guest users unless `send_server_notice_to_guests` is set to true.


@ -1,6 +1,6 @@
[mypy]
namespace_packages = True
plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
plugins = pydantic.mypy, mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
follow_imports = normal
check_untyped_defs = True
show_error_codes = True

poetry.lock

@ -778,6 +778,21 @@ category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydantic"
version = "1.9.1"
description = "Data validation and settings management using python type hints"
category = "main"
optional = false
python-versions = ">=3.6.1"
[package.dependencies]
typing-extensions = ">=3.7.4.3"
[package.extras]
dotenv = ["python-dotenv (>=0.10.4)"]
email = ["email-validator (>=1.0.3)"]
[[package]]
name = "pyflakes"
version = "2.4.0"
@ -1563,7 +1578,7 @@ url_preview = ["lxml"]
[metadata]
lock-version = "1.1"
python-versions = "^3.7.1"
content-hash = "c24bbcee7e86dbbe7cdbf49f91a25b310bf21095452641e7440129f59b077f78"
content-hash = "7de518bf27967b3547eab8574342cfb67f87d6b47b4145c13de11112141dbf2d"
[metadata.files]
attrs = [
@ -2260,6 +2275,43 @@ pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydantic = [
{file = "pydantic-1.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c8098a724c2784bf03e8070993f6d46aa2eeca031f8d8a048dff277703e6e193"},
{file = "pydantic-1.9.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c320c64dd876e45254bdd350f0179da737463eea41c43bacbee9d8c9d1021f11"},
{file = "pydantic-1.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18f3e912f9ad1bdec27fb06b8198a2ccc32f201e24174cec1b3424dda605a310"},
{file = "pydantic-1.9.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c11951b404e08b01b151222a1cb1a9f0a860a8153ce8334149ab9199cd198131"},
{file = "pydantic-1.9.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:8bc541a405423ce0e51c19f637050acdbdf8feca34150e0d17f675e72d119580"},
{file = "pydantic-1.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:e565a785233c2d03724c4dc55464559639b1ba9ecf091288dd47ad9c629433bd"},
{file = "pydantic-1.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:a4a88dcd6ff8fd47c18b3a3709a89adb39a6373f4482e04c1b765045c7e282fd"},
{file = "pydantic-1.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:447d5521575f18e18240906beadc58551e97ec98142266e521c34968c76c8761"},
{file = "pydantic-1.9.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:985ceb5d0a86fcaa61e45781e567a59baa0da292d5ed2e490d612d0de5796918"},
{file = "pydantic-1.9.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:059b6c1795170809103a1538255883e1983e5b831faea6558ef873d4955b4a74"},
{file = "pydantic-1.9.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:d12f96b5b64bec3f43c8e82b4aab7599d0157f11c798c9f9c528a72b9e0b339a"},
{file = "pydantic-1.9.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:ae72f8098acb368d877b210ebe02ba12585e77bd0db78ac04a1ee9b9f5dd2166"},
{file = "pydantic-1.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:79b485767c13788ee314669008d01f9ef3bc05db9ea3298f6a50d3ef596a154b"},
{file = "pydantic-1.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:494f7c8537f0c02b740c229af4cb47c0d39840b829ecdcfc93d91dcbb0779892"},
{file = "pydantic-1.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f0f047e11febe5c3198ed346b507e1d010330d56ad615a7e0a89fae604065a0e"},
{file = "pydantic-1.9.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:969dd06110cb780da01336b281f53e2e7eb3a482831df441fb65dd30403f4608"},
{file = "pydantic-1.9.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:177071dfc0df6248fd22b43036f936cfe2508077a72af0933d0c1fa269b18537"},
{file = "pydantic-1.9.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9bcf8b6e011be08fb729d110f3e22e654a50f8a826b0575c7196616780683380"},
{file = "pydantic-1.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a955260d47f03df08acf45689bd163ed9df82c0e0124beb4251b1290fa7ae728"},
{file = "pydantic-1.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9ce157d979f742a915b75f792dbd6aa63b8eccaf46a1005ba03aa8a986bde34a"},
{file = "pydantic-1.9.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0bf07cab5b279859c253d26a9194a8906e6f4a210063b84b433cf90a569de0c1"},
{file = "pydantic-1.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d93d4e95eacd313d2c765ebe40d49ca9dd2ed90e5b37d0d421c597af830c195"},
{file = "pydantic-1.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1542636a39c4892c4f4fa6270696902acb186a9aaeac6f6cf92ce6ae2e88564b"},
{file = "pydantic-1.9.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a9af62e9b5b9bc67b2a195ebc2c2662fdf498a822d62f902bf27cccb52dbbf49"},
{file = "pydantic-1.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:fe4670cb32ea98ffbf5a1262f14c3e102cccd92b1869df3bb09538158ba90fe6"},
{file = "pydantic-1.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:9f659a5ee95c8baa2436d392267988fd0f43eb774e5eb8739252e5a7e9cf07e0"},
{file = "pydantic-1.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b83ba3825bc91dfa989d4eed76865e71aea3a6ca1388b59fc801ee04c4d8d0d6"},
{file = "pydantic-1.9.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1dd8fecbad028cd89d04a46688d2fcc14423e8a196d5b0a5c65105664901f810"},
{file = "pydantic-1.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:02eefd7087268b711a3ff4db528e9916ac9aa18616da7bca69c1871d0b7a091f"},
{file = "pydantic-1.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7eb57ba90929bac0b6cc2af2373893d80ac559adda6933e562dcfb375029acee"},
{file = "pydantic-1.9.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:4ce9ae9e91f46c344bec3b03d6ee9612802682c1551aaf627ad24045ce090761"},
{file = "pydantic-1.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:72ccb318bf0c9ab97fc04c10c37683d9eea952ed526707fabf9ac5ae59b701fd"},
{file = "pydantic-1.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:61b6760b08b7c395975d893e0b814a11cf011ebb24f7d869e7118f5a339a82e1"},
{file = "pydantic-1.9.1-py3-none-any.whl", hash = "sha256:4988c0f13c42bfa9ddd2fe2f569c9d54646ce84adc5de84228cfe83396f3bd58"},
{file = "pydantic-1.9.1.tar.gz", hash = "sha256:1ed987c3ff29fff7fd8c3ea3a3ea877ad310aae2ef9889a119e22d3f2db0691a"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},


@ -54,7 +54,7 @@ skip_gitignore = true
[tool.poetry]
name = "matrix-synapse"
version = "1.65.0rc2"
version = "1.65.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@ -158,6 +158,9 @@ packaging = ">=16.1"
# At the time of writing, we only use functions from the version `importlib.metadata`
# which shipped in Python 3.8. This corresponds to version 1.4 of the backport.
importlib_metadata = { version = ">=1.4", python = "<3.8" }
# This is the most recent version of Pydantic available on common distros.
pydantic = ">=1.7.4"
# Optional Dependencies


@ -441,6 +441,13 @@ def start(config_options: List[str]) -> None:
"synapse.app.user_dir",
)
if config.experimental.faster_joins_enabled:
raise ConfigError(
"You have enabled the experimental `faster_joins` config option, but it is "
"not compatible with worker deployments yet. Please disable `faster_joins` "
"or run Synapse as a single process deployment instead."
)
synapse.events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage


@ -61,7 +61,7 @@ from synapse.federation.federation_base import (
)
from synapse.federation.transport.client import SendJoinResponse
from synapse.http.types import QueryParams
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import SynapseTags, set_tag, tag_args, trace
from synapse.types import JsonDict, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
@ -235,6 +235,7 @@ class FederationClient(FederationBase):
)
@trace
@tag_args
async def backfill(
self, dest: str, room_id: str, limit: int, extremities: Collection[str]
) -> Optional[List[EventBase]]:
@ -337,6 +338,8 @@ class FederationClient(FederationBase):
return None
@trace
@tag_args
async def get_pdu(
self,
destinations: Iterable[str],
@ -448,6 +451,8 @@ class FederationClient(FederationBase):
return event_copy
@trace
@tag_args
async def get_room_state_ids(
self, destination: str, room_id: str, event_id: str
) -> Tuple[List[str], List[str]]:
@ -467,6 +472,23 @@ class FederationClient(FederationBase):
state_event_ids = result["pdu_ids"]
auth_event_ids = result.get("auth_chain_ids", [])
set_tag(
SynapseTags.RESULT_PREFIX + "state_event_ids",
str(state_event_ids),
)
set_tag(
SynapseTags.RESULT_PREFIX + "state_event_ids.length",
str(len(state_event_ids)),
)
set_tag(
SynapseTags.RESULT_PREFIX + "auth_event_ids",
str(auth_event_ids),
)
set_tag(
SynapseTags.RESULT_PREFIX + "auth_event_ids.length",
str(len(auth_event_ids)),
)
if not isinstance(state_event_ids, list) or not isinstance(
auth_event_ids, list
):
@ -474,6 +496,8 @@ class FederationClient(FederationBase):
return state_event_ids, auth_event_ids
@trace
@tag_args
async def get_room_state(
self,
destination: str,
@ -533,6 +557,7 @@ class FederationClient(FederationBase):
return valid_state_events, valid_auth_events
@trace
async def _check_sigs_and_hash_and_fetch(
self,
origin: str,


@ -61,7 +61,12 @@ from synapse.logging.context import (
nested_logging_context,
run_in_background,
)
from synapse.logging.opentracing import log_kv, start_active_span_from_edu, trace
from synapse.logging.opentracing import (
log_kv,
start_active_span_from_edu,
tag_args,
trace,
)
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.replication.http.federation import (
ReplicationFederationSendEduRestServlet,
@ -547,6 +552,8 @@ class FederationServer(FederationBase):
return 200, resp
@trace
@tag_args
async def on_state_ids_request(
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, JsonDict]:
@ -569,6 +576,8 @@ class FederationServer(FederationBase):
return 200, resp
@trace
@tag_args
async def _on_state_ids_request_compute(
self, room_id: str, event_id: str
) -> JsonDict:


@ -32,6 +32,7 @@ from typing import (
)
import attr
from prometheus_client import Histogram
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64
@ -59,7 +60,7 @@ from synapse.events.validator import EventValidator
from synapse.federation.federation_client import InvalidResponseError
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import nested_logging_context
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import SynapseTags, set_tag, tag_args, trace
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.module_api import NOT_SPAM
from synapse.replication.http.federation import (
@ -79,6 +80,24 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Added to debug performance and track progress on optimizations
backfill_processing_before_timer = Histogram(
"synapse_federation_backfill_processing_before_time_seconds",
"sec",
[],
buckets=(
1.0,
5.0,
10.0,
20.0,
30.0,
40.0,
60.0,
80.0,
"+Inf",
),
)
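# Example query (illustrative, standard Prometheus PromQL rather than anything
# defined here): the 95th percentile of this timing can be graphed with
#   histogram_quantile(0.95, sum by (le) (
#       rate(synapse_federation_backfill_processing_before_time_seconds_bucket[5m])))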
def get_domains_from_state(state: StateMap[EventBase]) -> List[Tuple[str, int]]:
"""Get joined domains from state
@ -138,6 +157,7 @@ class FederationHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self._state_storage_controller = self._storage_controllers.state
@ -197,12 +217,39 @@ class FederationHandler:
return. This is used as part of the heuristic to decide if we
should back paginate.
"""
# Starting the processing time here so we can include the room backfill
# linearizer lock queue in the timing
processing_start_time = self.clock.time_msec()
async with self._room_backfill.queue(room_id):
return await self._maybe_backfill_inner(room_id, current_depth, limit)
return await self._maybe_backfill_inner(
room_id,
current_depth,
limit,
processing_start_time=processing_start_time,
)
async def _maybe_backfill_inner(
self, room_id: str, current_depth: int, limit: int
self,
room_id: str,
current_depth: int,
limit: int,
*,
processing_start_time: int,
) -> bool:
"""
Checks whether the `current_depth` is at or approaching any backfill
points in the room and if so, will backfill. We only care about
checking backfill points that happened before the `current_depth`
(meaning less than or equal to the `current_depth`).
Args:
room_id: The room to backfill in.
current_depth: The depth to check at for any upcoming backfill points.
limit: The max number of events to request from the remote federated server.
processing_start_time: The time when `maybe_backfill` started
processing. Only used for timing.
"""
backwards_extremities = [
_BackfillPoint(event_id, depth, _BackfillPointType.BACKWARDS_EXTREMITY)
for event_id, depth in await self.store.get_oldest_event_ids_with_depth_in_room(
@ -370,6 +417,14 @@ class FederationHandler:
logger.debug(
"_maybe_backfill_inner: extremities_to_request %s", extremities_to_request
)
set_tag(
SynapseTags.RESULT_PREFIX + "extremities_to_request",
str(extremities_to_request),
)
set_tag(
SynapseTags.RESULT_PREFIX + "extremities_to_request.length",
str(len(extremities_to_request)),
)
# Now we need to decide which hosts to hit first.
@ -425,6 +480,11 @@ class FederationHandler:
return False
processing_end_time = self.clock.time_msec()
backfill_processing_before_timer.observe(
(processing_start_time - processing_end_time) / 1000
)
success = await try_backfill(likely_domains)
if success:
return True
@ -1081,6 +1141,8 @@ class FederationHandler:
return event
@trace
@tag_args
async def get_state_ids_for_pdu(self, room_id: str, event_id: str) -> List[str]:
"""Returns the state at the event. i.e. not including said event."""
event = await self.store.get_event(event_id, check_room_id=room_id)


@ -29,7 +29,7 @@ from typing import (
Tuple,
)
from prometheus_client import Counter
from prometheus_client import Counter, Histogram
from synapse import event_auth
from synapse.api.constants import (
@ -59,7 +59,13 @@ from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.federation.federation_client import InvalidResponseError
from synapse.logging.context import nested_logging_context
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import (
SynapseTags,
set_tag,
start_active_span,
tag_args,
trace,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.replication.http.federation import (
@ -92,6 +98,26 @@ soft_failed_event_counter = Counter(
"Events received over federation that we marked as soft_failed",
)
# Added to debug performance and track progress on optimizations
backfill_processing_after_timer = Histogram(
"synapse_federation_backfill_processing_after_time_seconds",
"sec",
[],
buckets=(
1.0,
5.0,
10.0,
20.0,
30.0,
40.0,
60.0,
80.0,
120.0,
180.0,
"+Inf",
),
)
class FederationEventHandler:
"""Handles events that originated from federation.
@ -410,6 +436,7 @@ class FederationEventHandler:
prev_member_event,
)
@trace
async def process_remote_join(
self,
origin: str,
@ -597,20 +624,21 @@ class FederationEventHandler:
if not events:
return
# if there are any events in the wrong room, the remote server is buggy and
# should not be trusted.
for ev in events:
if ev.room_id != room_id:
raise InvalidResponseError(
f"Remote server {dest} returned event {ev.event_id} which is in "
f"room {ev.room_id}, when we were backfilling in {room_id}"
)
with backfill_processing_after_timer.time():
# if there are any events in the wrong room, the remote server is buggy and
# should not be trusted.
for ev in events:
if ev.room_id != room_id:
raise InvalidResponseError(
f"Remote server {dest} returned event {ev.event_id} which is in "
f"room {ev.room_id}, when we were backfilling in {room_id}"
)
await self._process_pulled_events(
dest,
events,
backfilled=True,
)
await self._process_pulled_events(
dest,
events,
backfilled=True,
)
@trace
async def _get_missing_events_for_pdu(
@ -715,7 +743,7 @@ class FederationEventHandler:
@trace
async def _process_pulled_events(
self, origin: str, events: Iterable[EventBase], backfilled: bool
self, origin: str, events: Collection[EventBase], backfilled: bool
) -> None:
"""Process a batch of events we have pulled from a remote server
@ -730,6 +758,15 @@ class FederationEventHandler:
backfilled: True if this is part of a historical batch of events (inhibits
notification to clients, and validation of device keys.)
"""
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids",
str([event.event_id for event in events]),
)
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(events)),
)
set_tag(SynapseTags.FUNC_ARG_PREFIX + "backfilled", str(backfilled))
logger.debug(
"processing pulled backfilled=%s events=%s",
backfilled,
@ -753,6 +790,7 @@ class FederationEventHandler:
await self._process_pulled_event(origin, ev, backfilled=backfilled)
@trace
@tag_args
async def _process_pulled_event(
self, origin: str, event: EventBase, backfilled: bool
) -> None:
@ -854,6 +892,7 @@ class FederationEventHandler:
else:
raise
@trace
async def _compute_event_context_with_maybe_missing_prevs(
self, dest: str, event: EventBase
) -> EventContext:
@ -970,6 +1009,8 @@ class FederationEventHandler:
event, state_ids_before_event=state_map, partial_state=partial_state
)
@trace
@tag_args
async def _get_state_ids_after_missing_prev_event(
self,
destination: str,
@ -1009,10 +1050,10 @@ class FederationEventHandler:
logger.debug("Fetching %i events from cache/store", len(desired_events))
have_events = await self._store.have_seen_events(room_id, desired_events)
missing_desired_events = desired_events - have_events
missing_desired_event_ids = desired_events - have_events
logger.debug(
"We are missing %i events (got %i)",
len(missing_desired_events),
len(missing_desired_event_ids),
len(have_events),
)
@ -1024,13 +1065,30 @@ class FederationEventHandler:
# already have a bunch of the state events. It would be nice if the
# federation api gave us a way of finding out which we actually need.
missing_auth_events = set(auth_event_ids) - have_events
missing_auth_events.difference_update(
await self._store.have_seen_events(room_id, missing_auth_events)
missing_auth_event_ids = set(auth_event_ids) - have_events
missing_auth_event_ids.difference_update(
await self._store.have_seen_events(room_id, missing_auth_event_ids)
)
logger.debug("We are also missing %i auth events", len(missing_auth_events))
logger.debug("We are also missing %i auth events", len(missing_auth_event_ids))
missing_events = missing_desired_events | missing_auth_events
missing_event_ids = missing_desired_event_ids | missing_auth_event_ids
set_tag(
SynapseTags.RESULT_PREFIX + "missing_auth_event_ids",
str(missing_auth_event_ids),
)
set_tag(
SynapseTags.RESULT_PREFIX + "missing_auth_event_ids.length",
str(len(missing_auth_event_ids)),
)
set_tag(
SynapseTags.RESULT_PREFIX + "missing_desired_event_ids",
str(missing_desired_event_ids),
)
set_tag(
SynapseTags.RESULT_PREFIX + "missing_desired_event_ids.length",
str(len(missing_desired_event_ids)),
)
# Making an individual request for each of 1000s of events has a lot of
# overhead. On the other hand, we don't really want to fetch all of the events
@ -1041,13 +1099,13 @@ class FederationEventHandler:
#
# TODO: might it be better to have an API which lets us do an aggregate event
# request
if (len(missing_events) * 10) >= len(auth_event_ids) + len(state_event_ids):
if (len(missing_event_ids) * 10) >= len(auth_event_ids) + len(state_event_ids):
logger.debug("Requesting complete state from remote")
await self._get_state_and_persist(destination, room_id, event_id)
else:
logger.debug("Fetching %i events from remote", len(missing_events))
logger.debug("Fetching %i events from remote", len(missing_event_ids))
await self._get_events_and_persist(
destination=destination, room_id=room_id, event_ids=missing_events
destination=destination, room_id=room_id, event_ids=missing_event_ids
)
# We now need to fill out the state map, which involves fetching the
@ -1104,6 +1162,14 @@ class FederationEventHandler:
event_id,
failed_to_fetch,
)
set_tag(
SynapseTags.RESULT_PREFIX + "failed_to_fetch",
str(failed_to_fetch),
)
set_tag(
SynapseTags.RESULT_PREFIX + "failed_to_fetch.length",
str(len(failed_to_fetch)),
)
if remote_event.is_state() and remote_event.rejected_reason is None:
state_map[
@ -1112,6 +1178,8 @@ class FederationEventHandler:
return state_map
@trace
@tag_args
async def _get_state_and_persist(
self, destination: str, room_id: str, event_id: str
) -> None:
@ -1133,6 +1201,7 @@ class FederationEventHandler:
destination=destination, room_id=room_id, event_ids=(event_id,)
)
@trace
async def _process_received_pdu(
self,
origin: str,
@ -1283,6 +1352,7 @@ class FederationEventHandler:
except Exception:
logger.exception("Failed to resync device for %s", sender)
@trace
async def _handle_marker_event(self, origin: str, marker_event: EventBase) -> None:
"""Handles backfilling the insertion event when we receive a marker
event that points to one.
@ -1414,6 +1484,8 @@ class FederationEventHandler:
return event_from_response
@trace
@tag_args
async def _get_events_and_persist(
self, destination: str, room_id: str, event_ids: Collection[str]
) -> None:
@ -1459,6 +1531,7 @@ class FederationEventHandler:
logger.info("Fetched %i events of %i requested", len(events), len(event_ids))
await self._auth_and_persist_outliers(room_id, events)
@trace
async def _auth_and_persist_outliers(
self, room_id: str, events: Iterable[EventBase]
) -> None:
@ -1477,6 +1550,16 @@ class FederationEventHandler:
"""
event_map = {event.event_id: event for event in events}
event_ids = event_map.keys()
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids",
str(event_ids),
)
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(event_ids)),
)
# filter out any events we have already seen. This might happen because
# the events were eagerly pushed to us (eg, during a room join), or because
# another thread has raced against us since we decided to request the event.
@ -1593,6 +1676,7 @@ class FederationEventHandler:
backfilled=True,
)
@trace
async def _check_event_auth(
self, origin: Optional[str], event: EventBase, context: EventContext
) -> None:
@ -1631,6 +1715,14 @@ class FederationEventHandler:
claimed_auth_events = await self._load_or_fetch_auth_events_for_event(
origin, event
)
set_tag(
SynapseTags.RESULT_PREFIX + "claimed_auth_events",
str([ev.event_id for ev in claimed_auth_events]),
)
set_tag(
SynapseTags.RESULT_PREFIX + "claimed_auth_events.length",
str(len(claimed_auth_events)),
)
# ... and check that the event passes auth at those auth events.
# https://spec.matrix.org/v1.3/server-server-api/#checks-performed-on-receipt-of-a-pdu:
@ -1728,6 +1820,7 @@ class FederationEventHandler:
)
context.rejected = RejectedReason.AUTH_ERROR
@trace
async def _maybe_kick_guest_users(self, event: EventBase) -> None:
if event.type != EventTypes.GuestAccess:
return
@ -1935,6 +2028,8 @@ class FederationEventHandler:
# instead we raise an AuthError, which will make the caller ignore it.
raise AuthError(code=HTTPStatus.FORBIDDEN, msg="Auth events could not be found")
@trace
@tag_args
async def _get_remote_auth_chain_for_event(
self, destination: str, room_id: str, event_id: str
) -> None:
@ -1963,6 +2058,7 @@ class FederationEventHandler:
await self._auth_and_persist_outliers(room_id, remote_auth_events)
@trace
async def _run_push_actions_and_persist_event(
self, event: EventBase, context: EventContext, backfilled: bool = False
) -> None:
@ -2071,8 +2167,17 @@ class FederationEventHandler:
self._message_handler.maybe_schedule_expiry(event)
if not backfilled: # Never notify for backfilled events
for event in events:
await self._notify_persisted_event(event, max_stream_token)
with start_active_span("notify_persisted_events"):
set_tag(
SynapseTags.RESULT_PREFIX + "event_ids",
str([ev.event_id for ev in events]),
)
set_tag(
SynapseTags.RESULT_PREFIX + "event_ids.length",
str(len(events)),
)
for event in events:
await self._notify_persisted_event(event, max_stream_token)
return max_stream_token.stream


@ -331,7 +331,11 @@ class MessageHandler:
msg="Getting joined members while not being a current member of the room is forbidden.",
)
users_with_profile = await self.store.get_users_in_room_with_profiles(room_id)
users_with_profile = (
await self._state_storage_controller.get_users_in_room_with_profiles(
room_id
)
)
# If this is an AS, double check that they are allowed to see the members.
# This can either be because the AS user is in the room or because there


@ -13,7 +13,17 @@
# limitations under the License.
import itertools
import logging
from typing import TYPE_CHECKING, Any, Dict, FrozenSet, List, Optional, Set, Tuple
from typing import (
TYPE_CHECKING,
Any,
Dict,
FrozenSet,
List,
Optional,
Sequence,
Set,
Tuple,
)
import attr
from prometheus_client import Counter
@ -89,7 +99,7 @@ class SyncConfig:
@attr.s(slots=True, frozen=True, auto_attribs=True)
class TimelineBatch:
prev_batch: StreamToken
events: List[EventBase]
events: Sequence[EventBase]
limited: bool
# A mapping of event ID to the bundled aggregations for the above events.
# This is only calculated if limited is true.
@ -852,16 +862,26 @@ class SyncHandler:
now_token: StreamToken,
full_state: bool,
) -> MutableStateMap[EventBase]:
"""Works out the difference in state between the start of the timeline
and the previous sync.
"""Works out the difference in state between the end of the previous sync and
the start of the timeline.
Args:
room_id:
batch: The timeline batch for the room that will be sent to the user.
sync_config:
since_token: Token of the end of the previous batch. May be None.
since_token: Token of the end of the previous batch. May be `None`.
now_token: Token of the end of the current batch.
full_state: Whether to force returning the full state.
`lazy_load_members` still applies when `full_state` is `True`.
Returns:
The state to return in the sync response for the room.
Clients will overlay this onto the state at the end of the previous sync to
arrive at the state at the start of the timeline.
Clients will then overlay state events in the timeline to arrive at the
state at the end of the timeline, in preparation for the next sync.
"""
# TODO(mjark) Check if the state events were received by the server
# after the previous sync, since we need to include those state
@ -869,7 +889,8 @@ class SyncHandler:
# TODO(mjark) Check for new redactions in the state events.
with Measure(self.clock, "compute_state_delta"):
# The memberships needed for events in the timeline.
# Only calculated when `lazy_load_members` is on.
members_to_fetch = None
lazy_load_members = sync_config.filter_collection.lazy_load_members()
@ -897,38 +918,46 @@ class SyncHandler:
else:
state_filter = StateFilter.all()
# The contribution to the room state from state events in the timeline.
# Only contains the last event for any given state key.
timeline_state = {
(event.type, event.state_key): event.event_id
for event in batch.events
if event.is_state()
}
# Now calculate the state to return in the sync response for the room.
# This is more or less the change in state between the end of the previous
# sync's timeline and the start of the current sync's timeline.
# See the docstring above for details.
state_ids: StateMap[str]
if full_state:
if batch:
current_state_ids = (
state_at_timeline_end = (
await self._state_storage_controller.get_state_ids_for_event(
batch.events[-1].event_id, state_filter=state_filter
)
)
state_ids = (
state_at_timeline_start = (
await self._state_storage_controller.get_state_ids_for_event(
batch.events[0].event_id, state_filter=state_filter
)
)
else:
current_state_ids = await self.get_state_at(
state_at_timeline_end = await self.get_state_at(
room_id, stream_position=now_token, state_filter=state_filter
)
state_ids = current_state_ids
state_at_timeline_start = state_at_timeline_end
state_ids = _calculate_state(
timeline_contains=timeline_state,
timeline_start=state_ids,
previous={},
current=current_state_ids,
timeline_start=state_at_timeline_start,
timeline_end=state_at_timeline_end,
previous_timeline_end={},
lazy_load_members=lazy_load_members,
)
elif batch.limited:
@ -968,24 +997,23 @@ class SyncHandler:
)
if batch:
current_state_ids = (
state_at_timeline_end = (
await self._state_storage_controller.get_state_ids_for_event(
batch.events[-1].event_id, state_filter=state_filter
)
)
else:
# Its not clear how we get here, but empirically we do
# (#5407). Logging has been added elsewhere to try and
# figure out where this state comes from.
current_state_ids = await self.get_state_at(
# We can get here if the user has ignored the senders of all
# the recent events.
state_at_timeline_end = await self.get_state_at(
room_id, stream_position=now_token, state_filter=state_filter
)
state_ids = _calculate_state(
timeline_contains=timeline_state,
timeline_start=state_at_timeline_start,
previous=state_at_previous_sync,
current=current_state_ids,
timeline_end=state_at_timeline_end,
previous_timeline_end=state_at_previous_sync,
# we have to include LL members in case LL initial sync missed them
lazy_load_members=lazy_load_members,
)
@ -1010,6 +1038,13 @@ class SyncHandler:
),
)
# At this point, if `lazy_load_members` is enabled, `state_ids` includes
# the memberships of all event senders in the timeline. This is because we
# may not have sent the memberships in a previous sync.
# When `include_redundant_members` is on, we send all the lazy-loaded
# memberships of event senders. Otherwise we make an effort to limit the set
# of memberships we send to those that we have not already sent to this client.
if lazy_load_members and not include_redundant_members:
cache_key = (sync_config.user.to_string(), sync_config.device_id)
cache = self.get_lazy_loaded_members_cache(cache_key)
@ -2216,8 +2251,8 @@ def _action_has_highlight(actions: List[JsonDict]) -> bool:
def _calculate_state(
timeline_contains: StateMap[str],
timeline_start: StateMap[str],
previous: StateMap[str],
current: StateMap[str],
timeline_end: StateMap[str],
previous_timeline_end: StateMap[str],
lazy_load_members: bool,
) -> StateMap[str]:
"""Works out what state to include in a sync response.
@ -2225,45 +2260,50 @@ def _calculate_state(
Args:
timeline_contains: state in the timeline
timeline_start: state at the start of the timeline
previous: state at the end of the previous sync (or empty dict
timeline_end: state at the end of the timeline
previous_timeline_end: state at the end of the previous sync (or empty dict
if this is an initial sync)
current: state at the end of the timeline
lazy_load_members: whether to return members from timeline_start
or not. assumes that timeline_start has already been filtered to
include only the members the client needs to know about.
"""
event_id_to_key = {
e: key
for key, e in itertools.chain(
event_id_to_state_key = {
event_id: state_key
for state_key, event_id in itertools.chain(
timeline_contains.items(),
previous.items(),
timeline_start.items(),
current.items(),
timeline_end.items(),
previous_timeline_end.items(),
)
}
c_ids = set(current.values())
ts_ids = set(timeline_start.values())
p_ids = set(previous.values())
tc_ids = set(timeline_contains.values())
timeline_end_ids = set(timeline_end.values())
timeline_start_ids = set(timeline_start.values())
previous_timeline_end_ids = set(previous_timeline_end.values())
timeline_contains_ids = set(timeline_contains.values())
# If we are lazyloading room members, we explicitly add the membership events
# for the senders in the timeline into the state block returned by /sync,
# as we may not have sent them to the client before. We find these membership
# events by filtering them out of timeline_start, which has already been filtered
# to only include membership events for the senders in the timeline.
# In practice, we can do this by removing them from the p_ids list,
# which is the list of relevant state we know we have already sent to the client.
# In practice, we can do this by removing them from the previous_timeline_end_ids
# list, which is the list of relevant state we know we have already sent to the
# client.
# see https://github.com/matrix-org/synapse/pull/2970/files/efcdacad7d1b7f52f879179701c7e0d9b763511f#r204732809
if lazy_load_members:
p_ids.difference_update(
previous_timeline_end_ids.difference_update(
e for t, e in timeline_start.items() if t[0] == EventTypes.Member
)
state_ids = ((c_ids | ts_ids) - p_ids) - tc_ids
state_ids = (
(timeline_end_ids | timeline_start_ids)
- previous_timeline_end_ids
- timeline_contains_ids
)
return {event_id_to_key[e]: e for e in state_ids}
return {event_id_to_state_key[e]: e for e in state_ids}
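# Worked example (hypothetical event IDs): suppose the previous sync ended at
# state {("m.room.name", ""): "$n1"}, a topic event "$t1" was set in the gap
# before the current timeline, and the name was changed to "$n2" inside the
# timeline. Then:
#   timeline_start        = {name: "$n1", topic: "$t1"}
#   timeline_end          = {name: "$n2", topic: "$t1"}
#   timeline_contains     = {name: "$n2"}
#   previous_timeline_end = {name: "$n1"}
# and the set arithmetic above yields {"$t1"}: the client already has "$n1" from
# the previous sync and will see "$n2" in the timeline, so only the topic event
# needs to go in the state block.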
@attr.s(slots=True, auto_attribs=True)


@ -23,9 +23,12 @@ from typing import (
Optional,
Sequence,
Tuple,
Type,
TypeVar,
overload,
)
from pydantic import BaseModel, ValidationError
from typing_extensions import Literal
from twisted.web.server import Request
@ -694,6 +697,28 @@ def parse_json_object_from_request(
return content
Model = TypeVar("Model", bound=BaseModel)
def parse_and_validate_json_object_from_request(
request: Request, model_type: Type[Model]
) -> Model:
"""Parse a JSON object from the body of a twisted HTTP request, then deserialise and
validate using the given pydantic model.
Raises:
SynapseError if the request body couldn't be decoded as JSON or
if it wasn't a JSON object.
"""
content = parse_json_object_from_request(request, allow_empty_body=False)
try:
instance = model_type.parse_obj(content)
except ValidationError as e:
raise SynapseError(HTTPStatus.BAD_REQUEST, str(e), errcode=Codes.BAD_JSON)
return instance
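# Example (sketch, hypothetical model): a servlet can declare the shape of the
# body it expects and validate it in one call, e.g.
#
#     class DeactivateBody(BaseModel):
#         erase: bool = False
#
#     body = parse_and_validate_json_object_from_request(request, DeactivateBody)
#
# A body that fails validation is rejected with a 400 / M_BAD_JSON error, as above.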
def assert_params_in_dict(body: JsonDict, required: Iterable[str]) -> None:
absent = []
for k in required:


@ -173,6 +173,7 @@ from typing import (
Any,
Callable,
Collection,
ContextManager,
Dict,
Generator,
Iterable,
@ -309,6 +310,19 @@ class SynapseTags:
# The name of the external cache
CACHE_NAME = "cache.name"
# Used to tag function arguments
#
# Tag a named arg. The name of the argument should be appended to this prefix.
FUNC_ARG_PREFIX = "ARG."
# Tag extra variadic number of positional arguments (`def foo(first, second, *extras)`)
FUNC_ARGS = "args"
# Tag keyword args
FUNC_KWARGS = "kwargs"
# Some intermediate result that's interesting to the function. The label for
# the result should be appended to this prefix.
RESULT_PREFIX = "RESULT."
class SynapseBaggage:
FORCE_TRACING = "synapse-force-tracing"
@ -823,75 +837,117 @@ def extract_text_map(carrier: Dict[str, str]) -> Optional["opentracing.SpanConte
# Tracing decorators
def trace_with_opname(opname: str) -> Callable[[Callable[P, R]], Callable[P, R]]:
def _custom_sync_async_decorator(
func: Callable[P, R],
wrapping_logic: Callable[[Callable[P, R], Any, Any], ContextManager[None]],
) -> Callable[P, R]:
"""
Decorates a function that is sync or async (coroutines), or that returns a Twisted
`Deferred`. The custom business logic of the decorator goes in `wrapping_logic`.
Example usage:
```py
# Decorator to time the function and log it out
def duration(func: Callable[P, R]) -> Callable[P, R]:
@contextlib.contextmanager
def _wrapping_logic(func: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> Generator[None, None, None]:
start_ts = time.time()
try:
yield
finally:
end_ts = time.time()
duration = end_ts - start_ts
logger.info("%s took %s seconds", func.__name__, duration)
return _custom_sync_async_decorator(func, _wrapping_logic)
```
Args:
func: The function to be decorated
wrapping_logic: The business logic of your custom decorator.
This should be a ContextManager so you are able to run your logic
before/after the function as desired.
"""
if inspect.iscoroutinefunction(func):
@wraps(func)
async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
with wrapping_logic(func, *args, **kwargs):
return await func(*args, **kwargs) # type: ignore[misc]
else:
# The other case here handles both sync functions and those
# decorated with inlineDeferred.
@wraps(func)
def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
scope = wrapping_logic(func, *args, **kwargs)
scope.__enter__()
try:
result = func(*args, **kwargs)
if isinstance(result, defer.Deferred):
def call_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
def err_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
result.addCallbacks(call_back, err_back)
else:
if inspect.isawaitable(result):
logger.error(
"@trace may not have wrapped %s correctly! "
"The function is not async but returned a %s.",
func.__qualname__,
type(result).__name__,
)
scope.__exit__(None, None, None)
return result
except Exception as e:
scope.__exit__(type(e), None, e.__traceback__)
raise
return _wrapper # type: ignore[return-value]
def trace_with_opname(
opname: str,
*,
tracer: Optional["opentracing.Tracer"] = None,
) -> Callable[[Callable[P, R]], Callable[P, R]]:
"""
Decorator to trace a function with a custom opname.
See the module's doc string for usage examples.
"""
def decorator(func: Callable[P, R]) -> Callable[P, R]:
if opentracing is None:
return func # type: ignore[unreachable]
# type-ignore: mypy bug, see https://github.com/python/mypy/issues/12909
@contextlib.contextmanager # type: ignore[arg-type]
def _wrapping_logic(
func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
) -> Generator[None, None, None]:
with start_active_span(opname, tracer=tracer):
yield
if inspect.iscoroutinefunction(func):
def _decorator(func: Callable[P, R]) -> Callable[P, R]:
if not opentracing:
return func
@wraps(func)
async def _trace_inner(*args: P.args, **kwargs: P.kwargs) -> R:
with start_active_span(opname):
return await func(*args, **kwargs) # type: ignore[misc]
return _custom_sync_async_decorator(func, _wrapping_logic)
else:
# The other case here handles both sync functions and those
# decorated with inlineDeferred.
@wraps(func)
def _trace_inner(*args: P.args, **kwargs: P.kwargs) -> R:
scope = start_active_span(opname)
scope.__enter__()
try:
result = func(*args, **kwargs)
if isinstance(result, defer.Deferred):
def call_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
def err_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
result.addCallbacks(call_back, err_back)
else:
if inspect.isawaitable(result):
logger.error(
"@trace may not have wrapped %s correctly! "
"The function is not async but returned a %s.",
func.__qualname__,
type(result).__name__,
)
scope.__exit__(None, None, None)
return result
except Exception as e:
scope.__exit__(type(e), None, e.__traceback__)
raise
return _trace_inner # type: ignore[return-value]
return decorator
return _decorator
def trace(func: Callable[P, R]) -> Callable[P, R]:
"""
Decorator to trace a function.
Sets the operation name to that of the function's name.
See the module's doc string for usage examples.
"""
@ -900,7 +956,7 @@ def trace(func: Callable[P, R]) -> Callable[P, R]:
def tag_args(func: Callable[P, R]) -> Callable[P, R]:
"""
Tags all of the args to the active span.
Decorator to tag all of the args to the active span.
Args:
func: `func` is assumed to be a method taking a `self` parameter, or a
@ -911,22 +967,25 @@ def tag_args(func: Callable[P, R]) -> Callable[P, R]:
if not opentracing:
return func
@wraps(func)
def _tag_args_inner(*args: P.args, **kwargs: P.kwargs) -> R:
# type-ignore: mypy bug, see https://github.com/python/mypy/issues/12909
@contextlib.contextmanager # type: ignore[arg-type]
def _wrapping_logic(
func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
) -> Generator[None, None, None]:
argspec = inspect.getfullargspec(func)
# We use `[1:]` to skip the `self` object reference and `start=1` to
# make the index line up with `argspec.args`.
#
# FIXME: We could update this handle any type of function by ignoring the
# FIXME: We could update this to handle any type of function by ignoring the
# first argument only if it's named `self` or `cls`. This isn't fool-proof
# but handles the idiomatic cases.
for i, arg in enumerate(args[1:], start=1): # type: ignore[index]
set_tag("ARG_" + argspec.args[i], str(arg))
set_tag("args", str(args[len(argspec.args) :])) # type: ignore[index]
set_tag("kwargs", str(kwargs))
return func(*args, **kwargs)
set_tag(SynapseTags.FUNC_ARG_PREFIX + argspec.args[i], str(arg))
set_tag(SynapseTags.FUNC_ARGS, str(args[len(argspec.args) :])) # type: ignore[index]
set_tag(SynapseTags.FUNC_KWARGS, str(kwargs))
yield
return _tag_args_inner
return _custom_sync_async_decorator(func, _wrapping_logic)
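# Example (illustrative): because both decorators are now built on
# `_custom_sync_async_decorator`, they can be stacked on the same sync or async
# function, as is done on `FederationClient.backfill` above:
#
#     @trace
#     @tag_args
#     async def backfill(
#         self, dest: str, room_id: str, limit: int, extremities: Collection[str]
#     ) -> Optional[List[EventBase]]:
#         ...
#
# This opens a span named "backfill" and tags ARG.dest, ARG.room_id, etc. on it.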
@contextlib.contextmanager


@ -14,128 +14,224 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from typing import Any, Dict, List
"""
Push rules is the system used to determine which events trigger a push (and a
bump in notification counts).
from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
This consists of a list of "push rules" for each user, where a push rule is a
pair of "conditions" and "actions". When a user receives an event Synapse
iterates over the list of push rules until it finds one where all the conditions
match the event, at which point "actions" describe the outcome (e.g. notify,
highlight, etc).
Push rules are split up into 5 different "kinds" (aka "priority classes"), which
are run in order:
1. Override: highest priority rules, e.g. always ignore notices
2. Content: content specific rules, e.g. @ notifications
3. Room: per room rules, e.g. enable/disable notifications for all messages
in a room
4. Sender: per sender rules, e.g. never notify for messages from a given
user
5. Underride: the lowest priority "default" rules, e.g. notify for every
message.
The set of "base rules" are the list of rules that every user has by default. A
user can modify their copy of the push rules in one of three ways:
1. Adding a new push rule of a certain kind
2. Changing the actions of a base rule
3. Enabling/disabling a base rule.
The base rules are split into whether they come before or after a particular
kind, so the order of push rule evaluation would be: base rules for before
"override" kind, user defined "override" rules, base rules after "override"
kind, etc, etc.
"""
import itertools
from typing import Dict, Iterator, List, Mapping, Sequence, Tuple, Union
import attr
from synapse.config.experimental import ExperimentalConfig
from synapse.push.rulekinds import PRIORITY_CLASS_MAP
def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Combine the list of rules set by the user with the default push rules
@attr.s(auto_attribs=True, slots=True, frozen=True)
class PushRule:
"""A push rule
Args:
rawrules: The rules the user has modified or set.
Returns:
A new list with the rules set by the user combined with the defaults.
Attributes:
rule_id: a unique ID for this rule
priority_class: what "kind" of push rule this is (see
`PRIORITY_CLASS_MAP` for mapping between int and kind)
conditions: the sequence of conditions that all need to match
actions: the actions to apply if all conditions are met
default: is this a base rule?
default_enabled: is this enabled by default?
"""
ruleslist = []
# Grab the base rules that the user has modified.
# The modified base rules have a priority_class of -1.
modified_base_rules = {r["rule_id"]: r for r in rawrules if r["priority_class"] < 0}
rule_id: str
priority_class: int
conditions: Sequence[Mapping[str, str]]
actions: Sequence[Union[str, Mapping]]
default: bool = False
default_enabled: bool = True
# Remove the modified base rules from the list, They'll be added back
# in the default positions in the list.
rawrules = [r for r in rawrules if r["priority_class"] >= 0]
# shove the server default rules for each kind onto the end of each
current_prio_class = list(PRIORITY_CLASS_INVERSE_MAP)[-1]
@attr.s(auto_attribs=True, slots=True, frozen=True, weakref_slot=False)
class PushRules:
"""A collection of push rules for an account.
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
Can be iterated over, producing push rules in priority order.
"""
# A mapping from rule ID to push rule that overrides a base rule. These will
# be returned instead of the base rule.
overriden_base_rules: Dict[str, PushRule] = attr.Factory(dict)
# The following stores the custom push rules at each priority class.
#
# We keep these separate (rather than combining into one big list) to avoid
# copying the base rules around all the time.
override: List[PushRule] = attr.Factory(list)
content: List[PushRule] = attr.Factory(list)
room: List[PushRule] = attr.Factory(list)
sender: List[PushRule] = attr.Factory(list)
underride: List[PushRule] = attr.Factory(list)
def __iter__(self) -> Iterator[PushRule]:
# When iterating over the push rules we need to return the base rules
# interspersed at the correct spots.
for rule in itertools.chain(
BASE_PREPEND_OVERRIDE_RULES,
self.override,
BASE_APPEND_OVERRIDE_RULES,
self.content,
BASE_APPEND_CONTENT_RULES,
self.room,
self.sender,
self.underride,
BASE_APPEND_UNDERRIDE_RULES,
):
# Check if a base rule has been overriden by a custom rule. If so
# return that instead.
override_rule = self.overriden_base_rules.get(rule.rule_id)
if override_rule:
yield override_rule
else:
yield rule
def __len__(self) -> int:
# The length is mostly used by caches to get a sense of "size" / amount
# of memory this object is using, so we only count the number of custom
# rules.
return (
len(self.overriden_base_rules)
+ len(self.override)
+ len(self.content)
+ len(self.room)
+ len(self.sender)
+ len(self.underride)
)
)
for r in rawrules:
if r["priority_class"] < current_prio_class:
while r["priority_class"] < current_prio_class:
ruleslist.extend(
make_base_append_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
)
)
current_prio_class -= 1
if current_prio_class > 0:
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
)
)
ruleslist.append(r)
@attr.s(auto_attribs=True, slots=True, frozen=True, weakref_slot=False)
class FilteredPushRules:
"""A wrapper around `PushRules` that filters out disabled experimental push
rules, and includes the "enabled" state for each rule when iterated over.
"""
while current_prio_class > 0:
ruleslist.extend(
make_base_append_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
)
)
current_prio_class -= 1
if current_prio_class > 0:
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
)
push_rules: PushRules
enabled_map: Dict[str, bool]
experimental_config: ExperimentalConfig
def __iter__(self) -> Iterator[Tuple[PushRule, bool]]:
for rule in self.push_rules:
if not _is_experimental_rule_enabled(
rule.rule_id, self.experimental_config
):
continue
enabled = self.enabled_map.get(rule.rule_id, rule.default_enabled)
yield rule, enabled
def __len__(self) -> int:
return len(self.push_rules)
DEFAULT_EMPTY_PUSH_RULES = PushRules()
def compile_push_rules(rawrules: List[PushRule]) -> PushRules:
"""Given a set of custom push rules return a `PushRules` instance (which
includes the base rules).
"""
if not rawrules:
# Fast path to avoid allocating empty lists when there are no custom
# rules for the user.
return DEFAULT_EMPTY_PUSH_RULES
rules = PushRules()
for rule in rawrules:
# We need to decide which bucket each custom push rule goes into.
# If it has the same ID as a base rule then it overrides that...
overriden_base_rule = BASE_RULES_BY_ID.get(rule.rule_id)
if overriden_base_rule:
rules.overriden_base_rules[rule.rule_id] = attr.evolve(
overriden_base_rule, actions=rule.actions
)
continue
return ruleslist
# ... otherwise it gets added to the appropriate priority class bucket
collection: List[PushRule]
if rule.priority_class == 5:
collection = rules.override
elif rule.priority_class == 4:
collection = rules.content
elif rule.priority_class == 3:
collection = rules.room
elif rule.priority_class == 2:
collection = rules.sender
elif rule.priority_class == 1:
collection = rules.underride
else:
raise Exception(f"Unknown priority class: {rule.priority_class}")
def make_base_append_rules(
kind: str, modified_base_rules: Dict[str, Dict[str, Any]]
) -> List[Dict[str, Any]]:
rules = []
if kind == "override":
rules = BASE_APPEND_OVERRIDE_RULES
elif kind == "underride":
rules = BASE_APPEND_UNDERRIDE_RULES
elif kind == "content":
rules = BASE_APPEND_CONTENT_RULES
# Copy the rules before modifying them
rules = copy.deepcopy(rules)
for r in rules:
# Only modify the actions, keep the conditions the same.
assert isinstance(r["rule_id"], str)
modified = modified_base_rules.get(r["rule_id"])
if modified:
r["actions"] = modified["actions"]
collection.append(rule)
return rules
def make_base_prepend_rules(
kind: str,
modified_base_rules: Dict[str, Dict[str, Any]],
) -> List[Dict[str, Any]]:
rules = []
if kind == "override":
rules = BASE_PREPEND_OVERRIDE_RULES
# Copy the rules before modifying them
rules = copy.deepcopy(rules)
for r in rules:
# Only modify the actions, keep the conditions the same.
assert isinstance(r["rule_id"], str)
modified = modified_base_rules.get(r["rule_id"])
if modified:
r["actions"] = modified["actions"]
return rules
def _is_experimental_rule_enabled(
rule_id: str, experimental_config: ExperimentalConfig
) -> bool:
"""Used by `FilteredPushRules` to filter out experimental rules when they
have not been enabled.
"""
if (
rule_id == "global/override/.org.matrix.msc3786.rule.room.server_acl"
and not experimental_config.msc3786_enabled
):
return False
if (
rule_id == "global/underride/.org.matrix.msc3772.thread_reply"
and not experimental_config.msc3772_enabled
):
return False
return True
# We have to annotate these types, otherwise mypy infers them as
# `List[Dict[str, Sequence[Collection[str]]]]`.
BASE_APPEND_CONTENT_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/content/.m.rule.contains_user_name",
"conditions": [
BASE_APPEND_CONTENT_RULES = [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["content"],
rule_id="global/content/.m.rule.contains_user_name",
conditions=[
{
"kind": "event_match",
"key": "content.body",
@ -143,29 +239,33 @@ BASE_APPEND_CONTENT_RULES: List[Dict[str, Any]] = [
"pattern_type": "user_localpart",
}
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight"},
],
}
)
]
BASE_PREPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/override/.m.rule.master",
"enabled": False,
"conditions": [],
"actions": ["dont_notify"],
}
BASE_PREPEND_OVERRIDE_RULES = [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.master",
default_enabled=False,
conditions=[],
actions=["dont_notify"],
)
]
BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/override/.m.rule.suppress_notices",
"conditions": [
BASE_APPEND_OVERRIDE_RULES = [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.suppress_notices",
conditions=[
{
"kind": "event_match",
"key": "content.msgtype",
@ -173,13 +273,15 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_suppress_notices",
}
],
"actions": ["dont_notify"],
},
actions=["dont_notify"],
),
# NB. .m.rule.invite_for_me must be higher prio than .m.rule.member_event
# otherwise invites will be matched by .m.rule.member_event
{
"rule_id": "global/override/.m.rule.invite_for_me",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.invite_for_me",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -195,21 +297,23 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
# Match the requester's MXID.
{"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight", "value": False},
],
},
),
# Will we sometimes want to know about people joining and leaving?
# Perhaps: if so, this could be expanded upon. Seems the most usual case
# is that we don't though. We add this override rule so that even if
# the room rule is set to notify, we don't get notifications about
# join/leave/avatar/displayname events.
# See also: https://matrix.org/jira/browse/SYN-607
{
"rule_id": "global/override/.m.rule.member_event",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.member_event",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -217,24 +321,28 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_member",
}
],
"actions": ["dont_notify"],
},
actions=["dont_notify"],
),
# This was changed from underride to override so it's closer in priority
# to the content rules where the user name highlight rule lives. This
# way a room rule is lower priority than both but a custom override rule
# is higher priority than both.
{
"rule_id": "global/override/.m.rule.contains_display_name",
"conditions": [{"kind": "contains_display_name"}],
"actions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.contains_display_name",
conditions=[{"kind": "contains_display_name"}],
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight"},
],
},
{
"rule_id": "global/override/.m.rule.roomnotif",
"conditions": [
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.roomnotif",
conditions=[
{
"kind": "event_match",
"key": "content.body",
@ -247,11 +355,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_roomnotif_pl",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": True}],
},
{
"rule_id": "global/override/.m.rule.tombstone",
"conditions": [
actions=["notify", {"set_tweak": "highlight", "value": True}],
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.tombstone",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -265,11 +375,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_tombstone_statekey",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": True}],
},
{
"rule_id": "global/override/.m.rule.reaction",
"conditions": [
actions=["notify", {"set_tweak": "highlight", "value": True}],
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.reaction",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -277,14 +389,16 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_reaction",
}
],
"actions": ["dont_notify"],
},
actions=["dont_notify"],
),
# XXX: This is an experimental rule that is only enabled if msc3786_enabled
# is enabled, if it is not the rule gets filtered out in _load_rules() in
# PushRulesWorkerStore
{
"rule_id": "global/override/.org.matrix.msc3786.rule.room.server_acl",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.org.matrix.msc3786.rule.room.server_acl",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -298,15 +412,17 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_room_server_acl_state_key",
},
],
"actions": [],
},
actions=[],
),
]
BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/underride/.m.rule.call",
"conditions": [
BASE_APPEND_UNDERRIDE_RULES = [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.call",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -314,17 +430,19 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_call",
}
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "ring"},
{"set_tweak": "highlight", "value": False},
],
},
),
# XXX: once m.direct is standardised everywhere, we should use it to detect
# a DM from the user's perspective rather than this heuristic.
{
"rule_id": "global/underride/.m.rule.room_one_to_one",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.room_one_to_one",
conditions=[
{"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
{
"kind": "event_match",
@ -333,17 +451,19 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_message",
},
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight", "value": False},
],
},
),
# XXX: this is going to fire for events which aren't m.room.messages
# but are encrypted (e.g. m.call.*)...
{
"rule_id": "global/underride/.m.rule.encrypted_room_one_to_one",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.encrypted_room_one_to_one",
conditions=[
{"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
{
"kind": "event_match",
@ -352,15 +472,17 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_encrypted",
},
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight", "value": False},
],
},
{
"rule_id": "global/underride/.org.matrix.msc3772.thread_reply",
"conditions": [
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.org.matrix.msc3772.thread_reply",
conditions=[
{
"kind": "org.matrix.msc3772.relation_match",
"rel_type": "m.thread",
@ -368,11 +490,13 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"sender_type": "user_id",
}
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
{
"rule_id": "global/underride/.m.rule.message",
"conditions": [
actions=["notify", {"set_tweak": "highlight", "value": False}],
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.message",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -380,13 +504,15 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_message",
}
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
actions=["notify", {"set_tweak": "highlight", "value": False}],
),
# XXX: this is going to fire for events which aren't m.room.messages
# but are encrypted (e.g. m.call.*)...
{
"rule_id": "global/underride/.m.rule.encrypted",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.encrypted",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -394,11 +520,13 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_encrypted",
}
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
{
"rule_id": "global/underride/.im.vector.jitsi",
"conditions": [
actions=["notify", {"set_tweak": "highlight", "value": False}],
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.im.vector.jitsi",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -418,29 +546,27 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_is_state_event",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
actions=["notify", {"set_tweak": "highlight", "value": False}],
),
]
BASE_RULE_IDS = set()
BASE_RULES_BY_ID: Dict[str, PushRule] = {}
for r in BASE_APPEND_CONTENT_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["content"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
BASE_RULE_IDS.add(r.rule_id)
BASE_RULES_BY_ID[r.rule_id] = r
for r in BASE_PREPEND_OVERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
BASE_RULE_IDS.add(r.rule_id)
BASE_RULES_BY_ID[r.rule_id] = r
for r in BASE_APPEND_OVERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
BASE_RULE_IDS.add(r.rule_id)
BASE_RULES_BY_ID[r.rule_id] = r
for r in BASE_APPEND_UNDERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
BASE_RULE_IDS.add(r.rule_id)
BASE_RULES_BY_ID[r.rule_id] = r

View File

@ -15,7 +15,18 @@
import itertools
import logging
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple, Union
from typing import (
TYPE_CHECKING,
Collection,
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
Tuple,
Union,
)
from prometheus_client import Counter
@ -30,6 +41,7 @@ from synapse.util.caches import register_cache
from synapse.util.metrics import measure_func
from synapse.visibility import filter_event_for_clients_with_state
from .baserules import FilteredPushRules, PushRule
from .push_rule_evaluator import PushRuleEvaluatorForEvent
if TYPE_CHECKING:
@ -112,7 +124,7 @@ class BulkPushRuleEvaluator:
async def _get_rules_for_event(
self,
event: EventBase,
) -> Dict[str, List[Dict[str, Any]]]:
) -> Dict[str, FilteredPushRules]:
"""Get the push rules for all users who may need to be notified about
the event.
@ -186,7 +198,7 @@ class BulkPushRuleEvaluator:
return pl_event.content if pl_event else {}, sender_level
async def _get_mutual_relations(
self, event: EventBase, rules: Iterable[Dict[str, Any]]
self, event: EventBase, rules: Iterable[Tuple[PushRule, bool]]
) -> Dict[str, Set[Tuple[str, str]]]:
"""
Fetch event metadata for events which related to the same event as the given event.
@ -216,12 +228,11 @@ class BulkPushRuleEvaluator:
# Pre-filter to figure out which relation types are interesting.
rel_types = set()
for rule in rules:
# Skip disabled rules.
if "enabled" in rule and not rule["enabled"]:
for rule, enabled in rules:
if not enabled:
continue
for condition in rule["conditions"]:
for condition in rule.conditions:
if condition["kind"] != "org.matrix.msc3772.relation_match":
continue
@ -254,7 +265,7 @@ class BulkPushRuleEvaluator:
count_as_unread = _should_count_as_unread(event, context)
rules_by_user = await self._get_rules_for_event(event)
actions_by_user: Dict[str, List[Union[dict, str]]] = {}
actions_by_user: Dict[str, Collection[Union[Mapping, str]]] = {}
room_member_count = await self.store.get_number_joined_users_in_room(
event.room_id
@ -317,15 +328,13 @@ class BulkPushRuleEvaluator:
# current user, it'll be added to the dict later.
actions_by_user[uid] = []
for rule in rules:
if "enabled" in rule and not rule["enabled"]:
for rule, enabled in rules:
if not enabled:
continue
matches = evaluator.check_conditions(
rule["conditions"], uid, display_name
)
matches = evaluator.check_conditions(rule.conditions, uid, display_name)
if matches:
actions = [x for x in rule["actions"] if x != "dont_notify"]
actions = [x for x in rule.actions if x != "dont_notify"]
if actions and "notify" in actions:
# Push rules say we should notify the user of this event
actions_by_user[uid] = actions

View File

@ -18,16 +18,15 @@ from typing import Any, Dict, List, Optional
from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
from synapse.types import UserID
from .baserules import FilteredPushRules, PushRule
def format_push_rules_for_user(
user: UserID, ruleslist: List
user: UserID, ruleslist: FilteredPushRules
) -> Dict[str, Dict[str, list]]:
"""Converts a list of rawrules and a enabled map into nested dictionaries
to match the Matrix client-server format for push rules"""
# We're going to be mutating this a lot, so do a deep copy
ruleslist = copy.deepcopy(ruleslist)
rules: Dict[str, Dict[str, List[Dict[str, Any]]]] = {
"global": {},
"device": {},
@ -35,11 +34,30 @@ def format_push_rules_for_user(
rules["global"] = _add_empty_priority_class_arrays(rules["global"])
for r in ruleslist:
template_name = _priority_class_to_template_name(r["priority_class"])
for r, enabled in ruleslist:
template_name = _priority_class_to_template_name(r.priority_class)
rulearray = rules["global"][template_name]
template_rule = _rule_to_template(r)
if not template_rule:
continue
rulearray.append(template_rule)
template_rule["enabled"] = enabled
if "conditions" not in template_rule:
# Not all formatted rules have explicit conditions, e.g. "room"
# rules omit them as they can be derived from the kind and rule ID.
#
# If the formatted rule has no conditions then we can skip the
# formatting of conditions.
continue
# Remove internal stuff.
for c in r["conditions"]:
template_rule["conditions"] = copy.deepcopy(template_rule["conditions"])
for c in template_rule["conditions"]:
c.pop("_cache_key", None)
pattern_type = c.pop("pattern_type", None)
@ -52,16 +70,6 @@ def format_push_rules_for_user(
if sender_type == "user_id":
c["sender"] = user.to_string()
rulearray = rules["global"][template_name]
template_rule = _rule_to_template(r)
if template_rule:
if "enabled" in r:
template_rule["enabled"] = r["enabled"]
else:
template_rule["enabled"] = True
rulearray.append(template_rule)
return rules
@ -71,24 +79,24 @@ def _add_empty_priority_class_arrays(d: Dict[str, list]) -> Dict[str, list]:
return d
def _rule_to_template(rule: Dict[str, Any]) -> Optional[Dict[str, Any]]:
unscoped_rule_id = None
if "rule_id" in rule:
unscoped_rule_id = _rule_id_from_namespaced(rule["rule_id"])
def _rule_to_template(rule: PushRule) -> Optional[Dict[str, Any]]:
templaterule: Dict[str, Any]
template_name = _priority_class_to_template_name(rule["priority_class"])
unscoped_rule_id = _rule_id_from_namespaced(rule.rule_id)
template_name = _priority_class_to_template_name(rule.priority_class)
if template_name in ["override", "underride"]:
templaterule = {k: rule[k] for k in ["conditions", "actions"]}
templaterule = {"conditions": rule.conditions, "actions": rule.actions}
elif template_name in ["sender", "room"]:
templaterule = {"actions": rule["actions"]}
unscoped_rule_id = rule["conditions"][0]["pattern"]
templaterule = {"actions": rule.actions}
unscoped_rule_id = rule.conditions[0]["pattern"]
elif template_name == "content":
if len(rule["conditions"]) != 1:
if len(rule.conditions) != 1:
return None
thecond = rule["conditions"][0]
thecond = rule.conditions[0]
if "pattern" not in thecond:
return None
templaterule = {"actions": rule["actions"]}
templaterule = {"actions": rule.actions}
templaterule["pattern"] = thecond["pattern"]
else:
# This should not be reached unless this function is not kept in sync
@ -97,8 +105,8 @@ def _rule_to_template(rule: Dict[str, Any]) -> Optional[Dict[str, Any]]:
if unscoped_rule_id:
templaterule["rule_id"] = unscoped_rule_id
if "default" in rule:
templaterule["default"] = rule["default"]
if rule.default:
templaterule["default"] = True
return templaterule

View File

@ -15,7 +15,18 @@
import logging
import re
from typing import Any, Dict, List, Mapping, Optional, Pattern, Set, Tuple, Union
from typing import (
Any,
Dict,
List,
Mapping,
Optional,
Pattern,
Sequence,
Set,
Tuple,
Union,
)
from matrix_common.regex import glob_to_regex, to_word_pattern
@ -32,14 +43,14 @@ INEQUALITY_EXPR = re.compile("^([=<>]*)([0-9]*)$")
def _room_member_count(
ev: EventBase, condition: Dict[str, Any], room_member_count: int
ev: EventBase, condition: Mapping[str, Any], room_member_count: int
) -> bool:
return _test_ineq_condition(condition, room_member_count)
def _sender_notification_permission(
ev: EventBase,
condition: Dict[str, Any],
condition: Mapping[str, Any],
sender_power_level: int,
power_levels: Dict[str, Union[int, Dict[str, int]]],
) -> bool:
@ -54,7 +65,7 @@ def _sender_notification_permission(
return sender_power_level >= room_notif_level
def _test_ineq_condition(condition: Dict[str, Any], number: int) -> bool:
def _test_ineq_condition(condition: Mapping[str, Any], number: int) -> bool:
if "is" not in condition:
return False
m = INEQUALITY_EXPR.match(condition["is"])
@ -137,7 +148,7 @@ class PushRuleEvaluatorForEvent:
self._condition_cache: Dict[str, bool] = {}
def check_conditions(
self, conditions: List[dict], uid: str, display_name: Optional[str]
self, conditions: Sequence[Mapping], uid: str, display_name: Optional[str]
) -> bool:
"""
Returns true if a user's conditions/user ID/display name match the event.
@ -169,7 +180,7 @@ class PushRuleEvaluatorForEvent:
return True
def matches(
self, condition: Dict[str, Any], user_id: str, display_name: Optional[str]
self, condition: Mapping[str, Any], user_id: str, display_name: Optional[str]
) -> bool:
"""
Returns true if a user's condition/user ID/display name match the event.
@ -204,7 +215,7 @@ class PushRuleEvaluatorForEvent:
# endpoint with an unknown kind, see _rule_tuple_from_request_object.
return True
def _event_match(self, condition: dict, user_id: str) -> bool:
def _event_match(self, condition: Mapping, user_id: str) -> bool:
"""
Check an "event_match" push rule condition.
@ -269,7 +280,7 @@ class PushRuleEvaluatorForEvent:
return bool(r.search(body))
def _relation_match(self, condition: dict, user_id: str) -> bool:
def _relation_match(self, condition: Mapping, user_id: str) -> bool:
"""
Check an "relation_match" push rule condition.

View File

@ -1 +1,12 @@
<html><body>Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</body><html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</title>
</head>
<body>
Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.
</body>
</html>

View File

@ -1 +1,12 @@
<html><body>Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</body><html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</title>
</head>
<body>
Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.
</body>
</html>

View File

@ -1,9 +1,14 @@
<html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Request to add an email address to your Matrix account</title>
</head>
<body>
<p>A request to add an email address to your Matrix account has been received. If this was you, please click the link below to confirm adding this email:</p>
<a href="{{ link }}">{{ link }}</a>
<p>If this was not you, you can safely ignore this email. Thank you.</p>
</body>
</html>

View File

@ -1,8 +1,13 @@
<html>
<head></head>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Request failed</title>
</head>
<body>
<p>The request failed for the following reason: {{ failure_reason }}.</p>
<p>No changes have been made to your account.</p>
<p>The request failed for the following reason: {{ failure_reason }}.</p>
<p>No changes have been made to your account.</p>
</body>
</html>

View File

@ -1,6 +1,12 @@
<html>
<head></head>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Your email has now been validated</title>
</head>
<body>
<p>Your email has now been validated, please return to your client. You may now close this window.</p>
<p>Your email has now been validated, please return to your client. You may now close this window.</p>
</body>
</html>
</html>

View File

@ -1,8 +1,8 @@
<html>
<head>
<title>Success!</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
<script>
if (window.onAuthDone) {

View File

@ -1 +1,12 @@
<html><body>Invalid renewal token.</body><html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Invalid renewal token.</title>
</head>
<body>
Invalid renewal token.
</body>
</html>

View File

@ -1,6 +1,8 @@
<!doctype html>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include 'mail.css' without context %}
{% include "mail-%s.css" % app_name ignore missing without context %}

View File

@ -1,6 +1,8 @@
<!doctype html>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{%- include 'mail.css' without context %}
{%- include "mail-%s.css" % app_name ignore missing without context %}

View File

@ -1,4 +1,9 @@
<html>
<html lang="en">
<head>
<title>Password reset</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>A password reset request has been received for your Matrix account. If this was you, please click the link below to confirm resetting your password:</p>

View File

@ -1,5 +1,9 @@
<html>
<head></head>
<html lang="en">
<head>
<title>Password reset confirmation</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<!--Use a hidden form to resubmit the information necessary to reset the password-->
<form method="post">

View File

@ -1,5 +1,9 @@
<html>
<head></head>
<html lang="en">
<head>
<title>Password reset failure</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>The request failed for the following reason: {{ failure_reason }}.</p>

View File

@ -1,5 +1,8 @@
<html>
<head></head>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>Your email has now been validated, please return to your client to reset your password. You may now close this window.</p>
</body>

View File

@ -1,8 +1,8 @@
<html>
<head>
<title>Authentication</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script src="https://www.recaptcha.net/recaptcha/api.js"
async defer></script>
<script src="//code.jquery.com/jquery-1.11.2.min.js"></script>

View File

@ -1,4 +1,9 @@
<html>
<html lang="en">
<head>
<title>Registration</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>You have asked us to register this email with a new Matrix account. If this was you, please click the link below to confirm your email address:</p>

View File

@ -1,5 +1,8 @@
<html>
<head></head>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>Validation failed for the following reason: {{ failure_reason }}.</p>
</body>

View File

@ -1,5 +1,9 @@
<html>
<head></head>
<html lang="en">
<head>
<title>Your email has now been validated</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>Your email has now been validated, please return to your client. You may now close this window.</p>
</body>

View File

@ -1,8 +1,8 @@
<html>
<html lang="en">
<head>
<title>Authentication</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
</head>
<body>

View File

@ -3,8 +3,8 @@
<head>
<meta charset="UTF-8">
<title>SSO account deactivated</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<style type="text/css">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0"> <style type="text/css">
{% include "sso.css" without context %}
</style>
</head>

View File

@ -3,7 +3,8 @@
<head>
<title>Create your account</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script type="text/javascript">
let wasKeyboard = false;
document.addEventListener("mousedown", function() { wasKeyboard = false; });

View File

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Authentication failed</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}
</style>

View File

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Confirm it's you</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}
</style>

View File

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Authentication successful</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}
</style>

View File

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Authentication failed</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}

View File

@ -1,6 +1,8 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta charset="UTF-8">
<title>Choose identity provider</title>
<style type="text/css">

View File

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Agree to terms and conditions</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}

View File

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Continue to your account</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}

View File

@ -1,8 +1,8 @@
<html>
<head>
<title>Authentication</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
</head>
<body>

View File

@ -303,6 +303,7 @@ class RoomRestServlet(RestServlet):
members = await self.store.get_users_in_room(room_id)
ret["joined_local_devices"] = await self.store.count_devices_by_users(members)
ret["forgotten"] = await self.store.is_locally_forgotten_room(room_id)
return HTTPStatus.OK, ret

View File

@ -15,10 +15,11 @@
# limitations under the License.
import logging
import random
from http import HTTPStatus
from typing import TYPE_CHECKING, Optional, Tuple
from urllib.parse import urlparse
from pydantic import StrictBool, StrictStr, constr
from twisted.web.server import Request
from synapse.api.constants import LoginType
@ -34,12 +35,15 @@ from synapse.http.server import HttpServer, finish_request, respond_with_html
from synapse.http.servlet import (
RestServlet,
assert_params_in_dict,
parse_and_validate_json_object_from_request,
parse_json_object_from_request,
parse_string,
)
from synapse.http.site import SynapseRequest
from synapse.metrics import threepid_send_requests
from synapse.push.mailer import Mailer
from synapse.rest.client.models import AuthenticationData, EmailRequestTokenBody
from synapse.rest.models import RequestBodyModel
from synapse.types import JsonDict
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.stringutils import assert_valid_client_secret, random_string
@ -82,32 +86,16 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
400, "Email-based password resets have been disabled on this server"
)
body = parse_json_object_from_request(request)
body = parse_and_validate_json_object_from_request(
request, EmailRequestTokenBody
)
assert_params_in_dict(body, ["client_secret", "email", "send_attempt"])
# Extract params from body
client_secret = body["client_secret"]
assert_valid_client_secret(client_secret)
# Canonicalise the email address. The addresses are all stored canonicalised
# in the database. This allows the user to reset his password without having to
# know the exact spelling (eg. upper and lower case) of address in the database.
# Stored in the database "foo@bar.com"
# User requests with "FOO@bar.com" would raise a Not Found error
try:
email = validate_email(body["email"])
except ValueError as e:
raise SynapseError(400, str(e))
send_attempt = body["send_attempt"]
next_link = body.get("next_link") # Optional param
if next_link:
if body.next_link:
# Raise if the provided next_link value isn't valid
assert_valid_next_link(self.hs, next_link)
assert_valid_next_link(self.hs, body.next_link)
await self.identity_handler.ratelimit_request_token_requests(
request, "email", email
request, "email", body.email
)
# The email will be sent to the stored address.
@ -115,7 +103,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
# an email address which is controlled by the attacker but which, after
# canonicalisation, matches the one in our database.
existing_user_id = await self.hs.get_datastores().main.get_user_id_by_threepid(
"email", email
"email", body.email
)
if existing_user_id is None:
@ -135,26 +123,26 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
# Have the configured identity server handle the request
ret = await self.identity_handler.request_email_token(
self.hs.config.registration.account_threepid_delegate_email,
email,
client_secret,
send_attempt,
next_link,
body.email,
body.client_secret,
body.send_attempt,
body.next_link,
)
else:
# Send password reset emails from Synapse
sid = await self.identity_handler.send_threepid_validation(
email,
client_secret,
send_attempt,
body.email,
body.client_secret,
body.send_attempt,
self.mailer.send_password_reset_mail,
next_link,
body.next_link,
)
# Wrap the session id in a JSON object
ret = {"sid": sid}
threepid_send_requests.labels(type="email", reason="password_reset").observe(
send_attempt
body.send_attempt
)
return 200, ret
@ -172,16 +160,23 @@ class PasswordRestServlet(RestServlet):
self.password_policy_handler = hs.get_password_policy_handler()
self._set_password_handler = hs.get_set_password_handler()
class PostBody(RequestBodyModel):
auth: Optional[AuthenticationData] = None
logout_devices: StrictBool = True
if TYPE_CHECKING:
# workaround for https://github.com/samuelcolvin/pydantic/issues/156
new_password: Optional[StrictStr] = None
else:
new_password: Optional[constr(max_length=512, strict=True)] = None
@interactive_auth_handler
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
body = parse_json_object_from_request(request)
body = parse_and_validate_json_object_from_request(request, self.PostBody)
# we do basic sanity checks here because the auth layer will store these
# in sessions. Pull out the new password provided to us.
new_password = body.pop("new_password", None)
new_password = body.new_password
if new_password is not None:
if not isinstance(new_password, str) or len(new_password) > 512:
raise SynapseError(400, "Invalid password")
self.password_policy_handler.validate_password(new_password)
# there are two possibilities here. Either the user does not have an
@ -201,7 +196,7 @@ class PasswordRestServlet(RestServlet):
params, session_id = await self.auth_handler.validate_user_via_ui_auth(
requester,
request,
body,
body.dict(),
"modify your account password",
)
except InteractiveAuthIncompleteError as e:
@ -224,7 +219,7 @@ class PasswordRestServlet(RestServlet):
result, params, session_id = await self.auth_handler.check_ui_auth(
[[LoginType.EMAIL_IDENTITY]],
request,
body,
body.dict(),
"modify your account password",
)
except InteractiveAuthIncompleteError as e:
@ -299,37 +294,33 @@ class DeactivateAccountRestServlet(RestServlet):
self.auth_handler = hs.get_auth_handler()
self._deactivate_account_handler = hs.get_deactivate_account_handler()
class PostBody(RequestBodyModel):
auth: Optional[AuthenticationData] = None
id_server: Optional[StrictStr] = None
# Not specced, see https://github.com/matrix-org/matrix-spec/issues/297
erase: StrictBool = False
@interactive_auth_handler
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
body = parse_json_object_from_request(request)
erase = body.get("erase", False)
if not isinstance(erase, bool):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Param 'erase' must be a boolean, if given",
Codes.BAD_JSON,
)
body = parse_and_validate_json_object_from_request(request, self.PostBody)
requester = await self.auth.get_user_by_req(request)
# allow ASes to deactivate their own users
if requester.app_service:
await self._deactivate_account_handler.deactivate_account(
requester.user.to_string(), erase, requester
requester.user.to_string(), body.erase, requester
)
return 200, {}
await self.auth_handler.validate_user_via_ui_auth(
requester,
request,
body,
body.dict(),
"deactivate your account",
)
result = await self._deactivate_account_handler.deactivate_account(
requester.user.to_string(),
erase,
requester,
id_server=body.get("id_server"),
requester.user.to_string(), body.erase, requester, id_server=body.id_server
)
if result:
id_server_unbind_result = "success"
@ -364,28 +355,15 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
"Adding emails have been disabled due to lack of an email config"
)
raise SynapseError(
400, "Adding an email to your account is disabled on this server"
400,
"Adding an email to your account is disabled on this server",
)
body = parse_json_object_from_request(request)
assert_params_in_dict(body, ["client_secret", "email", "send_attempt"])
client_secret = body["client_secret"]
assert_valid_client_secret(client_secret)
body = parse_and_validate_json_object_from_request(
request, EmailRequestTokenBody
)
# Canonicalise the email address. The addresses are all stored canonicalised
# in the database.
# This ensures that the validation email is sent to the canonicalised address
# as it will later be entered into the database.
# Otherwise the email will be sent to "FOO@bar.com" and stored as
# "foo@bar.com" in database.
try:
email = validate_email(body["email"])
except ValueError as e:
raise SynapseError(400, str(e))
send_attempt = body["send_attempt"]
next_link = body.get("next_link") # Optional param
if not await check_3pid_allowed(self.hs, "email", email):
if not await check_3pid_allowed(self.hs, "email", body.email):
raise SynapseError(
403,
"Your email domain is not authorized on this server",
@ -393,14 +371,14 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
)
await self.identity_handler.ratelimit_request_token_requests(
request, "email", email
request, "email", body.email
)
if next_link:
if body.next_link:
# Raise if the provided next_link value isn't valid
assert_valid_next_link(self.hs, next_link)
assert_valid_next_link(self.hs, body.next_link)
existing_user_id = await self.store.get_user_id_by_threepid("email", email)
existing_user_id = await self.store.get_user_id_by_threepid("email", body.email)
if existing_user_id is not None:
if self.config.server.request_token_inhibit_3pid_errors:
@ -419,26 +397,26 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
# Have the configured identity server handle the request
ret = await self.identity_handler.request_email_token(
self.hs.config.registration.account_threepid_delegate_email,
email,
client_secret,
send_attempt,
next_link,
body.email,
body.client_secret,
body.send_attempt,
body.next_link,
)
else:
# Send threepid validation emails from Synapse
sid = await self.identity_handler.send_threepid_validation(
email,
client_secret,
send_attempt,
body.email,
body.client_secret,
body.send_attempt,
self.mailer.send_add_threepid_mail,
next_link,
body.next_link,
)
# Wrap the session id in a JSON object
ret = {"sid": sid}
threepid_send_requests.labels(type="email", reason="add_threepid").observe(
send_attempt
body.send_attempt
)
return 200, ret

View File

@ -0,0 +1,69 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, Dict, Optional
from pydantic import Extra, StrictInt, StrictStr, constr, validator
from synapse.rest.models import RequestBodyModel
from synapse.util.threepids import validate_email
class AuthenticationData(RequestBodyModel):
"""
Data used during user-interactive authentication.
(The name "Authentication Data" is taken directly from the spec.)
Additional keys will be present, depending on the `type` field. Use `.dict()` to
access them.
"""
class Config:
extra = Extra.allow
session: Optional[StrictStr] = None
type: Optional[StrictStr] = None
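Because `AuthenticationData` allows extra keys, type-specific fields survive parsing but are only reachable via `.dict()`. A hedged usage sketch (the `org.example.extra` key is invented for illustration):

# Illustrative only; "org.example.extra" is not a real UIA field.
auth = AuthenticationData.parse_obj(
    {"type": "m.login.password", "session": "abc123", "org.example.extra": "kept"}
)
assert auth.type == "m.login.password"
assert auth.dict()["org.example.extra"] == "kept"  # retained because of Extra.allow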
class EmailRequestTokenBody(RequestBodyModel):
if TYPE_CHECKING:
client_secret: StrictStr
else:
# See also assert_valid_client_secret()
client_secret: constr(
regex="[0-9a-zA-Z.=_-]", # noqa: F722
min_length=0,
max_length=255,
strict=True,
)
email: StrictStr
id_server: Optional[StrictStr]
id_access_token: Optional[StrictStr]
next_link: Optional[StrictStr]
send_attempt: StrictInt
@validator("id_access_token", always=True)
def token_required_for_identity_server(
cls, token: Optional[str], values: Dict[str, object]
) -> Optional[str]:
if values.get("id_server") is not None and token is None:
raise ValueError("id_access_token is required if an id_server is supplied.")
return token
# Canonicalise the email address. The addresses are all stored canonicalised
# in the database. This allows the user to reset his password without having to
# know the exact spelling (eg. upper and lower case) of address in the database.
# Without this, an email stored in the database as "foo@bar.com" would cause
# user requests for "FOO@bar.com" to raise a Not Found error.
_email_validator = validator("email", allow_reuse=True)(validate_email)
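A hedged sketch of what the model gives callers, assuming `validate_email` returns the canonicalised (lower-cased) address as the comment above describes; all values are illustrative:

# Illustrative values; assumes validate_email lower-cases the address.
body = EmailRequestTokenBody.parse_obj(
    {"client_secret": "secret123", "email": "FOO@bar.com", "send_attempt": 1}
)
assert body.email == "foo@bar.com"  # canonicalised by the reused validator
assert body.next_link is None       # optional fields default to None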

View File

@ -19,6 +19,8 @@ import re
from typing import TYPE_CHECKING, Awaitable, Dict, List, Optional, Tuple
from urllib import parse as urlparse
from prometheus_client.core import Histogram
from twisted.web.server import Request
from synapse import event_auth
@ -60,6 +62,35 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# This is an extra metric on top of `synapse_http_server_response_time_seconds`
# which times the same sort of thing but this one allows us to see values
# greater than 10s. We use a separate dedicated histogram with its own buckets
# so that we don't increase the cardinality of the general one because it's
# multiplied across hundreds of servlets.
messsages_response_timer = Histogram(
"synapse_room_message_list_rest_servlet_response_time_seconds",
"sec",
[],
buckets=(
0.005,
0.01,
0.025,
0.05,
0.1,
0.25,
0.5,
1.0,
2.5,
5.0,
10.0,
30.0,
60.0,
120.0,
180.0,
"+Inf",
),
)
class TransactionRestServlet(RestServlet):
def __init__(self, hs: "HomeServer"):
@ -560,6 +591,7 @@ class RoomMessageListRestServlet(RestServlet):
self.auth = hs.get_auth()
self.store = hs.get_datastores().main
@messsages_response_timer.time()
async def on_GET(
self, request: SynapseRequest, room_id: str
) -> Tuple[int, JsonDict]:

23
synapse/rest/models.py Normal file
View File

@ -0,0 +1,23 @@
from pydantic import BaseModel, Extra
class RequestBodyModel(BaseModel):
"""A custom version of Pydantic's BaseModel which
- ignores unknown fields and
- does not allow fields to be overwritten after construction,
but otherwise uses Pydantic's default behaviour.
Ignoring unknown fields is a useful default. It means that clients can provide
unstable fields not known to the server without the request being refused outright.
Subclassing in this way is recommended by
https://pydantic-docs.helpmanual.io/usage/model_config/#change-behaviour-globally
"""
class Config:
# By default, ignore fields that we don't recognise.
extra = Extra.ignore
# By default, don't allow fields to be reassigned after parsing.
allow_mutation = False
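For illustration, a hypothetical subclass exercising both behaviours (the `ExampleBody` model and its field are invented, assuming pydantic v1 semantics):

# Hypothetical model, only to demonstrate the Config above.
from typing import Optional

from pydantic import StrictStr

class ExampleBody(RequestBodyModel):
    device_id: Optional[StrictStr] = None

parsed = ExampleBody.parse_obj({"device_id": "ABCDEF", "org.example.unstable": 1})
assert parsed.device_id == "ABCDEF"  # the unknown field was silently ignored (Extra.ignore)
try:
    parsed.device_id = "GHIJKL"
except TypeError:
    pass  # allow_mutation = False forbids reassignment after parsing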

View File

@ -3,7 +3,8 @@
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title> Login </title>
<meta name='viewport' content='width=device-width, initial-scale=1, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="style.css">
<script src="js/jquery-3.4.1.min.js"></script>
<script src="js/login.js"></script>

View File

@ -2,7 +2,8 @@
<html>
<head>
<title> Registration </title>
<meta name='viewport' content='width=device-width, initial-scale=1, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="style.css">
<script src="js/jquery-3.4.1.min.js"></script>
<script src="https://www.recaptcha.net/recaptcha/api/js/recaptcha_ajax.js"></script>

View File

@ -45,8 +45,14 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.logging import opentracing
from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
from synapse.logging.opentracing import (
SynapseTags,
active_span,
set_tag,
start_active_span_follows_from,
trace,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.controllers.state import StateStorageController
from synapse.storage.databases import Databases
@ -223,7 +229,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
queue.append(end_item)
# also add our active opentracing span to the item so that we get a link back
span = opentracing.active_span()
span = active_span()
if span:
end_item.parent_opentracing_span_contexts.append(span.context)
@ -234,7 +240,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
res = await make_deferred_yieldable(end_item.deferred.observe())
# add another opentracing span which links to the persist trace.
with opentracing.start_active_span_follows_from(
with start_active_span_follows_from(
f"{task.name}_complete", (end_item.opentracing_span_context,)
):
pass
@ -266,7 +272,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
queue = self._get_drainining_queue(room_id)
for item in queue:
try:
with opentracing.start_active_span_follows_from(
with start_active_span_follows_from(
item.task.name,
item.parent_opentracing_span_contexts,
inherit_force_tracing=True,
@ -355,7 +361,7 @@ class EventsPersistenceStorageController:
f"Found an unexpected task type in event persistence queue: {task}"
)
@opentracing.trace
@trace
async def persist_events(
self,
events_and_contexts: Iterable[Tuple[EventBase, EventContext]],
@ -380,9 +386,21 @@ class EventsPersistenceStorageController:
PartialStateConflictError: if attempting to persist a partial state event in
a room that has been un-partial stated.
"""
event_ids: List[str] = []
partitioned: Dict[str, List[Tuple[EventBase, EventContext]]] = {}
for event, ctx in events_and_contexts:
partitioned.setdefault(event.room_id, []).append((event, ctx))
event_ids.append(event.event_id)
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids",
str(event_ids),
)
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(event_ids)),
)
set_tag(SynapseTags.FUNC_ARG_PREFIX + "backfilled", str(backfilled))
async def enqueue(
item: Tuple[str, List[Tuple[EventBase, EventContext]]]
@ -418,7 +436,7 @@ class EventsPersistenceStorageController:
self.main_store.get_room_max_token(),
)
@opentracing.trace
@trace
async def persist_event(
self, event: EventBase, context: EventContext, backfilled: bool = False
) -> Tuple[EventBase, PersistedEventPosition, RoomStreamToken]:

View File

@ -29,7 +29,8 @@ from typing import (
from synapse.api.constants import EventTypes
from synapse.events import EventBase
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import tag_args, trace
from synapse.storage.roommember import ProfileInfo
from synapse.storage.state import StateFilter
from synapse.storage.util.partial_state_events_tracker import (
PartialCurrentStateTracker,
@ -228,6 +229,7 @@ class StateStorageController:
return {event: event_to_state[event] for event in event_ids}
@trace
@tag_args
async def get_state_ids_for_events(
self,
event_ids: Collection[str],
@ -332,6 +334,7 @@ class StateStorageController:
)
@trace
@tag_args
async def get_state_group_for_events(
self,
event_ids: Collection[str],
@ -473,6 +476,7 @@ class StateStorageController:
prev_stream_id, max_stream_id
)
@trace
async def get_current_state(
self, room_id: str, state_filter: Optional[StateFilter] = None
) -> StateMap[EventBase]:
@ -506,3 +510,15 @@ class StateStorageController:
await self._partial_state_room_tracker.await_full_state(room_id)
return await self.stores.main.get_current_hosts_in_room(room_id)
async def get_users_in_room_with_profiles(
self, room_id: str
) -> Dict[str, ProfileInfo]:
"""
Get the current users in the room with their profiles.
If the room is currently partial-stated, this will block until the room has
full state.
"""
await self._partial_state_room_tracker.await_full_state(room_id)
return await self.stores.main.get_users_in_room_with_profiles(room_id)

View File

@ -33,6 +33,7 @@ from synapse.api.constants import MAX_DEPTH, EventTypes
from synapse.api.errors import StoreError
from synapse.api.room_versions import EventFormatVersions, RoomVersion
from synapse.events import EventBase, make_event_from_dict
from synapse.logging.opentracing import tag_args, trace
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
from synapse.storage.database import (
@ -126,6 +127,8 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
)
return await self.get_events_as_list(event_ids)
@trace
@tag_args
async def get_auth_chain_ids(
self,
room_id: str,
@ -709,6 +712,8 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
# Return all events where not all sets can reach them.
return {eid for eid, n in event_to_missing_sets.items() if n}
@trace
@tag_args
async def get_oldest_event_ids_with_depth_in_room(
self, room_id: str
) -> List[Tuple[str, int]]:
@ -767,6 +772,7 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
room_id,
)
@trace
async def get_insertion_event_backward_extremities_in_room(
self, room_id: str
) -> List[Tuple[str, int]]:
@ -1339,6 +1345,8 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
event_results.reverse()
return event_results
@trace
@tag_args
async def get_successor_events(self, event_id: str) -> List[str]:
"""Fetch all events that have the given event as a prev event
@ -1375,6 +1383,7 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
_delete_old_forward_extrem_cache_txn,
)
@trace
async def insert_insertion_extremity(self, event_id: str, room_id: str) -> None:
await self.db_pool.simple_upsert(
table="insertion_event_extremities",

View File

@ -74,7 +74,17 @@ receipt.
"""
import logging
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union, cast
from typing import (
TYPE_CHECKING,
Collection,
Dict,
List,
Mapping,
Optional,
Tuple,
Union,
cast,
)
import attr
@ -154,7 +164,9 @@ class NotifCounts:
highlight_count: int = 0
def _serialize_action(actions: List[Union[dict, str]], is_highlight: bool) -> str:
def _serialize_action(
actions: Collection[Union[Mapping, str]], is_highlight: bool
) -> str:
"""Custom serializer for actions. This allows us to "compress" common actions.
We use the fact that most users have the same actions for notifs (and for
@ -227,7 +239,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
user_id: str,
) -> NotifCounts:
"""Get the notification count, the highlight count and the unread message count
for a given user in a given room after the given read receipt.
for a given user in a given room after their latest read receipt.
Note that this function assumes the user to be a current member of the room,
since it's either called by the sync handler to handle joined room entries, or by
@ -238,9 +250,8 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
user_id: The user to retrieve the counts for.
Returns
A dict containing the counts mentioned earlier in this docstring,
respectively under the keys "notify_count", "highlight_count" and
"unread_count".
A NotifCounts object containing the notification count, the highlight count
and the unread message count.
"""
return await self.db_pool.runInteraction(
"get_unread_event_push_actions_by_room",
@ -255,6 +266,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
room_id: str,
user_id: str,
) -> NotifCounts:
# Get the stream ordering of the user's latest receipt in the room.
result = self.get_last_receipt_for_user_txn(
txn,
user_id,
@ -266,13 +278,11 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
),
)
stream_ordering = None
if result:
_, stream_ordering = result
if stream_ordering is None:
# Either last_read_event_id is None, or it's an event we don't have (e.g.
# because it's been purged), in which case retrieve the stream ordering for
else:
# If the user has no receipts in the room, retrieve the stream ordering for
# the latest membership event from this user in this room (which we assume is
# a join).
event_id = self.db_pool.simple_select_one_onecol_txn(
@ -289,10 +299,26 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
)
def _get_unread_counts_by_pos_txn(
self, txn: LoggingTransaction, room_id: str, user_id: str, stream_ordering: int
self,
txn: LoggingTransaction,
room_id: str,
user_id: str,
receipt_stream_ordering: int,
) -> NotifCounts:
"""Get the number of unread messages for a user/room that have happened
since the given stream ordering.
Args:
txn: The database transaction.
room_id: The room ID to get unread counts for.
user_id: The user ID to get unread counts for.
receipt_stream_ordering: The stream ordering of the user's latest
receipt in the room. If there are no receipts, the stream ordering
of the user's join event.
Returns
A NotifCounts object containing the notification count, the highlight count
and the unread message count.
"""
counts = NotifCounts()
@ -320,7 +346,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
OR last_receipt_stream_ordering = ?
)
""",
(room_id, user_id, stream_ordering, stream_ordering),
(room_id, user_id, receipt_stream_ordering, receipt_stream_ordering),
)
row = txn.fetchone()
@ -338,17 +364,20 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
AND stream_ordering > ?
AND highlight = 1
"""
txn.execute(sql, (user_id, room_id, stream_ordering))
txn.execute(sql, (user_id, room_id, receipt_stream_ordering))
row = txn.fetchone()
if row:
counts.highlight_count += row[0]
# Finally we need to count push actions that aren't included in the
# summary returned above, e.g. recent events that haven't been
# summarised yet, or the summary is empty due to a recent read receipt.
stream_ordering = max(stream_ordering, summary_stream_ordering)
# summary returned above. This might be due to recent events that haven't
# been summarised yet, or to the summary being out of date because of a recent
# read receipt.
start_unread_stream_ordering = max(
receipt_stream_ordering, summary_stream_ordering
)
notify_count, unread_count = self._get_notif_unread_count_for_user_room(
txn, room_id, user_id, stream_ordering
txn, room_id, user_id, start_unread_stream_ordering
)
counts.notify_count += notify_count
@ -733,7 +762,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
async def add_push_actions_to_staging(
self,
event_id: str,
user_id_actions: Dict[str, List[Union[dict, str]]],
user_id_actions: Dict[str, Collection[Union[Mapping, str]]],
count_as_unread: bool,
) -> None:
"""Add the push actions for the event to the push action staging area.
@ -750,7 +779,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
# This is a helper function for generating the necessary tuple that
# can be used to insert into the `event_push_actions_staging` table.
def _gen_entry(
user_id: str, actions: List[Union[dict, str]]
user_id: str, actions: Collection[Union[Mapping, str]]
) -> Tuple[str, str, str, int, int, int]:
is_highlight = 1 if _action_has_highlight(actions) else 0
notif = 1 if "notify" in actions else 0
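For context, the type widening in the hunk above (List[Union[dict, str]] to Collection[Union[Mapping, str]]) means these helpers accept any read-only collection of push actions, not just a mutable list of dicts. A hypothetical example value is sketched below; the action shapes follow the usual Matrix push-action format and are not taken from this commit.

# Editor's sketch (not part of this commit): a value satisfying the widened type.
from typing import Collection, Mapping, Union

example_actions: Collection[Union[Mapping, str]] = (
    "notify",
    {"set_tweak": "highlight", "value": True},
)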
@ -1151,8 +1180,6 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
txn: The database transaction.
old_rotate_stream_ordering: The previous maximum event stream ordering.
rotate_to_stream_ordering: The new maximum event stream ordering to summarise.
Returns whether the archiving process has caught up or not.
"""
# Calculate the new counts that should be upserted into event_push_summary
@ -1238,9 +1265,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
(rotate_to_stream_ordering,),
)
async def _remove_old_push_actions_that_have_rotated(
self,
) -> None:
async def _remove_old_push_actions_that_have_rotated(self) -> None:
"""Clear out old push actions that have been summarised."""
# We want to clear out anything that is older than a day that *has* already
@ -1397,7 +1422,7 @@ class EventPushActionsStore(EventPushActionsWorkerStore):
]
def _action_has_highlight(actions: List[Union[dict, str]]) -> bool:
def _action_has_highlight(actions: Collection[Union[Mapping, str]]) -> bool:
for action in actions:
if not isinstance(action, dict):
continue

View File

@ -40,6 +40,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersions
from synapse.events import EventBase, relation_from_event
from synapse.events.snapshot import EventContext
from synapse.logging.opentracing import trace
from synapse.storage._base import db_to_json, make_in_list_sql_clause
from synapse.storage.database import (
DatabasePool,
@ -145,6 +146,7 @@ class PersistEventsStore:
self._backfill_id_gen: AbstractStreamIdGenerator = self.store._backfill_id_gen
self._stream_id_gen: AbstractStreamIdGenerator = self.store._stream_id_gen
@trace
async def _persist_events_and_state_updates(
self,
events_and_contexts: List[Tuple[EventBase, EventContext]],

View File

@ -54,6 +54,7 @@ from synapse.logging.context import (
current_context,
make_deferred_yieldable,
)
from synapse.logging.opentracing import start_active_span, tag_args, trace
from synapse.metrics.background_process_metrics import (
run_as_background_process,
wrap_as_background_process,
@ -430,6 +431,8 @@ class EventsWorkerStore(SQLBaseStore):
return {e.event_id: e for e in events}
@trace
@tag_args
async def get_events_as_list(
self,
event_ids: Collection[str],
@ -1090,23 +1093,42 @@ class EventsWorkerStore(SQLBaseStore):
"""
fetched_event_ids: Set[str] = set()
fetched_events: Dict[str, _EventRow] = {}
events_to_fetch = event_ids
while events_to_fetch:
row_map = await self._enqueue_events(events_to_fetch)
async def _fetch_event_ids_and_get_outstanding_redactions(
event_ids_to_fetch: Collection[str],
) -> Collection[str]:
"""
Fetch all of the given event_ids and return any associated redaction event_ids
that we still need to fetch in the next iteration.
"""
row_map = await self._enqueue_events(event_ids_to_fetch)
# we need to recursively fetch any redactions of those events
redaction_ids: Set[str] = set()
for event_id in events_to_fetch:
for event_id in event_ids_to_fetch:
row = row_map.get(event_id)
fetched_event_ids.add(event_id)
if row:
fetched_events[event_id] = row
redaction_ids.update(row.redactions)
events_to_fetch = redaction_ids.difference(fetched_event_ids)
if events_to_fetch:
logger.debug("Also fetching redaction events %s", events_to_fetch)
event_ids_to_fetch = redaction_ids.difference(fetched_event_ids)
return event_ids_to_fetch
# Grab the initial list of events requested
event_ids_to_fetch = await _fetch_event_ids_and_get_outstanding_redactions(
event_ids
)
# Then go and recursively find all of the associated redactions
with start_active_span("recursively fetching redactions"):
while event_ids_to_fetch:
logger.debug("Also fetching redaction events %s", event_ids_to_fetch)
event_ids_to_fetch = (
await _fetch_event_ids_and_get_outstanding_redactions(
event_ids_to_fetch
)
)
# build a map from event_id to EventBase
event_map: Dict[str, EventBase] = {}
@ -1424,6 +1446,8 @@ class EventsWorkerStore(SQLBaseStore):
return {r["event_id"] for r in rows}
@trace
@tag_args
async def have_seen_events(
self, room_id: str, event_ids: Iterable[str]
) -> Set[str]:
@ -2200,3 +2224,63 @@ class EventsWorkerStore(SQLBaseStore):
(room_id,),
)
return [row[0] for row in txn]
def mark_event_rejected_txn(
self,
txn: LoggingTransaction,
event_id: str,
rejection_reason: Optional[str],
) -> None:
"""Mark an event that was previously accepted as rejected, or vice versa
This can happen, for example, when resyncing state during a faster join.
Args:
txn:
event_id: ID of event to update
rejection_reason: reason it has been rejected, or None if it is now accepted
"""
if rejection_reason is None:
logger.info(
"Marking previously-processed event %s as accepted",
event_id,
)
self.db_pool.simple_delete_txn(
txn,
"rejections",
keyvalues={"event_id": event_id},
)
else:
logger.info(
"Marking previously-processed event %s as rejected(%s)",
event_id,
rejection_reason,
)
self.db_pool.simple_upsert_txn(
txn,
table="rejections",
keyvalues={"event_id": event_id},
values={
"reason": rejection_reason,
"last_check": self._clock.time_msec(),
},
)
self.db_pool.simple_update_txn(
txn,
table="events",
keyvalues={"event_id": event_id},
updatevalues={"rejection_reason": rejection_reason},
)
self.invalidate_get_event_cache_after_txn(txn, event_id)
# TODO(faster_joins): invalidate the cache on workers. Ideally we'd just
# call '_send_invalidation_to_replication', but we actually need the other
# end to call _invalidate_local_get_event_cache() rather than (just)
# _get_event_cache.invalidate().
#
# One solution might be to (somehow) get the workers to call
# _invalidate_caches_for_event() (though that will invalidate more than
# strictly necessary).
#
# https://github.com/matrix-org/synapse/issues/12994

View File

@ -14,11 +14,23 @@
# limitations under the License.
import abc
import logging
from typing import TYPE_CHECKING, Collection, Dict, List, Optional, Tuple, Union, cast
from typing import (
TYPE_CHECKING,
Any,
Collection,
Dict,
List,
Mapping,
Optional,
Sequence,
Tuple,
Union,
cast,
)
from synapse.api.errors import StoreError
from synapse.config.homeserver import ExperimentalConfig
from synapse.push.baserules import list_with_base_rules
from synapse.push.baserules import FilteredPushRules, PushRule, compile_push_rules
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage.database import (
@ -50,60 +62,30 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
def _is_experimental_rule_enabled(
rule_id: str, experimental_config: ExperimentalConfig
) -> bool:
"""Used by `_load_rules` to filter out experimental rules when they
have not been enabled.
"""
if (
rule_id == "global/override/.org.matrix.msc3786.rule.room.server_acl"
and not experimental_config.msc3786_enabled
):
return False
if (
rule_id == "global/underride/.org.matrix.msc3772.thread_reply"
and not experimental_config.msc3772_enabled
):
return False
return True
def _load_rules(
rawrules: List[JsonDict],
enabled_map: Dict[str, bool],
experimental_config: ExperimentalConfig,
) -> List[JsonDict]:
ruleslist = []
for rawrule in rawrules:
rule = dict(rawrule)
rule["conditions"] = db_to_json(rawrule["conditions"])
rule["actions"] = db_to_json(rawrule["actions"])
rule["default"] = False
ruleslist.append(rule)
) -> FilteredPushRules:
"""Take the DB rows returned from the DB and convert them into a full
`FilteredPushRules` object.
"""
# We're going to be mutating this a lot, so copy it. We also filter out
# any experimental default push rules that aren't enabled.
rules = [
rule
for rule in list_with_base_rules(ruleslist)
if _is_experimental_rule_enabled(rule["rule_id"], experimental_config)
ruleslist = [
PushRule(
rule_id=rawrule["rule_id"],
priority_class=rawrule["priority_class"],
conditions=db_to_json(rawrule["conditions"]),
actions=db_to_json(rawrule["actions"]),
)
for rawrule in rawrules
]
for i, rule in enumerate(rules):
rule_id = rule["rule_id"]
push_rules = compile_push_rules(ruleslist)
if rule_id not in enabled_map:
continue
if rule.get("enabled", True) == bool(enabled_map[rule_id]):
continue
filtered_rules = FilteredPushRules(push_rules, enabled_map, experimental_config)
# Rules are cached across users.
rule = dict(rule)
rule["enabled"] = bool(enabled_map[rule_id])
rules[i] = rule
return rules
return filtered_rules
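A short, hypothetical sketch of how the FilteredPushRules returned here is consumed elsewhere in this diff: iterating it yields (PushRule, enabled) pairs, which is what copy_push_rules_from_room_to_room_for_user and the deactivate-account test below rely on. The helper name and the "custom rule" check are illustrative only (the check mirrors the test helper further down).

# Editor's sketch (not part of this commit).
from typing import List

from synapse.push.baserules import FilteredPushRules, PushRule


def enabled_custom_rules(filtered_rules: FilteredPushRules) -> List[PushRule]:
    # Iterating FilteredPushRules yields (rule, enabled) pairs; keep enabled,
    # non-default rules (default rule ids contain "/.", per the test helper).
    return [
        rule
        for rule, enabled in filtered_rules
        if enabled and "/." not in rule.rule_id
    ]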
# The ABCMeta metaclass ensures that it cannot be instantiated without
@ -162,7 +144,7 @@ class PushRulesWorkerStore(
raise NotImplementedError()
@cached(max_entries=5000)
async def get_push_rules_for_user(self, user_id: str) -> List[JsonDict]:
async def get_push_rules_for_user(self, user_id: str) -> FilteredPushRules:
rows = await self.db_pool.simple_select_list(
table="push_rules",
keyvalues={"user_name": user_id},
@ -216,11 +198,11 @@ class PushRulesWorkerStore(
@cachedList(cached_method_name="get_push_rules_for_user", list_name="user_ids")
async def bulk_get_push_rules(
self, user_ids: Collection[str]
) -> Dict[str, List[JsonDict]]:
) -> Dict[str, FilteredPushRules]:
if not user_ids:
return {}
results: Dict[str, List[JsonDict]] = {user_id: [] for user_id in user_ids}
raw_rules: Dict[str, List[JsonDict]] = {user_id: [] for user_id in user_ids}
rows = await self.db_pool.simple_select_many_batch(
table="push_rules",
@ -234,11 +216,13 @@ class PushRulesWorkerStore(
rows.sort(key=lambda row: (-int(row["priority_class"]), -int(row["priority"])))
for row in rows:
results.setdefault(row["user_name"], []).append(row)
raw_rules.setdefault(row["user_name"], []).append(row)
enabled_map_by_user = await self.bulk_get_push_rules_enabled(user_ids)
for user_id, rules in results.items():
results: Dict[str, FilteredPushRules] = {}
for user_id, rules in raw_rules.items():
results[user_id] = _load_rules(
rules, enabled_map_by_user.get(user_id, {}), self.hs.config.experimental
)
@ -345,8 +329,8 @@ class PushRuleStore(PushRulesWorkerStore):
user_id: str,
rule_id: str,
priority_class: int,
conditions: List[Dict[str, str]],
actions: List[Union[JsonDict, str]],
conditions: Sequence[Mapping[str, str]],
actions: Sequence[Union[Mapping[str, Any], str]],
before: Optional[str] = None,
after: Optional[str] = None,
) -> None:
@ -817,7 +801,7 @@ class PushRuleStore(PushRulesWorkerStore):
return self._push_rules_stream_id_gen.get_current_token()
async def copy_push_rule_from_room_to_room(
self, new_room_id: str, user_id: str, rule: dict
self, new_room_id: str, user_id: str, rule: PushRule
) -> None:
"""Copy a single push rule from one room to another for a specific user.
@ -827,21 +811,27 @@ class PushRuleStore(PushRulesWorkerStore):
rule: A push rule.
"""
# Create new rule id
rule_id_scope = "/".join(rule["rule_id"].split("/")[:-1])
rule_id_scope = "/".join(rule.rule_id.split("/")[:-1])
new_rule_id = rule_id_scope + "/" + new_room_id
new_conditions = []
# Change room id in each condition
for condition in rule.get("conditions", []):
for condition in rule.conditions:
new_condition = condition
if condition.get("key") == "room_id":
condition["pattern"] = new_room_id
new_condition = dict(condition)
new_condition["pattern"] = new_room_id
new_conditions.append(new_condition)
# Add the rule for the new room
await self.add_push_rule(
user_id=user_id,
rule_id=new_rule_id,
priority_class=rule["priority_class"],
conditions=rule["conditions"],
actions=rule["actions"],
priority_class=rule.priority_class,
conditions=new_conditions,
actions=rule.actions,
)
async def copy_push_rules_from_room_to_room_for_user(
@ -859,8 +849,11 @@ class PushRuleStore(PushRulesWorkerStore):
user_push_rules = await self.get_push_rules_for_user(user_id)
# Get rules relating to the old room and copy them to the new room
for rule in user_push_rules:
conditions = rule.get("conditions", [])
for rule, enabled in user_push_rules:
if not enabled:
continue
conditions = rule.conditions
if any(
(c.get("key") == "room_id" and c.get("pattern") == old_room_id)
for c in conditions

View File

@ -161,7 +161,7 @@ class ReceiptsWorkerStore(SQLBaseStore):
receipt_type: The receipt types to fetch.
Returns:
The latest receipt, if one exists.
The event ID and stream ordering of the latest receipt, if one exists.
"""
clause, args = make_in_list_sql_clause(

View File

@ -283,6 +283,9 @@ class RoomMemberWorkerStore(EventsWorkerStore):
Returns:
A mapping from user ID to ProfileInfo.
Preconditions:
- There is full state available for the room (it is not partial-stated).
"""
def _get_users_in_room_with_profiles(
@ -1212,6 +1215,30 @@ class RoomMemberWorkerStore(EventsWorkerStore):
"get_forgotten_rooms_for_user", _get_forgotten_rooms_for_user_txn
)
async def is_locally_forgotten_room(self, room_id: str) -> bool:
"""Returns whether all local users have forgotten this room_id.
Args:
room_id: The room ID to query.
Returns:
Whether the room is forgotten.
"""
sql = """
SELECT count(*) > 0 FROM local_current_membership
INNER JOIN room_memberships USING (room_id, event_id)
WHERE
room_id = ?
AND forgotten = 0;
"""
rows = await self.db_pool.execute("is_forgotten_room", None, sql, room_id)
# `count(*)` always returns an integer
# If any rows still exist, it means someone has not forgotten this room yet
return not rows[0][0]
async def get_rooms_user_has_been_in(self, user_id: str) -> Set[str]:
"""Get all rooms that the user has ever been in.

View File

@ -430,6 +430,11 @@ class StateGroupWorkerStore(EventsWorkerStore, SQLBaseStore):
updatevalues={"state_group": state_group},
)
# the event may now be rejected where it was not before, or vice versa,
# in which case we need to update the rejected flags.
if bool(context.rejected) != (event.rejected_reason is not None):
self.mark_event_rejected_txn(txn, event.event_id, context.rejected)
self.db_pool.simple_delete_one_txn(
txn,
table="partial_state_events",

View File

@ -539,15 +539,6 @@ class StateFilter:
is_mine_id: a callable which confirms if a given state_key matches a mxid
of a local user
"""
# TODO(faster_joins): it's not entirely clear that this is safe. In particular,
# there may be circumstances in which we return a piece of state that, once we
# resync the state, we discover is invalid. For example: if it turns out that
# the sender of a piece of state wasn't actually in the room, then clearly that
# state shouldn't have been returned.
# We should at least add some tests around this to see what happens.
# https://github.com/matrix-org/synapse/issues/13006
# if we haven't requested membership events, then it depends on the value of
# 'include_others'
if EventTypes.Member not in self.types:

View File

@ -20,6 +20,7 @@ from twisted.internet import defer
from twisted.internet.defer import Deferred
from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
from synapse.logging.opentracing import trace_with_opname
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.util import unwrapFirstError
@ -58,6 +59,7 @@ class PartialStateEventsTracker:
for o in observers:
o.callback(None)
@trace_with_opname("PartialStateEventsTracker.await_full_state")
async def await_full_state(self, event_ids: Collection[str]) -> None:
"""Wait for all the given events to have full state.
@ -151,6 +153,7 @@ class PartialCurrentStateTracker:
for o in observers:
o.callback(None)
@trace_with_opname("PartialCurrentStateTracker.await_full_state")
async def await_full_state(self, room_id: str) -> None:
# We add the deferred immediately so that the DB call to check for
# partial state doesn't race when we unpartial the room.

View File

@ -27,6 +27,8 @@ from synapse.logging.context import (
make_deferred_yieldable,
run_in_background,
)
from synapse.logging.opentracing import start_active_span
from synapse.metrics import Histogram
from synapse.util import Clock
if typing.TYPE_CHECKING:
@ -35,6 +37,29 @@ if typing.TYPE_CHECKING:
logger = logging.getLogger(__name__)
queue_wait_timer = Histogram(
"synapse_rate_limit_queue_wait_time_seconds",
"sec",
[],
buckets=(
0.005,
0.01,
0.025,
0.05,
0.1,
0.25,
0.5,
0.75,
1.0,
2.5,
5.0,
10.0,
20.0,
"+Inf",
),
)
class FederationRateLimiter:
def __init__(self, clock: Clock, config: FederationRatelimitSettings):
def new_limiter() -> "_PerHostRatelimiter":
@ -176,8 +201,17 @@ class _PerHostRatelimiter:
# Ensure that we've properly cleaned up.
self.sleeping_requests.discard(request_id)
self.ready_request_queue.pop(request_id, None)
wait_span_scope.__exit__(None, None, None)
wait_timer_cm.__exit__(None, None, None)
return r
# Tracing
wait_span_scope = start_active_span("ratelimit wait")
wait_span_scope.__enter__()
# Metrics
wait_timer_cm = queue_wait_timer.time()
wait_timer_cm.__enter__()
ret_defer.addCallbacks(on_start, on_err)
ret_defer.addBoth(on_both)
return make_deferred_yieldable(ret_defer)

View File

@ -73,8 +73,8 @@ async def filter_events_for_client(
* the user is not currently a member of the room, and:
* the user has not been a member of the room since the given
events
always_include_ids: set of event ids to specifically
include (unless sender is ignored)
always_include_ids: set of event ids to specifically include, if present
in events (unless sender is ignored)
filter_send_to_client: Whether we're checking an event that's going to be
sent to a client. This might not always be the case since this function can
also be called to check whether a user can see the state at a given point.

View File

@ -11,11 +11,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Dict
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.constants import AccountDataTypes
from synapse.push.baserules import PushRule
from synapse.push.rulekinds import PRIORITY_CLASS_MAP
from synapse.rest import admin
from synapse.rest.client import account, login
@ -130,12 +130,12 @@ class DeactivateAccountTestCase(HomeserverTestCase):
),
)
def _is_custom_rule(self, push_rule: Dict[str, Any]) -> bool:
def _is_custom_rule(self, push_rule: PushRule) -> bool:
"""
Default rules start with a dot, such as .m.rule and .im.vector.
This function returns true iff a rule is custom (not default).
"""
return "/." not in push_rule["rule_id"]
return "/." not in push_rule.rule_id
def test_push_rules_deleted_upon_account_deactivation(self) -> None:
"""
@ -157,22 +157,21 @@ class DeactivateAccountTestCase(HomeserverTestCase):
)
# Test the rule exists
push_rules = self.get_success(self._store.get_push_rules_for_user(self.user))
filtered_push_rules = self.get_success(
self._store.get_push_rules_for_user(self.user)
)
# Filter out default rules; we don't care
push_rules = list(filter(self._is_custom_rule, push_rules))
push_rules = [r for r, _ in filtered_push_rules if self._is_custom_rule(r)]
# Check our rule made it
self.assertEqual(
push_rules,
[
{
"user_name": "@user:test",
"rule_id": "personal.override.rule1",
"priority_class": 5,
"priority": 0,
"conditions": [],
"actions": [],
"default": False,
}
PushRule(
rule_id="personal.override.rule1",
priority_class=5,
conditions=[],
actions=[],
)
],
push_rules,
)
@ -180,9 +179,11 @@ class DeactivateAccountTestCase(HomeserverTestCase):
# Request the deactivation of our account
self._deactivate_my_account()
push_rules = self.get_success(self._store.get_push_rules_for_user(self.user))
filtered_push_rules = self.get_success(
self._store.get_push_rules_for_user(self.user)
)
# Filter out default rules; we don't care
push_rules = list(filter(self._is_custom_rule, push_rules))
push_rules = [r for r, _ in filtered_push_rules if self._is_custom_rule(r)]
# Check our rule no longer exists
self.assertEqual(push_rules, [], push_rules)

View File

@ -25,6 +25,8 @@ from synapse.logging.context import (
from synapse.logging.opentracing import (
start_active_span,
start_active_span_follows_from,
tag_args,
trace_with_opname,
)
from synapse.util import Clock
@ -38,8 +40,12 @@ try:
except ImportError:
jaeger_client = None # type: ignore
import logging
from tests.unittest import TestCase
logger = logging.getLogger(__name__)
class LogContextScopeManagerTestCase(TestCase):
"""
@ -194,3 +200,80 @@ class LogContextScopeManagerTestCase(TestCase):
self._reporter.get_spans(),
[scopes[1].span, scopes[2].span, scopes[0].span],
)
def test_trace_decorator_sync(self) -> None:
"""
Test whether we can use `@trace_with_opname` (`@trace`) and `@tag_args`
with sync functions
"""
with LoggingContext("root context"):
@trace_with_opname("fixture_sync_func", tracer=self._tracer)
@tag_args
def fixture_sync_func() -> str:
return "foo"
result = fixture_sync_func()
self.assertEqual(result, "foo")
# the span should have been reported
self.assertEqual(
[span.operation_name for span in self._reporter.get_spans()],
["fixture_sync_func"],
)
def test_trace_decorator_deferred(self) -> None:
"""
Test whether we can use `@trace_with_opname` (`@trace`) and `@tag_args`
with functions that return deferreds
"""
reactor = MemoryReactorClock()
with LoggingContext("root context"):
@trace_with_opname("fixture_deferred_func", tracer=self._tracer)
@tag_args
def fixture_deferred_func() -> "defer.Deferred[str]":
d1: defer.Deferred[str] = defer.Deferred()
d1.callback("foo")
return d1
result_d1 = fixture_deferred_func()
# let the tasks complete
reactor.pump((2,) * 8)
self.assertEqual(self.successResultOf(result_d1), "foo")
# the span should have been reported
self.assertEqual(
[span.operation_name for span in self._reporter.get_spans()],
["fixture_deferred_func"],
)
def test_trace_decorator_async(self) -> None:
"""
Test whether we can use `@trace_with_opname` (`@trace`) and `@tag_args`
with async functions
"""
reactor = MemoryReactorClock()
with LoggingContext("root context"):
@trace_with_opname("fixture_async_func", tracer=self._tracer)
@tag_args
async def fixture_async_func() -> str:
return "foo"
d1 = defer.ensureDeferred(fixture_async_func())
# let the tasks complete
reactor.pump((2,) * 8)
self.assertEqual(self.successResultOf(d1), "foo")
# the span should have been reported
self.assertEqual(
[span.operation_name for span in self._reporter.get_spans()],
["fixture_async_func"],
)

View File

@ -13,7 +13,6 @@
# limitations under the License.
import urllib.parse
from http import HTTPStatus
from parameterized import parameterized
@ -79,10 +78,10 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
# Should be quarantined
self.assertEqual(
HTTPStatus.NOT_FOUND,
404,
channel.code,
msg=(
"Expected to receive a HTTPStatus.NOT_FOUND on accessing quarantined media: %s"
"Expected to receive a 404 on accessing quarantined media: %s"
% server_and_media_id
),
)
@ -107,7 +106,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
# Expect a forbidden error
self.assertEqual(
HTTPStatus.FORBIDDEN,
403,
channel.code,
msg="Expected forbidden on quarantining media as a non-admin",
)

View File

@ -11,7 +11,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from http import HTTPStatus
from typing import Collection
from parameterized import parameterized
@ -51,7 +50,7 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
)
def test_requester_is_no_admin(self, method: str, url: str) -> None:
"""
If the user is not a server admin, an error HTTPStatus.FORBIDDEN is returned.
If the user is not a server admin, an error 403 is returned.
"""
self.register_user("user", "pass", admin=False)
@ -64,7 +63,7 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
access_token=other_user_tok,
)
self.assertEqual(HTTPStatus.FORBIDDEN, channel.code, msg=channel.json_body)
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
def test_invalid_parameter(self) -> None:
@ -81,7 +80,7 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, msg=channel.json_body)
self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.MISSING_PARAM, channel.json_body["errcode"])
# job_name invalid
@ -92,7 +91,7 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, msg=channel.json_body)
self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.UNKNOWN, channel.json_body["errcode"])
def _register_bg_update(self) -> None:
@ -365,4 +364,4 @@ class BackgroundUpdatesTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, msg=channel.json_body)
self.assertEqual(400, channel.code, msg=channel.json_body)
