Merge remote-tracking branch 'origin/release-v1.92' into matrix-org-hotfixes

matrix-org-hotfixes
David Robertson 2023-09-05 14:40:53 +01:00
commit 0e8cbbdb8e
No known key found for this signature in database
GPG Key ID: 903ECE108A39DEDD
172 changed files with 2477 additions and 1788 deletions

View File

@ -47,10 +47,9 @@ if not IS_PR:
"database": "sqlite",
"extras": "all",
}
for version in ("3.9", "3.10", "3.11")
for version in ("3.9", "3.10", "3.11", "3.12.0-rc.1")
)
trial_postgres_tests = [
{
"python-version": "3.8",

View File

@ -57,8 +57,8 @@ jobs:
# `pip install matrix-synapse[all]` as closely as possible.
- run: poetry update --no-dev
- run: poetry run pip list > after.txt && (diff -u before.txt after.txt || true)
- name: Remove warn_unused_ignores from mypy config
run: sed '/warn_unused_ignores = True/d' -i mypy.ini
- name: Remove unhelpful options from mypy config
run: sed -e '/warn_unused_ignores = True/d' -e '/warn_redundant_casts = True/d' -i mypy.ini
- run: poetry run mypy
trial:
needs: check_repo

View File

@ -54,8 +54,8 @@ jobs:
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#${{ inputs.twisted_ref || 'trunk' }}
poetry install --no-interaction --extras "all test"
- name: Remove warn_unused_ignores from mypy config
run: sed '/warn_unused_ignores = True/d' -i mypy.ini
- name: Remove unhelpful options from mypy config
run: sed -e '/warn_unused_ignores = True/d' -e '/warn_redundant_casts = True/d' -i mypy.ini
- run: poetry run mypy
trial:

View File

@ -1,3 +1,65 @@
# Synapse 1.92.0rc1 (2023-09-05)
### Features
- Add configuration setting for CAS protocol version. Contributed by Aurélien Grimpard. ([\#15816](https://github.com/matrix-org/synapse/issues/15816))
- Suppress notifications from message edits per [MSC3958](https://github.com/matrix-org/matrix-spec-proposals/pull/3958). ([\#16113](https://github.com/matrix-org/synapse/issues/16113))
- Return a `Retry-After` header with `M_LIMIT_EXCEEDED` error responses; a brief client-side handling sketch follows this list. ([\#16136](https://github.com/matrix-org/synapse/issues/16136))
- Add `last_seen_ts` to the [admin users API](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html). ([\#16218](https://github.com/matrix-org/synapse/issues/16218))
- Improve resource usage when sending data to a large number of remote hosts that are marked as "down". ([\#16223](https://github.com/matrix-org/synapse/issues/16223))
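Not part of the upstream release notes, just an illustration: a minimal client-side sketch of how the new `Retry-After` header (in seconds) can be honoured alongside the pre-existing `retry_after_ms` body field when a request is rate-limited. The URL, token and payload are placeholders, and a numeric `Retry-After` value is assumed.
```python
import time

import requests


def put_with_backoff(url: str, payload: dict, token: str) -> requests.Response:
    """Retry a PUT whenever the homeserver responds with M_LIMIT_EXCEEDED."""
    while True:
        resp = requests.put(
            url, json=payload, headers={"Authorization": f"Bearer {token}"}
        )
        if resp.status_code != 429:
            return resp
        body = resp.json()
        if body.get("errcode") != "M_LIMIT_EXCEEDED":
            return resp
        # Prefer the standard HTTP header; fall back to the Matrix-specific field.
        delay_s = float(
            resp.headers.get("Retry-After", body.get("retry_after_ms", 1000) / 1000)
        )
        time.sleep(delay_s)
```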
### Bugfixes
- Fix IPv6-related bugs in SMTP settings, adding groundwork to fix similar issues. Contributed by @evilham and @telmich (ungleich.ch). ([\#16155](https://github.com/matrix-org/synapse/issues/16155))
- Fix a spec compliance issue where requests to the `/publicRooms` federation API would specify `include_all_networks` as a string. ([\#16185](https://github.com/matrix-org/synapse/issues/16185))
- Fix an inaccurate error message shown when attempting to ban or unban a user with the same or higher power level, by splitting the conditional statements. Contributed by @leviosacz. ([\#16205](https://github.com/matrix-org/synapse/issues/16205))
- Fix a rare bug that broke looping calls, which could lead to e.g. linearly increasing memory usage. Introduced in v1.90.0. ([\#16210](https://github.com/matrix-org/synapse/issues/16210))
- Fix a long-standing bug where uploading images would fail if we could not generate thumbnails for them. ([\#16211](https://github.com/matrix-org/synapse/issues/16211))
- Fix a long-standing bug where we did not correctly back off from servers that had "gone" if they returned 4xx series error codes. ([\#16221](https://github.com/matrix-org/synapse/issues/16221))
### Improved Documentation
- Update links to the [matrix.org blog](https://matrix.org/blog/). ([\#16008](https://github.com/matrix-org/synapse/issues/16008))
- Document which [admin APIs](https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html) are disabled when experimental [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) support is enabled. ([\#16168](https://github.com/matrix-org/synapse/issues/16168))
- Document [`exclude_rooms_from_sync`](https://matrix-org.github.io/synapse/v1.92/usage/configuration/config_documentation.html#exclude_rooms_from_sync) configuration option. ([\#16178](https://github.com/matrix-org/synapse/issues/16178))
### Internal Changes
- Prepare unit tests for Python 3.12. ([\#16099](https://github.com/matrix-org/synapse/issues/16099))
- Fix nightly CI jobs. ([\#16121](https://github.com/matrix-org/synapse/issues/16121), [\#16213](https://github.com/matrix-org/synapse/issues/16213))
- Describe which rate limiter was hit in logs. ([\#16135](https://github.com/matrix-org/synapse/issues/16135))
- Simplify presence code when using workers. ([\#16170](https://github.com/matrix-org/synapse/issues/16170))
- Track per-device information in the presence code. ([\#16171](https://github.com/matrix-org/synapse/issues/16171), [\#16172](https://github.com/matrix-org/synapse/issues/16172))
- Stop using the `event_txn_id` table. ([\#16175](https://github.com/matrix-org/synapse/issues/16175))
- Use `AsyncMock` instead of custom code. ([\#16179](https://github.com/matrix-org/synapse/issues/16179), [\#16180](https://github.com/matrix-org/synapse/issues/16180))
- Improve error reporting of invalid data passed to `/_matrix/key/v2/query`. ([\#16183](https://github.com/matrix-org/synapse/issues/16183))
- Task scheduler: send a replication notification when a new task is scheduled, so that it is launched as soon as possible. ([\#16184](https://github.com/matrix-org/synapse/issues/16184))
- Improve type hints. ([\#16186](https://github.com/matrix-org/synapse/issues/16186), [\#16188](https://github.com/matrix-org/synapse/issues/16188), [\#16201](https://github.com/matrix-org/synapse/issues/16201))
- Bump black version to 23.7.0. ([\#16187](https://github.com/matrix-org/synapse/issues/16187))
- Log the details of background update failures. ([\#16212](https://github.com/matrix-org/synapse/issues/16212))
- Cache device resync requests over replication. ([\#16241](https://github.com/matrix-org/synapse/issues/16241))
### Updates to locked dependencies
* Bump anyhow from 1.0.72 to 1.0.75. ([\#16141](https://github.com/matrix-org/synapse/issues/16141))
* Bump furo from 2023.7.26 to 2023.8.19. ([\#16238](https://github.com/matrix-org/synapse/issues/16238))
* Bump phonenumbers from 8.13.18 to 8.13.19. ([\#16237](https://github.com/matrix-org/synapse/issues/16237))
* Bump psycopg2 from 2.9.6 to 2.9.7. ([\#16196](https://github.com/matrix-org/synapse/issues/16196))
* Bump regex from 1.9.3 to 1.9.4. ([\#16195](https://github.com/matrix-org/synapse/issues/16195))
* Bump ruff from 0.0.277 to 0.0.286. ([\#16198](https://github.com/matrix-org/synapse/issues/16198))
* Bump sentry-sdk from 1.29.2 to 1.30.0. ([\#16236](https://github.com/matrix-org/synapse/issues/16236))
* Bump serde from 1.0.184 to 1.0.188. ([\#16194](https://github.com/matrix-org/synapse/issues/16194))
* Bump serde_json from 1.0.104 to 1.0.105. ([\#16140](https://github.com/matrix-org/synapse/issues/16140))
* Bump types-psycopg2 from 2.9.21.10 to 2.9.21.11. ([\#16200](https://github.com/matrix-org/synapse/issues/16200))
* Bump types-pyyaml from 6.0.12.10 to 6.0.12.11. ([\#16199](https://github.com/matrix-org/synapse/issues/16199))
# Synapse 1.91.1 (2023-09-04)
### Bugfixes
- Fix a performance regression introduced in Synapse 1.91.0 where event persistence would cause an excessive linear growth in CPU usage. ([\#16220](https://github.com/matrix-org/synapse/issues/16220))
# Synapse 1.91.0 (2023-08-30)
No significant changes since 1.91.0rc1.

Cargo.lock (generated, 28 lines changed)
View File

@ -13,9 +13,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.72"
version = "1.0.75"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b13c32d80ecc7ab747b80c3784bce54ee8a7a0cc4fbda9bf4cda2cf6fe90854"
checksum = "a4668cab20f66d8d020e1fbc0ebe47217433c1b6c8f2040faf858554e394ace6"
[[package]]
name = "arc-swap"
@ -291,9 +291,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.9.3"
version = "1.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81bc1d4caf89fac26a70747fe603c130093b53c773888797a6329091246d651a"
checksum = "12de2eff854e5fa4b1295edd650e227e9d8fb0c9e90b12e7f36d6a6811791a29"
dependencies = [
"aho-corasick",
"memchr",
@ -303,9 +303,9 @@ dependencies = [
[[package]]
name = "regex-automata"
version = "0.3.6"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fed1ceff11a1dddaee50c9dc8e4938bd106e9d89ae372f192311e7da498e3b69"
checksum = "49530408a136e16e5b486e883fbb6ba058e8e4e8ae6621a77b048b314336e629"
dependencies = [
"aho-corasick",
"memchr",
@ -314,9 +314,9 @@ dependencies = [
[[package]]
name = "regex-syntax"
version = "0.7.4"
version = "0.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5ea92a5b6195c6ef2a0295ea818b312502c6fc94dde986c5553242e18fd4ce2"
checksum = "dbb5fb1acd8a1a18b3dd5be62d25485eb770e05afb408a9627d14d451bae12da"
[[package]]
name = "ryu"
@ -332,18 +332,18 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "serde"
version = "1.0.184"
version = "1.0.188"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c911f4b04d7385c9035407a4eff5903bf4fe270fa046fda448b69e797f4fff0"
checksum = "cf9e0fcba69a370eed61bcf2b728575f726b50b55cba78064753d708ddc7549e"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.184"
version = "1.0.188"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1df27f5b29406ada06609b2e2f77fb34f6dbb104a457a671cc31dbed237e09e"
checksum = "4eca7ac642d82aa35b60049a6eccb4be6be75e599bd2e9adb5f875a737654af2"
dependencies = [
"proc-macro2",
"quote",
@ -352,9 +352,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.104"
version = "1.0.105"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "076066c5f1078eac5b722a31827a8832fe108bed65dfa75e233c89f8206e976c"
checksum = "693151e1ac27563d6dbcec9dee9fbd5da8539b20fa14ad3752b2e6d363ace360"
dependencies = [
"itoa",
"ryu",

View File

@ -1 +0,0 @@
Fix a performance regression introduced in Synapse 1.91.0 where event persistence would cause excessive CPU usage over time.

debian/changelog (vendored, 12 lines changed)
View File

@ -1,3 +1,15 @@
matrix-synapse-py3 (1.92.0~rc1) stable; urgency=medium
* New Synapse release 1.92.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 05 Sep 2023 11:21:43 +0100
matrix-synapse-py3 (1.91.1) stable; urgency=medium
* New Synapse release 1.91.1.
-- Synapse Packaging team <packages@matrix.org> Mon, 04 Sep 2023 14:03:18 +0100
matrix-synapse-py3 (1.91.0) stable; urgency=medium
* New Synapse release 1.91.0.

View File

@ -1,5 +1,7 @@
# Account validity API
**Note:** This API is disabled when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
This API allows a server administrator to manage the validity of an account. To
use it, you must enable the account validity feature (under
`account_validity`) in Synapse's configuration.
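As a rough illustration only (the endpoint and body fields below are assumptions recalled from the account validity admin API rather than taken from this page; check them against your Synapse version), extending a user's validity might look like:
```python
import time

import requests

# Hypothetical homeserver URL and admin token; adjust for your deployment.
resp = requests.post(
    "https://synapse.example.com/_synapse/admin/v1/account_validity/validity",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={
        "user_id": "@alice:example.com",
        # Push the expiry 30 days into the future (milliseconds since epoch).
        "expiration_ts": int(time.time() * 1000) + 30 * 24 * 60 * 60 * 1000,
        "enable_renewal_emails": True,
    },
)
resp.raise_for_status()
```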

View File

@ -1,5 +1,7 @@
# Shared-Secret Registration
**Note:** This API is disabled when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
This API allows for the creation of users in an administrative and
non-interactive way. This is generally used for bootstrapping a Synapse
instance with administrator accounts.
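For orientation, a sketch of the flow, assuming the nonce + HMAC-SHA1 scheme used by `register_new_matrix_user`; the homeserver URL, shared secret and credentials are placeholders.
```python
import hashlib
import hmac

import requests

BASE = "https://synapse.example.com"
SHARED_SECRET = b"<registration_shared_secret from homeserver.yaml>"

# 1. Fetch a single-use nonce.
nonce = requests.get(f"{BASE}/_synapse/admin/v1/register").json()["nonce"]

# 2. MAC the nonce, username, password and admin flag, NUL-separated.
mac = hmac.new(SHARED_SECRET, digestmod=hashlib.sha1)
mac.update(b"\x00".join([nonce.encode(), b"alice", b"wonderland", b"admin"]))

# 3. Register the account non-interactively.
resp = requests.post(
    f"{BASE}/_synapse/admin/v1/register",
    json={
        "nonce": nonce,
        "username": "alice",
        "password": "wonderland",
        "admin": True,
        "mac": mac.hexdigest(),
    },
)
print(resp.json())  # contains user_id and an access_token on success
```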

View File

@ -218,7 +218,7 @@ The following parameters should be set in the URL:
- `name` - Is optional and filters to only return users with user ID localparts
**or** displaynames that contain this value.
- `guests` - string representing a bool - Is optional and if `false` will **exclude** guest users.
Defaults to `true` to include guest users.
Defaults to `true` to include guest users. This parameter is not supported when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
- `admins` - Optional flag to filter admins. If `true`, only admins are queried. If `false`, admins are excluded from
the query. When the flag is absent (the default), **both** admins and non-admins are included in the search results.
- `deactivated` - string representing a bool - Is optional and if `true` will **include** deactivated users.
@ -242,6 +242,7 @@ The following parameters should be set in the URL:
- `displayname` - Users are ordered alphabetically by `displayname`.
- `avatar_url` - Users are ordered alphabetically by avatar URL.
- `creation_ts` - Users are ordered by when the user was created, in ms.
- `last_seen_ts` - Users are ordered by when the user was last seen, in ms (see the example request after this parameter list).
- `dir` - Direction of sort order. Either `f` for forwards or `b` for backwards.
Setting this value to `b` will reverse the above sort order. Defaults to `f`.
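An illustrative request using the newly added ordering; the homeserver URL and token are placeholders, and the parameters are those documented above.
```python
import requests

# List the ten most recently active users, newest first.
resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v2/users",
    params={"order_by": "last_seen_ts", "dir": "b", "limit": 10},
    headers={"Authorization": "Bearer <admin_access_token>"},
)
for user in resp.json()["users"]:
    print(user["name"], user.get("last_seen_ts"))
```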
@ -272,6 +273,7 @@ The following fields are returned in the JSON response body:
- `displayname` - string - The user's display name if they have set one.
- `avatar_url` - string - The user's avatar URL if they have set one.
- `creation_ts` - integer - The user's creation timestamp in ms.
- `last_seen_ts` - integer - The user's last activity timestamp in ms.
- `next_token`: string representing a positive integer - Indication for pagination. See above.
- `total` - integer - Total number of users.
@ -390,6 +392,8 @@ The following actions are **NOT** performed. The list may be incomplete.
## Reset password
**Note:** This API is disabled when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
Changes the password of another user. This will automatically log the user out of all their devices.
The api is:
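The route itself falls outside this hunk; purely as a sketch (assuming the usual `POST /_synapse/admin/v1/reset_password/<user_id>` endpoint, with placeholder credentials), a call might look like:
```python
import requests

user_id = "@alice:example.com"
resp = requests.post(
    f"https://synapse.example.com/_synapse/admin/v1/reset_password/{user_id}",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={"new_password": "a-new-strong-password", "logout_devices": True},
)
resp.raise_for_status()
```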
@ -413,6 +417,8 @@ The parameter `logout_devices` is optional and defaults to `true`.
## Get whether a user is a server administrator or not
**Note:** This API is disabled when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
The api is:
```
@ -430,6 +436,8 @@ A response body like the following is returned:
## Change whether a user is a server administrator or not
**Note:** This API is disabled when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
Note that you cannot demote yourself.
The api is:
@ -723,6 +731,8 @@ delete largest/smallest or newest/oldest files first.
## Login as a user
**Note:** This API is disabled when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
Get an access token that can be used to authenticate as that user. Useful for
when admins wish to do actions on behalf of a user.
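A minimal sketch of that flow, assuming the `POST /_synapse/admin/v1/users/<user_id>/login` route; the homeserver URL and admin token are placeholders.
```python
import requests

user_id = "@alice:example.com"
resp = requests.post(
    f"https://synapse.example.com/_synapse/admin/v1/users/{user_id}/login",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={},  # optionally {"valid_until_ms": ...} to bound the token lifetime
)
puppet_token = resp.json()["access_token"]
# Requests made with puppet_token now act as @alice:example.com.
```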

View File

@ -12,7 +12,7 @@ Note that this schedule might be modified depending on the availability of the
Synapse team, e.g. releases may be skipped to avoid holidays.
Release announcements can be found in the
[release category of the Matrix blog](https://matrix.org/blog/category/releases).
[release category of the Matrix blog](https://matrix.org/category/releases).
## Bugfix releases
@ -34,4 +34,4 @@ be held to be released together.
In some cases, a pre-disclosure of a security release will be issued as a notice
to Synapse operators that there is an upcoming security release. These can be
found in the [security category of the Matrix blog](https://matrix.org/blog/category/security).
found in the [security category of the Matrix blog](https://matrix.org/category/security).

View File

@ -1,5 +1,7 @@
# Registration Tokens
**Note:** This API is disabled when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
This API allows you to manage tokens which can be used to authenticate
registration requests, as proposed in
[MSC3231](https://github.com/matrix-org/matrix-doc/blob/main/proposals/3231-token-authenticated-registration.md)
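As a hedged example (the endpoint and field name are recalled from the registration token admin API, so treat them as assumptions), creating a token that can be redeemed five times might look like:
```python
import requests

resp = requests.post(
    "https://synapse.example.com/_synapse/admin/v1/registration_tokens/new",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={"uses_allowed": 5},
)
print(resp.json())  # includes the generated "token" string
```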

View File

@ -3420,6 +3420,7 @@ Has the following sub-options:
to style the login flow according to the identity provider in question.
See the [spec](https://spec.matrix.org/latest/) for possible options here.
* `server_url`: The URL of the CAS authorization endpoint.
* `protocol_version`: The CAS protocol version, defaults to none (version 3 is required if you want to use `required_attributes`).
* `displayname_attribute`: The attribute of the CAS response to use as the display name.
If no name is given here, no displayname will be set.
* `required_attributes`: It is possible to configure Synapse to only allow logins if CAS attributes
@ -3433,6 +3434,7 @@ Example configuration:
cas_config:
enabled: true
server_url: "https://cas-server.com"
protocol_version: 3
displayname_attribute: name
required_attributes:
userGroup: "staff"
@ -3865,6 +3867,19 @@ Example configuration:
```yaml
forget_rooms_on_leave: false
```
---
### `exclude_rooms_from_sync`
A list of rooms to exclude from sync responses. This is useful for server
administrators wishing to group users into a room without these users being able
to see it from their client.
By default, no room is excluded.
Example configuration:
```yaml
exclude_rooms_from_sync:
- "!foo:example.com"
```
---
## Opentracing

View File

@ -87,18 +87,9 @@ ignore_missing_imports = True
[mypy-saml2.*]
ignore_missing_imports = True
[mypy-service_identity.*]
ignore_missing_imports = True
[mypy-srvlookup.*]
ignore_missing_imports = True
# https://github.com/twisted/treq/pull/366
[mypy-treq.*]
ignore_missing_imports = True
[mypy-incremental.*]
ignore_missing_imports = True
[mypy-setuptools_rust.*]
ignore_missing_imports = True

poetry.lock (generated, 253 lines changed)
View File

@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 1.6.1 and should not be changed by hand.
[[package]]
name = "alabaster"
@ -148,36 +148,33 @@ lxml = ["lxml"]
[[package]]
name = "black"
version = "23.3.0"
version = "23.7.0"
description = "The uncompromising code formatter."
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "black-23.3.0-cp310-cp310-macosx_10_16_arm64.whl", hash = "sha256:0945e13506be58bf7db93ee5853243eb368ace1c08a24c65ce108986eac65915"},
{file = "black-23.3.0-cp310-cp310-macosx_10_16_universal2.whl", hash = "sha256:67de8d0c209eb5b330cce2469503de11bca4085880d62f1628bd9972cc3366b9"},
{file = "black-23.3.0-cp310-cp310-macosx_10_16_x86_64.whl", hash = "sha256:7c3eb7cea23904399866c55826b31c1f55bbcd3890ce22ff70466b907b6775c2"},
{file = "black-23.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:32daa9783106c28815d05b724238e30718f34155653d4d6e125dc7daec8e260c"},
{file = "black-23.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:35d1381d7a22cc5b2be2f72c7dfdae4072a3336060635718cc7e1ede24221d6c"},
{file = "black-23.3.0-cp311-cp311-macosx_10_16_arm64.whl", hash = "sha256:a8a968125d0a6a404842fa1bf0b349a568634f856aa08ffaff40ae0dfa52e7c6"},
{file = "black-23.3.0-cp311-cp311-macosx_10_16_universal2.whl", hash = "sha256:c7ab5790333c448903c4b721b59c0d80b11fe5e9803d8703e84dcb8da56fec1b"},
{file = "black-23.3.0-cp311-cp311-macosx_10_16_x86_64.whl", hash = "sha256:a6f6886c9869d4daae2d1715ce34a19bbc4b95006d20ed785ca00fa03cba312d"},
{file = "black-23.3.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f3c333ea1dd6771b2d3777482429864f8e258899f6ff05826c3a4fcc5ce3f70"},
{file = "black-23.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:11c410f71b876f961d1de77b9699ad19f939094c3a677323f43d7a29855fe326"},
{file = "black-23.3.0-cp37-cp37m-macosx_10_16_x86_64.whl", hash = "sha256:1d06691f1eb8de91cd1b322f21e3bfc9efe0c7ca1f0e1eb1db44ea367dff656b"},
{file = "black-23.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:50cb33cac881766a5cd9913e10ff75b1e8eb71babf4c7104f2e9c52da1fb7de2"},
{file = "black-23.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e114420bf26b90d4b9daa597351337762b63039752bdf72bf361364c1aa05925"},
{file = "black-23.3.0-cp38-cp38-macosx_10_16_arm64.whl", hash = "sha256:48f9d345675bb7fbc3dd85821b12487e1b9a75242028adad0333ce36ed2a6d27"},
{file = "black-23.3.0-cp38-cp38-macosx_10_16_universal2.whl", hash = "sha256:714290490c18fb0126baa0fca0a54ee795f7502b44177e1ce7624ba1c00f2331"},
{file = "black-23.3.0-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:064101748afa12ad2291c2b91c960be28b817c0c7eaa35bec09cc63aa56493c5"},
{file = "black-23.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:562bd3a70495facf56814293149e51aa1be9931567474993c7942ff7d3533961"},
{file = "black-23.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:e198cf27888ad6f4ff331ca1c48ffc038848ea9f031a3b40ba36aced7e22f2c8"},
{file = "black-23.3.0-cp39-cp39-macosx_10_16_arm64.whl", hash = "sha256:3238f2aacf827d18d26db07524e44741233ae09a584273aa059066d644ca7b30"},
{file = "black-23.3.0-cp39-cp39-macosx_10_16_universal2.whl", hash = "sha256:f0bd2f4a58d6666500542b26354978218a9babcdc972722f4bf90779524515f3"},
{file = "black-23.3.0-cp39-cp39-macosx_10_16_x86_64.whl", hash = "sha256:92c543f6854c28a3c7f39f4d9b7694f9a6eb9d3c5e2ece488c327b6e7ea9b266"},
{file = "black-23.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a150542a204124ed00683f0db1f5cf1c2aaaa9cc3495b7a3b5976fb136090ab"},
{file = "black-23.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:6b39abdfb402002b8a7d030ccc85cf5afff64ee90fa4c5aebc531e3ad0175ddb"},
{file = "black-23.3.0-py3-none-any.whl", hash = "sha256:ec751418022185b0c1bb7d7736e6933d40bbb14c14a0abcf9123d1b159f98dd4"},
{file = "black-23.3.0.tar.gz", hash = "sha256:1c7b8d606e728a41ea1ccbd7264677e494e87cf630e399262ced92d4a8dac940"},
{file = "black-23.7.0-cp310-cp310-macosx_10_16_arm64.whl", hash = "sha256:5c4bc552ab52f6c1c506ccae05681fab58c3f72d59ae6e6639e8885e94fe2587"},
{file = "black-23.7.0-cp310-cp310-macosx_10_16_universal2.whl", hash = "sha256:552513d5cd5694590d7ef6f46e1767a4df9af168d449ff767b13b084c020e63f"},
{file = "black-23.7.0-cp310-cp310-macosx_10_16_x86_64.whl", hash = "sha256:86cee259349b4448adb4ef9b204bb4467aae74a386bce85d56ba4f5dc0da27be"},
{file = "black-23.7.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:501387a9edcb75d7ae8a4412bb8749900386eaef258f1aefab18adddea1936bc"},
{file = "black-23.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:fb074d8b213749fa1d077d630db0d5f8cc3b2ae63587ad4116e8a436e9bbe995"},
{file = "black-23.7.0-cp311-cp311-macosx_10_16_arm64.whl", hash = "sha256:b5b0ee6d96b345a8b420100b7d71ebfdd19fab5e8301aff48ec270042cd40ac2"},
{file = "black-23.7.0-cp311-cp311-macosx_10_16_universal2.whl", hash = "sha256:893695a76b140881531062d48476ebe4a48f5d1e9388177e175d76234ca247cd"},
{file = "black-23.7.0-cp311-cp311-macosx_10_16_x86_64.whl", hash = "sha256:c333286dc3ddca6fdff74670b911cccedacb4ef0a60b34e491b8a67c833b343a"},
{file = "black-23.7.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:831d8f54c3a8c8cf55f64d0422ee875eecac26f5f649fb6c1df65316b67c8926"},
{file = "black-23.7.0-cp311-cp311-win_amd64.whl", hash = "sha256:7f3bf2dec7d541b4619b8ce526bda74a6b0bffc480a163fed32eb8b3c9aed8ad"},
{file = "black-23.7.0-cp38-cp38-macosx_10_16_arm64.whl", hash = "sha256:f9062af71c59c004cd519e2fb8f5d25d39e46d3af011b41ab43b9c74e27e236f"},
{file = "black-23.7.0-cp38-cp38-macosx_10_16_universal2.whl", hash = "sha256:01ede61aac8c154b55f35301fac3e730baf0c9cf8120f65a9cd61a81cfb4a0c3"},
{file = "black-23.7.0-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:327a8c2550ddc573b51e2c352adb88143464bb9d92c10416feb86b0f5aee5ff6"},
{file = "black-23.7.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6d1c6022b86f83b632d06f2b02774134def5d4d4f1dac8bef16d90cda18ba28a"},
{file = "black-23.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:27eb7a0c71604d5de083757fbdb245b1a4fae60e9596514c6ec497eb63f95320"},
{file = "black-23.7.0-cp39-cp39-macosx_10_16_arm64.whl", hash = "sha256:8417dbd2f57b5701492cd46edcecc4f9208dc75529bcf76c514864e48da867d9"},
{file = "black-23.7.0-cp39-cp39-macosx_10_16_universal2.whl", hash = "sha256:47e56d83aad53ca140da0af87678fb38e44fd6bc0af71eebab2d1f59b1acf1d3"},
{file = "black-23.7.0-cp39-cp39-macosx_10_16_x86_64.whl", hash = "sha256:25cc308838fe71f7065df53aedd20327969d05671bac95b38fdf37ebe70ac087"},
{file = "black-23.7.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:642496b675095d423f9b8448243336f8ec71c9d4d57ec17bf795b67f08132a91"},
{file = "black-23.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:ad0014efc7acf0bd745792bd0d8857413652979200ab924fbf239062adc12491"},
{file = "black-23.7.0-py3-none-any.whl", hash = "sha256:9fd59d418c60c0348505f2ddf9609c1e1de8e7493eab96198fc89d9f865e7a96"},
{file = "black-23.7.0.tar.gz", hash = "sha256:022a582720b0d9480ed82576c920a8c1dde97cc38ff11d8d8859b3bd6ca9eedb"},
]
[package.dependencies]
@ -544,13 +541,13 @@ files = [
[[package]]
name = "elementpath"
version = "4.1.0"
version = "4.1.5"
description = "XPath 1.0/2.0/3.0/3.1 parsers and selectors for ElementTree and lxml"
optional = true
python-versions = ">=3.7"
files = [
{file = "elementpath-4.1.0-py3-none-any.whl", hash = "sha256:2b1b524223d70fd6dd63a36b9bc32e4919c96a272c2d1454094c4d85086bc6f8"},
{file = "elementpath-4.1.0.tar.gz", hash = "sha256:dbd7eba3cf0b3b4934f627ba24851a3e0798ef2bc9104555a4cd831f2e6e8e14"},
{file = "elementpath-4.1.5-py3-none-any.whl", hash = "sha256:2ac1a2fb31eb22bbbf817f8cf6752f844513216263f0e3892c8e79782fe4bb55"},
{file = "elementpath-4.1.5.tar.gz", hash = "sha256:c2d6dc524b29ef751ecfc416b0627668119d8812441c555d7471da41d4bacb8d"},
]
[package.extras]
@ -558,13 +555,13 @@ dev = ["Sphinx", "coverage", "flake8", "lxml", "lxml-stubs", "memory-profiler",
[[package]]
name = "furo"
version = "2023.7.26"
version = "2023.8.19"
description = "A clean customisable Sphinx documentation theme."
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "furo-2023.7.26-py3-none-any.whl", hash = "sha256:1c7936929ec57c5ddecc7c85f07fa8b2ce536b5c89137764cca508be90e11efd"},
{file = "furo-2023.7.26.tar.gz", hash = "sha256:257f63bab97aa85213a1fa24303837a3c3f30be92901ec732fea74290800f59e"},
{file = "furo-2023.8.19-py3-none-any.whl", hash = "sha256:12f99f87a1873b6746228cfde18f77244e6c1ffb85d7fed95e638aae70d80590"},
{file = "furo-2023.8.19.tar.gz", hash = "sha256:e671ee638ab3f1b472f4033b0167f502ab407830e0db0f843b1c1028119c9cd1"},
]
[package.dependencies]
@ -1448,43 +1445,43 @@ files = [
[[package]]
name = "mypy"
version = "1.0.1"
version = "1.4.1"
description = "Optional static typing for Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "mypy-1.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:71a808334d3f41ef011faa5a5cd8153606df5fc0b56de5b2e89566c8093a0c9a"},
{file = "mypy-1.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:920169f0184215eef19294fa86ea49ffd4635dedfdea2b57e45cb4ee85d5ccaf"},
{file = "mypy-1.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:27a0f74a298769d9fdc8498fcb4f2beb86f0564bcdb1a37b58cbbe78e55cf8c0"},
{file = "mypy-1.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:65b122a993d9c81ea0bfde7689b3365318a88bde952e4dfa1b3a8b4ac05d168b"},
{file = "mypy-1.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:5deb252fd42a77add936b463033a59b8e48eb2eaec2976d76b6878d031933fe4"},
{file = "mypy-1.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2013226d17f20468f34feddd6aae4635a55f79626549099354ce641bc7d40262"},
{file = "mypy-1.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:48525aec92b47baed9b3380371ab8ab6e63a5aab317347dfe9e55e02aaad22e8"},
{file = "mypy-1.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c96b8a0c019fe29040d520d9257d8c8f122a7343a8307bf8d6d4a43f5c5bfcc8"},
{file = "mypy-1.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:448de661536d270ce04f2d7dddaa49b2fdba6e3bd8a83212164d4174ff43aa65"},
{file = "mypy-1.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:d42a98e76070a365a1d1c220fcac8aa4ada12ae0db679cb4d910fabefc88b994"},
{file = "mypy-1.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e64f48c6176e243ad015e995de05af7f22bbe370dbb5b32bd6988438ec873919"},
{file = "mypy-1.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5fdd63e4f50e3538617887e9aee91855368d9fc1dea30da743837b0df7373bc4"},
{file = "mypy-1.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:dbeb24514c4acbc78d205f85dd0e800f34062efcc1f4a4857c57e4b4b8712bff"},
{file = "mypy-1.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a2948c40a7dd46c1c33765718936669dc1f628f134013b02ff5ac6c7ef6942bf"},
{file = "mypy-1.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc8d6bd3b274dd3846597855d96d38d947aedba18776aa998a8d46fabdaed76"},
{file = "mypy-1.0.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:17455cda53eeee0a4adb6371a21dd3dbf465897de82843751cf822605d152c8c"},
{file = "mypy-1.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e831662208055b006eef68392a768ff83596035ffd6d846786578ba1714ba8f6"},
{file = "mypy-1.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e60d0b09f62ae97a94605c3f73fd952395286cf3e3b9e7b97f60b01ddfbbda88"},
{file = "mypy-1.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:0af4f0e20706aadf4e6f8f8dc5ab739089146b83fd53cb4a7e0e850ef3de0bb6"},
{file = "mypy-1.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:24189f23dc66f83b839bd1cce2dfc356020dfc9a8bae03978477b15be61b062e"},
{file = "mypy-1.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:93a85495fb13dc484251b4c1fd7a5ac370cd0d812bbfc3b39c1bafefe95275d5"},
{file = "mypy-1.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5f546ac34093c6ce33f6278f7c88f0f147a4849386d3bf3ae193702f4fe31407"},
{file = "mypy-1.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c6c2ccb7af7154673c591189c3687b013122c5a891bb5651eca3db8e6c6c55bd"},
{file = "mypy-1.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:15b5a824b58c7c822c51bc66308e759243c32631896743f030daf449fe3677f3"},
{file = "mypy-1.0.1-py3-none-any.whl", hash = "sha256:eda5c8b9949ed411ff752b9a01adda31afe7eae1e53e946dbdf9db23865e66c4"},
{file = "mypy-1.0.1.tar.gz", hash = "sha256:28cea5a6392bb43d266782983b5a4216c25544cd7d80be681a155ddcdafd152d"},
{file = "mypy-1.4.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:566e72b0cd6598503e48ea610e0052d1b8168e60a46e0bfd34b3acf2d57f96a8"},
{file = "mypy-1.4.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ca637024ca67ab24a7fd6f65d280572c3794665eaf5edcc7e90a866544076878"},
{file = "mypy-1.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0dde1d180cd84f0624c5dcaaa89c89775550a675aff96b5848de78fb11adabcd"},
{file = "mypy-1.4.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8c4d8e89aa7de683e2056a581ce63c46a0c41e31bd2b6d34144e2c80f5ea53dc"},
{file = "mypy-1.4.1-cp310-cp310-win_amd64.whl", hash = "sha256:bfdca17c36ae01a21274a3c387a63aa1aafe72bff976522886869ef131b937f1"},
{file = "mypy-1.4.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:7549fbf655e5825d787bbc9ecf6028731973f78088fbca3a1f4145c39ef09462"},
{file = "mypy-1.4.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:98324ec3ecf12296e6422939e54763faedbfcc502ea4a4c38502082711867258"},
{file = "mypy-1.4.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:141dedfdbfe8a04142881ff30ce6e6653c9685b354876b12e4fe6c78598b45e2"},
{file = "mypy-1.4.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:8207b7105829eca6f3d774f64a904190bb2231de91b8b186d21ffd98005f14a7"},
{file = "mypy-1.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:16f0db5b641ba159eff72cff08edc3875f2b62b2fa2bc24f68c1e7a4e8232d01"},
{file = "mypy-1.4.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:470c969bb3f9a9efcedbadcd19a74ffb34a25f8e6b0e02dae7c0e71f8372f97b"},
{file = "mypy-1.4.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e5952d2d18b79f7dc25e62e014fe5a23eb1a3d2bc66318df8988a01b1a037c5b"},
{file = "mypy-1.4.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:190b6bab0302cec4e9e6767d3eb66085aef2a1cc98fe04936d8a42ed2ba77bb7"},
{file = "mypy-1.4.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9d40652cc4fe33871ad3338581dca3297ff5f2213d0df345bcfbde5162abf0c9"},
{file = "mypy-1.4.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:01fd2e9f85622d981fd9063bfaef1aed6e336eaacca00892cd2d82801ab7c042"},
{file = "mypy-1.4.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2460a58faeea905aeb1b9b36f5065f2dc9a9c6e4c992a6499a2360c6c74ceca3"},
{file = "mypy-1.4.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a2746d69a8196698146a3dbe29104f9eb6a2a4d8a27878d92169a6c0b74435b6"},
{file = "mypy-1.4.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:ae704dcfaa180ff7c4cfbad23e74321a2b774f92ca77fd94ce1049175a21c97f"},
{file = "mypy-1.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:43d24f6437925ce50139a310a64b2ab048cb2d3694c84c71c3f2a1626d8101dc"},
{file = "mypy-1.4.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c482e1246726616088532b5e964e39765b6d1520791348e6c9dc3af25b233828"},
{file = "mypy-1.4.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:43b592511672017f5b1a483527fd2684347fdffc041c9ef53428c8dc530f79a3"},
{file = "mypy-1.4.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:34a9239d5b3502c17f07fd7c0b2ae6b7dd7d7f6af35fbb5072c6208e76295816"},
{file = "mypy-1.4.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5703097c4936bbb9e9bce41478c8d08edd2865e177dc4c52be759f81ee4dd26c"},
{file = "mypy-1.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:e02d700ec8d9b1859790c0475df4e4092c7bf3272a4fd2c9f33d87fac4427b8f"},
{file = "mypy-1.4.1-py3-none-any.whl", hash = "sha256:45d32cec14e7b97af848bddd97d85ea4f0db4d5a149ed9676caa4eb2f7402bb4"},
{file = "mypy-1.4.1.tar.gz", hash = "sha256:9bbcd9ab8ea1f2e1c8031c21445b511442cc45c89951e49bbf852cbb70755b1b"},
]
[package.dependencies]
mypy-extensions = ">=0.4.3"
mypy-extensions = ">=1.0.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
typing-extensions = ">=4.1.0"
[package.extras]
dmypy = ["psutil (>=4.0)"]
@ -1505,17 +1502,17 @@ files = [
[[package]]
name = "mypy-zope"
version = "0.9.1"
version = "1.0.0"
description = "Plugin for mypy to support zope interfaces"
optional = false
python-versions = "*"
files = [
{file = "mypy-zope-0.9.1.tar.gz", hash = "sha256:4c87dbc71fec35f6533746ecdf9d400cd9281338d71c16b5676bb5ed00a97ca2"},
{file = "mypy_zope-0.9.1-py3-none-any.whl", hash = "sha256:733d4399affe9e61e332ce9c4049418d6775c39b473e4b9f409d51c207c1b71a"},
{file = "mypy-zope-1.0.0.tar.gz", hash = "sha256:be815c2fcb5333aa87e8ec682029ad3214142fe2a05ea383f9ff2d77c98008b7"},
{file = "mypy_zope-1.0.0-py3-none-any.whl", hash = "sha256:9732e9b2198f2aec3343b38a51905ff49d44dc9e39e8e8bc6fc490b232388209"},
]
[package.dependencies]
mypy = ">=1.0.0,<1.1.0"
mypy = ">=1.0.0,<1.5.0"
"zope.interface" = "*"
"zope.schema" = "*"
@ -1610,13 +1607,13 @@ files = [
[[package]]
name = "phonenumbers"
version = "8.13.18"
version = "8.13.19"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.18-py2.py3-none-any.whl", hash = "sha256:3d802739a22592e4127139349937753dee9b6a20bdd5d56847cd885bdc766b1f"},
{file = "phonenumbers-8.13.18.tar.gz", hash = "sha256:b360c756252805d44b447b5bca6d250cf6bd6c69b6f0f4258f3bfe5ab81bef69"},
{file = "phonenumbers-8.13.19-py2.py3-none-any.whl", hash = "sha256:ba542f20f6dc83be8f127f240f9b5b7e7c1dec42aceff1879400d4dc0c781d81"},
{file = "phonenumbers-8.13.19.tar.gz", hash = "sha256:38180247697240ccedd74dec4bfbdbc22bb108b9c5f991f270ca3e41395e6f96"},
]
[[package]]
@ -1744,24 +1741,22 @@ twisted = ["twisted"]
[[package]]
name = "psycopg2"
version = "2.9.6"
version = "2.9.7"
description = "psycopg2 - Python-PostgreSQL Database Adapter"
optional = true
python-versions = ">=3.6"
files = [
{file = "psycopg2-2.9.6-cp310-cp310-win32.whl", hash = "sha256:f7a7a5ee78ba7dc74265ba69e010ae89dae635eea0e97b055fb641a01a31d2b1"},
{file = "psycopg2-2.9.6-cp310-cp310-win_amd64.whl", hash = "sha256:f75001a1cbbe523e00b0ef896a5a1ada2da93ccd752b7636db5a99bc57c44494"},
{file = "psycopg2-2.9.6-cp311-cp311-win32.whl", hash = "sha256:53f4ad0a3988f983e9b49a5d9765d663bbe84f508ed655affdb810af9d0972ad"},
{file = "psycopg2-2.9.6-cp311-cp311-win_amd64.whl", hash = "sha256:b81fcb9ecfc584f661b71c889edeae70bae30d3ef74fa0ca388ecda50b1222b7"},
{file = "psycopg2-2.9.6-cp36-cp36m-win32.whl", hash = "sha256:11aca705ec888e4f4cea97289a0bf0f22a067a32614f6ef64fcf7b8bfbc53744"},
{file = "psycopg2-2.9.6-cp36-cp36m-win_amd64.whl", hash = "sha256:36c941a767341d11549c0fbdbb2bf5be2eda4caf87f65dfcd7d146828bd27f39"},
{file = "psycopg2-2.9.6-cp37-cp37m-win32.whl", hash = "sha256:869776630c04f335d4124f120b7fb377fe44b0a7645ab3c34b4ba42516951889"},
{file = "psycopg2-2.9.6-cp37-cp37m-win_amd64.whl", hash = "sha256:a8ad4a47f42aa6aec8d061fdae21eaed8d864d4bb0f0cade5ad32ca16fcd6258"},
{file = "psycopg2-2.9.6-cp38-cp38-win32.whl", hash = "sha256:2362ee4d07ac85ff0ad93e22c693d0f37ff63e28f0615a16b6635a645f4b9214"},
{file = "psycopg2-2.9.6-cp38-cp38-win_amd64.whl", hash = "sha256:d24ead3716a7d093b90b27b3d73459fe8cd90fd7065cf43b3c40966221d8c394"},
{file = "psycopg2-2.9.6-cp39-cp39-win32.whl", hash = "sha256:1861a53a6a0fd248e42ea37c957d36950da00266378746588eab4f4b5649e95f"},
{file = "psycopg2-2.9.6-cp39-cp39-win_amd64.whl", hash = "sha256:ded2faa2e6dfb430af7713d87ab4abbfc764d8d7fb73eafe96a24155f906ebf5"},
{file = "psycopg2-2.9.6.tar.gz", hash = "sha256:f15158418fd826831b28585e2ab48ed8df2d0d98f502a2b4fe619e7d5ca29011"},
{file = "psycopg2-2.9.7-cp310-cp310-win32.whl", hash = "sha256:1a6a2d609bce44f78af4556bea0c62a5e7f05c23e5ea9c599e07678995609084"},
{file = "psycopg2-2.9.7-cp310-cp310-win_amd64.whl", hash = "sha256:b22ed9c66da2589a664e0f1ca2465c29b75aaab36fa209d4fb916025fb9119e5"},
{file = "psycopg2-2.9.7-cp311-cp311-win32.whl", hash = "sha256:44d93a0109dfdf22fe399b419bcd7fa589d86895d3931b01fb321d74dadc68f1"},
{file = "psycopg2-2.9.7-cp311-cp311-win_amd64.whl", hash = "sha256:91e81a8333a0037babfc9fe6d11e997a9d4dac0f38c43074886b0d9dead94fe9"},
{file = "psycopg2-2.9.7-cp37-cp37m-win32.whl", hash = "sha256:d1210fcf99aae6f728812d1d2240afc1dc44b9e6cba526a06fb8134f969957c2"},
{file = "psycopg2-2.9.7-cp37-cp37m-win_amd64.whl", hash = "sha256:e9b04cbef584310a1ac0f0d55bb623ca3244c87c51187645432e342de9ae81a8"},
{file = "psycopg2-2.9.7-cp38-cp38-win32.whl", hash = "sha256:d5c5297e2fbc8068d4255f1e606bfc9291f06f91ec31b2a0d4c536210ac5c0a2"},
{file = "psycopg2-2.9.7-cp38-cp38-win_amd64.whl", hash = "sha256:8275abf628c6dc7ec834ea63f6f3846bf33518907a2b9b693d41fd063767a866"},
{file = "psycopg2-2.9.7-cp39-cp39-win32.whl", hash = "sha256:c7949770cafbd2f12cecc97dea410c514368908a103acf519f2a346134caa4d5"},
{file = "psycopg2-2.9.7-cp39-cp39-win_amd64.whl", hash = "sha256:b6bd7d9d3a7a63faae6edf365f0ed0e9b0a1aaf1da3ca146e6b043fb3eb5d723"},
{file = "psycopg2-2.9.7.tar.gz", hash = "sha256:f00cc35bd7119f1fed17b85bd1007855194dde2cbd8de01ab8ebb17487440ad8"},
]
[[package]]
@ -2082,6 +2077,7 @@ files = [
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"},
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"},
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"},
{file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"},
{file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"},
{file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"},
{file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"},
@ -2089,8 +2085,15 @@ files = [
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"},
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"},
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"},
{file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"},
{file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"},
{file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
{file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"},
{file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"},
{file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"},
{file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"},
{file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"},
{file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"},
{file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"},
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"},
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"},
@ -2107,6 +2110,7 @@ files = [
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"},
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"},
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"},
{file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"},
{file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"},
{file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"},
{file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"},
@ -2114,6 +2118,7 @@ files = [
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"},
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"},
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"},
{file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"},
{file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"},
{file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"},
{file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"},
@ -2329,28 +2334,28 @@ files = [
[[package]]
name = "ruff"
version = "0.0.277"
version = "0.0.286"
description = "An extremely fast Python linter, written in Rust."
optional = false
python-versions = ">=3.7"
files = [
{file = "ruff-0.0.277-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:3250b24333ef419b7a232080d9724ccc4d2da1dbbe4ce85c4caa2290d83200f8"},
{file = "ruff-0.0.277-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:3e60605e07482183ba1c1b7237eca827bd6cbd3535fe8a4ede28cbe2a323cb97"},
{file = "ruff-0.0.277-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7baa97c3d7186e5ed4d5d4f6834d759a27e56cf7d5874b98c507335f0ad5aadb"},
{file = "ruff-0.0.277-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:74e4b206cb24f2e98a615f87dbe0bde18105217cbcc8eb785bb05a644855ba50"},
{file = "ruff-0.0.277-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:479864a3ccd8a6a20a37a6e7577bdc2406868ee80b1e65605478ad3b8eb2ba0b"},
{file = "ruff-0.0.277-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:468bfb0a7567443cec3d03cf408d6f562b52f30c3c29df19927f1e0e13a40cd7"},
{file = "ruff-0.0.277-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f32ec416c24542ca2f9cc8c8b65b84560530d338aaf247a4a78e74b99cd476b4"},
{file = "ruff-0.0.277-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:14a7b2f00f149c5a295f188a643ac25226ff8a4d08f7a62b1d4b0a1dc9f9b85c"},
{file = "ruff-0.0.277-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9879f59f763cc5628aa01c31ad256a0f4dc61a29355c7315b83c2a5aac932b5"},
{file = "ruff-0.0.277-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:f612e0a14b3d145d90eb6ead990064e22f6f27281d847237560b4e10bf2251f3"},
{file = "ruff-0.0.277-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:323b674c98078be9aaded5b8b51c0d9c424486566fb6ec18439b496ce79e5998"},
{file = "ruff-0.0.277-py3-none-musllinux_1_2_i686.whl", hash = "sha256:3a43fbe026ca1a2a8c45aa0d600a0116bec4dfa6f8bf0c3b871ecda51ef2b5dd"},
{file = "ruff-0.0.277-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:734165ea8feb81b0d53e3bf523adc2413fdb76f1264cde99555161dd5a725522"},
{file = "ruff-0.0.277-py3-none-win32.whl", hash = "sha256:88d0f2afb2e0c26ac1120e7061ddda2a566196ec4007bd66d558f13b374b9efc"},
{file = "ruff-0.0.277-py3-none-win_amd64.whl", hash = "sha256:6fe81732f788894a00f6ade1fe69e996cc9e485b7c35b0f53fb00284397284b2"},
{file = "ruff-0.0.277-py3-none-win_arm64.whl", hash = "sha256:2d4444c60f2e705c14cd802b55cd2b561d25bf4311702c463a002392d3116b22"},
{file = "ruff-0.0.277.tar.gz", hash = "sha256:2dab13cdedbf3af6d4427c07f47143746b6b95d9e4a254ac369a0edb9280a0d2"},
{file = "ruff-0.0.286-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:8e22cb557e7395893490e7f9cfea1073d19a5b1dd337f44fd81359b2767da4e9"},
{file = "ruff-0.0.286-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:68ed8c99c883ae79a9133cb1a86d7130feee0397fdf5ba385abf2d53e178d3fa"},
{file = "ruff-0.0.286-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8301f0bb4ec1a5b29cfaf15b83565136c47abefb771603241af9d6038f8981e8"},
{file = "ruff-0.0.286-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:acc4598f810bbc465ce0ed84417ac687e392c993a84c7eaf3abf97638701c1ec"},
{file = "ruff-0.0.286-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88c8e358b445eb66d47164fa38541cfcc267847d1e7a92dd186dddb1a0a9a17f"},
{file = "ruff-0.0.286-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:0433683d0c5dbcf6162a4beb2356e820a593243f1fa714072fec15e2e4f4c939"},
{file = "ruff-0.0.286-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ddb61a0c4454cbe4623f4a07fef03c5ae921fe04fede8d15c6e36703c0a73b07"},
{file = "ruff-0.0.286-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:47549c7c0be24c8ae9f2bce6f1c49fbafea83bca80142d118306f08ec7414041"},
{file = "ruff-0.0.286-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:559aa793149ac23dc4310f94f2c83209eedb16908a0343663be19bec42233d25"},
{file = "ruff-0.0.286-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:d73cfb1c3352e7aa0ce6fb2321f36fa1d4a2c48d2ceac694cb03611ddf0e4db6"},
{file = "ruff-0.0.286-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:3dad93b1f973c6d1db4b6a5da8690c5625a3fa32bdf38e543a6936e634b83dc3"},
{file = "ruff-0.0.286-py3-none-musllinux_1_2_i686.whl", hash = "sha256:26afc0851f4fc3738afcf30f5f8b8612a31ac3455cb76e611deea80f5c0bf3ce"},
{file = "ruff-0.0.286-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:9b6b116d1c4000de1b9bf027131dbc3b8a70507788f794c6b09509d28952c512"},
{file = "ruff-0.0.286-py3-none-win32.whl", hash = "sha256:556e965ac07c1e8c1c2d759ac512e526ecff62c00fde1a046acb088d3cbc1a6c"},
{file = "ruff-0.0.286-py3-none-win_amd64.whl", hash = "sha256:5d295c758961376c84aaa92d16e643d110be32add7465e197bfdaec5a431a107"},
{file = "ruff-0.0.286-py3-none-win_arm64.whl", hash = "sha256:1d6142d53ab7f164204b3133d053c4958d4d11ec3a39abf23a40b13b0784e3f0"},
{file = "ruff-0.0.286.tar.gz", hash = "sha256:f1e9d169cce81a384a26ee5bb8c919fe9ae88255f39a1a69fd1ebab233a85ed2"},
]
[[package]]
@ -2385,13 +2390,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
[[package]]
name = "sentry-sdk"
version = "1.29.2"
version = "1.30.0"
description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = "*"
files = [
{file = "sentry-sdk-1.29.2.tar.gz", hash = "sha256:a99ee105384788c3f228726a88baf515fe7b5f1d2d0f215a03d194369f158df7"},
{file = "sentry_sdk-1.29.2-py2.py3-none-any.whl", hash = "sha256:3e17215d8006612e2df02b0e73115eb8376c37e3f586d8436fa41644e605074d"},
{file = "sentry-sdk-1.30.0.tar.gz", hash = "sha256:7dc873b87e1faf4d00614afd1058bfa1522942f33daef8a59f90de8ed75cd10c"},
{file = "sentry_sdk-1.30.0-py2.py3-none-any.whl", hash = "sha256:2e53ad63f96bb9da6570ba2e755c267e529edcf58580a2c0d2a11ef26e1e678b"},
]
[package.dependencies]
@ -2414,6 +2419,7 @@ httpx = ["httpx (>=0.16.0)"]
huey = ["huey (>=2)"]
loguru = ["loguru (>=0.5)"]
opentelemetry = ["opentelemetry-distro (>=0.35b0)"]
opentelemetry-experimental = ["opentelemetry-distro (>=0.40b0,<1.0)", "opentelemetry-instrumentation-aiohttp-client (>=0.40b0,<1.0)", "opentelemetry-instrumentation-django (>=0.40b0,<1.0)", "opentelemetry-instrumentation-fastapi (>=0.40b0,<1.0)", "opentelemetry-instrumentation-flask (>=0.40b0,<1.0)", "opentelemetry-instrumentation-requests (>=0.40b0,<1.0)", "opentelemetry-instrumentation-sqlite3 (>=0.40b0,<1.0)", "opentelemetry-instrumentation-urllib (>=0.40b0,<1.0)"]
pure-eval = ["asttokens", "executing", "pure-eval"]
pymongo = ["pymongo (>=3.1)"]
pyspark = ["pyspark (>=2.4.4)"]
@ -2467,18 +2473,19 @@ testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (
[[package]]
name = "setuptools-rust"
version = "1.6.0"
version = "1.7.0"
description = "Setuptools Rust extension plugin"
optional = false
python-versions = ">=3.7"
files = [
{file = "setuptools-rust-1.6.0.tar.gz", hash = "sha256:c86e734deac330597998bfbc08da45187e6b27837e23bd91eadb320732392262"},
{file = "setuptools_rust-1.6.0-py3-none-any.whl", hash = "sha256:e28ae09fb7167c44ab34434eb49279307d611547cb56cb9789955cdb54a1aed9"},
{file = "setuptools-rust-1.7.0.tar.gz", hash = "sha256:c7100999948235a38ae7e555fe199aa66c253dc384b125f5d85473bf81eae3a3"},
{file = "setuptools_rust-1.7.0-py3-none-any.whl", hash = "sha256:071099885949132a2180d16abf907b60837e74b4085047ba7e9c0f5b365310c1"},
]
[package.dependencies]
semantic-version = ">=2.8.2,<3"
setuptools = ">=62.4"
tomli = {version = ">=1.2.1", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.7.4.3"
[[package]]
@ -3002,13 +3009,13 @@ files = [
[[package]]
name = "types-psycopg2"
version = "2.9.21.10"
version = "2.9.21.11"
description = "Typing stubs for psycopg2"
optional = false
python-versions = "*"
files = [
{file = "types-psycopg2-2.9.21.10.tar.gz", hash = "sha256:c2600892312ae1c34e12f145749795d93dc4eac3ef7dbf8a9c1bfd45385e80d7"},
{file = "types_psycopg2-2.9.21.10-py3-none-any.whl", hash = "sha256:918224a0731a3650832e46633e720703b5beef7693a064e777d9748654fcf5e5"},
{file = "types-psycopg2-2.9.21.11.tar.gz", hash = "sha256:d5077eacf90e61db8c0b8eea2fdc9d4a97d7aaa16865fb4bd7034a7571520b4d"},
{file = "types_psycopg2-2.9.21.11-py3-none-any.whl", hash = "sha256:7a323d7744bc8a882fb5a6f63448e903fc70d3dc0d6da9ec1f9c6c4dc10a7102"},
]
[[package]]
@ -3027,13 +3034,13 @@ cryptography = ">=35.0.0"
[[package]]
name = "types-pyyaml"
version = "6.0.12.10"
version = "6.0.12.11"
description = "Typing stubs for PyYAML"
optional = false
python-versions = "*"
files = [
{file = "types-PyYAML-6.0.12.10.tar.gz", hash = "sha256:ebab3d0700b946553724ae6ca636ea932c1b0868701d4af121630e78d695fc97"},
{file = "types_PyYAML-6.0.12.10-py3-none-any.whl", hash = "sha256:662fa444963eff9b68120d70cda1af5a5f2aa57900003c2006d7626450eaae5f"},
{file = "types-PyYAML-6.0.12.11.tar.gz", hash = "sha256:7d340b19ca28cddfdba438ee638cd4084bde213e501a3978738543e27094775b"},
{file = "types_PyYAML-6.0.12.11-py3-none-any.whl", hash = "sha256:a461508f3096d1d5810ec5ab95d7eeecb651f3a15b71959999988942063bf01d"},
]
[[package]]
@ -3207,22 +3214,22 @@ files = [
[[package]]
name = "xmlschema"
version = "2.2.2"
version = "2.4.0"
description = "An XML Schema validator and decoder"
optional = true
python-versions = ">=3.7"
files = [
{file = "xmlschema-2.2.2-py3-none-any.whl", hash = "sha256:557f3632b54b6ff10576736bba62e43db84eb60f6465a83818576cd9ffcc1799"},
{file = "xmlschema-2.2.2.tar.gz", hash = "sha256:0caa96668807b4b51c42a0fe2b6610752bc59f069615df3e34dcfffb962973fd"},
{file = "xmlschema-2.4.0-py3-none-any.whl", hash = "sha256:dc87be0caaa61f42649899189aab2fd8e0d567f2cf548433ba7b79278d231a4a"},
{file = "xmlschema-2.4.0.tar.gz", hash = "sha256:d74cd0c10866ac609e1ef94a5a69b018ad16e39077bc6393408b40c6babee793"},
]
[package.dependencies]
elementpath = ">=4.0.0,<5.0.0"
elementpath = ">=4.1.5,<5.0.0"
[package.extras]
codegen = ["elementpath (>=4.0.0,<5.0.0)", "jinja2"]
dev = ["Sphinx", "coverage", "elementpath (>=4.0.0,<5.0.0)", "flake8", "jinja2", "lxml", "lxml-stubs", "memory-profiler", "mypy", "sphinx-rtd-theme", "tox"]
docs = ["Sphinx", "elementpath (>=4.0.0,<5.0.0)", "jinja2", "sphinx-rtd-theme"]
codegen = ["elementpath (>=4.1.5,<5.0.0)", "jinja2"]
dev = ["Sphinx", "coverage", "elementpath (>=4.1.5,<5.0.0)", "flake8", "jinja2", "lxml", "lxml-stubs", "memory-profiler", "mypy", "sphinx-rtd-theme", "tox"]
docs = ["Sphinx", "elementpath (>=4.1.5,<5.0.0)", "jinja2", "sphinx-rtd-theme"]
[[package]]
name = "zipp"
@ -3343,4 +3350,4 @@ user-search = ["pyicu"]
[metadata]
lock-version = "2.0"
python-versions = "^3.8.0"
content-hash = "0a8c6605e7e1d0ac7188a5d02b47a029bfb0f917458b87cb40755911442383d8"
content-hash = "4a3a82becd89b91e76e2bc2f8ba72123f665c517d9b841d9a34cd01b83a1adc3"

View File

@ -35,7 +35,7 @@
showcontent = true
[tool.black]
target-version = ['py37', 'py38', 'py39', 'py310']
target-version = ['py38', 'py39', 'py310', 'py311']
# black ignores everything in .gitignore by default, see
# https://black.readthedocs.io/en/stable/usage_and_configuration/file_collection_and_discovery.html#gitignore
# Use `extend-exclude` if you want to exclude something in addition to this.
@ -89,7 +89,7 @@ manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.91.0"
version = "1.92.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@ -306,10 +306,13 @@ all = [
]
[tool.poetry.dev-dependencies]
# We pin black so that our tests don't start failing on new releases.
# We pin development dependencies in poetry.lock so that our tests don't start
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevent merge conflicts when running a batch of dependabot updates.
isort = ">=5.10.1"
black = ">=22.3.0"
ruff = "0.0.277"
black = ">=22.7.0"
ruff = "0.0.286"
# Typechecking
lxml-stubs = ">=0.4.0"

View File

@ -197,7 +197,6 @@ fn bench_eval_message(b: &mut Bencher) {
false,
false,
false,
false,
);
b.iter(|| eval.run(&rules, Some("bob"), Some("person")));

View File

@ -228,7 +228,7 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
// We don't want to notify on edits *unless* the edit directly mentions a
// user, which is handled above.
PushRule {
rule_id: Cow::Borrowed("global/override/.org.matrix.msc3958.suppress_edits"),
rule_id: Cow::Borrowed("global/override/.m.rule.suppress_edits"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventPropertyIs(
EventPropertyIsCondition {

View File

@ -564,7 +564,7 @@ fn test_requires_room_version_supports_condition() {
};
let rules = PushRules::new(vec![custom_rule]);
result = evaluator.run(
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false),
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true),
None,
None,
);

View File

@ -527,7 +527,6 @@ pub struct FilteredPushRules {
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3958_suppress_edits_enabled: bool,
}
#[pymethods]
@ -539,7 +538,6 @@ impl FilteredPushRules {
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3958_suppress_edits_enabled: bool,
) -> Self {
Self {
push_rules,
@ -547,7 +545,6 @@ impl FilteredPushRules {
msc1767_enabled,
msc3381_polls_enabled,
msc3664_enabled,
msc3958_suppress_edits_enabled,
}
}
@ -584,12 +581,6 @@ impl FilteredPushRules {
return false;
}
if !self.msc3958_suppress_edits_enabled
&& rule.rule_id == "global/override/.org.matrix.msc3958.suppress_edits"
{
return false;
}
true
})
.map(|r| {

View File

@ -46,7 +46,6 @@ class FilteredPushRules:
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3958_suppress_edits_enabled: bool,
): ...
def rules(self) -> Collection[Tuple[PushRule, bool]]: ...

View File

@ -21,9 +21,14 @@ import os
import sys
from typing import Any, Dict
from PIL import ImageFile
from synapse.util.rust import check_rust_lib_up_to_date
from synapse.util.stringutils import strtobool
# Allow truncated JPEG images to be thumbnailed.
ImageFile.LOAD_TRUNCATED_IMAGES = True
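A minimal illustration (not part of the diff) of what the Pillow flag above changes, assuming a partially uploaded JPEG; the filename is hypothetical:
from PIL import Image, ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True
# Without the flag, loading a truncated JPEG raises OSError ("image file is
# truncated"); with it, Pillow pads the missing data so thumbnailing can
# still produce an image.
# img = Image.open("partially_uploaded.jpg")
# img.thumbnail((320, 240))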
# Check that we're not running on an unsupported Python version.
#
# Note that we use an (unneeded) variable here so that pyupgrade doesn't nuke the

View File

@ -482,7 +482,10 @@ class Porter:
do_backward[0] = False
if forward_rows or backward_rows:
headers = [column[0] for column in txn.description]
assert txn.description is not None
headers: Optional[List[str]] = [
column[0] for column in txn.description
]
else:
headers = None
@ -544,6 +547,7 @@ class Porter:
def r(txn: LoggingTransaction) -> Tuple[List[str], List[Tuple]]:
txn.execute(select, (forward_chunk, self.batch_size))
rows = txn.fetchall()
assert txn.description is not None
headers = [column[0] for column in txn.description]
return headers, rows
@ -919,7 +923,8 @@ class Porter:
def r(txn: LoggingTransaction) -> Tuple[List[str], List[Tuple]]:
txn.execute(select)
rows = txn.fetchall()
headers: List[str] = [column[0] for column in txn.description]
assert txn.description is not None
headers = [column[0] for column in txn.description]
ts_ind = headers.index("ts")

View File

@ -16,6 +16,7 @@
"""Contains exceptions and error codes."""
import logging
import math
import typing
from enum import Enum
from http import HTTPStatus
@ -210,6 +211,11 @@ class SynapseError(CodeMessageException):
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(self.msg, self.errcode, **self._additional_fields)
@property
def debug_context(self) -> Optional[str]:
"""Override this to add debugging context that shouldn't be sent to clients."""
return None
class InvalidAPICallError(SynapseError):
"""You called an existing API endpoint, but fed that endpoint
@ -503,19 +509,31 @@ class InvalidCaptchaError(SynapseError):
class LimitExceededError(SynapseError):
"""A client has sent too many requests and is being throttled."""
include_retry_after_header = False
def __init__(
self,
limiter_name: str,
code: int = 429,
msg: str = "Too Many Requests",
retry_after_ms: Optional[int] = None,
errcode: str = Codes.LIMIT_EXCEEDED,
):
super().__init__(code, msg, errcode)
headers = (
{"Retry-After": str(math.ceil(retry_after_ms / 1000))}
if self.include_retry_after_header and retry_after_ms is not None
else None
)
super().__init__(code, "Too Many Requests", errcode, headers=headers)
self.retry_after_ms = retry_after_ms
self.limiter_name = limiter_name
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(self.msg, self.errcode, retry_after_ms=self.retry_after_ms)
@property
def debug_context(self) -> Optional[str]:
return self.limiter_name
class RoomKeysVersionError(SynapseError):
"""A client has tried to upload to a non-current version of the room_keys store"""

View File

@ -61,12 +61,16 @@ class Ratelimiter:
"""
def __init__(
self, store: DataStore, clock: Clock, rate_hz: float, burst_count: int
self,
store: DataStore,
clock: Clock,
cfg: RatelimitSettings,
):
self.clock = clock
self.rate_hz = rate_hz
self.burst_count = burst_count
self.rate_hz = cfg.per_second
self.burst_count = cfg.burst_count
self.store = store
self._limiter_name = cfg.key
# An ordered dictionary representing the token buckets tracked by this rate
# limiter. Each entry maps a key of arbitrary type to a tuple representing:
@ -305,7 +309,8 @@ class Ratelimiter:
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now_s))
limiter_name=self._limiter_name,
retry_after_ms=int(1000 * (time_allowed - time_now_s)),
)
@ -322,7 +327,9 @@ class RequestRatelimiter:
# The rate_hz and burst_count are overridden on a per-user basis
self.request_ratelimiter = Ratelimiter(
store=self.store, clock=self.clock, rate_hz=0, burst_count=0
store=self.store,
clock=self.clock,
cfg=RatelimitSettings(key=rc_message.key, per_second=0, burst_count=0),
)
self._rc_message = rc_message
@ -332,8 +339,7 @@ class RequestRatelimiter:
self.admin_redaction_ratelimiter: Optional[Ratelimiter] = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=rc_admin_redaction.per_second,
burst_count=rc_admin_redaction.burst_count,
cfg=rc_admin_redaction,
)
else:
self.admin_redaction_ratelimiter = None
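A hedged usage sketch of the new constructor shape: callers hand over a RatelimitSettings (key, per_second, burst_count) instead of separate rate_hz/burst_count arguments; store and clock are whatever the homeserver already provides.
failed_login_limiter = Ratelimiter(
    store=self.store,
    clock=hs.get_clock(),
    cfg=hs.config.ratelimiting.rc_login_failed_attempts,
)
# On rejection the limiter raises LimitExceededError with limiter_name set to
# cfg.key, which is exposed via debug_context and kept out of client responses.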

View File

@ -186,9 +186,9 @@ class Config:
TypeError, if given something other than an integer or a string
ValueError: if given a string not of the form described above.
"""
if type(value) is int:
if type(value) is int: # noqa: E721
return value
elif type(value) is str:
elif isinstance(value, str):
sizes = {"K": 1024, "M": 1024 * 1024}
size = 1
suffix = value[-1]
@ -218,9 +218,9 @@ class Config:
TypeError, if given something other than an integer or a string
ValueError: if given a string not of the form described above.
"""
if type(value) is int:
if type(value) is int: # noqa: E721
return value
elif type(value) is str:
elif isinstance(value, str):
second = 1000
minute = 60 * second
hour = 60 * minute
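Why these checks keep the exact type(...) is int comparison (and silence E721) rather than switching to isinstance: bool is a subclass of int in Python, so isinstance would quietly accept True/False where only real integers are meant.
value = True
assert isinstance(value, int)   # too permissive: bools pass
assert type(value) is not int   # the exact check rejects them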

View File

@ -34,7 +34,7 @@ class AppServiceConfig(Config):
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
self.app_service_config_files = config.get("app_service_config_files", [])
if not isinstance(self.app_service_config_files, list) or not all(
type(x) is str for x in self.app_service_config_files
isinstance(x, str) for x in self.app_service_config_files
):
raise ConfigError(
"Expected '%s' to be a list of AS config files:"

View File

@ -18,7 +18,7 @@ from typing import Any, List
from synapse.config.sso import SsoAttributeRequirement
from synapse.types import JsonDict
from ._base import Config
from ._base import Config, ConfigError
from ._util import validate_config
@ -41,6 +41,16 @@ class CasConfig(Config):
public_baseurl = self.root.server.public_baseurl
self.cas_service_url = public_baseurl + "_matrix/client/r0/login/cas/ticket"
self.cas_protocol_version = cas_config.get("protocol_version")
if (
self.cas_protocol_version is not None
and self.cas_protocol_version not in [1, 2, 3]
):
raise ConfigError(
"Unsupported CAS protocol version %s (only versions 1, 2, 3 are supported)"
% (self.cas_protocol_version,),
("cas_config", "protocol_version"),
)
self.cas_displayname_attribute = cas_config.get("displayname_attribute")
required_attributes = cas_config.get("required_attributes") or {}
self.cas_required_attributes = _parsed_required_attributes_def(
@ -54,6 +64,7 @@ class CasConfig(Config):
else:
self.cas_server_url = None
self.cas_service_url = None
self.cas_protocol_version = None
self.cas_displayname_attribute = None
self.cas_required_attributes = []
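A hedged sketch of the validation added above as a standalone function; Synapse raises ConfigError, ValueError stands in here so the snippet runs on its own.
def check_cas_protocol_version(protocol_version):
    if protocol_version is not None and protocol_version not in [1, 2, 3]:
        raise ValueError(
            "Unsupported CAS protocol version %s "
            "(only versions 1, 2, 3 are supported)" % (protocol_version,)
        )

check_cas_protocol_version(3)     # accepted
check_cas_protocol_version(None)  # accepted: version left unset
# check_cas_protocol_version(4)   # would raise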

View File

@ -18,6 +18,7 @@ from typing import TYPE_CHECKING, Any, Optional
import attr
import attr.validators
from synapse.api.errors import LimitExceededError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.config import ConfigError
from synapse.config._base import Config, RootConfig
@ -383,11 +384,6 @@ class ExperimentalConfig(Config):
# MSC3391: Removing account data.
self.msc3391_enabled = experimental.get("msc3391_enabled", False)
# MSC3958: Do not generate notifications for edits.
self.msc3958_supress_edit_notifs = experimental.get(
"msc3958_supress_edit_notifs", False
)
# MSC3967: Do not require UIA when first uploading cross signing keys
self.msc3967_enabled = experimental.get("msc3967_enabled", False)
@ -411,3 +407,11 @@ class ExperimentalConfig(Config):
self.msc4010_push_rules_account_data = experimental.get(
"msc4010_push_rules_account_data", False
)
# MSC4041: Use HTTP header Retry-After to enable library-assisted retry handling
#
# This is a bit hacky, but the most reasonable way to *always* include the
# headers.
LimitExceededError.include_retry_after_header = experimental.get(
"msc4041_enabled", False
)

View File

@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Dict, Optional
from typing import Any, Dict, Optional, cast
import attr
@ -21,16 +21,47 @@ from synapse.types import JsonDict
from ._base import Config
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RatelimitSettings:
def __init__(
self,
config: Dict[str, float],
key: str
per_second: float
burst_count: int
@classmethod
def parse(
cls,
config: Dict[str, Any],
key: str,
defaults: Optional[Dict[str, float]] = None,
):
) -> "RatelimitSettings":
"""Parse config[key] as a new-style rate limiter config.
The key may refer to a nested dictionary using a full stop (.) to separate
each nested key. For example, use the key "a.b.c" to parse the following:
a:
b:
c:
per_second: 10
burst_count: 200
If this lookup fails, we'll fall back to the defaults.
"""
defaults = defaults or {"per_second": 0.17, "burst_count": 3.0}
self.per_second = config.get("per_second", defaults["per_second"])
self.burst_count = int(config.get("burst_count", defaults["burst_count"]))
rl_config = config
for part in key.split("."):
rl_config = rl_config.get(part, {})
# By this point we should have hit the rate limiter parameters.
# We don't actually check this though!
rl_config = cast(Dict[str, float], rl_config)
return cls(
key=key,
per_second=rl_config.get("per_second", defaults["per_second"]),
burst_count=int(rl_config.get("burst_count", defaults["burst_count"])),
)
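A self-contained sketch of the dotted-key walk described in the docstring above, with plain dicts standing in for the loaded YAML config:
def lookup(config, key):
    rl_config = config
    for part in key.split("."):
        rl_config = rl_config.get(part, {})
    return rl_config

config = {"rc_login": {"address": {"per_second": 0.003, "burst_count": 5}}}
assert lookup(config, "rc_login.address") == {"per_second": 0.003, "burst_count": 5}
assert lookup(config, "rc_login.failed_attempts") == {}  # defaults apply later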
@attr.s(auto_attribs=True)
@ -49,15 +80,14 @@ class RatelimitConfig(Config):
# Load the new-style messages config if it exists. Otherwise fall back
# to the old method.
if "rc_message" in config:
self.rc_message = RatelimitSettings(
config["rc_message"], defaults={"per_second": 0.2, "burst_count": 10.0}
self.rc_message = RatelimitSettings.parse(
config, "rc_message", defaults={"per_second": 0.2, "burst_count": 10.0}
)
else:
self.rc_message = RatelimitSettings(
{
"per_second": config.get("rc_messages_per_second", 0.2),
"burst_count": config.get("rc_message_burst_count", 10.0),
}
key="rc_messages",
per_second=config.get("rc_messages_per_second", 0.2),
burst_count=config.get("rc_message_burst_count", 10.0),
)
# Load the new-style federation config, if it exists. Otherwise, fall
@ -79,51 +109,59 @@ class RatelimitConfig(Config):
}
)
self.rc_registration = RatelimitSettings(config.get("rc_registration", {}))
self.rc_registration = RatelimitSettings.parse(config, "rc_registration", {})
self.rc_registration_token_validity = RatelimitSettings(
config.get("rc_registration_token_validity", {}),
self.rc_registration_token_validity = RatelimitSettings.parse(
config,
"rc_registration_token_validity",
defaults={"per_second": 0.1, "burst_count": 5},
)
# It is reasonable to login with a bunch of devices at once (i.e. when
# setting up an account), but it is *not* valid to continually be
# logging into new devices.
rc_login_config = config.get("rc_login", {})
self.rc_login_address = RatelimitSettings(
rc_login_config.get("address", {}),
self.rc_login_address = RatelimitSettings.parse(
config,
"rc_login.address",
defaults={"per_second": 0.003, "burst_count": 5},
)
self.rc_login_account = RatelimitSettings(
rc_login_config.get("account", {}),
self.rc_login_account = RatelimitSettings.parse(
config,
"rc_login.account",
defaults={"per_second": 0.003, "burst_count": 5},
)
self.rc_login_failed_attempts = RatelimitSettings(
rc_login_config.get("failed_attempts", {})
self.rc_login_failed_attempts = RatelimitSettings.parse(
config,
"rc_login.failed_attempts",
{},
)
self.federation_rr_transactions_per_room_per_second = config.get(
"federation_rr_transactions_per_room_per_second", 50
)
rc_admin_redaction = config.get("rc_admin_redaction")
self.rc_admin_redaction = None
if rc_admin_redaction:
self.rc_admin_redaction = RatelimitSettings(rc_admin_redaction)
if "rc_admin_redaction" in config:
self.rc_admin_redaction = RatelimitSettings.parse(
config, "rc_admin_redaction", {}
)
self.rc_joins_local = RatelimitSettings(
config.get("rc_joins", {}).get("local", {}),
self.rc_joins_local = RatelimitSettings.parse(
config,
"rc_joins.local",
defaults={"per_second": 0.1, "burst_count": 10},
)
self.rc_joins_remote = RatelimitSettings(
config.get("rc_joins", {}).get("remote", {}),
self.rc_joins_remote = RatelimitSettings.parse(
config,
"rc_joins.remote",
defaults={"per_second": 0.01, "burst_count": 10},
)
# Track the rate of joins to a given room. If there are too many, temporarily
# prevent local joins and remote joins via this server.
self.rc_joins_per_room = RatelimitSettings(
config.get("rc_joins_per_room", {}),
self.rc_joins_per_room = RatelimitSettings.parse(
config,
"rc_joins_per_room",
defaults={"per_second": 1, "burst_count": 10},
)
@ -132,31 +170,37 @@ class RatelimitConfig(Config):
# * For requests received over federation this is keyed by the origin.
#
# Note that this isn't exposed in the configuration as it is obscure.
self.rc_key_requests = RatelimitSettings(
config.get("rc_key_requests", {}),
self.rc_key_requests = RatelimitSettings.parse(
config,
"rc_key_requests",
defaults={"per_second": 20, "burst_count": 100},
)
self.rc_3pid_validation = RatelimitSettings(
config.get("rc_3pid_validation") or {},
self.rc_3pid_validation = RatelimitSettings.parse(
config,
"rc_3pid_validation",
defaults={"per_second": 0.003, "burst_count": 5},
)
self.rc_invites_per_room = RatelimitSettings(
config.get("rc_invites", {}).get("per_room", {}),
self.rc_invites_per_room = RatelimitSettings.parse(
config,
"rc_invites.per_room",
defaults={"per_second": 0.3, "burst_count": 10},
)
self.rc_invites_per_user = RatelimitSettings(
config.get("rc_invites", {}).get("per_user", {}),
self.rc_invites_per_user = RatelimitSettings.parse(
config,
"rc_invites.per_user",
defaults={"per_second": 0.003, "burst_count": 5},
)
self.rc_invites_per_issuer = RatelimitSettings(
config.get("rc_invites", {}).get("per_issuer", {}),
self.rc_invites_per_issuer = RatelimitSettings.parse(
config,
"rc_invites.per_issuer",
defaults={"per_second": 0.3, "burst_count": 10},
)
self.rc_third_party_invite = RatelimitSettings(
config.get("rc_third_party_invite", {}),
self.rc_third_party_invite = RatelimitSettings.parse(
config,
"rc_third_party_invite",
defaults={"per_second": 0.0025, "burst_count": 5},
)

View File

@ -669,12 +669,18 @@ def _is_membership_change_allowed(
errcode=Codes.INSUFFICIENT_POWER,
)
elif Membership.BAN == membership:
if user_level < ban_level or user_level <= target_level:
if user_level < ban_level:
raise UnstableSpecAuthError(
403,
"You don't have permission to ban",
errcode=Codes.INSUFFICIENT_POWER,
)
elif user_level <= target_level:
raise UnstableSpecAuthError(
403,
"You don't have permission to ban this user",
errcode=Codes.INSUFFICIENT_POWER,
)
elif room_version.knock_join_rule and Membership.KNOCK == membership:
if join_rule != JoinRules.KNOCK and (
not room_version.knock_restricted_join_rule
@ -846,11 +852,11 @@ def _check_power_levels(
"kick",
"invite",
}:
if type(v) is not int:
if type(v) is not int: # noqa: E721
raise SynapseError(400, f"{v!r} must be an integer.")
if k in {"events", "notifications", "users"}:
if not isinstance(v, collections.abc.Mapping) or not all(
type(v) is int for v in v.values()
type(v) is int for v in v.values() # noqa: E721
):
raise SynapseError(
400,

View File

@ -702,7 +702,7 @@ def _copy_power_level_value_as_integer(
:raises TypeError: if `old_value` is neither an integer nor a base-10 string
representation of an integer.
"""
if type(old_value) is int:
if type(old_value) is int: # noqa: E721
power_levels[key] = old_value
return
@ -730,7 +730,7 @@ def validate_canonicaljson(value: Any) -> None:
* Floats
* NaN, Infinity, -Infinity
"""
if type(value) is int:
if type(value) is int: # noqa: E721
if value < CANONICALJSON_MIN_INT or CANONICALJSON_MAX_INT < value:
raise SynapseError(400, "JSON integer out of range", Codes.BAD_JSON)

View File

@ -151,7 +151,7 @@ class EventValidator:
max_lifetime = event.content.get("max_lifetime")
if min_lifetime is not None:
if type(min_lifetime) is not int:
if type(min_lifetime) is not int: # noqa: E721
raise SynapseError(
code=400,
msg="'min_lifetime' must be an integer",
@ -159,7 +159,7 @@ class EventValidator:
)
if max_lifetime is not None:
if type(max_lifetime) is not int:
if type(max_lifetime) is not int: # noqa: E721
raise SynapseError(
code=400,
msg="'max_lifetime' must be an integer",

View File

@ -280,7 +280,7 @@ def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventB
_strip_unsigned_values(pdu_json)
depth = pdu_json["depth"]
if type(depth) is not int:
if type(depth) is not int: # noqa: E721
raise SynapseError(400, "Depth %r not an intger" % (depth,), Codes.BAD_JSON)
if depth < 0:

View File

@ -1891,7 +1891,7 @@ class TimestampToEventResponse:
)
origin_server_ts = d.get("origin_server_ts")
if type(origin_server_ts) is not int:
if type(origin_server_ts) is not int: # noqa: E721
raise ValueError(
"Invalid response: 'origin_server_ts' must be a int but received %r"
% origin_server_ts

View File

@ -49,7 +49,7 @@ from synapse.api.presence import UserPresenceState
from synapse.federation.sender import AbstractFederationSender, FederationSender
from synapse.metrics import LaterGauge
from synapse.replication.tcp.streams.federation import FederationStream
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken, StrCollection
from synapse.util.metrics import Measure
from .units import Edu
@ -229,7 +229,7 @@ class FederationRemoteSendQueue(AbstractFederationSender):
"""
# nothing to do here: the replication listener will handle it.
def send_presence_to_destinations(
async def send_presence_to_destinations(
self, states: Iterable[UserPresenceState], destinations: Iterable[str]
) -> None:
"""As per FederationSender
@ -245,7 +245,9 @@ class FederationRemoteSendQueue(AbstractFederationSender):
self.notifier.on_new_replication_data()
def send_device_messages(self, destination: str, immediate: bool = True) -> None:
async def send_device_messages(
self, destinations: StrCollection, immediate: bool = True
) -> None:
"""As per FederationSender"""
# We don't need to replicate this as it gets sent down a different
# stream.
@ -463,7 +465,7 @@ class ParsedFederationStreamData:
edus: Dict[str, List[Edu]]
def process_rows_for_federation(
async def process_rows_for_federation(
transaction_queue: FederationSender,
rows: List[FederationStream.FederationStreamRow],
) -> None:
@ -496,7 +498,7 @@ def process_rows_for_federation(
parsed_row.add_to_buffer(buff)
for state, destinations in buff.presence_destinations:
transaction_queue.send_presence_to_destinations(
await transaction_queue.send_presence_to_destinations(
states=[state], destinations=destinations
)

View File

@ -147,7 +147,10 @@ from twisted.internet import defer
import synapse.metrics
from synapse.api.presence import UserPresenceState
from synapse.events import EventBase
from synapse.federation.sender.per_destination_queue import PerDestinationQueue
from synapse.federation.sender.per_destination_queue import (
CATCHUP_RETRY_INTERVAL,
PerDestinationQueue,
)
from synapse.federation.sender.transaction_manager import TransactionManager
from synapse.federation.units import Edu
from synapse.logging.context import make_deferred_yieldable, run_in_background
@ -161,9 +164,10 @@ from synapse.metrics.background_process_metrics import (
run_as_background_process,
wrap_as_background_process,
)
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken, StrCollection
from synapse.util import Clock
from synapse.util.metrics import Measure
from synapse.util.retryutils import filter_destinations_by_retry_limiter
if TYPE_CHECKING:
from synapse.events.presence_router import PresenceRouter
@ -213,7 +217,7 @@ class AbstractFederationSender(metaclass=abc.ABCMeta):
raise NotImplementedError()
@abc.abstractmethod
def send_presence_to_destinations(
async def send_presence_to_destinations(
self, states: Iterable[UserPresenceState], destinations: Iterable[str]
) -> None:
"""Send the given presence states to the given destinations.
@ -242,9 +246,11 @@ class AbstractFederationSender(metaclass=abc.ABCMeta):
raise NotImplementedError()
@abc.abstractmethod
def send_device_messages(self, destination: str, immediate: bool = True) -> None:
async def send_device_messages(
self, destinations: StrCollection, immediate: bool = True
) -> None:
"""Tells the sender that a new device message is ready to be sent to the
destination. The `immediate` flag specifies whether the messages should
destinations. The `immediate` flag specifies whether the messages should
be tried to be sent immediately, or whether it can be delayed for a
short while (to aid performance).
"""
@ -716,6 +722,13 @@ class FederationSender(AbstractFederationSender):
pdu.internal_metadata.stream_ordering,
)
destinations = await filter_destinations_by_retry_limiter(
destinations,
clock=self.clock,
store=self.store,
retry_due_within_ms=CATCHUP_RETRY_INTERVAL,
)
for destination in destinations:
self._get_per_destination_queue(destination).send_pdu(pdu)
@ -763,12 +776,20 @@ class FederationSender(AbstractFederationSender):
domains_set = await self._storage_controllers.state.get_current_hosts_in_room_or_partial_state_approximation(
room_id
)
domains = [
domains: StrCollection = [
d
for d in domains_set
if not self.is_mine_server_name(d)
and self._federation_shard_config.should_handle(self._instance_name, d)
]
domains = await filter_destinations_by_retry_limiter(
domains,
clock=self.clock,
store=self.store,
retry_due_within_ms=CATCHUP_RETRY_INTERVAL,
)
if not domains:
return
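The idea behind filter_destinations_by_retry_limiter, sketched standalone (an assumption-level model of Synapse's per-destination retry_last_ts/retry_interval bookkeeping, not the real implementation): a destination is kept if it has never failed, or if its backoff window ends within retry_due_within_ms of now.
def should_send_to(now_ms, retry_last_ts, retry_interval, retry_due_within_ms):
    # retry_last_ts == 0 means we have no record of the destination failing.
    if retry_last_ts == 0:
        return True
    # Otherwise only send if the backoff expires soon enough that queueing
    # the data (PDUs, presence, device messages) is still worthwhile.
    return retry_last_ts + retry_interval <= now_ms + retry_due_within_ms

# Hosts whose backoff ends further than retry_due_within_ms in the future are
# skipped; federation catch-up delivers the missed events once they recover.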
@ -816,7 +837,7 @@ class FederationSender(AbstractFederationSender):
for queue in queues:
queue.flush_read_receipts_for_room(room_id)
def send_presence_to_destinations(
async def send_presence_to_destinations(
self, states: Iterable[UserPresenceState], destinations: Iterable[str]
) -> None:
"""Send the given presence states to the given destinations.
@ -831,13 +852,20 @@ class FederationSender(AbstractFederationSender):
for state in states:
assert self.is_mine_id(state.user_id)
destinations = await filter_destinations_by_retry_limiter(
[
d
for d in destinations
if self._federation_shard_config.should_handle(self._instance_name, d)
],
clock=self.clock,
store=self.store,
retry_due_within_ms=CATCHUP_RETRY_INTERVAL,
)
for destination in destinations:
if self.is_mine_server_name(destination):
continue
if not self._federation_shard_config.should_handle(
self._instance_name, destination
):
continue
self._get_per_destination_queue(destination).send_presence(
states, start_loop=False
@ -896,21 +924,29 @@ class FederationSender(AbstractFederationSender):
else:
queue.send_edu(edu)
def send_device_messages(self, destination: str, immediate: bool = True) -> None:
if self.is_mine_server_name(destination):
logger.warning("Not sending device update to ourselves")
return
async def send_device_messages(
self, destinations: StrCollection, immediate: bool = True
) -> None:
destinations = await filter_destinations_by_retry_limiter(
[
destination
for destination in destinations
if self._federation_shard_config.should_handle(
self._instance_name, destination
)
and not self.is_mine_server_name(destination)
],
clock=self.clock,
store=self.store,
retry_due_within_ms=CATCHUP_RETRY_INTERVAL,
)
if not self._federation_shard_config.should_handle(
self._instance_name, destination
):
return
if immediate:
self._get_per_destination_queue(destination).attempt_new_transaction()
else:
self._get_per_destination_queue(destination).mark_new_data()
self._destination_wakeup_queue.add_to_queue(destination)
for destination in destinations:
if immediate:
self._get_per_destination_queue(destination).attempt_new_transaction()
else:
self._get_per_destination_queue(destination).mark_new_data()
self._destination_wakeup_queue.add_to_queue(destination)
def wake_destination(self, destination: str) -> None:
"""Called when we want to retry sending transactions to a remote.

View File

@ -59,6 +59,10 @@ sent_edus_by_type = Counter(
)
# If the retry interval is larger than this then we enter "catchup" mode
CATCHUP_RETRY_INTERVAL = 60 * 60 * 1000
class PerDestinationQueue:
"""
Manages the per-destination transmission queues.
@ -370,7 +374,7 @@ class PerDestinationQueue:
),
)
if e.retry_interval > 60 * 60 * 1000:
if e.retry_interval > CATCHUP_RETRY_INTERVAL:
# we won't retry for another hour!
# (this suggests a significant outage)
# We drop pending EDUs because otherwise they will

View File

@ -249,8 +249,10 @@ class TransportLayerClient:
data=json_data,
json_data_callback=json_data_callback,
long_retries=True,
backoff_on_404=True, # If we get a 404 the other side has gone
try_trailing_slash_on_400=True,
# Sending a transaction should always succeed; if it doesn't
# then something is wrong and we should back off.
backoff_on_all_error_codes=True,
)
async def make_query(
@ -475,13 +477,11 @@ class TransportLayerClient:
See synapse.federation.federation_client.FederationClient.get_public_rooms for
more information.
"""
path = _create_v1_path("/publicRooms")
if search_filter:
# this uses MSC2197 (Search Filtering over Federation)
path = _create_v1_path("/publicRooms")
data: Dict[str, Any] = {
"include_all_networks": "true" if include_all_networks else "false"
}
data: Dict[str, Any] = {"include_all_networks": include_all_networks}
if third_party_instance_id:
data["third_party_instance_id"] = third_party_instance_id
if limit:
@ -505,17 +505,15 @@ class TransportLayerClient:
)
raise
else:
path = _create_v1_path("/publicRooms")
args: Dict[str, Union[str, Iterable[str]]] = {
"include_all_networks": "true" if include_all_networks else "false"
}
if third_party_instance_id:
args["third_party_instance_id"] = (third_party_instance_id,)
args["third_party_instance_id"] = third_party_instance_id
if limit:
args["limit"] = [str(limit)]
args["limit"] = str(limit)
if since_token:
args["since"] = [since_token]
args["since"] = since_token
try:
response = await self.client.get_json(

View File

@ -76,6 +76,7 @@ class AdminHandler:
"consent_ts",
"user_type",
"is_guest",
"last_seen_ts",
}
if self._msc3866_enabled:

View File

@ -218,19 +218,17 @@ class AuthHandler:
self._failed_uia_attempts_ratelimiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=self.hs.config.ratelimiting.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.ratelimiting.rc_login_failed_attempts.burst_count,
cfg=self.hs.config.ratelimiting.rc_login_failed_attempts,
)
# The number of seconds to keep a UI auth session active.
self._ui_auth_session_timeout = hs.config.auth.ui_auth_session_timeout
# Ratelimitier for failed /login attempts
# Ratelimiter for failed /login attempts
self._failed_login_attempts_ratelimiter = Ratelimiter(
store=self.store,
clock=hs.get_clock(),
rate_hz=self.hs.config.ratelimiting.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.ratelimiting.rc_login_failed_attempts.burst_count,
cfg=self.hs.config.ratelimiting.rc_login_failed_attempts,
)
self._clock = self.hs.get_clock()

View File

@ -67,6 +67,7 @@ class CasHandler:
self._cas_server_url = hs.config.cas.cas_server_url
self._cas_service_url = hs.config.cas.cas_service_url
self._cas_protocol_version = hs.config.cas.cas_protocol_version
self._cas_displayname_attribute = hs.config.cas.cas_displayname_attribute
self._cas_required_attributes = hs.config.cas.cas_required_attributes
@ -121,7 +122,10 @@ class CasHandler:
Returns:
The parsed CAS response.
"""
uri = self._cas_server_url + "/proxyValidate"
if self._cas_protocol_version == 3:
uri = self._cas_server_url + "/p3/proxyValidate"
else:
uri = self._cas_server_url + "/proxyValidate"
args = {
"ticket": ticket,
"service": self._build_service_param(service_args),

View File

@ -836,17 +836,16 @@ class DeviceHandler(DeviceWorkerHandler):
user_id,
hosts,
)
for host in hosts:
self.federation_sender.send_device_messages(
host, immediate=False
)
# TODO: when called, this isn't in a logging context.
# This leads to log spam, sentry event spam, and massive
# memory usage.
# See https://github.com/matrix-org/synapse/issues/12552.
# log_kv(
# {"message": "sent device update to host", "host": host}
# )
await self.federation_sender.send_device_messages(
hosts, immediate=False
)
# TODO: when called, this isn't in a logging context.
# This leads to log spam, sentry event spam, and massive
# memory usage.
# See https://github.com/matrix-org/synapse/issues/12552.
# log_kv(
# {"message": "sent device update to host", "host": host}
# )
if current_stream_id != stream_id:
# Clear the set of hosts we've already sent to as we're
@ -951,8 +950,9 @@ class DeviceHandler(DeviceWorkerHandler):
# Notify things that device lists need to be sent out.
self.notifier.notify_replication()
for host in potentially_changed_hosts:
self.federation_sender.send_device_messages(host, immediate=False)
await self.federation_sender.send_device_messages(
potentially_changed_hosts, immediate=False
)
def _update_device_from_client_ips(

View File

@ -90,8 +90,7 @@ class DeviceMessageHandler:
self._ratelimiter = Ratelimiter(
store=self.store,
clock=hs.get_clock(),
rate_hz=hs.config.ratelimiting.rc_key_requests.per_second,
burst_count=hs.config.ratelimiting.rc_key_requests.burst_count,
cfg=hs.config.ratelimiting.rc_key_requests,
)
async def on_direct_to_device_edu(self, origin: str, content: JsonDict) -> None:
@ -303,10 +302,9 @@ class DeviceMessageHandler:
)
if self.federation_sender:
for destination in remote_messages.keys():
# Enqueue a new federation transaction to send the new
# device messages to each remote destination.
self.federation_sender.send_device_messages(destination)
# Enqueue a new federation transaction to send the new
# device messages to each remote destination.
await self.federation_sender.send_device_messages(remote_messages.keys())
async def get_events_for_dehydrated_device(
self,

View File

@ -67,6 +67,7 @@ class EventStreamHandler:
context = await presence_handler.user_syncing(
requester.user.to_string(),
requester.device_id,
affect_presence=affect_presence,
presence_state=PresenceState.ONLINE,
)

View File

@ -66,14 +66,12 @@ class IdentityHandler:
self._3pid_validation_ratelimiter_ip = Ratelimiter(
store=self.store,
clock=hs.get_clock(),
rate_hz=hs.config.ratelimiting.rc_3pid_validation.per_second,
burst_count=hs.config.ratelimiting.rc_3pid_validation.burst_count,
cfg=hs.config.ratelimiting.rc_3pid_validation,
)
self._3pid_validation_ratelimiter_address = Ratelimiter(
store=self.store,
clock=hs.get_clock(),
rate_hz=hs.config.ratelimiting.rc_3pid_validation.per_second,
burst_count=hs.config.ratelimiting.rc_3pid_validation.burst_count,
cfg=hs.config.ratelimiting.rc_3pid_validation,
)
async def ratelimit_request_token_requests(

View File

@ -379,7 +379,7 @@ class MessageHandler:
"""
expiry_ts = event.content.get(EventContentFields.SELF_DESTRUCT_AFTER)
if type(expiry_ts) is not int or event.is_state():
if type(expiry_ts) is not int or event.is_state(): # noqa: E721
return
# _schedule_expiry_for_event won't actually schedule anything if there's already
@ -908,19 +908,6 @@ class EventCreationHandler:
if existing_event_id:
return existing_event_id
# Some requesters don't have device IDs (appservice, guests, and access
# tokens minted with the admin API), fallback to checking the access token
# ID, which should be close enough.
if requester.access_token_id:
existing_event_id = (
await self.store.get_event_id_from_transaction_id_and_token_id(
room_id,
requester.user.to_string(),
requester.access_token_id,
txn_id,
)
)
return existing_event_id
async def get_event_from_transaction(
@ -1474,23 +1461,23 @@ class EventCreationHandler:
# We now persist the event (and update the cache in parallel, since we
# don't want to block on it).
event, context = events_and_context[0]
#
# Note: mypy gets confused if we inline dl and check with twisted#11770.
# Some kind of bug in mypy's deduction?
deferreds = (
run_in_background(
self._persist_events,
requester=requester,
events_and_context=events_and_context,
ratelimit=ratelimit,
extra_users=extra_users,
),
run_in_background(
self.cache_joined_hosts_for_events, events_and_context
).addErrback(log_failure, "cache_joined_hosts_for_event failed"),
)
result, _ = await make_deferred_yieldable(
gather_results(
(
run_in_background(
self._persist_events,
requester=requester,
events_and_context=events_and_context,
ratelimit=ratelimit,
extra_users=extra_users,
),
run_in_background(
self.cache_joined_hosts_for_events, events_and_context
).addErrback(log_failure, "cache_joined_hosts_for_event failed"),
),
consumeErrors=True,
)
gather_results(deferreds, consumeErrors=True)
).addErrback(unwrapFirstError)
return result
@ -1921,7 +1908,10 @@ class EventCreationHandler:
# We don't want to block sending messages on any presence code. This
# matters as sometimes presence code can take a while.
run_as_background_process(
"bump_presence_active_time", self._bump_active_time, requester.user
"bump_presence_active_time",
self._bump_active_time,
requester.user,
requester.device_id,
)
async def _notify() -> None:
@ -1958,10 +1948,10 @@ class EventCreationHandler:
logger.info("maybe_kick_guest_users %r", current_state)
await self.hs.get_room_member_handler().kick_guest_users(current_state)
async def _bump_active_time(self, user: UserID) -> None:
async def _bump_active_time(self, user: UserID, device_id: Optional[str]) -> None:
try:
presence = self.hs.get_presence_handler()
await presence.bump_presence_active_time(user)
await presence.bump_presence_active_time(user, device_id)
except Exception:
logger.exception("Error bumping presence active time")

View File

@ -23,6 +23,7 @@ The methods that define policy are:
"""
import abc
import contextlib
import itertools
import logging
from bisect import bisect
from contextlib import contextmanager
@ -151,15 +152,13 @@ class BasePresenceHandler(abc.ABC):
self._federation_queue = PresenceFederationQueue(hs, self)
self._busy_presence_enabled = hs.config.experimental.msc3026_enabled
self.VALID_PRESENCE: Tuple[str, ...] = (
PresenceState.ONLINE,
PresenceState.UNAVAILABLE,
PresenceState.OFFLINE,
)
if self._busy_presence_enabled:
if hs.config.experimental.msc3026_enabled:
self.VALID_PRESENCE += (PresenceState.BUSY,)
active_presence = self.store.take_presence_startup_info()
@ -167,7 +166,11 @@ class BasePresenceHandler(abc.ABC):
@abc.abstractmethod
async def user_syncing(
self, user_id: str, affect_presence: bool, presence_state: str
self,
user_id: str,
device_id: Optional[str],
affect_presence: bool,
presence_state: str,
) -> ContextManager[None]:
"""Returns a context manager that should surround any stream requests
from the user.
@ -178,6 +181,7 @@ class BasePresenceHandler(abc.ABC):
Args:
user_id: the user that is starting a sync
device_id: the user's device that is starting a sync
affect_presence: If false this function will be a no-op.
Useful for streams that are not associated with an actual
client that is being used by a user.
@ -185,15 +189,17 @@ class BasePresenceHandler(abc.ABC):
"""
@abc.abstractmethod
def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
"""Get an iterable of syncing users on this worker, to send to the presence handler
def get_currently_syncing_users_for_replication(
self,
) -> Iterable[Tuple[str, Optional[str]]]:
"""Get an iterable of syncing users and devices on this worker, to send to the presence handler
This is called when a replication connection is established. It should return
a list of user ids, which are then sent as USER_SYNC commands to inform the
process handling presence about those users.
a list of tuples of user ID & device ID, which are then sent as USER_SYNC commands
to inform the process handling presence about those users/devices.
Returns:
An iterable of user_id strings.
An iterable of tuples of user ID and device ID.
"""
async def get_state(self, target_user: UserID) -> UserPresenceState:
@ -254,28 +260,39 @@ class BasePresenceHandler(abc.ABC):
async def set_state(
self,
target_user: UserID,
device_id: Optional[str],
state: JsonDict,
ignore_status_msg: bool = False,
force_notify: bool = False,
is_sync: bool = False,
) -> None:
"""Set the presence state of the user.
Args:
target_user: The ID of the user to set the presence state of.
device_id: the device that the user is setting the presence state of.
state: The presence state as a JSON dictionary.
ignore_status_msg: True to ignore the "status_msg" field of the `state` dict.
If False, the user's current status will be updated.
force_notify: Whether to force notification of the update to clients.
is_sync: True if this update was from a sync, which results in
*not* overriding a previously set BUSY status, updating the
user's last_user_sync_ts, and ignoring the "status_msg" field of
the `state` dict.
"""
@abc.abstractmethod
async def bump_presence_active_time(self, user: UserID) -> None:
async def bump_presence_active_time(
self, user: UserID, device_id: Optional[str]
) -> None:
"""We've seen the user do something that indicates they're interacting
with the app.
"""
async def update_external_syncs_row( # noqa: B027 (no-op by design)
self, process_id: str, user_id: str, is_syncing: bool, sync_time_msec: int
self,
process_id: str,
user_id: str,
device_id: Optional[str],
is_syncing: bool,
sync_time_msec: int,
) -> None:
"""Update the syncing users for an external process as a delta.
@ -286,6 +303,7 @@ class BasePresenceHandler(abc.ABC):
syncing against. This allows synapse to process updates
as user start and stop syncing against a given process.
user_id: The user who has started or stopped syncing
device_id: The user's device that has started or stopped syncing
is_syncing: Whether or not the user is now syncing
sync_time_msec: Time in ms when the user was last syncing
"""
@ -336,7 +354,9 @@ class BasePresenceHandler(abc.ABC):
)
for destination, host_states in hosts_to_states.items():
self._federation.send_presence_to_destinations(host_states, [destination])
await self._federation.send_presence_to_destinations(
host_states, [destination]
)
async def send_full_presence_to_users(self, user_ids: StrCollection) -> None:
"""
@ -381,7 +401,9 @@ class BasePresenceHandler(abc.ABC):
# We set force_notify=True here so that this presence update is guaranteed to
# increment the presence stream ID (which resending the current user's presence
# otherwise would not do).
await self.set_state(UserID.from_string(user_id), state, force_notify=True)
await self.set_state(
UserID.from_string(user_id), None, state, force_notify=True
)
async def is_visible(self, observed_user: UserID, observer_user: UserID) -> bool:
raise NotImplementedError(
@ -414,16 +436,18 @@ class WorkerPresenceHandler(BasePresenceHandler):
hs.config.worker.writers.presence,
)
# The number of ongoing syncs on this process, by user id.
# The number of ongoing syncs on this process, by (user ID, device ID).
# Empty if _presence_enabled is false.
self._user_to_num_current_syncs: Dict[str, int] = {}
self._user_device_to_num_current_syncs: Dict[
Tuple[str, Optional[str]], int
] = {}
self.notifier = hs.get_notifier()
self.instance_id = hs.get_instance_id()
# user_id -> last_sync_ms. Lists the users that have stopped syncing but
# we haven't notified the presence writer of that yet
self.users_going_offline: Dict[str, int] = {}
# (user_id, device_id) -> last_sync_ms. Lists the devices that have stopped
# syncing but we haven't notified the presence writer of that yet
self._user_devices_going_offline: Dict[Tuple[str, Optional[str]], int] = {}
self._bump_active_client = ReplicationBumpPresenceActiveTime.make_client(hs)
self._set_state_client = ReplicationPresenceSetState.make_client(hs)
@ -446,42 +470,54 @@ class WorkerPresenceHandler(BasePresenceHandler):
ClearUserSyncsCommand(self.instance_id)
)
def send_user_sync(self, user_id: str, is_syncing: bool, last_sync_ms: int) -> None:
def send_user_sync(
self,
user_id: str,
device_id: Optional[str],
is_syncing: bool,
last_sync_ms: int,
) -> None:
if self._presence_enabled:
self.hs.get_replication_command_handler().send_user_sync(
self.instance_id, user_id, is_syncing, last_sync_ms
self.instance_id, user_id, device_id, is_syncing, last_sync_ms
)
def mark_as_coming_online(self, user_id: str) -> None:
def mark_as_coming_online(self, user_id: str, device_id: Optional[str]) -> None:
"""A user has started syncing. Send a UserSync to the presence writer,
unless they had recently stopped syncing.
"""
going_offline = self.users_going_offline.pop(user_id, None)
going_offline = self._user_devices_going_offline.pop((user_id, device_id), None)
if not going_offline:
# Safe to skip because we haven't yet told the presence writer they
# were offline
self.send_user_sync(user_id, True, self.clock.time_msec())
self.send_user_sync(user_id, device_id, True, self.clock.time_msec())
def mark_as_going_offline(self, user_id: str) -> None:
def mark_as_going_offline(self, user_id: str, device_id: Optional[str]) -> None:
"""A user has stopped syncing. We wait before notifying the presence
writer as it's likely they'll come back soon. This allows us to avoid
sending a stopped syncing immediately followed by a started syncing
notification to the presence writer
"""
self.users_going_offline[user_id] = self.clock.time_msec()
self._user_devices_going_offline[(user_id, device_id)] = self.clock.time_msec()
def send_stop_syncing(self) -> None:
"""Check if there are any users who have stopped syncing a while ago and
haven't come back yet. If there are poke the presence writer about them.
"""
now = self.clock.time_msec()
for user_id, last_sync_ms in list(self.users_going_offline.items()):
for (user_id, device_id), last_sync_ms in list(
self._user_devices_going_offline.items()
):
if now - last_sync_ms > UPDATE_SYNCING_USERS_MS:
self.users_going_offline.pop(user_id, None)
self.send_user_sync(user_id, False, last_sync_ms)
self._user_devices_going_offline.pop((user_id, device_id), None)
self.send_user_sync(user_id, device_id, False, last_sync_ms)
async def user_syncing(
self, user_id: str, affect_presence: bool, presence_state: str
self,
user_id: str,
device_id: Optional[str],
affect_presence: bool,
presence_state: str,
) -> ContextManager[None]:
"""Record that a user is syncing.
@ -491,36 +527,32 @@ class WorkerPresenceHandler(BasePresenceHandler):
if not affect_presence or not self._presence_enabled:
return _NullContextManager()
prev_state = await self.current_state_for_user(user_id)
if prev_state.state != PresenceState.BUSY:
# We set state here but pass ignore_status_msg = True as we don't want to
# cause the status message to be cleared.
# Note that this causes last_active_ts to be incremented which is not
# what the spec wants: see comment in the BasePresenceHandler version
# of this function.
await self.set_state(
UserID.from_string(user_id),
{"presence": presence_state},
ignore_status_msg=True,
)
# Note that this causes last_active_ts to be incremented which is not
# what the spec wants.
await self.set_state(
UserID.from_string(user_id),
device_id,
state={"presence": presence_state},
is_sync=True,
)
curr_sync = self._user_to_num_current_syncs.get(user_id, 0)
self._user_to_num_current_syncs[user_id] = curr_sync + 1
curr_sync = self._user_device_to_num_current_syncs.get((user_id, device_id), 0)
self._user_device_to_num_current_syncs[(user_id, device_id)] = curr_sync + 1
# If we went from no in flight sync to some, notify replication
if self._user_to_num_current_syncs[user_id] == 1:
self.mark_as_coming_online(user_id)
# If this is the first in-flight sync, notify replication
if self._user_device_to_num_current_syncs[(user_id, device_id)] == 1:
self.mark_as_coming_online(user_id, device_id)
def _end() -> None:
# We check that the user_id is in user_to_num_current_syncs because
# user_to_num_current_syncs may have been cleared if we are
# shutting down.
if user_id in self._user_to_num_current_syncs:
self._user_to_num_current_syncs[user_id] -= 1
if (user_id, device_id) in self._user_device_to_num_current_syncs:
self._user_device_to_num_current_syncs[(user_id, device_id)] -= 1
# If we went from one in-flight sync to none, notify replication
if self._user_to_num_current_syncs[user_id] == 0:
self.mark_as_going_offline(user_id)
# If there are no more in-flight syncs, notify replication
if self._user_device_to_num_current_syncs[(user_id, device_id)] == 0:
self.mark_as_going_offline(user_id, device_id)
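The bookkeeping above, reduced to a standalone sketch: syncs are counted per (user_id, device_id), and replication is only told about the first concurrent sync starting and the last one ending.
counts = {}

def sync_started(user_id, device_id):
    key = (user_id, device_id)
    counts[key] = counts.get(key, 0) + 1
    return counts[key] == 1   # True -> mark_as_coming_online / USER_SYNC start

def sync_ended(user_id, device_id):
    key = (user_id, device_id)
    counts[key] -= 1
    return counts[key] == 0   # True -> mark_as_going_offline / USER_SYNC stop

assert sync_started("@alice:example.com", "D1") is True
assert sync_started("@alice:example.com", "D1") is False  # already syncing
assert sync_ended("@alice:example.com", "D1") is False
assert sync_ended("@alice:example.com", "D1") is True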
@contextlib.contextmanager
def _user_syncing() -> Generator[None, None, None]:
@ -587,28 +619,34 @@ class WorkerPresenceHandler(BasePresenceHandler):
# If this is a federation sender, notify about presence updates.
await self.maybe_send_presence_to_interested_destinations(state_to_notify)
def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
def get_currently_syncing_users_for_replication(
self,
) -> Iterable[Tuple[str, Optional[str]]]:
return [
user_id
for user_id, count in self._user_to_num_current_syncs.items()
user_id_device_id
for user_id_device_id, count in self._user_device_to_num_current_syncs.items()
if count > 0
]
async def set_state(
self,
target_user: UserID,
device_id: Optional[str],
state: JsonDict,
ignore_status_msg: bool = False,
force_notify: bool = False,
is_sync: bool = False,
) -> None:
"""Set the presence state of the user.
Args:
target_user: The ID of the user to set the presence state of.
device_id: the device that the user is setting the presence state of.
state: The presence state as a JSON dictionary.
ignore_status_msg: True to ignore the "status_msg" field of the `state` dict.
If False, the user's current status will be updated.
force_notify: Whether to force notification of the update to clients.
is_sync: True if this update was from a sync, which results in
*not* overriding a previously set BUSY status, updating the
user's last_user_sync_ts, and ignoring the "status_msg" field of
the `state` dict.
"""
presence = state["presence"]
@ -625,12 +663,15 @@ class WorkerPresenceHandler(BasePresenceHandler):
await self._set_state_client(
instance_name=self._presence_writer_instance,
user_id=user_id,
device_id=device_id,
state=state,
ignore_status_msg=ignore_status_msg,
force_notify=force_notify,
is_sync=is_sync,
)
async def bump_presence_active_time(self, user: UserID) -> None:
async def bump_presence_active_time(
self, user: UserID, device_id: Optional[str]
) -> None:
"""We've seen the user do something that indicates they're interacting
with the app.
"""
@ -641,7 +682,9 @@ class WorkerPresenceHandler(BasePresenceHandler):
# Proxy request to instance that writes presence
user_id = user.to_string()
await self._bump_active_client(
instance_name=self._presence_writer_instance, user_id=user_id
instance_name=self._presence_writer_instance,
user_id=user_id,
device_id=device_id,
)
@ -703,17 +746,23 @@ class PresenceHandler(BasePresenceHandler):
# Keeps track of the number of *ongoing* syncs on this process. While
# this is non zero a user will never go offline.
self.user_to_num_current_syncs: Dict[str, int] = {}
self._user_device_to_num_current_syncs: Dict[
Tuple[str, Optional[str]], int
] = {}
# Keeps track of the number of *ongoing* syncs on other processes.
#
# While any sync is ongoing on another process the user will never
# go offline.
#
# Each process has a unique identifier and an update frequency. If
# no update is received from that process within the update period then
# we assume that all the sync requests on that process have stopped.
# Stored as a dict from process_id to set of user_id, and a dict of
# process_id to millisecond timestamp last updated.
self.external_process_to_current_syncs: Dict[str, Set[str]] = {}
# Stored as a dict from process_id to set of (user_id, device_id), and
# a dict of process_id to millisecond timestamp last updated.
self.external_process_to_current_syncs: Dict[
str, Set[Tuple[str, Optional[str]]]
] = {}
self.external_process_last_updated_ms: Dict[str, int] = {}
self.external_sync_linearizer = Linearizer(name="external_sync_linearizer")
@ -889,7 +938,7 @@ class PresenceHandler(BasePresenceHandler):
)
for destination, states in hosts_to_states.items():
self._federation_queue.send_presence_to_destinations(
await self._federation_queue.send_presence_to_destinations(
states, [destination]
)
@ -918,7 +967,10 @@ class PresenceHandler(BasePresenceHandler):
# that were syncing on that process to see if they need to be timed
# out.
users_to_check.update(
self.external_process_to_current_syncs.pop(process_id, ())
user_id
for user_id, device_id in self.external_process_to_current_syncs.pop(
process_id, ()
)
)
self.external_process_last_updated_ms.pop(process_id)
@ -931,11 +983,15 @@ class PresenceHandler(BasePresenceHandler):
syncing_user_ids = {
user_id
for user_id, count in self.user_to_num_current_syncs.items()
for (user_id, _), count in self._user_device_to_num_current_syncs.items()
if count
}
for user_ids in self.external_process_to_current_syncs.values():
syncing_user_ids.update(user_ids)
syncing_user_ids.update(
user_id
for user_id, _ in itertools.chain(
*self.external_process_to_current_syncs.values()
)
)
changes = handle_timeouts(
states,
@ -946,7 +1002,9 @@ class PresenceHandler(BasePresenceHandler):
return await self._update_states(changes)
async def bump_presence_active_time(self, user: UserID) -> None:
async def bump_presence_active_time(
self, user: UserID, device_id: Optional[str]
) -> None:
"""We've seen the user do something that indicates they're interacting
with the app.
"""
@ -969,6 +1027,7 @@ class PresenceHandler(BasePresenceHandler):
async def user_syncing(
self,
user_id: str,
device_id: Optional[str],
affect_presence: bool = True,
presence_state: str = PresenceState.ONLINE,
) -> ContextManager[None]:
@ -980,7 +1039,8 @@ class PresenceHandler(BasePresenceHandler):
when users disconnect/reconnect.
Args:
user_id
user_id: the user that is starting a sync
device_id: the user's device that is starting a sync
affect_presence: If false this function will be a no-op.
Useful for streams that are not associated with an actual
client that is being used by a user.
@ -989,52 +1049,21 @@ class PresenceHandler(BasePresenceHandler):
if not affect_presence or not self._presence_enabled:
return _NullContextManager()
curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
self.user_to_num_current_syncs[user_id] = curr_sync + 1
curr_sync = self._user_device_to_num_current_syncs.get((user_id, device_id), 0)
self._user_device_to_num_current_syncs[(user_id, device_id)] = curr_sync + 1
prev_state = await self.current_state_for_user(user_id)
# If they're busy then they don't stop being busy just by syncing,
# so just update the last sync time.
if prev_state.state != PresenceState.BUSY:
# XXX: We set_state separately here and just update the last_active_ts above
# This keeps the logic as similar as possible between the worker and single
# process modes. Using set_state will actually cause last_active_ts to be
# updated always, which is not what the spec calls for, but synapse has done
# this for... forever, I think.
await self.set_state(
UserID.from_string(user_id),
{"presence": presence_state},
ignore_status_msg=True,
)
# Retrieve the new state for the logic below. This should come from the
# in-memory cache.
prev_state = await self.current_state_for_user(user_id)
# To keep the single process behaviour consistent with worker mode, run the
# same logic as `update_external_syncs_row`, even though it looks weird.
if prev_state.state == PresenceState.OFFLINE:
await self._update_states(
[
prev_state.copy_and_replace(
state=PresenceState.ONLINE,
last_active_ts=self.clock.time_msec(),
last_user_sync_ts=self.clock.time_msec(),
)
]
)
# otherwise, set the new presence state & update the last sync time,
# but don't update last_active_ts as this isn't an indication that
# they've been active (even though it's probably been updated by
# set_state above)
else:
await self._update_states(
[prev_state.copy_and_replace(last_user_sync_ts=self.clock.time_msec())]
)
# Note that this causes last_active_ts to be incremented which is not
# what the spec wants.
await self.set_state(
UserID.from_string(user_id),
device_id,
state={"presence": presence_state},
is_sync=True,
)
async def _end() -> None:
try:
self.user_to_num_current_syncs[user_id] -= 1
self._user_device_to_num_current_syncs[(user_id, device_id)] -= 1
prev_state = await self.current_state_for_user(user_id)
await self._update_states(
@ -1056,12 +1085,19 @@ class PresenceHandler(BasePresenceHandler):
return _user_syncing()
def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
def get_currently_syncing_users_for_replication(
self,
) -> Iterable[Tuple[str, Optional[str]]]:
# since we are the process handling presence, there is nothing to do here.
return []
async def update_external_syncs_row(
self, process_id: str, user_id: str, is_syncing: bool, sync_time_msec: int
self,
process_id: str,
user_id: str,
device_id: Optional[str],
is_syncing: bool,
sync_time_msec: int,
) -> None:
"""Update the syncing users for an external process as a delta.
@ -1070,6 +1106,7 @@ class PresenceHandler(BasePresenceHandler):
syncing against. This allows synapse to process updates
as user start and stop syncing against a given process.
user_id: The user who has started or stopped syncing
device_id: The user's device that has started or stopped syncing
is_syncing: Whether or not the user is now syncing
sync_time_msec: Time in ms when the user was last syncing
"""
@ -1080,31 +1117,27 @@ class PresenceHandler(BasePresenceHandler):
process_id, set()
)
updates = []
if is_syncing and user_id not in process_presence:
if prev_state.state == PresenceState.OFFLINE:
updates.append(
prev_state.copy_and_replace(
state=PresenceState.ONLINE,
last_active_ts=sync_time_msec,
last_user_sync_ts=sync_time_msec,
)
)
else:
updates.append(
prev_state.copy_and_replace(last_user_sync_ts=sync_time_msec)
)
process_presence.add(user_id)
elif user_id in process_presence:
updates.append(
prev_state.copy_and_replace(last_user_sync_ts=sync_time_msec)
# USER_SYNC is sent when a user's device starts or stops syncing on
# a remote process. (But only for the initial and last sync for that
# device.)
#
# When a device *starts* syncing it also calls set_state(...) which
# will update the state, last_active_ts, and last_user_sync_ts.
# Simply ensure the user & device is tracked as syncing in this case.
#
# When a device *stops* syncing, update the last_user_sync_ts and mark
# them as no longer syncing. Note this doesn't quite match the
# monolith behaviour, which updates last_user_sync_ts at the end of
# every sync, not just the last in-flight sync.
if is_syncing and (user_id, device_id) not in process_presence:
process_presence.add((user_id, device_id))
elif not is_syncing and (user_id, device_id) in process_presence:
new_state = prev_state.copy_and_replace(
last_user_sync_ts=sync_time_msec
)
await self._update_states([new_state])
if not is_syncing:
process_presence.discard(user_id)
if updates:
await self._update_states(updates)
process_presence.discard((user_id, device_id))
self.external_process_last_updated_ms[process_id] = self.clock.time_msec()
@ -1118,7 +1151,9 @@ class PresenceHandler(BasePresenceHandler):
process_presence = self.external_process_to_current_syncs.pop(
process_id, set()
)
prev_states = await self.current_state_for_users(process_presence)
prev_states = await self.current_state_for_users(
{user_id for user_id, device_id in process_presence}
)
time_now_ms = self.clock.time_msec()
await self._update_states(
@ -1203,18 +1238,22 @@ class PresenceHandler(BasePresenceHandler):
async def set_state(
self,
target_user: UserID,
device_id: Optional[str],
state: JsonDict,
ignore_status_msg: bool = False,
force_notify: bool = False,
is_sync: bool = False,
) -> None:
"""Set the presence state of the user.
Args:
target_user: The ID of the user to set the presence state of.
device_id: the device that the user is setting the presence state of.
state: The presence state as a JSON dictionary.
ignore_status_msg: True to ignore the "status_msg" field of the `state` dict.
If False, the user's current status will be updated.
force_notify: Whether to force notification of the update to clients.
is_sync: True if this update was from a sync, which results in
*not* overriding a previously set BUSY status, updating the
user's last_user_sync_ts, and ignoring the "status_msg" field of
the `state` dict.
"""
status_msg = state.get("status_msg", None)
presence = state["presence"]
@ -1227,18 +1266,27 @@ class PresenceHandler(BasePresenceHandler):
return
user_id = target_user.to_string()
now = self.clock.time_msec()
prev_state = await self.current_state_for_user(user_id)
# Syncs do not override a previous presence of busy.
#
# TODO: This is a hack for lack of multi-device support. Unfortunately
# removing this requires coordination with clients.
if prev_state.state == PresenceState.BUSY and is_sync:
presence = PresenceState.BUSY
new_fields = {"state": presence}
if not ignore_status_msg:
new_fields["status_msg"] = status_msg
if presence == PresenceState.ONLINE or presence == PresenceState.BUSY:
new_fields["last_active_ts"] = now
if presence == PresenceState.ONLINE or (
presence == PresenceState.BUSY and self._busy_presence_enabled
):
new_fields["last_active_ts"] = self.clock.time_msec()
if is_sync:
new_fields["last_user_sync_ts"] = now
else:
# Syncs do not override the status message.
new_fields["status_msg"] = status_msg
await self._update_states(
[prev_state.copy_and_replace(**new_fields)], force_notify=force_notify
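Illustrative sketch, not part of the diff: with the new signature a caller passes the device alongside the user, and flags sync-driven updates so a previously set BUSY state and the status message are left alone (the user ID and device ID below are invented):

# Hedged example: presence update coming from a /sync for a specific device.
await presence_handler.set_state(
    UserID.from_string("@alice:example.org"),
    "DEVICEID",
    {"presence": "online"},
    is_sync=True,  # updates last_user_sync_ts, never downgrades BUSY
)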
@ -1462,7 +1510,7 @@ class PresenceHandler(BasePresenceHandler):
or state.status_msg is not None
]
self._federation_queue.send_presence_to_destinations(
await self._federation_queue.send_presence_to_destinations(
destinations=newly_joined_remote_hosts,
states=states,
)
@ -1473,7 +1521,7 @@ class PresenceHandler(BasePresenceHandler):
prev_remote_hosts or newly_joined_remote_hosts
):
local_states = await self.current_state_for_users(newly_joined_local_users)
self._federation_queue.send_presence_to_destinations(
await self._federation_queue.send_presence_to_destinations(
destinations=prev_remote_hosts | newly_joined_remote_hosts,
states=list(local_states.values()),
)
@ -2136,7 +2184,7 @@ class PresenceFederationQueue:
index = bisect(self._queue, (clear_before,))
self._queue = self._queue[index:]
def send_presence_to_destinations(
async def send_presence_to_destinations(
self, states: Collection[UserPresenceState], destinations: StrCollection
) -> None:
"""Send the presence states to the given destinations.
@ -2156,7 +2204,7 @@ class PresenceFederationQueue:
return
if self._federation:
self._federation.send_presence_to_destinations(
await self._federation.send_presence_to_destinations(
states=states,
destinations=destinations,
)
@ -2279,7 +2327,7 @@ class PresenceFederationQueue:
for host, user_ids in hosts_to_users.items():
states = await self._presence_handler.current_state_for_users(user_ids)
self._federation.send_presence_to_destinations(
await self._federation.send_presence_to_destinations(
states=states.values(),
destinations=[host],
)

View File

@ -112,8 +112,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self._join_rate_limiter_local = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_joins_local.per_second,
burst_count=hs.config.ratelimiting.rc_joins_local.burst_count,
cfg=hs.config.ratelimiting.rc_joins_local,
)
# Tracks joins from local users to rooms this server isn't a member of.
# I.e. joins this server makes by requesting /make_join /send_join from
@ -121,8 +120,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self._join_rate_limiter_remote = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_joins_remote.per_second,
burst_count=hs.config.ratelimiting.rc_joins_remote.burst_count,
cfg=hs.config.ratelimiting.rc_joins_remote,
)
# TODO: find a better place to keep this Ratelimiter.
# It needs to be
@ -135,8 +133,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self._join_rate_per_room_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_joins_per_room.per_second,
burst_count=hs.config.ratelimiting.rc_joins_per_room.burst_count,
cfg=hs.config.ratelimiting.rc_joins_per_room,
)
# Ratelimiter for invites, keyed by room (across all issuers, all
@ -144,8 +141,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self._invites_per_room_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_invites_per_room.per_second,
burst_count=hs.config.ratelimiting.rc_invites_per_room.burst_count,
cfg=hs.config.ratelimiting.rc_invites_per_room,
)
# Ratelimiter for invites, keyed by recipient (across all rooms, all
@ -153,8 +149,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self._invites_per_recipient_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_invites_per_user.per_second,
burst_count=hs.config.ratelimiting.rc_invites_per_user.burst_count,
cfg=hs.config.ratelimiting.rc_invites_per_user,
)
# Ratelimiter for invites, keyed by issuer (across all rooms, all
@ -162,15 +157,13 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self._invites_per_issuer_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_invites_per_issuer.per_second,
burst_count=hs.config.ratelimiting.rc_invites_per_issuer.burst_count,
cfg=hs.config.ratelimiting.rc_invites_per_issuer,
)
self._third_party_invite_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_third_party_invite.per_second,
burst_count=hs.config.ratelimiting.rc_third_party_invite.burst_count,
cfg=hs.config.ratelimiting.rc_third_party_invite,
)
self.request_ratelimiter = hs.get_request_ratelimiter()
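Illustrative sketch, not part of the diff: these call sites now hand the whole `RatelimitSettings` object to `Ratelimiter` instead of unpacking `per_second` and `burst_count`. A minimal example of the pattern, with a made-up key and rates (the `store` and `clock` objects are assumed):

from synapse.api.ratelimiting import Ratelimiter
from synapse.config.ratelimiting import RatelimitSettings

# Hypothetical limiter: at most 0.1 actions per second with a burst of 3.
limiter = Ratelimiter(
    store=store,
    clock=clock,
    cfg=RatelimitSettings(key="<example limiter>", per_second=0.1, burst_count=3),
)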

View File

@ -35,6 +35,7 @@ from synapse.api.errors import (
UnsupportedRoomVersionError,
)
from synapse.api.ratelimiting import Ratelimiter
from synapse.config.ratelimiting import RatelimitSettings
from synapse.events import EventBase
from synapse.types import JsonDict, Requester, StrCollection
from synapse.util.caches.response_cache import ResponseCache
@ -94,7 +95,9 @@ class RoomSummaryHandler:
self._server_name = hs.hostname
self._federation_client = hs.get_federation_client()
self._ratelimiter = Ratelimiter(
store=self._store, clock=hs.get_clock(), rate_hz=5, burst_count=10
store=self._store,
clock=hs.get_clock(),
cfg=RatelimitSettings("<room summary>", per_second=5, burst_count=10),
)
# If a user tries to fetch the same page multiple times in quick succession,

View File

@ -23,9 +23,11 @@ from pkg_resources import parse_version
import twisted
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IOpenSSLContextFactory
from twisted.internet.endpoints import HostnameEndpoint
from twisted.internet.interfaces import IOpenSSLContextFactory, IProtocolFactory
from twisted.internet.ssl import optionsForClientTLS
from twisted.mail.smtp import ESMTPSender, ESMTPSenderFactory
from twisted.protocols.tls import TLSMemoryBIOFactory
from synapse.logging.context import make_deferred_yieldable
from synapse.types import ISynapseReactor
@ -97,6 +99,7 @@ async def _sendmail(
**kwargs,
)
factory: IProtocolFactory
if _is_old_twisted:
# before twisted 21.2, we have to override the ESMTPSender protocol to disable
# TLS
@ -110,22 +113,13 @@ async def _sendmail(
factory = build_sender_factory(hostname=smtphost if enable_tls else None)
if force_tls:
reactor.connectSSL(
smtphost,
smtpport,
factory,
optionsForClientTLS(smtphost),
timeout=30,
bindAddress=None,
)
else:
reactor.connectTCP(
smtphost,
smtpport,
factory,
timeout=30,
bindAddress=None,
)
factory = TLSMemoryBIOFactory(optionsForClientTLS(smtphost), True, factory)
endpoint = HostnameEndpoint(
reactor, smtphost, smtpport, timeout=30, bindAddress=None
)
await make_deferred_yieldable(endpoint.connect(factory))
await make_deferred_yieldable(d)
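Illustrative sketch, not part of the diff: the change above swaps `reactor.connectSSL`/`connectTCP` for a `HostnameEndpoint`, wrapping the protocol factory in `TLSMemoryBIOFactory` when implicit TLS is forced. A rough standalone version of the same pattern (the factory, host and port are placeholders):

from twisted.internet import reactor
from twisted.internet.endpoints import HostnameEndpoint
from twisted.internet.ssl import optionsForClientTLS
from twisted.protocols.tls import TLSMemoryBIOFactory

def connect_smtp(factory, smtphost: str, smtpport: int, force_tls: bool):
    # HostnameEndpoint handles name resolution (including IPv6) for us.
    if force_tls:
        factory = TLSMemoryBIOFactory(optionsForClientTLS(smtphost), True, factory)
    endpoint = HostnameEndpoint(
        reactor, smtphost, smtpport, timeout=30, bindAddress=None
    )
    return endpoint.connect(factory)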

View File

@ -26,9 +26,10 @@ from synapse.metrics.background_process_metrics import (
)
from synapse.replication.tcp.streams import TypingStream
from synapse.streams import EventSource
from synapse.types import JsonDict, Requester, StreamKeyType, UserID
from synapse.types import JsonDict, Requester, StrCollection, StreamKeyType, UserID
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.metrics import Measure
from synapse.util.retryutils import filter_destinations_by_retry_limiter
from synapse.util.wheel_timer import WheelTimer
if TYPE_CHECKING:
@ -150,8 +151,15 @@ class FollowerTypingHandler:
now=now, obj=member, then=now + FEDERATION_PING_INTERVAL
)
hosts = await self._storage_controllers.state.get_current_hosts_in_room(
member.room_id
hosts: StrCollection = (
await self._storage_controllers.state.get_current_hosts_in_room(
member.room_id
)
)
hosts = await filter_destinations_by_retry_limiter(
hosts,
clock=self.clock,
store=self.store,
)
for domain in hosts:
if not self.is_mine_server_name(domain):

View File

@ -243,7 +243,7 @@ class LegacyJsonSendParser(_BaseJsonParser[Tuple[int, JsonDict]]):
return (
isinstance(v, list)
and len(v) == 2
and type(v[0]) == int
and type(v[0]) == int # noqa: E721
and isinstance(v[1], dict)
)
@ -512,6 +512,7 @@ class MatrixFederationHttpClient:
long_retries: bool = False,
ignore_backoff: bool = False,
backoff_on_404: bool = False,
backoff_on_all_error_codes: bool = False,
) -> IResponse:
"""
Sends a request to the given server.
@ -552,6 +553,7 @@ class MatrixFederationHttpClient:
and try the request anyway.
backoff_on_404: Back off if we get a 404
backoff_on_all_error_codes: Back off if we get any error response
Returns:
Resolves with the HTTP response object on success.
@ -594,6 +596,7 @@ class MatrixFederationHttpClient:
ignore_backoff=ignore_backoff,
notifier=self.hs.get_notifier(),
replication_client=self.hs.get_replication_command_handler(),
backoff_on_all_error_codes=backoff_on_all_error_codes,
)
method_bytes = request.method.encode("ascii")
@ -889,6 +892,7 @@ class MatrixFederationHttpClient:
backoff_on_404: bool = False,
try_trailing_slash_on_400: bool = False,
parser: Literal[None] = None,
backoff_on_all_error_codes: bool = False,
) -> JsonDict:
...
@ -906,6 +910,7 @@ class MatrixFederationHttpClient:
backoff_on_404: bool = False,
try_trailing_slash_on_400: bool = False,
parser: Optional[ByteParser[T]] = None,
backoff_on_all_error_codes: bool = False,
) -> T:
...
@ -922,6 +927,7 @@ class MatrixFederationHttpClient:
backoff_on_404: bool = False,
try_trailing_slash_on_400: bool = False,
parser: Optional[ByteParser[T]] = None,
backoff_on_all_error_codes: bool = False,
) -> Union[JsonDict, T]:
"""Sends the specified json data using PUT
@ -957,6 +963,7 @@ class MatrixFederationHttpClient:
enabled.
parser: The parser to use to decode the response. Defaults to
parsing as JSON.
backoff_on_all_error_codes: Back off if we get any error response
Returns:
Succeeds when we get a 2xx HTTP response. The
@ -990,6 +997,7 @@ class MatrixFederationHttpClient:
ignore_backoff=ignore_backoff,
long_retries=long_retries,
timeout=timeout,
backoff_on_all_error_codes=backoff_on_all_error_codes,
)
if timeout is not None:
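Illustrative sketch, not part of the diff: a hedged example of how a caller might opt into the stricter back-off when PUTting to a federation endpoint (the destination, path and payload are invented):

# Sketch: with backoff_on_all_error_codes=True, any error response (not just
# 404/429) marks the destination as down for back-off purposes.
response = await federation_http_client.put_json(
    destination="remote.example.org",
    path="/_matrix/federation/v1/send/123",
    data={"pdus": []},
    backoff_on_all_error_codes=True,
)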

View File

@ -115,7 +115,13 @@ def return_json_error(
if exc.headers is not None:
for header, value in exc.headers.items():
request.setHeader(header, value)
logger.info("%s SynapseError: %s - %s", request, error_code, exc.msg)
error_ctx = exc.debug_context
if error_ctx:
logger.info(
"%s SynapseError: %s - %s (%s)", request, error_code, exc.msg, error_ctx
)
else:
logger.info("%s SynapseError: %s - %s", request, error_code, exc.msg)
elif f.check(CancelledError):
error_code = HTTP_STATUS_REQUEST_CANCELLED
error_dict = {"error": "Request cancelled", "errcode": Codes.UNKNOWN}

View File

@ -44,6 +44,7 @@ _IGNORED_LOG_RECORD_ATTRIBUTES = {
"processName",
"relativeCreated",
"stack_info",
"taskName",
"thread",
"threadName",
}

View File

@ -809,23 +809,24 @@ def run_in_background( # type: ignore[misc]
# `res` may be a coroutine, `Deferred`, some other kind of awaitable, or a plain
# value. Convert it to a `Deferred`.
d: "defer.Deferred[R]"
if isinstance(res, typing.Coroutine):
# Wrap the coroutine in a `Deferred`.
res = defer.ensureDeferred(res)
d = defer.ensureDeferred(res)
elif isinstance(res, defer.Deferred):
pass
d = res
elif isinstance(res, Awaitable):
# `res` is probably some kind of completed awaitable, such as a `DoneAwaitable`
# or `Future` from `make_awaitable`.
res = defer.ensureDeferred(_unwrap_awaitable(res))
d = defer.ensureDeferred(_unwrap_awaitable(res))
else:
# `res` is a plain value. Wrap it in a `Deferred`.
res = defer.succeed(res)
d = defer.succeed(res)
if res.called and not res.paused:
if d.called and not d.paused:
# The function should have maintained the logcontext, so we can
# optimise out the messing about
return res
return d
# The function may have reset the context before returning, so
# we need to restore it now.
@ -843,8 +844,8 @@ def run_in_background( # type: ignore[misc]
# which is supposed to have a single entry and exit point. But
# by spawning off another deferred, we are effectively
# adding a new exit point.)
res.addBoth(_set_context_cb, ctx)
return res
d.addBoth(_set_context_cb, ctx)
return d
T = TypeVar("T")
@ -877,7 +878,7 @@ def make_deferred_yieldable(deferred: "defer.Deferred[T]") -> "defer.Deferred[T]
ResultT = TypeVar("ResultT")
def _set_context_cb(result: ResultT, context: LoggingContext) -> ResultT:
def _set_context_cb(result: ResultT, context: LoggingContextOrSentinel) -> ResultT:
"""A callback function which just sets the logging context"""
set_current_context(context)
return result

View File

@ -910,10 +910,10 @@ def _custom_sync_async_decorator(
async def _wrapper(
*args: P.args, **kwargs: P.kwargs
) -> Any: # Return type is RInner
with wrapping_logic(func, *args, **kwargs):
# type-ignore: func() returns R, but mypy doesn't know that R is
# Awaitable here.
return await func(*args, **kwargs) # type: ignore[misc]
# type-ignore: func() returns R, but mypy doesn't know that R is
# Awaitable here.
with wrapping_logic(func, *args, **kwargs): # type: ignore[arg-type]
return await func(*args, **kwargs)
else:
# The other case here handles sync functions including those decorated with
@ -980,8 +980,7 @@ def trace_with_opname(
See the module's doc string for usage examples.
"""
# type-ignore: mypy bug, see https://github.com/python/mypy/issues/12909
@contextlib.contextmanager # type: ignore[arg-type]
@contextlib.contextmanager
def _wrapping_logic(
func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
) -> Generator[None, None, None]:
@ -1024,8 +1023,7 @@ def tag_args(func: Callable[P, R]) -> Callable[P, R]:
if not opentracing:
return func
# type-ignore: mypy bug, see https://github.com/python/mypy/issues/12909
@contextlib.contextmanager # type: ignore[arg-type]
@contextlib.contextmanager
def _wrapping_logic(
func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
) -> Generator[None, None, None]:

View File

@ -214,7 +214,10 @@ class MediaRepository:
user_id=auth_user,
)
await self._generate_thumbnails(None, media_id, media_id, media_type)
try:
await self._generate_thumbnails(None, media_id, media_id, media_type)
except Exception as e:
logger.info("Failed to generate thumbnails: %s", e)
return MXCUri(self.server_name, media_id)

View File

@ -204,7 +204,7 @@ class OEmbedProvider:
calc_description_and_urls(open_graph_response, oembed["html"])
for size in ("width", "height"):
val = oembed.get(size)
if type(val) is int:
if type(val) is int: # noqa: E721
open_graph_response[f"og:video:{size}"] = val
elif oembed_type == "link":

View File

@ -78,7 +78,7 @@ class Thumbnailer:
image_exif = self.image._getexif() # type: ignore
if image_exif is not None:
image_orientation = image_exif.get(EXIF_ORIENTATION_TAG)
assert type(image_orientation) is int
assert type(image_orientation) is int # noqa: E721
self.transpose_method = EXIF_TRANSPOSE_MAPPINGS.get(image_orientation)
except Exception as e:
# A lot of parsing errors can happen when parsing EXIF

View File

@ -1180,7 +1180,7 @@ class ModuleApi:
# Send to remote destinations.
destination = UserID.from_string(user).domain
presence_handler.get_federation_queue().send_presence_to_destinations(
await presence_handler.get_federation_queue().send_presence_to_destinations(
presence_events, [destination]
)

View File

@ -379,7 +379,7 @@ class BulkPushRuleEvaluator:
keys = list(notification_levels.keys())
for key in keys:
level = notification_levels.get(key, SENTINEL)
if level is not SENTINEL and type(level) is not int:
if level is not SENTINEL and type(level) is not int: # noqa: E721
try:
notification_levels[key] = int(level)
except (TypeError, ValueError):
@ -472,7 +472,11 @@ StateGroup = Union[object, int]
def _is_simple_value(value: Any) -> bool:
return isinstance(value, (bool, str)) or type(value) is int or value is None
return (
isinstance(value, (bool, str))
or type(value) is int # noqa: E721
or value is None
)
def _flatten_dict(

View File

@ -62,7 +62,7 @@ class ReplicationMultiUserDevicesResyncRestServlet(ReplicationEndpoint):
NAME = "multi_user_device_resync"
PATH_ARGS = ()
CACHE = False
CACHE = True
def __init__(self, hs: "HomeServer"):
super().__init__(hs)

View File

@ -13,7 +13,7 @@
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Tuple
from typing import TYPE_CHECKING, Optional, Tuple
from twisted.web.server import Request
@ -51,14 +51,14 @@ class ReplicationBumpPresenceActiveTime(ReplicationEndpoint):
self._presence_handler = hs.get_presence_handler()
@staticmethod
async def _serialize_payload(user_id: str) -> JsonDict: # type: ignore[override]
return {}
async def _serialize_payload(user_id: str, device_id: Optional[str]) -> JsonDict: # type: ignore[override]
return {"device_id": device_id}
async def _handle_request( # type: ignore[override]
self, request: Request, content: JsonDict, user_id: str
) -> Tuple[int, JsonDict]:
await self._presence_handler.bump_presence_active_time(
UserID.from_string(user_id)
UserID.from_string(user_id), content.get("device_id")
)
return (200, {})
@ -73,8 +73,8 @@ class ReplicationPresenceSetState(ReplicationEndpoint):
{
"state": { ... },
"ignore_status_msg": false,
"force_notify": false
"force_notify": false,
"is_sync": false
}
200 OK
@ -95,14 +95,16 @@ class ReplicationPresenceSetState(ReplicationEndpoint):
@staticmethod
async def _serialize_payload( # type: ignore[override]
user_id: str,
device_id: Optional[str],
state: JsonDict,
ignore_status_msg: bool = False,
force_notify: bool = False,
is_sync: bool = False,
) -> JsonDict:
return {
"device_id": device_id,
"state": state,
"ignore_status_msg": ignore_status_msg,
"force_notify": force_notify,
"is_sync": is_sync,
}
async def _handle_request( # type: ignore[override]
@ -110,9 +112,10 @@ class ReplicationPresenceSetState(ReplicationEndpoint):
) -> Tuple[int, JsonDict]:
await self._presence_handler.set_state(
UserID.from_string(user_id),
content.get("device_id"),
content["state"],
content["ignore_status_msg"],
content["force_notify"],
content.get("is_sync", False),
)
return (200, {})

View File

@ -422,7 +422,7 @@ class FederationSenderHandler:
# The federation stream contains things that we want to send out, e.g.
# presence, typing, etc.
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
await send_queue.process_rows_for_federation(self.federation_sender, rows)
await self.update_token(token)
# ... and when new receipts happen
@ -439,16 +439,14 @@ class FederationSenderHandler:
for row in rows
if not row.entity.startswith("@") and not row.is_signature
}
for host in hosts:
self.federation_sender.send_device_messages(host, immediate=False)
await self.federation_sender.send_device_messages(hosts, immediate=False)
elif stream_name == ToDeviceStream.NAME:
# The to_device stream includes stuff to be pushed to both local
# clients and remote servers, so we ignore entities that start with
# '@' (since they'll be local users rather than destinations).
hosts = {row.entity for row in rows if not row.entity.startswith("@")}
for host in hosts:
self.federation_sender.send_device_messages(host)
await self.federation_sender.send_device_messages(hosts)
async def _on_new_receipts(
self, rows: Iterable[ReceiptsStream.ReceiptsStreamRow]

View File

@ -267,27 +267,38 @@ class UserSyncCommand(Command):
NAME = "USER_SYNC"
def __init__(
self, instance_id: str, user_id: str, is_syncing: bool, last_sync_ms: int
self,
instance_id: str,
user_id: str,
device_id: Optional[str],
is_syncing: bool,
last_sync_ms: int,
):
self.instance_id = instance_id
self.user_id = user_id
self.device_id = device_id
self.is_syncing = is_syncing
self.last_sync_ms = last_sync_ms
@classmethod
def from_line(cls: Type["UserSyncCommand"], line: str) -> "UserSyncCommand":
instance_id, user_id, state, last_sync_ms = line.split(" ", 3)
device_id: Optional[str]
instance_id, user_id, device_id, state, last_sync_ms = line.split(" ", 4)
if device_id == "None":
device_id = None
if state not in ("start", "end"):
raise Exception("Invalid USER_SYNC state %r" % (state,))
return cls(instance_id, user_id, state == "start", int(last_sync_ms))
return cls(instance_id, user_id, device_id, state == "start", int(last_sync_ms))
def to_line(self) -> str:
return " ".join(
(
self.instance_id,
self.user_id,
str(self.device_id),
"start" if self.is_syncing else "end",
str(self.last_sync_ms),
)
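Illustrative sketch, not part of the diff: the USER_SYNC wire format now carries the device ID between the user ID and the start/end marker. A hedged round-trip example (instance, user and device IDs invented):

# New line format: <instance> <user_id> <device_id> <start|end> <last_sync_ms>
cmd = UserSyncCommand("worker-1", "@alice:example.org", "ABCDEFGH", True, 1693900000000)
assert cmd.to_line() == "worker-1 @alice:example.org ABCDEFGH start 1693900000000"
# A missing device round-trips through the literal string "None".
assert UserSyncCommand.from_line(cmd.to_line()).device_id == "ABCDEFGH"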
@ -452,6 +463,17 @@ class LockReleasedCommand(Command):
return json_encoder.encode([self.instance_name, self.lock_name, self.lock_key])
class NewActiveTaskCommand(_SimpleCommand):
"""Sent to inform instance handling background tasks that a new active task is available to run.
Format::
NEW_ACTIVE_TASK "<task_id>"
"""
NAME = "NEW_ACTIVE_TASK"
_COMMANDS: Tuple[Type[Command], ...] = (
ServerCommand,
RdataCommand,
@ -466,6 +488,7 @@ _COMMANDS: Tuple[Type[Command], ...] = (
RemoteServerUpCommand,
ClearUserSyncsCommand,
LockReleasedCommand,
NewActiveTaskCommand,
)
# Map of command name to command type.

View File

@ -40,6 +40,7 @@ from synapse.replication.tcp.commands import (
Command,
FederationAckCommand,
LockReleasedCommand,
NewActiveTaskCommand,
PositionCommand,
RdataCommand,
RemoteServerUpCommand,
@ -238,6 +239,10 @@ class ReplicationCommandHandler:
if self._is_master:
self._server_notices_sender = hs.get_server_notices_sender()
self._task_scheduler = None
if hs.config.worker.run_background_tasks:
self._task_scheduler = hs.get_task_scheduler()
if hs.config.redis.redis_enabled:
# If we're using Redis, it's the background worker that should
# receive USER_IP commands and store the relevant client IPs.
@ -423,7 +428,11 @@ class ReplicationCommandHandler:
if self._is_presence_writer:
return self._presence_handler.update_external_syncs_row(
cmd.instance_id, cmd.user_id, cmd.is_syncing, cmd.last_sync_ms
cmd.instance_id,
cmd.user_id,
cmd.device_id,
cmd.is_syncing,
cmd.last_sync_ms,
)
else:
return None
@ -663,6 +672,15 @@ class ReplicationCommandHandler:
cmd.instance_name, cmd.lock_name, cmd.lock_key
)
async def on_NEW_ACTIVE_TASK(
self, conn: IReplicationConnection, cmd: NewActiveTaskCommand
) -> None:
"""Called when get a new NEW_ACTIVE_TASK command."""
if self._task_scheduler:
task = await self._task_scheduler.get_task(cmd.data)
if task:
await self._task_scheduler._launch_task(task)
def new_connection(self, connection: IReplicationConnection) -> None:
"""Called when we have a new connection."""
self._connections.append(connection)
@ -685,9 +703,9 @@ class ReplicationCommandHandler:
)
now = self._clock.time_msec()
for user_id in currently_syncing:
for user_id, device_id in currently_syncing:
connection.send_command(
UserSyncCommand(self._instance_id, user_id, True, now)
UserSyncCommand(self._instance_id, user_id, device_id, True, now)
)
def lost_connection(self, connection: IReplicationConnection) -> None:
@ -739,11 +757,16 @@ class ReplicationCommandHandler:
self.send_command(FederationAckCommand(self._instance_name, token))
def send_user_sync(
self, instance_id: str, user_id: str, is_syncing: bool, last_sync_ms: int
self,
instance_id: str,
user_id: str,
device_id: Optional[str],
is_syncing: bool,
last_sync_ms: int,
) -> None:
"""Poke the master that a user has started/stopped syncing."""
self.send_command(
UserSyncCommand(instance_id, user_id, is_syncing, last_sync_ms)
UserSyncCommand(instance_id, user_id, device_id, is_syncing, last_sync_ms)
)
def send_user_ip(
@ -776,6 +799,10 @@ class ReplicationCommandHandler:
if instance_name == self._instance_name:
self.send_command(LockReleasedCommand(instance_name, lock_name, lock_key))
def send_new_active_task(self, task_id: str) -> None:
"""Called when a new task has been scheduled for immediate launch and is ACTIVE."""
self.send_command(NewActiveTaskCommand(task_id))
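Illustrative sketch, not part of the diff: a rough picture of the flow these hunks add, with handler names and the exact `schedule_task` keyword arguments assumed rather than taken from this diff:

#  1. TaskScheduler.schedule_task(...) on a worker that does not run
#     background tasks sees the task is ACTIVE and calls
#     replication_client.send_new_active_task(task.id).
#  2. That emits a NEW_ACTIVE_TASK command over the replication connection.
#  3. The background-task worker handles it in on_NEW_ACTIVE_TASK, looks the
#     task up and calls _launch_task(task) immediately instead of waiting for
#     the next reconciliation loop.
task_id = await task_scheduler.schedule_task("example_action")  # signature assumed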
UpdateToken = TypeVar("UpdateToken")
UpdateRow = TypeVar("UpdateRow")

View File

@ -157,7 +157,7 @@ class PurgeHistoryRestServlet(RestServlet):
logger.info("[purge] purging up to token %s (event_id %s)", token, event_id)
elif "purge_up_to_ts" in body:
ts = body["purge_up_to_ts"]
if type(ts) is not int:
if type(ts) is not int: # noqa: E721
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"purge_up_to_ts must be an int",

View File

@ -143,7 +143,7 @@ class NewRegistrationTokenRestServlet(RestServlet):
else:
# Get length of token to generate (default is 16)
length = body.get("length", 16)
if type(length) is not int:
if type(length) is not int: # noqa: E721
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"length must be an integer",
@ -163,7 +163,8 @@ class NewRegistrationTokenRestServlet(RestServlet):
uses_allowed = body.get("uses_allowed", None)
if not (
uses_allowed is None or (type(uses_allowed) is int and uses_allowed >= 0)
uses_allowed is None
or (type(uses_allowed) is int and uses_allowed >= 0) # noqa: E721
):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
@ -172,13 +173,16 @@ class NewRegistrationTokenRestServlet(RestServlet):
)
expiry_time = body.get("expiry_time", None)
if type(expiry_time) not in (int, type(None)):
if expiry_time is not None and type(expiry_time) is not int: # noqa: E721
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"expiry_time must be an integer or null",
Codes.INVALID_PARAM,
)
if type(expiry_time) is int and expiry_time < self.clock.time_msec():
if (
type(expiry_time) is int # noqa: E721
and expiry_time < self.clock.time_msec()
):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"expiry_time must not be in the past",
@ -283,7 +287,7 @@ class RegistrationTokenRestServlet(RestServlet):
uses_allowed = body["uses_allowed"]
if not (
uses_allowed is None
or (type(uses_allowed) is int and uses_allowed >= 0)
or (type(uses_allowed) is int and uses_allowed >= 0) # noqa: E721
):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
@ -294,13 +298,16 @@ class RegistrationTokenRestServlet(RestServlet):
if "expiry_time" in body:
expiry_time = body["expiry_time"]
if type(expiry_time) not in (int, type(None)):
if expiry_time is not None and type(expiry_time) is not int: # noqa: E721
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"expiry_time must be an integer or null",
Codes.INVALID_PARAM,
)
if type(expiry_time) is int and expiry_time < self.clock.time_msec():
if (
type(expiry_time) is int # noqa: E721
and expiry_time < self.clock.time_msec()
):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"expiry_time must not be in the past",

View File

@ -132,6 +132,7 @@ class UsersRestServletV2(RestServlet):
UserSortOrder.AVATAR_URL.value,
UserSortOrder.SHADOW_BANNED.value,
UserSortOrder.CREATION_TS.value,
UserSortOrder.LAST_SEEN_TS.value,
),
)
@ -1172,14 +1173,17 @@ class RateLimitRestServlet(RestServlet):
messages_per_second = body.get("messages_per_second", 0)
burst_count = body.get("burst_count", 0)
if type(messages_per_second) is not int or messages_per_second < 0:
if (
type(messages_per_second) is not int # noqa: E721
or messages_per_second < 0
):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"%r parameter must be a positive int" % (messages_per_second,),
errcode=Codes.INVALID_PARAM,
)
if type(burst_count) is not int or burst_count < 0:
if type(burst_count) is not int or burst_count < 0: # noqa: E721
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"%r parameter must be a positive int" % (burst_count,),

View File

@ -120,14 +120,12 @@ class LoginRestServlet(RestServlet):
self._address_ratelimiter = Ratelimiter(
store=self._main_store,
clock=hs.get_clock(),
rate_hz=self.hs.config.ratelimiting.rc_login_address.per_second,
burst_count=self.hs.config.ratelimiting.rc_login_address.burst_count,
cfg=self.hs.config.ratelimiting.rc_login_address,
)
self._account_ratelimiter = Ratelimiter(
store=self._main_store,
clock=hs.get_clock(),
rate_hz=self.hs.config.ratelimiting.rc_login_account.per_second,
burst_count=self.hs.config.ratelimiting.rc_login_account.burst_count,
cfg=self.hs.config.ratelimiting.rc_login_account,
)
# ensure the CAS/SAML/OIDC handlers are loaded on this worker instance.

View File

@ -16,6 +16,7 @@ import logging
from typing import TYPE_CHECKING, Tuple
from synapse.api.ratelimiting import Ratelimiter
from synapse.config.ratelimiting import RatelimitSettings
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest
@ -66,15 +67,18 @@ class LoginTokenRequestServlet(RestServlet):
self.token_timeout = hs.config.auth.login_via_existing_token_timeout
self._require_ui_auth = hs.config.auth.login_via_existing_require_ui_auth
# Ratelimit aggressively to a maxmimum of 1 request per minute.
# Ratelimit aggressively to a maximum of 1 request per minute.
#
# This endpoint can be used to spawn additional sessions and could be
# abused by a malicious client to create many sessions.
self._ratelimiter = Ratelimiter(
store=self._main_store,
clock=hs.get_clock(),
rate_hz=1 / 60,
burst_count=1,
cfg=RatelimitSettings(
key="<login token request>",
per_second=1 / 60,
burst_count=1,
),
)
@interactive_auth_handler

View File

@ -97,7 +97,7 @@ class PresenceStatusRestServlet(RestServlet):
raise SynapseError(400, "Unable to parse state")
if self._use_presence:
await self.presence_handler.set_state(user, state)
await self.presence_handler.set_state(user, requester.device_id, state)
return 200, {}

View File

@ -52,7 +52,9 @@ class ReadMarkerRestServlet(RestServlet):
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await self.presence_handler.bump_presence_active_time(requester.user)
await self.presence_handler.bump_presence_active_time(
requester.user, requester.device_id
)
body = parse_json_object_from_request(request)

View File

@ -94,7 +94,9 @@ class ReceiptRestServlet(RestServlet):
Codes.INVALID_PARAM,
)
await self.presence_handler.bump_presence_active_time(requester.user)
await self.presence_handler.bump_presence_active_time(
requester.user, requester.device_id
)
if receipt_type == ReceiptTypes.FULLY_READ:
await self.read_marker_handler.received_client_read_marker(

View File

@ -376,8 +376,7 @@ class RegistrationTokenValidityRestServlet(RestServlet):
self.ratelimiter = Ratelimiter(
store=self.store,
clock=hs.get_clock(),
rate_hz=hs.config.ratelimiting.rc_registration_token_validity.per_second,
burst_count=hs.config.ratelimiting.rc_registration_token_validity.burst_count,
cfg=hs.config.ratelimiting.rc_registration_token_validity,
)
async def on_GET(self, request: Request) -> Tuple[int, JsonDict]:

View File

@ -55,7 +55,7 @@ class ReportEventRestServlet(RestServlet):
"Param 'reason' must be a string",
Codes.BAD_JSON,
)
if type(body.get("score", 0)) is not int:
if type(body.get("score", 0)) is not int: # noqa: E721
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Param 'score' must be an integer",

View File

@ -1229,7 +1229,9 @@ class RoomTypingRestServlet(RestServlet):
content = parse_json_object_from_request(request)
await self.presence_handler.bump_presence_active_time(requester.user)
await self.presence_handler.bump_presence_active_time(
requester.user, requester.device_id
)
# Limit timeout to stop people from setting silly typing timeouts.
timeout = min(content.get("timeout", 30000), 120000)

View File

@ -205,6 +205,7 @@ class SyncRestServlet(RestServlet):
context = await self.presence_handler.user_syncing(
user.to_string(),
requester.device_id,
affect_presence=affect_presence,
presence_state=set_presence,
)

View File

@ -16,6 +16,7 @@ import logging
import re
from typing import TYPE_CHECKING, Dict, Mapping, Optional, Set, Tuple
from pydantic import Extra, StrictInt, StrictStr
from signedjson.sign import sign_json
from twisted.web.server import Request
@ -24,9 +25,10 @@ from synapse.crypto.keyring import ServerKeyFetcher
from synapse.http.server import HttpServer
from synapse.http.servlet import (
RestServlet,
parse_and_validate_json_object_from_request,
parse_integer,
parse_json_object_from_request,
)
from synapse.rest.models import RequestBodyModel
from synapse.storage.keys import FetchKeyResultForRemote
from synapse.types import JsonDict
from synapse.util import json_decoder
@ -38,6 +40,13 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
class _KeyQueryCriteriaDataModel(RequestBodyModel):
class Config:
extra = Extra.allow
minimum_valid_until_ts: Optional[StrictInt]
class RemoteKey(RestServlet):
"""HTTP resource for retrieving the TLS certificate and NACL signature
verification keys for a collection of servers. Checks that the reported
@ -96,6 +105,9 @@ class RemoteKey(RestServlet):
CATEGORY = "Federation requests"
class PostBody(RequestBodyModel):
server_keys: Dict[StrictStr, Dict[StrictStr, _KeyQueryCriteriaDataModel]]
def __init__(self, hs: "HomeServer"):
self.fetcher = ServerKeyFetcher(hs)
self.store = hs.get_datastores().main
@ -137,24 +149,29 @@ class RemoteKey(RestServlet):
)
minimum_valid_until_ts = parse_integer(request, "minimum_valid_until_ts")
arguments = {}
if minimum_valid_until_ts is not None:
arguments["minimum_valid_until_ts"] = minimum_valid_until_ts
query = {server: {key_id: arguments}}
query = {
server: {
key_id: _KeyQueryCriteriaDataModel(
minimum_valid_until_ts=minimum_valid_until_ts
)
}
}
else:
query = {server: {}}
return 200, await self.query_keys(query, query_remote_on_cache_miss=True)
async def on_POST(self, request: Request) -> Tuple[int, JsonDict]:
content = parse_json_object_from_request(request)
content = parse_and_validate_json_object_from_request(request, self.PostBody)
query = content["server_keys"]
query = content.server_keys
return 200, await self.query_keys(query, query_remote_on_cache_miss=True)
async def query_keys(
self, query: JsonDict, query_remote_on_cache_miss: bool = False
self,
query: Dict[str, Dict[str, _KeyQueryCriteriaDataModel]],
query_remote_on_cache_miss: bool = False,
) -> JsonDict:
logger.info("Handling query for keys %r", query)
@ -196,8 +213,10 @@ class RemoteKey(RestServlet):
else:
ts_added_ms = key_result.added_ts
ts_valid_until_ms = key_result.valid_until_ts
req_key = query.get(server_name, {}).get(key_id, {})
req_valid_until = req_key.get("minimum_valid_until_ts")
req_key = query.get(server_name, {}).get(
key_id, _KeyQueryCriteriaDataModel(minimum_valid_until_ts=None)
)
req_valid_until = req_key.minimum_valid_until_ts
if req_valid_until is not None:
if ts_valid_until_ms < req_valid_until:
logger.debug(
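Illustrative sketch, not part of the diff: the POST body is now validated against the pydantic `PostBody` model above, i.e. a map of server name to key ID to criteria, where the criteria may carry an optional `minimum_valid_until_ts` and extra fields are tolerated. A hedged example body for the key query endpoint (server and key names invented):

body = {
    "server_keys": {
        "remote.example.org": {
            "ed25519:abc123": {"minimum_valid_until_ts": 1693900000000}
        }
    }
}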

View File

@ -408,8 +408,7 @@ class HomeServer(metaclass=abc.ABCMeta):
return Ratelimiter(
store=self.get_datastores().main,
clock=self.get_clock(),
rate_hz=self.config.ratelimiting.rc_registration.per_second,
burst_count=self.config.ratelimiting.rc_registration.burst_count,
cfg=self.config.ratelimiting.rc_registration,
)
@cache_in_self

View File

@ -405,14 +405,14 @@ class BackgroundUpdater:
try:
result = await self.do_next_background_update(sleep)
back_to_back_failures = 0
except Exception:
except Exception as e:
logger.exception("Error doing update: %s", e)
back_to_back_failures += 1
if back_to_back_failures >= 5:
self._aborted = True
raise RuntimeError(
"5 back-to-back background update failures; aborting."
)
logger.exception("Error doing update")
else:
if result:
logger.info(

View File

@ -31,6 +31,7 @@ from typing import (
Iterator,
List,
Optional,
Sequence,
Tuple,
Type,
TypeVar,
@ -358,7 +359,21 @@ class LoggingTransaction:
return self.txn.rowcount
@property
def description(self) -> Any:
def description(
self,
) -> Optional[
Sequence[
Tuple[
str,
Optional[Any],
Optional[int],
Optional[int],
Optional[int],
Optional[int],
Optional[int],
]
]
]:
return self.txn.description
def execute_batch(self, sql: str, args: Iterable[Iterable[Any]]) -> None:

View File

@ -277,6 +277,10 @@ class DataStore(
FROM users as u
LEFT JOIN profiles AS p ON u.name = p.full_user_id
LEFT JOIN erased_users AS eu ON u.name = eu.user_id
LEFT JOIN (
SELECT user_id, MAX(last_seen) AS last_seen_ts
FROM user_ips GROUP BY user_id
) ls ON u.name = ls.user_id
{where_clause}
"""
sql = "SELECT COUNT(*) as total_users " + sql_base
@ -286,7 +290,7 @@ class DataStore(
sql = f"""
SELECT name, user_type, is_guest, admin, deactivated, shadow_banned,
displayname, avatar_url, creation_ts * 1000 as creation_ts, approved,
eu.user_id is not null as erased
eu.user_id is not null as erased, last_seen_ts
{sql_base}
ORDER BY {order_by_column} {order}, u.name ASC
LIMIT ? OFFSET ?

View File

@ -978,26 +978,12 @@ class PersistEventsStore:
"""Persist the mapping from transaction IDs to event IDs (if defined)."""
inserted_ts = self._clock.time_msec()
to_insert_token_id: List[Tuple[str, str, str, int, str, int]] = []
to_insert_device_id: List[Tuple[str, str, str, str, str, int]] = []
for event, _ in events_and_contexts:
txn_id = getattr(event.internal_metadata, "txn_id", None)
token_id = getattr(event.internal_metadata, "token_id", None)
device_id = getattr(event.internal_metadata, "device_id", None)
if txn_id is not None:
if token_id is not None:
to_insert_token_id.append(
(
event.event_id,
event.room_id,
event.sender,
token_id,
txn_id,
inserted_ts,
)
)
if device_id is not None:
to_insert_device_id.append(
(
@ -1010,26 +996,7 @@ class PersistEventsStore:
)
)
# Synapse usually relies on the device_id to scope transactions for events,
# except for users without device IDs (appservice, guests, and access
# tokens minted with the admin API) which use the access token ID instead.
#
# TODO https://github.com/matrix-org/synapse/issues/16042
if to_insert_token_id:
self.db_pool.simple_insert_many_txn(
txn,
table="event_txn_id",
keys=(
"event_id",
"room_id",
"user_id",
"token_id",
"txn_id",
"inserted_ts",
),
values=to_insert_token_id,
)
# Synapse relies on the device_id to scope transactions for events.
if to_insert_device_id:
self.db_pool.simple_insert_many_txn(
txn,
@ -1671,7 +1638,7 @@ class PersistEventsStore:
if self._ephemeral_messages_enabled:
# If there's an expiry timestamp on the event, store it.
expiry_ts = event.content.get(EventContentFields.SELF_DESTRUCT_AFTER)
if type(expiry_ts) is int and not event.is_state():
if type(expiry_ts) is int and not event.is_state(): # noqa: E721
self._insert_event_expiry_txn(txn, event.event_id, expiry_ts)
# Insert into the room_memberships table.
@ -2039,10 +2006,10 @@ class PersistEventsStore:
):
if (
"min_lifetime" in event.content
and type(event.content["min_lifetime"]) is not int
and type(event.content["min_lifetime"]) is not int # noqa: E721
) or (
"max_lifetime" in event.content
and type(event.content["max_lifetime"]) is not int
and type(event.content["max_lifetime"]) is not int # noqa: E721
):
# Ignore the event if one of the values isn't an integer.
return

View File

@ -2022,25 +2022,6 @@ class EventsWorkerStore(SQLBaseStore):
desc="get_next_event_to_expire", func=get_next_event_to_expire_txn
)
async def get_event_id_from_transaction_id_and_token_id(
self, room_id: str, user_id: str, token_id: int, txn_id: str
) -> Optional[str]:
"""Look up if we have already persisted an event for the transaction ID,
returning the event ID if so.
"""
return await self.db_pool.simple_select_one_onecol(
table="event_txn_id",
keyvalues={
"room_id": room_id,
"user_id": user_id,
"token_id": token_id,
"txn_id": txn_id,
},
retcol="event_id",
allow_none=True,
desc="get_event_id_from_transaction_id_and_token_id",
)
async def get_event_id_from_transaction_id_and_device_id(
self, room_id: str, user_id: str, device_id: str, txn_id: str
) -> Optional[str]:
@ -2072,29 +2053,35 @@ class EventsWorkerStore(SQLBaseStore):
"""
mapping = {}
txn_id_to_event: Dict[Tuple[str, int, str], str] = {}
txn_id_to_event: Dict[Tuple[str, str, str, str], str] = {}
for event in events:
token_id = getattr(event.internal_metadata, "token_id", None)
device_id = getattr(event.internal_metadata, "device_id", None)
txn_id = getattr(event.internal_metadata, "txn_id", None)
if token_id and txn_id:
if device_id and txn_id:
# Check if this is a duplicate of an event in the given events.
existing = txn_id_to_event.get((event.room_id, token_id, txn_id))
existing = txn_id_to_event.get(
(event.room_id, event.sender, device_id, txn_id)
)
if existing:
mapping[event.event_id] = existing
continue
# Check if this is a duplicate of an event we've already
# persisted.
existing = await self.get_event_id_from_transaction_id_and_token_id(
event.room_id, event.sender, token_id, txn_id
existing = await self.get_event_id_from_transaction_id_and_device_id(
event.room_id, event.sender, device_id, txn_id
)
if existing:
mapping[event.event_id] = existing
txn_id_to_event[(event.room_id, token_id, txn_id)] = existing
txn_id_to_event[
(event.room_id, event.sender, device_id, txn_id)
] = existing
else:
txn_id_to_event[(event.room_id, token_id, txn_id)] = event.event_id
txn_id_to_event[
(event.room_id, event.sender, device_id, txn_id)
] = event.event_id
return mapping

View File

@ -17,7 +17,7 @@ from types import TracebackType
from typing import TYPE_CHECKING, Collection, Optional, Set, Tuple, Type
from weakref import WeakValueDictionary
from twisted.internet.interfaces import IReactorCore
from twisted.internet.task import LoopingCall
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage._base import SQLBaseStore
@ -26,6 +26,7 @@ from synapse.storage.database import (
LoggingDatabaseConnection,
LoggingTransaction,
)
from synapse.types import ISynapseReactor
from synapse.util import Clock
from synapse.util.stringutils import random_string
@ -358,7 +359,7 @@ class Lock:
def __init__(
self,
reactor: IReactorCore,
reactor: ISynapseReactor,
clock: Clock,
store: LockStore,
read_write: bool,
@ -377,19 +378,25 @@ class Lock:
self._table = "worker_read_write_locks" if read_write else "worker_locks"
self._looping_call = clock.looping_call(
self._renew,
_RENEWAL_INTERVAL_MS,
store,
clock,
read_write,
lock_name,
lock_key,
token,
)
# We might be called from a non-main thread, so we defer setting up the
# looping call.
self._looping_call: Optional[LoopingCall] = None
reactor.callFromThread(self._setup_looping_call)
self._dropped = False
def _setup_looping_call(self) -> None:
self._looping_call = self._clock.looping_call(
self._renew,
_RENEWAL_INTERVAL_MS,
self._store,
self._clock,
self._read_write,
self._lock_name,
self._lock_key,
self._token,
)
@staticmethod
@wrap_as_background_process("Lock._renew")
async def _renew(
@ -459,7 +466,7 @@ class Lock:
if self._dropped:
return
if self._looping_call.running:
if self._looping_call and self._looping_call.running:
self._looping_call.stop()
await self._store.db_pool.simple_delete(
@ -486,8 +493,9 @@ class Lock:
# We should not be dropped without the lock being released (unless
# we're shutting down), but if we are then let's at least stop
# renewing the lock.
if self._looping_call.running:
self._looping_call.stop()
if self._looping_call and self._looping_call.running:
# We might be called from a non-main thread.
self._reactor.callFromThread(self._looping_call.stop)
if self._reactor.running:
logger.error(

View File

@ -88,7 +88,6 @@ def _load_rules(
msc1767_enabled=experimental_config.msc1767_enabled,
msc3664_enabled=experimental_config.msc3664_enabled,
msc3381_polls_enabled=experimental_config.msc3381_polls_enabled,
msc3958_suppress_edits_enabled=experimental_config.msc3958_supress_edit_notifs,
)
return filtered_rules

View File

@ -206,8 +206,12 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
consent_server_notice_sent, appservice_id, creation_ts, user_type,
deactivated, COALESCE(shadow_banned, FALSE) AS shadow_banned,
COALESCE(approved, TRUE) AS approved,
COALESCE(locked, FALSE) AS locked
COALESCE(locked, FALSE) AS locked, last_seen_ts
FROM users
LEFT JOIN (
SELECT user_id, MAX(last_seen) AS last_seen_ts
FROM user_ips GROUP BY user_id
) ls ON users.name = ls.user_id
WHERE name = ?
""",
(user_id,),
@ -268,6 +272,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
is_shadow_banned=bool(user_data["shadow_banned"]),
user_id=UserID.from_string(user_data["name"]),
user_type=user_data["user_type"],
last_seen_ts=user_data["last_seen_ts"],
)
async def is_trial_user(self, user_id: str) -> bool:

View File

@ -107,6 +107,7 @@ class UserSortOrder(Enum):
AVATAR_URL = "avatar_url"
SHADOW_BANNED = "shadow_banned"
CREATION_TS = "creation_ts"
LAST_SEEN_TS = "last_seen_ts"
class StatsStore(StateDeltasStore):

View File

@ -14,7 +14,7 @@
import logging
from enum import Enum
from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple, cast
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple, cast
import attr
from canonicaljson import encode_canonical_json
@ -28,8 +28,8 @@ from synapse.storage.database import (
LoggingTransaction,
)
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.types import JsonDict
from synapse.util.caches.descriptors import cached
from synapse.types import JsonDict, StrCollection
from synapse.util.caches.descriptors import cached, cachedList
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -205,6 +205,26 @@ class TransactionWorkerStore(CacheInvalidationWorkerStore):
else:
return None
@cachedList(
cached_method_name="get_destination_retry_timings", list_name="destinations"
)
async def get_destination_retry_timings_batch(
self, destinations: StrCollection
) -> Dict[str, Optional[DestinationRetryTimings]]:
rows = await self.db_pool.simple_select_many_batch(
table="destinations",
iterable=destinations,
column="destination",
retcols=("destination", "failure_ts", "retry_last_ts", "retry_interval"),
desc="get_destination_retry_timings_batch",
)
return {
row.pop("destination"): DestinationRetryTimings(**row)
for row in rows
if row["retry_last_ts"] and row["failure_ts"] and row["retry_interval"]
}
async def set_destination_retry_timings(
self,
destination: str,

View File

@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
SCHEMA_VERSION = 80 # remember to update the list below when updating
SCHEMA_VERSION = 81 # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema
This should be incremented whenever the codebase changes its requirements on the
@ -114,19 +114,15 @@ Changes in SCHEMA_VERSION = 79
Changes in SCHEMA_VERSION = 80
- The event_txn_id_device_id is always written to for new events.
- Add tables for the task scheduler.
Changes in SCHEMA_VERSION = 81
- The event_txn_id is no longer written to for new events.
"""
SCHEMA_COMPAT_VERSION = (
# Queries against `event_stream_ordering` columns in membership tables must
# be disambiguated.
#
# The threads_id column must written to with non-null values for the
# event_push_actions, event_push_actions_staging, and event_push_summary tables.
#
# insertions to the column `full_user_id` of tables profiles and user_filters can no
# longer be null
76
# The `event_txn_id_device_id` must be written to for new events.
80
)
"""Limit on how far the synapse codebase can be rolled back without breaking db compat

View File

@ -946,6 +946,7 @@ class UserInfo:
is_guest: True if the user is a guest user.
is_shadow_banned: True if the user has been shadow-banned.
user_type: User type (None for normal user, 'support' and 'bot' other options).
last_seen_ts: Last activity timestamp of the user.
"""
user_id: UserID
@ -958,6 +959,7 @@ class UserInfo:
is_deactivated: bool
is_guest: bool
is_shadow_banned: bool
last_seen_ts: Optional[int]
class UserProfile(TypedDict):

View File

@ -470,7 +470,7 @@ class CacheMultipleEntries(CacheEntry[KT, VT]):
def deferred(self, key: KT) -> "defer.Deferred[VT]":
if not self._deferred:
self._deferred = ObservableDeferred(defer.Deferred(), consumeErrors=True)
return self._deferred.observe().addCallback(lambda res: res.get(key))
return self._deferred.observe().addCallback(lambda res: res[key])
def add_invalidation_callback(
self, key: KT, callback: Optional[Callable[[], None]]

View File

@ -51,9 +51,9 @@ class DependencyException(Exception):
DEV_EXTRAS = {"lint", "mypy", "test", "dev"}
RUNTIME_EXTRAS = (
set(metadata.metadata(DISTRIBUTION_NAME).get_all("Provides-Extra")) - DEV_EXTRAS
)
ALL_EXTRAS = metadata.metadata(DISTRIBUTION_NAME).get_all("Provides-Extra")
assert ALL_EXTRAS is not None
RUNTIME_EXTRAS = set(ALL_EXTRAS) - DEV_EXTRAS
VERSION = metadata.version(DISTRIBUTION_NAME)

View File

@ -291,7 +291,8 @@ class _PerHostRatelimiter:
if self.metrics_name:
rate_limit_reject_counter.labels(self.metrics_name).inc()
raise LimitExceededError(
retry_after_ms=int(self.window_size / self.sleep_limit)
limiter_name="rc_federation",
retry_after_ms=int(self.window_size / self.sleep_limit),
)
self.request_times.append(time_now)

View File

@ -19,6 +19,7 @@ from typing import TYPE_CHECKING, Any, Optional, Type
from synapse.api.errors import CodeMessageException
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage import DataStore
from synapse.types import StrCollection
from synapse.util import Clock
if TYPE_CHECKING:
@ -116,6 +117,30 @@ async def get_retry_limiter(
)
async def filter_destinations_by_retry_limiter(
destinations: StrCollection,
clock: Clock,
store: DataStore,
retry_due_within_ms: int = 0,
) -> StrCollection:
"""Filter down the list of destinations to only those that will are either
alive or due for a retry (within `retry_due_within_ms`)
"""
if not destinations:
return destinations
retry_timings = await store.get_destination_retry_timings_batch(destinations)
now = int(clock.time_msec())
return [
destination
for destination, timings in retry_timings.items()
if timings is None
or timings.retry_last_ts + timings.retry_interval <= now + retry_due_within_ms
]
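Illustrative sketch, not part of the diff: a short usage example of the new helper (the destination list is made up); the typing-handler hunk earlier uses it the same way to avoid waking servers that are backing off:

# Sketch: drop destinations that are still in back-off before fanning out.
hosts = ["up.example.org", "down.example.org", "flaky.example.org"]
reachable = await filter_destinations_by_retry_limiter(
    hosts,
    clock=clock,
    store=store,
)
for destination in reachable:
    ...  # only servers that are alive or due a retry get contacted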
class RetryDestinationLimiter:
def __init__(
self,
@ -128,6 +153,7 @@ class RetryDestinationLimiter:
backoff_on_failure: bool = True,
notifier: Optional["Notifier"] = None,
replication_client: Optional["ReplicationCommandHandler"] = None,
backoff_on_all_error_codes: bool = False,
):
"""Marks the destination as "down" if an exception is thrown in the
context, except for CodeMessageException with code < 500.
@ -147,6 +173,9 @@ class RetryDestinationLimiter:
backoff_on_failure: set to False if we should not increase the
retry interval on a failure.
backoff_on_all_error_codes: Whether we should back off on any
error code.
"""
self.clock = clock
self.store = store
@ -156,6 +185,7 @@ class RetryDestinationLimiter:
self.retry_interval = retry_interval
self.backoff_on_404 = backoff_on_404
self.backoff_on_failure = backoff_on_failure
self.backoff_on_all_error_codes = backoff_on_all_error_codes
self.notifier = notifier
self.replication_client = replication_client
@ -179,6 +209,7 @@ class RetryDestinationLimiter:
exc_val: Optional[BaseException],
exc_tb: Optional[TracebackType],
) -> None:
success = exc_type is None
valid_err_code = False
if exc_type is None:
valid_err_code = True
@ -195,7 +226,9 @@ class RetryDestinationLimiter:
# won't accept our requests for at least a while.
# 429 is us being aggressively rate limited, so lets rate limit
# ourselves.
if exc_val.code == 404 and self.backoff_on_404:
if self.backoff_on_all_error_codes:
valid_err_code = False
elif exc_val.code == 404 and self.backoff_on_404:
valid_err_code = False
elif exc_val.code in (401, 429):
valid_err_code = False
@ -204,7 +237,7 @@ class RetryDestinationLimiter:
else:
valid_err_code = False
if valid_err_code:
if success:
# We connected successfully.
if not self.retry_interval:
return
@ -215,6 +248,12 @@ class RetryDestinationLimiter:
self.failure_ts = None
retry_last_ts = 0
self.retry_interval = 0
elif valid_err_code:
# We got a potentially valid error code back. We don't reset the
# timers though, as the other side might actually be down anyway
# (e.g. some deprovisioned servers will always return a 404 or 403,
# and we don't want to keep resetting the retry timers for them).
return
elif not self.backoff_on_failure:
return
else:

View File

@ -57,14 +57,13 @@ class TaskScheduler:
the code launching the task.
You can also specify the `result` (and/or an `error`) when returning from the function.
The reconciliation loop runs every 5 mns, so this is not a precise scheduler. When wanting
to launch now, the launch will still not happen before the next loop run.
Tasks will be run on the worker specified with `run_background_tasks_on` config,
or the main one by default.
The reconciliation loop runs every minute, so this is not a precise scheduler.
There is a limit of 10 concurrent tasks, so tasks may be delayed if the pool is already
full. In this regard, please take great care that scheduled tasks can actually finish.
For now there is no mechanism to stop a running task if it is stuck.
Tasks will be run on the worker specified with `run_background_tasks_on` config,
or the main one by default.
"""
# Precision of the scheduler, evaluation of tasks to run will only happen
@ -85,7 +84,7 @@ class TaskScheduler:
self._actions: Dict[
str,
Callable[
[ScheduledTask, bool],
[ScheduledTask],
Awaitable[Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]],
],
] = {}
@ -98,11 +97,13 @@ class TaskScheduler:
"handle_scheduled_tasks",
self._handle_scheduled_tasks,
)
else:
self.replication_client = hs.get_replication_command_handler()
def register_action(
self,
function: Callable[
[ScheduledTask, bool],
[ScheduledTask],
Awaitable[Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]],
],
action_name: str,
@ -115,10 +116,9 @@ class TaskScheduler:
calling `schedule_task` but rather in an `__init__` method.
Args:
function: The function to be executed for this action. The parameters
passed to the function when launched are the `ScheduledTask` being run,
and a `first_launch` boolean to signal if it's a resumed task or the first
launch of it. The function should return a tuple of new `status`, `result`
function: The function to be executed for this action. The parameter
passed to the function when launched is the `ScheduledTask` being run.
The function should return a tuple of new `status`, `result`
and `error` as specified in `ScheduledTask`.
action_name: The name of the action to be associated with the function
"""
@ -171,6 +171,12 @@ class TaskScheduler:
)
await self._store.insert_scheduled_task(task)
if status == TaskStatus.ACTIVE:
if self._run_background_tasks:
await self._launch_task(task)
else:
self.replication_client.send_new_active_task(task.id)
return task.id
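Illustrative sketch, not part of the diff: under the new contract an action callable receives only the task, with no `first_launch` flag. A minimal, hypothetical registration (the action name and body are invented, imports assumed from `synapse.types`):

from typing import Optional, Tuple
from synapse.types import JsonMapping, ScheduledTask, TaskStatus

async def _purge_example(
    task: ScheduledTask,
) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
    # Do the work for this task; a resumed task is now indistinguishable from
    # a first launch, so the body must be safe to re-run.
    return TaskStatus.COMPLETE, {"purged": True}, None

# Registered once at startup, e.g. in a handler's __init__:
task_scheduler.register_action(_purge_example, "purge_example")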
async def update_task(
@ -265,21 +271,13 @@ class TaskScheduler:
Args:
id: id of the task to delete
"""
if self.task_is_running(id):
raise Exception(f"Task {id} is currently running and can't be deleted")
task = await self.get_task(id)
if task is None:
raise Exception(f"Task {id} does not exist")
if task.status == TaskStatus.ACTIVE:
raise Exception(f"Task {id} is currently ACTIVE and can't be deleted")
await self._store.delete_scheduled_task(id)
def task_is_running(self, id: str) -> bool:
"""Check if a task is currently running.
Can only be called from the worker handling the task scheduling.
Args:
id: id of the task to check
"""
assert self._run_background_tasks
return id in self._running_tasks
async def _handle_scheduled_tasks(self) -> None:
"""Main loop taking care of launching tasks and cleaning up old ones."""
await self._launch_scheduled_tasks()
@ -288,29 +286,11 @@ class TaskScheduler:
async def _launch_scheduled_tasks(self) -> None:
"""Retrieve and launch scheduled tasks that should be running at that time."""
for task in await self.get_tasks(statuses=[TaskStatus.ACTIVE]):
if not self.task_is_running(task.id):
if (
len(self._running_tasks)
< TaskScheduler.MAX_CONCURRENT_RUNNING_TASKS
):
await self._launch_task(task, first_launch=False)
else:
if (
self._clock.time_msec()
> task.timestamp + TaskScheduler.LAST_UPDATE_BEFORE_WARNING_MS
):
logger.warn(
f"Task {task.id} (action {task.action}) has seen no update for more than 24h and may be stuck"
)
await self._launch_task(task)
for task in await self.get_tasks(
statuses=[TaskStatus.SCHEDULED], max_timestamp=self._clock.time_msec()
):
if (
not self.task_is_running(task.id)
and len(self._running_tasks)
< TaskScheduler.MAX_CONCURRENT_RUNNING_TASKS
):
await self._launch_task(task, first_launch=True)
await self._launch_task(task)
running_tasks_gauge.set(len(self._running_tasks))
@ -320,27 +300,27 @@ class TaskScheduler:
statuses=[TaskStatus.FAILED, TaskStatus.COMPLETE]
):
# FAILED and COMPLETE tasks should never be running
assert not self.task_is_running(task.id)
assert task.id not in self._running_tasks
if (
self._clock.time_msec()
> task.timestamp + TaskScheduler.KEEP_TASKS_FOR_MS
):
await self._store.delete_scheduled_task(task.id)
async def _launch_task(self, task: ScheduledTask, first_launch: bool) -> None:
async def _launch_task(self, task: ScheduledTask) -> None:
"""Launch a scheduled task now.
Args:
task: the task to launch
first_launch: `True` if it's the first time it is launched, `False` otherwise
"""
assert task.action in self._actions
assert self._run_background_tasks
assert task.action in self._actions
function = self._actions[task.action]
async def wrapper() -> None:
try:
(status, result, error) = await function(task, first_launch)
(status, result, error) = await function(task)
except Exception:
f = Failure()
logger.error(
@ -360,6 +340,20 @@ class TaskScheduler:
)
self._running_tasks.remove(task.id)
if len(self._running_tasks) >= TaskScheduler.MAX_CONCURRENT_RUNNING_TASKS:
return
if (
self._clock.time_msec()
> task.timestamp + TaskScheduler.LAST_UPDATE_BEFORE_WARNING_MS
):
logger.warn(
f"Task {task.id} (action {task.action}) has seen no update for more than 24h and may be stuck"
)
if task.id in self._running_tasks:
return
self._running_tasks.add(task.id)
await self.update_task(task.id, status=TaskStatus.ACTIVE)
description = f"{task.id}-{task.action}"

View File

@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest.mock import Mock
from unittest.mock import AsyncMock, Mock
import pymacaroons
@ -35,7 +35,6 @@ from synapse.types import Requester, UserID
from synapse.util import Clock
from tests import unittest
from tests.test_utils import simple_async_mock
from tests.unittest import override_config
from tests.utils import mock_getRawHeaders
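The rest of this file follows one mechanical substitution: the project-local `simple_async_mock` helper is replaced by the standard library's `unittest.mock.AsyncMock`. A self-contained sketch of the equivalence (not part of the diff):
import asyncio
from unittest.mock import AsyncMock

# Previously: store.is_support_user = simple_async_mock(False)
is_support_user = AsyncMock(return_value=False)

# Awaiting the mock resolves to the configured return value, and the mock
# records how it was awaited.
assert asyncio.run(is_support_user("@user:example.org")) is False
is_support_user.assert_awaited_once_with("@user:example.org")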
@ -60,16 +59,16 @@ class AuthTestCase(unittest.HomeserverTestCase):
# this is overridden for the appservice tests
self.store.get_app_service_by_token = Mock(return_value=None)
self.store.insert_client_ip = simple_async_mock(None)
self.store.is_support_user = simple_async_mock(False)
self.store.insert_client_ip = AsyncMock(return_value=None)
self.store.is_support_user = AsyncMock(return_value=False)
def test_get_user_by_req_user_valid_token(self) -> None:
user_info = TokenLookupResult(
user_id=self.test_user, token_id=5, device_id="device"
)
self.store.get_user_by_access_token = simple_async_mock(user_info)
self.store.mark_access_token_as_used = simple_async_mock(None)
self.store.get_user_locked_status = simple_async_mock(False)
self.store.get_user_by_access_token = AsyncMock(return_value=user_info)
self.store.mark_access_token_as_used = AsyncMock(return_value=None)
self.store.get_user_locked_status = AsyncMock(return_value=False)
request = Mock(args={})
request.args[b"access_token"] = [self.test_token]
@ -78,7 +77,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
self.assertEqual(requester.user.to_string(), self.test_user)
def test_get_user_by_req_user_bad_token(self) -> None:
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_access_token = AsyncMock(return_value=None)
request = Mock(args={})
request.args[b"access_token"] = [self.test_token]
@ -91,7 +90,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
def test_get_user_by_req_user_missing_token(self) -> None:
user_info = TokenLookupResult(user_id=self.test_user, token_id=5)
self.store.get_user_by_access_token = simple_async_mock(user_info)
self.store.get_user_by_access_token = AsyncMock(return_value=user_info)
request = Mock(args={})
request.requestHeaders.getRawHeaders = mock_getRawHeaders()
@ -106,7 +105,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
token="foobar", url="a_url", sender=self.test_user, ip_range_whitelist=None
)
self.store.get_app_service_by_token = Mock(return_value=app_service)
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_access_token = AsyncMock(return_value=None)
request = Mock(args={})
request.getClientAddress.return_value.host = "127.0.0.1"
@ -125,7 +124,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
ip_range_whitelist=IPSet(["192.168/16"]),
)
self.store.get_app_service_by_token = Mock(return_value=app_service)
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_access_token = AsyncMock(return_value=None)
request = Mock(args={})
request.getClientAddress.return_value.host = "192.168.10.10"
@ -144,7 +143,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
ip_range_whitelist=IPSet(["192.168/16"]),
)
self.store.get_app_service_by_token = Mock(return_value=app_service)
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_access_token = AsyncMock(return_value=None)
request = Mock(args={})
request.getClientAddress.return_value.host = "131.111.8.42"
@ -158,7 +157,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
def test_get_user_by_req_appservice_bad_token(self) -> None:
self.store.get_app_service_by_token = Mock(return_value=None)
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_access_token = AsyncMock(return_value=None)
request = Mock(args={})
request.args[b"access_token"] = [self.test_token]
@ -172,7 +171,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
def test_get_user_by_req_appservice_missing_token(self) -> None:
app_service = Mock(token="foobar", url="a_url", sender=self.test_user)
self.store.get_app_service_by_token = Mock(return_value=app_service)
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_access_token = AsyncMock(return_value=None)
request = Mock(args={})
request.requestHeaders.getRawHeaders = mock_getRawHeaders()
@ -190,8 +189,8 @@ class AuthTestCase(unittest.HomeserverTestCase):
app_service.is_interested_in_user = Mock(return_value=True)
self.store.get_app_service_by_token = Mock(return_value=app_service)
# This just needs to return a truth-y value.
self.store.get_user_by_id = simple_async_mock({"is_guest": False})
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_id = AsyncMock(return_value={"is_guest": False})
self.store.get_user_by_access_token = AsyncMock(return_value=None)
request = Mock(args={})
request.getClientAddress.return_value.host = "127.0.0.1"
@ -210,7 +209,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
)
app_service.is_interested_in_user = Mock(return_value=False)
self.store.get_app_service_by_token = Mock(return_value=app_service)
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_access_token = AsyncMock(return_value=None)
request = Mock(args={})
request.getClientAddress.return_value.host = "127.0.0.1"
@ -234,10 +233,10 @@ class AuthTestCase(unittest.HomeserverTestCase):
app_service.is_interested_in_user = Mock(return_value=True)
self.store.get_app_service_by_token = Mock(return_value=app_service)
# This just needs to return a truth-y value.
self.store.get_user_by_id = simple_async_mock({"is_guest": False})
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_id = AsyncMock(return_value={"is_guest": False})
self.store.get_user_by_access_token = AsyncMock(return_value=None)
# This also needs to just return a truth-y value
self.store.get_device = simple_async_mock({"hidden": False})
self.store.get_device = AsyncMock(return_value={"hidden": False})
request = Mock(args={})
request.getClientAddress.return_value.host = "127.0.0.1"
@ -266,10 +265,10 @@ class AuthTestCase(unittest.HomeserverTestCase):
app_service.is_interested_in_user = Mock(return_value=True)
self.store.get_app_service_by_token = Mock(return_value=app_service)
# This just needs to return a truth-y value.
self.store.get_user_by_id = simple_async_mock({"is_guest": False})
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_id = AsyncMock(return_value={"is_guest": False})
self.store.get_user_by_access_token = AsyncMock(return_value=None)
# This also needs to just return a falsey value
self.store.get_device = simple_async_mock(None)
self.store.get_device = AsyncMock(return_value=None)
request = Mock(args={})
request.getClientAddress.return_value.host = "127.0.0.1"
@ -283,8 +282,8 @@ class AuthTestCase(unittest.HomeserverTestCase):
self.assertEqual(failure.value.errcode, Codes.EXCLUSIVE)
def test_get_user_by_req__puppeted_token__not_tracking_puppeted_mau(self) -> None:
self.store.get_user_by_access_token = simple_async_mock(
TokenLookupResult(
self.store.get_user_by_access_token = AsyncMock(
return_value=TokenLookupResult(
user_id="@baldrick:matrix.org",
device_id="device",
token_id=5,
@ -292,9 +291,9 @@ class AuthTestCase(unittest.HomeserverTestCase):
token_used=True,
)
)
self.store.insert_client_ip = simple_async_mock(None)
self.store.mark_access_token_as_used = simple_async_mock(None)
self.store.get_user_locked_status = simple_async_mock(False)
self.store.insert_client_ip = AsyncMock(return_value=None)
self.store.mark_access_token_as_used = AsyncMock(return_value=None)
self.store.get_user_locked_status = AsyncMock(return_value=False)
request = Mock(args={})
request.getClientAddress.return_value.host = "127.0.0.1"
request.args[b"access_token"] = [self.test_token]
@ -304,8 +303,8 @@ class AuthTestCase(unittest.HomeserverTestCase):
def test_get_user_by_req__puppeted_token__tracking_puppeted_mau(self) -> None:
self.auth._track_puppeted_user_ips = True
self.store.get_user_by_access_token = simple_async_mock(
TokenLookupResult(
self.store.get_user_by_access_token = AsyncMock(
return_value=TokenLookupResult(
user_id="@baldrick:matrix.org",
device_id="device",
token_id=5,
@ -313,9 +312,9 @@ class AuthTestCase(unittest.HomeserverTestCase):
token_used=True,
)
)
self.store.get_user_locked_status = simple_async_mock(False)
self.store.insert_client_ip = simple_async_mock(None)
self.store.mark_access_token_as_used = simple_async_mock(None)
self.store.get_user_locked_status = AsyncMock(return_value=False)
self.store.insert_client_ip = AsyncMock(return_value=None)
self.store.mark_access_token_as_used = AsyncMock(return_value=None)
request = Mock(args={})
request.getClientAddress.return_value.host = "127.0.0.1"
request.args[b"access_token"] = [self.test_token]
@ -324,7 +323,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
self.assertEqual(self.store.insert_client_ip.call_count, 2)
def test_get_user_from_macaroon(self) -> None:
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_access_token = AsyncMock(return_value=None)
user_id = "@baldrick:matrix.org"
macaroon = pymacaroons.Macaroon(
@ -342,8 +341,8 @@ class AuthTestCase(unittest.HomeserverTestCase):
)
def test_get_guest_user_from_macaroon(self) -> None:
self.store.get_user_by_id = simple_async_mock({"is_guest": True})
self.store.get_user_by_access_token = simple_async_mock(None)
self.store.get_user_by_id = AsyncMock(return_value={"is_guest": True})
self.store.get_user_by_access_token = AsyncMock(return_value=None)
user_id = "@baldrick:matrix.org"
macaroon = pymacaroons.Macaroon(
@ -373,7 +372,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
self.auth_blocking._limit_usage_by_mau = True
self.store.get_monthly_active_count = simple_async_mock(lots_of_users)
self.store.get_monthly_active_count = AsyncMock(return_value=lots_of_users)
e = self.get_failure(
self.auth_blocking.check_auth_blocking(), ResourceLimitError
@ -383,25 +382,27 @@ class AuthTestCase(unittest.HomeserverTestCase):
self.assertEqual(e.value.code, 403)
# Ensure does not throw an error
self.store.get_monthly_active_count = simple_async_mock(small_number_of_users)
self.store.get_monthly_active_count = AsyncMock(
return_value=small_number_of_users
)
self.get_success(self.auth_blocking.check_auth_blocking())
def test_blocking_mau__depending_on_user_type(self) -> None:
self.auth_blocking._max_mau_value = 50
self.auth_blocking._limit_usage_by_mau = True
self.store.get_monthly_active_count = simple_async_mock(100)
self.store.get_monthly_active_count = AsyncMock(return_value=100)
# Support users allowed
self.get_success(
self.auth_blocking.check_auth_blocking(user_type=UserTypes.SUPPORT)
)
self.store.get_monthly_active_count = simple_async_mock(100)
self.store.get_monthly_active_count = AsyncMock(return_value=100)
# Bots not allowed
self.get_failure(
self.auth_blocking.check_auth_blocking(user_type=UserTypes.BOT),
ResourceLimitError,
)
self.store.get_monthly_active_count = simple_async_mock(100)
self.store.get_monthly_active_count = AsyncMock(return_value=100)
# Real users not allowed
self.get_failure(self.auth_blocking.check_auth_blocking(), ResourceLimitError)
@ -412,9 +413,9 @@ class AuthTestCase(unittest.HomeserverTestCase):
self.auth_blocking._limit_usage_by_mau = True
self.auth_blocking._track_appservice_user_ips = False
self.store.get_monthly_active_count = simple_async_mock(100)
self.store.user_last_seen_monthly_active = simple_async_mock()
self.store.is_trial_user = simple_async_mock()
self.store.get_monthly_active_count = AsyncMock(return_value=100)
self.store.user_last_seen_monthly_active = AsyncMock(return_value=None)
self.store.is_trial_user = AsyncMock(return_value=False)
appservice = ApplicationService(
"abcd",
@ -443,9 +444,9 @@ class AuthTestCase(unittest.HomeserverTestCase):
self.auth_blocking._limit_usage_by_mau = True
self.auth_blocking._track_appservice_user_ips = True
self.store.get_monthly_active_count = simple_async_mock(100)
self.store.user_last_seen_monthly_active = simple_async_mock()
self.store.is_trial_user = simple_async_mock()
self.store.get_monthly_active_count = AsyncMock(return_value=100)
self.store.user_last_seen_monthly_active = AsyncMock(return_value=None)
self.store.is_trial_user = AsyncMock(return_value=False)
appservice = ApplicationService(
"abcd",
@ -473,7 +474,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
def test_reserved_threepid(self) -> None:
self.auth_blocking._limit_usage_by_mau = True
self.auth_blocking._max_mau_value = 1
self.store.get_monthly_active_count = simple_async_mock(2)
self.store.get_monthly_active_count = AsyncMock(return_value=2)
threepid = {"medium": "email", "address": "reserved@server.com"}
unknown_threepid = {"medium": "email", "address": "unreserved@server.com"}
self.auth_blocking._mau_limits_reserved_threepids = [threepid]

43 tests/api/test_errors.py Normal file
View File

@ -0,0 +1,43 @@
# Copyright 2023 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from synapse.api.errors import LimitExceededError
from tests import unittest
class LimitExceededErrorTestCase(unittest.TestCase):
def test_key_appears_in_context_but_not_error_dict(self) -> None:
err = LimitExceededError("needle")
serialised = json.dumps(err.error_dict(None))
self.assertIn("needle", err.debug_context)
self.assertNotIn("needle", serialised)
# Create a sub-class to avoid mutating the class-level property.
class LimitExceededErrorHeaders(LimitExceededError):
include_retry_after_header = True
def test_limit_exceeded_header(self) -> None:
err = self.LimitExceededErrorHeaders(limiter_name="test", retry_after_ms=100)
self.assertEqual(err.error_dict(None).get("retry_after_ms"), 100)
assert err.headers is not None
self.assertEqual(err.headers.get("Retry-After"), "1")
def test_limit_exceeded_rounding(self) -> None:
err = self.LimitExceededErrorHeaders(limiter_name="test", retry_after_ms=3001)
self.assertEqual(err.error_dict(None).get("retry_after_ms"), 3001)
assert err.headers is not None
self.assertEqual(err.headers.get("Retry-After"), "4")
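The expected header values above are consistent with rounding the millisecond delay up to whole seconds. A sketch of that conversion, inferred from the test expectations rather than taken from the implementation:
import math

def _retry_after_header(retry_after_ms: int) -> str:
    # Round up so a client never retries before the limit expires.
    return str(math.ceil(retry_after_ms / 1000))

assert _retry_after_header(100) == "1"
assert _retry_after_header(3001) == "4"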

Some files were not shown because too many files have changed in this diff.