Merge remote-tracking branch 'origin/develop' into clokep/db-upgrades

Patrick Cloke 2023-10-16 15:42:54 -04:00
commit c1878cd4ae
168 changed files with 2414 additions and 1787 deletions


@@ -56,6 +56,7 @@ jobs:
       - 'pyproject.toml'
       - 'poetry.lock'
       - 'docker/**'
+      - 'scripts-dev/complement.sh'

     linting:
       - 'synapse/**'

@@ -280,7 +281,6 @@ jobs:
       - check-lockfile
       - lint-clippy
       - lint-rustfmt
-      - check-signoff
     runs-on: ubuntu-latest
     steps:
       - run: "true"

CHANGES.md

@@ -1,3 +1,73 @@
# Synapse 1.94.0 (2023-10-10)

No significant changes since 1.94.0rc1.
However, please take note of the security advisory that follows.

## Security advisory

The following issue is fixed in 1.94.0 (and RC).

- [GHSA-5chr-wjw5-3gq4](https://github.com/matrix-org/synapse/security/advisories/GHSA-5chr-wjw5-3gq4) / [CVE-2023-45129](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-45129) — Moderate Severity

  A malicious server ACL event can impact performance temporarily or permanently, leading to a persistent denial of service.

  Homeservers running on a closed federation (which presumably do not need to use server ACLs) are not affected.

See the advisory for more details. If you have any questions, email security@matrix.org.

# Synapse 1.94.0rc1 (2023-10-03)

### Features

- Render plain, CSS, CSV, JSON and common image formats in the browser (inline) when requested through the /download endpoint. ([\#15988](https://github.com/matrix-org/synapse/issues/15988))
- Add experimental support for [MSC4028](https://github.com/matrix-org/matrix-spec-proposals/pull/4028) to push all encrypted events to clients. ([\#16361](https://github.com/matrix-org/synapse/issues/16361))
- Minor performance improvement when sending presence to federated servers. ([\#16385](https://github.com/matrix-org/synapse/issues/16385))
- Minor performance improvement by caching server ACL checking. ([\#16360](https://github.com/matrix-org/synapse/issues/16360))

### Improved Documentation

- Add developer documentation concerning gradual schema migrations with column alterations. ([\#15691](https://github.com/matrix-org/synapse/issues/15691))
- Improve documentation of the user directory search algorithm. ([\#16320](https://github.com/matrix-org/synapse/issues/16320))
- Fix rendering of user admin API documentation around deactivation. This was broken in Synapse 1.91.0. ([\#16355](https://github.com/matrix-org/synapse/issues/16355))
- Update documentation around message retention policies. ([\#16382](https://github.com/matrix-org/synapse/issues/16382))
- Add note to `federation_domain_whitelist` config option to clarify its usage. ([\#16416](https://github.com/matrix-org/synapse/issues/16416))
- Improve legacy release notes. ([\#16418](https://github.com/matrix-org/synapse/issues/16418))

### Deprecations and Removals

- Remove Python version from `/_synapse/admin/v1/server_version`. ([\#16380](https://github.com/matrix-org/synapse/issues/16380))

### Internal Changes

- Avoid running CI steps when the files they check have not been changed. ([\#14745](https://github.com/matrix-org/synapse/issues/14745), [\#16387](https://github.com/matrix-org/synapse/issues/16387))
- Improve type hints. ([\#14911](https://github.com/matrix-org/synapse/issues/14911), [\#16350](https://github.com/matrix-org/synapse/issues/16350), [\#16356](https://github.com/matrix-org/synapse/issues/16356), [\#16395](https://github.com/matrix-org/synapse/issues/16395))
- Added support for pydantic v2 in addition to pydantic v1. Contributed by Maxwell G (@gotmax23). ([\#16332](https://github.com/matrix-org/synapse/issues/16332))
- Get CI to check PRs have been signed-off. ([\#16348](https://github.com/matrix-org/synapse/issues/16348))
- Add missing licence header. ([\#16359](https://github.com/matrix-org/synapse/issues/16359))
- Improve type hints, and bump types-psycopg2 from 2.9.21.11 to 2.9.21.14. ([\#16381](https://github.com/matrix-org/synapse/issues/16381))
- Improve comments in `StateGroupBackgroundUpdateStore`. ([\#16383](https://github.com/matrix-org/synapse/issues/16383))
- Update maturin configuration. ([\#16394](https://github.com/matrix-org/synapse/issues/16394))
- Downgrade replication stream timeout error log lines to warning. ([\#16401](https://github.com/matrix-org/synapse/issues/16401))

### Updates to locked dependencies

* Bump actions/checkout from 3 to 4. ([\#16250](https://github.com/matrix-org/synapse/issues/16250))
* Bump cryptography from 41.0.3 to 41.0.4. ([\#16362](https://github.com/matrix-org/synapse/issues/16362))
* Bump dawidd6/action-download-artifact from 2.27.0 to 2.28.0. ([\#16374](https://github.com/matrix-org/synapse/issues/16374))
* Bump docker/setup-buildx-action from 2 to 3. ([\#16375](https://github.com/matrix-org/synapse/issues/16375))
* Bump gitpython from 3.1.35 to 3.1.37. ([\#16376](https://github.com/matrix-org/synapse/issues/16376))
* Bump msgpack from 1.0.5 to 1.0.6. ([\#16377](https://github.com/matrix-org/synapse/issues/16377))
* Bump msgpack from 1.0.6 to 1.0.7. ([\#16412](https://github.com/matrix-org/synapse/issues/16412))
* Bump phonenumbers from 8.13.19 to 8.13.22. ([\#16413](https://github.com/matrix-org/synapse/issues/16413))
* Bump psycopg2 from 2.9.7 to 2.9.8. ([\#16409](https://github.com/matrix-org/synapse/issues/16409))
* Bump pydantic from 2.3.0 to 2.4.2. ([\#16410](https://github.com/matrix-org/synapse/issues/16410))
* Bump regex from 1.9.5 to 1.9.6. ([\#16408](https://github.com/matrix-org/synapse/issues/16408))
* Bump sentry-sdk from 1.30.0 to 1.31.0. ([\#16378](https://github.com/matrix-org/synapse/issues/16378))
* Bump types-netaddr from 0.8.0.9 to 0.9.0.1. ([\#16411](https://github.com/matrix-org/synapse/issues/16411))
* Bump types-psycopg2 from 2.9.21.11 to 2.9.21.14. ([\#16381](https://github.com/matrix-org/synapse/issues/16381))
* Bump urllib3 from 1.26.15 to 1.26.17. ([\#16422](https://github.com/matrix-org/synapse/issues/16422))

# Synapse 1.93.0 (2023-09-26)

No significant changes since 1.93.0rc1.

Cargo.lock (generated)

@@ -144,9 +144,9 @@ checksum = "8f232d6ef707e1956a43342693d2a31e72989554d58299d7a88738cc95b0d35c"

 [[package]]
 name = "memoffset"
-version = "0.6.5"
+version = "0.9.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5aa361d4faea93603064a027415f07bd8e1d5c88c9fbf68bf56a285428fd79ce"
+checksum = "5a634b1c61a95585bd15607c6ab0c4e5b226e695ff2800ba0cdccddf208c406c"
 dependencies = [
  "autocfg",
 ]

@@ -191,9 +191,9 @@ dependencies = [

 [[package]]
 name = "pyo3"
-version = "0.17.3"
+version = "0.19.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "268be0c73583c183f2b14052337465768c07726936a260f480f0857cb95ba543"
+checksum = "e681a6cfdc4adcc93b4d3cf993749a4552018ee0a9b65fc0ccfad74352c72a38"
 dependencies = [
  "anyhow",
  "cfg-if",

@@ -209,9 +209,9 @@ dependencies = [

 [[package]]
 name = "pyo3-build-config"
-version = "0.17.3"
+version = "0.19.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "28fcd1e73f06ec85bf3280c48c67e731d8290ad3d730f8be9dc07946923005c8"
+checksum = "076c73d0bc438f7a4ef6fdd0c3bb4732149136abd952b110ac93e4edb13a6ba5"
 dependencies = [
  "once_cell",
  "target-lexicon",

@@ -219,9 +219,9 @@ dependencies = [

 [[package]]
 name = "pyo3-ffi"
-version = "0.17.3"
+version = "0.19.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0f6cb136e222e49115b3c51c32792886defbfb0adead26a688142b346a0b9ffc"
+checksum = "e53cee42e77ebe256066ba8aa77eff722b3bb91f3419177cf4cd0f304d3284d9"
 dependencies = [
  "libc",
  "pyo3-build-config",

@@ -229,9 +229,9 @@ dependencies = [

 [[package]]
 name = "pyo3-log"
-version = "0.8.3"
+version = "0.8.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f47b0777feb17f61eea78667d61103758b243a871edc09a7786500a50467b605"
+checksum = "c09c2b349b6538d8a73d436ca606dab6ce0aaab4dad9e6b7bdd57a4f556c3bc3"
 dependencies = [
  "arc-swap",
  "log",

@@ -240,9 +240,9 @@ dependencies = [

 [[package]]
 name = "pyo3-macros"
-version = "0.17.3"
+version = "0.19.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "94144a1266e236b1c932682136dc35a9dee8d3589728f68130c7c3861ef96b28"
+checksum = "dfeb4c99597e136528c6dd7d5e3de5434d1ceaf487436a3f03b2d56b6fc9efd1"
 dependencies = [
  "proc-macro2",
  "pyo3-macros-backend",

@@ -252,9 +252,9 @@ dependencies = [

 [[package]]
 name = "pyo3-macros-backend"
-version = "0.17.3"
+version = "0.19.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c8df9be978a2d2f0cdebabb03206ed73b11314701a5bfe71b0d753b81997777f"
+checksum = "947dc12175c254889edc0c02e399476c2f652b4b9ebd123aa655c224de259536"
 dependencies = [
  "proc-macro2",
  "quote",

@@ -263,9 +263,9 @@ dependencies = [

 [[package]]
 name = "pythonize"
-version = "0.17.0"
+version = "0.19.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0f7f0c136f5fbc01868185eef462800e49659eb23acca83b9e884367a006acb6"
+checksum = "8e35b716d430ace57e2d1b4afb51c9e5b7c46d2bce72926e07f9be6a98ced03e"
 dependencies = [
  "pyo3",
  "serde",

@@ -332,18 +332,18 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"

 [[package]]
 name = "serde"
-version = "1.0.188"
+version = "1.0.189"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "cf9e0fcba69a370eed61bcf2b728575f726b50b55cba78064753d708ddc7549e"
+checksum = "8e422a44e74ad4001bdc8eede9a4570ab52f71190e9c076d14369f38b9200537"
 dependencies = [
  "serde_derive",
 ]

 [[package]]
 name = "serde_derive"
-version = "1.0.188"
+version = "1.0.189"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4eca7ac642d82aa35b60049a6eccb4be6be75e599bd2e9adb5f875a737654af2"
+checksum = "1e48d1f918009ce3145511378cf68d613e3b3d9137d67272562080d68a2b32d5"
 dependencies = [
  "proc-macro2",
  "quote",

@@ -1 +0,0 @@
Avoid running CI steps when the files they check have not been changed.

@@ -1 +0,0 @@
Add developer documentation concerning gradual schema migrations with column alterations.

@@ -1 +0,0 @@
Render plain, CSS, CSV, JSON and common image format media content in the browser (inline) when requested through the /download endpoint.

changelog.d/16162.misc (new file)

@@ -0,0 +1 @@
Bump pyo3 from 0.17.1 to 0.19.2.

@@ -1 +0,0 @@
Improve documentation of the user directory search algorithm.

@@ -1 +0,0 @@
Added support for pydantic v2 in addition to pydantic v1. Contributed by Maxwell G (@gotmax23).

@@ -1 +0,0 @@
Get CI to check PRs have been signed-off.

@@ -1 +0,0 @@
Fix rendering of user admin API documentation around deactivation. This was broken in Synapse 1.91.0.

@@ -1 +0,0 @@
Add missing licence header.

@@ -1 +0,0 @@
Cache server ACL checking.

@@ -1 +0,0 @@
Experimental support for [MSC4028](https://github.com/matrix-org/matrix-spec-proposals/pull/4028) to push all encrypted events to clients.

@@ -1 +0,0 @@
Remove Python version from `/_synapse/admin/v1/server_version`.

@@ -1 +0,0 @@
Improve type hints, and bump types-psycopg2 from 2.9.21.11 to 2.9.21.14.

@@ -1 +0,0 @@
Update documentation around message retention policies.

@@ -1 +0,0 @@
Improve comments in `StateGroupBackgroundUpdateStore`.

@@ -1 +0,0 @@
Minor performance improvement when sending presence to federated servers.

@@ -1 +0,0 @@
Avoid running CI steps when the files they check have not been changed.

@@ -1 +0,0 @@
Update maturin configuration.

@@ -1 +0,0 @@
Improve type hints.

@@ -1 +0,0 @@
Downgrade replication stream timeout error log lines to warning.

changelog.d/16403.bugfix (new file)

@@ -0,0 +1 @@
Remove legacy unspecced `knock_state_events` field returned in some responses.

changelog.d/16404.bugfix (new file)

@@ -0,0 +1 @@
Fix possible `AttributeError` when `_matrix/client/v3/account/whoami` is called over a Unix socket. Contributed by @Sir-Photch.

@@ -1 +0,0 @@
Improve legacy release notes.

changelog.d/16419.misc (new file)

@@ -0,0 +1 @@
Update registration of media repository URLs.

changelog.d/16420.doc (new file)

@@ -0,0 +1 @@
Document internal background update mechanism.

changelog.d/16426.misc (new file)

@@ -0,0 +1 @@
Refactor code adjacent to the receipts stream to simplify it and improve its type hints.

changelog.d/16427.misc (new file)

@@ -0,0 +1 @@
Factor out `MultiWriter` token from `RoomStreamToken`.

changelog.d/16428.misc (new file)

@@ -0,0 +1 @@
Improve code comments.

changelog.d/16429.misc (new file)

@@ -0,0 +1 @@
Reduce memory allocations.

changelog.d/16431.misc (new file)

@@ -0,0 +1 @@
Reduce memory allocations.

changelog.d/16433.misc (new file)

@@ -0,0 +1 @@
Reduce memory allocations.

changelog.d/16434.misc (new file)

@@ -0,0 +1 @@
Reduce memory allocations.

changelog.d/16435.misc (new file)

@@ -0,0 +1 @@
Remove unused method.

changelog.d/16438.misc (new file)

@@ -0,0 +1 @@
Reduce memory allocations.

changelog.d/16440.bugfix (new file)

@@ -0,0 +1 @@
Properly return inline media when content types have parameters.

changelog.d/16441.misc (new file)

@@ -0,0 +1 @@
Improve rate limiting logic.

changelog.d/16444.misc (new file)

@@ -0,0 +1 @@
Reduce memory allocations.

changelog.d/16454.misc (new file)

@@ -0,0 +1 @@
Do not block running of CI behind the check for sign-off on PRs.

changelog.d/16455.bugfix (new file)

@@ -0,0 +1 @@
Prevent the purging of large rooms from timing out when Postgres is in use. The timeout which causes this issue was introduced in Synapse 1.88.0.

changelog.d/16457.bugfix (new file)

@@ -0,0 +1 @@
Improve the performance of purging rooms, particularly encrypted rooms.

changelog.d/16461.misc (new file)

@@ -0,0 +1 @@
Update the release script to remind the releaser to check for special release notes.

changelog.d/16466.misc (new file)

@@ -0,0 +1 @@
Update complement.sh to match new public API shape.

changelog.d/16477.doc (new file)

@@ -0,0 +1 @@
Fix a typo in the SQL in the [useful SQL for admins document](https://matrix-org.github.io/synapse/latest/usage/administration/useful_sql_for_admins.html).

changelog.d/16488.misc (new file)

@@ -0,0 +1 @@
Clean up logging on event persister endpoints.

changelog.d/16491.misc (new file)

@@ -0,0 +1 @@
Remove useless async job to delete device messages on sync, since we only deliver (and hence delete) up to 100 device messages at a time.

debian/changelog (vendored)

@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.94.0) stable; urgency=medium

  * New Synapse release 1.94.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 10 Oct 2023 10:57:41 +0100

matrix-synapse-py3 (1.94.0~rc1) stable; urgency=medium

  * New Synapse release 1.94.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 03 Oct 2023 11:48:18 +0100

matrix-synapse-py3 (1.93.0) stable; urgency=medium

  * New Synapse release 1.93.0.

docs/development/database_schema.md

@@ -150,6 +150,67 @@ def run_upgrade(
    ...
```

## Background updates

It is sometimes appropriate to perform database migrations as part of a background
process (instead of blocking Synapse until the migration is done). In particular,
this is useful for migrating data when adding new columns or tables.

Pending background updates are stored in the `background_updates` table and are
denoted by a unique name, the current status (stored as JSON), and some dependency
information:

* Whether the update requires a previous update to be complete.
* A rough ordering in which to complete updates.

A new background update needs to be added to the `background_updates` table:

```sql
INSERT INTO background_updates (ordering, update_name, depends_on, progress_json) VALUES
    (7706, 'my_background_update', 'a_previous_background_update', '{}');
```
The new background update then needs an associated handler in the appropriate datastore:

```python
self.db_pool.updates.register_background_update_handler(
    "my_background_update",
    update_handler=self._my_background_update,
)
```
There are a few types of updates that can be performed; see the `BackgroundUpdater`:

* `register_background_update_handler`: A generic handler for custom SQL.
* `register_background_index_update`: Create an index in the background (see the
  sketch just after this list).
* `register_background_validate_constraint`: Validate a constraint in the background
  (PostgreSQL-only).
* `register_background_validate_constraint_and_delete_rows`: Similar to
  `register_background_validate_constraint`, but deletes rows which don't fit
  the constraint.
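As a minimal sketch (assuming the usual datastore context; the table and index
names here are hypothetical), registering a background index creation looks
something like:

```python
self.db_pool.updates.register_background_index_update(
    # The update name, as inserted into the `background_updates` table.
    "my_table_new_index",
    index_name="my_table_room_id_idx",
    table="my_table",
    columns=["room_id"],
)
```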
For `register_background_update_handler`, the generic handler must track its own
progress and eventually finalize the background update:

```python
async def _my_background_update(self, progress: JsonDict, batch_size: int) -> int:
    def _do_something(txn: LoggingTransaction) -> int:
        # Resume from wherever the previous batch left off.
        prev_last_processed = progress.get("last_processed", 0)

        ...

        # Record how far this batch got, so the next run can resume from there.
        self.db_pool.updates._background_update_progress_txn(
            txn, "my_background_update", {"last_processed": last_processed}
        )

        return last_processed - prev_last_processed

    num_processed = await self.db_pool.runInteraction("_do_something", _do_something)

    # Once a batch makes no progress, every row has been handled: mark the
    # background update as complete.
    if not num_processed:
        await self.db_pool.updates._end_background_update("my_background_update")

    return num_processed
```
Synapse will attempt to rate-limit how often background updates are run, based on
the given batch size, the number of processed entries the handler returns, and how
long the function took to run; a fuller sketch of a batched transaction body follows
below. See
[background update controller callbacks](../modules/background_update_controller_callbacks.md).
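To make that batching contract concrete, here is a hedged sketch of a transaction
body that backfills a new column at most `batch_size` rows at a time (the
`my_table`/`new_column` names are hypothetical, not from the Synapse source):

```python
def _do_something(txn: LoggingTransaction) -> int:
    prev_last_processed = progress.get("last_processed", 0)

    # Fetch at most `batch_size` rows beyond the previous high-water mark.
    txn.execute(
        "SELECT id FROM my_table WHERE id > ? ORDER BY id LIMIT ?",
        (prev_last_processed, batch_size),
    )
    ids = [row[0] for row in txn.fetchall()]
    if not ids:
        # Nothing left to do; returning 0 lets the handler finalize the update.
        return 0

    # Backfill the new column for this batch only.
    txn.execute_batch(
        "UPDATE my_table SET new_column = 1 WHERE id = ?",
        [(i,) for i in ids],
    )

    # Persist progress inside the same transaction.
    self.db_pool.updates._background_update_progress_txn(
        txn, "my_background_update", {"last_processed": ids[-1]}
    )
    return len(ids)
```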
## Boolean columns

Boolean columns require special treatment, since SQLite treats booleans the

docs/usage/administration/useful_sql_for_admins.md

@@ -193,7 +193,7 @@ SELECT rss.room_id, rss.name, rss.canonical_alias, rss.topic, rss.encryption,
     rsc.joined_members, rsc.local_users_in_room, rss.join_rules
 FROM room_stats_state rss
 LEFT JOIN room_stats_current rsc USING (room_id)
-WHERE room_id IN ( WHERE room_id IN (
+WHERE room_id IN (
     '!OGEhHVWSdvArJzumhm:matrix.org',
     '!YTvKGNlinIzlkMTVRl:matrix.org'
 );

docs/usage/configuration/config_documentation.md

@@ -1190,6 +1190,11 @@ inbound federation traffic as early as possible, rather than relying
purely on this application-layer restriction. If not specified, the
default is to whitelist everything.

Note: this does not stop a server from joining rooms that servers not on the
whitelist are in. As such, this option is really only useful to establish a
"private federation", where a group of servers all whitelist each other and have
the same whitelist.

Example configuration:
```yaml
federation_domain_whitelist:

mypy.ini

@@ -32,6 +32,7 @@ files =
   docker/,
   scripts-dev/,
   synapse/,
+  synmark/,
   tests/,
   build_rust.py

@@ -80,6 +81,9 @@ ignore_missing_imports = True

 [mypy-pympler.*]
 ignore_missing_imports = True
+
+[mypy-pyperf.*]
+ignore_missing_imports = True

 [mypy-rust_python_jaeger_reporter.*]
 ignore_missing_imports = True

poetry.lock (generated)

@@ -208,13 +208,13 @@ uvloop = ["uvloop (>=0.15.2)"]

 [[package]]
 name = "bleach"
-version = "6.0.0"
+version = "6.1.0"
 description = "An easy safelist-based HTML-sanitizing tool."
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 files = [
-    {file = "bleach-6.0.0-py3-none-any.whl", hash = "sha256:33c16e3353dbd13028ab4799a0f89a83f113405c766e9c122df8a06f5b85b3f4"},
-    {file = "bleach-6.0.0.tar.gz", hash = "sha256:1a1a85c1595e07d8db14c5f09f09e6433502c51c595970edc090551f0db99414"},
+    {file = "bleach-6.1.0-py3-none-any.whl", hash = "sha256:3225f354cfc436b9789c66c4ee030194bee0568fbf9cbdad3bc8b5c26c5f12b6"},
+    {file = "bleach-6.1.0.tar.gz", hash = "sha256:0a31f1837963c41d46bbf1331b8778e1308ea0791db03cc4e7357b97cf42a8fe"},
 ]

 [package.dependencies]

@@ -222,7 +222,7 @@ six = ">=1.9.0"
 webencodings = "*"

 [package.extras]
-css = ["tinycss2 (>=1.1.0,<1.2)"]
+css = ["tinycss2 (>=1.1.0,<1.3)"]

 [[package]]
 name = "canonicaljson"
@@ -767,6 +767,17 @@ files = [
     {file = "ijson-3.2.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:4a3a6a2fbbe7550ffe52d151cf76065e6b89cfb3e9d0463e49a7e322a25d0426"},
     {file = "ijson-3.2.3-cp311-cp311-win32.whl", hash = "sha256:6a4db2f7fb9acfb855c9ae1aae602e4648dd1f88804a0d5cfb78c3639bcf156c"},
     {file = "ijson-3.2.3-cp311-cp311-win_amd64.whl", hash = "sha256:ccd6be56335cbb845f3d3021b1766299c056c70c4c9165fb2fbe2d62258bae3f"},
+    {file = "ijson-3.2.3-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:055b71bbc37af5c3c5861afe789e15211d2d3d06ac51ee5a647adf4def19c0ea"},
+    {file = "ijson-3.2.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c075a547de32f265a5dd139ab2035900fef6653951628862e5cdce0d101af557"},
+    {file = "ijson-3.2.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:457f8a5fc559478ac6b06b6d37ebacb4811f8c5156e997f0d87d708b0d8ab2ae"},
+    {file = "ijson-3.2.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9788f0c915351f41f0e69ec2618b81ebfcf9f13d9d67c6d404c7f5afda3e4afb"},
+    {file = "ijson-3.2.3-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa234ab7a6a33ed51494d9d2197fb96296f9217ecae57f5551a55589091e7853"},
+    {file = "ijson-3.2.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bdd0dc5da4f9dc6d12ab6e8e0c57d8b41d3c8f9ceed31a99dae7b2baf9ea769a"},
+    {file = "ijson-3.2.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c6beb80df19713e39e68dc5c337b5c76d36ccf69c30b79034634e5e4c14d6904"},
+    {file = "ijson-3.2.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:a2973ce57afb142d96f35a14e9cfec08308ef178a2c76b8b5e1e98f3960438bf"},
+    {file = "ijson-3.2.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:105c314fd624e81ed20f925271ec506523b8dd236589ab6c0208b8707d652a0e"},
+    {file = "ijson-3.2.3-cp312-cp312-win32.whl", hash = "sha256:ac44781de5e901ce8339352bb5594fcb3b94ced315a34dbe840b4cff3450e23b"},
+    {file = "ijson-3.2.3-cp312-cp312-win_amd64.whl", hash = "sha256:0567e8c833825b119e74e10a7c29761dc65fcd155f5d4cb10f9d3b8916ef9912"},
     {file = "ijson-3.2.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:eeb286639649fb6bed37997a5e30eefcacddac79476d24128348ec890b2a0ccb"},
     {file = "ijson-3.2.3-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:396338a655fb9af4ac59dd09c189885b51fa0eefc84d35408662031023c110d1"},
     {file = "ijson-3.2.3-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e0243d166d11a2a47c17c7e885debf3b19ed136be2af1f5d1c34212850236ac"},
@@ -987,13 +998,13 @@ i18n = ["Babel (>=2.7)"]

 [[package]]
 name = "jsonschema"
-version = "4.19.0"
+version = "4.19.1"
 description = "An implementation of JSON Schema validation for Python"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "jsonschema-4.19.0-py3-none-any.whl", hash = "sha256:043dc26a3845ff09d20e4420d6012a9c91c9aa8999fa184e7efcfeccb41e32cb"},
-    {file = "jsonschema-4.19.0.tar.gz", hash = "sha256:6e1e7569ac13be8139b2dd2c21a55d350066ee3f80df06c608b398cdc6f30e8f"},
+    {file = "jsonschema-4.19.1-py3-none-any.whl", hash = "sha256:cd5f1f9ed9444e554b38ba003af06c0a8c2868131e56bfbef0550fb450c0330e"},
+    {file = "jsonschema-4.19.1.tar.gz", hash = "sha256:ec84cc37cfa703ef7cd4928db24f9cb31428a5d0fa77747b8b51a847458e0bbf"},
 ]

 [package.dependencies]
@@ -1557,13 +1568,13 @@ testing-docutils = ["pygments", "pytest (>=7,<8)", "pytest-param-files (>=0.3.4,

 [[package]]
 name = "netaddr"
-version = "0.8.0"
+version = "0.9.0"
 description = "A network address manipulation library for Python"
 optional = false
 python-versions = "*"
 files = [
-    {file = "netaddr-0.8.0-py2.py3-none-any.whl", hash = "sha256:9666d0232c32d2656e5e5f8d735f58fd6c7457ce52fc21c98d45f2af78f990ac"},
-    {file = "netaddr-0.8.0.tar.gz", hash = "sha256:d6cc57c7a07b1d9d2e917aa8b36ae8ce61c35ba3fcd1b83ca31c5a0ee2b5a243"},
+    {file = "netaddr-0.9.0-py3-none-any.whl", hash = "sha256:5148b1055679d2a1ec070c521b7db82137887fabd6d7e37f5199b44f775c3bb1"},
+    {file = "netaddr-0.9.0.tar.gz", hash = "sha256:7b46fa9b1a2d71fd5de9e4a3784ef339700a53a08c8040f08baf5f1194da0128"},
 ]

 [[package]]
@@ -1581,13 +1592,13 @@ tests = ["Sphinx", "doubles", "flake8", "flake8-quotes", "gevent", "mock", "pyte

 [[package]]
 name = "packaging"
-version = "23.1"
+version = "23.2"
 description = "Core utilities for Python packages"
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "packaging-23.1-py3-none-any.whl", hash = "sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61"},
-    {file = "packaging-23.1.tar.gz", hash = "sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f"},
+    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
+    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
 ]

 [[package]]
@@ -1628,65 +1639,65 @@ files = [

 [[package]]
 name = "pillow"
-version = "10.0.1"
+version = "10.1.0"
 description = "Python Imaging Library (Fork)"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "Pillow-10.0.1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:8f06be50669087250f319b706decf69ca71fdecd829091a37cc89398ca4dc17a"},
+    {file = "Pillow-10.1.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1ab05f3db77e98f93964697c8efc49c7954b08dd61cff526b7f2531a22410106"},
-    {file = "Pillow-10.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:50bd5f1ebafe9362ad622072a1d2f5850ecfa44303531ff14353a4059113b12d"},
+    {file = "Pillow-10.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6932a7652464746fcb484f7fc3618e6503d2066d853f68a4bd97193a3996e273"},
-    {file = "Pillow-10.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e6a90167bcca1216606223a05e2cf991bb25b14695c518bc65639463d7db722d"},
+    {file = "Pillow-10.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5f63b5a68daedc54c7c3464508d8c12075e56dcfbd42f8c1bf40169061ae666"},
-    {file = "Pillow-10.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f11c9102c56ffb9ca87134bd025a43d2aba3f1155f508eff88f694b33a9c6d19"},
+    {file = "Pillow-10.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0949b55eb607898e28eaccb525ab104b2d86542a85c74baf3a6dc24002edec2"},
-    {file = "Pillow-10.0.1-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:186f7e04248103482ea6354af6d5bcedb62941ee08f7f788a1c7707bc720c66f"},
+    {file = "Pillow-10.1.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:ae88931f93214777c7a3aa0a8f92a683f83ecde27f65a45f95f22d289a69e593"},
-    {file = "Pillow-10.0.1-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:0462b1496505a3462d0f35dc1c4d7b54069747d65d00ef48e736acda2c8cbdff"},
+    {file = "Pillow-10.1.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:b0eb01ca85b2361b09480784a7931fc648ed8b7836f01fb9241141b968feb1db"},
-    {file = "Pillow-10.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d889b53ae2f030f756e61a7bff13684dcd77e9af8b10c6048fb2c559d6ed6eaf"},
+    {file = "Pillow-10.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d27b5997bdd2eb9fb199982bb7eb6164db0426904020dc38c10203187ae2ff2f"},
-    {file = "Pillow-10.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:552912dbca585b74d75279a7570dd29fa43b6d93594abb494ebb31ac19ace6bd"},
+    {file = "Pillow-10.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7df5608bc38bd37ef585ae9c38c9cd46d7c81498f086915b0f97255ea60c2818"},
-    {file = "Pillow-10.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:787bb0169d2385a798888e1122c980c6eff26bf941a8ea79747d35d8f9210ca0"},
+    {file = "Pillow-10.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:41f67248d92a5e0a2076d3517d8d4b1e41a97e2df10eb8f93106c89107f38b57"},
-    {file = "Pillow-10.0.1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:fd2a5403a75b54661182b75ec6132437a181209b901446ee5724b589af8edef1"},
+    {file = "Pillow-10.1.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:1fb29c07478e6c06a46b867e43b0bcdb241b44cc52be9bc25ce5944eed4648e7"},
-    {file = "Pillow-10.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2d7e91b4379f7a76b31c2dda84ab9e20c6220488e50f7822e59dac36b0cd92b1"},
+    {file = "Pillow-10.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2cdc65a46e74514ce742c2013cd4a2d12e8553e3a2563c64879f7c7e4d28bce7"},
-    {file = "Pillow-10.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:19e9adb3f22d4c416e7cd79b01375b17159d6990003633ff1d8377e21b7f1b21"},
+    {file = "Pillow-10.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50d08cd0a2ecd2a8657bd3d82c71efd5a58edb04d9308185d66c3a5a5bed9610"},
-    {file = "Pillow-10.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:93139acd8109edcdeffd85e3af8ae7d88b258b3a1e13a038f542b79b6d255c54"},
+    {file = "Pillow-10.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:062a1610e3bc258bff2328ec43f34244fcec972ee0717200cb1425214fe5b839"},
-    {file = "Pillow-10.0.1-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:92a23b0431941a33242b1f0ce6c88a952e09feeea9af4e8be48236a68ffe2205"},
+    {file = "Pillow-10.1.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:61f1a9d247317fa08a308daaa8ee7b3f760ab1809ca2da14ecc88ae4257d6172"},
-    {file = "Pillow-10.0.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:cbe68deb8580462ca0d9eb56a81912f59eb4542e1ef8f987405e35a0179f4ea2"},
+    {file = "Pillow-10.1.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:a646e48de237d860c36e0db37ecaecaa3619e6f3e9d5319e527ccbc8151df061"},
-    {file = "Pillow-10.0.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:522ff4ac3aaf839242c6f4e5b406634bfea002469656ae8358644fc6c4856a3b"},
+    {file = "Pillow-10.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:47e5bf85b80abc03be7455c95b6d6e4896a62f6541c1f2ce77a7d2bb832af262"},
-    {file = "Pillow-10.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:84efb46e8d881bb06b35d1d541aa87f574b58e87f781cbba8d200daa835b42e1"},
+    {file = "Pillow-10.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:a92386125e9ee90381c3369f57a2a50fa9e6aa8b1cf1d9c4b200d41a7dd8e992"},
-    {file = "Pillow-10.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:898f1d306298ff40dc1b9ca24824f0488f6f039bc0e25cfb549d3195ffa17088"},
+    {file = "Pillow-10.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:0f7c276c05a9767e877a0b4c5050c8bee6a6d960d7f0c11ebda6b99746068c2a"},
-    {file = "Pillow-10.0.1-cp312-cp312-macosx_10_10_x86_64.whl", hash = "sha256:bcf1207e2f2385a576832af02702de104be71301c2696d0012b1b93fe34aaa5b"},
+    {file = "Pillow-10.1.0-cp312-cp312-macosx_10_10_x86_64.whl", hash = "sha256:a89b8312d51715b510a4fe9fc13686283f376cfd5abca8cd1c65e4c76e21081b"},
-    {file = "Pillow-10.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5d6c9049c6274c1bb565021367431ad04481ebb54872edecfcd6088d27edd6ed"},
+    {file = "Pillow-10.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:00f438bb841382b15d7deb9a05cc946ee0f2c352653c7aa659e75e592f6fa17d"},
-    {file = "Pillow-10.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28444cb6ad49726127d6b340217f0627abc8732f1194fd5352dec5e6a0105635"},
+    {file = "Pillow-10.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d929a19f5469b3f4df33a3df2983db070ebb2088a1e145e18facbc28cae5b27"},
-    {file = "Pillow-10.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de596695a75496deb3b499c8c4f8e60376e0516e1a774e7bc046f0f48cd620ad"},
+    {file = "Pillow-10.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a92109192b360634a4489c0c756364c0c3a2992906752165ecb50544c251312"},
-    {file = "Pillow-10.0.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:2872f2d7846cf39b3dbff64bc1104cc48c76145854256451d33c5faa55c04d1a"},
+    {file = "Pillow-10.1.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:0248f86b3ea061e67817c47ecbe82c23f9dd5d5226200eb9090b3873d3ca32de"},
-    {file = "Pillow-10.0.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:4ce90f8a24e1c15465048959f1e94309dfef93af272633e8f37361b824532e91"},
+    {file = "Pillow-10.1.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:9882a7451c680c12f232a422730f986a1fcd808da0fd428f08b671237237d651"},
-    {file = "Pillow-10.0.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ee7810cf7c83fa227ba9125de6084e5e8b08c59038a7b2c9045ef4dde61663b4"},
+    {file = "Pillow-10.1.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:1c3ac5423c8c1da5928aa12c6e258921956757d976405e9467c5f39d1d577a4b"},
-    {file = "Pillow-10.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:b1be1c872b9b5fcc229adeadbeb51422a9633abd847c0ff87dc4ef9bb184ae08"},
+    {file = "Pillow-10.1.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:806abdd8249ba3953c33742506fe414880bad78ac25cc9a9b1c6ae97bedd573f"},
-    {file = "Pillow-10.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:98533fd7fa764e5f85eebe56c8e4094db912ccbe6fbf3a58778d543cadd0db08"},
+    {file = "Pillow-10.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:eaed6977fa73408b7b8a24e8b14e59e1668cfc0f4c40193ea7ced8e210adf996"},
-    {file = "Pillow-10.0.1-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:764d2c0daf9c4d40ad12fbc0abd5da3af7f8aa11daf87e4fa1b834000f4b6b0a"},
+    {file = "Pillow-10.1.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:fe1e26e1ffc38be097f0ba1d0d07fcade2bcfd1d023cda5b29935ae8052bd793"},
-    {file = "Pillow-10.0.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:fcb59711009b0168d6ee0bd8fb5eb259c4ab1717b2f538bbf36bacf207ef7a68"},
+    {file = "Pillow-10.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7a7e3daa202beb61821c06d2517428e8e7c1aab08943e92ec9e5755c2fc9ba5e"},
-    {file = "Pillow-10.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:697a06bdcedd473b35e50a7e7506b1d8ceb832dc238a336bd6f4f5aa91a4b500"},
+    {file = "Pillow-10.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:24fadc71218ad2b8ffe437b54876c9382b4a29e030a05a9879f615091f42ffc2"},
-    {file = "Pillow-10.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9f665d1e6474af9f9da5e86c2a3a2d2d6204e04d5af9c06b9d42afa6ebde3f21"},
+    {file = "Pillow-10.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa1d323703cfdac2036af05191b969b910d8f115cf53093125e4058f62012c9a"},
-    {file = "Pillow-10.0.1-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:2fa6dd2661838c66f1a5473f3b49ab610c98a128fc08afbe81b91a1f0bf8c51d"},
+    {file = "Pillow-10.1.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:912e3812a1dbbc834da2b32299b124b5ddcb664ed354916fd1ed6f193f0e2d01"},
-    {file = "Pillow-10.0.1-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:3a04359f308ebee571a3127fdb1bd01f88ba6f6fb6d087f8dd2e0d9bff43f2a7"},
+    {file = "Pillow-10.1.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:7dbaa3c7de82ef37e7708521be41db5565004258ca76945ad74a8e998c30af8d"},
-    {file = "Pillow-10.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:723bd25051454cea9990203405fa6b74e043ea76d4968166dfd2569b0210886a"},
+    {file = "Pillow-10.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:9d7bc666bd8c5a4225e7ac71f2f9d12466ec555e89092728ea0f5c0c2422ea80"},
-    {file = "Pillow-10.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:71671503e3015da1b50bd18951e2f9daf5b6ffe36d16f1eb2c45711a301521a7"},
+    {file = "Pillow-10.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:baada14941c83079bf84c037e2d8b7506ce201e92e3d2fa0d1303507a8538212"},
-    {file = "Pillow-10.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:44e7e4587392953e5e251190a964675f61e4dae88d1e6edbe9f36d6243547ff3"},
+    {file = "Pillow-10.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:2ef6721c97894a7aa77723740a09547197533146fba8355e86d6d9a4a1056b14"},
-    {file = "Pillow-10.0.1-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:3855447d98cced8670aaa63683808df905e956f00348732448b5a6df67ee5849"},
+    {file = "Pillow-10.1.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0a026c188be3b443916179f5d04548092e253beb0c3e2ee0a4e2cdad72f66099"},
-    {file = "Pillow-10.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ed2d9c0704f2dc4fa980b99d565c0c9a543fe5101c25b3d60488b8ba80f0cce1"},
+    {file = "Pillow-10.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:04f6f6149f266a100374ca3cc368b67fb27c4af9f1cc8cb6306d849dcdf12616"},
-    {file = "Pillow-10.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f5bb289bb835f9fe1a1e9300d011eef4d69661bb9b34d5e196e5e82c4cb09b37"},
+    {file = "Pillow-10.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb40c011447712d2e19cc261c82655f75f32cb724788df315ed992a4d65696bb"},
-    {file = "Pillow-10.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a0d3e54ab1df9df51b914b2233cf779a5a10dfd1ce339d0421748232cea9876"},
+    {file = "Pillow-10.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1a8413794b4ad9719346cd9306118450b7b00d9a15846451549314a58ac42219"},
-    {file = "Pillow-10.0.1-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:2cc6b86ece42a11f16f55fe8903595eff2b25e0358dec635d0a701ac9586588f"},
+    {file = "Pillow-10.1.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:c9aeea7b63edb7884b031a35305629a7593272b54f429a9869a4f63a1bf04c34"},
-    {file = "Pillow-10.0.1-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:ca26ba5767888c84bf5a0c1a32f069e8204ce8c21d00a49c90dabeba00ce0145"},
+    {file = "Pillow-10.1.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:b4005fee46ed9be0b8fb42be0c20e79411533d1fd58edabebc0dd24626882cfd"},
-    {file = "Pillow-10.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:f0b4b06da13275bc02adfeb82643c4a6385bd08d26f03068c2796f60d125f6f2"},
+    {file = "Pillow-10.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:4d0152565c6aa6ebbfb1e5d8624140a440f2b99bf7afaafbdbf6430426497f28"},
-    {file = "Pillow-10.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:bc2e3069569ea9dbe88d6b8ea38f439a6aad8f6e7a6283a38edf61ddefb3a9bf"},
+    {file = "Pillow-10.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:d921bc90b1defa55c9917ca6b6b71430e4286fc9e44c55ead78ca1a9f9eba5f2"},
-    {file = "Pillow-10.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:8b451d6ead6e3500b6ce5c7916a43d8d8d25ad74b9102a629baccc0808c54971"},
+    {file = "Pillow-10.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:cfe96560c6ce2f4c07d6647af2d0f3c54cc33289894ebd88cfbb3bcd5391e256"},
-    {file = "Pillow-10.0.1-pp310-pypy310_pp73-macosx_10_10_x86_64.whl", hash = "sha256:32bec7423cdf25c9038fef614a853c9d25c07590e1a870ed471f47fb80b244db"},
+    {file = "Pillow-10.1.0-pp310-pypy310_pp73-macosx_10_10_x86_64.whl", hash = "sha256:937bdc5a7f5343d1c97dc98149a0be7eb9704e937fe3dc7140e229ae4fc572a7"},
-    {file = "Pillow-10.0.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b7cf63d2c6928b51d35dfdbda6f2c1fddbe51a6bc4a9d4ee6ea0e11670dd981e"},
+    {file = "Pillow-10.1.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b1c25762197144e211efb5f4e8ad656f36c8d214d390585d1d21281f46d556ba"},
-    {file = "Pillow-10.0.1-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:f6d3d4c905e26354e8f9d82548475c46d8e0889538cb0657aa9c6f0872a37aa4"},
+    {file = "Pillow-10.1.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:afc8eef765d948543a4775f00b7b8c079b3321d6b675dde0d02afa2ee23000b4"},
-    {file = "Pillow-10.0.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:847e8d1017c741c735d3cd1883fa7b03ded4f825a6e5fcb9378fd813edee995f"},
+    {file = "Pillow-10.1.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:883f216eac8712b83a63f41b76ddfb7b2afab1b74abbb413c5df6680f071a6b9"},
-    {file = "Pillow-10.0.1-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7f771e7219ff04b79e231d099c0a28ed83aa82af91fd5fa9fdb28f5b8d5addaf"},
+    {file = "Pillow-10.1.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b920e4d028f6442bea9a75b7491c063f0b9a3972520731ed26c83e254302eb1e"},
-    {file = "Pillow-10.0.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:459307cacdd4138edee3875bbe22a2492519e060660eaf378ba3b405d1c66317"},
+    {file = "Pillow-10.1.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c41d960babf951e01a49c9746f92c5a7e0d939d1652d7ba30f6b3090f27e412"},
-    {file = "Pillow-10.0.1-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b059ac2c4c7a97daafa7dc850b43b2d3667def858a4f112d1aa082e5c3d6cf7d"},
+    {file = "Pillow-10.1.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:1fafabe50a6977ac70dfe829b2d5735fd54e190ab55259ec8aea4aaea412fa0b"},
-    {file = "Pillow-10.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:d6caf3cd38449ec3cd8a68b375e0c6fe4b6fd04edb6c9766b55ef84a6e8ddf2d"},
+    {file = "Pillow-10.1.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:3b834f4b16173e5b92ab6566f0473bfb09f939ba14b23b8da1f54fa63e4b623f"},
-    {file = "Pillow-10.0.1.tar.gz", hash = "sha256:d72967b06be9300fed5cfbc8b5bafceec48bf7cdc7dab66b1d2549035287191d"},
+    {file = "Pillow-10.1.0.tar.gz", hash = "sha256:e6bf8de6c36ed96c86ea3b6e1d5273c53f46ef518a062464cd7ef5dd2cf92e38"},
 ]

 [package.extras]
@@ -1749,22 +1760,22 @@ twisted = ["twisted"]

 [[package]]
 name = "psycopg2"
-version = "2.9.8"
+version = "2.9.9"
 description = "psycopg2 - Python-PostgreSQL Database Adapter"
 optional = true
-python-versions = ">=3.6"
+python-versions = ">=3.7"
 files = [
-    {file = "psycopg2-2.9.8-cp310-cp310-win32.whl", hash = "sha256:2f8594f92bbb5d8b59ffec04e2686c416401e2d4297de1193f8e75235937e71d"},
+    {file = "psycopg2-2.9.9-cp310-cp310-win32.whl", hash = "sha256:38a8dcc6856f569068b47de286b472b7c473ac7977243593a288ebce0dc89516"},
-    {file = "psycopg2-2.9.8-cp310-cp310-win_amd64.whl", hash = "sha256:f9ecbf504c4eaff90139d5c9b95d47275f2b2651e14eba56392b4041fbf4c2b3"},
+    {file = "psycopg2-2.9.9-cp310-cp310-win_amd64.whl", hash = "sha256:426f9f29bde126913a20a96ff8ce7d73fd8a216cfb323b1f04da402d452853c3"},
-    {file = "psycopg2-2.9.8-cp311-cp311-win32.whl", hash = "sha256:65f81e72136d8b9ac8abf5206938d60f50da424149a43b6073f1546063c0565e"},
+    {file = "psycopg2-2.9.9-cp311-cp311-win32.whl", hash = "sha256:ade01303ccf7ae12c356a5e10911c9e1c51136003a9a1d92f7aa9d010fb98372"},
-    {file = "psycopg2-2.9.8-cp311-cp311-win_amd64.whl", hash = "sha256:f7e62095d749359b7854143843f27edd7dccfcd3e1d833b880562aa5702d92b0"},
+    {file = "psycopg2-2.9.9-cp311-cp311-win_amd64.whl", hash = "sha256:121081ea2e76729acfb0673ff33755e8703d45e926e416cb59bae3a86c6a4981"},
-    {file = "psycopg2-2.9.8-cp37-cp37m-win32.whl", hash = "sha256:81b21424023a290a40884c7f8b0093ba6465b59bd785c18f757e76945f65594c"},
+    {file = "psycopg2-2.9.9-cp37-cp37m-win32.whl", hash = "sha256:5e0d98cade4f0e0304d7d6f25bbfbc5bd186e07b38eac65379309c4ca3193efa"},
-    {file = "psycopg2-2.9.8-cp37-cp37m-win_amd64.whl", hash = "sha256:67c2f32f3aba79afb15799575e77ee2db6b46b8acf943c21d34d02d4e1041d50"},
+    {file = "psycopg2-2.9.9-cp37-cp37m-win_amd64.whl", hash = "sha256:7e2dacf8b009a1c1e843b5213a87f7c544b2b042476ed7755be813eaf4e8347a"},
-    {file = "psycopg2-2.9.8-cp38-cp38-win32.whl", hash = "sha256:287a64ef168ef7fb9f382964705ff664b342bfff47e7242bf0a04ef203269dd5"},
+    {file = "psycopg2-2.9.9-cp38-cp38-win32.whl", hash = "sha256:ff432630e510709564c01dafdbe996cb552e0b9f3f065eb89bdce5bd31fabf4c"},
-    {file = "psycopg2-2.9.8-cp38-cp38-win_amd64.whl", hash = "sha256:dcde3cad4920e29e74bf4e76c072649764914facb2069e6b7fa1ddbebcd49e9f"},
+    {file = "psycopg2-2.9.9-cp38-cp38-win_amd64.whl", hash = "sha256:bac58c024c9922c23550af2a581998624d6e02350f4ae9c5f0bc642c633a2d5e"},
-    {file = "psycopg2-2.9.8-cp39-cp39-win32.whl", hash = "sha256:d4ad050ea50a16731d219c3a85e8f2debf49415a070f0b8331ccc96c81700d9b"},
+    {file = "psycopg2-2.9.9-cp39-cp39-win32.whl", hash = "sha256:c92811b2d4c9b6ea0285942b2e7cac98a59e166d59c588fe5cfe1eda58e72d59"},
-    {file = "psycopg2-2.9.8-cp39-cp39-win_amd64.whl", hash = "sha256:d39bb3959788b2c9d7bf5ff762e29f436172b241cd7b47529baac77746fd7918"},
+    {file = "psycopg2-2.9.9-cp39-cp39-win_amd64.whl", hash = "sha256:de80739447af31525feddeb8effd640782cf5998e1a4e9192ebdf829717e3913"},
-    {file = "psycopg2-2.9.8.tar.gz", hash = "sha256:3da6488042a53b50933244085f3f91803f1b7271f970f3e5536efa69314f6a49"},
+    {file = "psycopg2-2.9.9.tar.gz", hash = "sha256:d1454bde93fb1e224166811694d600e746430c006fbb031ea06ecc2ea41bf156"},
 ]

 [[package]]
@@ -2427,28 +2438,28 @@ files = [

 [[package]]
 name = "ruff"
-version = "0.0.290"
+version = "0.0.292"
 description = "An extremely fast Python linter, written in Rust."
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "ruff-0.0.290-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:0e2b09ac4213b11a3520221083866a5816616f3ae9da123037b8ab275066fbac"},
+    {file = "ruff-0.0.292-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:02f29db018c9d474270c704e6c6b13b18ed0ecac82761e4fcf0faa3728430c96"},
-    {file = "ruff-0.0.290-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:4ca6285aa77b3d966be32c9a3cd531655b3d4a0171e1f9bf26d66d0372186767"},
+    {file = "ruff-0.0.292-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:69654e564342f507edfa09ee6897883ca76e331d4bbc3676d8a8403838e9fade"},
-    {file = "ruff-0.0.290-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35e3550d1d9f2157b0fcc77670f7bb59154f223bff281766e61bdd1dd854e0c5"},
+    {file = "ruff-0.0.292-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c3c91859a9b845c33778f11902e7b26440d64b9d5110edd4e4fa1726c41e0a4"},
-    {file = "ruff-0.0.290-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d748c8bd97874f5751aed73e8dde379ce32d16338123d07c18b25c9a2796574a"},
+    {file = "ruff-0.0.292-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f4476f1243af2d8c29da5f235c13dca52177117935e1f9393f9d90f9833f69e4"},
-    {file = "ruff-0.0.290-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:982af5ec67cecd099e2ef5e238650407fb40d56304910102d054c109f390bf3c"},
+    {file = "ruff-0.0.292-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be8eb50eaf8648070b8e58ece8e69c9322d34afe367eec4210fdee9a555e4ca7"},
-    {file = "ruff-0.0.290-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:bbd37352cea4ee007c48a44c9bc45a21f7ba70a57edfe46842e346651e2b995a"},
+    {file = "ruff-0.0.292-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:9889bac18a0c07018aac75ef6c1e6511d8411724d67cb879103b01758e110a81"},
-    {file = "ruff-0.0.290-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d9be6351b7889462912e0b8185a260c0219c35dfd920fb490c7f256f1d8313e"},
+    {file = "ruff-0.0.292-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6bdfabd4334684a4418b99b3118793f2c13bb67bf1540a769d7816410402a205"},
-    {file = "ruff-0.0.290-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:75cdc7fe32dcf33b7cec306707552dda54632ac29402775b9e212a3c16aad5e6"},
+    {file = "ruff-0.0.292-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:aa7c77c53bfcd75dbcd4d1f42d6cabf2485d2e1ee0678da850f08e1ab13081a8"},
-    {file = "ruff-0.0.290-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eb07f37f7aecdbbc91d759c0c09870ce0fb3eed4025eebedf9c4b98c69abd527"},
+    {file = "ruff-0.0.292-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e087b24d0d849c5c81516ec740bf4fd48bf363cfb104545464e0fca749b6af9"},
-    {file = "ruff-0.0.290-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:2ab41bc0ba359d3f715fc7b705bdeef19c0461351306b70a4e247f836b9350ed"},
+    {file = "ruff-0.0.292-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:f160b5ec26be32362d0774964e218f3fcf0a7da299f7e220ef45ae9e3e67101a"},
-    {file = "ruff-0.0.290-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:150bf8050214cea5b990945b66433bf9a5e0cef395c9bc0f50569e7de7540c86"},
+    {file = "ruff-0.0.292-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:ac153eee6dd4444501c4bb92bff866491d4bfb01ce26dd2fff7ca472c8df9ad0"},
-    {file = "ruff-0.0.290-py3-none-musllinux_1_2_i686.whl", hash = "sha256:75386ebc15fe5467248c039f5bf6a0cfe7bfc619ffbb8cd62406cd8811815fca"},
+    {file = "ruff-0.0.292-py3-none-musllinux_1_2_i686.whl", hash = "sha256:87616771e72820800b8faea82edd858324b29bb99a920d6aa3d3949dd3f88fb0"},
-    {file = "ruff-0.0.290-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:ac93eadf07bc4ab4c48d8bb4e427bf0f58f3a9c578862eb85d99d704669f5da0"},
+    {file = "ruff-0.0.292-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:b76deb3bdbea2ef97db286cf953488745dd6424c122d275f05836c53f62d4016"},
-    {file = "ruff-0.0.290-py3-none-win32.whl", hash = "sha256:461fbd1fb9ca806d4e3d5c745a30e185f7cf3ca77293cdc17abb2f2a990ad3f7"},
+    {file = "ruff-0.0.292-py3-none-win32.whl", hash = "sha256:e854b05408f7a8033a027e4b1c7f9889563dd2aca545d13d06711e5c39c3d003"},
-    {file = "ruff-0.0.290-py3-none-win_amd64.whl", hash = "sha256:f1f49f5ec967fd5778813780b12a5650ab0ebcb9ddcca28d642c689b36920796"},
+    {file = "ruff-0.0.292-py3-none-win_amd64.whl", hash = "sha256:f27282bedfd04d4c3492e5c3398360c9d86a295be00eccc63914438b4ac8a83c"},
-    {file = "ruff-0.0.290-py3-none-win_arm64.whl", hash = "sha256:ae5a92dfbdf1f0c689433c223f8dac0782c2b2584bd502dfdbc76475669f1ba1"},
+    {file = "ruff-0.0.292-py3-none-win_arm64.whl", hash = "sha256:7f67a69c8f12fbc8daf6ae6d36705037bde315abf8b82b6e1f4c9e74eb750f68"},
-    {file = "ruff-0.0.290.tar.gz", hash = "sha256:949fecbc5467bb11b8db810a7fa53c7e02633856ee6bd1302b2f43adcd71b88d"},
+    {file = "ruff-0.0.292.tar.gz", hash = "sha256:1093449e37dd1e9b813798f6ad70932b57cf614e5c2b5c51005bf67d55db33ac"},
 ]

 [[package]]
@@ -2483,13 +2494,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]

 [[package]]
 name = "sentry-sdk"
-version = "1.31.0"
+version = "1.32.0"
 description = "Python client for Sentry (https://sentry.io)"
 optional = true
 python-versions = "*"
 files = [
-    {file = "sentry-sdk-1.31.0.tar.gz", hash = "sha256:6de2e88304873484207fed836388e422aeff000609b104c802749fd89d56ba5b"},
-    {file = "sentry_sdk-1.31.0-py2.py3-none-any.whl", hash = "sha256:64a7141005fb775b9db298a30de93e3b83e0ddd1232dc6f36eb38aebc1553291"},
+    {file = "sentry-sdk-1.32.0.tar.gz", hash = "sha256:935e8fbd7787a3702457393b74b13d89a5afb67185bc0af85c00cb27cbd42e7c"},
+    {file = "sentry_sdk-1.32.0-py2.py3-none-any.whl", hash = "sha256:eeb0b3550536f3bbc05bb1c7e0feb3a78d74acb43b607159a606ed2ec0a33a4d"},
 ]

 [package.dependencies]
@@ -3037,13 +3048,13 @@ twisted = "*"

 [[package]]
 name = "types-bleach"
-version = "6.0.0.4"
+version = "6.1.0.0"
 description = "Typing stubs for bleach"
 optional = false
-python-versions = "*"
+python-versions = ">=3.7"
 files = [
-    {file = "types-bleach-6.0.0.4.tar.gz", hash = "sha256:357b0226f65c4f20ab3b13ca8d78a6b91c78aad256d8ec168d4e90fc3303ebd4"},
-    {file = "types_bleach-6.0.0.4-py3-none-any.whl", hash = "sha256:2b8767eb407c286b7f02803678732e522e04db8d56cbc9f1270bee49627eae92"},
+    {file = "types-bleach-6.1.0.0.tar.gz", hash = "sha256:3cf0e55d4618890a00af1151f878b2e2a7a96433850b74e12bede7663d774532"},
+    {file = "types_bleach-6.1.0.0-py3-none-any.whl", hash = "sha256:f0bc75d0f6475036ac69afebf37c41d116dfba78dae55db80437caf0fcd35c28"},
 ]

 [[package]]
@@ -3059,15 +3070,18 @@ files = [

 [[package]]
 name = "types-jsonschema"
-version = "4.17.0.10"
+version = "4.19.0.3"
 description = "Typing stubs for jsonschema"
 optional = false
-python-versions = "*"
+python-versions = ">=3.8"
 files = [
-    {file = "types-jsonschema-4.17.0.10.tar.gz", hash = "sha256:8e979db34d69bc9f9b3d6e8b89bdbc60b3a41cfce4e1fb87bf191d205c7f5098"},
-    {file = "types_jsonschema-4.17.0.10-py3-none-any.whl", hash = "sha256:3aa2a89afbd9eaa6ce0c15618b36f02692a621433889ce73014656f7d8caf971"},
+    {file = "types-jsonschema-4.19.0.3.tar.gz", hash = "sha256:e0fc0f5d51fd0988bf193be42174a5376b0096820ff79505d9c1b66de23f0581"},
+    {file = "types_jsonschema-4.19.0.3-py3-none-any.whl", hash = "sha256:5cedbb661e5ca88d95b94b79902423e3f97a389c245e5fe0ab384122f27d56b9"},
 ]
+
+[package.dependencies]
+referencing = "*"

 [[package]]
 name = "types-netaddr"
 version = "0.9.0.1"
@@ -3197,17 +3211,17 @@ files = [

 [[package]]
 name = "urllib3"
-version = "1.26.15"
+version = "1.26.17"
 description = "HTTP library with thread-safe connection pooling, file post, and more."
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
 files = [
-    {file = "urllib3-1.26.15-py2.py3-none-any.whl", hash = "sha256:aa751d169e23c7479ce47a0cb0da579e3ede798f994f5816a74e4f4500dcea42"},
-    {file = "urllib3-1.26.15.tar.gz", hash = "sha256:8a388717b9476f934a21484e8c8e61875ab60644d29b9b39e11e4b9dc1c6b305"},
+    {file = "urllib3-1.26.17-py2.py3-none-any.whl", hash = "sha256:94a757d178c9be92ef5539b8840d48dc9cf1b2709c9d6b588232a055c524458b"},
+    {file = "urllib3-1.26.17.tar.gz", hash = "sha256:24d6a242c28d29af46c3fae832c36db3bbebcc533dd1bb549172cd739c82df21"},
 ]

 [package.extras]
-brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
+brotli = ["brotli (==1.0.9)", "brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
 secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"]
 socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
@ -3444,4 +3458,4 @@ user-search = ["pyicu"]
[metadata] [metadata]
lock-version = "2.0" lock-version = "2.0"
python-versions = "^3.8.0" python-versions = "^3.8.0"
content-hash = "364c309486e9d93d4da8a1a3784d5ecd7d2a9734cf84dcd4a991f2cd54f0b5b5" content-hash = "a08543c65f18cc7e9dea648e89c18ab88fc1747aa2e029aa208f777fc3db06dd"
View File
@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"

 [tool.poetry]
 name = "matrix-synapse"
-version = "1.93.0"
+version = "1.94.0"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "Apache-2.0"
@@ -321,7 +321,7 @@ all = [
 # This helps prevents merge conflicts when running a batch of dependabot updates.
 isort = ">=5.10.1"
 black = ">=22.7.0"
-ruff = "0.0.290"
+ruff = "0.0.292"

 # Type checking only works with the pydantic.v1 compat module from pydantic v2
 pydantic = "^2"
View File
@@ -25,14 +25,14 @@ name = "synapse.synapse_rust"
 anyhow = "1.0.63"
 lazy_static = "1.4.0"
 log = "0.4.17"
-pyo3 = { version = "0.17.1", features = [
+pyo3 = { version = "0.19.2", features = [
     "macros",
     "anyhow",
     "abi3",
-    "abi3-py37",
+    "abi3-py38",
 ] }
 pyo3-log = "0.8.1"
-pythonize = "0.17.0"
+pythonize = "0.19.0"
 regex = "1.6.0"
 serde = { version = "1.0.144", features = ["derive"] }
 serde_json = "1.0.85"
View File
@@ -105,6 +105,17 @@ impl PushRuleEvaluator {
     /// Create a new `PushRuleEvaluator`. See struct docstring for details.
     #[allow(clippy::too_many_arguments)]
     #[new]
+    #[pyo3(signature = (
+        flattened_keys,
+        has_mentions,
+        room_member_count,
+        sender_power_level,
+        notification_power_levels,
+        related_events_flattened,
+        related_event_match_enabled,
+        room_version_feature_flags,
+        msc3931_enabled,
+    ))]
     pub fn py_new(
         flattened_keys: BTreeMap<String, JsonValue>,
         has_mentions: bool,
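With an explicit `#[pyo3(signature = ...)]`, the constructor's Python-visible argument names are pinned down. As a hedged illustration (the argument names come from the signature above; the values are invented for the example), the Rust-backed evaluator can then be constructed from Python with keywords:

```python
# Illustrative only: keyword construction of the Rust-backed evaluator.
from synapse.synapse_rust.push import PushRuleEvaluator

evaluator = PushRuleEvaluator(
    flattened_keys={"content.body": "hello"},  # flattened event, key -> JSON value
    has_mentions=False,
    room_member_count=2,
    sender_power_level=50,
    notification_power_levels={"room": 50},
    related_events_flattened={},
    related_event_match_enabled=False,
    room_version_feature_flags=[],
    msc3931_enabled=False,
)
```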
View File
@@ -214,7 +214,7 @@ fi

 extra_test_args=()

-test_tags="synapse_blacklist,msc3874,msc3890,msc3391,msc3930,faster_joins"
+test_packages="./tests/csapi ./tests ./tests/msc3874 ./tests/msc3890 ./tests/msc3391 ./tests/msc3930 ./tests/msc3902"

 # All environment variables starting with PASS_ will be shared.
 # (The prefix is stripped off before reaching the container.)
@@ -277,4 +277,4 @@ export PASS_SYNAPSE_LOG_TESTING=1
 echo "Images built; running complement"
 cd "$COMPLEMENT_DIR"

-go test -v -tags $test_tags -count=1 "${extra_test_args[@]}" "$@" ./tests/...
+go test -v -tags "synapse_blacklist" -count=1 "${extra_test_args[@]}" "$@" $test_packages
View File
@@ -684,6 +684,10 @@ def full(gh_token: str) -> None:
     click.echo("1. If this is a security release, read the security wiki page.")
     click.echo("2. Check for any release blockers before proceeding.")
     click.echo("    https://github.com/matrix-org/synapse/labels/X-Release-Blocker")
+    click.echo(
+        "3. Check for any other special release notes, including announcements to add to the changelog or special deployment instructions."
+    )
+    click.echo("    See the 'Synapse Maintainer Report'.")

     click.confirm("Ready?", abort=True)
View File
@@ -115,7 +115,7 @@ class InternalAuth(BaseAuth):
         Once get_user_by_req has set up the opentracing span, this does the actual work.
         """
         try:
-            ip_addr = request.getClientAddress().host
+            ip_addr = request.get_client_ip_if_available()
             user_agent = get_request_user_agent(request)
             access_token = self.get_access_token_from_request(request)
View File
@@ -80,10 +80,6 @@ class UserPresenceState:
     def as_dict(self) -> JsonDict:
         return attr.asdict(self)

-    @staticmethod
-    def from_dict(d: JsonDict) -> "UserPresenceState":
-        return UserPresenceState(**d)
-
     def copy_and_replace(self, **kwargs: Any) -> "UserPresenceState":
         return attr.evolve(self, **kwargs)
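The removed helper was redundant for an attrs class: the generated `__init__` already accepts the dictionary's keys as keyword arguments. A minimal standalone sketch of the pattern (not Synapse's actual class):

```python
import attr

@attr.s(slots=True, frozen=True, auto_attribs=True)
class PresenceLike:
    user_id: str
    state: str

d = {"user_id": "@alice:example.org", "state": "online"}
restored = PresenceLike(**d)       # replaces the from_dict() helper
assert attr.asdict(restored) == d  # round-trips with as_dict()
```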
View File
@@ -1402,7 +1402,7 @@ class FederationClient(FederationBase):
             The remote homeserver return some state from the room. The response
             dictionary is in the form:

-                {"knock_state_events": [<state event dict>, ...]}
+                {"knock_room_state": [<state event dict>, ...]}

             The list of state events may be empty.
@@ -1429,7 +1429,7 @@ class FederationClient(FederationBase):
             The remote homeserver can optionally return some state from the room. The response
             dictionary is in the form:

-                {"knock_state_events": [<state event dict>, ...]}
+                {"knock_room_state": [<state event dict>, ...]}

             The list of state events may be empty.
         """
View File
@@ -850,14 +850,7 @@ class FederationServer(FederationBase):
                 context, self._room_prejoin_state_types
             )
         )
-        return {
-            "knock_room_state": stripped_room_state,
-            # Since v1.37, Synapse incorrectly used "knock_state_events" for this field.
-            # Thus, we also populate a 'knock_state_events' with the same content to
-            # support old instances.
-            # See https://github.com/matrix-org/synapse/issues/14088.
-            "knock_state_events": stripped_room_state,
-        }
+        return {"knock_room_state": stripped_room_state}

     async def _on_send_membership_event(
         self, origin: str, content: JsonDict, membership_type: str, room_id: str
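Since the server now populates only the spec's `knock_room_state` field, readers of the response should fail loudly instead of falling back to the legacy key. A hedged sketch of the consuming side (it mirrors the federation-handler hunk later in this commit):

```python
from typing import Any, Dict, List

def read_knock_state(knock_response: Dict[str, Any]) -> List[Dict[str, Any]]:
    # Only the spec'd field is consulted; the legacy "knock_state_events"
    # fallback is gone.
    stripped_room_state = knock_response.get("knock_room_state")
    if stripped_room_state is None:
        raise KeyError("Missing 'knock_room_state' field in send_knock response")
    return stripped_room_state
```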
View File
@@ -395,7 +395,7 @@ class PresenceDestinationsRow(BaseFederationRow):
     @staticmethod
     def from_data(data: JsonDict) -> "PresenceDestinationsRow":
         return PresenceDestinationsRow(
-            state=UserPresenceState.from_dict(data["state"]), destinations=data["dests"]
+            state=UserPresenceState(**data["state"]), destinations=data["dests"]
         )

     def to_data(self) -> JsonDict:
View File
@@ -67,7 +67,7 @@ The loop continues so long as there is anything to send. At each iteration of th
 When the `PerDestinationQueue` has the catch-up flag set, the *Catch-Up Transmission Loop*
 (`_catch_up_transmission_loop`) is used in lieu of the regular `_transaction_transmission_loop`.
-(Only once the catch-up mode has been exited can the regular tranaction transmission behaviour
+(Only once the catch-up mode has been exited can the regular transaction transmission behaviour
 be resumed.)

 *Catch-Up Mode*, entered upon Synapse startup or once a homeserver has fallen behind due to
View File
@@ -431,7 +431,7 @@ class TransportLayerClient:
             The remote homeserver can optionally return some state from the room. The response
             dictionary is in the form:

-                {"knock_state_events": [<state event dict>, ...]}
+                {"knock_room_state": [<state event dict>, ...]}

             The list of state events may be empty.
         """
View File
@@ -212,8 +212,8 @@ class AccountValidityHandler:
         addresses = []
         for threepid in threepids:
-            if threepid["medium"] == "email":
-                addresses.append(threepid["address"])
+            if threepid.medium == "email":
+                addresses.append(threepid.address)

         return addresses
View File
@@ -16,6 +16,8 @@ import abc
 import logging
 from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Sequence, Set

+import attr
+
 from synapse.api.constants import Direction, Membership
 from synapse.events import EventBase
 from synapse.types import JsonMapping, RoomStreamToken, StateMap, UserID, UserInfo
@@ -93,7 +95,7 @@ class AdminHandler:
         ]
         user_info_dict["displayname"] = profile.display_name
         user_info_dict["avatar_url"] = profile.avatar_url
-        user_info_dict["threepids"] = threepids
+        user_info_dict["threepids"] = [attr.asdict(t) for t in threepids]
         user_info_dict["external_ids"] = external_ids
         user_info_dict["erased"] = await self._store.is_user_erased(user.to_string())
@@ -171,8 +173,8 @@ class AdminHandler:
         else:
             stream_ordering = room.stream_ordering

-        from_key = RoomStreamToken(0, 0)
-        to_key = RoomStreamToken(None, stream_ordering)
+        from_key = RoomStreamToken(topological=0, stream=0)
+        to_key = RoomStreamToken(stream=stream_ordering)

         # Events that we've processed in this room
         written_events: Set[str] = set()
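`user_get_threepids` now returns attrs objects rather than dicts, so JSON-facing call sites convert explicitly with `attr.asdict`. A hedged sketch of the pattern; `ThreepidResult` here is a stand-in carrying only the fields this commit touches:

```python
import attr

@attr.s(slots=True, frozen=True, auto_attribs=True)
class ThreepidResult:  # stand-in; not Synapse's actual definition
    medium: str
    address: str

threepids = [ThreepidResult(medium="email", address="alice@example.org")]

# Attribute access replaces the old dict lookups ...
emails = [t.address for t in threepids if t.medium == "email"]
# ... and attr.asdict() restores the wire format for admin/module APIs.
user_info_dict = {"threepids": [attr.asdict(t) for t in threepids]}
```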
View File
@@ -216,7 +216,7 @@ class ApplicationServicesHandler:
     def notify_interested_services_ephemeral(
         self,
-        stream_key: str,
+        stream_key: StreamKeyType,
         new_token: Union[int, RoomStreamToken],
         users: Collection[Union[str, UserID]],
     ) -> None:
@@ -326,7 +326,7 @@ class ApplicationServicesHandler:
     async def _notify_interested_services_ephemeral(
         self,
         services: List[ApplicationService],
-        stream_key: str,
+        stream_key: StreamKeyType,
         new_token: int,
         users: Collection[Union[str, UserID]],
     ) -> None:
View File
@@ -117,9 +117,9 @@ class DeactivateAccountHandler:
         # Remove any local threepid associations for this account.
         local_threepids = await self.store.user_get_threepids(user_id)
-        for threepid in local_threepids:
+        for local_threepid in local_threepids:
             await self._auth_handler.delete_local_threepid(
-                user_id, threepid["medium"], threepid["address"]
+                user_id, local_threepid.medium, local_threepid.address
             )

         # delete any devices belonging to the user, which will also
View File
@@ -845,7 +845,6 @@ class DeviceHandler(DeviceWorkerHandler):
                 else:
                     assert max_stream_id == stream_id
                     # Avoid moving `room_id` backwards.
-                    pass

                 if self._handle_new_device_update_new_data:
                     continue
View File
@@ -868,19 +868,10 @@ class FederationHandler:
         # This is a bit of a hack and is cribbing off of invites. Basically we
         # store the room state here and retrieve it again when this event appears
         # in the invitee's sync stream. It is stripped out for all other local users.
-        stripped_room_state = (
-            knock_response.get("knock_room_state")
-            # Since v1.37, Synapse incorrectly used "knock_state_events" for this field.
-            # Thus, we also check for a 'knock_state_events' to support old instances.
-            # See https://github.com/matrix-org/synapse/issues/14088.
-            or knock_response.get("knock_state_events")
-        )
+        stripped_room_state = knock_response.get("knock_room_state")

         if stripped_room_state is None:
-            raise KeyError(
-                "Missing 'knock_room_state' (or legacy 'knock_state_events') field in "
-                "send_knock response"
-            )
+            raise KeyError("Missing 'knock_room_state' field in send_knock response")

         event.unsigned["knock_room_state"] = stripped_room_state
@@ -1506,7 +1497,6 @@ class FederationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass
         else:
             destinations = {x.split(":", 1)[-1] for x in (sender_user_id, room_id)}
@@ -1582,7 +1572,6 @@ class FederationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

     async def add_display_name_to_third_party_invite(
         self,
View File
@@ -192,8 +192,7 @@ class InitialSyncHandler:
                 )
             elif event.membership == Membership.LEAVE:
                 room_end_token = RoomStreamToken(
-                    None,
-                    event.stream_ordering,
+                    stream=event.stream_ordering,
                 )
                 deferred_room_state = run_in_background(
                     self._state_storage_controller.get_state_for_events,
View File
@@ -1133,7 +1133,6 @@ class EventCreationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

         # we know it was persisted, so must have a stream ordering
         assert ev.internal_metadata.stream_ordering
@@ -2038,7 +2037,6 @@ class EventCreationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass
             return True
         except AuthError:
             logger.info(
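Several hunks in this commit delete the same dangling `pass`: it sat after the conditional re-raise inside a retry loop and was a no-op either way. A standalone sketch of the pattern being tidied (names are stand-ins):

```python
class PartialStateConflictError(Exception):
    """Stand-in for the storage-layer conflict error."""

def persist_once() -> None:  # hypothetical operation that may conflict
    ...

max_retries = 5
for i in range(max_retries):
    try:
        persist_once()
        break
    except PartialStateConflictError as e:
        # State was recomputed in the meantime; retry unless exhausted.
        if i == max_retries - 1:
            raise e
        # (a redundant `pass` used to sit here, doing nothing)
```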
View File
@@ -110,6 +110,7 @@ from synapse.replication.http.streams import ReplicationGetStreamUpdates
 from synapse.replication.tcp.commands import ClearUserSyncsCommand
 from synapse.replication.tcp.streams import PresenceFederationStream, PresenceStream
 from synapse.storage.databases.main import DataStore
+from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.streams import EventSource
 from synapse.types import (
     JsonDict,
@@ -1499,9 +1500,9 @@ class PresenceHandler(BasePresenceHandler):
             # We may get multiple deltas for different rooms, but we want to
             # handle them on a room by room basis, so we batch them up by
             # room.
-            deltas_by_room: Dict[str, List[JsonDict]] = {}
+            deltas_by_room: Dict[str, List[StateDelta]] = {}
             for delta in deltas:
-                deltas_by_room.setdefault(delta["room_id"], []).append(delta)
+                deltas_by_room.setdefault(delta.room_id, []).append(delta)

             for room_id, deltas_for_room in deltas_by_room.items():
                 await self._handle_state_delta(room_id, deltas_for_room)
@@ -1513,7 +1514,7 @@ class PresenceHandler(BasePresenceHandler):
                 max_pos
             )

-    async def _handle_state_delta(self, room_id: str, deltas: List[JsonDict]) -> None:
+    async def _handle_state_delta(self, room_id: str, deltas: List[StateDelta]) -> None:
         """Process current state deltas for the room to find new joins that need
         to be handled.
         """
@@ -1524,31 +1525,30 @@ class PresenceHandler(BasePresenceHandler):
         newly_joined_users = set()
         for delta in deltas:
-            assert room_id == delta["room_id"]
+            assert room_id == delta.room_id

-            typ = delta["type"]
-            state_key = delta["state_key"]
-            event_id = delta["event_id"]
-            prev_event_id = delta["prev_event_id"]
-
-            logger.debug("Handling: %r %r, %s", typ, state_key, event_id)
+            logger.debug(
+                "Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
+            )

             # Drop any event that isn't a membership join
-            if typ != EventTypes.Member:
+            if delta.event_type != EventTypes.Member:
                 continue

-            if event_id is None:
+            if delta.event_id is None:
                 # state has been deleted, so this is not a join. We only care about
                 # joins.
                 continue

-            event = await self.store.get_event(event_id, allow_none=True)
+            event = await self.store.get_event(delta.event_id, allow_none=True)
             if not event or event.content.get("membership") != Membership.JOIN:
                 # We only care about joins
                 continue

-            if prev_event_id:
-                prev_event = await self.store.get_event(prev_event_id, allow_none=True)
+            if delta.prev_event_id:
+                prev_event = await self.store.get_event(
+                    delta.prev_event_id, allow_none=True
+                )
                 if (
                     prev_event
                     and prev_event.content.get("membership") == Membership.JOIN
@@ -1556,7 +1556,7 @@ class PresenceHandler(BasePresenceHandler):
                     # Ignore changes to join events.
                     continue

-            newly_joined_users.add(state_key)
+            newly_joined_users.add(delta.state_key)

         if not newly_joined_users:
             # If nobody has joined then there's nothing to do.
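The handlers above now receive a typed `StateDelta` instead of a raw dict. A hedged sketch of the shape implied by the attributes used in this commit (inferred, not a copy of Synapse's definition):

```python
from typing import Optional
import attr

@attr.s(slots=True, frozen=True, auto_attribs=True)
class StateDelta:  # inferred shape; see synapse.storage.databases.main.state_deltas
    stream_id: int
    room_id: str
    event_type: str
    state_key: str
    event_id: Optional[str]       # None when the state entry was deleted
    prev_event_id: Optional[str]  # None when nothing is overwritten

delta = StateDelta(
    stream_id=42,
    room_id="!room:example.org",
    event_type="m.room.member",
    state_key="@alice:example.org",
    event_id="$abc123",
    prev_event_id=None,
)
assert delta.event_type == "m.room.member"  # attribute access, not delta["type"]
```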
View File
@@ -19,7 +19,7 @@ from synapse.api.errors import SynapseError, UnrecognizedRequestError
 from synapse.push.clientformat import format_push_rules_for_user
 from synapse.storage.push_rule import RuleNotFoundException
 from synapse.synapse_rust.push import get_base_rule_ids
-from synapse.types import JsonDict, UserID
+from synapse.types import JsonDict, StreamKeyType, UserID

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -114,7 +114,9 @@ class PushRulesHandler:
             user_id: the user ID the change is for.
         """
         stream_id = self._main_store.get_max_push_rules_stream_id()
-        self._notifier.on_new_event("push_rules_key", stream_id, users=[user_id])
+        self._notifier.on_new_event(
+            StreamKeyType.PUSH_RULES, stream_id, users=[user_id]
+        )

     async def push_rules_for_user(
         self, user: UserID
View File
@@ -130,11 +130,10 @@ class ReceiptsHandler:
     async def _handle_new_receipts(self, receipts: List[ReadReceipt]) -> bool:
         """Takes a list of receipts, stores them and informs the notifier."""
-        min_batch_id: Optional[int] = None
-        max_batch_id: Optional[int] = None
+        receipts_persisted: List[ReadReceipt] = []

         for receipt in receipts:
-            res = await self.store.insert_receipt(
+            stream_id = await self.store.insert_receipt(
                 receipt.room_id,
                 receipt.receipt_type,
                 receipt.user_id,
@@ -143,30 +142,26 @@ class ReceiptsHandler:
                 receipt.data,
             )

-            if not res:
-                # res will be None if this receipt is 'old'
+            if stream_id is None:
+                # stream_id will be None if this receipt is 'old'
                 continue

-            stream_id, max_persisted_id = res
-            if min_batch_id is None or stream_id < min_batch_id:
-                min_batch_id = stream_id
-            if max_batch_id is None or max_persisted_id > max_batch_id:
-                max_batch_id = max_persisted_id
+            receipts_persisted.append(receipt)

-        # Either both of these should be None or neither.
-        if min_batch_id is None or max_batch_id is None:
+        if not receipts_persisted:
             # no new receipts
             return False

-        affected_room_ids = list({r.room_id for r in receipts})
+        max_batch_id = self.store.get_max_receipt_stream_id()
+        affected_room_ids = list({r.room_id for r in receipts_persisted})

         self.notifier.on_new_event(
             StreamKeyType.RECEIPT, max_batch_id, rooms=affected_room_ids
         )

         # Note that the min here shouldn't be relied upon to be accurate.
         await self.hs.get_pusherpool().on_new_receipts(
-            min_batch_id, max_batch_id, affected_room_ids
+            {r.user_id for r in receipts_persisted}
         )

         return True
View File
@@ -261,7 +261,6 @@ class RoomCreationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

         # This is to satisfy mypy and should never happen
         raise PartialStateConflictError()
@@ -1708,7 +1707,7 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
         if from_key.topological:
             logger.warning("Stream has topological part!!!! %r", from_key)
-            from_key = RoomStreamToken(None, from_key.stream)
+            from_key = RoomStreamToken(stream=from_key.stream)

         app_service = self.store.get_app_service_by_user_id(user.to_string())
         if app_service:
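The `RoomStreamToken` call sites changed throughout this commit all move from positional to keyword construction, which makes the topological/stream split explicit. A hedged sketch of the call-site pattern (argument names are taken from the diff):

```python
from synapse.types import RoomStreamToken

# Before: RoomStreamToken(None, stream_ordering) and RoomStreamToken(0, 0)
live_token = RoomStreamToken(stream=42)                    # stream-only token
historic_token = RoomStreamToken(topological=0, stream=0)  # with topological part
```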
View File
@@ -16,7 +16,7 @@ import abc
 import logging
 import random
 from http import HTTPStatus
-from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple
+from typing import TYPE_CHECKING, Iterable, List, Optional, Set, Tuple

 from synapse import types
 from synapse.api.constants import (
@@ -44,6 +44,7 @@ from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
 from synapse.logging import opentracing
 from synapse.metrics import event_processing_positions
 from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.types import (
     JsonDict,
     Requester,
@@ -382,8 +383,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         and persist a new event for the new membership change.

         Args:
-            requester:
-            target:
+            requester: User requesting the membership change, i.e. the sender of the
+                desired membership event.
+            target: User whose membership should change, i.e. the state_key of the
+                desired membership event.
             room_id:
             membership:
@@ -415,7 +418,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         Returns:
             Tuple of event ID and stream ordering position
         """
-
         user_id = target.to_string()

         if content is None:
@@ -475,21 +477,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
                     (EventTypes.Member, user_id), None
                 )

-                if event.membership == Membership.JOIN:
-                    newly_joined = True
-                    if prev_member_event_id:
-                        prev_member_event = await self.store.get_event(
-                            prev_member_event_id
-                        )
-                        newly_joined = prev_member_event.membership != Membership.JOIN
-
-                    # Only rate-limit if the user actually joined the room, otherwise we'll end
-                    # up blocking profile updates.
-                    if newly_joined and ratelimit:
-                        await self._join_rate_limiter_local.ratelimit(requester)
-                        await self._join_rate_per_room_limiter.ratelimit(
-                            requester, key=room_id, update=False
-                        )
-
                 with opentracing.start_active_span("handle_new_client_event"):
                     result_event = (
                         await self.event_creation_handler.handle_new_client_event(
@@ -514,7 +501,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

         # we know it was persisted, so should have a stream ordering
         assert result_event.internal_metadata.stream_ordering
@@ -618,6 +604,25 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         Raises:
             ShadowBanError if a shadow-banned requester attempts to send an invite.
         """
+        if ratelimit:
+            if action == Membership.JOIN:
+                # Only rate-limit if the user isn't already joined to the room, otherwise
+                # we'll end up blocking profile updates.
+                (
+                    current_membership,
+                    _,
+                ) = await self.store.get_local_current_membership_for_user_in_room(
+                    requester.user.to_string(),
+                    room_id,
+                )
+                if current_membership != Membership.JOIN:
+                    await self._join_rate_limiter_local.ratelimit(requester)
+                    await self._join_rate_per_room_limiter.ratelimit(
+                        requester, key=room_id, update=False
+                    )
+            elif action == Membership.INVITE:
+                await self.ratelimit_invite(requester, room_id, target.to_string())
+
         if action == Membership.INVITE and requester.shadow_banned:
             # We randomly sleep a bit just to annoy the requester.
             await self.clock.sleep(random.randint(1, 10))
@@ -794,8 +799,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         if effective_membership_state == Membership.INVITE:
             target_id = target.to_string()
-            if ratelimit:
-                await self.ratelimit_invite(requester, room_id, target_id)

             # block any attempts to invite the server notices mxid
             if target_id == self._server_notices_mxid:
@@ -2002,7 +2005,6 @@ class RoomMemberMasterHandler(RoomMemberHandler):
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

         # we know it was persisted, so must have a stream ordering
         assert result_event.internal_metadata.stream_ordering
@@ -2145,24 +2147,18 @@ class RoomForgetterHandler(StateDeltasHandler):
             await self._store.update_room_forgetter_stream_pos(max_pos)

-    async def _handle_deltas(self, deltas: List[Dict[str, Any]]) -> None:
+    async def _handle_deltas(self, deltas: List[StateDelta]) -> None:
         """Called with the state deltas to process"""
         for delta in deltas:
-            typ = delta["type"]
-            state_key = delta["state_key"]
-            room_id = delta["room_id"]
-            event_id = delta["event_id"]
-            prev_event_id = delta["prev_event_id"]
-
-            if typ != EventTypes.Member:
+            if delta.event_type != EventTypes.Member:
                 continue

-            if not self._hs.is_mine_id(state_key):
+            if not self._hs.is_mine_id(delta.state_key):
                 continue

             change = await self._get_key_change(
-                prev_event_id,
-                event_id,
+                delta.prev_event_id,
+                delta.event_id,
                 key_name="membership",
                 public_value=Membership.JOIN,
             )
@@ -2171,7 +2167,7 @@ class RoomForgetterHandler(StateDeltasHandler):
             if is_leave:
                 try:
                     await self._room_member_handler.forget(
-                        UserID.from_string(state_key), room_id
+                        UserID.from_string(delta.state_key), delta.room_id
                     )
                 except SynapseError as e:
                     if e.code == 400:
View File
@@ -27,6 +27,7 @@ from typing import (
 from synapse.api.constants import EventContentFields, EventTypes, Membership
 from synapse.metrics import event_processing_positions
 from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.types import JsonDict

 if TYPE_CHECKING:
@@ -142,7 +143,7 @@ class StatsHandler:
             self.pos = max_pos

     async def _handle_deltas(
-        self, deltas: Iterable[JsonDict]
+        self, deltas: Iterable[StateDelta]
     ) -> Tuple[Dict[str, CounterType[str]], Dict[str, CounterType[str]]]:
         """Called with the state deltas to process
@@ -157,51 +158,50 @@ class StatsHandler:
         room_to_state_updates: Dict[str, Dict[str, Any]] = {}

         for delta in deltas:
-            typ = delta["type"]
-            state_key = delta["state_key"]
-            room_id = delta["room_id"]
-            event_id = delta["event_id"]
-            stream_id = delta["stream_id"]
-            prev_event_id = delta["prev_event_id"]
-
-            logger.debug("Handling: %r, %r %r, %s", room_id, typ, state_key, event_id)
+            logger.debug(
+                "Handling: %r, %r %r, %s",
+                delta.room_id,
+                delta.event_type,
+                delta.state_key,
+                delta.event_id,
+            )

-            token = await self.store.get_earliest_token_for_stats("room", room_id)
+            token = await self.store.get_earliest_token_for_stats("room", delta.room_id)

             # If the earliest token to begin from is larger than our current
             # stream ID, skip processing this delta.
-            if token is not None and token >= stream_id:
+            if token is not None and token >= delta.stream_id:
                 logger.debug(
                     "Ignoring: %s as earlier than this room's initial ingestion event",
-                    event_id,
+                    delta.event_id,
                 )
                 continue

-            if event_id is None and prev_event_id is None:
+            if delta.event_id is None and delta.prev_event_id is None:
                 logger.error(
                     "event ID is None and so is the previous event ID. stream_id: %s",
-                    stream_id,
+                    delta.stream_id,
                 )
                 continue

             event_content: JsonDict = {}

-            if event_id is not None:
-                event = await self.store.get_event(event_id, allow_none=True)
+            if delta.event_id is not None:
+                event = await self.store.get_event(delta.event_id, allow_none=True)
                 if event:
                     event_content = event.content or {}

             # All the values in this dict are deltas (RELATIVE changes)
-            room_stats_delta = room_to_stats_deltas.setdefault(room_id, Counter())
+            room_stats_delta = room_to_stats_deltas.setdefault(delta.room_id, Counter())

-            room_state = room_to_state_updates.setdefault(room_id, {})
+            room_state = room_to_state_updates.setdefault(delta.room_id, {})

-            if prev_event_id is None:
+            if delta.prev_event_id is None:
                 # this state event doesn't overwrite another,
                 # so it is a new effective/current state event
                 room_stats_delta["current_state_events"] += 1

-            if typ == EventTypes.Member:
+            if delta.event_type == EventTypes.Member:
                 # we could use StateDeltasHandler._get_key_change here but it's
                 # a bit inefficient given we're not testing for a specific
                 # result; might as well just grab the prev_membership and
@@ -210,9 +210,9 @@ class StatsHandler:
                 # in the absence of a previous event because we do not want to
                 # reduce the leave count when a new-to-the-room user joins.
                 prev_membership = None
-                if prev_event_id is not None:
+                if delta.prev_event_id is not None:
                     prev_event = await self.store.get_event(
-                        prev_event_id, allow_none=True
+                        delta.prev_event_id, allow_none=True
                     )
                     if prev_event:
                         prev_event_content = prev_event.content
@@ -256,7 +256,7 @@ class StatsHandler:
                 else:
                     raise ValueError("%r is not a valid membership" % (membership,))

-                user_id = state_key
+                user_id = delta.state_key
                 if self.is_mine_id(user_id):
                     # this accounts for transitions like leave → ban and so on.
                     has_changed_joinedness = (prev_membership == Membership.JOIN) != (
@@ -272,30 +272,30 @@ class StatsHandler:
                     room_stats_delta["local_users_in_room"] += membership_delta

-            elif typ == EventTypes.Create:
+            elif delta.event_type == EventTypes.Create:
                 room_state["is_federatable"] = (
                     event_content.get(EventContentFields.FEDERATE, True) is True
                 )
                 room_type = event_content.get(EventContentFields.ROOM_TYPE)
                 if isinstance(room_type, str):
                     room_state["room_type"] = room_type
-            elif typ == EventTypes.JoinRules:
+            elif delta.event_type == EventTypes.JoinRules:
                 room_state["join_rules"] = event_content.get("join_rule")
-            elif typ == EventTypes.RoomHistoryVisibility:
+            elif delta.event_type == EventTypes.RoomHistoryVisibility:
                 room_state["history_visibility"] = event_content.get(
                     "history_visibility"
                 )
-            elif typ == EventTypes.RoomEncryption:
+            elif delta.event_type == EventTypes.RoomEncryption:
                 room_state["encryption"] = event_content.get("algorithm")
-            elif typ == EventTypes.Name:
+            elif delta.event_type == EventTypes.Name:
                 room_state["name"] = event_content.get("name")
-            elif typ == EventTypes.Topic:
+            elif delta.event_type == EventTypes.Topic:
                 room_state["topic"] = event_content.get("topic")
-            elif typ == EventTypes.RoomAvatar:
+            elif delta.event_type == EventTypes.RoomAvatar:
                 room_state["avatar"] = event_content.get("url")
-            elif typ == EventTypes.CanonicalAlias:
+            elif delta.event_type == EventTypes.CanonicalAlias:
                 room_state["canonical_alias"] = event_content.get("alias")
-            elif typ == EventTypes.GuestAccess:
+            elif delta.event_type == EventTypes.GuestAccess:
                 room_state["guest_access"] = event_content.get(
                     EventContentFields.GUEST_ACCESS
                 )
View File
@@ -40,7 +40,6 @@ from synapse.api.filtering import FilterCollection
 from synapse.api.presence import UserPresenceState
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
 from synapse.events import EventBase
-from synapse.handlers.device import DELETE_DEVICE_MSGS_TASK_NAME
 from synapse.handlers.relations import BundledAggregations
 from synapse.logging import issue9533_logger
 from synapse.logging.context import current_context
@@ -363,36 +362,15 @@ class SyncHandler:
         # (since we now know that the device has received them)
         if since_token is not None:
             since_stream_id = since_token.to_device_key
-            # Fast path: delete a limited number of to-device messages up front.
-            # We do this to avoid the overhead of scheduling a task for every
-            # sync.
-            device_deletion_limit = 100
             deleted = await self.store.delete_messages_for_device(
                 sync_config.user.to_string(),
                 sync_config.device_id,
                 since_stream_id,
-                limit=device_deletion_limit,
             )
             logger.debug(
                 "Deleted %d to-device messages up to %d", deleted, since_stream_id
             )

-            # If we hit the limit, schedule a background task to delete the rest.
-            if deleted >= device_deletion_limit:
-                await self._task_scheduler.schedule_task(
-                    DELETE_DEVICE_MSGS_TASK_NAME,
-                    resource_id=sync_config.device_id,
-                    params={
-                        "user_id": sync_config.user.to_string(),
-                        "device_id": sync_config.device_id,
-                        "up_to_stream_id": since_stream_id,
-                    },
-                )
-                logger.debug(
-                    "Deletion of to-device messages up to %d scheduled",
-                    since_stream_id,
-                )
-
         if timeout == 0 or since_token is None or full_state:
             # we are going to return immediately, so don't bother calling
             # notifier.wait_for_events.
@@ -2333,7 +2311,7 @@ class SyncHandler:
                 continue

             leave_token = now_token.copy_and_replace(
-                StreamKeyType.ROOM, RoomStreamToken(None, event.stream_ordering)
+                StreamKeyType.ROOM, RoomStreamToken(stream=event.stream_ordering)
             )
             room_entries.append(
                 RoomSyncResultBuilder(
View File
@@ -14,7 +14,7 @@
 import logging
 from http import HTTPStatus
-from typing import TYPE_CHECKING, Any, Dict, List, Optional, Set, Tuple
+from typing import TYPE_CHECKING, List, Optional, Set, Tuple

 from twisted.internet.interfaces import IDelayedCall
@@ -23,6 +23,7 @@ from synapse.api.constants import EventTypes, HistoryVisibility, JoinRules, Memb
 from synapse.api.errors import Codes, SynapseError
 from synapse.handlers.state_deltas import MatchChange, StateDeltasHandler
 from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.storage.databases.main.user_directory import SearchResult
 from synapse.storage.roommember import ProfileInfo
 from synapse.types import UserID
@@ -247,32 +248,31 @@ class UserDirectoryHandler(StateDeltasHandler):
         await self.store.update_user_directory_stream_pos(max_pos)

-    async def _handle_deltas(self, deltas: List[Dict[str, Any]]) -> None:
+    async def _handle_deltas(self, deltas: List[StateDelta]) -> None:
         """Called with the state deltas to process"""
         for delta in deltas:
-            typ = delta["type"]
-            state_key = delta["state_key"]
-            room_id = delta["room_id"]
-            event_id: Optional[str] = delta["event_id"]
-            prev_event_id: Optional[str] = delta["prev_event_id"]
-
-            logger.debug("Handling: %r %r, %s", typ, state_key, event_id)
+            logger.debug(
+                "Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
+            )

             # For join rule and visibility changes we need to check if the room
             # may have become public or not and add/remove the users in said room
-            if typ in (EventTypes.RoomHistoryVisibility, EventTypes.JoinRules):
+            if delta.event_type in (
+                EventTypes.RoomHistoryVisibility,
+                EventTypes.JoinRules,
+            ):
                 await self._handle_room_publicity_change(
-                    room_id, prev_event_id, event_id, typ
+                    delta.room_id, delta.prev_event_id, delta.event_id, delta.event_type
                 )
-            elif typ == EventTypes.Member:
+            elif delta.event_type == EventTypes.Member:
                 await self._handle_room_membership_event(
-                    room_id,
-                    prev_event_id,
-                    event_id,
-                    state_key,
+                    delta.room_id,
+                    delta.prev_event_id,
+                    delta.event_id,
+                    delta.state_key,
                 )
             else:
-                logger.debug("Ignoring irrelevant type: %r", typ)
+                logger.debug("Ignoring irrelevant type: %r", delta.event_type)

     async def _handle_room_publicity_change(
         self,
View File
@@ -266,7 +266,7 @@ class HttpServer(Protocol):
     def register_paths(
         self,
         method: str,
-        path_patterns: Iterable[Pattern],
+        path_patterns: Iterable[Pattern[str]],
         callback: ServletCallback,
         servlet_classname: str,
     ) -> None:
View File
@@ -26,11 +26,11 @@ from twisted.internet.interfaces import IConsumer
 from twisted.protocols.basic import FileSender
 from twisted.web.server import Request

-from synapse.api.errors import Codes, SynapseError, cs_error
+from synapse.api.errors import Codes, cs_error
 from synapse.http.server import finish_request, respond_with_json
 from synapse.http.site import SynapseRequest
 from synapse.logging.context import make_deferred_yieldable
-from synapse.util.stringutils import is_ascii, parse_and_validate_server_name
+from synapse.util.stringutils import is_ascii

 logger = logging.getLogger(__name__)
@@ -84,52 +84,12 @@ INLINE_CONTENT_TYPES = [
 ]

-def parse_media_id(request: Request) -> Tuple[str, str, Optional[str]]:
-    """Parses the server name, media ID and optional file name from the request URI
-
-    Also performs some rough validation on the server name.
-
-    Args:
-        request: The `Request`.
-
-    Returns:
-        A tuple containing the parsed server name, media ID and optional file name.
-
-    Raises:
-        SynapseError(404): if parsing or validation fail for any reason
-    """
-    try:
-        # The type on postpath seems incorrect in Twisted 21.2.0.
-        postpath: List[bytes] = request.postpath  # type: ignore
-        assert postpath
-
-        # This allows users to append e.g. /test.png to the URL. Useful for
-        # clients that parse the URL to see content type.
-        server_name_bytes, media_id_bytes = postpath[:2]
-        server_name = server_name_bytes.decode("utf-8")
-        media_id = media_id_bytes.decode("utf8")
-
-        # Validate the server name, raising if invalid
-        parse_and_validate_server_name(server_name)
-
-        file_name = None
-        if len(postpath) > 2:
-            try:
-                file_name = urllib.parse.unquote(postpath[-1].decode("utf-8"))
-            except UnicodeDecodeError:
-                pass
-        return server_name, media_id, file_name
-    except Exception:
-        raise SynapseError(
-            404, "Invalid media id token %r" % (request.postpath,), Codes.UNKNOWN
-        )
-
-
 def respond_404(request: SynapseRequest) -> None:
+    assert request.path is not None
     respond_with_json(
         request,
         404,
-        cs_error("Not found %r" % (request.postpath,), code=Codes.NOT_FOUND),
+        cs_error("Not found '%s'" % (request.path.decode(),), code=Codes.NOT_FOUND),
         send_cors=True,
     )
@@ -188,7 +148,9 @@ def add_file_headers(
     # A strict subset of content types is allowed to be inlined so that they may
     # be viewed directly in a browser. Other file types are forced to be downloads.
-    if media_type.lower() in INLINE_CONTENT_TYPES:
+    #
+    # Only the type & subtype are important, parameters can be ignored.
+    if media_type.lower().split(";", 1)[0] in INLINE_CONTENT_TYPES:
         disposition = "inline"
     else:
         disposition = "attachment"
@@ -372,7 +334,7 @@ class ThumbnailInfo:
     # Content type of thumbnail, e.g. image/png
     type: str
     # The size of the media file, in bytes.
-    length: Optional[int] = None
+    length: int

 @attr.s(slots=True, frozen=True, auto_attribs=True)
View File
@@ -48,6 +48,7 @@ from synapse.media.filepath import MediaFilePaths
 from synapse.media.media_storage import MediaStorage
 from synapse.media.storage_provider import StorageProviderWrapper
 from synapse.media.thumbnailer import Thumbnailer, ThumbnailError
+from synapse.media.url_previewer import UrlPreviewer
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.types import UserID
 from synapse.util.async_helpers import Linearizer
@@ -114,7 +115,7 @@ class MediaRepository:
             )
             storage_providers.append(provider)

-        self.media_storage = MediaStorage(
+        self.media_storage: MediaStorage = MediaStorage(
             self.hs, self.primary_base_path, self.filepaths, storage_providers
         )
@@ -142,6 +143,13 @@ class MediaRepository:
                 MEDIA_RETENTION_CHECK_PERIOD_MS,
             )

+        if hs.config.media.url_preview_enabled:
+            self.url_previewer: Optional[UrlPreviewer] = UrlPreviewer(
+                hs, self, self.media_storage
+            )
+        else:
+            self.url_previewer = None
+
     def _start_update_recently_accessed(self) -> Deferred:
         return run_as_background_process(
             "update_recently_accessed_media", self._update_recently_accessed
@@ -616,6 +624,7 @@ class MediaRepository:
                     height=t_height,
                     method=t_method,
                     type=t_type,
+                    length=t_byte_source.tell(),
                 ),
             )
@@ -686,6 +695,7 @@ class MediaRepository:
                     height=t_height,
                     method=t_method,
                     type=t_type,
+                    length=t_byte_source.tell(),
                 ),
             )
@@ -831,6 +841,7 @@ class MediaRepository:
                     height=t_height,
                     method=t_method,
                     type=t_type,
+                    length=t_byte_source.tell(),
                 ),
             )
View File
@@ -678,7 +678,7 @@ class ModuleApi:
             "msisdn" for phone numbers, and an "address" key which value is the
             threepid's address.
         """
-        return await self._store.user_get_threepids(user_id)
+        return [attr.asdict(t) for t in await self._store.user_get_threepids(user_id)]

     def check_user_exists(self, user_id: str) -> "defer.Deferred[Optional[str]]":
         """Check if user exists.
View File
@@ -126,7 +126,7 @@ class _NotifierUserStream:
     def notify(
         self,
-        stream_key: str,
+        stream_key: StreamKeyType,
         stream_id: Union[int, RoomStreamToken],
         time_now_ms: int,
     ) -> None:
@@ -454,7 +454,7 @@ class Notifier:
     def on_new_event(
         self,
-        stream_key: str,
+        stream_key: StreamKeyType,
         new_token: Union[int, RoomStreamToken],
         users: Optional[Collection[Union[str, UserID]]] = None,
         rooms: Optional[StrCollection] = None,
@@ -655,30 +655,29 @@ class Notifier:
             events: List[Union[JsonDict, EventBase]] = []
             end_token = from_token

-            for name, source in self.event_sources.sources.get_sources():
-                keyname = "%s_key" % name
-                before_id = getattr(before_token, keyname)
-                after_id = getattr(after_token, keyname)
+            for keyname, source in self.event_sources.sources.get_sources():
+                before_id = before_token.get_field(keyname)
+                after_id = after_token.get_field(keyname)

                 if before_id == after_id:
                     continue

                 new_events, new_key = await source.get_new_events(
                     user=user,
-                    from_key=getattr(from_token, keyname),
+                    from_key=from_token.get_field(keyname),
                     limit=limit,
                     is_guest=is_peeking,
                     room_ids=room_ids,
                     explicit_room_id=explicit_room_id,
                 )

-                if name == "room":
+                if keyname == StreamKeyType.ROOM:
                     new_events = await filter_events_for_client(
                         self._storage_controllers,
                         user.to_string(),
                         new_events,
                         is_peeking=is_peeking,
                     )
-                elif name == "presence":
+                elif keyname == StreamKeyType.PRESENCE:
                     now = self.clock.time_msec()
                     new_events[:] = [
                         {
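The loop above replaces ad-hoc `"%s_key"` strings and `getattr` lookups with a `StreamKeyType` value and a `get_field` accessor on the token. A hedged, abbreviated sketch of the idea (stand-ins, not Synapse's classes):

```python
from enum import Enum

class StreamKeyType(Enum):  # abbreviated stand-in
    ROOM = "room_key"
    PRESENCE = "presence_key"

class Token:  # abbreviated stand-in for a stream token
    def __init__(self, room_key: int, presence_key: int) -> None:
        self.room_key = room_key
        self.presence_key = presence_key

    def get_field(self, key: StreamKeyType) -> int:
        # One typed accessor instead of getattr(token, "%s_key" % name).
        return getattr(self, key.value)

token = Token(room_key=7, presence_key=3)
assert token.get_field(StreamKeyType.ROOM) == 7
```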
View File
@@ -101,7 +101,7 @@ if TYPE_CHECKING:
 class PusherConfig:
     """Parameters necessary to configure a pusher."""

-    id: Optional[str]
+    id: Optional[int]
     user_name: str
     profile_tag: str
@@ -182,7 +182,7 @@ class Pusher(metaclass=abc.ABCMeta):
         raise NotImplementedError()

     @abc.abstractmethod
-    def on_new_receipts(self, min_stream_id: int, max_stream_id: int) -> None:
+    def on_new_receipts(self) -> None:
         raise NotImplementedError()

     @abc.abstractmethod
View File
@@ -99,7 +99,7 @@ class EmailPusher(Pusher):
                 pass
             self.timed_call = None

-    def on_new_receipts(self, min_stream_id: int, max_stream_id: int) -> None:
+    def on_new_receipts(self) -> None:
         # We could wake up and cancel the timer but there tend to be quite a
         # lot of read receipts so it's probably less work to just let the
         # timer fire
View File
@@ -160,7 +160,7 @@ class HttpPusher(Pusher):
         if should_check_for_notifs:
             self._start_processing()

-    def on_new_receipts(self, min_stream_id: int, max_stream_id: int) -> None:
+    def on_new_receipts(self) -> None:
         # Note that the min here shouldn't be relied upon to be accurate.

         # We could check the receipts are actually m.read receipts here,
View File
@@ -292,20 +292,12 @@ class PusherPool:
         except Exception:
             logger.exception("Exception in pusher on_new_notifications")

-    async def on_new_receipts(
-        self, min_stream_id: int, max_stream_id: int, affected_room_ids: Iterable[str]
-    ) -> None:
+    async def on_new_receipts(self, users_affected: StrCollection) -> None:
         if not self.pushers:
             # nothing to do here.
             return

         try:
-            # Need to subtract 1 from the minimum because the lower bound here
-            # is not inclusive
-            users_affected = await self.store.get_users_sent_receipts_between(
-                min_stream_id - 1, max_stream_id
-            )
-
             for u in users_affected:
                 # Don't push if the user account has expired
                 expired = await self._account_validity_handler.is_user_expired(u)
@@ -314,7 +306,7 @@ class PusherPool:
                 if u in self.pushers:
                     for p in self.pushers[u].values():
-                        p.on_new_receipts(min_stream_id, max_stream_id)
+                        p.on_new_receipts()
         except Exception:
             logger.exception("Exception in pusher on_new_receipts")

View File

@@ -138,7 +138,11 @@ class ReplicationFederationSendEventsRestServlet(ReplicationEndpoint):
             event_and_contexts.append((event, context))

-        logger.info("Got %d events from federation", len(event_and_contexts))
+        logger.info(
+            "Got batch of %i events to persist to room %s",
+            len(event_and_contexts),
+            room_id,
+        )

         max_stream_id = await self.federation_event_handler.persist_events_and_notify(
             room_id, event_and_contexts, backfilled

View File

@@ -118,6 +118,7 @@ class ReplicationSendEventsRestServlet(ReplicationEndpoint):
         with Measure(self.clock, "repl_send_events_parse"):
             events_and_context = []
             events = payload["events"]
+            rooms = set()

             for event_payload in events:
                 event_dict = event_payload["event"]
@@ -144,11 +145,13 @@ class ReplicationSendEventsRestServlet(ReplicationEndpoint):
                     UserID.from_string(u) for u in event_payload["extra_users"]
                 ]

-                logger.info(
-                    "Got batch of events to send, last ID of batch is: %s, sending into room: %s",
-                    event.event_id,
-                    event.room_id,
-                )
+                # all the rooms *should* be the same, but we'll log separately to be
+                # sure.
+                rooms.add(event.room_id)
+
+            logger.info(
+                "Got batch of %i events to persist to rooms %s", len(events), rooms
+            )

             last_event = (
                 await self.event_creation_handler.persist_and_notify_client_events(
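The logging rework in the two replication servlets above swaps a per-event log line for a single per-batch line, accumulating the (expected-to-be-identical) room IDs in a set as a sanity check. A standalone sketch of the pattern, with an invented event shape:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Hypothetical payload; real events carry far more than these two fields.
events = [
    {"event_id": "$evt1", "room_id": "!room:example.org"},
    {"event_id": "$evt2", "room_id": "!room:example.org"},
]

rooms = set()
for event in events:
    # All the rooms *should* be the same, but collect them all so a mixed
    # batch shows up in the log instead of being silently masked.
    rooms.add(event["room_id"])

# One log line per batch (lazy %-formatting), not one per event.
logger.info("Got batch of %i events to persist to rooms %s", len(events), rooms)
```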

View File

@@ -129,9 +129,7 @@ class ReplicationDataHandler:
             self.notifier.on_new_event(
                 StreamKeyType.RECEIPT, token, rooms=[row.room_id for row in rows]
             )
-            await self._pusher_pool.on_new_receipts(
-                token, token, {row.room_id for row in rows}
-            )
+            await self._pusher_pool.on_new_receipts({row.user_id for row in rows})
         elif stream_name == ToDeviceStream.NAME:
             entities = [row.entity for row in rows if row.entity.startswith("@")]
             if entities:

View File

@@ -18,7 +18,7 @@ allowed to be sent by which side.
 """
 import abc
 import logging
-from typing import Optional, Tuple, Type, TypeVar
+from typing import List, Optional, Tuple, Type, TypeVar

 from synapse.replication.tcp.streams._base import StreamRow
 from synapse.util import json_decoder, json_encoder
@@ -74,6 +74,8 @@ SC = TypeVar("SC", bound="_SimpleCommand")
 class _SimpleCommand(Command):
     """An implementation of Command whose argument is just a 'data' string."""

+    __slots__ = ["data"]
+
     def __init__(self, data: str):
         self.data = data
@@ -122,6 +124,8 @@ class RdataCommand(Command):
         RDATA presence master 59 ["@baz:example.com", "online", ...]
     """

+    __slots__ = ["stream_name", "instance_name", "token", "row"]
+
     NAME = "RDATA"

     def __init__(
@@ -179,6 +183,8 @@ class PositionCommand(Command):
     of the stream.
     """

+    __slots__ = ["stream_name", "instance_name", "prev_token", "new_token"]
+
     NAME = "POSITION"

     def __init__(
@@ -235,6 +241,8 @@ class ReplicateCommand(Command):
         REPLICATE
     """

+    __slots__: List[str] = []
+
     NAME = "REPLICATE"

     def __init__(self) -> None:
@@ -264,6 +272,8 @@ class UserSyncCommand(Command):
     Where <state> is either "start" or "end"
     """

+    __slots__ = ["instance_id", "user_id", "device_id", "is_syncing", "last_sync_ms"]
+
     NAME = "USER_SYNC"

     def __init__(
@@ -316,6 +326,8 @@ class ClearUserSyncsCommand(Command):
         CLEAR_USER_SYNC <instance_id>
     """

+    __slots__ = ["instance_id"]
+
     NAME = "CLEAR_USER_SYNC"

     def __init__(self, instance_id: str):
@@ -343,6 +355,8 @@ class FederationAckCommand(Command):
         FEDERATION_ACK <instance_name> <token>
     """

+    __slots__ = ["instance_name", "token"]
+
     NAME = "FEDERATION_ACK"

     def __init__(self, instance_name: str, token: int):
@@ -368,6 +382,15 @@ class UserIpCommand(Command):
         USER_IP <user_id>, <access_token>, <ip>, <device_id>, <last_seen>, <user_agent>
     """

+    __slots__ = [
+        "user_id",
+        "access_token",
+        "ip",
+        "user_agent",
+        "device_id",
+        "last_seen",
+    ]
+
     NAME = "USER_IP"

     def __init__(
@@ -423,8 +446,6 @@ class RemoteServerUpCommand(_SimpleCommand):
     """Sent when a worker has detected that a remote server is no longer
     "down" and retry timings should be reset.

-    If sent from a client the server will relay to all other workers.
-
     Format::

         REMOTE_SERVER_UP <server>
@@ -441,6 +462,8 @@ class LockReleasedCommand(Command):
         LOCK_RELEASED ["<instance_name>", "<lock_name>", "<lock_key>"]
     """

+    __slots__ = ["instance_name", "lock_name", "lock_key"]
+
     NAME = "LOCK_RELEASED"

     def __init__(
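The `__slots__` declarations sprinkled through the command classes above trade the per-instance `__dict__` for fixed attribute slots, which matters for hot, high-volume objects like replication commands. A self-contained illustration of the two effects (exact byte counts vary by Python version, so none are hard-coded here):

```python
import sys


class WithDict:
    def __init__(self, data: str) -> None:
        self.data = data


class WithSlots:
    __slots__ = ["data"]

    def __init__(self, data: str) -> None:
        self.data = data


# 1. Memory: slotted instances carry no per-instance __dict__ at all.
assert not hasattr(WithSlots("x"), "__dict__")
print("dict overhead avoided:", sys.getsizeof(WithDict("x").__dict__), "bytes")

# 2. Safety: typo'd attribute writes raise instead of creating new attributes.
try:
    WithSlots("x").dtaa = "oops"  # type: ignore[attr-defined]
except AttributeError:
    print("typo caught")
```

Note that the saving only holds if every class in the hierarchy declares `__slots__`, which is presumably why even the otherwise-empty `ReplicateCommand` gets `__slots__: List[str] = []`.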

View File

@@ -146,7 +146,7 @@ class PurgeHistoryRestServlet(RestServlet):
             # RoomStreamToken expects [int] not Optional[int]
             assert event.internal_metadata.stream_ordering is not None
             room_token = RoomStreamToken(
-                event.depth, event.internal_metadata.stream_ordering
+                topological=event.depth, stream=event.internal_metadata.stream_ordering
             )
             token = await room_token.to_string(self.store)
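Naming the `RoomStreamToken` arguments, as the hunk above does, guards against transposing the two integers. A minimal sketch of the same defensive pattern using keyword-only dataclass fields (this is not the real `RoomStreamToken`, which carries more machinery):

```python
from dataclasses import dataclass


@dataclass(frozen=True, kw_only=True)  # kw_only requires Python 3.10+
class RoomStreamToken:
    topological: int
    stream: int


tok = RoomStreamToken(topological=10, stream=99)

try:
    RoomStreamToken(10, 99)  # type: ignore[misc]
except TypeError:
    # Positional construction is rejected, so callers cannot swap the fields.
    print("positional args rejected")
```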

View File

@@ -198,7 +198,13 @@ class DestinationMembershipRestServlet(RestServlet):
         rooms, total = await self._store.get_destination_rooms_paginate(
             destination, start, limit, direction
         )
-        response = {"rooms": rooms, "total": total}
+        response = {
+            "rooms": [
+                {"room_id": room_id, "stream_ordering": stream_ordering}
+                for room_id, stream_ordering in rooms
+            ],
+            "total": total,
+        }

         if (start + limit) < total:
             response["next_token"] = str(start + len(rooms))

Some files were not shown because too many files have changed in this diff.