Compare commits

...

122 Commits

Author SHA1 Message Date
Will Lewis d58dd1d1ff Update README.rst
(cherry picked from commit be65a8ec01)
2023-12-13 14:55:57 +00:00
Will Lewis f65f316bc3 Update README.rst 2023-12-13 14:47:55 +00:00
Erik Johnston 70c020b532 Update text 2023-12-12 20:32:48 +00:00
Patrick Cloke e1f8440c89 Update the README pointing to the Element fork. 2023-12-12 20:28:30 +00:00
Erik Johnston 15733b0931 Update changelog 2023-12-12 15:51:28 +00:00
Erik Johnston 128aad4fe3 1.98.0 2023-12-12 15:10:16 +00:00
Action Bot ee37039031 Version picker added for v1.98 docs 2023-12-11 14:51:26 +00:00
David Robertson 44377f5ac0
Revert postgres logical replication deltas
This reverts two commits:

    0bb8e418a4
    "Fix postgres schema after dropping old tables (#16730)"

and

    51e4e35653
    "Add a Postgres `REPLICA IDENTITY` to tables that do not have an implicit one. This should allow use of Postgres logical replication. (take  2, now with no added deadlocks!) (#16658)"

and also amends the changelog.
2023-12-05 16:10:48 +00:00
David Robertson c8a24c55a9
Amend changelog typo 2023-12-05 13:38:09 +00:00
David Robertson 386649325a
Fixup dependency bumps syntax in changelog 2023-12-05 13:16:59 +00:00
David Robertson 3c83d8f0af
1.98.0rc1 2023-12-05 13:14:36 +00:00
David Robertson 0a00c99823
Fix upgrading a room without `events` field in power levels (#16725) 2023-12-05 12:06:21 +00:00
Amanda H. L. de Andrade Katz e87499b3f4
Add how to validate configuration file with synapse.config script (#16714) 2023-12-05 11:42:56 +00:00
Will Hunt ea783550bb
Set response values to zero if None for /_synapse/admin/v1/federation/destinations (#16729) 2023-12-05 11:40:27 +00:00
David Robertson 0bb8e418a4
Fix postgres schema after dropping old tables (#16730) 2023-12-05 11:08:40 +00:00
reivilibre 51e4e35653
Add a Postgres `REPLICA IDENTITY` to tables that do not have an implicit one. This should allow use of Postgres logical replication. (take 2, now with no added deadlocks!) (#16658)
* Add `ALTER TABLE ... REPLICA IDENTITY ...` for individual tables

We can't combine them into one file as it makes it likely to hit a deadlock if Synapse is running, as it only takes one other transaction to access two tables in a different order to the schema delta.

* Add notes

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>

* Re-introduce REPLICA IDENTITY test

---------

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2023-12-04 14:57:28 +00:00
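A minimal sketch of the per-table approach described above, assuming hypothetical table names and psycopg2 (this is not Synapse's actual schema delta). Each `ALTER TABLE ... REPLICA IDENTITY` takes an exclusive lock, so running one short transaction per table avoids the deadlock that batching several tables into one transaction would invite:

```python
# Illustrative only: apply REPLICA IDENTITY one table per transaction.
# Batching several ALTER TABLEs into one transaction risks deadlock if a
# concurrent transaction touches the same tables in a different order.
import psycopg2

TABLES = ["events", "room_memberships"]  # hypothetical table list

conn = psycopg2.connect("dbname=synapse")
for table in TABLES:
    with conn:  # one short transaction per table, committed on exit
        with conn.cursor() as cur:
            cur.execute(f"ALTER TABLE {table} REPLICA IDENTITY FULL")
conn.close()
```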
villepeh 0aa4d3b6f7
Switch UNIX socket paths to /run, and add a UNIX socket example for HAProxy (#16700) 2023-12-04 12:38:46 +00:00
dependabot[bot] 15c46cf86a
Bump phonenumbers from 8.13.23 to 8.13.26 (#16722)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 12:32:06 +00:00
Mathieu Velten 9e7f80037d
Server notices: add an autojoin setting for the notices room (#16699)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2023-12-04 12:31:42 +00:00
dependabot[bot] 506f5c7553
Bump matrix-org/netlify-pr-preview from 2 to 3 (#16719)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 12:02:54 +00:00
Travis Ralston d6e194b2bc
Implement MSC4069: Inhibit profile propagation (#16636)
MSC: https://github.com/matrix-org/matrix-spec-proposals/pull/4069
2023-12-04 11:36:12 +00:00
dependabot[bot] 2686a05766
Bump idna from 3.4 to 3.6 (#16720)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 11:34:53 +00:00
dependabot[bot] c915b91840
Bump cryptography from 41.0.6 to 41.0.7 (#16721)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 11:30:15 +00:00
dependabot[bot] dd02c6340e
Bump sphinx-autodoc2 from 0.4.2 to 0.5.0 (#16723)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 11:17:42 +00:00
dependabot[bot] a5c14346fa
Bump types-jsonschema from 4.19.0.4 to 4.20.0.0 (#16724)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 11:16:45 +00:00
Andrew Yasinishyn 63d96bfc61
ModuleAPI SSO auth callbacks (#15207)
Signed-off-by: Andrii Yasynyshyn yasinishyn.a.n@gmail.com
2023-12-01 14:31:50 +00:00
Patrick Cloke 579c6be5f6
Drop unused tables & unneeded access token ID for events. (#16522) 2023-12-01 10:12:00 +00:00
Mo Balaa 3a092699e5
Upgrade poetry-core range to fix issue with .so file (#16702)
poetry-core 1.8.x includes a fix which properly moves the generate
synapse_rust.abi3.so file to the synapse directory when using an
editable install.

Without this change developers are left with a confusing experience
of the synapse.synapse_rust module not being found after installation.
2023-11-29 15:46:43 -05:00
Patrick Cloke dcf949cd87
Declare support for Matrix v1.7, v1.8, and v1.9. (#16707) 2023-11-29 15:02:09 -05:00
Patrick Cloke d6c3b7584f
Request & follow redirects for /media/v3/download (#16701)
Implement MSC3860 to follow redirects for federated media downloads.

Note that the Client-Server API doesn't support this (yet) since the media
repository in Synapse doesn't have a way of supporting redirects.
2023-11-29 19:03:42 +00:00
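The gist of MSC3860, sketched with `requests` rather than Synapse's Twisted-based federation client (endpoint and query parameter per the MSC; treat the details as illustrative):

```python
# Ask the remote server for the media; if it stores the content elsewhere
# (e.g. a CDN) it may answer with a redirect, which we simply follow.
import requests

def download_federated_media(server: str, media_id: str) -> bytes:
    resp = requests.get(
        f"https://{server}/_matrix/media/v3/download/{server}/{media_id}",
        params={"allow_redirect": "true"},  # opt in, per MSC3860
        allow_redirects=True,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content
```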
Erik Johnston a14678492e
Reduce DB load when forget on leave setting is disabled (#16668)
* Reduce DB load when forget on leave setting is disabled

* Newsfile
2023-11-29 18:21:30 +00:00
Erik Johnston 19dac97480
Add a workflow to try and automatically fixup a PR (#16704)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2023-11-29 14:07:32 +00:00
Erik Johnston df366966b4
Speed up pruning of `user_ips` table (#16667)
Silly query planner
2023-11-29 11:54:42 +00:00
dependabot[bot] 6f2be7794e
Bump cryptography from 41.0.5 to 41.0.6 (#16703) 2023-11-28 19:57:48 -05:00
Erik Johnston 825ac7e6a1 Merge branch 'master' into develop 2023-11-28 16:35:11 +00:00
Patrick Cloke 77882b6a7d
Document which versions of Synapse have compatible schema versions. (#16661) 2023-11-28 11:01:24 -05:00
Erik Johnston d75d6d65d1 1.97.0 2023-11-28 14:09:21 +00:00
Mathieu Velten b0ed14d815
Ignore `encryption_enabled_by_default_for_room_type` for notices room (#16677) 2023-11-28 13:15:26 +00:00
Patrick Cloke d199b84006
Remove old full schema dumps. (#16697)
These are not useful and make it difficult to search for
table definitions, etc.
2023-11-28 07:28:07 -05:00
David Robertson 8751f0ef32
Fix poetry version typo in contributors' guide (#16695) 2023-11-27 15:16:20 +00:00
dependabot[bot] b3e8d503c7
Bump prometheus-client from 0.18.0 to 0.19.0 (#16691)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-27 11:27:27 +00:00
dependabot[bot] 62e96a2929
Bump pyasn1 from 0.5.0 to 0.5.1 (#16689)
Bumps [pyasn1](https://github.com/pyasn1/pyasn1) from 0.5.0 to 0.5.1.
- [Release notes](https://github.com/pyasn1/pyasn1/releases)
- [Changelog](https://github.com/pyasn1/pyasn1/blob/main/CHANGES.rst)
- [Commits](https://github.com/pyasn1/pyasn1/compare/v0.5.0...v0.5.1)

---
updated-dependencies:
- dependency-name: pyasn1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-27 11:15:52 +00:00
dependabot[bot] 3238ae3aa0
Bump types-setuptools from 68.2.0.0 to 68.2.0.2 (#16688)
Bumps [types-setuptools](https://github.com/python/typeshed) from 68.2.0.0 to 68.2.0.2.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-setuptools
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-27 10:49:15 +00:00
dependabot[bot] 73794dd8c4
Bump ruff from 0.1.4 to 0.1.6 (#16690)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.1.4 to 0.1.6.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/v0.1.4...v0.1.6)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-27 10:48:53 +00:00
dependabot[bot] 9a6181fb4e
Bump jsonschema from 4.19.1 to 4.20.0 (#16692)
Bumps [jsonschema](https://github.com/python-jsonschema/jsonschema) from 4.19.1 to 4.20.0.
- [Release notes](https://github.com/python-jsonschema/jsonschema/releases)
- [Changelog](https://github.com/python-jsonschema/jsonschema/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/python-jsonschema/jsonschema/compare/v4.19.1...v4.20.0)

---
updated-dependencies:
- dependency-name: jsonschema
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-27 10:48:41 +00:00
dependabot[bot] 1c63dfedfd
Bump serde from 1.0.192 to 1.0.193 (#16693)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.192 to 1.0.193.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.192...v1.0.193)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-27 10:48:27 +00:00
David Robertson 0619c2bbd2
Move media retention tests out of rest tests (#16684)
* Move media retention tests out of rest tests

AFAICS this doesn't make any HTTP requests and so it ought not to belong
in `tests.rest`.

* Changelog
2023-11-27 01:29:46 +00:00
David Robertson c3627d0f99
Correctly read to-device stream pos on SQLite (#16682) 2023-11-24 13:42:38 +00:00
David Robertson 32a59a6495
Keep track of `user_ips` and `monthly_active_users` when delegating auth (#16672)
* Describe `insert_client_ip`
* Pull out client_ips and MAU tracking to BaseAuth
* Define HAS_AUTHLIB once in tests

sick of copypasting

* Track ips and token usage when delegating auth
* Test that we track MAU and user_ips
* Don't track `__oidc_admin`
2023-11-23 12:35:37 +00:00
Charles Wright 1a5f9bb651
Enable refreshable tokens on the admin registration endpoint (#16642)
Signed-off-by: Charles Wright <cvwright@futo.org>
2023-11-22 15:01:09 +00:00
V02460 f2430b16d1
Bump pyo3 (0.20), pythonize (0.20), pyo3-log (0.9) (#16673)
Signed-off-by: Kai A. Hiller <V02460@gmail.com>
2023-11-22 14:55:43 +00:00
Mathieu Velten c432d8f18f
Admin API for server notice: consistently bypass rate limits (#16670)
* Admin API for server notice: disable rate limit for all calls

* Add changelog

* Update changelog.d/16670.bugfix
2023-11-22 13:47:29 +00:00
dependabot[bot] c8118ba8c9
Bump pydantic from 2.4.2 to 2.5.1 (#16663)
Bumps [pydantic](https://github.com/pydantic/pydantic) from 2.4.2 to 2.5.1.
- [Release notes](https://github.com/pydantic/pydantic/releases)
- [Changelog](https://github.com/pydantic/pydantic/blob/main/HISTORY.md)
- [Commits](https://github.com/pydantic/pydantic/compare/v2.4.2...v2.5.1)

---
updated-dependencies:
- dependency-name: pydantic
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-22 13:14:00 +00:00
Jason Little 460743da16
Filter out auth chain queries that don't exist (#16552) 2023-11-22 10:59:16 +00:00
David Robertson 8d5c1fe921
Merge branch 'release-v1.97' into develop 2023-11-21 14:07:08 +00:00
David Robertson 536f9c96d9
fix changelog typo 2023-11-21 13:27:32 +00:00
David Robertson bb86eb9814
1.97.0rc1 2023-11-21 12:38:46 +00:00
dependabot[bot] 7611df705e
Bump sentry-sdk from 1.32.0 to 1.35.0 (#16666)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.32.0 to 1.35.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.32.0...1.35.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-20 12:14:06 +00:00
dependabot[bot] e934e7f7e7
Bump pyopenssl from 23.2.0 to 23.3.0 (#16662)
Bumps [pyopenssl](https://github.com/pyca/pyopenssl) from 23.2.0 to 23.3.0.
- [Changelog](https://github.com/pyca/pyopenssl/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/pyopenssl/compare/23.2.0...23.3.0)

---
updated-dependencies:
- dependency-name: pyopenssl
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-20 12:11:15 +00:00
dependabot[bot] d792e0f2d9
Bump types-pillow from 10.1.0.0 to 10.1.0.2 (#16664)
Bumps [types-pillow](https://github.com/python/typeshed) from 10.1.0.0 to 10.1.0.2.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pillow
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-20 12:08:25 +00:00
dependabot[bot] 89d9ab0a0a
Bump types-psycopg2 from 2.9.21.15 to 2.9.21.16 (#16665)
Bumps [types-psycopg2](https://github.com/python/typeshed) from 2.9.21.15 to 2.9.21.16.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-psycopg2
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-20 12:07:34 +00:00
Erik Johnston 6088303efb
Speed up how quickly we launch new tasks (#16660)
Now that we're reducing concurrency (#16656), this is more important.
2023-11-17 16:36:02 +00:00
Patrick Cloke d9dcfe2a35
Bump requests-toolbelt from 0.10.1 to 1.0.0. (#16659) 2023-11-17 10:23:07 -05:00
Erik Johnston 9c02ef21e0
Speed up purge room by adding index (#16657)
What it says on the tin
2023-11-17 14:15:44 +00:00
Erik Johnston 6fec2d035f
Also discard 'caches' and 'backfill' stream POSITIONS (#16655)
Follow on from #16640
2023-11-17 14:14:29 +00:00
Patrick Cloke bdb0cbc5ca Merge branch 'master' into develop 2023-11-17 08:43:47 -05:00
Michael Weimann 518e4de758
Update admin user API return types in docs. (#16654) 2023-11-17 13:38:25 +00:00
Erik Johnston 700c8a0de5
Reduce task concurrency (#16656) 2023-11-17 13:14:26 +00:00
Patrick Cloke c4f5522189 Tweaks from review. 2023-11-17 08:01:13 -05:00
Patrick Cloke 6a1352e564 Move the forking note to 1.96.1. 2023-11-17 07:52:54 -05:00
Patrick Cloke 76f990c244 1.96.1 2023-11-17 07:51:59 -05:00
Patrick Cloke 47c682101f
Fix building wheels in CI. (#16653)
pip was using a vendored setuptools that was incompatible with
Python 3.12. Upgrading cibuildwheel to a version with a newer
version of pip (and thus a newer version of setuptools) fixes
the issue.
2023-11-17 07:42:49 -05:00
Patrick Cloke 2de2258bd2 Add blogpost link to changelog. 2023-11-16 13:01:32 -05:00
Patrick Cloke ff0148a165 1.96.0 2023-11-16 12:58:00 -05:00
Erik Johnston 4d6b800385
Revert "Fix test not detecting tables with missing primary keys and missing replica identities, then add more replica identities. (#16647)" (#16652)
This reverts commit 830988ae72.
2023-11-16 16:57:26 +00:00
Erik Johnston ef5329a9f9
Revert "Add a Postgres `REPLICA IDENTITY` to tables that do not have an implicit one. This should allow use of Postgres logical replication. (#16456)" (#16651)
This reverts commit 69afe3f7a0.
2023-11-16 16:48:48 +00:00
Erik Johnston 3e8531d3ba
Speed up deleting device messages (#16643)
Keeping track of a lower-bound stream ID below which everything has already been deleted makes the queries much faster. Otherwise, every time we scan for rows to delete we'd re-scan across all the rows that have previously been deleted (until the next table VACUUM).
2023-11-16 15:19:35 +00:00
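The lower-bound trick, sketched with illustrative names (not Synapse's actual tables or helpers):

```python
# Remember the stream ID below which everything is already deleted, so each
# pruning pass scans only rows above that watermark instead of re-scanning
# previously-deleted ranges until the next table VACUUM.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device_inbox (stream_id INTEGER, message TEXT)")

low_bound = 0  # everything <= low_bound is known to be gone already

def prune(up_to: int) -> None:
    global low_bound
    conn.execute(
        "DELETE FROM device_inbox WHERE stream_id > ? AND stream_id <= ?",
        (low_bound, up_to),
    )
    conn.commit()
    low_bound = max(low_bound, up_to)  # advance the watermark
```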
Erik Johnston 1b238e8837
Speed up persisting large number of outliers (#16649)
Recalculating the roots tuple every iteration could be very expensive, so instead let's do a topological sort.
2023-11-16 14:25:35 +00:00
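One way to get such an ordering with the standard library (a sketch, not Synapse's persistence code):

```python
# Sort events so each one appears only after all of its prev_events,
# computed once up front instead of re-deriving the "roots" per iteration.
from graphlib import TopologicalSorter

# event_id -> set of prev_event_ids it depends on (illustrative data)
graph = {
    "$c": {"$b"},
    "$b": {"$a"},
    "$a": set(),
}

persist_order = list(TopologicalSorter(graph).static_order())
print(persist_order)  # ['$a', '$b', '$c']
```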
Erik Johnston fef08cbee8
Fix sending out of order `POSITION` over replication (#16639)
If a worker reconnects to Redis we send out the current positions of all our streams. However, if we're also trying to send out a backlog of RDATA at the same time then we can end up sending a `POSITION` with the current token *before* we've sent all the RDATA before the current token.

This doesn't cause actual bugs, as the receiving servers see the POSITION, fetch the relevant rows from the DB, and then ignore the old RDATA as they come in. However, this is inefficient, so it'd be better if we didn't send out-of-order positions.
2023-11-16 13:05:09 +00:00
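The ordering fix in miniature, with hypothetical names (the real replication code is considerably more involved):

```python
# Drain queued RDATA up to the current token *before* emitting POSITION,
# so a receiver never sees a position ahead of rows still in flight.
def send_stream_update(conn, stream, pending_rows, current_token):
    for token, row in sorted(pending_rows):
        if token <= current_token:
            conn.send(("RDATA", stream, token, row))
    # Only now is it safe to advertise the current position.
    conn.send(("POSITION", stream, current_token))
```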
Erik Johnston 898655fd12
More efficiently handle no-op POSITION (#16640)
We may receive `POSITION` commands where we already know the worker has
advanced past that position, so there is no point in handling them.
2023-11-16 12:32:17 +00:00
reivilibre 830988ae72
Fix test not detecting tables with missing primary keys and missing replica identities, then add more replica identities. (#16647)
* Fix the CI query that did not detect all cases of missing primary keys

* Add more missing REPLICA IDENTITY entries

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>

---------

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2023-11-16 12:26:27 +00:00
David Robertson 43d1aa75e8
Add an Admin API to temporarily grant the ability to update an existing cross-signing key without UIA (#16634) 2023-11-15 17:28:10 +00:00
Sumner Evans 999bd77d3a
Asynchronous Uploads (#15503)
Support asynchronous uploads as defined in MSC2246.
2023-11-15 09:19:24 -05:00
Patrick Cloke 80922dc46e
Add links to pre-1.0 changelog issue/PR references. (#16638) 2023-11-15 13:31:24 +00:00
Patrick Cloke f2f2c7c1f0
Use full GitHub links instead of bare issue numbers. (#16637) 2023-11-15 08:02:11 -05:00
Will Hunt 4dd18bdc2e
Improve documentation for `/_synapse/admin/v1/rooms/<room_id>/timestamp_to_event` (#16631) 2023-11-14 11:43:44 -05:00
Nick Mills-Barrett 0e36a57b60
Remove whole table locks on push rule add/delete (#16051)
The statements are already executed within a transaction, so a
table-level lock is unnecessary.
2023-11-13 16:57:44 +00:00
reivilibre 69afe3f7a0
Add a Postgres `REPLICA IDENTITY` to tables that do not have an implicit one. This should allow use of Postgres logical replication. (#16456)
* Add Postgres replica identities to tables that don't have an implicit one

Fixes #16224

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>

* Move the delta to version 83 as we missed the boat for 82

* Add a test that all tables have a REPLICA IDENTITY

* Extend the test to include when indices are deleted

* isort

* black

* Fully qualify `oid` as it is a 'hidden attribute' in Postgres 11

* Update tests/storage/test_database.py

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Add missed tables

---------

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2023-11-13 16:03:22 +00:00
David Robertson fb2554b11f
Fix outbound_federation_restricted_to docs & note when added (#16628) 2023-11-13 14:26:49 +00:00
dependabot[bot] 7455b9e27d
Bump serde from 1.0.190 to 1.0.192 (#16627)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 11:10:28 +00:00
dependabot[bot] 35fac66d20
Bump prometheus-client from 0.17.1 to 0.18.0 (#16626)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 11:09:30 +00:00
dependabot[bot] 69d1ee3feb
Bump treq from 22.2.0 to 23.11.0 (#16623)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 11:08:31 +00:00
dependabot[bot] f92af19fa5
Bump types-pyopenssl from 23.2.0.2 to 23.3.0.0 (#16625)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 11:06:10 +00:00
dependabot[bot] 22a513014d
Bump types-bleach from 6.1.0.0 to 6.1.0.1 (#16624)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 11:05:37 +00:00
dependabot[bot] ca7421b5fd
Bump towncrier from 23.6.0 to 23.11.0 (#16622)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 11:05:02 +00:00
Patrick Cloke 2c6a7dfcbf
Use attempt_to_set_autocommit everywhere. (#16615)
To avoid asserting the type of the database connection.
2023-11-09 16:19:42 -05:00
reivilibre dc7f068d9c
Fix a long-standing bug where Synapse would not unbind third-party identifiers for Application Service users when deactivated and would not emit a compliant response. (#16617)
* Don't skip unbinding 3PIDs and returning success status when deactivating AS user

Fixes #16608

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>

---------

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2023-11-09 20:18:25 +00:00
Patrick Cloke bc4372ad81
Use dbname instead of database for Postgres config. (#16618) 2023-11-09 14:40:45 -05:00
Patrick Cloke 9f514dd0fb
Use _invalidate_cache_and_stream_bulk in more places. (#16616)
This takes advantage of the new bulk method in more places to
invalidate caches for many keys at once (and then to stream that
over replication).
2023-11-09 14:40:30 -05:00
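The bulk idea, sketched with hypothetical names (not the real helper's signature):

```python
# Invalidate many cache keys locally, then emit a single replication
# message for the whole batch instead of one message per key.
def invalidate_cache_and_stream_bulk(cache, cache_name, keys, send_over_replication):
    for key in keys:
        cache.invalidate(key)  # local invalidation, one key at a time
    send_over_replication({"cache": cache_name, "keys": list(keys)})
```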
Patrick Cloke ab3f1b3b53
Convert simple_select_one_txn and simple_select_one to return tuples. (#16612) 2023-11-09 11:13:31 -05:00
Patrick Cloke ff716b483b
Return attrs for more media repo APIs. (#16611) 2023-11-09 11:00:30 -05:00
David Robertson 91587d4cf9
Bulk-invalidate e2e cached queries after claiming keys (#16613)
Co-authored-by: Patrick Cloke <patrickc@matrix.org>
2023-11-09 15:57:09 +00:00
dependabot[bot] f6aa047aa2
Bump pyicu from 2.11 to 2.12 (#16603)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-08 09:28:51 -05:00
dependabot[bot] 2a336cd2fc
Bump serde_json from 1.0.107 to 1.0.108 (#16604)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-08 09:27:51 -05:00
Patrick Cloke 455ef04187
Avoid updating the same rows multiple times with simple_update_many_txn. (#16609)
simple_update_many_txn had a bug in it which would cause each
update to be applied twice.
2023-11-07 14:02:09 -05:00
Patrick Cloke 9738b1c497
Avoid executing no-op queries. (#16583)
If simple_{insert,upsert,update}_many_txn is called without any data
to modify then return instead of executing the query.

This matches the behavior of simple_{select,delete}_many_txn.
2023-11-07 14:00:25 -05:00
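The guard is just an early return; a sketch of the pattern rather than the exact Synapse helper:

```python
# With nothing to write there is no point paying for a database round-trip.
def simple_insert_many_txn(txn, table, keys, values):
    if not values:
        return  # no rows: skip building and executing the query entirely
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (
        table,
        ", ".join(keys),
        ", ".join("?" for _ in keys),
    )
    txn.executemany(sql, values)
```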
Patrick Cloke ec9ff389f4
More tests for the simple_* methods. (#16596)
Expand tests for the simple_* database methods, additionally
test against both PostgreSQL and SQLite variants.
2023-11-07 09:34:23 -05:00
Patrick Cloke 7e5d3b06fa
Collect information for PushRuleEvaluator in parallel. (#16590)
Fetch information needed for push rule evaluation in parallel.
Ideally this would use query pipelining, but this is not
available in psycopg2.

Due to the database thread pool this may result in little
to no parallelization.
2023-11-06 15:41:57 -05:00
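The shape of the change, sketched with asyncio (Synapse is Twisted-based, and the fetchers here are hypothetical):

```python
# Kick off the independent lookups concurrently and wait for all of them.
# As the commit notes, the database thread pool may still serialize the
# underlying queries, limiting the real parallelism.
import asyncio

async def load_push_context(fetch_rules, fetch_power_levels, fetch_counts):
    rules, power_levels, counts = await asyncio.gather(
        fetch_rules(),
        fetch_power_levels(),
        fetch_counts(),
    )
    return rules, power_levels, counts
```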
Patrick Cloke 1dd3074629
Bump setuptools_rust to match pinned version. (#16605) 2023-11-06 09:13:53 -05:00
Patrick Cloke cc4fe68adf
Support reactor timing metric on more reactors. (#16532)
Previously only Twisted's EPollReactor was compatible with the
reactor timing metric, notably not working when asyncio was used.

After this change, the following configurations support the reactor
timing metric:

* poll, epoll, or select reactors
* asyncio reactor with a poll, epoll, select, /dev/poll, or kqueue event loop.
2023-11-06 08:31:22 -05:00
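For reference, explicitly selecting one of the supported reactors in Twisted looks like this (a sketch; Synapse normally chooses its own reactor):

```python
# The install must happen before twisted.internet.reactor is imported
# anywhere else in the process.
from twisted.internet import pollreactor

pollreactor.install()

from twisted.internet import reactor  # now a poll-based reactor

print(type(reactor))
```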
dependabot[bot] 1a9b22a3d1
Bump setuptools-rust from 1.8.0 to 1.8.1 (#16601)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 08:12:18 -05:00
dependabot[bot] 5cf2988694
Bump types-pyyaml from 6.0.12.11 to 6.0.12.12 (#16602)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 08:12:01 -05:00
dependabot[bot] a28339b867
Bump types-jsonschema from 4.19.0.3 to 4.19.0.4 (#16599)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 08:11:31 -05:00
dependabot[bot] 2f689a6326
Bump ruff from 0.0.292 to 0.1.4 (#16600)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 08:11:15 -05:00
Patrick Cloke 92828a7f95
Simplify event persistence code (#16584)
The event persistence code used to handle multiple rooms
at a time, but was simplified to only ever be called with a
single room at a time (different rooms are now handled in
parallel). The code was still generic over multiple rooms,
however, causing a lot of unnecessary work (e.g. needless
loops, and partitioning data by room).

This strips out the ability to handle multiple rooms at once, greatly
simplifying the code.
2023-11-03 07:30:31 -04:00
Patrick Cloke bf69b57422
Fix "'int' object is not iterable" error in set_device_id_for_pushers background update (#16594)
A regression from removing the cursor_to_dict call; this adds back
the wrapping of each value into a tuple.
2023-11-02 14:00:18 +00:00
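The shape of the bug and the fix, illustratively:

```python
# A single-column SELECT now yields bare ints, but downstream code
# unpacks 1-tuples, hence "'int' object is not iterable".
ids = [101, 102, 103]                        # what the query returns
rows = [(pusher_id,) for pusher_id in ids]   # re-wrap each value in a tuple
for (pusher_id,) in rows:                    # unpacking works again
    print(pusher_id)
```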
Patrick Cloke 0afbef30cf
Use simple_select_many_txn in event persistence code. (#16585)
Just to standardize on the normal helpers, it might also have
a slight perf improvement on PostgreSQL which will now use
`ANY (?)` instead of `IN (?, ?, ...)`.
2023-11-02 09:41:00 -04:00
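Why `ANY (?)` helps, sketched with psycopg2: a single array parameter keeps the query text stable however many IDs are passed, instead of generating a different `IN (?, ?, ...)` shape per batch size:

```python
# psycopg2 adapts a Python list to a Postgres array, so the whole batch
# is bound as one parameter.
import psycopg2

def fetch_event_json(conn, event_ids):
    with conn.cursor() as cur:
        cur.execute(
            "SELECT event_id, json FROM event_json WHERE event_id = ANY(%s)",
            (list(event_ids),),
        )
        return cur.fetchall()
```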
dependabot[bot] c812f43bd7
Bump twisted from 23.8.0 to 23.10.0 (#16588) 2023-11-01 10:23:13 +00:00
Patrick Cloke ed1b879576
Do not call getfullargspec on every call. (#16589)
getfullargspec is relatively expensive and the results will
not change between calls, so precalculate it outside the
wrapper.
2023-10-31 20:16:17 +00:00
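The pattern is to hoist the introspection to decoration time (a generic sketch, not the actual tracing decorator):

```python
# getfullargspec runs once when the function is decorated; every call of
# the wrapper reuses the precomputed argument names.
import inspect
from functools import wraps

def trace_args(func):
    arg_names = inspect.getfullargspec(func).args  # computed once

    @wraps(func)
    def wrapper(*args, **kwargs):
        bound = dict(zip(arg_names, args))
        print(f"calling {func.__name__} with {bound}")
        return func(*args, **kwargs)

    return wrapper
```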
Patrick Cloke cfb6d38c47
Remove remaining usage of cursor_to_dict. (#16564) 2023-10-31 13:13:28 -04:00
Erik Johnston c0ba319b22 Merge branch 'release-v1.96' into develop 2023-10-31 16:30:16 +00:00
Patrick Cloke 70b503f144 Fix import ordering issue introduced in 7a3a55ac98. 2023-10-31 10:32:35 -04:00
205 changed files with 6574 additions and 5509 deletions

View File

@ -8,21 +8,21 @@
# If ignoring a pull request that was not squash merged, only the merge
# commit needs to be put here. Child commits will be resolved from it.
# Run black (#3679).
# Run black (https://github.com/matrix-org/synapse/pull/3679).
8b3d9b6b199abb87246f982d5db356f1966db925
# Black reformatting (#5482).
# Black reformatting (https://github.com/matrix-org/synapse/pull/5482).
32e7c9e7f20b57dd081023ac42d6931a8da9b3a3
# Target Python 3.5 with black (#8664).
# Target Python 3.5 with black (https://github.com/matrix-org/synapse/pull/8664).
aff1eb7c671b0a3813407321d2702ec46c71fa56
# Update black to 20.8b1 (#9381).
# Update black to 20.8b1 (https://github.com/matrix-org/synapse/pull/9381).
0a00b7ff14890987f09112a2ae696c61001e6cf1
# Convert tests/rest/admin/test_room.py to unix file endings (#7953).
# Convert tests/rest/admin/test_room.py to unix file endings (https://github.com/matrix-org/synapse/pull/7953).
c4268e3da64f1abb5b31deaeb5769adb6510c0a7
# Update black to 23.1.0 (#15103)
# Update black to 23.1.0 (https://github.com/matrix-org/synapse/pull/15103)
9bb2eac71962970d02842bca441f4bcdbbf93a11

View File

@ -22,7 +22,7 @@ jobs:
path: book
- name: 📤 Deploy to Netlify
uses: matrix-org/netlify-pr-preview@v2
uses: matrix-org/netlify-pr-preview@v3
with:
path: book
owner: ${{ github.event.workflow_run.head_repository.owner.login }}

View File

@ -6,6 +6,7 @@ on:
- docs/**
- book.toml
- .github/workflows/docs-pr.yaml
- scripts-dev/schema_versions.py
jobs:
pages:
@ -13,12 +14,22 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
- name: Setup mdbook
uses: peaceiris/actions-mdbook@adeb05db28a0c0004681db83893d56c0388ea9ea # v1.2.0
with:
mdbook-version: '0.4.17'
- name: Setup python
uses: actions/setup-python@v4
with:
python-version: "3.x"
- run: "pip install 'packaging>=20.0' 'GitPython>=3.1.20'"
- name: Build the documentation
# mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
# However, we're using docs/README.md for other purposes and need to pick a new page

View File

@ -51,12 +51,22 @@ jobs:
- pre
steps:
- uses: actions/checkout@v4
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
- name: Setup mdbook
uses: peaceiris/actions-mdbook@adeb05db28a0c0004681db83893d56c0388ea9ea # v1.2.0
with:
mdbook-version: '0.4.17'
- name: Setup python
uses: actions/setup-python@v4
with:
python-version: "3.x"
- run: "pip install 'packaging>=20.0' 'GitPython>=3.1.20'"
- name: Build the documentation
# mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
# However, we're using docs/README.md for other purposes and need to pick a new page

52
.github/workflows/fix_lint.yaml vendored Normal file
View File

@ -0,0 +1,52 @@
# A helper workflow to automatically fixup any linting errors on a PR. Must be
# triggered manually.
name: Attempt to automatically fix linting errors
on:
workflow_dispatch:
jobs:
fixup:
name: Fix up
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@master
with:
# We use nightly so that `fmt` correctly groups together imports, and
# clippy correctly fixes up the benchmarks.
toolchain: nightly-2022-12-01
components: rustfmt
- uses: Swatinem/rust-cache@v2
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
with:
install-project: "false"
- name: Import order (isort)
continue-on-error: true
run: poetry run isort .
- name: Code style (black)
continue-on-error: true
run: poetry run black .
- name: Semantic checks (ruff)
continue-on-error: true
run: poetry run ruff --fix .
- run: cargo clippy --all-features --fix -- -D warnings
continue-on-error: true
- run: cargo fmt
continue-on-error: true
- uses: stefanzweifel/git-auto-commit-action@v5
with:
commit_message: "Attempt to fix linting"

View File

@ -130,7 +130,7 @@ jobs:
python-version: "3.x"
- name: Install cibuildwheel
run: python -m pip install cibuildwheel==2.9.0
run: python -m pip install cibuildwheel==2.16.2
- name: Set up QEMU to emulate aarch64
if: matrix.arch == 'aarch64'

View File

@ -1,3 +1,184 @@
# Synapse 1.98.0 (2023-12-12)
Synapse 1.98.0 will be the last Synapse release in 2023; the regular release cadence will resume in January 2024.
Synapse will soon be forked by Element under an AGPLv3.0 licence (with CLA, for
proprietary dual licensing). You can read more about this here:
- https://matrix.org/blog/2023/11/06/future-of-synapse-dendrite/
- https://element.io/blog/element-to-adopt-agplv3/
The Matrix.org Foundation copy of the project will be archived. Any changes needed
by server administrators will be communicated via our usual announcements channels,
but we are striving to make this as seamless as possible.
No significant changes since 1.98.0rc1.
# Synapse 1.98.0rc1 (2023-12-05)
### Features
- Synapse now declares support for Matrix v1.7, v1.8, and v1.9. ([\#16707](https://github.com/matrix-org/synapse/issues/16707))
- Add `on_user_login` [module API](https://matrix-org.github.io/synapse/latest/modules/writing_a_module.html) callback for when a user logs in. ([\#15207](https://github.com/matrix-org/synapse/issues/15207))
- Support [MSC4069: Inhibit profile propagation](https://github.com/matrix-org/matrix-spec-proposals/pull/4069). ([\#16636](https://github.com/matrix-org/synapse/issues/16636))
- Restore tracking of requests and monthly active users when delegating authentication via [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) to an OIDC provider. ([\#16672](https://github.com/matrix-org/synapse/issues/16672))
- Add an autojoin setting for server notices rooms, so users may be joined directly instead of receiving an invite. ([\#16699](https://github.com/matrix-org/synapse/issues/16699))
- Follow redirects when downloading media over federation (per [MSC3860](https://github.com/matrix-org/matrix-spec-proposals/pull/3860)). ([\#16701](https://github.com/matrix-org/synapse/issues/16701))
### Bugfixes
- Enable refreshable tokens on the admin registration endpoint. ([\#16642](https://github.com/matrix-org/synapse/issues/16642))
- Consistently bypass rate limits when using the server notice admin API. ([\#16670](https://github.com/matrix-org/synapse/issues/16670))
- Fix a bug introduced in Synapse 1.7.2 where rooms whose power levels lacked an `events` field could not be upgraded. ([\#16725](https://github.com/matrix-org/synapse/issues/16725))
- Fix `GET /_synapse/admin/v1/federation/destinations` [admin API](https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html) returning null (instead of 0) for `retry_last_ts` and `retry_interval`. ([\#16729](https://github.com/matrix-org/synapse/issues/16729))
### Improved Documentation
- Add schema rollback information to documentation. ([\#16661](https://github.com/matrix-org/synapse/issues/16661))
- Fix poetry version typo in the [contributors' guide](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html). ([\#16695](https://github.com/matrix-org/synapse/issues/16695))
- Switch the example UNIX socket paths to `/run`. Add HAProxy example configuration for UNIX sockets. ([\#16700](https://github.com/matrix-org/synapse/issues/16700))
- Add documentation for how to validate the configuration file with the `synapse.config` script. ([\#16714](https://github.com/matrix-org/synapse/issues/16714))
### Internal Changes
- Clean-up unused tables. ([\#16522](https://github.com/matrix-org/synapse/issues/16522))
- Reduce a little database load while processing state auth chains. ([\#16552](https://github.com/matrix-org/synapse/issues/16552))
- Reduce database load of pruning old `user_ips`. ([\#16667](https://github.com/matrix-org/synapse/issues/16667))
- Reduce DB load when forget on leave setting is disabled. ([\#16668](https://github.com/matrix-org/synapse/issues/16668))
- Ignore `encryption_enabled_by_default_for_room_type` setting when creating server notices room, since the notices will be sent unencrypted anyway. ([\#16677](https://github.com/matrix-org/synapse/issues/16677))
- Correctly read the to-device stream ID on startup using SQLite. ([\#16682](https://github.com/matrix-org/synapse/issues/16682))
- Reorganise test files. ([\#16684](https://github.com/matrix-org/synapse/issues/16684))
- Remove old full schema dumps which are no longer used. ([\#16697](https://github.com/matrix-org/synapse/issues/16697))
- Raise poetry-core upper bound to <=1.8.1. This allows contributors to import Synapse after `poetry install`ing with Poetry 1.6 and above. Contributed by Mo Balaa. ([\#16702](https://github.com/matrix-org/synapse/issues/16702))
- Add a workflow to try and automatically fixup linting in a PR. ([\#16704](https://github.com/matrix-org/synapse/issues/16704))
### Updates to locked dependencies
* Bump cryptography from 41.0.5 to 41.0.6. ([\#16703](https://github.com/matrix-org/synapse/issues/16703))
* Bump cryptography from 41.0.6 to 41.0.7. ([\#16721](https://github.com/matrix-org/synapse/issues/16721))
* Bump idna from 3.4 to 3.6. ([\#16720](https://github.com/matrix-org/synapse/issues/16720))
* Bump jsonschema from 4.19.1 to 4.20.0. ([\#16692](https://github.com/matrix-org/synapse/issues/16692))
* Bump matrix-org/netlify-pr-preview from 2 to 3. ([\#16719](https://github.com/matrix-org/synapse/issues/16719))
* Bump phonenumbers from 8.13.23 to 8.13.26. ([\#16722](https://github.com/matrix-org/synapse/issues/16722))
* Bump prometheus-client from 0.18.0 to 0.19.0. ([\#16691](https://github.com/matrix-org/synapse/issues/16691))
* Bump pyasn1 from 0.5.0 to 0.5.1. ([\#16689](https://github.com/matrix-org/synapse/issues/16689))
* Bump pydantic from 2.4.2 to 2.5.1. ([\#16663](https://github.com/matrix-org/synapse/issues/16663))
* Bump pyo3 (0.19.2→0.20.0), pythonize (0.19.0→0.20.0) and pyo3-log (0.8.1→0.9.0). ([\#16673](https://github.com/matrix-org/synapse/issues/16673))
* Bump pyopenssl from 23.2.0 to 23.3.0. ([\#16662](https://github.com/matrix-org/synapse/issues/16662))
* Bump ruff from 0.1.4 to 0.1.6. ([\#16690](https://github.com/matrix-org/synapse/issues/16690))
* Bump sentry-sdk from 1.32.0 to 1.35.0. ([\#16666](https://github.com/matrix-org/synapse/issues/16666))
* Bump serde from 1.0.192 to 1.0.193. ([\#16693](https://github.com/matrix-org/synapse/issues/16693))
* Bump sphinx-autodoc2 from 0.4.2 to 0.5.0. ([\#16723](https://github.com/matrix-org/synapse/issues/16723))
* Bump types-jsonschema from 4.19.0.4 to 4.20.0.0. ([\#16724](https://github.com/matrix-org/synapse/issues/16724))
* Bump types-pillow from 10.1.0.0 to 10.1.0.2. ([\#16664](https://github.com/matrix-org/synapse/issues/16664))
* Bump types-psycopg2 from 2.9.21.15 to 2.9.21.16. ([\#16665](https://github.com/matrix-org/synapse/issues/16665))
* Bump types-setuptools from 68.2.0.0 to 68.2.0.2. ([\#16688](https://github.com/matrix-org/synapse/issues/16688))
# Synapse 1.97.0 (2023-11-28)
Synapse will soon be forked by Element under an AGPLv3.0 licence (with CLA, for
proprietary dual licensing). You can read more about this here:
- https://matrix.org/blog/2023/11/06/future-of-synapse-dendrite/
- https://element.io/blog/element-to-adopt-agplv3/
The Matrix.org Foundation copy of the project will be archived. Any changes needed
by server administrators will be communicated via our usual announcements channels,
but we are striving to make this as seamless as possible.
No significant changes since 1.97.0rc1.
# Synapse 1.97.0rc1 (2023-11-21)
### Features
- Add support for asynchronous uploads as defined by [MSC2246](https://github.com/matrix-org/matrix-spec-proposals/pull/2246). Contributed by @sumnerevans at @beeper. ([\#15503](https://github.com/matrix-org/synapse/issues/15503))
- Improve the performance of some operations in multi-worker deployments. ([\#16613](https://github.com/matrix-org/synapse/issues/16613), [\#16616](https://github.com/matrix-org/synapse/issues/16616))
### Bugfixes
- Fix a long-standing bug where some queries updated the same row twice. Introduced in Synapse 1.57.0. ([\#16609](https://github.com/matrix-org/synapse/issues/16609))
- Fix a long-standing bug where Synapse would not unbind third-party identifiers for Application Service users when deactivated and would not emit a compliant response. ([\#16617](https://github.com/matrix-org/synapse/issues/16617))
- Fix sending out of order `POSITION` over replication, causing additional database load. ([\#16639](https://github.com/matrix-org/synapse/issues/16639))
### Improved Documentation
- Note that the option [`outbound_federation_restricted_to`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#outbound_federation_restricted_to) was added in Synapse 1.89.0, and fix a nearby formatting error. ([\#16628](https://github.com/matrix-org/synapse/issues/16628))
- Update parameter information for the `/timestamp_to_event` admin API. ([\#16631](https://github.com/matrix-org/synapse/issues/16631))
- Provide an example for a common encrypted media response from the admin user media API and mention possible null values. ([\#16654](https://github.com/matrix-org/synapse/issues/16654))
### Internal Changes
- Remove whole table locks on push rule modifications. Contributed by Nick @ Beeper (@fizzadar). ([\#16051](https://github.com/matrix-org/synapse/issues/16051))
- Support reactor tick timings on more types of event loops. ([\#16532](https://github.com/matrix-org/synapse/issues/16532))
- Improve type hints. ([\#16564](https://github.com/matrix-org/synapse/issues/16564), [\#16611](https://github.com/matrix-org/synapse/issues/16611), [\#16612](https://github.com/matrix-org/synapse/issues/16612))
- Avoid executing no-op queries. ([\#16583](https://github.com/matrix-org/synapse/issues/16583))
- Simplify persistence code to be per-room. ([\#16584](https://github.com/matrix-org/synapse/issues/16584))
- Use standard SQL helpers in persistence code. ([\#16585](https://github.com/matrix-org/synapse/issues/16585))
- Avoid updating the stream cache unnecessarily. ([\#16586](https://github.com/matrix-org/synapse/issues/16586))
- Improve performance when using opentracing. ([\#16589](https://github.com/matrix-org/synapse/issues/16589))
- Run push rule evaluator setup in parallel. ([\#16590](https://github.com/matrix-org/synapse/issues/16590))
- Improve tests of the SQL generator. ([\#16596](https://github.com/matrix-org/synapse/issues/16596))
- Use more generic database methods. ([\#16615](https://github.com/matrix-org/synapse/issues/16615))
- Use `dbname` instead of the deprecated `database` connection parameter for psycopg2. ([\#16618](https://github.com/matrix-org/synapse/issues/16618))
- Add an internal [Admin API endpoint](https://matrix-org.github.io/synapse/v1.97/usage/configuration/config_documentation.html#allow-replacing-master-cross-signing-key-without-user-interactive-auth) to temporarily grant the ability to update an existing cross-signing key without UIA. ([\#16634](https://github.com/matrix-org/synapse/issues/16634))
- Improve references to GitHub issues. ([\#16637](https://github.com/matrix-org/synapse/issues/16637), [\#16638](https://github.com/matrix-org/synapse/issues/16638))
- More efficiently handle no-op `POSITION` over replication. ([\#16640](https://github.com/matrix-org/synapse/issues/16640), [\#16655](https://github.com/matrix-org/synapse/issues/16655))
- Speed up deleting of device messages when deleting a device. ([\#16643](https://github.com/matrix-org/synapse/issues/16643))
- Speed up persisting large number of outliers. ([\#16649](https://github.com/matrix-org/synapse/issues/16649))
- Reduce max concurrency of background tasks, reducing potential max DB load. ([\#16656](https://github.com/matrix-org/synapse/issues/16656), [\#16660](https://github.com/matrix-org/synapse/issues/16660))
- Speed up purge room by adding an index to `event_push_summary`. ([\#16657](https://github.com/matrix-org/synapse/issues/16657))
### Updates to locked dependencies
* Bump prometheus-client from 0.17.1 to 0.18.0. ([\#16626](https://github.com/matrix-org/synapse/issues/16626))
* Bump pyicu from 2.11 to 2.12. ([\#16603](https://github.com/matrix-org/synapse/issues/16603))
* Bump requests-toolbelt from 0.10.1 to 1.0.0. ([\#16659](https://github.com/matrix-org/synapse/issues/16659))
* Bump ruff from 0.0.292 to 0.1.4. ([\#16600](https://github.com/matrix-org/synapse/issues/16600))
* Bump serde from 1.0.190 to 1.0.192. ([\#16627](https://github.com/matrix-org/synapse/issues/16627))
* Bump serde_json from 1.0.107 to 1.0.108. ([\#16604](https://github.com/matrix-org/synapse/issues/16604))
* Bump setuptools-rust from 1.8.0 to 1.8.1. ([\#16601](https://github.com/matrix-org/synapse/issues/16601))
* Bump towncrier from 23.6.0 to 23.11.0. ([\#16622](https://github.com/matrix-org/synapse/issues/16622))
* Bump treq from 22.2.0 to 23.11.0. ([\#16623](https://github.com/matrix-org/synapse/issues/16623))
* Bump twisted from 23.8.0 to 23.10.0. ([\#16588](https://github.com/matrix-org/synapse/issues/16588))
* Bump types-bleach from 6.1.0.0 to 6.1.0.1. ([\#16624](https://github.com/matrix-org/synapse/issues/16624))
* Bump types-jsonschema from 4.19.0.3 to 4.19.0.4. ([\#16599](https://github.com/matrix-org/synapse/issues/16599))
* Bump types-pyopenssl from 23.2.0.2 to 23.3.0.0. ([\#16625](https://github.com/matrix-org/synapse/issues/16625))
* Bump types-pyyaml from 6.0.12.11 to 6.0.12.12. ([\#16602](https://github.com/matrix-org/synapse/issues/16602))
# Synapse 1.96.1 (2023-11-17)
Synapse will soon be forked by Element under an AGPLv3.0 licence (with CLA, for
proprietary dual licensing). You can read more about this here:
* https://matrix.org/blog/2023/11/06/future-of-synapse-dendrite/
* https://element.io/blog/element-to-adopt-agplv3/
The Matrix.org Foundation copy of the project will be archived. Any changes needed
by server administrators will be communicated via our usual
[announcements channels](https://matrix.to/#/#homeowners:matrix.org), but we are
striving to make this as seamless as possible.
This minor release was needed only because of CI-related trouble on [v1.96.0](https://github.com/matrix-org/synapse/releases/tag/v1.96.0), which was never released.
### Internal Changes
- Fix building of wheels in CI. ([\#16653](https://github.com/matrix-org/synapse/issues/16653))
# Synapse 1.96.0 (2023-11-16)
### Bugfixes
- Fix "'int' object is not iterable" error in `set_device_id_for_pushers` background update introduced in Synapse 1.95.0. ([\#16594](https://github.com/matrix-org/synapse/issues/16594))
# Synapse 1.96.0rc1 (2023-10-31)
### Features

72
Cargo.lock generated
View File

@ -90,6 +90,12 @@ dependencies = [
"version_check",
]
[[package]]
name = "heck"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"
[[package]]
name = "hex"
version = "0.4.3"
@ -98,9 +104,9 @@ checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "indoc"
version = "1.0.7"
version = "2.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "adab1eaa3408fb7f0c777a73e7465fd5656136fc93b670eb6df3c88c2c1344e3"
checksum = "1e186cfbae8084e513daff4240b4797e342f988cecda4fb6c939150f96315fd8"
[[package]]
name = "itoa"
@ -191,9 +197,9 @@ dependencies = [
[[package]]
name = "pyo3"
version = "0.19.2"
version = "0.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e681a6cfdc4adcc93b4d3cf993749a4552018ee0a9b65fc0ccfad74352c72a38"
checksum = "04e8453b658fe480c3e70c8ed4e3d3ec33eb74988bd186561b0cc66b85c3bc4b"
dependencies = [
"anyhow",
"cfg-if",
@ -209,9 +215,9 @@ dependencies = [
[[package]]
name = "pyo3-build-config"
version = "0.19.2"
version = "0.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "076c73d0bc438f7a4ef6fdd0c3bb4732149136abd952b110ac93e4edb13a6ba5"
checksum = "a96fe70b176a89cff78f2fa7b3c930081e163d5379b4dcdf993e3ae29ca662e5"
dependencies = [
"once_cell",
"target-lexicon",
@ -219,9 +225,9 @@ dependencies = [
[[package]]
name = "pyo3-ffi"
version = "0.19.2"
version = "0.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e53cee42e77ebe256066ba8aa77eff722b3bb91f3419177cf4cd0f304d3284d9"
checksum = "214929900fd25e6604661ed9cf349727c8920d47deff196c4e28165a6ef2a96b"
dependencies = [
"libc",
"pyo3-build-config",
@ -229,9 +235,9 @@ dependencies = [
[[package]]
name = "pyo3-log"
version = "0.8.4"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c09c2b349b6538d8a73d436ca606dab6ce0aaab4dad9e6b7bdd57a4f556c3bc3"
checksum = "4c10808ee7250403bedb24bc30c32493e93875fef7ba3e4292226fe924f398bd"
dependencies = [
"arc-swap",
"log",
@ -240,32 +246,33 @@ dependencies = [
[[package]]
name = "pyo3-macros"
version = "0.19.2"
version = "0.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dfeb4c99597e136528c6dd7d5e3de5434d1ceaf487436a3f03b2d56b6fc9efd1"
checksum = "dac53072f717aa1bfa4db832b39de8c875b7c7af4f4a6fe93cdbf9264cf8383b"
dependencies = [
"proc-macro2",
"pyo3-macros-backend",
"quote",
"syn 1.0.104",
"syn",
]
[[package]]
name = "pyo3-macros-backend"
version = "0.19.2"
version = "0.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "947dc12175c254889edc0c02e399476c2f652b4b9ebd123aa655c224de259536"
checksum = "7774b5a8282bd4f25f803b1f0d945120be959a36c72e08e7cd031c792fdfd424"
dependencies = [
"heck",
"proc-macro2",
"quote",
"syn 1.0.104",
"syn",
]
[[package]]
name = "pythonize"
version = "0.19.0"
version = "0.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e35b716d430ace57e2d1b4afb51c9e5b7c46d2bce72926e07f9be6a98ced03e"
checksum = "ffd1c3ef39c725d63db5f9bc455461bafd80540cb7824c61afb823501921a850"
dependencies = [
"pyo3",
"serde",
@ -332,29 +339,29 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "serde"
version = "1.0.190"
version = "1.0.193"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91d3c334ca1ee894a2c6f6ad698fe8c435b76d504b13d436f0685d648d6d96f7"
checksum = "25dd9975e68d0cb5aa1120c288333fc98731bd1dd12f561e468ea4728c042b89"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.190"
version = "1.0.193"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "67c5609f394e5c2bd7fc51efda478004ea80ef42fee983d5c67a65e34f32c0e3"
checksum = "43576ca501357b9b071ac53cdc7da8ef0cbd9493d8df094cd821777ea6e894d3"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.28",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.107"
version = "1.0.108"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6b420ce6e3d8bd882e9b243c6eed35dbc9a6110c9769e74b584e0d68d1f20c65"
checksum = "3d1c7e3eac408d115102c4c24ad393e0821bb3a5df4d506a80f85f7a742a526b"
dependencies = [
"itoa",
"ryu",
@ -373,17 +380,6 @@ version = "2.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6bdef32e8150c2a081110b42772ffe7d7c9032b606bc226c8260fd97e0976601"
[[package]]
name = "syn"
version = "1.0.104"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ae548ec36cf198c0ef7710d3c230987c2d6d7bd98ad6edc0274462724c585ce"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "syn"
version = "2.0.28"
@ -432,9 +428,9 @@ checksum = "6ceab39d59e4c9499d4e5a8ee0e2735b891bb7308ac83dfb4e80cad195c9f6f3"
[[package]]
name = "unindent"
version = "0.1.10"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "58ee9362deb4a96cef4d437d1ad49cffc9b9e92d202b6995674e928ce684f112"
checksum = "c7de7d73e1754487cb58364ee906a499937a0dfabd86bcb980fa99ec8c8fa2ce"
[[package]]
name = "version_check"

View File

@ -2,10 +2,19 @@
Synapse |support| |development| |documentation| |license| |pypi| |python|
=========================================================================
Synapse is an open-source `Matrix <https://matrix.org/>`_ homeserver written and
maintained by the Matrix.org Foundation. We began rapid development in 2014,
reaching v1.0.0 in 2019. Development on Synapse and the Matrix protocol itself continues
in earnest today.
Synapse is now actively maintained at `element-hq/synapse <https://github.com/element-hq/synapse>`_
=================================================================================================
Synapse is an open-source `Matrix <https://matrix.org/>`_ homeserver developed
from 2019 through 2023 as part of the Matrix.org Foundation. The Matrix.org
Foundation is not able to resource maintenance of Synapse and it
`continues to be developed by Element <https://github.com/element-hq/synapse>`_;
additionally you have the choice of `other Matrix homeservers <https://matrix.org/ecosystem/servers/>`_.
See `The future of Synapse and Dendrite <https://matrix.org/blog/2023/11/06/future-of-synapse-dendrite/>`_
blog post for more information.
=========================================================================
Briefly, Matrix is an open standard for communications on the internet, supporting
federation, encryption and VoIP. Matrix.org has more to say about the `goals of the

View File

@ -34,6 +34,14 @@ additional-css = [
"docs/website_files/table-of-contents.css",
"docs/website_files/remove-nav-buttons.css",
"docs/website_files/indent-section-headers.css",
"docs/website_files/version-picker.css",
]
additional-js = ["docs/website_files/table-of-contents.js"]
theme = "docs/website_files/theme"
additional-js = [
"docs/website_files/table-of-contents.js",
"docs/website_files/version-picker.js",
"docs/website_files/version.js",
]
theme = "docs/website_files/theme"
[preprocessor.schema_versions]
command = "./scripts-dev/schema_versions.py"

38
debian/changelog vendored
View File

@ -1,3 +1,39 @@
matrix-synapse-py3 (1.98.0) stable; urgency=medium
* New Synapse release 1.98.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 12 Dec 2023 15:04:31 +0000
matrix-synapse-py3 (1.98.0~rc1) stable; urgency=medium
* New Synapse release 1.98.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 05 Dec 2023 13:08:42 +0000
matrix-synapse-py3 (1.97.0) stable; urgency=medium
* New Synapse release 1.97.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 28 Nov 2023 14:08:58 +0000
matrix-synapse-py3 (1.97.0~rc1) stable; urgency=medium
* New Synapse release 1.97.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 21 Nov 2023 12:32:03 +0000
matrix-synapse-py3 (1.96.1) stable; urgency=medium
* New synapse release 1.96.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 17 Nov 2023 12:48:45 +0000
matrix-synapse-py3 (1.96.0) stable; urgency=medium
* New synapse release 1.96.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 16 Nov 2023 17:54:26 +0000
matrix-synapse-py3 (1.96.0~rc1) stable; urgency=medium
* New Synapse release 1.96.0rc1.
@ -1637,7 +1673,7 @@ matrix-synapse-py3 (0.99.3.1) stable; urgency=medium
matrix-synapse-py3 (0.99.3) stable; urgency=medium
[ Richard van der Hoff ]
* Fix warning during preconfiguration. (Fixes: #4819)
* Fix warning during preconfiguration. (Fixes: https://github.com/matrix-org/synapse/issues/4819)
[ Synapse Packaging team ]
* New synapse release 0.99.3.

View File

@ -536,7 +536,8 @@ The following query parameters are available:
**Response**
* `event_id` - converted from timestamp
* `event_id` - The event ID closest to the given timestamp.
* `origin_server_ts` - The timestamp of the event in milliseconds since the Unix epoch.
# Block Room API
The Block Room admin API allows server admins to block and unblock rooms,

View File

@ -618,6 +618,16 @@ A response body like the following is returned:
"quarantined_by": null,
"safe_from_quarantine": false,
"upload_name": "test2.png"
},
{
"created_ts": 300400,
"last_access_ts": 300700,
"media_id": "BzYNLRUgGHphBkdKGbzXwbjX",
"media_length": 1337,
"media_type": "application/octet-stream",
"quarantined_by": null,
"safe_from_quarantine": false,
"upload_name": null
}
],
"next_token": 3,
@ -679,16 +689,17 @@ The following fields are returned in the JSON response body:
- `media` - An array of objects, each containing information about a media.
Media objects contain the following fields:
- `created_ts` - integer - Timestamp when the content was uploaded in ms.
- `last_access_ts` - integer - Timestamp when the content was last accessed in ms.
- `last_access_ts` - integer or null - Timestamp when the content was last accessed in ms.
Null if there has been no access yet.
- `media_id` - string - The id used to refer to the media. Details about the format
are documented under
[media repository](../media_repository.md).
- `media_length` - integer - Length of the media in bytes.
- `media_type` - string - The MIME-type of the media.
- `quarantined_by` - string - The user ID that initiated the quarantine request
for this media.
- `quarantined_by` - string or null - The user ID that initiated the quarantine request
for this media. Null if not quarantined.
- `safe_from_quarantine` - bool - Status if this media is safe from quarantining.
- `upload_name` - string - The name the media was uploaded with.
- `upload_name` - string or null - The name the media was uploaded with. Null if not provided during upload.
- `next_token`: integer - Indication for pagination. See above.
- `total` - integer - Total number of media.
@ -773,6 +784,43 @@ Note: The token will expire if the *admin* user calls `/logout/all` from any
of their devices, but the token will *not* expire if the target user does the
same.
## Allow replacing master cross-signing key without User-Interactive Auth
This endpoint is not intended for server administrator usage;
we describe it here for completeness.
This API temporarily permits a user to replace their master cross-signing key
without going through
[user-interactive authentication](https://spec.matrix.org/v1.8/client-server-api/#user-interactive-authentication-api) (UIA).
This is useful when Synapse has delegated its authentication to the
[Matrix Authentication Service](https://github.com/matrix-org/matrix-authentication-service/),
as Synapse cannot perform UIA in these circumstances.
The API is
```http request
POST /_synapse/admin/v1/users/<user_id>/_allow_cross_signing_replacement_without_uia
{}
```
If the user does not exist, or does exist but has no master cross-signing key,
this will return with status code `404 Not Found`.
Otherwise, a response body like the following is returned, with status `200 OK`:
```json
{
"updatable_without_uia_before_ms": 1234567890
}
```
The response body is a JSON object with a single field:
- `updatable_without_uia_before_ms`: integer. The timestamp in milliseconds
before which the user is permitted to replace their cross-signing key without
going through UIA.
_Added in Synapse 1.97.0._
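For completeness, a minimal Python sketch of calling this API; the homeserver URL, user ID, and admin token are placeholders:
```python
from urllib.parse import quote

import requests  # any HTTP client works; requests is assumed here

HOMESERVER = "https://matrix.example.com"  # placeholder
USER_ID = "@alice:example.com"             # placeholder
ADMIN_TOKEN = "<admin access token>"       # placeholder

resp = requests.post(
    f"{HOMESERVER}/_synapse/admin/v1/users/{quote(USER_ID)}"
    "/_allow_cross_signing_replacement_without_uia",
    json={},
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
if resp.status_code == 404:
    print("Unknown user, or the user has no master cross-signing key")
else:
    resp.raise_for_status()
    # Timestamp (ms) until which the key may be replaced without UIA.
    print(resp.json()["updatable_without_uia_before_ms"])
```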
## User devices

File diff suppressed because it is too large.


@ -66,7 +66,7 @@ Of their installation methods, we recommend
```shell
pip install --user pipx
pipx install poetry==1.5.2 # Problems with Poetry 1.6, see https://github.com/matrix-org/synapse/issues/16147
pipx install poetry==1.5.1 # Problems with Poetry 1.6, see https://github.com/matrix-org/synapse/issues/16147
```
but see poetry's [installation instructions](https://python-poetry.org/docs/#installation)


@ -42,3 +42,16 @@ operations to keep track of them. (e.g. add them to a database table). The user
represented by their Matrix user ID.
If multiple modules implement this callback, Synapse runs them all in order.
### `on_user_login`
_First introduced in Synapse v1.98.0_
```python
async def on_user_login(user_id: str, auth_provider_type: str, auth_provider_id: str) -> None
```
Called after a user successfully logs in or registers, for cases when the module
needs to perform extra operations after authentication. The user is
represented by their Matrix user ID.
If multiple modules implement this callback, Synapse runs them all in order.
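For illustration, a minimal module sketch implementing this callback. It assumes the callback is registered through `register_account_validity_callbacks`, like the other callbacks on this page; the class name and the logging side effect are illustrative only:
```python
import logging

from synapse.module_api import ModuleApi

logger = logging.getLogger(__name__)


class ExampleLoginAuditModule:
    def __init__(self, config: dict, api: ModuleApi):
        # Register the callback with Synapse at module start-up.
        api.register_account_validity_callbacks(
            on_user_login=self.on_user_login,
        )

    async def on_user_login(
        self, user_id: str, auth_provider_type: str, auth_provider_id: str
    ) -> None:
        # Perform whatever post-authentication bookkeeping the module needs,
        # e.g. recording the login for auditing.
        logger.info(
            "User %s logged in via %s (%s)",
            user_id,
            auth_provider_type,
            auth_provider_id,
        )
```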


@ -66,7 +66,7 @@ database:
args:
user: <user>
password: <pass>
database: <db>
dbname: <db>
host: <host>
cp_min: 5
cp_max: 10


@ -181,7 +181,11 @@ frontend matrix-federation
backend matrix
server matrix 127.0.0.1:8008
```
Example configuration when using a UNIX socket. The frontend configuration lines do not need to be modified.
```
backend matrix
server matrix unix@/run/synapse/main_public.sock
```
[Delegation](delegate.md) example:
```


@ -46,6 +46,7 @@ server_notices:
system_mxid_display_name: "Server Notices"
system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
room_name: "Server Notices"
auto_join: true
```
The only compulsory setting is `system_mxid_localpart`, which defines the user
@ -55,6 +56,8 @@ room which will be created.
`system_mxid_display_name` and `system_mxid_avatar_url` can be used to set the
displayname and avatar of the Server Notices user.
`auto_join` will automatically join users to the notices room instead of sending them an invite.
## Sending notices
To send server notices to users you can use the


@ -88,6 +88,15 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
Generally Synapse database schemas are compatible across multiple versions, but
once a version of Synapse is deployed you may not be able to roll back
automatically. The following table gives the version ranges and the earliest
version they can be rolled back to. E.g. Synapse versions v1.58.0 through
v1.61.1 can be rolled back safely to v1.57.0, but starting with v1.62.0 it is
only safe to roll back to v1.61.0.
<!-- REPLACE_WITH_SCHEMA_VERSIONS -->
# Upgrading to v1.93.0
## Minimum supported Rust version


@ -33,6 +33,23 @@ In addition, configuration options referring to size use the following suffixes:
For example, setting `max_avatar_size: 10M` means that Synapse will not accept files larger than 10,485,760 bytes
for a user avatar.
## Config Validation
The configuration file can be validated with the following command:
```bash
python -m synapse.config read <config key to print> -c <path to config>
```
To validate the entire file, omit `read <config key to print>`:
```bash
python -m synapse.config -c <path to config>
```
To see how to set other options, check the help reference:
```bash
python -m synapse.config --help
```
### YAML
The configuration file is a [YAML](https://yaml.org/) file, which means that certain syntax rules
apply if you want your config file to be read properly. A few helpful things to know:
@ -566,7 +583,7 @@ listeners:
# Note that x_forwarded will default to true, when using a UNIX socket. Please see
# https://matrix-org.github.io/synapse/latest/reverse_proxy.html.
#
- path: /var/run/synapse/main_public.sock
- path: /run/synapse/main_public.sock
type: http
resources:
- names: [client, federation]
@ -1447,7 +1464,7 @@ database:
args:
user: synapse_user
password: secretpassword
database: synapse
dbname: synapse
host: localhost
port: 5432
cp_min: 5
@ -1526,7 +1543,7 @@ databases:
args:
user: synapse_user
password: secretpassword
database: synapse_main
dbname: synapse_main
host: localhost
port: 5432
cp_min: 5
@ -1539,7 +1556,7 @@ databases:
args:
user: synapse_user
password: secretpassword
database: synapse_state
dbname: synapse_state
host: localhost
port: 5432
cp_min: 5
@ -1753,6 +1770,19 @@ rc_third_party_invite:
burst_count: 10
```
---
### `rc_media_create`
This option ratelimits creation of MXC URIs via the `/_matrix/media/v1/create`
endpoint based on the account that's creating the media. Defaults to
`per_second: 10`, `burst_count: 50`.
Example configuration:
```yaml
rc_media_create:
per_second: 10
burst_count: 50
```
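As a rough illustration of how `per_second` and `burst_count` interact, here is a token-bucket sketch in Python. This is an explanatory model only, not Synapse's actual ratelimiter implementation:
```python
import time


class TokenBucket:
    """Allows bursts of up to `burst_count` requests, refilling at `per_second`."""

    def __init__(self, per_second: float, burst_count: float) -> None:
        self.per_second = per_second
        self.burst_count = burst_count
        self.tokens = burst_count
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the burst size.
        self.tokens = min(
            self.burst_count, self.tokens + (now - self.last) * self.per_second
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


limiter = TokenBucket(per_second=10, burst_count=50)
print(limiter.allow())  # True until the burst is exhausted
```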
---
### `rc_federation`
Defines limits on federation requests.
@ -1814,6 +1844,27 @@ Example configuration:
media_store_path: "DATADIR/media_store"
```
---
### `max_pending_media_uploads`
How many *pending media uploads* can a given user have? A pending media upload
is a created MXC URI that (a) is not expired (the `unused_expires_at` timestamp
has not passed) and (b) for which the media has not yet been uploaded. Defaults to 5.
Example configuration:
```yaml
max_pending_media_uploads: 5
```
---
### `unused_expiration_time`
How long to wait before expiring created media IDs. Accepts a duration
(for example "1h") or a number of milliseconds. Defaults to "24h".
Example configuration:
```yaml
unused_expiration_time: "1h"
```
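Both of these options concern the asynchronous upload flow (MSC2246), where a client first reserves an MXC URI and uploads the media later. A minimal Python sketch of that first step, assuming the `/_matrix/media/v1/create` endpoint mentioned above; URL and token are placeholders:
```python
import requests  # any HTTP client works; requests is assumed here

HOMESERVER = "https://matrix.example.com"  # placeholder
ACCESS_TOKEN = "<user access token>"       # placeholder

# Reserve an MXC URI. Until media is uploaded for it, this counts as one of
# the user's pending uploads (capped by max_pending_media_uploads), and it
# expires at unused_expires_at (controlled by unused_expiration_time).
resp = requests.post(
    f"{HOMESERVER}/_matrix/media/v1/create",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
body = resp.json()
print(body["content_uri"], body.get("unused_expires_at"))
```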
---
### `media_storage_providers`
Media storage providers allow media to be stored in different
@ -3781,6 +3832,8 @@ Sub-options for this setting include:
* `system_mxid_display_name`: set the display name of the "notices" user
* `system_mxid_avatar_url`: set the avatar for the "notices" user
* `room_name`: set the room name of the server notices room
* `auto_join`: boolean. If true, the user will be automatically joined to the room instead of being invited.
Defaults to false. _Added in Synapse 1.98.0._
Example configuration:
```yaml
@ -3789,6 +3842,7 @@ server_notices:
system_mxid_display_name: "Server Notices"
system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
room_name: "Server Notices"
auto_join: true
```
---
### `enable_room_list_search`
@ -4181,9 +4235,9 @@ Example configuration(#2, for UNIX sockets):
```yaml
instance_map:
main:
path: /var/run/synapse/main_replication.sock
path: /run/synapse/main_replication.sock
worker1:
path: /var/run/synapse/worker1_replication.sock
path: /run/synapse/worker1_replication.sock
```
---
### `stream_writers`
@ -4219,6 +4273,9 @@ outbound_federation_restricted_to:
Also see the [worker
documentation](../../workers.md#restrict-outbound-federation-traffic-to-a-specific-set-of-workers)
for more info.
_Added in Synapse 1.89.0._
---
### `run_background_tasks_on`
@ -4366,13 +4423,13 @@ Example configuration(#2, using UNIX sockets with a `replication` listener):
```yaml
worker_listeners:
- type: http
path: /var/run/synapse/worker_public.sock
resources:
- names: [client, federation]
- type: http
path: /var/run/synapse/worker_replication.sock
path: /run/synapse/worker_replication.sock
resources:
- names: [replication]
- type: http
path: /run/synapse/worker_public.sock
resources:
- names: [client, federation]
```
---
### `worker_manhole`


@ -24,6 +24,11 @@ Finally, we also stylise the chapter titles in the left sidebar by indenting the
slightly so that they are more visually distinguishable from the section headers
(the bold titles). This is done through the `indent-section-headers.css` file.
In addition to these modifications, we have added a version picker to the documentation,
allowing users to switch between the documentation for different versions of Synapse.
This functionality is implemented in the `version-picker.js` and
`version-picker.css` files.
More information can be found in mdbook's official documentation for
[injecting page JS/CSS](https://rust-lang.github.io/mdBook/format/config.html)
and


@ -131,6 +131,18 @@
<i class="fa fa-search"></i>
</button>
{{/if}}
<div class="version-picker">
<div class="dropdown">
<div class="select">
<span></span>
<i class="fa fa-chevron-down"></i>
</div>
<input type="hidden" name="version">
<ul class="dropdown-menu">
<!-- Versions will be added dynamically in version-picker.js -->
</ul>
</div>
</div>
</div>
<h1 class="menu-title">{{ book_title }}</h1>
@ -309,4 +321,4 @@
{{/if}}
</body>
</html>


@ -0,0 +1,78 @@
.version-picker {
display: flex;
align-items: center;
}
.version-picker .dropdown {
width: 130px;
max-height: 29px;
margin-left: 10px;
display: inline-block;
border-radius: 4px;
border: 1px solid var(--theme-popup-border);
position: relative;
font-size: 13px;
color: var(--fg);
height: 100%;
text-align: left;
}
.version-picker .dropdown .select {
cursor: pointer;
display: block;
padding: 5px 2px 5px 15px;
}
.version-picker .dropdown .select > i {
font-size: 10px;
color: var(--fg);
cursor: pointer;
float: right;
line-height: 20px !important;
}
.version-picker .dropdown:hover {
border: 1px solid var(--theme-popup-border);
}
.version-picker .dropdown:active {
background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active:hover,
.version-picker .dropdown.active {
border: 1px solid var(--theme-popup-border);
border-radius: 2px 2px 0 0;
background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active .select > i {
transform: rotate(-180deg);
}
.version-picker .dropdown .dropdown-menu {
position: absolute;
background-color: var(--theme-popup-bg);
width: 100%;
left: -1px;
right: 1px;
margin-top: 1px;
border: 1px solid var(--theme-popup-border);
border-radius: 0 0 4px 4px;
overflow: hidden;
display: none;
max-height: 300px;
overflow-y: auto;
z-index: 9;
}
.version-picker .dropdown .dropdown-menu li {
font-size: 12px;
padding: 6px 20px;
cursor: pointer;
}
.version-picker .dropdown .dropdown-menu {
padding: 0;
list-style: none;
}
.version-picker .dropdown .dropdown-menu li:hover {
background-color: var(--theme-hover);
}
.version-picker .dropdown .dropdown-menu li.active::before {
display: inline-block;
content: "✓";
margin-inline-start: -14px;
width: 14px;
}


@ -0,0 +1,127 @@
const dropdown = document.querySelector('.version-picker .dropdown');
const dropdownMenu = dropdown.querySelector('.dropdown-menu');
fetchVersions(dropdown, dropdownMenu).then(() => {
initializeVersionDropdown(dropdown, dropdownMenu);
});
/**
* Initialize the dropdown functionality for version selection.
*
* @param {Element} dropdown - The dropdown element.
* @param {Element} dropdownMenu - The dropdown menu element.
*/
function initializeVersionDropdown(dropdown, dropdownMenu) {
// Toggle the dropdown menu on click
dropdown.addEventListener('click', function () {
this.setAttribute('tabindex', 1);
this.classList.toggle('active');
dropdownMenu.style.display = (dropdownMenu.style.display === 'block') ? 'none' : 'block';
});
// Remove the 'active' class and hide the dropdown menu on focusout
dropdown.addEventListener('focusout', function () {
this.classList.remove('active');
dropdownMenu.style.display = 'none';
});
// Handle item selection within the dropdown menu
const dropdownMenuItems = dropdownMenu.querySelectorAll('li');
dropdownMenuItems.forEach(function (item) {
item.addEventListener('click', function () {
dropdownMenuItems.forEach(function (item) {
item.classList.remove('active');
});
this.classList.add('active');
dropdown.querySelector('span').textContent = this.textContent;
dropdown.querySelector('input').value = this.getAttribute('id');
window.location.href = changeVersion(window.location.href, this.textContent);
});
});
};
/**
* This function fetches the available versions from a GitHub repository
* and inserts them into the version picker.
*
* @param {Element} dropdown - The dropdown element.
* @param {Element} dropdownMenu - The dropdown menu element.
* @returns {Promise<Array<string>>} A promise that resolves with an array of available versions.
*/
function fetchVersions(dropdown, dropdownMenu) {
return new Promise((resolve, reject) => {
window.addEventListener("load", () => {
fetch("https://api.github.com/repos/matrix-org/synapse/git/trees/gh-pages", {
cache: "force-cache",
}).then(res =>
res.json()
).then(resObject => {
const excluded = ['dev-docs', 'v1.91.0', 'v1.80.0', 'v1.69.0'];
const tree = resObject.tree.filter(item => item.type === "tree" && !excluded.includes(item.path));
const versions = tree.map(item => item.path).sort(sortVersions);
// Create a list of <li> items for versions
versions.forEach((version) => {
const li = document.createElement("li");
li.textContent = version;
li.id = version;
if (window.SYNAPSE_VERSION === version) {
li.classList.add('active');
dropdown.querySelector('span').textContent = version;
dropdown.querySelector('input').value = version;
}
dropdownMenu.appendChild(li);
});
resolve(versions);
}).catch(ex => {
console.error("Failed to fetch version data", ex);
reject(ex);
})
});
});
}
/**
* Custom sorting function to sort an array of version strings.
*
* @param {string} a - The first version string to compare.
* @param {string} b - The second version string to compare.
* @returns {number} - A negative number if a should come before b, a positive number if b should come before a, or 0 if they are equal.
*/
function sortVersions(a, b) {
// Put 'develop' and 'latest' at the top
if (a === 'develop' || a === 'latest') return -1;
if (b === 'develop' || b === 'latest') return 1;
const versionA = (a.match(/v\d+(\.\d+)+/) || [])[0];
const versionB = (b.match(/v\d+(\.\d+)+/) || [])[0];
return versionB.localeCompare(versionA);
}
/**
* Change the version in a URL path.
*
* @param {string} url - The original URL to be modified.
* @param {string} newVersion - The new version to replace the existing version in the URL.
* @returns {string} The updated URL with the new version.
*/
function changeVersion(url, newVersion) {
const parsedURL = new URL(url);
const pathSegments = parsedURL.pathname.split('/');
// Modify the version
pathSegments[2] = newVersion;
// Reconstruct the URL
parsedURL.pathname = pathSegments.join('/');
return parsedURL.href;
}


@ -0,0 +1 @@
window.SYNAPSE_VERSION = 'v1.98';


@ -37,8 +37,8 @@ files =
build_rust.py
[mypy-synapse.metrics._reactor_metrics]
# This module imports select.epoll. That exists on Linux, but doesn't on macOS.
# See https://github.com/matrix-org/synapse/pull/11771.
# This module pokes at the internals of OS-specific classes; to appease mypy
# on different systems, we add additional ignores.
warn_unused_ignores = False
[mypy-synapse.util.caches.treecache]

poetry.lock (generated)

@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 1.6.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
[[package]]
name = "alabaster"
@ -416,19 +416,6 @@ files = [
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "click-default-group"
version = "1.2.2"
description = "Extends click.Group to invoke a command without explicit subcommand name"
optional = false
python-versions = "*"
files = [
{file = "click-default-group-1.2.2.tar.gz", hash = "sha256:d9560e8e8dfa44b3562fbc9425042a0fd6d21956fcc2db0077f63f34253ab904"},
]
[package.dependencies]
click = "*"
[[package]]
name = "colorama"
version = "0.4.6"
@ -467,34 +454,34 @@ files = [
[[package]]
name = "cryptography"
version = "41.0.5"
version = "41.0.7"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = ">=3.7"
files = [
{file = "cryptography-41.0.5-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:da6a0ff8f1016ccc7477e6339e1d50ce5f59b88905585f77193ebd5068f1e797"},
{file = "cryptography-41.0.5-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:b948e09fe5fb18517d99994184854ebd50b57248736fd4c720ad540560174ec5"},
{file = "cryptography-41.0.5-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d38e6031e113b7421db1de0c1b1f7739564a88f1684c6b89234fbf6c11b75147"},
{file = "cryptography-41.0.5-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e270c04f4d9b5671ebcc792b3ba5d4488bf7c42c3c241a3748e2599776f29696"},
{file = "cryptography-41.0.5-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ec3b055ff8f1dce8e6ef28f626e0972981475173d7973d63f271b29c8a2897da"},
{file = "cryptography-41.0.5-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:7d208c21e47940369accfc9e85f0de7693d9a5d843c2509b3846b2db170dfd20"},
{file = "cryptography-41.0.5-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:8254962e6ba1f4d2090c44daf50a547cd5f0bf446dc658a8e5f8156cae0d8548"},
{file = "cryptography-41.0.5-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:a48e74dad1fb349f3dc1d449ed88e0017d792997a7ad2ec9587ed17405667e6d"},
{file = "cryptography-41.0.5-cp37-abi3-win32.whl", hash = "sha256:d3977f0e276f6f5bf245c403156673db103283266601405376f075c849a0b936"},
{file = "cryptography-41.0.5-cp37-abi3-win_amd64.whl", hash = "sha256:73801ac9736741f220e20435f84ecec75ed70eda90f781a148f1bad546963d81"},
{file = "cryptography-41.0.5-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3be3ca726e1572517d2bef99a818378bbcf7d7799d5372a46c79c29eb8d166c1"},
{file = "cryptography-41.0.5-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:e886098619d3815e0ad5790c973afeee2c0e6e04b4da90b88e6bd06e2a0b1b72"},
{file = "cryptography-41.0.5-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:573eb7128cbca75f9157dcde974781209463ce56b5804983e11a1c462f0f4e88"},
{file = "cryptography-41.0.5-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:0c327cac00f082013c7c9fb6c46b7cc9fa3c288ca702c74773968173bda421bf"},
{file = "cryptography-41.0.5-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:227ec057cd32a41c6651701abc0328135e472ed450f47c2766f23267b792a88e"},
{file = "cryptography-41.0.5-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:22892cc830d8b2c89ea60148227631bb96a7da0c1b722f2aac8824b1b7c0b6b8"},
{file = "cryptography-41.0.5-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:5a70187954ba7292c7876734183e810b728b4f3965fbe571421cb2434d279179"},
{file = "cryptography-41.0.5-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:88417bff20162f635f24f849ab182b092697922088b477a7abd6664ddd82291d"},
{file = "cryptography-41.0.5-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c707f7afd813478e2019ae32a7c49cd932dd60ab2d2a93e796f68236b7e1fbf1"},
{file = "cryptography-41.0.5-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:580afc7b7216deeb87a098ef0674d6ee34ab55993140838b14c9b83312b37b86"},
{file = "cryptography-41.0.5-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:fba1e91467c65fe64a82c689dc6cf58151158993b13eb7a7f3f4b7f395636723"},
{file = "cryptography-41.0.5-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:0d2a6a598847c46e3e321a7aef8af1436f11c27f1254933746304ff014664d84"},
{file = "cryptography-41.0.5.tar.gz", hash = "sha256:392cb88b597247177172e02da6b7a63deeff1937fa6fec3bbf902ebd75d97ec7"},
{file = "cryptography-41.0.7-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:3c78451b78313fa81607fa1b3f1ae0a5ddd8014c38a02d9db0616133987b9cdf"},
{file = "cryptography-41.0.7-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:928258ba5d6f8ae644e764d0f996d61a8777559f72dfeb2eea7e2fe0ad6e782d"},
{file = "cryptography-41.0.7-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a1b41bc97f1ad230a41657d9155113c7521953869ae57ac39ac7f1bb471469a"},
{file = "cryptography-41.0.7-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:841df4caa01008bad253bce2a6f7b47f86dc9f08df4b433c404def869f590a15"},
{file = "cryptography-41.0.7-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5429ec739a29df2e29e15d082f1d9ad683701f0ec7709ca479b3ff2708dae65a"},
{file = "cryptography-41.0.7-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:43f2552a2378b44869fe8827aa19e69512e3245a219104438692385b0ee119d1"},
{file = "cryptography-41.0.7-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:af03b32695b24d85a75d40e1ba39ffe7db7ffcb099fe507b39fd41a565f1b157"},
{file = "cryptography-41.0.7-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:49f0805fc0b2ac8d4882dd52f4a3b935b210935d500b6b805f321addc8177406"},
{file = "cryptography-41.0.7-cp37-abi3-win32.whl", hash = "sha256:f983596065a18a2183e7f79ab3fd4c475205b839e02cbc0efbbf9666c4b3083d"},
{file = "cryptography-41.0.7-cp37-abi3-win_amd64.whl", hash = "sha256:90452ba79b8788fa380dfb587cca692976ef4e757b194b093d845e8d99f612f2"},
{file = "cryptography-41.0.7-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:079b85658ea2f59c4f43b70f8119a52414cdb7be34da5d019a77bf96d473b960"},
{file = "cryptography-41.0.7-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:b640981bf64a3e978a56167594a0e97db71c89a479da8e175d8bb5be5178c003"},
{file = "cryptography-41.0.7-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e3114da6d7f95d2dee7d3f4eec16dacff819740bbab931aff8648cb13c5ff5e7"},
{file = "cryptography-41.0.7-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:d5ec85080cce7b0513cfd233914eb8b7bbd0633f1d1703aa28d1dd5a72f678ec"},
{file = "cryptography-41.0.7-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:7a698cb1dac82c35fcf8fe3417a3aaba97de16a01ac914b89a0889d364d2f6be"},
{file = "cryptography-41.0.7-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:37a138589b12069efb424220bf78eac59ca68b95696fc622b6ccc1c0a197204a"},
{file = "cryptography-41.0.7-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:68a2dec79deebc5d26d617bfdf6e8aab065a4f34934b22d3b5010df3ba36612c"},
{file = "cryptography-41.0.7-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:09616eeaef406f99046553b8a40fbf8b1e70795a91885ba4c96a70793de5504a"},
{file = "cryptography-41.0.7-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:48a0476626da912a44cc078f9893f292f0b3e4c739caf289268168d8f4702a39"},
{file = "cryptography-41.0.7-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c7f3201ec47d5207841402594f1d7950879ef890c0c495052fa62f58283fde1a"},
{file = "cryptography-41.0.7-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c5ca78485a255e03c32b513f8c2bc39fedb7f5c5f8535545bdc223a03b24f248"},
{file = "cryptography-41.0.7-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:d6c391c021ab1f7a82da5d8d0b3cee2f4b2c455ec86c8aebbc84837a631ff309"},
{file = "cryptography-41.0.7.tar.gz", hash = "sha256:13f93ce9bea8016c253b34afc6bd6a75993e5c40672ed5405a9c832f0d4a00bc"},
]
[package.dependencies]
@ -725,13 +712,13 @@ idna = ">=2.5"
[[package]]
name = "idna"
version = "3.4"
version = "3.6"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.5"
files = [
{file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
{file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
{file = "idna-3.6-py3-none-any.whl", hash = "sha256:c05567e9c24a6b9faaa835c4821bad0590fbb9d5779e7caa6e1cc4978e7eb24f"},
{file = "idna-3.6.tar.gz", hash = "sha256:9ecdbbd083b06798ae1e86adcbfe8ab1479cf864e4ee30fe4e46a003d12491ca"},
]
[[package]]
@ -994,13 +981,13 @@ i18n = ["Babel (>=2.7)"]
[[package]]
name = "jsonschema"
version = "4.19.1"
version = "4.20.0"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "jsonschema-4.19.1-py3-none-any.whl", hash = "sha256:cd5f1f9ed9444e554b38ba003af06c0a8c2868131e56bfbef0550fb450c0330e"},
{file = "jsonschema-4.19.1.tar.gz", hash = "sha256:ec84cc37cfa703ef7cd4928db24f9cb31428a5d0fa77747b8b51a847458e0bbf"},
{file = "jsonschema-4.20.0-py3-none-any.whl", hash = "sha256:ed6231f0429ecf966f5bc8dfef245998220549cbbcf140f913b7464c52c3b6b3"},
{file = "jsonschema-4.20.0.tar.gz", hash = "sha256:4f614fd46d8d61258610998997743ec5492a648b33cf478c1ddc23ed4598a5fa"},
]
[package.dependencies]
@ -1624,13 +1611,13 @@ files = [
[[package]]
name = "phonenumbers"
version = "8.13.23"
version = "8.13.26"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.23-py2.py3-none-any.whl", hash = "sha256:34d6cb279dd4a64714e324c71350f96e5bda3237be28d11b4c555c44701544cd"},
{file = "phonenumbers-8.13.23.tar.gz", hash = "sha256:869e44fcaaf276eca6b953a401e2b27d57461f3a18a66cf5f13377e7bb0e228c"},
{file = "phonenumbers-8.13.26-py2.py3-none-any.whl", hash = "sha256:b2308c9c5750b8f10dd30d94547afd66bce60ac5e93aff227f95740557f32752"},
{file = "phonenumbers-8.13.26.tar.gz", hash = "sha256:937d70aeceb317f5831dfec28de855a60260ef4a9d551964bec8e7a7d0cf81cd"},
]
[[package]]
@ -1742,13 +1729,13 @@ test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytes
[[package]]
name = "prometheus-client"
version = "0.17.1"
version = "0.19.0"
description = "Python client for the Prometheus monitoring system."
optional = false
python-versions = ">=3.6"
python-versions = ">=3.8"
files = [
{file = "prometheus_client-0.17.1-py3-none-any.whl", hash = "sha256:e537f37160f6807b8202a6fc4764cdd19bac5480ddd3e0d463c3002b34462101"},
{file = "prometheus_client-0.17.1.tar.gz", hash = "sha256:21e674f39831ae3f8acde238afd9a27a37d0d2fb5a28ea094f0ce25d2cbf2091"},
{file = "prometheus_client-0.19.0-py3-none-any.whl", hash = "sha256:c88b1e6ecf6b41cd8fb5731c7ae919bf66df6ec6fafa555cd6c0e16ca169ae92"},
{file = "prometheus_client-0.19.0.tar.gz", hash = "sha256:4585b0d1223148c27a225b10dbec5ae9bc4c81a99a3fa80774fa6209935324e1"},
]
[package.extras]
@ -1805,13 +1792,13 @@ psycopg2 = "*"
[[package]]
name = "pyasn1"
version = "0.5.0"
version = "0.5.1"
description = "Pure-Python implementation of ASN.1 types and DER/BER/CER codecs (X.208)"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
files = [
{file = "pyasn1-0.5.0-py2.py3-none-any.whl", hash = "sha256:87a2121042a1ac9358cabcaf1d07680ff97ee6404333bacca15f76aa8ad01a57"},
{file = "pyasn1-0.5.0.tar.gz", hash = "sha256:97b7290ca68e62a832558ec3976f15cbf911bf5d7c7039d8b861c2a0ece69fde"},
{file = "pyasn1-0.5.1-py2.py3-none-any.whl", hash = "sha256:4439847c58d40b1d0a573d07e3856e95333f1976294494c325775aeca506eb58"},
{file = "pyasn1-0.5.1.tar.gz", hash = "sha256:6d391a96e59b23130a5cfa74d6fd7f388dbbe26cc8f1edf39fdddf08d9d6676c"},
]
[[package]]
@ -1841,18 +1828,18 @@ files = [
[[package]]
name = "pydantic"
version = "2.4.2"
version = "2.5.1"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.7"
files = [
{file = "pydantic-2.4.2-py3-none-any.whl", hash = "sha256:bc3ddf669d234f4220e6e1c4d96b061abe0998185a8d7855c0126782b7abc8c1"},
{file = "pydantic-2.4.2.tar.gz", hash = "sha256:94f336138093a5d7f426aac732dcfe7ab4eb4da243c88f891d65deb4a2556ee7"},
{file = "pydantic-2.5.1-py3-none-any.whl", hash = "sha256:dc5244a8939e0d9a68f1f1b5f550b2e1c879912033b1becbedb315accc75441b"},
{file = "pydantic-2.5.1.tar.gz", hash = "sha256:0b8be5413c06aadfbe56f6dc1d45c9ed25fd43264414c571135c97dd77c2bedb"},
]
[package.dependencies]
annotated-types = ">=0.4.0"
pydantic-core = "2.10.1"
pydantic-core = "2.14.3"
typing-extensions = ">=4.6.1"
[package.extras]
@ -1860,117 +1847,116 @@ email = ["email-validator (>=2.0.0)"]
[[package]]
name = "pydantic-core"
version = "2.10.1"
version = "2.14.3"
description = ""
optional = false
python-versions = ">=3.7"
files = [
{file = "pydantic_core-2.10.1-cp310-cp310-macosx_10_7_x86_64.whl", hash = "sha256:d64728ee14e667ba27c66314b7d880b8eeb050e58ffc5fec3b7a109f8cddbd63"},
{file = "pydantic_core-2.10.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:48525933fea744a3e7464c19bfede85df4aba79ce90c60b94d8b6e1eddd67096"},
{file = "pydantic_core-2.10.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ef337945bbd76cce390d1b2496ccf9f90b1c1242a3a7bc242ca4a9fc5993427a"},
{file = "pydantic_core-2.10.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a1392e0638af203cee360495fd2cfdd6054711f2db5175b6e9c3c461b76f5175"},
{file = "pydantic_core-2.10.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0675ba5d22de54d07bccde38997e780044dcfa9a71aac9fd7d4d7a1d2e3e65f7"},
{file = "pydantic_core-2.10.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:128552af70a64660f21cb0eb4876cbdadf1a1f9d5de820fed6421fa8de07c893"},
{file = "pydantic_core-2.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f6e6aed5818c264412ac0598b581a002a9f050cb2637a84979859e70197aa9e"},
{file = "pydantic_core-2.10.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ecaac27da855b8d73f92123e5f03612b04c5632fd0a476e469dfc47cd37d6b2e"},
{file = "pydantic_core-2.10.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b3c01c2fb081fced3bbb3da78510693dc7121bb893a1f0f5f4b48013201f362e"},
{file = "pydantic_core-2.10.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:92f675fefa977625105708492850bcbc1182bfc3e997f8eecb866d1927c98ae6"},
{file = "pydantic_core-2.10.1-cp310-none-win32.whl", hash = "sha256:420a692b547736a8d8703c39ea935ab5d8f0d2573f8f123b0a294e49a73f214b"},
{file = "pydantic_core-2.10.1-cp310-none-win_amd64.whl", hash = "sha256:0880e239827b4b5b3e2ce05e6b766a7414e5f5aedc4523be6b68cfbc7f61c5d0"},
{file = "pydantic_core-2.10.1-cp311-cp311-macosx_10_7_x86_64.whl", hash = "sha256:073d4a470b195d2b2245d0343569aac7e979d3a0dcce6c7d2af6d8a920ad0bea"},
{file = "pydantic_core-2.10.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:600d04a7b342363058b9190d4e929a8e2e715c5682a70cc37d5ded1e0dd370b4"},
{file = "pydantic_core-2.10.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:39215d809470f4c8d1881758575b2abfb80174a9e8daf8f33b1d4379357e417c"},
{file = "pydantic_core-2.10.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eeb3d3d6b399ffe55f9a04e09e635554012f1980696d6b0aca3e6cf42a17a03b"},
{file = "pydantic_core-2.10.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a7a7902bf75779bc12ccfc508bfb7a4c47063f748ea3de87135d433a4cca7a2f"},
{file = "pydantic_core-2.10.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3625578b6010c65964d177626fde80cf60d7f2e297d56b925cb5cdeda6e9925a"},
{file = "pydantic_core-2.10.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:caa48fc31fc7243e50188197b5f0c4228956f97b954f76da157aae7f67269ae8"},
{file = "pydantic_core-2.10.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:07ec6d7d929ae9c68f716195ce15e745b3e8fa122fc67698ac6498d802ed0fa4"},
{file = "pydantic_core-2.10.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e6f31a17acede6a8cd1ae2d123ce04d8cca74056c9d456075f4f6f85de055607"},
{file = "pydantic_core-2.10.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d8f1ebca515a03e5654f88411420fea6380fc841d1bea08effb28184e3d4899f"},
{file = "pydantic_core-2.10.1-cp311-none-win32.whl", hash = "sha256:6db2eb9654a85ada248afa5a6db5ff1cf0f7b16043a6b070adc4a5be68c716d6"},
{file = "pydantic_core-2.10.1-cp311-none-win_amd64.whl", hash = "sha256:4a5be350f922430997f240d25f8219f93b0c81e15f7b30b868b2fddfc2d05f27"},
{file = "pydantic_core-2.10.1-cp311-none-win_arm64.whl", hash = "sha256:5fdb39f67c779b183b0c853cd6b45f7db84b84e0571b3ef1c89cdb1dfc367325"},
{file = "pydantic_core-2.10.1-cp312-cp312-macosx_10_7_x86_64.whl", hash = "sha256:b1f22a9ab44de5f082216270552aa54259db20189e68fc12484873d926426921"},
{file = "pydantic_core-2.10.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8572cadbf4cfa95fb4187775b5ade2eaa93511f07947b38f4cd67cf10783b118"},
{file = "pydantic_core-2.10.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:db9a28c063c7c00844ae42a80203eb6d2d6bbb97070cfa00194dff40e6f545ab"},
{file = "pydantic_core-2.10.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0e2a35baa428181cb2270a15864ec6286822d3576f2ed0f4cd7f0c1708472aff"},
{file = "pydantic_core-2.10.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:05560ab976012bf40f25d5225a58bfa649bb897b87192a36c6fef1ab132540d7"},
{file = "pydantic_core-2.10.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d6495008733c7521a89422d7a68efa0a0122c99a5861f06020ef5b1f51f9ba7c"},
{file = "pydantic_core-2.10.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14ac492c686defc8e6133e3a2d9eaf5261b3df26b8ae97450c1647286750b901"},
{file = "pydantic_core-2.10.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8282bab177a9a3081fd3d0a0175a07a1e2bfb7fcbbd949519ea0980f8a07144d"},
{file = "pydantic_core-2.10.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:aafdb89fdeb5fe165043896817eccd6434aee124d5ee9b354f92cd574ba5e78f"},
{file = "pydantic_core-2.10.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:f6defd966ca3b187ec6c366604e9296f585021d922e666b99c47e78738b5666c"},
{file = "pydantic_core-2.10.1-cp312-none-win32.whl", hash = "sha256:7c4d1894fe112b0864c1fa75dffa045720a194b227bed12f4be7f6045b25209f"},
{file = "pydantic_core-2.10.1-cp312-none-win_amd64.whl", hash = "sha256:5994985da903d0b8a08e4935c46ed8daf5be1cf217489e673910951dc533d430"},
{file = "pydantic_core-2.10.1-cp312-none-win_arm64.whl", hash = "sha256:0d8a8adef23d86d8eceed3e32e9cca8879c7481c183f84ed1a8edc7df073af94"},
{file = "pydantic_core-2.10.1-cp37-cp37m-macosx_10_7_x86_64.whl", hash = "sha256:9badf8d45171d92387410b04639d73811b785b5161ecadabf056ea14d62d4ede"},
{file = "pydantic_core-2.10.1-cp37-cp37m-macosx_11_0_arm64.whl", hash = "sha256:ebedb45b9feb7258fac0a268a3f6bec0a2ea4d9558f3d6f813f02ff3a6dc6698"},
{file = "pydantic_core-2.10.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cfe1090245c078720d250d19cb05d67e21a9cd7c257698ef139bc41cf6c27b4f"},
{file = "pydantic_core-2.10.1-cp37-cp37m-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e357571bb0efd65fd55f18db0a2fb0ed89d0bb1d41d906b138f088933ae618bb"},
{file = "pydantic_core-2.10.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b3dcd587b69bbf54fc04ca157c2323b8911033e827fffaecf0cafa5a892a0904"},
{file = "pydantic_core-2.10.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9c120c9ce3b163b985a3b966bb701114beb1da4b0468b9b236fc754783d85aa3"},
{file = "pydantic_core-2.10.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15d6bca84ffc966cc9976b09a18cf9543ed4d4ecbd97e7086f9ce9327ea48891"},
{file = "pydantic_core-2.10.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5cabb9710f09d5d2e9e2748c3e3e20d991a4c5f96ed8f1132518f54ab2967221"},
{file = "pydantic_core-2.10.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:82f55187a5bebae7d81d35b1e9aaea5e169d44819789837cdd4720d768c55d15"},
{file = "pydantic_core-2.10.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:1d40f55222b233e98e3921df7811c27567f0e1a4411b93d4c5c0f4ce131bc42f"},
{file = "pydantic_core-2.10.1-cp37-none-win32.whl", hash = "sha256:14e09ff0b8fe6e46b93d36a878f6e4a3a98ba5303c76bb8e716f4878a3bee92c"},
{file = "pydantic_core-2.10.1-cp37-none-win_amd64.whl", hash = "sha256:1396e81b83516b9d5c9e26a924fa69164156c148c717131f54f586485ac3c15e"},
{file = "pydantic_core-2.10.1-cp38-cp38-macosx_10_7_x86_64.whl", hash = "sha256:6835451b57c1b467b95ffb03a38bb75b52fb4dc2762bb1d9dbed8de31ea7d0fc"},
{file = "pydantic_core-2.10.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b00bc4619f60c853556b35f83731bd817f989cba3e97dc792bb8c97941b8053a"},
{file = "pydantic_core-2.10.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0fa467fd300a6f046bdb248d40cd015b21b7576c168a6bb20aa22e595c8ffcdd"},
{file = "pydantic_core-2.10.1-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d99277877daf2efe074eae6338453a4ed54a2d93fb4678ddfe1209a0c93a2468"},
{file = "pydantic_core-2.10.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fa7db7558607afeccb33c0e4bf1c9a9a835e26599e76af6fe2fcea45904083a6"},
{file = "pydantic_core-2.10.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:aad7bd686363d1ce4ee930ad39f14e1673248373f4a9d74d2b9554f06199fb58"},
{file = "pydantic_core-2.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:443fed67d33aa85357464f297e3d26e570267d1af6fef1c21ca50921d2976302"},
{file = "pydantic_core-2.10.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:042462d8d6ba707fd3ce9649e7bf268633a41018d6a998fb5fbacb7e928a183e"},
{file = "pydantic_core-2.10.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:ecdbde46235f3d560b18be0cb706c8e8ad1b965e5c13bbba7450c86064e96561"},
{file = "pydantic_core-2.10.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:ed550ed05540c03f0e69e6d74ad58d026de61b9eaebebbaaf8873e585cbb18de"},
{file = "pydantic_core-2.10.1-cp38-none-win32.whl", hash = "sha256:8cdbbd92154db2fec4ec973d45c565e767ddc20aa6dbaf50142676484cbff8ee"},
{file = "pydantic_core-2.10.1-cp38-none-win_amd64.whl", hash = "sha256:9f6f3e2598604956480f6c8aa24a3384dbf6509fe995d97f6ca6103bb8c2534e"},
{file = "pydantic_core-2.10.1-cp39-cp39-macosx_10_7_x86_64.whl", hash = "sha256:655f8f4c8d6a5963c9a0687793da37b9b681d9ad06f29438a3b2326d4e6b7970"},
{file = "pydantic_core-2.10.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e570ffeb2170e116a5b17e83f19911020ac79d19c96f320cbfa1fa96b470185b"},
{file = "pydantic_core-2.10.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:64322bfa13e44c6c30c518729ef08fda6026b96d5c0be724b3c4ae4da939f875"},
{file = "pydantic_core-2.10.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:485a91abe3a07c3a8d1e082ba29254eea3e2bb13cbbd4351ea4e5a21912cc9b0"},
{file = "pydantic_core-2.10.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7c2b8eb9fc872e68b46eeaf835e86bccc3a58ba57d0eedc109cbb14177be531"},
{file = "pydantic_core-2.10.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a5cb87bdc2e5f620693148b5f8f842d293cae46c5f15a1b1bf7ceeed324a740c"},
{file = "pydantic_core-2.10.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25bd966103890ccfa028841a8f30cebcf5875eeac8c4bde4fe221364c92f0c9a"},
{file = "pydantic_core-2.10.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f323306d0556351735b54acbf82904fe30a27b6a7147153cbe6e19aaaa2aa429"},
{file = "pydantic_core-2.10.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0c27f38dc4fbf07b358b2bc90edf35e82d1703e22ff2efa4af4ad5de1b3833e7"},
{file = "pydantic_core-2.10.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:f1365e032a477c1430cfe0cf2856679529a2331426f8081172c4a74186f1d595"},
{file = "pydantic_core-2.10.1-cp39-none-win32.whl", hash = "sha256:a1c311fd06ab3b10805abb72109f01a134019739bd3286b8ae1bc2fc4e50c07a"},
{file = "pydantic_core-2.10.1-cp39-none-win_amd64.whl", hash = "sha256:ae8a8843b11dc0b03b57b52793e391f0122e740de3df1474814c700d2622950a"},
{file = "pydantic_core-2.10.1-pp310-pypy310_pp73-macosx_10_7_x86_64.whl", hash = "sha256:d43002441932f9a9ea5d6f9efaa2e21458221a3a4b417a14027a1d530201ef1b"},
{file = "pydantic_core-2.10.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:fcb83175cc4936a5425dde3356f079ae03c0802bbdf8ff82c035f8a54b333521"},
{file = "pydantic_core-2.10.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:962ed72424bf1f72334e2f1e61b68f16c0e596f024ca7ac5daf229f7c26e4208"},
{file = "pydantic_core-2.10.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2cf5bb4dd67f20f3bbc1209ef572a259027c49e5ff694fa56bed62959b41e1f9"},
{file = "pydantic_core-2.10.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e544246b859f17373bed915182ab841b80849ed9cf23f1f07b73b7c58baee5fb"},
{file = "pydantic_core-2.10.1-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:c0877239307b7e69d025b73774e88e86ce82f6ba6adf98f41069d5b0b78bd1bf"},
{file = "pydantic_core-2.10.1-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:53df009d1e1ba40f696f8995683e067e3967101d4bb4ea6f667931b7d4a01357"},
{file = "pydantic_core-2.10.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a1254357f7e4c82e77c348dabf2d55f1d14d19d91ff025004775e70a6ef40ada"},
{file = "pydantic_core-2.10.1-pp37-pypy37_pp73-macosx_10_7_x86_64.whl", hash = "sha256:524ff0ca3baea164d6d93a32c58ac79eca9f6cf713586fdc0adb66a8cdeab96a"},
{file = "pydantic_core-2.10.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3f0ac9fb8608dbc6eaf17956bf623c9119b4db7dbb511650910a82e261e6600f"},
{file = "pydantic_core-2.10.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:320f14bd4542a04ab23747ff2c8a778bde727158b606e2661349557f0770711e"},
{file = "pydantic_core-2.10.1-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:63974d168b6233b4ed6a0046296803cb13c56637a7b8106564ab575926572a55"},
{file = "pydantic_core-2.10.1-pp37-pypy37_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:417243bf599ba1f1fef2bb8c543ceb918676954734e2dcb82bf162ae9d7bd514"},
{file = "pydantic_core-2.10.1-pp37-pypy37_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:dda81e5ec82485155a19d9624cfcca9be88a405e2857354e5b089c2a982144b2"},
{file = "pydantic_core-2.10.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:14cfbb00959259e15d684505263d5a21732b31248a5dd4941f73a3be233865b9"},
{file = "pydantic_core-2.10.1-pp38-pypy38_pp73-macosx_10_7_x86_64.whl", hash = "sha256:631cb7415225954fdcc2a024119101946793e5923f6c4d73a5914d27eb3d3a05"},
{file = "pydantic_core-2.10.1-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:bec7dd208a4182e99c5b6c501ce0b1f49de2802448d4056091f8e630b28e9a52"},
{file = "pydantic_core-2.10.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:149b8a07712f45b332faee1a2258d8ef1fb4a36f88c0c17cb687f205c5dc6e7d"},
{file = "pydantic_core-2.10.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4d966c47f9dd73c2d32a809d2be529112d509321c5310ebf54076812e6ecd884"},
{file = "pydantic_core-2.10.1-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7eb037106f5c6b3b0b864ad226b0b7ab58157124161d48e4b30c4a43fef8bc4b"},
{file = "pydantic_core-2.10.1-pp38-pypy38_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:154ea7c52e32dce13065dbb20a4a6f0cc012b4f667ac90d648d36b12007fa9f7"},
{file = "pydantic_core-2.10.1-pp38-pypy38_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:e562617a45b5a9da5be4abe72b971d4f00bf8555eb29bb91ec2ef2be348cd132"},
{file = "pydantic_core-2.10.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:f23b55eb5464468f9e0e9a9935ce3ed2a870608d5f534025cd5536bca25b1402"},
{file = "pydantic_core-2.10.1-pp39-pypy39_pp73-macosx_10_7_x86_64.whl", hash = "sha256:e9121b4009339b0f751955baf4543a0bfd6bc3f8188f8056b1a25a2d45099934"},
{file = "pydantic_core-2.10.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:0523aeb76e03f753b58be33b26540880bac5aa54422e4462404c432230543f33"},
{file = "pydantic_core-2.10.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e0e2959ef5d5b8dc9ef21e1a305a21a36e254e6a34432d00c72a92fdc5ecda5"},
{file = "pydantic_core-2.10.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:da01bec0a26befab4898ed83b362993c844b9a607a86add78604186297eb047e"},
{file = "pydantic_core-2.10.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f2e9072d71c1f6cfc79a36d4484c82823c560e6f5599c43c1ca6b5cdbd54f881"},
{file = "pydantic_core-2.10.1-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f36a3489d9e28fe4b67be9992a23029c3cec0babc3bd9afb39f49844a8c721c5"},
{file = "pydantic_core-2.10.1-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f64f82cc3443149292b32387086d02a6c7fb39b8781563e0ca7b8d7d9cf72bd7"},
{file = "pydantic_core-2.10.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:b4a6db486ac8e99ae696e09efc8b2b9fea67b63c8f88ba7a1a16c24a057a0776"},
{file = "pydantic_core-2.10.1.tar.gz", hash = "sha256:0f8682dbdd2f67f8e1edddcbffcc29f60a6182b4901c367fc8c1c40d30bb0a82"},
{file = "pydantic_core-2.14.3-cp310-cp310-macosx_10_7_x86_64.whl", hash = "sha256:ba44fad1d114539d6a1509966b20b74d2dec9a5b0ee12dd7fd0a1bb7b8785e5f"},
{file = "pydantic_core-2.14.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4a70d23eedd88a6484aa79a732a90e36701048a1509078d1b59578ef0ea2cdf5"},
{file = "pydantic_core-2.14.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7cc24728a1a9cef497697e53b3d085fb4d3bc0ef1ef4d9b424d9cf808f52c146"},
{file = "pydantic_core-2.14.3-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ab4a2381005769a4af2ffddae74d769e8a4aae42e970596208ec6d615c6fb080"},
{file = "pydantic_core-2.14.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:905a12bf088d6fa20e094f9a477bf84bd823651d8b8384f59bcd50eaa92e6a52"},
{file = "pydantic_core-2.14.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:38aed5a1bbc3025859f56d6a32f6e53ca173283cb95348e03480f333b1091e7d"},
{file = "pydantic_core-2.14.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1767bd3f6370458e60c1d3d7b1d9c2751cc1ad743434e8ec84625a610c8b9195"},
{file = "pydantic_core-2.14.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7cb0c397f29688a5bd2c0dbd44451bc44ebb9b22babc90f97db5ec3e5bb69977"},
{file = "pydantic_core-2.14.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9ff737f24b34ed26de62d481ef522f233d3c5927279f6b7229de9b0deb3f76b5"},
{file = "pydantic_core-2.14.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a1a39fecb5f0b19faee9a8a8176c805ed78ce45d760259a4ff3d21a7daa4dfc1"},
{file = "pydantic_core-2.14.3-cp310-none-win32.whl", hash = "sha256:ccbf355b7276593c68fa824030e68cb29f630c50e20cb11ebb0ee450ae6b3d08"},
{file = "pydantic_core-2.14.3-cp310-none-win_amd64.whl", hash = "sha256:536e1f58419e1ec35f6d1310c88496f0d60e4f182cacb773d38076f66a60b149"},
{file = "pydantic_core-2.14.3-cp311-cp311-macosx_10_7_x86_64.whl", hash = "sha256:f1f46700402312bdc31912f6fc17f5ecaaaa3bafe5487c48f07c800052736289"},
{file = "pydantic_core-2.14.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:88ec906eb2d92420f5b074f59cf9e50b3bb44f3cb70e6512099fdd4d88c2f87c"},
{file = "pydantic_core-2.14.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:056ea7cc3c92a7d2a14b5bc9c9fa14efa794d9f05b9794206d089d06d3433dc7"},
{file = "pydantic_core-2.14.3-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:076edc972b68a66870cec41a4efdd72a6b655c4098a232314b02d2bfa3bfa157"},
{file = "pydantic_core-2.14.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e71f666c3bf019f2490a47dddb44c3ccea2e69ac882f7495c68dc14d4065eac2"},
{file = "pydantic_core-2.14.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f518eac285c9632be337323eef9824a856f2680f943a9b68ac41d5f5bad7df7c"},
{file = "pydantic_core-2.14.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9dbab442a8d9ca918b4ed99db8d89d11b1f067a7dadb642476ad0889560dac79"},
{file = "pydantic_core-2.14.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0653fb9fc2fa6787f2fa08631314ab7fc8070307bd344bf9471d1b7207c24623"},
{file = "pydantic_core-2.14.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:c54af5069da58ea643ad34ff32fd6bc4eebb8ae0fef9821cd8919063e0aeeaab"},
{file = "pydantic_core-2.14.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc956f78651778ec1ab105196e90e0e5f5275884793ab67c60938c75bcca3989"},
{file = "pydantic_core-2.14.3-cp311-none-win32.whl", hash = "sha256:5b73441a1159f1fb37353aaefb9e801ab35a07dd93cb8177504b25a317f4215a"},
{file = "pydantic_core-2.14.3-cp311-none-win_amd64.whl", hash = "sha256:7349f99f1ef8b940b309179733f2cad2e6037a29560f1b03fdc6aa6be0a8d03c"},
{file = "pydantic_core-2.14.3-cp311-none-win_arm64.whl", hash = "sha256:ec79dbe23702795944d2ae4c6925e35a075b88acd0d20acde7c77a817ebbce94"},
{file = "pydantic_core-2.14.3-cp312-cp312-macosx_10_7_x86_64.whl", hash = "sha256:8f5624f0f67f2b9ecaa812e1dfd2e35b256487566585160c6c19268bf2ffeccc"},
{file = "pydantic_core-2.14.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6c2d118d1b6c9e2d577e215567eedbe11804c3aafa76d39ec1f8bc74e918fd07"},
{file = "pydantic_core-2.14.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fe863491664c6720d65ae438d4efaa5eca766565a53adb53bf14bc3246c72fe0"},
{file = "pydantic_core-2.14.3-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:136bc7247e97a921a020abbd6ef3169af97569869cd6eff41b6a15a73c44ea9b"},
{file = "pydantic_core-2.14.3-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aeafc7f5bbddc46213707266cadc94439bfa87ecf699444de8be044d6d6eb26f"},
{file = "pydantic_core-2.14.3-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e16aaf788f1de5a85c8f8fcc9c1ca1dd7dd52b8ad30a7889ca31c7c7606615b8"},
{file = "pydantic_core-2.14.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f8fc652c354d3362e2932a79d5ac4bbd7170757a41a62c4fe0f057d29f10bebb"},
{file = "pydantic_core-2.14.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f1b92e72babfd56585c75caf44f0b15258c58e6be23bc33f90885cebffde3400"},
{file = "pydantic_core-2.14.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:75f3f534f33651b73f4d3a16d0254de096f43737d51e981478d580f4b006b427"},
{file = "pydantic_core-2.14.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:c9ffd823c46e05ef3eb28b821aa7bc501efa95ba8880b4a1380068e32c5bed47"},
{file = "pydantic_core-2.14.3-cp312-none-win32.whl", hash = "sha256:12e05a76b223577a4696c76d7a6b36a0ccc491ffb3c6a8cf92d8001d93ddfd63"},
{file = "pydantic_core-2.14.3-cp312-none-win_amd64.whl", hash = "sha256:1582f01eaf0537a696c846bea92082082b6bfc1103a88e777e983ea9fbdc2a0f"},
{file = "pydantic_core-2.14.3-cp312-none-win_arm64.whl", hash = "sha256:96fb679c7ca12a512d36d01c174a4fbfd912b5535cc722eb2c010c7b44eceb8e"},
{file = "pydantic_core-2.14.3-cp37-cp37m-macosx_10_7_x86_64.whl", hash = "sha256:71ed769b58d44e0bc2701aa59eb199b6665c16e8a5b8b4a84db01f71580ec448"},
{file = "pydantic_core-2.14.3-cp37-cp37m-macosx_11_0_arm64.whl", hash = "sha256:5402ee0f61e7798ea93a01b0489520f2abfd9b57b76b82c93714c4318c66ca06"},
{file = "pydantic_core-2.14.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eaab9dc009e22726c62fe3b850b797e7f0e7ba76d245284d1064081f512c7226"},
{file = "pydantic_core-2.14.3-cp37-cp37m-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:92486a04d54987054f8b4405a9af9d482e5100d6fe6374fc3303015983fc8bda"},
{file = "pydantic_core-2.14.3-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cf08b43d1d5d1678f295f0431a4a7e1707d4652576e1d0f8914b5e0213bfeee5"},
{file = "pydantic_core-2.14.3-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8ca13480ce16daad0504be6ce893b0ee8ec34cd43b993b754198a89e2787f7e"},
{file = "pydantic_core-2.14.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44afa3c18d45053fe8d8228950ee4c8eaf3b5a7f3b64963fdeac19b8342c987f"},
{file = "pydantic_core-2.14.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:56814b41486e2d712a8bc02a7b1f17b87fa30999d2323bbd13cf0e52296813a1"},
{file = "pydantic_core-2.14.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c3dc2920cc96f9aa40c6dc54256e436cc95c0a15562eb7bd579e1811593c377e"},
{file = "pydantic_core-2.14.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e483b8b913fcd3b48badec54185c150cb7ab0e6487914b84dc7cde2365e0c892"},
{file = "pydantic_core-2.14.3-cp37-none-win32.whl", hash = "sha256:364dba61494e48f01ef50ae430e392f67ee1ee27e048daeda0e9d21c3ab2d609"},
{file = "pydantic_core-2.14.3-cp37-none-win_amd64.whl", hash = "sha256:a402ae1066be594701ac45661278dc4a466fb684258d1a2c434de54971b006ca"},
{file = "pydantic_core-2.14.3-cp38-cp38-macosx_10_7_x86_64.whl", hash = "sha256:10904368261e4509c091cbcc067e5a88b070ed9a10f7ad78f3029c175487490f"},
{file = "pydantic_core-2.14.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:260692420028319e201b8649b13ac0988974eeafaaef95d0dfbf7120c38dc000"},
{file = "pydantic_core-2.14.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c1bf1a7b05a65d3b37a9adea98e195e0081be6b17ca03a86f92aeb8b110f468"},
{file = "pydantic_core-2.14.3-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d7abd17a838a52140e3aeca271054e321226f52df7e0a9f0da8f91ea123afe98"},
{file = "pydantic_core-2.14.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a5c51460ede609fbb4fa883a8fe16e749964ddb459966d0518991ec02eb8dfb9"},
{file = "pydantic_core-2.14.3-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d06c78074646111fb01836585f1198367b17d57c9f427e07aaa9ff499003e58d"},
{file = "pydantic_core-2.14.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af452e69446fadf247f18ac5d153b1f7e61ef708f23ce85d8c52833748c58075"},
{file = "pydantic_core-2.14.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e3ad4968711fb379a67c8c755beb4dae8b721a83737737b7bcee27c05400b047"},
{file = "pydantic_core-2.14.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:c5ea0153482e5b4d601c25465771c7267c99fddf5d3f3bdc238ef930e6d051cf"},
{file = "pydantic_core-2.14.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:96eb10ef8920990e703da348bb25fedb8b8653b5966e4e078e5be382b430f9e0"},
{file = "pydantic_core-2.14.3-cp38-none-win32.whl", hash = "sha256:ea1498ce4491236d1cffa0eee9ad0968b6ecb0c1cd711699c5677fc689905f00"},
{file = "pydantic_core-2.14.3-cp38-none-win_amd64.whl", hash = "sha256:2bc736725f9bd18a60eec0ed6ef9b06b9785454c8d0105f2be16e4d6274e63d0"},
{file = "pydantic_core-2.14.3-cp39-cp39-macosx_10_7_x86_64.whl", hash = "sha256:1ea992659c03c3ea811d55fc0a997bec9dde863a617cc7b25cfde69ef32e55af"},
{file = "pydantic_core-2.14.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:d2b53e1f851a2b406bbb5ac58e16c4a5496038eddd856cc900278fa0da97f3fc"},
{file = "pydantic_core-2.14.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0c7f8e8a7cf8e81ca7d44bea4f181783630959d41b4b51d2f74bc50f348a090f"},
{file = "pydantic_core-2.14.3-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8d3b9c91eeb372a64ec6686c1402afd40cc20f61a0866850f7d989b6bf39a41a"},
{file = "pydantic_core-2.14.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9ef3e2e407e4cad2df3c89488a761ed1f1c33f3b826a2ea9a411b0a7d1cccf1b"},
{file = "pydantic_core-2.14.3-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f86f20a9d5bee1a6ede0f2757b917bac6908cde0f5ad9fcb3606db1e2968bcf5"},
{file = "pydantic_core-2.14.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:61beaa79d392d44dc19d6f11ccd824d3cccb865c4372157c40b92533f8d76dd0"},
{file = "pydantic_core-2.14.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d41df8e10b094640a6b234851b624b76a41552f637b9fb34dc720b9fe4ef3be4"},
{file = "pydantic_core-2.14.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2c08ac60c3caa31f825b5dbac47e4875bd4954d8f559650ad9e0b225eaf8ed0c"},
{file = "pydantic_core-2.14.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d8b3932f1a369364606417ded5412c4ffb15bedbcf797c31317e55bd5d920e"},
{file = "pydantic_core-2.14.3-cp39-none-win32.whl", hash = "sha256:caa94726791e316f0f63049ee00dff3b34a629b0d099f3b594770f7d0d8f1f56"},
{file = "pydantic_core-2.14.3-cp39-none-win_amd64.whl", hash = "sha256:2494d20e4c22beac30150b4be3b8339bf2a02ab5580fa6553ca274bc08681a65"},
{file = "pydantic_core-2.14.3-pp310-pypy310_pp73-macosx_10_7_x86_64.whl", hash = "sha256:fe272a72c7ed29f84c42fedd2d06c2f9858dc0c00dae3b34ba15d6d8ae0fbaaf"},
{file = "pydantic_core-2.14.3-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:7e63a56eb7fdee1587d62f753ccd6d5fa24fbeea57a40d9d8beaef679a24bdd6"},
{file = "pydantic_core-2.14.3-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b7692f539a26265cece1e27e366df5b976a6db6b1f825a9e0466395b314ee48b"},
{file = "pydantic_core-2.14.3-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af46f0b7a1342b49f208fed31f5a83b8495bb14b652f621e0a6787d2f10f24ee"},
{file = "pydantic_core-2.14.3-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6e2f9d76c00e805d47f19c7a96a14e4135238a7551a18bfd89bb757993fd0933"},
{file = "pydantic_core-2.14.3-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:de52ddfa6e10e892d00f747bf7135d7007302ad82e243cf16d89dd77b03b649d"},
{file = "pydantic_core-2.14.3-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:38113856c7fad8c19be7ddd57df0c3e77b1b2336459cb03ee3903ce9d5e236ce"},
{file = "pydantic_core-2.14.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:354db020b1f8f11207b35360b92d95725621eb92656725c849a61e4b550f4acc"},
{file = "pydantic_core-2.14.3-pp37-pypy37_pp73-macosx_10_7_x86_64.whl", hash = "sha256:76fc18653a5c95e5301a52d1b5afb27c9adc77175bf00f73e94f501caf0e05ad"},
{file = "pydantic_core-2.14.3-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2646f8270f932d79ba61102a15ea19a50ae0d43b314e22b3f8f4b5fabbfa6e38"},
{file = "pydantic_core-2.14.3-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:37dad73a2f82975ed563d6a277fd9b50e5d9c79910c4aec787e2d63547202315"},
{file = "pydantic_core-2.14.3-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:113752a55a8eaece2e4ac96bc8817f134c2c23477e477d085ba89e3aa0f4dc44"},
{file = "pydantic_core-2.14.3-pp37-pypy37_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:8488e973547e8fb1b4193fd9faf5236cf1b7cd5e9e6dc7ff6b4d9afdc4c720cb"},
{file = "pydantic_core-2.14.3-pp37-pypy37_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:3d1dde10bd9962b1434053239b1d5490fc31a2b02d8950a5f731bc584c7a5a0f"},
{file = "pydantic_core-2.14.3-pp38-pypy38_pp73-macosx_10_7_x86_64.whl", hash = "sha256:2c83892c7bf92b91d30faca53bb8ea21f9d7e39f0ae4008ef2c2f91116d0464a"},
{file = "pydantic_core-2.14.3-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:849cff945284c577c5f621d2df76ca7b60f803cc8663ff01b778ad0af0e39bb9"},
{file = "pydantic_core-2.14.3-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa89919fbd8a553cd7d03bf23d5bc5deee622e1b5db572121287f0e64979476"},
{file = "pydantic_core-2.14.3-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bf15145b1f8056d12c67255cd3ce5d317cd4450d5ee747760d8d088d85d12a2d"},
{file = "pydantic_core-2.14.3-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:4cc6bb11f4e8e5ed91d78b9880774fbc0856cb226151b0a93b549c2b26a00c19"},
{file = "pydantic_core-2.14.3-pp38-pypy38_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:832d16f248ca0cc96929139734ec32d21c67669dcf8a9f3f733c85054429c012"},
{file = "pydantic_core-2.14.3-pp38-pypy38_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:b02b5e1f54c3396c48b665050464803c23c685716eb5d82a1d81bf81b5230da4"},
{file = "pydantic_core-2.14.3-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:1f2d4516c32255782153e858f9a900ca6deadfb217fd3fb21bb2b60b4e04d04d"},
{file = "pydantic_core-2.14.3-pp39-pypy39_pp73-macosx_10_7_x86_64.whl", hash = "sha256:0a3e51c2be472b7867eb0c5d025b91400c2b73a0823b89d4303a9097e2ec6655"},
{file = "pydantic_core-2.14.3-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:df33902464410a1f1a0411a235f0a34e7e129f12cb6340daca0f9d1390f5fe10"},
{file = "pydantic_core-2.14.3-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:27828f0227b54804aac6fb077b6bb48e640b5435fdd7fbf0c274093a7b78b69c"},
{file = "pydantic_core-2.14.3-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1e2979dc80246e18e348de51246d4c9b410186ffa3c50e77924bec436b1e36cb"},
{file = "pydantic_core-2.14.3-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b28996872b48baf829ee75fa06998b607c66a4847ac838e6fd7473a6b2ab68e7"},
{file = "pydantic_core-2.14.3-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:ca55c9671bb637ce13d18ef352fd32ae7aba21b4402f300a63f1fb1fd18e0364"},
{file = "pydantic_core-2.14.3-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:aecd5ed096b0e5d93fb0367fd8f417cef38ea30b786f2501f6c34eabd9062c38"},
{file = "pydantic_core-2.14.3-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:44aaf1a07ad0824e407dafc637a852e9a44d94664293bbe7d8ee549c356c8882"},
{file = "pydantic_core-2.14.3.tar.gz", hash = "sha256:3ad083df8fe342d4d8d00cc1d3c1a23f0dc84fce416eb301e69f1ddbbe124d3f"},
]
[package.dependencies]
@ -2012,12 +1998,12 @@ plugins = ["importlib-metadata"]
[[package]]
name = "pyicu"
version = "2.11"
version = "2.12"
description = "Python extension wrapping the ICU C++ API"
optional = true
python-versions = "*"
files = [
{file = "PyICU-2.11.tar.gz", hash = "sha256:3ab531264cfe9132b3d2ac5d708da9a4649d25f6e6813730ac88cf040a08a844"},
{file = "PyICU-2.12.tar.gz", hash = "sha256:bd7ab5efa93ad692e6daa29cd249364e521218329221726a113ca3cb281c8611"},
]
[[package]]
@ -2094,20 +2080,20 @@ tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"]
[[package]]
name = "pyopenssl"
version = "23.2.0"
version = "23.3.0"
description = "Python wrapper module around the OpenSSL library"
optional = false
python-versions = ">=3.6"
python-versions = ">=3.7"
files = [
{file = "pyOpenSSL-23.2.0-py3-none-any.whl", hash = "sha256:24f0dc5227396b3e831f4c7f602b950a5e9833d292c8e4a2e06b709292806ae2"},
{file = "pyOpenSSL-23.2.0.tar.gz", hash = "sha256:276f931f55a452e7dea69c7173e984eb2a4407ce413c918aa34b55f82f9b8bac"},
{file = "pyOpenSSL-23.3.0-py3-none-any.whl", hash = "sha256:6756834481d9ed5470f4a9393455154bc92fe7a64b7bc6ee2c804e78c52099b2"},
{file = "pyOpenSSL-23.3.0.tar.gz", hash = "sha256:6b2cba5cc46e822750ec3e5a81ee12819850b11303630d575e98108a079c2b12"},
]
[package.dependencies]
cryptography = ">=38.0.0,<40.0.0 || >40.0.0,<40.0.1 || >40.0.1,<42"
cryptography = ">=41.0.5,<42"
[package.extras]
docs = ["sphinx (!=5.2.0,!=5.2.0.post0)", "sphinx-rtd-theme"]
docs = ["sphinx (!=5.2.0,!=5.2.0.post0,!=7.2.5)", "sphinx-rtd-theme"]
test = ["flaky", "pretend", "pytest (>=3.0.1)"]
[[package]]
@ -2286,13 +2272,13 @@ use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-toolbelt"
version = "0.10.1"
version = "1.0.0"
description = "A utility belt for advanced users of python-requests"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
files = [
{file = "requests-toolbelt-0.10.1.tar.gz", hash = "sha256:62e09f7ff5ccbda92772a29f394a49c3ad6cb181d568b1337626b2abb628a63d"},
{file = "requests_toolbelt-0.10.1-py2.py3-none-any.whl", hash = "sha256:18565aa58116d9951ac39baa288d3adb5b3ff975c4f25eee78555d89e8f247f7"},
{file = "requests-toolbelt-1.0.0.tar.gz", hash = "sha256:7681a0a3d047012b5bdc0ee37d7f8f07ebe76ab08caeccfc3921ce23c88d5bc6"},
{file = "requests_toolbelt-1.0.0-py2.py3-none-any.whl", hash = "sha256:cccfdd665f0a24fcf4726e690f65639d272bb0637b9b92dfd91a5568ccf6bd06"},
]
[package.dependencies]
@ -2439,28 +2425,28 @@ files = [
[[package]]
name = "ruff"
version = "0.0.292"
description = "An extremely fast Python linter, written in Rust."
version = "0.1.6"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
files = [
{file = "ruff-0.0.292-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:02f29db018c9d474270c704e6c6b13b18ed0ecac82761e4fcf0faa3728430c96"},
{file = "ruff-0.0.292-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:69654e564342f507edfa09ee6897883ca76e331d4bbc3676d8a8403838e9fade"},
{file = "ruff-0.0.292-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c3c91859a9b845c33778f11902e7b26440d64b9d5110edd4e4fa1726c41e0a4"},
{file = "ruff-0.0.292-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f4476f1243af2d8c29da5f235c13dca52177117935e1f9393f9d90f9833f69e4"},
{file = "ruff-0.0.292-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be8eb50eaf8648070b8e58ece8e69c9322d34afe367eec4210fdee9a555e4ca7"},
{file = "ruff-0.0.292-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:9889bac18a0c07018aac75ef6c1e6511d8411724d67cb879103b01758e110a81"},
{file = "ruff-0.0.292-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6bdfabd4334684a4418b99b3118793f2c13bb67bf1540a769d7816410402a205"},
{file = "ruff-0.0.292-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:aa7c77c53bfcd75dbcd4d1f42d6cabf2485d2e1ee0678da850f08e1ab13081a8"},
{file = "ruff-0.0.292-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e087b24d0d849c5c81516ec740bf4fd48bf363cfb104545464e0fca749b6af9"},
{file = "ruff-0.0.292-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:f160b5ec26be32362d0774964e218f3fcf0a7da299f7e220ef45ae9e3e67101a"},
{file = "ruff-0.0.292-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:ac153eee6dd4444501c4bb92bff866491d4bfb01ce26dd2fff7ca472c8df9ad0"},
{file = "ruff-0.0.292-py3-none-musllinux_1_2_i686.whl", hash = "sha256:87616771e72820800b8faea82edd858324b29bb99a920d6aa3d3949dd3f88fb0"},
{file = "ruff-0.0.292-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:b76deb3bdbea2ef97db286cf953488745dd6424c122d275f05836c53f62d4016"},
{file = "ruff-0.0.292-py3-none-win32.whl", hash = "sha256:e854b05408f7a8033a027e4b1c7f9889563dd2aca545d13d06711e5c39c3d003"},
{file = "ruff-0.0.292-py3-none-win_amd64.whl", hash = "sha256:f27282bedfd04d4c3492e5c3398360c9d86a295be00eccc63914438b4ac8a83c"},
{file = "ruff-0.0.292-py3-none-win_arm64.whl", hash = "sha256:7f67a69c8f12fbc8daf6ae6d36705037bde315abf8b82b6e1f4c9e74eb750f68"},
{file = "ruff-0.0.292.tar.gz", hash = "sha256:1093449e37dd1e9b813798f6ad70932b57cf614e5c2b5c51005bf67d55db33ac"},
{file = "ruff-0.1.6-py3-none-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:88b8cdf6abf98130991cbc9f6438f35f6e8d41a02622cc5ee130a02a0ed28703"},
{file = "ruff-0.1.6-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:5c549ed437680b6105a1299d2cd30e4964211606eeb48a0ff7a93ef70b902248"},
{file = "ruff-0.1.6-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1cf5f701062e294f2167e66d11b092bba7af6a057668ed618a9253e1e90cfd76"},
{file = "ruff-0.1.6-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:05991ee20d4ac4bb78385360c684e4b417edd971030ab12a4fbd075ff535050e"},
{file = "ruff-0.1.6-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:87455a0c1f739b3c069e2f4c43b66479a54dea0276dd5d4d67b091265f6fd1dc"},
{file = "ruff-0.1.6-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:683aa5bdda5a48cb8266fcde8eea2a6af4e5700a392c56ea5fb5f0d4bfdc0240"},
{file = "ruff-0.1.6-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:137852105586dcbf80c1717facb6781555c4e99f520c9c827bd414fac67ddfb6"},
{file = "ruff-0.1.6-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bd98138a98d48a1c36c394fd6b84cd943ac92a08278aa8ac8c0fdefcf7138f35"},
{file = "ruff-0.1.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a0cd909d25f227ac5c36d4e7e681577275fb74ba3b11d288aff7ec47e3ae745"},
{file = "ruff-0.1.6-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e8fd1c62a47aa88a02707b5dd20c5ff20d035d634aa74826b42a1da77861b5ff"},
{file = "ruff-0.1.6-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:fd89b45d374935829134a082617954120d7a1470a9f0ec0e7f3ead983edc48cc"},
{file = "ruff-0.1.6-py3-none-musllinux_1_2_i686.whl", hash = "sha256:491262006e92f825b145cd1e52948073c56560243b55fb3b4ecb142f6f0e9543"},
{file = "ruff-0.1.6-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:ea284789861b8b5ca9d5443591a92a397ac183d4351882ab52f6296b4fdd5462"},
{file = "ruff-0.1.6-py3-none-win32.whl", hash = "sha256:1610e14750826dfc207ccbcdd7331b6bd285607d4181df9c1c6ae26646d6848a"},
{file = "ruff-0.1.6-py3-none-win_amd64.whl", hash = "sha256:4558b3e178145491e9bc3b2ee3c4b42f19d19384eaa5c59d10acf6e8f8b57e33"},
{file = "ruff-0.1.6-py3-none-win_arm64.whl", hash = "sha256:03910e81df0d8db0e30050725a5802441c2022ea3ae4fe0609b76081731accbc"},
{file = "ruff-0.1.6.tar.gz", hash = "sha256:1b09f29b16c6ead5ea6b097ef2764b42372aebe363722f1605ecbcd2b9207184"},
]
[[package]]
@ -2495,13 +2481,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
[[package]]
name = "sentry-sdk"
version = "1.32.0"
version = "1.35.0"
description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = "*"
files = [
{file = "sentry-sdk-1.32.0.tar.gz", hash = "sha256:935e8fbd7787a3702457393b74b13d89a5afb67185bc0af85c00cb27cbd42e7c"},
{file = "sentry_sdk-1.32.0-py2.py3-none-any.whl", hash = "sha256:eeb0b3550536f3bbc05bb1c7e0feb3a78d74acb43b607159a606ed2ec0a33a4d"},
{file = "sentry-sdk-1.35.0.tar.gz", hash = "sha256:04e392db9a0d59bd49a51b9e3a92410ac5867556820465057c2ef89a38e953e9"},
{file = "sentry_sdk-1.35.0-py2.py3-none-any.whl", hash = "sha256:a7865952701e46d38b41315c16c075367675c48d049b90a4cc2e41991ebc7efa"},
]
[package.dependencies]
@ -2580,13 +2566,13 @@ testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (
[[package]]
name = "setuptools-rust"
version = "1.8.0"
version = "1.8.1"
description = "Setuptools Rust extension plugin"
optional = false
python-versions = ">=3.8"
files = [
{file = "setuptools-rust-1.8.0.tar.gz", hash = "sha256:5e02b7a80058853bf64127314f6b97d0efed11e08b94c88ca639a20976f6adc4"},
{file = "setuptools_rust-1.8.0-py3-none-any.whl", hash = "sha256:95ec67edee2ca73233c9e75250e9d23a302aa23b4c8413dfd19c14c30d08f703"},
{file = "setuptools-rust-1.8.1.tar.gz", hash = "sha256:94b1dd5d5308b3138d5b933c3a2b55e6d6927d1a22632e509fcea9ddd0f7e486"},
{file = "setuptools_rust-1.8.1-py3-none-any.whl", hash = "sha256:b5324493949ccd6aa0c03890c5f6b5f02de4512e3ac1697d02e9a6c02b18aa8e"},
]
[package.dependencies]
@ -2705,17 +2691,17 @@ test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
[[package]]
name = "sphinx-autodoc2"
version = "0.4.2"
version = "0.5.0"
description = "Analyse a python project and create documentation for it."
optional = false
python-versions = ">=3.8"
files = [
{file = "sphinx-autodoc2-0.4.2.tar.gz", hash = "sha256:06da226a25a4339e173b34bb0e590e0ba9b4570b414796140aee1939d09acb3a"},
{file = "sphinx_autodoc2-0.4.2-py3-none-any.whl", hash = "sha256:00835ba8c980b9c510ea794c3e2060e5a254a74c6c22badc9bfd3642dc1034b4"},
{file = "sphinx_autodoc2-0.5.0-py3-none-any.whl", hash = "sha256:e867013b1512f9d6d7e6f6799f8b537d6884462acd118ef361f3f619a60b5c9e"},
{file = "sphinx_autodoc2-0.5.0.tar.gz", hash = "sha256:7d76044aa81d6af74447080182b6868c7eb066874edc835e8ddf810735b6565a"},
]
[package.dependencies]
astroid = ">=2.7"
astroid = ">=2.7,<4"
tomli = {version = "*", markers = "python_version < \"3.11\""}
typing-extensions = "*"
@ -2723,7 +2709,7 @@ typing-extensions = "*"
cli = ["typer[all]"]
docs = ["furo", "myst-parser", "sphinx (>=4.0.0)"]
sphinx = ["sphinx (>=4.0.0)"]
testing = ["pytest", "pytest-cov", "pytest-regressions", "sphinx (>=4.0.0)"]
testing = ["pytest", "pytest-cov", "pytest-regressions", "sphinx (>=4.0.0,<7)"]
[[package]]
name = "sphinx-basic-ng"
@ -2906,18 +2892,17 @@ files = [
[[package]]
name = "towncrier"
version = "23.6.0"
version = "23.11.0"
description = "Building newsfiles for your project."
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "towncrier-23.6.0-py3-none-any.whl", hash = "sha256:da552f29192b3c2b04d630133f194c98e9f14f0558669d427708e203fea4d0a5"},
{file = "towncrier-23.6.0.tar.gz", hash = "sha256:fc29bd5ab4727c8dacfbe636f7fb5dc53b99805b62da1c96b214836159ff70c1"},
{file = "towncrier-23.11.0-py3-none-any.whl", hash = "sha256:2e519ca619426d189e3c98c99558fe8be50c9ced13ea1fc20a4a353a95d2ded7"},
{file = "towncrier-23.11.0.tar.gz", hash = "sha256:13937c247e3f8ae20ac44d895cf5f96a60ad46cfdcc1671759530d7837d9ee5d"},
]
[package.dependencies]
click = "*"
click-default-group = "*"
importlib-resources = {version = ">=5", markers = "python_version < \"3.10\""}
incremental = "*"
jinja2 = "*"
@ -2928,13 +2913,13 @@ dev = ["furo", "packaging", "sphinx (>=5)", "twisted"]
[[package]]
name = "treq"
version = "22.2.0"
version = "23.11.0"
description = "High-level Twisted HTTP Client API"
optional = false
python-versions = ">=3.6"
files = [
{file = "treq-22.2.0-py3-none-any.whl", hash = "sha256:27d95b07c5c14be3e7b280416139b036087617ad5595be913b1f9b3ce981b9b2"},
{file = "treq-22.2.0.tar.gz", hash = "sha256:df757e3f141fc782ede076a604521194ffcb40fa2645cf48e5a37060307f52ec"},
{file = "treq-23.11.0-py3-none-any.whl", hash = "sha256:f494c2218d61cab2cabbee37cd6606d3eea9d16cf14190323095c95d22c467e9"},
{file = "treq-23.11.0.tar.gz", hash = "sha256:0914ff929fd1632ce16797235260f8bc19d20ff7c459c1deabd65b8c68cbeac5"},
]
[package.dependencies]
@ -2942,11 +2927,11 @@ attrs = "*"
hyperlink = ">=21.0.0"
incremental = "*"
requests = ">=2.1.0"
Twisted = {version = ">=18.7.0", extras = ["tls"]}
Twisted = {version = ">=22.10.0", extras = ["tls"]}
[package.extras]
dev = ["httpbin (==0.5.0)", "pep8", "pyflakes"]
docs = ["sphinx (>=1.4.8)"]
dev = ["httpbin (==0.7.0)", "pep8", "pyflakes", "werkzeug (==2.0.3)"]
docs = ["sphinx (<7.0.0)"]
[[package]]
name = "twine"
@ -2972,13 +2957,13 @@ urllib3 = ">=1.26.0"
[[package]]
name = "twisted"
version = "23.8.0"
version = "23.10.0"
description = "An asynchronous networking framework written in Python"
optional = false
python-versions = ">=3.7.1"
python-versions = ">=3.8.0"
files = [
{file = "twisted-23.8.0-py3-none-any.whl", hash = "sha256:b8bdba145de120ffb36c20e6e071cce984e89fba798611ed0704216fb7f884cd"},
{file = "twisted-23.8.0.tar.gz", hash = "sha256:3c73360add17336a622c0d811c2a2ce29866b6e59b1125fd6509b17252098a24"},
{file = "twisted-23.10.0-py3-none-any.whl", hash = "sha256:4ae8bce12999a35f7fe6443e7f1893e6fe09588c8d2bed9c35cdce8ff2d5b444"},
{file = "twisted-23.10.0.tar.gz", hash = "sha256:987847a0790a2c597197613686e2784fd54167df3a55d0fb17c8412305d76ce5"},
]
[package.dependencies]
@ -2991,19 +2976,18 @@ incremental = ">=22.10.0"
pyopenssl = {version = ">=21.0.0", optional = true, markers = "extra == \"tls\""}
service-identity = {version = ">=18.1.0", optional = true, markers = "extra == \"tls\""}
twisted-iocpsupport = {version = ">=1.0.2,<2", markers = "platform_system == \"Windows\""}
typing-extensions = ">=3.10.0"
typing-extensions = ">=4.2.0"
zope-interface = ">=5"
[package.extras]
all-non-platform = ["twisted[conch,contextvars,http2,serial,test,tls]", "twisted[conch,contextvars,http2,serial,test,tls]"]
all-non-platform = ["twisted[conch,http2,serial,test,tls]", "twisted[conch,http2,serial,test,tls]"]
conch = ["appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)"]
contextvars = ["contextvars (>=2.4,<3)"]
dev = ["coverage (>=6b1,<7)", "pyflakes (>=2.2,<3.0)", "python-subunit (>=1.4,<2.0)", "twisted[dev-release]", "twistedchecker (>=0.7,<1.0)"]
dev-release = ["pydoctor (>=23.4.0,<23.5.0)", "pydoctor (>=23.4.0,<23.5.0)", "readthedocs-sphinx-ext (>=2.2,<3.0)", "readthedocs-sphinx-ext (>=2.2,<3.0)", "sphinx (>=5,<7)", "sphinx (>=5,<7)", "sphinx-rtd-theme (>=1.2,<2.0)", "sphinx-rtd-theme (>=1.2,<2.0)", "towncrier (>=22.12,<23.0)", "towncrier (>=22.12,<23.0)", "urllib3 (<2)", "urllib3 (<2)"]
dev-release = ["pydoctor (>=23.9.0,<23.10.0)", "pydoctor (>=23.9.0,<23.10.0)", "sphinx (>=6,<7)", "sphinx (>=6,<7)", "sphinx-rtd-theme (>=1.3,<2.0)", "sphinx-rtd-theme (>=1.3,<2.0)", "towncrier (>=23.6,<24.0)", "towncrier (>=23.6,<24.0)"]
gtk-platform = ["pygobject", "pygobject", "twisted[all-non-platform]", "twisted[all-non-platform]"]
http2 = ["h2 (>=3.0,<5.0)", "priority (>=1.1.0,<2.0)"]
macos-platform = ["pyobjc-core", "pyobjc-core", "pyobjc-framework-cfnetwork", "pyobjc-framework-cfnetwork", "pyobjc-framework-cocoa", "pyobjc-framework-cocoa", "twisted[all-non-platform]", "twisted[all-non-platform]"]
mypy = ["mypy (==0.981)", "mypy-extensions (==0.4.3)", "mypy-zope (==0.3.11)", "twisted[all-non-platform,dev]", "types-pyopenssl", "types-setuptools"]
mypy = ["mypy (>=1.5.1,<1.6.0)", "mypy-zope (>=1.0.1,<1.1.0)", "twisted[all-non-platform,dev]", "types-pyopenssl", "types-setuptools"]
osx-platform = ["twisted[macos-platform]", "twisted[macos-platform]"]
serial = ["pyserial (>=3.0)", "pywin32 (!=226)"]
test = ["cython-test-exception-raiser (>=1.0.2,<2)", "hypothesis (>=6.56)", "pyhamcrest (>=2)"]
@ -3048,13 +3032,13 @@ twisted = "*"
[[package]]
name = "types-bleach"
version = "6.1.0.0"
version = "6.1.0.1"
description = "Typing stubs for bleach"
optional = false
python-versions = ">=3.7"
files = [
{file = "types-bleach-6.1.0.0.tar.gz", hash = "sha256:3cf0e55d4618890a00af1151f878b2e2a7a96433850b74e12bede7663d774532"},
{file = "types_bleach-6.1.0.0-py3-none-any.whl", hash = "sha256:f0bc75d0f6475036ac69afebf37c41d116dfba78dae55db80437caf0fcd35c28"},
{file = "types-bleach-6.1.0.1.tar.gz", hash = "sha256:1e43c437e734a90efe4f40ebfe831057599568d3b275939ffbd6094848a18a27"},
{file = "types_bleach-6.1.0.1-py3-none-any.whl", hash = "sha256:f83f80e0709f13d809a9c79b958a1089df9b99e68059287beb196e38967e4ddf"},
]
[[package]]
@ -3070,13 +3054,13 @@ files = [
[[package]]
name = "types-jsonschema"
version = "4.19.0.3"
version = "4.20.0.0"
description = "Typing stubs for jsonschema"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-jsonschema-4.19.0.3.tar.gz", hash = "sha256:e0fc0f5d51fd0988bf193be42174a5376b0096820ff79505d9c1b66de23f0581"},
{file = "types_jsonschema-4.19.0.3-py3-none-any.whl", hash = "sha256:5cedbb661e5ca88d95b94b79902423e3f97a389c245e5fe0ab384122f27d56b9"},
{file = "types-jsonschema-4.20.0.0.tar.gz", hash = "sha256:0de1032d243f1d3dba8b745ad84efe8c1af71665a9deb1827636ac535dcb79c1"},
{file = "types_jsonschema-4.20.0.0-py3-none-any.whl", hash = "sha256:e6d5df18aaca4412f0aae246a294761a92040e93d7bc840f002b7329a8b72d26"},
]
[package.dependencies]
@ -3106,35 +3090,35 @@ files = [
[[package]]
name = "types-pillow"
version = "10.1.0.0"
version = "10.1.0.2"
description = "Typing stubs for Pillow"
optional = false
python-versions = ">=3.7"
files = [
{file = "types-Pillow-10.1.0.0.tar.gz", hash = "sha256:0f5e7cf010ed226800cb5821e87781e5d0e81257d948a9459baa74a8c8b7d822"},
{file = "types_Pillow-10.1.0.0-py3-none-any.whl", hash = "sha256:f97f596b6a39ddfd26da3eb67421062193e10732d2310f33898d36f9694331b5"},
{file = "types-Pillow-10.1.0.2.tar.gz", hash = "sha256:525c1c5ee67b0ac1721c40d2bc618226ef2123c347e527e14e05b920721a13b9"},
{file = "types_Pillow-10.1.0.2-py3-none-any.whl", hash = "sha256:131078ffa547bf9a201d39ffcdc65633e108148085f4f1b07d4647fcfec6e923"},
]
[[package]]
name = "types-psycopg2"
version = "2.9.21.15"
version = "2.9.21.16"
description = "Typing stubs for psycopg2"
optional = false
python-versions = ">=3.7"
files = [
{file = "types-psycopg2-2.9.21.15.tar.gz", hash = "sha256:cf99b62ab32cd4ef412fc3c4da1c29ca5a130847dff06d709b84a523802406f0"},
{file = "types_psycopg2-2.9.21.15-py3-none-any.whl", hash = "sha256:cc80479def02e4dd1ef21649d82f04426c73bc0693bcc0a8b5223c7c168472af"},
{file = "types-psycopg2-2.9.21.16.tar.gz", hash = "sha256:44a3ae748173bb637cff31654d6bd12de9ad0c7ad73afe737df6152830ed82ed"},
{file = "types_psycopg2-2.9.21.16-py3-none-any.whl", hash = "sha256:e2f24b651239ccfda320ab3457099af035cf37962c36c9fa26a4dc65991aebed"},
]
[[package]]
name = "types-pyopenssl"
version = "23.2.0.2"
version = "23.3.0.0"
description = "Typing stubs for pyOpenSSL"
optional = false
python-versions = "*"
python-versions = ">=3.7"
files = [
{file = "types-pyOpenSSL-23.2.0.2.tar.gz", hash = "sha256:6a010dac9ecd42b582d7dd2cc3e9e40486b79b3b64bb2fffba1474ff96af906d"},
{file = "types_pyOpenSSL-23.2.0.2-py3-none-any.whl", hash = "sha256:19536aa3debfbe25a918cf0d898e9f5fbbe6f3594a429da7914bf331deb1b342"},
{file = "types-pyOpenSSL-23.3.0.0.tar.gz", hash = "sha256:5ffb077fe70b699c88d5caab999ae80e192fe28bf6cda7989b7e79b1e4e2dcd3"},
{file = "types_pyOpenSSL-23.3.0.0-py3-none-any.whl", hash = "sha256:00171433653265843b7469ddb9f3c86d698668064cc33ef10537822156130ebf"},
]
[package.dependencies]
@ -3142,13 +3126,13 @@ cryptography = ">=35.0.0"
[[package]]
name = "types-pyyaml"
version = "6.0.12.11"
version = "6.0.12.12"
description = "Typing stubs for PyYAML"
optional = false
python-versions = "*"
files = [
{file = "types-PyYAML-6.0.12.11.tar.gz", hash = "sha256:7d340b19ca28cddfdba438ee638cd4084bde213e501a3978738543e27094775b"},
{file = "types_PyYAML-6.0.12.11-py3-none-any.whl", hash = "sha256:a461508f3096d1d5810ec5ab95d7eeecb651f3a15b71959999988942063bf01d"},
{file = "types-PyYAML-6.0.12.12.tar.gz", hash = "sha256:334373d392fde0fdf95af5c3f1661885fa10c52167b14593eb856289e1855062"},
{file = "types_PyYAML-6.0.12.12-py3-none-any.whl", hash = "sha256:c05bc6c158facb0676674b7f11fe3960db4f389718e19e62bd2b84d6205cfd24"},
]
[[package]]
@ -3167,13 +3151,13 @@ urllib3 = ">=2"
[[package]]
name = "types-setuptools"
version = "68.2.0.0"
version = "68.2.0.2"
description = "Typing stubs for setuptools"
optional = false
python-versions = "*"
python-versions = ">=3.7"
files = [
{file = "types-setuptools-68.2.0.0.tar.gz", hash = "sha256:a4216f1e2ef29d089877b3af3ab2acf489eb869ccaf905125c69d2dc3932fd85"},
{file = "types_setuptools-68.2.0.0-py3-none-any.whl", hash = "sha256:77edcc843e53f8fc83bb1a840684841f3dc804ec94562623bfa2ea70d5a2ba1b"},
{file = "types-setuptools-68.2.0.2.tar.gz", hash = "sha256:09efc380ad5c7f78e30bca1546f706469568cf26084cfab73ecf83dea1d28446"},
{file = "types_setuptools-68.2.0.2-py3-none-any.whl", hash = "sha256:d5b5ff568ea2474eb573dcb783def7dadfd9b1ff638bb653b3c7051ce5aeb6d1"},
]
[[package]]
@ -3448,4 +3432,4 @@ user-search = ["pyicu"]
[metadata]
lock-version = "2.0"
python-versions = "^3.8.0"
content-hash = "a08543c65f18cc7e9dea648e89c18ab88fc1747aa2e029aa208f777fc3db06dd"
content-hash = "57716a9580b3493c3d2038492a6d4c36d1d16a79c5a0880b6eadcaf681503d3a"

View File

@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.96.0rc1"
version = "1.98.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@ -192,7 +192,7 @@ phonenumbers = ">=8.2.0"
# we use GaugeHistogramMetric, which was added in prom-client 0.4.0.
prometheus-client = ">=0.4.0"
# we use `order`, which arrived in attrs 19.2.0.
# Note: 21.1.0 broke `/sync`, see #9936
# Note: 21.1.0 broke `/sync`, see https://github.com/matrix-org/synapse/issues/9936
attrs = ">=19.2.0,!=21.1.0"
netaddr = ">=0.7.18"
# Jinja 2.x is incompatible with MarkupSafe>=2.1. To ensure that admins do not
@ -321,7 +321,7 @@ all = [
# This helps prevent merge conflicts when running a batch of dependabot updates.
isort = ">=5.10.1"
black = ">=22.7.0"
ruff = "0.0.292"
ruff = "0.1.6"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"
@ -357,7 +357,7 @@ commonmark = ">=0.9.1"
pygithub = ">=1.55"
# The following are executed as commands by the release script.
twine = "*"
# Towncrier min version comes from #3425. Rationale unclear.
# Towncrier min version comes from https://github.com/matrix-org/synapse/pull/3425. Rationale unclear.
towncrier = ">=18.6.0rc1"
# Used for checking the Poetry lockfile
@ -370,18 +370,19 @@ optional = true
[tool.poetry.group.dev-docs.dependencies]
sphinx = {version = "^6.1", python = "^3.8"}
sphinx-autodoc2 = {version = "^0.4.2", python = "^3.8"}
sphinx-autodoc2 = {version = ">=0.4.2,<0.6.0", python = "^3.8"}
myst-parser = {version = "^1.0.0", python = "^3.8"}
furo = ">=2022.12.7,<2024.0.0"
[build-system]
# The upper bounds here are defensive, intended to prevent situations like
# #13849 and #14079 where we see buildtime or runtime errors caused by build
# system changes.
# https://github.com/matrix-org/synapse/issues/13849 and
# https://github.com/matrix-org/synapse/issues/14079 where we see buildtime or
# runtime errors caused by build system changes.
# We are happy to raise these upper bounds upon request,
# provided we check that it's safe to do so (i.e. that CI passes).
requires = ["poetry-core>=1.1.0,<=1.7.0", "setuptools_rust>=1.3,<=1.8.0"]
requires = ["poetry-core>=1.1.0,<=1.8.1", "setuptools_rust>=1.3,<=1.8.1"]
build-backend = "poetry.core.masonry.api"

View File

@ -25,14 +25,14 @@ name = "synapse.synapse_rust"
anyhow = "1.0.63"
lazy_static = "1.4.0"
log = "0.4.17"
pyo3 = { version = "0.19.2", features = [
pyo3 = { version = "0.20.0", features = [
"macros",
"anyhow",
"abi3",
"abi3-py38",
] }
pyo3-log = "0.8.1"
pythonize = "0.19.0"
pyo3-log = "0.9.0"
pythonize = "0.20.0"
regex = "1.6.0"
serde = { version = "1.0.144", features = ["derive"] }
serde_json = "1.0.85"

View File

@ -296,8 +296,7 @@ impl<'source> FromPyObject<'source> for JsonValue {
match l.iter().map(SimpleJsonValue::extract).collect() {
Ok(a) => Ok(JsonValue::Array(a)),
Err(e) => Err(PyTypeError::new_err(format!(
"Can't convert to JsonValue::Array: {}",
e
"Can't convert to JsonValue::Array: {e}"
))),
}
} else if let Ok(v) = SimpleJsonValue::extract(ob) {

scripts-dev/schema_versions.py Executable file
View File

@ -0,0 +1,181 @@
#!/usr/bin/env python
# Copyright 2023 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A script to calculate which versions of Synapse have backwards-compatible
database schemas. It creates a Markdown table of Synapse versions and the earliest
compatible version.
It is compatible with the mdbook protocol for preprocessors (see
https://rust-lang.github.io/mdBook/for_developers/preprocessors.html#implementing-a-preprocessor-with-a-different-language):
Exit 0 to denote support for all renderers:
./scripts-dev/schema_versions.py supports <mdbook renderer>
Parse a JSON list from stdin and add the table to the proper documentation page:
./scripts-dev/schema_versions.py
Additionally, the script supports dumping the table to stdout for debugging:
./scripts-dev/schema_versions.py dump
"""
import io
import json
import sys
from collections import defaultdict
from typing import Any, Dict, Iterator, Optional, Tuple
import git
from packaging import version
# The schema version has moved around over the years.
SCHEMA_VERSION_FILES = (
"synapse/storage/schema/__init__.py",
"synapse/storage/prepare_database.py",
"synapse/storage/__init__.py",
"synapse/app/homeserver.py",
)
# Skip versions of Synapse < v1.0; they're old and essentially not
# compatible with today's federation.
OLDEST_SHOWN_VERSION = version.parse("v1.0")
def get_schema_versions(tag: git.Tag) -> Tuple[Optional[int], Optional[int]]:
"""Get the schema and schema compat versions for a tag."""
schema_version = None
schema_compat_version = None
for file in SCHEMA_VERSION_FILES:
try:
schema_file = tag.commit.tree / file
except KeyError:
continue
# We (usually) can't execute the code since it might have unknown imports.
if file != "synapse/storage/schema/__init__.py":
with io.BytesIO(schema_file.data_stream.read()) as f:
for line in f.readlines():
if line.startswith(b"SCHEMA_VERSION"):
schema_version = int(line.split()[2])
# Bail early.
break
else:
# SCHEMA_COMPAT_VERSION sometimes spans multiple lines, so the easiest
# thing to do is exec the code. Luckily it has only ever existed in
# a file which imports nothing else from Synapse.
locals: Dict[str, Any] = {}
exec(schema_file.data_stream.read().decode("utf-8"), {}, locals)
schema_version = locals["SCHEMA_VERSION"]
schema_compat_version = locals.get("SCHEMA_COMPAT_VERSION")
return schema_version, schema_compat_version
def get_tags(repo: git.Repo) -> Iterator[git.Tag]:
"""Return an iterator of tags sorted by version."""
tags = []
for tag in repo.tags:
# All "real" Synapse tags are of the form vX.Y.Z.
if not tag.name.startswith("v"):
continue
# There's a weird tag from the initial react UI.
if tag.name == "v0.1":
continue
try:
tag_version = version.parse(tag.name)
except version.InvalidVersion:
# Skip invalid versions.
continue
# Skip pre- and post-release versions.
if tag_version.is_prerelease or tag_version.is_postrelease or tag_version.local:
continue
# Skip old versions.
if tag_version < OLDEST_SHOWN_VERSION:
continue
tags.append((tag_version, tag))
# Sort based on the version number (not lexically).
return (tag for _, tag in sorted(tags, key=lambda t: t[0]))
def calculate_version_chart() -> str:
repo = git.Repo(path=".")
# Map of schema version -> Synapse versions which are at that schema version.
schema_versions = defaultdict(list)
# Map of schema version -> Synapse versions which are compatible with that
# schema version.
schema_compat_versions = defaultdict(list)
# Find ranges of versions which are compatible with a schema version.
#
# There are two modes of operation:
#
# 1. Pre-schema_compat_version (i.e. schema_compat_version of None), then
# Synapse is compatible up/downgrading to a version with
# schema_version >= its current version.
#
# 2. Post-schema_compat_version (i.e. schema_compat_version is *not* None),
# then Synapse is compatible up/downgrading to a version with
# schema version >= schema_compat_version.
#
# This is more generous and avoids versions that cannot be rolled back.
#
# See https://github.com/matrix-org/synapse/pull/9933 which was included in v1.37.0.
for tag in get_tags(repo):
schema_version, schema_compat_version = get_schema_versions(tag)
# If a schema compat version is given, prefer that over the schema version.
schema_versions[schema_version].append(tag.name)
schema_compat_versions[schema_compat_version or schema_version].append(tag.name)
# Generate a table which maps the latest Synapse version compatible with each
# schema version.
result = f"| {'Versions': ^19} | Compatible version |\n"
result += f"|{'-' * (19 + 2)}|{'-' * (18 + 2)}|\n"
for schema_version, synapse_versions in schema_compat_versions.items():
result += f"| {synapse_versions[0] + ' ' + synapse_versions[-1]: ^19} | {schema_versions[schema_version][0]: ^18} |\n"
return result
if __name__ == "__main__":
if len(sys.argv) == 3 and sys.argv[1] == "supports":
# We don't care which renderer is being used, so we ignore the second argument.
sys.exit(0)
elif len(sys.argv) == 2 and sys.argv[1] == "dump":
print(calculate_version_chart())
else:
# Expect JSON data on stdin.
context, book = json.load(sys.stdin)
for section in book["sections"]:
if "Chapter" in section and section["Chapter"]["path"] == "upgrade.md":
section["Chapter"]["content"] = section["Chapter"]["content"].replace(
"<!-- REPLACE_WITH_SCHEMA_VERSIONS -->", calculate_version_chart()
)
# Print the result back out to stdout.
print(json.dumps(book))
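A minimal sketch of driving the preprocessor the way mdbook does: pass a `[context, book]` JSON pair on stdin and read the rewritten book from stdout. It assumes a Synapse checkout with the script's dependencies (GitPython, packaging) available; the chapter content here is illustrative only.

import json
import subprocess

book = {
    "sections": [
        {
            "Chapter": {
                "path": "upgrade.md",
                "content": "<!-- REPLACE_WITH_SCHEMA_VERSIONS -->",
            }
        }
    ]
}
# An empty context dict is enough here; the script only inspects the book.
proc = subprocess.run(
    ["./scripts-dev/schema_versions.py"],
    input=json.dumps([{}, book]),
    capture_output=True,
    text=True,
    check=True,
)
print(json.loads(proc.stdout)["sections"][0]["Chapter"]["content"])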

View File

@ -29,6 +29,18 @@ from synapse.util.stringutils import strtobool
# Allow truncated JPEG images to be thumbnailed.
ImageFile.LOAD_TRUNCATED_IMAGES = True
# Update your remotes, folks.
announcement = """
Synapse is no longer being developed under the matrix-org organization. See the
README.rst for more details.
Please update your git remote to pull from element-hq/synapse:
git remote set-url origin git@github.com:element-hq/synapse.git
"""
print(announcement)
sys.exit(1)
# Check that we're not running on an unsupported Python version.
#
# Note that we use an (unneeded) variable here so that pyupgrade doesn't nuke the

View File

@ -348,8 +348,7 @@ class Porter:
backward_chunk = 0
already_ported = 0
else:
forward_chunk = row["forward_rowid"]
backward_chunk = row["backward_rowid"]
forward_chunk, backward_chunk = row
if total_to_port is None:
already_ported, total_to_port = await self._get_total_count_to_port(

View File

@ -27,6 +27,8 @@ from synapse.api.errors import (
UnstableSpecAuthError,
)
from synapse.appservice import ApplicationService
from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import trace
from synapse.types import Requester, create_requester
from synapse.util.cancellation import cancellable
@ -45,6 +47,9 @@ class BaseAuth:
self.store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
self._track_puppeted_user_ips = hs.config.api.track_puppeted_user_ips
async def check_user_in_room(
self,
room_id: str,
@ -349,3 +354,46 @@ class BaseAuth:
return create_requester(
effective_user_id, app_service=app_service, device_id=effective_device_id
)
async def _record_request(
self, request: SynapseRequest, requester: Requester
) -> None:
"""Record that this request was made.
This updates the client_ips and monthly_active_user tables.
"""
ip_addr = request.get_client_ip_if_available()
if ip_addr and (not requester.app_service or self._track_appservice_user_ips):
user_agent = get_request_user_agent(request)
access_token = self.get_access_token_from_request(request)
# XXX(quenting): I'm 95% confident that we could skip setting the
# device_id to "dummy-device" for appservices, and that the only impact
# would be some rows which would not deduplicate in the 'user_ips'
# table during the transition
recorded_device_id = (
"dummy-device"
if requester.device_id is None and requester.app_service is not None
else requester.device_id
)
await self.store.insert_client_ip(
user_id=requester.authenticated_entity,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=recorded_device_id,
)
# Also track the puppeted user's client IP if enabled and the user is puppeting
if (
requester.user.to_string() != requester.authenticated_entity
and self._track_puppeted_user_ips
):
await self.store.insert_client_ip(
user_id=requester.user.to_string(),
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=requester.device_id,
)
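The device-id fallback in `_record_request` can be read as a small pure function; a hedged restatement for clarity (the function name is illustrative, not Synapse API):

from typing import Optional

def recorded_device_id(device_id: Optional[str], is_appservice: bool) -> Optional[str]:
    # Appservice requests historically carry no device, so a sentinel is
    # stored to keep rows in 'user_ips' deduplicating consistently.
    if device_id is None and is_appservice:
        return "dummy-device"
    return device_id

assert recorded_device_id(None, True) == "dummy-device"
assert recorded_device_id(None, False) is None
assert recorded_device_id("DEVICE1", True) == "DEVICE1"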

View File

@ -22,7 +22,6 @@ from synapse.api.errors import (
InvalidClientTokenError,
MissingClientTokenError,
)
from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import active_span, force_tracing, start_active_span
from synapse.types import Requester, create_requester
@ -48,8 +47,6 @@ class InternalAuth(BaseAuth):
self._account_validity_handler = hs.get_account_validity_handler()
self._macaroon_generator = hs.get_macaroon_generator()
self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
self._track_puppeted_user_ips = hs.config.api.track_puppeted_user_ips
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
@cancellable
@ -115,9 +112,6 @@ class InternalAuth(BaseAuth):
Once get_user_by_req has set up the opentracing span, this does the actual work.
"""
try:
ip_addr = request.get_client_ip_if_available()
user_agent = get_request_user_agent(request)
access_token = self.get_access_token_from_request(request)
# First check if it could be a request from an appservice
@ -154,38 +148,7 @@ class InternalAuth(BaseAuth):
errcode=Codes.EXPIRED_ACCOUNT,
)
if ip_addr and (
not requester.app_service or self._track_appservice_user_ips
):
# XXX(quenting): I'm 95% confident that we could skip setting the
# device_id to "dummy-device" for appservices, and that the only impact
# would be some rows which would not deduplicate in the 'user_ips'
# table during the transition
recorded_device_id = (
"dummy-device"
if requester.device_id is None and requester.app_service is not None
else requester.device_id
)
await self.store.insert_client_ip(
user_id=requester.authenticated_entity,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=recorded_device_id,
)
# Also track the puppeted user's client IP if enabled and the user is puppeting
if (
requester.user.to_string() != requester.authenticated_entity
and self._track_puppeted_user_ips
):
await self.store.insert_client_ip(
user_id=requester.user.to_string(),
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=requester.device_id,
)
await self._record_request(request, requester)
if requester.is_guest and not allow_guest:
raise AuthError(

View File

@ -227,6 +227,10 @@ class MSC3861DelegatedAuth(BaseAuth):
# so that we don't provision the user if they don't have enough permission:
requester = await self.get_user_by_access_token(access_token, allow_expired)
# Do not record requests from MAS using the virtual `__oidc_admin` user.
if access_token != self._admin_token:
await self._record_request(request, requester)
if not allow_guest and requester.is_guest:
raise OAuthInsufficientScopeError([SCOPE_MATRIX_API])

View File

@ -83,6 +83,8 @@ class Codes(str, Enum):
USER_DEACTIVATED = "M_USER_DEACTIVATED"
# USER_LOCKED = "M_USER_LOCKED"
USER_LOCKED = "ORG_MATRIX_MSC3939_USER_LOCKED"
NOT_YET_UPLOADED = "M_NOT_YET_UPLOADED"
CANNOT_OVERWRITE_MEDIA = "M_CANNOT_OVERWRITE_MEDIA"
# Part of MSC3848
# https://github.com/matrix-org/matrix-spec-proposals/pull/3848

View File

@ -104,8 +104,8 @@ logger = logging.getLogger("synapse.app.generic_worker")
class GenericWorkerStore(
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
# rather than going via the correct worker.
# FIXME(https://github.com/matrix-org/synapse/issues/3714): We need to add
# UserDirectoryStore as we write directly rather than going via the correct worker.
UserDirectoryStore,
StatsStore,
UIAuthWorkerStore,

View File

@ -419,3 +419,7 @@ class ExperimentalConfig(Config):
self.msc4028_push_encrypted_events = experimental.get(
"msc4028_push_encrypted_events", False
)
self.msc4069_profile_inhibit_propagation = experimental.get(
"msc4069_profile_inhibit_propagation", False
)

View File

@ -204,3 +204,10 @@ class RatelimitConfig(Config):
"rc_third_party_invite",
defaults={"per_second": 0.0025, "burst_count": 5},
)
# Ratelimit create media requests:
self.rc_media_create = RatelimitSettings.parse(
config,
"rc_media_create",
defaults={"per_second": 10, "burst_count": 50},
)
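The `per_second`/`burst_count` pair behaves like a token bucket: up to `burst_count` requests may land at once, with capacity refilling at `per_second`. A self-contained sketch of those semantics (illustrative only; Synapse's actual `Ratelimiter` class differs):

import time

class TokenBucket:
    def __init__(self, per_second: float, burst_count: int) -> None:
        self.rate = per_second
        self.capacity = burst_count
        self.tokens = float(burst_count)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With the defaults above, a client can create 50 media IDs in a burst,
# then sustain 10 new ones per second.
bucket = TokenBucket(per_second=10, burst_count=50)
assert all(bucket.allow() for _ in range(50))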

View File

@ -141,6 +141,12 @@ class ContentRepositoryConfig(Config):
"prevent_media_downloads_from", []
)
self.unused_expiration_time = self.parse_duration(
config.get("unused_expiration_time", "24h")
)
self.max_pending_media_uploads = config.get("max_pending_media_uploads", 5)
self.media_store_path = self.ensure_directory(
config.get("media_store_path", "media_store")
)

View File

@ -48,6 +48,7 @@ class ServerNoticesConfig(Config):
self.server_notices_mxid_display_name: Optional[str] = None
self.server_notices_mxid_avatar_url: Optional[str] = None
self.server_notices_room_name: Optional[str] = None
self.server_notices_auto_join: bool = False
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
c = config.get("server_notices")
@ -62,3 +63,4 @@ class ServerNoticesConfig(Config):
self.server_notices_mxid_avatar_url = c.get("system_mxid_avatar_url", None)
# todo: i18n
self.server_notices_room_name = c.get("room_name", "Server Notices")
self.server_notices_auto_join = c.get("auto_join", False)

View File

@ -21,6 +21,7 @@ from typing import (
TYPE_CHECKING,
AbstractSet,
Awaitable,
BinaryIO,
Callable,
Collection,
Container,
@ -1862,6 +1863,43 @@ class FederationClient(FederationBase):
return filtered_statuses, filtered_failures
async def download_media(
self,
destination: str,
media_id: str,
output_stream: BinaryIO,
max_size: int,
max_timeout_ms: int,
) -> Tuple[int, Dict[bytes, List[bytes]]]:
try:
return await self.transport_layer.download_media_v3(
destination,
media_id,
output_stream=output_stream,
max_size=max_size,
max_timeout_ms=max_timeout_ms,
)
except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint,
# fall back to the r0 endpoint. Otherwise, consider it a legitimate error
# and raise.
if not is_unknown_endpoint(e):
raise
logger.debug(
"Couldn't download media %s/%s with the v3 API, falling back to the r0 API",
destination,
media_id,
)
return await self.transport_layer.download_media_r0(
destination,
media_id,
output_stream=output_stream,
max_size=max_size,
max_timeout_ms=max_timeout_ms,
)
@attr.s(frozen=True, slots=True, auto_attribs=True)
class TimestampToEventResponse:
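The `download_media` fallback above is a general pattern: try the newer route first and retry on the legacy one only when the remote clearly does not recognise it. A self-contained sketch (the exception class and `is_unknown_endpoint` below are simplified stand-ins for Synapse's real helpers):

from typing import Callable

class HttpResponseException(Exception):
    def __init__(self, code: int) -> None:
        self.code = code

def is_unknown_endpoint(e: HttpResponseException) -> bool:
    # Synapse's real helper also inspects the M_UNRECOGNIZED errcode;
    # matching on status codes alone is a simplification.
    return e.code in (404, 405)

def fetch_with_fallback(fetch: Callable[[str], bytes]) -> bytes:
    try:
        return fetch("/_matrix/media/v3/download")
    except HttpResponseException as e:
        if not is_unknown_endpoint(e):
            raise
        # The remote predates the v3 media API: retry on the r0 route.
        return fetch("/_matrix/media/r0/download")

def old_server(path: str) -> bytes:
    if "/v3/" in path:
        raise HttpResponseException(404)
    return b"media bytes"

assert fetch_with_fallback(old_server) == b"media bytes"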

View File

@ -84,7 +84,7 @@ from synapse.replication.http.federation import (
from synapse.storage.databases.main.lock import Lock
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
from synapse.storage.roommember import MemberSummary
from synapse.types import JsonDict, StateMap, get_domain_from_id, UserID
from synapse.types import JsonDict, StateMap, UserID, get_domain_from_id
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute, gather_results
from synapse.util.caches.response_cache import ResponseCache

View File

@ -581,14 +581,14 @@ class FederationSender(AbstractFederationSender):
"get_joined_hosts", str(sg)
)
if destinations is None:
# Add logging to help track down #13444
# Add logging to help track down https://github.com/matrix-org/synapse/issues/13444
logger.info(
"Unexpectedly did not have cached destinations for %s / %s",
sg,
event.event_id,
)
else:
# Add logging to help track down #13444
# Add logging to help track down https://github.com/matrix-org/synapse/issues/13444
logger.info(
"Unexpectedly did not have cached prev group for %s",
event.event_id,

View File

@ -18,6 +18,7 @@ import urllib
from typing import (
TYPE_CHECKING,
Any,
BinaryIO,
Callable,
Collection,
Dict,
@ -804,6 +805,58 @@ class TransportLayerClient:
destination=destination, path=path, data={"user_ids": user_ids}
)
async def download_media_r0(
self,
destination: str,
media_id: str,
output_stream: BinaryIO,
max_size: int,
max_timeout_ms: int,
) -> Tuple[int, Dict[bytes, List[bytes]]]:
path = f"/_matrix/media/r0/download/{destination}/{media_id}"
return await self.client.get_file(
destination,
path,
output_stream=output_stream,
max_size=max_size,
args={
# tell the remote server to 404 if it doesn't
# recognise the server_name, to make sure we don't
# end up with a routing loop.
"allow_remote": "false",
"timeout_ms": str(max_timeout_ms),
},
)
async def download_media_v3(
self,
destination: str,
media_id: str,
output_stream: BinaryIO,
max_size: int,
max_timeout_ms: int,
) -> Tuple[int, Dict[bytes, List[bytes]]]:
path = f"/_matrix/media/v3/download/{destination}/{media_id}"
return await self.client.get_file(
destination,
path,
output_stream=output_stream,
max_size=max_size,
args={
# tell the remote server to 404 if it doesn't
# recognise the server_name, to make sure we don't
# end up with a routing loop.
"allow_remote": "false",
"timeout_ms": str(max_timeout_ms),
# Matrix 1.7 allows this to redirect to another URL; an old homeserver
# will simply ignore the parameter, so always provide it.
"allow_redirect": "true",
},
follow_redirects=True,
)
def _create_path(federation_prefix: str, path: str, *args: str) -> str:
"""

View File

@ -98,6 +98,22 @@ class AccountValidityHandler:
for callback in self._module_api_callbacks.on_user_registration_callbacks:
await callback(user_id)
async def on_user_login(
self,
user_id: str,
auth_provider_type: Optional[str],
auth_provider_id: Optional[str],
) -> None:
"""Tell third-party modules about a user logins.
Args:
user_id: The mxID of the user.
auth_provider_type: The type of login.
auth_provider_id: The ID of the auth provider.
"""
for callback in self._module_api_callbacks.on_user_login_callbacks:
await callback(user_id, auth_provider_type, auth_provider_id)
@wrap_as_background_process("send_renewals")
async def _send_renewal_emails(self) -> None:
"""Gets the list of users whose account is expiring in the amount of time

View File

@ -283,7 +283,7 @@ class AdminHandler:
start, limit, user_id
)
for media in media_ids:
writer.write_media_id(media["media_id"], media)
writer.write_media_id(media.media_id, attr.asdict(media))
logger.info(
"[%s] Written %d media_ids of %s",

View File

@ -212,6 +212,7 @@ class AuthHandler:
self._password_enabled_for_reauth = hs.config.auth.password_enabled_for_reauth
self._password_localdb_enabled = hs.config.auth.password_localdb_enabled
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
self._account_validity_handler = hs.get_account_validity_handler()
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.
@ -1783,6 +1784,13 @@ class AuthHandler:
client_redirect_url, "loginToken", login_token
)
# Run post-login module callback handlers
await self._account_validity_handler.on_user_login(
user_id=registered_user_id,
auth_provider_type=LoginType.SSO,
auth_provider_id=auth_provider_id,
)
# if the client is whitelisted, we can redirect straight to it
if client_redirect_url.startswith(self._whitelisted_sso_clients):
request.redirect(redirect_url)

View File

@ -383,7 +383,7 @@ class DeviceWorkerHandler:
)
DEVICE_MSGS_DELETE_BATCH_LIMIT = 1000
DEVICE_MSGS_DELETE_SLEEP_MS = 1000
DEVICE_MSGS_DELETE_SLEEP_MS = 100
async def _delete_device_messages(
self,
@ -396,15 +396,17 @@ class DeviceWorkerHandler:
up_to_stream_id = task.params["up_to_stream_id"]
# Delete the messages in batches to avoid too much DB load.
from_stream_id = None
while True:
res = await self.store.delete_messages_for_device(
from_stream_id, _ = await self.store.delete_messages_for_device_between(
user_id=user_id,
device_id=device_id,
up_to_stream_id=up_to_stream_id,
from_stream_id=from_stream_id,
to_stream_id=up_to_stream_id,
limit=DeviceHandler.DEVICE_MSGS_DELETE_BATCH_LIMIT,
)
if res < DeviceHandler.DEVICE_MSGS_DELETE_BATCH_LIMIT:
if from_stream_id is None:
return TaskStatus.COMPLETE, None, None
await self.clock.sleep(DeviceHandler.DEVICE_MSGS_DELETE_SLEEP_MS / 1000.0)
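The rewritten loop leans on `delete_messages_for_device_between` returning the stream id to resume from, or None once everything up to `up_to_stream_id` is gone. A hedged in-memory sketch of that assumed contract (not the real store method):

import asyncio
from typing import List, Optional, Tuple

async def delete_between(
    rows: List[int], from_id: Optional[int], to_id: int, limit: int
) -> Tuple[Optional[int], int]:
    # Delete up to `limit` rows with a stream id in (from_id, to_id].
    batch = [r for r in rows if (from_id is None or r > from_id) and r <= to_id]
    batch = batch[:limit]
    for r in batch:
        rows.remove(r)
    # Hand back a resume point only if this batch was full, i.e. more
    # rows may remain.
    return (batch[-1] if len(batch) == limit else None), len(batch)

async def main() -> None:
    rows = list(range(1, 26))
    from_id: Optional[int] = None
    while True:
        from_id, _ = await delete_between(rows, from_id, to_id=20, limit=10)
        if from_id is None:
            break
        await asyncio.sleep(0.1)  # mirrors DEVICE_MSGS_DELETE_SLEEP_MS / 1000
    assert rows == list(range(21, 26))

asyncio.run(main())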

View File

@ -1450,19 +1450,25 @@ class E2eKeysHandler:
return desired_key_data
async def is_cross_signing_set_up_for_user(self, user_id: str) -> bool:
async def check_cross_signing_setup(self, user_id: str) -> Tuple[bool, bool]:
"""Checks if the user has cross-signing set up
Args:
user_id: The user to check
Returns:
True if the user has cross-signing set up, False otherwise
Returns: a 2-tuple of booleans
- whether the user has cross-signing set up, and
- whether the user's master cross-signing key may be replaced without UIA.
"""
existing_master_key = await self.store.get_e2e_cross_signing_key(
user_id, "master"
)
return existing_master_key is not None
(
exists,
ts_replacable_without_uia_before,
) = await self.store.get_master_cross_signing_key_updatable_before(user_id)
if ts_replacable_without_uia_before is None:
return exists, False
else:
return exists, self.clock.time_msec() < ts_replacable_without_uia_before
def _check_cross_signing_key(

View File

@ -88,7 +88,7 @@ from synapse.types import (
)
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.iterutils import batch_iter, partition
from synapse.util.iterutils import batch_iter, partition, sorted_topologically_batched
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.stringutils import shortstr
@ -748,7 +748,7 @@ class FederationEventHandler:
# fetching fresh state for the room if the missing event
# can't be found, which slightly reduces our security.
# it may also increase our DAG extremity count for the room,
# causing additional state resolution? See #1760.
# causing additional state resolution? See https://github.com/matrix-org/synapse/issues/1760.
# However, fetching state doesn't hold the linearizer lock
# apparently.
#
@ -1669,14 +1669,13 @@ class FederationEventHandler:
# XXX: it might be possible to kick this process off in parallel with fetching
# the events.
while event_map:
# build a list of events whose auth events are not in the queue.
roots = tuple(
ev
for ev in event_map.values()
if not any(aid in event_map for aid in ev.auth_event_ids())
)
# We need to persist an event's auth events before the event.
auth_graph = {
ev: [event_map[e_id] for e_id in ev.auth_event_ids() if e_id in event_map]
for ev in event_map.values()
}
for roots in sorted_topologically_batched(event_map.values(), auth_graph):
if not roots:
# if *none* of the remaining events are ready, that means
# we have a loop. This either means a bug in our logic, or that
@ -1698,9 +1697,6 @@ class FederationEventHandler:
await self._auth_and_persist_outliers_inner(room_id, roots)
for ev in roots:
del event_map[ev.event_id]
async def _auth_and_persist_outliers_inner(
self, room_id: str, fetched_events: Collection[EventBase]
) -> None:
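`sorted_topologically_batched` groups events so that everything an event depends on lands in an earlier batch, letting each batch be persisted together. An illustrative reimplementation of the idea (not the real utility's signature):

from typing import Dict, Iterator, List

def batched_topological(graph: Dict[str, List[str]]) -> Iterator[List[str]]:
    remaining = dict(graph)
    while remaining:
        # Emit every node whose dependencies have all been emitted already.
        batch = [
            node
            for node, deps in remaining.items()
            if not any(dep in remaining for dep in deps)
        ]
        if not batch:
            raise ValueError("dependency cycle detected")
        for node in batch:
            del remaining[node]
        yield sorted(batch)

# Auth events must be persisted before the events they authorise.
auth_graph = {"create": [], "join": ["create"], "msg": ["create", "join"]}
assert list(batched_topological(auth_graph)) == [["create"], ["join"], ["msg"]]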

View File

@ -693,13 +693,9 @@ class EventCreationHandler:
if require_consent and not is_exempt:
await self.assert_accepted_privacy_policy(requester)
# Save the access token ID, the device ID and the transaction ID in the event
# internal metadata. This is useful to determine if we should echo the
# transaction_id in events.
# Save the device ID and the transaction ID in the event internal metadata.
# This is useful to determine if we should echo the transaction_id in events.
# See `synapse.events.utils.EventClientSerializer.serialize_event`
if requester.access_token_id is not None:
builder.internal_metadata.token_id = requester.access_token_id
if requester.device_id is not None:
builder.internal_metadata.device_id = requester.device_id

View File

@ -1816,7 +1816,7 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
# the same token repeatedly.
#
# Hence this guard where we just return nothing so that the sync
# doesn't return. C.f. #5503.
# doesn't return. C.f. https://github.com/matrix-org/synapse/issues/5503.
return [], max_token
# Figure out which other users this user should explicitly receive

View File

@ -13,7 +13,7 @@
# limitations under the License.
import logging
import random
from typing import TYPE_CHECKING, Optional
from typing import TYPE_CHECKING, Optional, Union
from synapse.api.errors import (
AuthError,
@ -23,6 +23,7 @@ from synapse.api.errors import (
StoreError,
SynapseError,
)
from synapse.storage.databases.main.media_repository import LocalMedia, RemoteMedia
from synapse.types import JsonDict, Requester, UserID, create_requester
from synapse.util.caches.descriptors import cached
from synapse.util.stringutils import parse_and_validate_mxc_uri
@ -128,6 +129,7 @@ class ProfileHandler:
new_displayname: str,
by_admin: bool = False,
deactivation: bool = False,
propagate: bool = True,
) -> None:
"""Set the displayname of a user
@ -137,6 +139,7 @@ class ProfileHandler:
new_displayname: The displayname to give this user.
by_admin: Whether this change was made by an administrator.
deactivation: Whether this change was made while deactivating the user.
propagate: Whether this change also applies to the user's membership events.
"""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this homeserver")
@ -187,7 +190,8 @@ class ProfileHandler:
target_user.to_string(), profile, by_admin, deactivation
)
await self._update_join_states(requester, target_user)
if propagate:
await self._update_join_states(requester, target_user)
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
if self.hs.is_mine(target_user):
@ -220,6 +224,7 @@ class ProfileHandler:
new_avatar_url: str,
by_admin: bool = False,
deactivation: bool = False,
propagate: bool = True,
) -> None:
"""Set a new avatar URL for a user.
@ -229,6 +234,7 @@ class ProfileHandler:
new_avatar_url: The avatar URL to give this user.
by_admin: Whether this change was made by an administrator.
deactivation: Whether this change was made while deactivating the user.
propagate: Whether this change also applies to the user's membership events.
"""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this homeserver")
@ -277,7 +283,8 @@ class ProfileHandler:
target_user.to_string(), profile, by_admin, deactivation
)
await self._update_join_states(requester, target_user)
if propagate:
await self._update_join_states(requester, target_user)
@cached()
async def check_avatar_size_and_mime_type(self, mxc: str) -> bool:
@ -306,7 +313,9 @@ class ProfileHandler:
server_name = host
if self._is_mine_server_name(server_name):
media_info = await self.store.get_local_media(media_id)
media_info: Optional[
Union[LocalMedia, RemoteMedia]
] = await self.store.get_local_media(media_id)
else:
media_info = await self.store.get_cached_remote_media(server_name, media_id)
@ -322,12 +331,12 @@ class ProfileHandler:
if self.max_avatar_size:
# Ensure avatar does not exceed max allowed avatar size
if media_info["media_length"] > self.max_avatar_size:
if media_info.media_length > self.max_avatar_size:
logger.warning(
"Forbidding avatar change to %s: %d bytes is above the allowed size "
"limit",
mxc,
media_info["media_length"],
media_info.media_length,
)
return False
@ -335,12 +344,12 @@ class ProfileHandler:
# Ensure the avatar's file type is allowed
if (
self.allowed_avatar_mimetypes
and media_info["media_type"] not in self.allowed_avatar_mimetypes
and media_info.media_type not in self.allowed_avatar_mimetypes
):
logger.warning(
"Forbidding avatar change to %s: mimetype %s not allowed",
mxc,
media_info["media_type"],
media_info.media_type,
)
return False
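The new `propagate` flag splits the two effects of a profile change: the stored profile is always updated, while the fan-out to per-room membership events becomes optional (the behaviour MSC4069 asks for). A toy model of that split — not Synapse's classes:

import asyncio
from typing import List

class ToyProfileStore:
    def __init__(self) -> None:
        self.displayname = ""
        self.room_states: List[str] = []

    async def set_displayname(self, name: str, propagate: bool = True) -> None:
        self.displayname = name  # the canonical profile always changes
        if propagate:
            # stands in for the _update_join_states fan-out above
            self.room_states.append(name)

async def main() -> None:
    store = ToyProfileStore()
    await store.set_displayname("Alice", propagate=False)
    assert store.displayname == "Alice" and store.room_states == []

asyncio.run(main())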

View File

@ -269,7 +269,7 @@ class RoomCreationHandler:
self,
requester: Requester,
old_room_id: str,
old_room: Dict[str, Any],
old_room: Tuple[bool, str, bool],
new_room_id: str,
new_version: RoomVersion,
tombstone_event: EventBase,
@ -279,7 +279,7 @@ class RoomCreationHandler:
Args:
requester: the user requesting the upgrade
old_room_id: the id of the room to be replaced
old_room: a dict containing room information for the room to be replaced,
old_room: a tuple containing room information for the room to be replaced,
as returned by `RoomWorkerStore.get_room`.
new_room_id: the id of the replacement room
new_version: the version to upgrade the room to
@ -299,7 +299,7 @@ class RoomCreationHandler:
await self.store.store_room(
room_id=new_room_id,
room_creator_user_id=user_id,
is_public=old_room["is_public"],
is_public=old_room[0],
room_version=new_version,
)
@ -549,7 +549,7 @@ class RoomCreationHandler:
except (TypeError, ValueError):
ban = 50
needed_power_level = max(
state_default_int, ban, max(event_power_levels.values())
state_default_int, ban, max(event_power_levels.values(), default=0)
)
# Get the user's current power level, this matches the logic in get_user_power_level,
@ -698,6 +698,7 @@ class RoomCreationHandler:
config: JsonDict,
ratelimit: bool = True,
creator_join_profile: Optional[JsonDict] = None,
ignore_forced_encryption: bool = False,
) -> Tuple[str, Optional[RoomAlias], int]:
"""Creates a new room.
@ -714,6 +715,8 @@ class RoomCreationHandler:
derived from the user's profile. If set, should contain the
values to go in the body of the 'join' event (typically
`avatar_url` and/or `displayname`.
ignore_forced_encryption:
Ignore encryption forced by `encryption_enabled_by_default_for_room_type` setting.
Returns:
A 3-tuple containing:
@ -1015,6 +1018,7 @@ class RoomCreationHandler:
room_alias: Optional[RoomAlias] = None,
power_level_content_override: Optional[JsonDict] = None,
creator_join_profile: Optional[JsonDict] = None,
ignore_forced_encryption: bool = False,
) -> Tuple[int, str, int]:
"""Sends the initial events into a new room. Sends the room creation, membership,
and power level events into the room sequentially, then creates and batches up the
@ -1049,6 +1053,8 @@ class RoomCreationHandler:
creator_join_profile:
Set to override the displayname and avatar for the creating
user in this room.
ignore_forced_encryption:
Ignore encryption forced by `encryption_enabled_by_default_for_room_type` setting.
Returns:
A tuple containing the stream ID, event ID and depth of the last
@ -1251,7 +1257,7 @@ class RoomCreationHandler:
)
events_to_send.append((event, context))
if config["encrypted"]:
if config["encrypted"] and not ignore_forced_encryption:
encryption_event, encryption_context = await create_event(
EventTypes.RoomEncryption,
{"algorithm": RoomEncryptionAlgorithms.DEFAULT},

View File

@ -33,6 +33,7 @@ from synapse.api.errors import (
RequestSendFailed,
SynapseError,
)
from synapse.storage.databases.main.room import LargestRoomStats
from synapse.types import JsonDict, JsonMapping, ThirdPartyInstanceID
from synapse.util.caches.descriptors import _CacheContext, cached
from synapse.util.caches.response_cache import ResponseCache
@ -170,26 +171,24 @@ class RoomListHandler:
ignore_non_federatable=from_federation,
)
def build_room_entry(room: JsonDict) -> JsonDict:
def build_room_entry(room: LargestRoomStats) -> JsonDict:
entry = {
"room_id": room["room_id"],
"name": room["name"],
"topic": room["topic"],
"canonical_alias": room["canonical_alias"],
"num_joined_members": room["joined_members"],
"avatar_url": room["avatar"],
"world_readable": room["history_visibility"]
"room_id": room.room_id,
"name": room.name,
"topic": room.topic,
"canonical_alias": room.canonical_alias,
"num_joined_members": room.joined_members,
"avatar_url": room.avatar,
"world_readable": room.history_visibility
== HistoryVisibility.WORLD_READABLE,
"guest_can_join": room["guest_access"] == "can_join",
"join_rule": room["join_rules"],
"room_type": room["room_type"],
"guest_can_join": room.guest_access == "can_join",
"join_rule": room.join_rules,
"room_type": room.room_type,
}
# Filter out Nones: omit the field altogether rather than sending a null.
return {k: v for k, v in entry.items() if v is not None}
results = [build_room_entry(r) for r in results]
response: JsonDict = {}
num_results = len(results)
if limit is not None:
@ -212,33 +211,33 @@ class RoomListHandler:
# If there was a token given then we assume that there
# must be previous results.
response["prev_batch"] = RoomListNextBatch(
last_joined_members=initial_entry["num_joined_members"],
last_room_id=initial_entry["room_id"],
last_joined_members=initial_entry.joined_members,
last_room_id=initial_entry.room_id,
direction_is_forward=False,
).to_token()
if more_to_come:
response["next_batch"] = RoomListNextBatch(
last_joined_members=final_entry["num_joined_members"],
last_room_id=final_entry["room_id"],
last_joined_members=final_entry.joined_members,
last_room_id=final_entry.room_id,
direction_is_forward=True,
).to_token()
else:
if has_batch_token:
response["next_batch"] = RoomListNextBatch(
last_joined_members=final_entry["num_joined_members"],
last_room_id=final_entry["room_id"],
last_joined_members=final_entry.joined_members,
last_room_id=final_entry.room_id,
direction_is_forward=True,
).to_token()
if more_to_come:
response["prev_batch"] = RoomListNextBatch(
last_joined_members=initial_entry["num_joined_members"],
last_room_id=initial_entry["room_id"],
last_joined_members=initial_entry.joined_members,
last_room_id=initial_entry.room_id,
direction_is_forward=False,
).to_token()
response["chunk"] = results
response["chunk"] = [build_room_entry(r) for r in results]
response["total_room_count_estimate"] = await self.store.count_public_rooms(
network_tuple,
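The change from `room["name"]` to `room.name` reflects the store now returning an attrs class instead of a plain dict. A self-contained sketch of the same pattern, with illustrative fields only:

import attr
from typing import Optional

@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomStatsSketch:
    room_id: str
    name: Optional[str]
    joined_members: int

def build_entry(room: RoomStatsSketch) -> dict:
    entry = {
        "room_id": room.room_id,
        "name": room.name,
        "num_joined_members": room.joined_members,
    }
    # Omit the field altogether rather than sending a null.
    return {k: v for k, v in entry.items() if v is not None}

print(build_entry(RoomStatsSketch("!r:example.org", None, 3)))
# {'room_id': '!r:example.org', 'num_joined_members': 3}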

View File

@ -1260,7 +1260,8 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# Add new room to the room directory if the old room was there
# Remove old room from the room directory
old_room = await self.store.get_room(old_room_id)
if old_room is not None and old_room["is_public"]:
# If the old room exists and is public.
if old_room is not None and old_room[0]:
await self.store.set_room_is_public(old_room_id, False)
await self.store.set_room_is_public(room_id, True)
@ -2110,9 +2111,14 @@ class RoomForgetterHandler(StateDeltasHandler):
self.pos = room_max_stream_ordering
if not self._hs.config.room.forget_on_leave:
# Update the processing position, so that if the server admin turns the
# feature on at a later date, we don't decide to forget every room that
# has ever been left in the past.
# Update the processing position, so that if the server admin turns
# the feature on at a later date, we don't decide to forget every
# room that has ever been left in the past.
#
# We wait for a short time so that we don't tight-loop just
# keeping the table up to date.
await self._clock.sleep(0.5)
self.pos = self._store.get_room_max_stream_ordering()
await self._store.update_room_forgetter_stream_pos(self.pos)
return

View File

@ -703,24 +703,24 @@ class RoomSummaryHandler:
# there should always be an entry
assert stats is not None, "unable to retrieve stats for %s" % (room_id,)
entry = {
"room_id": stats["room_id"],
"name": stats["name"],
"topic": stats["topic"],
"canonical_alias": stats["canonical_alias"],
"num_joined_members": stats["joined_members"],
"avatar_url": stats["avatar"],
"join_rule": stats["join_rules"],
entry: JsonDict = {
"room_id": stats.room_id,
"name": stats.name,
"topic": stats.topic,
"canonical_alias": stats.canonical_alias,
"num_joined_members": stats.joined_members,
"avatar_url": stats.avatar,
"join_rule": stats.join_rules,
"world_readable": (
stats["history_visibility"] == HistoryVisibility.WORLD_READABLE
stats.history_visibility == HistoryVisibility.WORLD_READABLE
),
"guest_can_join": stats["guest_access"] == "can_join",
"room_type": stats["room_type"],
"guest_can_join": stats.guest_access == "can_join",
"room_type": stats.room_type,
}
if self._msc3266_enabled:
entry["im.nheko.summary.version"] = stats["version"]
entry["im.nheko.summary.encryption"] = stats["encryption"]
entry["im.nheko.summary.version"] = stats.version
entry["im.nheko.summary.encryption"] = stats.encryption
# Federation requests need to provide additional information so the
# requested server is able to filter the response appropriately.

View File

@ -806,7 +806,7 @@ class SsoHandler:
media_id = profile["avatar_url"].split("/")[-1]
if self._is_mine_server_name(server_name):
media = await self._media_repo.store.get_local_media(media_id)
if media is not None and upload_name == media["upload_name"]:
if media is not None and upload_name == media.upload_name:
logger.info("skipping saving the user avatar")
return True

View File

@ -399,7 +399,7 @@ class SyncHandler:
#
# If that happens, we mustn't cache it, so that when the client comes back
# with the same cache token, we don't immediately return the same empty
# result, causing a tightloop. (#8518)
# result, causing a tightloop. (https://github.com/matrix-org/synapse/issues/8518)
if result.next_batch == since_token:
cache_context.should_cache = False
@ -1003,7 +1003,7 @@ class SyncHandler:
# always make sure we LL ourselves so we know we're in the room
# (if we are) to fix https://github.com/vector-im/riot-web/issues/7209
# We only need apply this on full state syncs given we disabled
# LL for incr syncs in #3840.
# LL for incr syncs in https://github.com/matrix-org/synapse/pull/3840.
# We don't insert ourselves into `members_to_fetch`, because in some
# rare cases (an empty event batch with a now_token after the user's
# leave in a partial state room which another local user has

View File

@ -184,8 +184,8 @@ class UserDirectoryHandler(StateDeltasHandler):
"""Called to update index of our local user profiles when they change
irrespective of any rooms the user may be in.
"""
# FIXME(#3714): We should probably do this in the same worker as all
# the other changes.
# FIXME(https://github.com/matrix-org/synapse/issues/3714): We should
# probably do this in the same worker as all the other changes.
if await self.store.should_include_local_user_in_dir(user_id):
await self.store.update_profile_in_user_dir(
@ -194,8 +194,8 @@ class UserDirectoryHandler(StateDeltasHandler):
async def handle_local_user_deactivated(self, user_id: str) -> None:
"""Called when a user ID is deactivated"""
# FIXME(#3714): We should probably do this in the same worker as all
# the other changes.
# FIXME(https://github.com/matrix-org/synapse/issues/3714): We should
# probably do this in the same worker as all the other changes.
await self.store.remove_from_user_dir(user_id)
async def _unsafe_process(self) -> None:

View File

@ -153,12 +153,18 @@ class MatrixFederationRequest:
"""Query arguments.
"""
txn_id: Optional[str] = None
"""Unique ID for this request (for logging)
txn_id: str = attr.ib(init=False)
"""Unique ID for this request (for logging), this is autogenerated.
"""
uri: bytes = attr.ib(init=False)
"""The URI of this request
uri: bytes = b""
"""The URI of this request, usually generated from the above information.
"""
_generate_uri: bool = True
"""True to automatically generate the uri field based on the above information.
Set to False if manually configuring the URI.
"""
def __attrs_post_init__(self) -> None:
@ -168,22 +174,23 @@ class MatrixFederationRequest:
object.__setattr__(self, "txn_id", txn_id)
destination_bytes = self.destination.encode("ascii")
path_bytes = self.path.encode("ascii")
query_bytes = encode_query_args(self.query)
if self._generate_uri:
destination_bytes = self.destination.encode("ascii")
path_bytes = self.path.encode("ascii")
query_bytes = encode_query_args(self.query)
# The object is frozen so we can pre-compute this.
uri = urllib.parse.urlunparse(
(
b"matrix-federation",
destination_bytes,
path_bytes,
None,
query_bytes,
b"",
# The object is frozen so we can pre-compute this.
uri = urllib.parse.urlunparse(
(
b"matrix-federation",
destination_bytes,
path_bytes,
None,
query_bytes,
b"",
)
)
)
object.__setattr__(self, "uri", uri)
object.__setattr__(self, "uri", uri)
def get_json(self) -> Optional[JsonDict]:
if self.json_callback:
@ -465,7 +472,7 @@ class MatrixFederationHttpClient:
"""Wrapper for _send_request which can optionally retry the request
upon receiving a combination of a 400 HTTP response code and a
'M_UNRECOGNIZED' errcode. This is a workaround for Synapse <= v0.99.3
due to #3622.
due to https://github.com/matrix-org/synapse/issues/3622.
Args:
request: details of request to be sent
@ -513,6 +520,7 @@ class MatrixFederationHttpClient:
ignore_backoff: bool = False,
backoff_on_404: bool = False,
backoff_on_all_error_codes: bool = False,
follow_redirects: bool = False,
) -> IResponse:
"""
Sends a request to the given server.
@ -555,6 +563,9 @@ class MatrixFederationHttpClient:
backoff_on_404: Back off if we get a 404
backoff_on_all_error_codes: Back off if we get any error response
follow_redirects: True to follow the Location header of 307/308 redirect
responses. This does not recurse.
Returns:
Resolves with the HTTP response object on success.
@ -714,6 +725,26 @@ class MatrixFederationHttpClient:
response.code,
response_phrase,
)
elif (
response.code in (307, 308)
and follow_redirects
and response.headers.hasHeader("Location")
):
# The Location header *might* be relative so resolve it.
location = response.headers.getRawHeaders(b"Location")[0]
new_uri = urllib.parse.urljoin(request.uri, location)
return await self._send_request(
attr.evolve(request, uri=new_uri, generate_uri=False),
retry_on_dns_fail,
timeout,
long_retries,
ignore_backoff,
backoff_on_404,
backoff_on_all_error_codes,
# Do not continue following redirects.
follow_redirects=False,
)
else:
logger.info(
"{%s} [%s] Got response headers: %d %s",
@ -958,9 +989,9 @@ class MatrixFederationHttpClient:
requests).
try_trailing_slash_on_400: True if on a 400 M_UNRECOGNIZED
response we should try appending a trailing slash to the end
of the request. Workaround for #3622 in Synapse <= v0.99.3. This
will be attempted before backing off if backing off has been
enabled.
of the request. Workaround for https://github.com/matrix-org/synapse/issues/3622
in Synapse <= v0.99.3. This will be attempted before backing off if
backing off has been enabled.
parser: The parser to use to decode the response. Defaults to
parsing as JSON.
backoff_on_all_error_codes: Back off if we get any error response
@ -1155,7 +1186,8 @@ class MatrixFederationHttpClient:
try_trailing_slash_on_400: True if on a 400 M_UNRECOGNIZED
response we should try appending a trailing slash to the end of
the request. Workaround for #3622 in Synapse <= v0.99.3.
the request. Workaround for https://github.com/matrix-org/synapse/issues/3622
in Synapse <= v0.99.3.
parser: The parser to use to decode the response. Defaults to
parsing as JSON.
@ -1250,7 +1282,8 @@ class MatrixFederationHttpClient:
try_trailing_slash_on_400: True if on a 400 M_UNRECOGNIZED
response we should try appending a trailing slash to the end of
the request. Workaround for #3622 in Synapse <= v0.99.3.
the request. Workaround for https://github.com/matrix-org/synapse/issues/3622
in Synapse <= v0.99.3.
parser: The parser to use to decode the response. Defaults to
parsing as JSON.
@ -1381,6 +1414,7 @@ class MatrixFederationHttpClient:
retry_on_dns_fail: bool = True,
max_size: Optional[int] = None,
ignore_backoff: bool = False,
follow_redirects: bool = False,
) -> Tuple[int, Dict[bytes, List[bytes]]]:
"""GETs a file from a given homeserver
Args:
@ -1390,6 +1424,8 @@ class MatrixFederationHttpClient:
args: Optional dictionary used to create the query string.
ignore_backoff: true to ignore the historical backoff data
and try the request anyway.
follow_redirects: True to follow the Location header of 307/308 redirect
responses. This does not recurse.
Returns:
Resolves with an (int,dict) tuple of
@ -1410,7 +1446,10 @@ class MatrixFederationHttpClient:
)
response = await self._send_request(
request, retry_on_dns_fail=retry_on_dns_fail, ignore_backoff=ignore_backoff
request,
retry_on_dns_fail=retry_on_dns_fail,
ignore_backoff=ignore_backoff,
follow_redirects=follow_redirects,
)
headers = dict(response.headers.getAllRawHeaders())
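The redirect handling resolves the Location header against the original URI because, as the comment notes, Location may be relative. `urllib.parse.urljoin` covers both the relative and absolute cases; illustrated here with an https URI for simplicity:

import urllib.parse

base = "https://media.example/_matrix/media/v3/download/media.example/abc"
print(urllib.parse.urljoin(base, "/other/path"))
# https://media.example/other/path
print(urllib.parse.urljoin(base, "https://cdn.example/abc"))
# https://cdn.example/abc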

View File

@ -1019,11 +1019,14 @@ def tag_args(func: Callable[P, R]) -> Callable[P, R]:
if not opentracing:
return func
# getfullargspec is somewhat expensive, so ensure it is only called a single
# time (the function signature shouldn't change anyway).
argspec = inspect.getfullargspec(func)
@contextlib.contextmanager
def _wrapping_logic(
func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
_func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
) -> Generator[None, None, None]:
argspec = inspect.getfullargspec(func)
# We use `[1:]` to skip the `self` object reference and `start=1` to
# make the index line up with `argspec.args`.
#
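Hoisting `getfullargspec` out of the wrapper is the usual trick of paying for introspection once at decoration time instead of on every call. A minimal standalone version of the pattern:

import functools
import inspect
from typing import Any, Callable

def log_args(func: Callable[..., Any]) -> Callable[..., Any]:
    # Inspect once, when the function is decorated...
    argspec = inspect.getfullargspec(func)

    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        # ...not on every call, where it would be comparatively expensive.
        for name, value in zip(argspec.args, args):
            print(f"{name}={value!r}")
        return func(*args, **kwargs)

    return wrapper

@log_args
def add(a: int, b: int) -> int:
    return a + b

add(1, 2)  # prints a=1 then b=2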

View File

@ -83,6 +83,12 @@ INLINE_CONTENT_TYPES = [
"audio/x-flac",
]
# Default timeout_ms for download and thumbnail requests
DEFAULT_MAX_TIMEOUT_MS = 20_000
# Maximum allowed timeout_ms for download and thumbnail requests
MAXIMUM_ALLOWED_MAX_TIMEOUT_MS = 60_000
def respond_404(request: SynapseRequest) -> None:
assert request.path is not None
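Presumably the download/thumbnail servlets clamp the client-supplied `timeout_ms` between these two values; the shape of that clamp would be something like the following (helper name illustrative):

from typing import Optional

DEFAULT_MAX_TIMEOUT_MS = 20_000
MAXIMUM_ALLOWED_MAX_TIMEOUT_MS = 60_000

def clamp_timeout_ms(requested: Optional[int]) -> int:
    # Use the default when the client sends nothing, and cap the value so
    # a client cannot hold the connection open indefinitely.
    if requested is None:
        requested = DEFAULT_MAX_TIMEOUT_MS
    return min(requested, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)

print(clamp_timeout_ms(None), clamp_timeout_ms(90_000))  # 20000 60000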

View File

@ -19,6 +19,7 @@ import shutil
from io import BytesIO
from typing import IO, TYPE_CHECKING, Dict, List, Optional, Set, Tuple
import attr
from matrix_common.types.mxc_uri import MXCUri
import twisted.internet.error
@ -26,13 +27,16 @@ import twisted.web.http
from twisted.internet.defer import Deferred
from synapse.api.errors import (
Codes,
FederationDeniedError,
HttpResponseException,
NotFoundError,
RequestSendFailed,
SynapseError,
cs_error,
)
from synapse.config.repository import ThumbnailRequirement
from synapse.http.server import respond_with_json
from synapse.http.site import SynapseRequest
from synapse.logging.context import defer_to_thread
from synapse.logging.opentracing import trace
@ -50,6 +54,7 @@ from synapse.media.storage_provider import StorageProviderWrapper
from synapse.media.thumbnailer import Thumbnailer, ThumbnailError
from synapse.media.url_previewer import UrlPreviewer
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.databases.main.media_repository import LocalMedia, RemoteMedia
from synapse.types import UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.retryutils import NotRetryingDestination
@ -72,12 +77,14 @@ class MediaRepository:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self.client = hs.get_federation_http_client()
self.client = hs.get_federation_client()
self.clock = hs.get_clock()
self.server_name = hs.hostname
self.store = hs.get_datastores().main
self.max_upload_size = hs.config.media.max_upload_size
self.max_image_pixels = hs.config.media.max_image_pixels
self.unused_expiration_time = hs.config.media.unused_expiration_time
self.max_pending_media_uploads = hs.config.media.max_pending_media_uploads
Thumbnailer.set_limits(self.max_image_pixels)
@ -183,6 +190,117 @@ class MediaRepository:
else:
self.recently_accessed_locals.add(media_id)
@trace
async def create_media_id(self, auth_user: UserID) -> Tuple[str, int]:
"""Create and store a media ID for a local user and return the MXC URI and its
expiration.
Args:
auth_user: The user_id of the uploader
Returns:
A tuple containing the MXC URI of the stored content and the timestamp at
which the MXC URI expires.
"""
media_id = random_string(24)
now = self.clock.time_msec()
await self.store.store_local_media_id(
media_id=media_id,
time_now_ms=now,
user_id=auth_user,
)
return f"mxc://{self.server_name}/{media_id}", now + self.unused_expiration_time
@trace
async def reached_pending_media_limit(self, auth_user: UserID) -> Tuple[bool, int]:
"""Check if the user is over the limit for pending media uploads.
Args:
auth_user: The user_id of the uploader
Returns:
A tuple with a boolean and an integer indicating whether the user has too
many pending media uploads and the timestamp at which the first pending
media will expire, respectively.
"""
pending, first_expiration_ts = await self.store.count_pending_media(
user_id=auth_user
)
return pending >= self.max_pending_media_uploads, first_expiration_ts
@trace
async def verify_can_upload(self, media_id: str, auth_user: UserID) -> None:
"""Verify that the media ID can be uploaded to by the given user. This
function checks that:
* the media ID exists
* the media ID does not already have content
* the user uploading is the same as the one who created the media ID
* the media ID has not expired
Args:
media_id: The media ID to verify
auth_user: The user_id of the uploader
"""
media = await self.store.get_local_media(media_id)
if media is None:
raise SynapseError(404, "Unknow media ID", errcode=Codes.NOT_FOUND)
if media.user_id != auth_user.to_string():
raise SynapseError(
403,
"Only the creator of the media ID can upload to it",
errcode=Codes.FORBIDDEN,
)
if media.media_length is not None:
raise SynapseError(
409,
"Media ID already has content",
errcode=Codes.CANNOT_OVERWRITE_MEDIA,
)
expired_time_ms = self.clock.time_msec() - self.unused_expiration_time
if media.created_ts < expired_time_ms:
raise NotFoundError("Media ID has expired")
@trace
async def update_content(
self,
media_id: str,
media_type: str,
upload_name: Optional[str],
content: IO,
content_length: int,
auth_user: UserID,
) -> None:
"""Update the content of the given media ID.
Args:
media_id: The media ID to replace.
media_type: The content type of the file.
upload_name: The name of the file, if provided.
content: A file like object that is the content to store
content_length: The length of the content
auth_user: The user_id of the uploader
"""
file_info = FileInfo(server_name=None, file_id=media_id)
fname = await self.media_storage.store_file(content, file_info)
logger.info("Stored local media in file %r", fname)
await self.store.update_local_media(
media_id=media_id,
media_type=media_type,
upload_name=upload_name,
media_length=content_length,
user_id=auth_user,
)
try:
await self._generate_thumbnails(None, media_id, media_id, media_type)
except Exception as e:
logger.info("Failed to generate thumbnails: %s", e)
@trace
async def create_content(
self,
@ -229,8 +347,74 @@ class MediaRepository:
return MXCUri(self.server_name, media_id)
def respond_not_yet_uploaded(self, request: SynapseRequest) -> None:
respond_with_json(
request,
504,
cs_error("Media has not been uploaded yet", code=Codes.NOT_YET_UPLOADED),
send_cors=True,
)
async def get_local_media_info(
self, request: SynapseRequest, media_id: str, max_timeout_ms: int
) -> Optional[LocalMedia]:
"""Gets the info dictionary for given local media ID. If the media has
not been uploaded yet, this function will wait up to ``max_timeout_ms``
milliseconds for the media to be uploaded.
Args:
request: The incoming request.
media_id: The media ID of the content. (This is the same as
the file_id for local content.)
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
Returns:
Either the info dictionary for the given local media ID or
``None``. If ``None``, then no further processing is necessary as
this function will send the necessary JSON response.
"""
wait_until = self.clock.time_msec() + max_timeout_ms
while True:
# Get the info for the media
media_info = await self.store.get_local_media(media_id)
if not media_info:
logger.info("Media %s is unknown", media_id)
respond_404(request)
return None
if media_info.quarantined_by:
logger.info("Media %s is quarantined", media_id)
respond_404(request)
return None
# The file has been uploaded, so stop looping
if media_info.media_length is not None:
return media_info
# Check if the media ID has expired and still hasn't been uploaded to.
now = self.clock.time_msec()
expired_time_ms = now - self.unused_expiration_time
if media_info.created_ts < expired_time_ms:
logger.info("Media %s has expired without being uploaded", media_id)
respond_404(request)
return None
if now >= wait_until:
break
await self.clock.sleep(0.5)
logger.info("Media %s has not yet been uploaded", media_id)
self.respond_not_yet_uploaded(request)
return None
async def get_local_media(
self, request: SynapseRequest, media_id: str, name: Optional[str]
self,
request: SynapseRequest,
media_id: str,
name: Optional[str],
max_timeout_ms: int,
) -> None:
"""Responds to requests for local media, if exists, or returns 404.
@ -240,23 +424,24 @@ class MediaRepository:
the file_id for local content.)
name: Optional name that, if specified, will be used as
the filename in the Content-Disposition header of the response.
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
Returns:
Resolves once a response has successfully been written to request
"""
media_info = await self.store.get_local_media(media_id)
if not media_info or media_info["quarantined_by"]:
respond_404(request)
media_info = await self.get_local_media_info(request, media_id, max_timeout_ms)
if not media_info:
return
self.mark_recently_accessed(None, media_id)
media_type = media_info["media_type"]
media_type = media_info.media_type
if not media_type:
media_type = "application/octet-stream"
media_length = media_info["media_length"]
upload_name = name if name else media_info["upload_name"]
url_cache = media_info["url_cache"]
media_length = media_info.media_length
upload_name = name if name else media_info.upload_name
url_cache = media_info.url_cache
file_info = FileInfo(None, media_id, url_cache=bool(url_cache))
@ -271,6 +456,7 @@ class MediaRepository:
server_name: str,
media_id: str,
name: Optional[str],
max_timeout_ms: int,
) -> None:
"""Respond to requests for remote media.
@ -280,6 +466,8 @@ class MediaRepository:
media_id: The media ID of the content (as defined by the remote server).
name: Optional name that, if specified, will be used as
the filename in the Content-Disposition header of the response.
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
Returns:
Resolves once a response has successfully been written to request
@ -305,27 +493,33 @@ class MediaRepository:
key = (server_name, media_id)
async with self.remote_media_linearizer.queue(key):
responder, media_info = await self._get_remote_media_impl(
server_name, media_id
server_name, media_id, max_timeout_ms
)
# We deliberately stream the file outside the lock
if responder:
media_type = media_info["media_type"]
media_length = media_info["media_length"]
upload_name = name if name else media_info["upload_name"]
if responder and media_info:
upload_name = name if name else media_info.upload_name
await respond_with_responder(
request, responder, media_type, media_length, upload_name
request,
responder,
media_info.media_type,
media_info.media_length,
upload_name,
)
else:
respond_404(request)
async def get_remote_media_info(self, server_name: str, media_id: str) -> dict:
async def get_remote_media_info(
self, server_name: str, media_id: str, max_timeout_ms: int
) -> RemoteMedia:
"""Gets the media info associated with the remote file, downloading
if necessary.
Args:
server_name: Remote server_name where the media originated.
media_id: The media ID of the content (as defined by the remote server).
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
Returns:
The media info of the file
@ -341,7 +535,7 @@ class MediaRepository:
key = (server_name, media_id)
async with self.remote_media_linearizer.queue(key):
responder, media_info = await self._get_remote_media_impl(
server_name, media_id
server_name, media_id, max_timeout_ms
)
# Ensure we actually use the responder so that it releases resources
@ -352,8 +546,8 @@ class MediaRepository:
return media_info
async def _get_remote_media_impl(
self, server_name: str, media_id: str
) -> Tuple[Optional[Responder], dict]:
self, server_name: str, media_id: str, max_timeout_ms: int
) -> Tuple[Optional[Responder], RemoteMedia]:
"""Looks for media in local cache, if not there then attempt to
download from remote server.
@ -361,6 +555,8 @@ class MediaRepository:
server_name: Remote server_name where the media originated.
media_id: The media ID of the content (as defined by the
remote server).
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
Returns:
A tuple of responder and the media info of the file.
@ -373,15 +569,17 @@ class MediaRepository:
# If we have an entry in the DB, try and look for it
if media_info:
file_id = media_info["filesystem_id"]
file_id = media_info.filesystem_id
file_info = FileInfo(server_name, file_id)
if media_info["quarantined_by"]:
if media_info.quarantined_by:
logger.info("Media is quarantined")
raise NotFoundError()
if not media_info["media_type"]:
media_info["media_type"] = "application/octet-stream"
if not media_info.media_type:
media_info = attr.evolve(
media_info, media_type="application/octet-stream"
)
responder = await self.media_storage.fetch_media(file_info)
if responder:
@ -391,8 +589,7 @@ class MediaRepository:
try:
media_info = await self._download_remote_file(
server_name,
media_id,
server_name, media_id, max_timeout_ms
)
except SynapseError:
raise
@ -403,9 +600,9 @@ class MediaRepository:
if not media_info:
raise e
file_id = media_info["filesystem_id"]
if not media_info["media_type"]:
media_info["media_type"] = "application/octet-stream"
file_id = media_info.filesystem_id
if not media_info.media_type:
media_info = attr.evolve(media_info, media_type="application/octet-stream")
file_info = FileInfo(server_name, file_id)
# We generate thumbnails even if another process downloaded the media
@ -415,7 +612,7 @@ class MediaRepository:
# otherwise they'll request thumbnails and get a 404 if they're not
# ready yet.
await self._generate_thumbnails(
server_name, media_id, file_id, media_info["media_type"]
server_name, media_id, file_id, media_info.media_type
)
responder = await self.media_storage.fetch_media(file_info)
@ -425,7 +622,8 @@ class MediaRepository:
self,
server_name: str,
media_id: str,
) -> dict:
max_timeout_ms: int,
) -> RemoteMedia:
"""Attempt to download the remote file from the given server name,
using the given file_id as the local id.
@ -434,7 +632,8 @@ class MediaRepository:
media_id: The media ID of the content (as defined by the
remote server). This is different than the file_id, which is
locally generated.
file_id: Local file ID
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
Returns:
The media info of the file.
@ -445,21 +644,13 @@ class MediaRepository:
file_info = FileInfo(server_name=server_name, file_id=file_id)
with self.media_storage.store_into_file(file_info) as (f, fname, finish):
request_path = "/".join(
("/_matrix/media/r0/download", server_name, media_id)
)
try:
length, headers = await self.client.get_file(
length, headers = await self.client.download_media(
server_name,
request_path,
media_id,
output_stream=f,
max_size=self.max_upload_size,
args={
# tell the remote server to 404 if it doesn't
# recognise the server_name, to make sure we don't
# end up with a routing loop.
"allow_remote": "false"
},
max_timeout_ms=max_timeout_ms,
)
except RequestSendFailed as e:
logger.warning(
@ -518,7 +709,7 @@ class MediaRepository:
origin=server_name,
media_id=media_id,
media_type=media_type,
time_now_ms=self.clock.time_msec(),
time_now_ms=time_now_ms,
upload_name=upload_name,
media_length=length,
filesystem_id=file_id,
@ -526,15 +717,17 @@ class MediaRepository:
logger.info("Stored remote media in file %r", fname)
media_info = {
"media_type": media_type,
"media_length": length,
"upload_name": upload_name,
"created_ts": time_now_ms,
"filesystem_id": file_id,
}
return media_info
return RemoteMedia(
media_origin=server_name,
media_id=media_id,
media_type=media_type,
media_length=length,
upload_name=upload_name,
created_ts=time_now_ms,
filesystem_id=file_id,
last_access_ts=time_now_ms,
quarantined_by=None,
)
def _get_thumbnail_requirements(
self, media_type: str
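`get_local_media_info` above is a bounded poll: re-check the store every half second until the upload lands, the media expires, or `max_timeout_ms` elapses. Reduced to its essentials (names illustrative, plain asyncio in place of Synapse's clock):

import asyncio
import time
from typing import Awaitable, Callable, Optional

async def wait_for_value(
    fetch: Callable[[], Awaitable[Optional[str]]],
    max_timeout_ms: int,
    poll_interval_s: float = 0.5,
) -> Optional[str]:
    # Fix the deadline up front so slow fetches cannot extend the wait.
    deadline = time.monotonic() + max_timeout_ms / 1000
    while True:
        value = await fetch()
        if value is not None:
            return value
        if time.monotonic() >= deadline:
            return None  # the caller responds 504 "not yet uploaded"
        await asyncio.sleep(poll_interval_s)

async def main() -> None:
    results = iter([None, None, "media-bytes"])
    print(await wait_for_value(lambda: asyncio.sleep(0, next(results)), 2_000))

asyncio.run(main())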

View File

@ -240,15 +240,14 @@ class UrlPreviewer:
cache_result = await self.store.get_url_cache(url, ts)
if (
cache_result
and cache_result["expires_ts"] > ts
and cache_result["response_code"] / 100 == 2
and cache_result.expires_ts > ts
and cache_result.response_code // 100 == 2
):
# It may be stored as text in the database, not as bytes (such as
# PostgreSQL). If so, encode it back before handing it on.
og = cache_result["og"]
if isinstance(og, str):
og = og.encode("utf8")
return og
if isinstance(cache_result.og, str):
return cache_result.og.encode("utf8")
return cache_result.og
# If this URL can be accessed via an allowed oEmbed, use that instead.
url_to_download = url
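The quiet switch from `/` to `//` fixes the status-class check: with true division only a round 200 compares equal to 2, so codes like 204 were treated as cache misses. A quick check:

for code in (200, 204, 302):
    print(code, code / 100 == 2, code // 100 == 2)
# 200 True True
# 204 False True   <- true division misses non-round 2xx codes
# 302 False False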

View File

@ -12,17 +12,45 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import select
import logging
import time
from typing import Any, Iterable, List, Tuple
from selectors import SelectSelector, _PollLikeSelector # type: ignore[attr-defined]
from typing import Any, Callable, Iterable
from prometheus_client import Histogram, Metric
from prometheus_client.core import REGISTRY, GaugeMetricFamily
from twisted.internet import reactor
from twisted.internet import reactor, selectreactor
from twisted.internet.asyncioreactor import AsyncioSelectorReactor
from synapse.metrics._types import Collector
try:
from selectors import KqueueSelector
except ImportError:
class KqueueSelector: # type: ignore[no-redef]
pass
try:
from twisted.internet.epollreactor import EPollReactor
except ImportError:
class EPollReactor: # type: ignore[no-redef]
pass
try:
from twisted.internet.pollreactor import PollReactor
except ImportError:
class PollReactor: # type: ignore[no-redef]
pass
logger = logging.getLogger(__name__)
#
# Twisted reactor metrics
#
@ -34,52 +62,100 @@ tick_time = Histogram(
)
class EpollWrapper:
"""a wrapper for an epoll object which records the time between polls"""
class CallWrapper:
"""A wrapper for a callable which records the time between calls"""
def __init__(self, poller: "select.epoll"): # type: ignore[name-defined]
def __init__(self, wrapped: Callable[..., Any]):
self.last_polled = time.time()
self._poller = poller
self._wrapped = wrapped
def poll(self, *args, **kwargs) -> List[Tuple[int, int]]: # type: ignore[no-untyped-def]
# record the time since poll() was last called. This gives a good proxy for
def __call__(self, *args, **kwargs) -> Any: # type: ignore[no-untyped-def]
# record the time since this was last called. This gives a good proxy for
# how long it takes to run everything in the reactor - ie, how long anything
# waiting for the next tick will have to wait.
tick_time.observe(time.time() - self.last_polled)
ret = self._poller.poll(*args, **kwargs)
ret = self._wrapped(*args, **kwargs)
self.last_polled = time.time()
return ret
class ObjWrapper:
"""A wrapper for an object which wraps a specified method in CallWrapper.
Other methods/attributes are passed to the original object.
This is necessary when the wrapped object does not allow the attribute to be
overwritten.
"""
def __init__(self, wrapped: Any, method_name: str):
self._wrapped = wrapped
self._method_name = method_name
self._wrapped_method = CallWrapper(getattr(wrapped, method_name))
def __getattr__(self, item: str) -> Any:
return getattr(self._poller, item)
if item == self._method_name:
return self._wrapped_method
return getattr(self._wrapped, item)
class ReactorLastSeenMetric(Collector):
def __init__(self, epoll_wrapper: EpollWrapper):
self._epoll_wrapper = epoll_wrapper
def __init__(self, call_wrapper: CallWrapper):
self._call_wrapper = call_wrapper
def collect(self) -> Iterable[Metric]:
cm = GaugeMetricFamily(
"python_twisted_reactor_last_seen",
"Seconds since the Twisted reactor was last seen",
)
cm.add_metric([], time.time() - self._epoll_wrapper.last_polled)
cm.add_metric([], time.time() - self._call_wrapper.last_polled)
yield cm
# Twisted has already selected a reasonable reactor for us, so assumptions can be
# made about its shape.
wrapper = None
try:
# if the reactor has a `_poller` attribute, which is an `epoll` object
# (ie, it's an EPollReactor), we wrap the `epoll` with a thing that will
# measure the time between ticks
from select import epoll # type: ignore[attr-defined]
if isinstance(reactor, (PollReactor, EPollReactor)):
reactor._poller = ObjWrapper(reactor._poller, "poll") # type: ignore[attr-defined]
wrapper = reactor._poller._wrapped_method # type: ignore[attr-defined]
poller = reactor._poller # type: ignore[attr-defined]
except (AttributeError, ImportError):
pass
else:
if isinstance(poller, epoll):
poller = EpollWrapper(poller)
reactor._poller = poller # type: ignore[attr-defined]
REGISTRY.register(ReactorLastSeenMetric(poller))
elif isinstance(reactor, selectreactor.SelectReactor):
# Twisted uses a module-level _select function.
wrapper = selectreactor._select = CallWrapper(selectreactor._select)
elif isinstance(reactor, AsyncioSelectorReactor):
# For asyncio look at the underlying asyncio event loop.
asyncio_loop = reactor._asyncioEventloop # A sub-class of BaseEventLoop.
# A sub-class of BaseSelector.
selector = asyncio_loop._selector # type: ignore[attr-defined]
if isinstance(selector, SelectSelector):
wrapper = selector._select = CallWrapper(selector._select) # type: ignore[attr-defined]
# poll, epoll, and /dev/poll.
elif isinstance(selector, _PollLikeSelector):
selector._selector = ObjWrapper(selector._selector, "poll") # type: ignore[attr-defined]
wrapper = selector._selector._wrapped_method # type: ignore[attr-defined]
elif isinstance(selector, KqueueSelector):
selector._selector = ObjWrapper(selector._selector, "control") # type: ignore[attr-defined]
wrapper = selector._selector._wrapped_method # type: ignore[attr-defined]
else:
# E.g. this does not support the (Windows-only) ProactorEventLoop.
logger.warning(
"Skipping configuring ReactorLastSeenMetric: unexpected asyncio loop selector: %r via %r",
selector,
asyncio_loop,
)
except Exception as e:
logger.warning("Configuring ReactorLastSeenMetric failed: %r", e)
if wrapper:
REGISTRY.register(ReactorLastSeenMetric(wrapper))
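`ObjWrapper` is a plain attribute-forwarding proxy: one named method is replaced with a timed wrapper and every other attribute falls through `__getattr__`. A condensed standalone version of the same idea:

import time
from typing import Any, Callable

class TimedCall:
    """Record the gap between successive calls to a wrapped callable."""

    def __init__(self, wrapped: Callable[..., Any]) -> None:
        self.last_called = time.time()
        self._wrapped = wrapped

    def __call__(self, *args: Any, **kwargs: Any) -> Any:
        print(f"{time.time() - self.last_called:.3f}s since last call")
        ret = self._wrapped(*args, **kwargs)
        self.last_called = time.time()
        return ret

class Proxy:
    """Forward everything to `wrapped` except one method, which is timed."""

    def __init__(self, wrapped: Any, method_name: str) -> None:
        self._wrapped = wrapped
        self._method_name = method_name
        self._timed = TimedCall(getattr(wrapped, method_name))

    def __getattr__(self, item: str) -> Any:
        if item == self._method_name:
            return self._timed
        return getattr(self._wrapped, item)

proxy = Proxy([3, 1, 2], "sort")
proxy.sort()           # timed
print(proxy.count(1))  # forwarded untouched -> 1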

View File

@ -80,6 +80,7 @@ from synapse.module_api.callbacks.account_validity_callbacks import (
ON_LEGACY_ADMIN_REQUEST,
ON_LEGACY_RENEW_CALLBACK,
ON_LEGACY_SEND_MAIL_CALLBACK,
ON_USER_LOGIN_CALLBACK,
ON_USER_REGISTRATION_CALLBACK,
)
from synapse.module_api.callbacks.spamchecker_callbacks import (
@ -334,6 +335,7 @@ class ModuleApi:
*,
is_user_expired: Optional[IS_USER_EXPIRED_CALLBACK] = None,
on_user_registration: Optional[ON_USER_REGISTRATION_CALLBACK] = None,
on_user_login: Optional[ON_USER_LOGIN_CALLBACK] = None,
on_legacy_send_mail: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None,
on_legacy_renew: Optional[ON_LEGACY_RENEW_CALLBACK] = None,
on_legacy_admin_request: Optional[ON_LEGACY_ADMIN_REQUEST] = None,
@ -345,6 +347,7 @@ class ModuleApi:
return self._callbacks.account_validity.register_callbacks(
is_user_expired=is_user_expired,
on_user_registration=on_user_registration,
on_user_login=on_user_login,
on_legacy_send_mail=on_legacy_send_mail,
on_legacy_renew=on_legacy_renew,
on_legacy_admin_request=on_legacy_admin_request,
@ -1860,7 +1863,8 @@ class PublicRoomListManager:
if not room:
return False
return room.get("is_public", False)
# The first item is whether the room is public.
return room[0]
async def add_room_to_public_room_list(self, room_id: str) -> None:
"""Publishes a room to the public room list.

View File

@ -22,6 +22,7 @@ logger = logging.getLogger(__name__)
# Types for callbacks to be registered via the module api
IS_USER_EXPIRED_CALLBACK = Callable[[str], Awaitable[Optional[bool]]]
ON_USER_REGISTRATION_CALLBACK = Callable[[str], Awaitable]
ON_USER_LOGIN_CALLBACK = Callable[[str, Optional[str], Optional[str]], Awaitable]
# Temporary hooks to allow for a transition from `/_matrix/client` endpoints
# to `/_synapse/client/account_validity`. See `register_callbacks` below.
ON_LEGACY_SEND_MAIL_CALLBACK = Callable[[str], Awaitable]
@ -33,6 +34,7 @@ class AccountValidityModuleApiCallbacks:
def __init__(self) -> None:
self.is_user_expired_callbacks: List[IS_USER_EXPIRED_CALLBACK] = []
self.on_user_registration_callbacks: List[ON_USER_REGISTRATION_CALLBACK] = []
self.on_user_login_callbacks: List[ON_USER_LOGIN_CALLBACK] = []
self.on_legacy_send_mail_callback: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None
self.on_legacy_renew_callback: Optional[ON_LEGACY_RENEW_CALLBACK] = None
@ -44,6 +46,7 @@ class AccountValidityModuleApiCallbacks:
self,
is_user_expired: Optional[IS_USER_EXPIRED_CALLBACK] = None,
on_user_registration: Optional[ON_USER_REGISTRATION_CALLBACK] = None,
on_user_login: Optional[ON_USER_LOGIN_CALLBACK] = None,
on_legacy_send_mail: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None,
on_legacy_renew: Optional[ON_LEGACY_RENEW_CALLBACK] = None,
on_legacy_admin_request: Optional[ON_LEGACY_ADMIN_REQUEST] = None,
@ -55,6 +58,9 @@ class AccountValidityModuleApiCallbacks:
if on_user_registration is not None:
self.on_user_registration_callbacks.append(on_user_registration)
if on_user_login is not None:
self.on_user_login_callbacks.append(on_user_login)
# The builtin account validity feature exposes 3 endpoints (send_mail, renew, and
# an admin one). As part of moving the feature into a module, we need to change
# the path from /_matrix/client/unstable/account_validity/... to
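From a module's point of view, the new hook registers like the existing account-validity callbacks. A hypothetical module sketch; the argument names are a guess from the `ON_USER_LOGIN_CALLBACK` type alias above (a user ID plus two optional auth-provider strings):

from typing import Any, Optional

class ExampleModule:
    def __init__(self, config: dict, api: Any) -> None:
        # `api` is the ModuleApi handed to every Synapse module.
        api.register_account_validity_callbacks(
            on_user_login=self.on_user_login,
        )

    async def on_user_login(
        self,
        user_id: str,
        auth_provider_type: Optional[str],
        auth_provider_id: Optional[str],
    ) -> None:
        # React to a successful login, e.g. record a last-seen timestamp.
        print(f"{user_id} logged in via {auth_provider_type or 'password'}")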

View File

@ -295,7 +295,8 @@ class ThirdPartyEventRulesModuleApiCallbacks:
raise
except SynapseError as e:
# FIXME: Being able to throw SynapseErrors is relied upon by
# some modules. PR #10386 accidentally broke this ability.
# some modules. PR https://github.com/matrix-org/synapse/pull/10386
# accidentally broke this ability.
# That said, we aren't keen on exposing this implementation detail
# to modules and we should one day have a proper way to do what
# is wanted.

View File

@ -25,10 +25,13 @@ from typing import (
Sequence,
Tuple,
Union,
cast,
)
from prometheus_client import Counter
from twisted.internet.defer import Deferred
from synapse.api.constants import (
MAIN_TIMELINE,
EventContentFields,
@ -40,11 +43,15 @@ from synapse.api.room_versions import PushRuleRoomFlag
from synapse.event_auth import auth_types_for_event, get_user_power_level
from synapse.events import EventBase, relation_from_event
from synapse.events.snapshot import EventContext
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.state import POWER_KEY
from synapse.storage.databases.main.roommember import EventIdMembership
from synapse.storage.roommember import ProfileInfo
from synapse.synapse_rust.push import FilteredPushRules, PushRuleEvaluator
from synapse.types import JsonValue
from synapse.types.state import StateFilter
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import gather_results
from synapse.util.caches import register_cache
from synapse.util.metrics import measure_func
from synapse.visibility import filter_event_for_clients_with_state
@ -342,15 +349,41 @@ class BulkPushRuleEvaluator:
rules_by_user = await self._get_rules_for_event(event)
actions_by_user: Dict[str, Collection[Union[Mapping, str]]] = {}
room_member_count = await self.store.get_number_joined_users_in_room(
event.room_id
)
# Gather a bunch of info in parallel.
#
# This has a lot of ignored types and casting due to the use of @cached
# decorated functions passed into run_in_background.
#
# See https://github.com/matrix-org/synapse/issues/16606
(
power_levels,
sender_power_level,
) = await self._get_power_levels_and_sender_level(
event, context, event_id_to_event
room_member_count,
(power_levels, sender_power_level),
related_events,
profiles,
) = await make_deferred_yieldable(
cast(
"Deferred[Tuple[int, Tuple[dict, Optional[int]], Dict[str, Dict[str, JsonValue]], Mapping[str, ProfileInfo]]]",
gather_results(
(
run_in_background( # type: ignore[call-arg]
self.store.get_number_joined_users_in_room, event.room_id # type: ignore[arg-type]
),
run_in_background(
self._get_power_levels_and_sender_level,
event,
context,
event_id_to_event,
),
run_in_background(self._related_events, event),
run_in_background( # type: ignore[call-arg]
self.store.get_subset_users_in_room_with_profiles,
event.room_id, # type: ignore[arg-type]
rules_by_user.keys(), # type: ignore[arg-type]
),
),
consumeErrors=True,
).addErrback(unwrapFirstError),
)
)
# Find the event's thread ID.
@ -366,8 +399,6 @@ class BulkPushRuleEvaluator:
# the parent is part of a thread.
thread_id = await self.store.get_thread_id(relation.parent_id)
related_events = await self._related_events(event)
# It's possible that old room versions have non-integer power levels (floats or
# strings; even the occasional `null`). For old rooms, we interpret these as if
# they were integers. Do this here for the `@room` power level threshold.
@ -400,11 +431,6 @@ class BulkPushRuleEvaluator:
self.hs.config.experimental.msc1767_enabled, # MSC3931 flag
)
users = rules_by_user.keys()
profiles = await self.store.get_subset_users_in_room_with_profiles(
event.room_id, users
)
for uid, rules in rules_by_user.items():
if event.sender == uid:
continue
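The restructuring above fires the four independent lookups concurrently instead of awaiting them one by one. Outside Synapse's Twisted helpers, the same shape in plain asyncio looks like:

import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for a database round-trip
    return name

async def main() -> None:
    # Sequential awaits would take ~0.3s; gathering overlaps them (~0.1s).
    member_count, power_levels, related = await asyncio.gather(
        fetch("member_count", 0.1),
        fetch("power_levels", 0.1),
        fetch("related_events", 0.1),
    )
    print(member_count, power_levels, related)

asyncio.run(main())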

View File

@ -257,6 +257,11 @@ class ReplicationCommandHandler:
if hs.config.redis.redis_enabled:
self._notifier.add_lock_released_callback(self.on_lock_released)
# Marks if we should send POSITION commands for all streams ASAP. This
# is checked by the `ReplicationStreamer` which manages sending
# RDATA/POSITION commands
self._should_announce_positions = True
def subscribe_to_channel(self, channel_name: str) -> None:
"""
Indicates that we wish to subscribe to a Redis channel by name.
@ -397,29 +402,23 @@ class ReplicationCommandHandler:
return self._streams_to_replicate
def on_REPLICATE(self, conn: IReplicationConnection, cmd: ReplicateCommand) -> None:
self.send_positions_to_connection(conn)
self.send_positions_to_connection()
def send_positions_to_connection(self, conn: IReplicationConnection) -> None:
def send_positions_to_connection(self) -> None:
"""Send current position of all streams this process is source of to
the connection.
"""
# We respond with current position of all streams this instance
# replicates.
for stream in self.get_streams_to_replicate():
# Note that we use the current token as the prev token here (rather
# than stream.last_token), as we can't be sure that there have been
# no rows written between last token and the current token (since we
# might be racing with the replication sending bg process).
current_token = stream.current_token(self._instance_name)
self.send_command(
PositionCommand(
stream.NAME,
self._instance_name,
current_token,
current_token,
)
)
self._should_announce_positions = True
self._notifier.notify_replication()
def should_announce_positions(self) -> bool:
"""Check if we should send POSITION commands for all streams ASAP."""
return self._should_announce_positions
def will_announce_positions(self) -> None:
"""Mark that we're about to send POSITIONs out for all streams."""
self._should_announce_positions = False
def on_USER_SYNC(
self, conn: IReplicationConnection, cmd: UserSyncCommand
@ -588,6 +587,21 @@ class ReplicationCommandHandler:
logger.debug("Handling '%s %s'", cmd.NAME, cmd.to_line())
# Check if we can early discard this position. We can only do so for
# connected streams.
stream = self._streams[cmd.stream_name]
if stream.can_discard_position(
cmd.instance_name, cmd.prev_token, cmd.new_token
) and self.is_stream_connected(conn, cmd.stream_name):
logger.debug(
"Discarding redundant POSITION %s/%s %s %s",
cmd.instance_name,
cmd.stream_name,
cmd.prev_token,
cmd.new_token,
)
return
self._add_command_to_stream_queue(conn, cmd)
async def _process_position(
@ -599,6 +613,18 @@ class ReplicationCommandHandler:
"""
stream = self._streams[stream_name]
if stream.can_discard_position(
cmd.instance_name, cmd.prev_token, cmd.new_token
) and self.is_stream_connected(conn, cmd.stream_name):
logger.debug(
"Discarding redundant POSITION %s/%s %s %s",
cmd.instance_name,
cmd.stream_name,
cmd.prev_token,
cmd.new_token,
)
return
# We're about to go and catch up with the stream, so remove from set
# of connected streams.
for streams in self._streams_by_connection.values():
@ -626,8 +652,9 @@ class ReplicationCommandHandler:
# for why this can happen.
logger.info(
"Fetching replication rows for '%s' between %i and %i",
"Fetching replication rows for '%s' / %s between %i and %i",
stream_name,
cmd.instance_name,
current_token,
cmd.new_token,
)
@ -657,6 +684,13 @@ class ReplicationCommandHandler:
self._streams_by_connection.setdefault(conn, set()).add(stream_name)
def is_stream_connected(
self, conn: IReplicationConnection, stream_name: str
) -> bool:
"""Return if stream has been successfully connected and is ready to
receive updates"""
return stream_name in self._streams_by_connection.get(conn, ())
def on_REMOTE_SERVER_UP(
self, conn: IReplicationConnection, cmd: RemoteServerUpCommand
) -> None:

View File

@ -141,7 +141,7 @@ class RedisSubscriber(SubscriberProtocol):
# We send out our positions when there is a new connection in case the
# other side missed updates. We do this for Redis connections as the
# other side won't know we've connected and so won't issue a REPLICATE.
self.synapse_handler.send_positions_to_connection(self)
self.synapse_handler.send_positions_to_connection()
def messageReceived(self, pattern: str, channel: str, message: str) -> None:
"""Received a message from redis."""

View File

@ -123,7 +123,7 @@ class ReplicationStreamer:
# We check up front to see if anything has actually changed, as we get
# poked because of changes that happened on other instances.
if all(
if not self.command_handler.should_announce_positions() and all(
stream.last_token == stream.current_token(self._instance_name)
for stream in self.streams
):
@ -158,6 +158,21 @@ class ReplicationStreamer:
all_streams = list(all_streams)
random.shuffle(all_streams)
if self.command_handler.should_announce_positions():
# We need to send out POSITIONs for all streams, usually
# because a worker has reconnected.
self.command_handler.will_announce_positions()
for stream in all_streams:
self.command_handler.send_command(
PositionCommand(
stream.NAME,
self._instance_name,
stream.last_token,
stream.last_token,
)
)
for stream in all_streams:
if stream.last_token == stream.current_token(
self._instance_name

View File

@ -144,6 +144,16 @@ class Stream:
"""
raise NotImplementedError()
def can_discard_position(
self, instance_name: str, prev_token: int, new_token: int
) -> bool:
"""Whether or not a position command for this stream can be discarded.
Useful for streams that can never go backwards and where we already know
the stream ID for the instance has advanced.
"""
return False
def discard_updates_and_advance(self) -> None:
"""Called when the stream should advance but the updates would be discarded,
e.g. when there are no currently connected workers.
@ -221,6 +231,14 @@ class _StreamFromIdGen(Stream):
def minimal_local_current_token(self) -> Token:
return self._stream_id_gen.get_minimal_local_current_token()
def can_discard_position(
self, instance_name: str, prev_token: int, new_token: int
) -> bool:
# These streams can't go backwards, so we know we can ignore any
# positions where the tokens are from before the current token.
return new_token <= self.current_token(instance_name)
def current_token_without_instance(
current_token: Callable[[], int]
@ -287,6 +305,14 @@ class BackfillStream(Stream):
# which means we need to negate it.
return -self.store._backfill_id_gen.get_minimal_local_current_token()
def can_discard_position(
self, instance_name: str, prev_token: int, new_token: int
) -> bool:
# Backfill stream can't go backwards, so we know we can ignore any
# positions where the tokens are from before the current token.
return new_token <= self.current_token(instance_name)
class PresenceStream(_StreamFromIdGen):
@attr.s(slots=True, frozen=True, auto_attribs=True)
@ -501,6 +527,14 @@ class CachesStream(Stream):
return self.store._cache_id_gen.get_minimal_local_current_token()
return self.current_token(self.local_instance_name)
def can_discard_position(
self, instance_name: str, prev_token: int, new_token: int
) -> bool:
# Caches streams can't go backwards, so we know we can ignore any
# positions where the tokens are from before the current token.
return new_token <= self.current_token(instance_name)
class DeviceListsStream(_StreamFromIdGen):
"""Either a user has updated their devices or a remote server needs to be
@ -587,7 +621,7 @@ class ToDeviceStream(_StreamFromIdGen):
super().__init__(
hs.get_instance_name(),
store.get_all_new_device_messages,
store._device_inbox_id_gen,
store._to_device_msg_id_gen,
)
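The discard rule is simple monotonicity: for a stream that never moves backwards, a POSITION at or behind the token we already hold carries no new information. As a bare predicate:

def can_discard_position(current_token: int, new_token: int) -> bool:
    # A forwards-only stream at or past new_token needs no catch-up.
    return new_token <= current_token

print(can_discard_position(current_token=10, new_token=8))   # True
print(can_discard_position(current_token=10, new_token=12))  # False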

View File

@ -88,6 +88,7 @@ from synapse.rest.admin.users import (
UserByThreePid,
UserMembershipRestServlet,
UserRegisterServlet,
UserReplaceMasterCrossSigningKeyRestServlet,
UserRestServletV2,
UsersRestServletV2,
UserTokenRestServlet,
@ -292,6 +293,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
ListDestinationsRestServlet(hs).register(http_server)
RoomMessagesRestServlet(hs).register(http_server)
RoomTimestampToEventRestServlet(hs).register(http_server)
UserReplaceMasterCrossSigningKeyRestServlet(hs).register(http_server)
UserByExternalId(hs).register(http_server)
UserByThreePid(hs).register(http_server)

View File

@ -89,8 +89,8 @@ class ListDestinationsRestServlet(RestServlet):
"destinations": [
{
"destination": r[0],
"retry_last_ts": r[1],
"retry_interval": r[2],
"retry_last_ts": r[1] or 0,
"retry_interval": r[2] or 0,
"failure_ts": r[3],
"last_successful_stream_ordering": r[4],
}
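The `or 0` above coalesces database NULLs so the admin API always returns integers. Note that `or` treats every falsy value alike, which is harmless here since 0 maps to itself:

rows = [(None,), (0,), (1700000000,)]
print([r[0] or 0 for r in rows])  # [0, 0, 1700000000]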

View File

@ -17,6 +17,8 @@ import logging
from http import HTTPStatus
from typing import TYPE_CHECKING, Optional, Tuple
import attr
from synapse.api.constants import Direction
from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.http.server import HttpServer
@ -418,7 +420,7 @@ class UserMediaRestServlet(RestServlet):
start, limit, user_id, order_by, direction
)
ret = {"media": media, "total": total}
ret = {"media": [attr.asdict(m) for m in media], "total": total}
if (start + limit) < total:
ret["next_token"] = start + len(media)
@ -477,7 +479,7 @@ class UserMediaRestServlet(RestServlet):
)
deleted_media, total = await self.media_repository.delete_local_media_ids(
[row["media_id"] for row in media]
[m.media_id for m in media]
)
return HTTPStatus.OK, {"deleted_media": deleted_media, "total": total}

View File

@ -77,7 +77,18 @@ class ListRegistrationTokensRestServlet(RestServlet):
await assert_requester_is_admin(self.auth, request)
valid = parse_boolean(request, "valid")
token_list = await self.store.get_registration_tokens(valid)
return HTTPStatus.OK, {"registration_tokens": token_list}
return HTTPStatus.OK, {
"registration_tokens": [
{
"token": t[0],
"uses_allowed": t[1],
"pending": t[2],
"completed": t[3],
"expiry_time": t[4],
}
for t in token_list
]
}
class NewRegistrationTokenRestServlet(RestServlet):

View File

@ -16,6 +16,8 @@ from http import HTTPStatus
from typing import TYPE_CHECKING, List, Optional, Tuple, cast
from urllib import parse as urlparse
import attr
from synapse.api.constants import Direction, EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
from synapse.api.filtering import Filter
@ -306,10 +308,13 @@ class RoomRestServlet(RestServlet):
raise NotFoundError("Room not found")
members = await self.store.get_users_in_room(room_id)
ret["joined_local_devices"] = await self.store.count_devices_by_users(members)
ret["forgotten"] = await self.store.is_locally_forgotten_room(room_id)
result = attr.asdict(ret)
result["joined_local_devices"] = await self.store.count_devices_by_users(
members
)
result["forgotten"] = await self.store.is_locally_forgotten_room(room_id)
return HTTPStatus.OK, ret
return HTTPStatus.OK, result
async def on_DELETE(
self, request: SynapseRequest, room_id: str
@ -408,8 +413,8 @@ class RoomMembersRestServlet(RestServlet):
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
ret = await self.store.get_room(room_id)
if not ret:
room = await self.store.get_room(room_id)
if not room:
raise NotFoundError("Room not found")
members = await self.store.get_users_in_room(room_id)
@ -437,8 +442,8 @@ class RoomStateRestServlet(RestServlet):
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
ret = await self.store.get_room(room_id)
if not ret:
room = await self.store.get_room(room_id)
if not room:
raise NotFoundError("Room not found")
event_ids = await self._storage_controllers.state.get_current_state_ids(room_id)

View File

@ -18,6 +18,8 @@ import secrets
from http import HTTPStatus
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
import attr
from synapse.api.constants import Direction, UserTypes
from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.http.servlet import (
@ -161,11 +163,13 @@ class UsersRestServletV2(RestServlet):
)
# If support for MSC3866 is not enabled, don't show the approval flag.
filter = None
if not self._msc3866_enabled:
for user in users:
del user["approved"]
ret = {"users": users, "total": total}
def _filter(a: attr.Attribute) -> bool:
return a.name != "approved"
ret = {"users": [attr.asdict(u, filter=filter) for u in users], "total": total}
if (start + limit) < total:
ret["next_token"] = str(start + len(users))
@ -626,6 +630,12 @@ class UserRegisterServlet(RestServlet):
if not hmac.compare_digest(want_mac.encode("ascii"), got_mac.encode("ascii")):
raise SynapseError(HTTPStatus.FORBIDDEN, "HMAC incorrect")
should_issue_refresh_token = body.get("refresh_token", False)
if not isinstance(should_issue_refresh_token, bool):
raise SynapseError(
HTTPStatus.BAD_REQUEST, "refresh_token must be a boolean"
)
# Reuse the parts of RegisterRestServlet to reduce code duplication
from synapse.rest.client.register import RegisterRestServlet
@ -641,7 +651,9 @@ class UserRegisterServlet(RestServlet):
approved=True,
)
result = await register._create_registration_details(user_id, body)
result = await register._create_registration_details(
user_id, body, should_issue_refresh_token=should_issue_refresh_token
)
return HTTPStatus.OK, result
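The new `refresh_token` flag threads through the existing shared-secret registration flow. A hedged client-side sketch (the homeserver URL, secret, and credentials are placeholders; the nonce/HMAC steps follow the documented admin registration API):

import hashlib
import hmac

import requests

BASE = "https://homeserver.example"  # placeholder
SHARED_SECRET = b"registration-shared-secret"  # placeholder, from homeserver.yaml

nonce = requests.get(f"{BASE}/_synapse/admin/v1/register").json()["nonce"]

# HMAC-SHA1 over nonce \0 username \0 password \0 admin-flag
mac = hmac.new(SHARED_SECRET, digestmod=hashlib.sha1)
mac.update(nonce.encode("utf8"))
mac.update(b"\x00alice\x00wonderland\x00notadmin")

resp = requests.post(
    f"{BASE}/_synapse/admin/v1/register",
    json={
        "nonce": nonce,
        "username": "alice",
        "password": "wonderland",
        "admin": False,
        "refresh_token": True,  # must be a boolean now, else the request 400s
        "mac": mac.hexdigest(),
    },
)
# On success the response carries a refresh_token alongside the access_token.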
@ -1266,6 +1278,46 @@ class AccountDataRestServlet(RestServlet):
}
class UserReplaceMasterCrossSigningKeyRestServlet(RestServlet):
"""Allow a given user to replace their master cross-signing key without UIA.
This replacement is permitted for a limited period (currently 10 minutes).
While this is exposed via the admin API, this is intended for use by the
Matrix Authentication Service rather than server admins.
"""
PATTERNS = admin_patterns(
"/users/(?P<user_id>[^/]*)/_allow_cross_signing_replacement_without_uia"
)
REPLACEMENT_PERIOD_MS = 10 * 60 * 1000 # 10 minutes
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastores().main
async def on_POST(
self,
request: SynapseRequest,
user_id: str,
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self._auth, request)
if user_id is None:
raise NotFoundError("User not found")
timestamp = (
await self._store.allow_master_cross_signing_key_replacement_without_uia(
user_id, self.REPLACEMENT_PERIOD_MS
)
)
if timestamp is None:
raise NotFoundError("User has no master cross-signing key")
return HTTPStatus.OK, {"updatable_without_uia_before_ms": timestamp}
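A minimal sketch of calling the new endpoint (base URL and token are placeholders). The response names the millisecond deadline before which the user's master cross-signing key may be replaced without user-interactive auth:

import requests

BASE = "https://homeserver.example"  # placeholder
ADMIN_TOKEN = "syt_..."  # placeholder admin access token

resp = requests.post(
    f"{BASE}/_synapse/admin/v1/users/@alice:example.com"
    "/_allow_cross_signing_replacement_without_uia",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
print(resp.json())
# e.g. {"updatable_without_uia_before_ms": 1702400000000}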
class UserByExternalId(RestServlet):
"""Find a user based on an external ID from an auth provider"""

View File

@ -299,19 +299,16 @@ class DeactivateAccountRestServlet(RestServlet):
requester = await self.auth.get_user_by_req(request)
# allow ASes to deactivate their own users
if requester.app_service:
await self._deactivate_account_handler.deactivate_account(
requester.user.to_string(), body.erase, requester
# allow ASes to deactivate their own users:
# ASes don't need user-interactive auth
if not requester.app_service:
await self.auth_handler.validate_user_via_ui_auth(
requester,
request,
body.dict(exclude_unset=True),
"deactivate your account",
)
return 200, {}
await self.auth_handler.validate_user_via_ui_auth(
requester,
request,
body.dict(exclude_unset=True),
"deactivate your account",
)
result = await self._deactivate_account_handler.deactivate_account(
requester.user.to_string(), body.erase, requester, id_server=body.id_server
)

View File

@ -147,7 +147,7 @@ class ClientDirectoryListServer(RestServlet):
if room is None:
raise NotFoundError("Unknown room")
return 200, {"visibility": "public" if room["is_public"] else "private"}
return 200, {"visibility": "public" if room[0] else "private"}
class PutBody(RequestBodyModel):
visibility: Literal["public", "private"] = "public"

View File

@ -376,9 +376,10 @@ class SigningKeyUploadServlet(RestServlet):
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
is_cross_signing_setup = (
await self.e2e_keys_handler.is_cross_signing_set_up_for_user(user_id)
)
(
is_cross_signing_setup,
master_key_updatable_without_uia,
) = await self.e2e_keys_handler.check_cross_signing_setup(user_id)
# Before MSC3967 we required UIA both when setting up cross signing for the
# first time and when resetting the device signing key. With MSC3967 we only
@ -386,9 +387,14 @@ class SigningKeyUploadServlet(RestServlet):
# time. Because there is no UIA in MSC3861, for now we throw an error if the
# user tries to reset the device signing key when MSC3861 is enabled, but allow
# first-time setup.
#
# XXX: We now have a get-out clause by which MAS can temporarily mark the master
# key as replaceable. It should do its own equivalent of user interactive auth
# before doing so.
if self.hs.config.experimental.msc3861.enabled:
# There is no way to reset the device signing key with MSC3861
if is_cross_signing_setup:
# The auth service has to explicitly mark the master key as replaceable
# without UIA to reset the device signing key with MSC3861.
if is_cross_signing_setup and not master_key_updatable_without_uia:
raise SynapseError(
HTTPStatus.NOT_IMPLEMENTED,
"Resetting cross signing keys is not yet supported with MSC3861",

View File

@ -115,6 +115,7 @@ class LoginRestServlet(RestServlet):
self.registration_handler = hs.get_registration_handler()
self._sso_handler = hs.get_sso_handler()
self._spam_checker = hs.get_module_api_callbacks().spam_checker
self._account_validity_handler = hs.get_account_validity_handler()
self._well_known_builder = WellKnownBuilder(hs)
self._address_ratelimiter = Ratelimiter(
@ -470,6 +471,13 @@ class LoginRestServlet(RestServlet):
device_id=device_id,
)
# execute the callback
await self._account_validity_handler.on_user_login(
user_id,
auth_provider_type=login_submission.get("type"),
auth_provider_id=auth_provider_id,
)
if valid_until_ms is not None:
expires_in_ms = valid_until_ms - self.clock.time_msec()
result["expires_in_ms"] = expires_in_ms

View File

@ -13,12 +13,17 @@
# limitations under the License.
""" This module contains REST servlets to do with profile: /profile/<paths> """
from http import HTTPStatus
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import Codes, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.servlet import (
RestServlet,
parse_boolean,
parse_json_object_from_request,
)
from synapse.http.site import SynapseRequest
from synapse.rest.client._base import client_patterns
from synapse.types import JsonDict, UserID
@ -27,6 +32,20 @@ if TYPE_CHECKING:
from synapse.server import HomeServer
def _read_propagate(hs: "HomeServer", request: SynapseRequest) -> bool:
# This will always be set by the time Twisted calls us.
assert request.args is not None
propagate = True
if hs.config.experimental.msc4069_profile_inhibit_propagation:
do_propagate = request.args.get(b"org.matrix.msc4069.propagate")
if do_propagate is not None:
propagate = parse_boolean(
request, "org.matrix.msc4069.propagate", default=False
)
return propagate
class ProfileDisplaynameRestServlet(RestServlet):
PATTERNS = client_patterns("/profile/(?P<user_id>[^/]*)/displayname", v1=True)
CATEGORY = "Event sending requests"
@ -80,7 +99,11 @@ class ProfileDisplaynameRestServlet(RestServlet):
errcode=Codes.BAD_JSON,
)
await self.profile_handler.set_displayname(user, requester, new_name, is_admin)
propagate = _read_propagate(self.hs, request)
await self.profile_handler.set_displayname(
user, requester, new_name, is_admin, propagate=propagate
)
return 200, {}
@ -135,8 +158,10 @@ class ProfileAvatarURLRestServlet(RestServlet):
400, "Missing key 'avatar_url'", errcode=Codes.MISSING_PARAM
)
propagate = _read_propagate(self.hs, request)
await self.profile_handler.set_avatar_url(
user, requester, new_avatar_url, is_admin
user, requester, new_avatar_url, is_admin, propagate=propagate
)
return 200, {}
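Client-side, the MSC4069 knob is just a query parameter on the existing profile endpoints. A hedged sketch, assuming the server has `msc4069_profile_inhibit_propagation` enabled (URL and token are placeholders):

import requests

BASE = "https://homeserver.example"  # placeholder
TOKEN = "syt_..."  # placeholder access token

requests.put(
    f"{BASE}/_matrix/client/v3/profile/@alice:example.com/displayname",
    params={"org.matrix.msc4069.propagate": "false"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"displayname": "Alice (away)"},
)
# The new displayname is stored, but with propagate=false the server skips
# copying it into Alice's m.room.member events in rooms she has already joined.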

View File

@ -80,6 +80,9 @@ class VersionsRestServlet(RestServlet):
"v1.4",
"v1.5",
"v1.6",
"v1.7",
"v1.8",
"v1.9",
],
# as per MSC1497:
"unstable_features": {
@ -126,6 +129,8 @@ class VersionsRestServlet(RestServlet):
"org.matrix.msc3981": self.config.experimental.msc3981_recurse_relations,
# Adds support for deleting account data.
"org.matrix.msc3391": self.config.experimental.msc3391_enabled,
# Allows clients to inhibit profile update propagation.
"org.matrix.msc4069": self.config.experimental.msc4069_profile_inhibit_propagation,
},
},
)

View File

@ -0,0 +1,83 @@
# Copyright 2023 Beeper Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import re
from typing import TYPE_CHECKING
from synapse.api.errors import LimitExceededError
from synapse.api.ratelimiting import Ratelimiter
from synapse.http.server import respond_with_json
from synapse.http.servlet import RestServlet
from synapse.http.site import SynapseRequest
if TYPE_CHECKING:
from synapse.media.media_repository import MediaRepository
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class CreateResource(RestServlet):
PATTERNS = [re.compile("/_matrix/media/v1/create")]
def __init__(self, hs: "HomeServer", media_repo: "MediaRepository"):
super().__init__()
self.media_repo = media_repo
self.clock = hs.get_clock()
self.auth = hs.get_auth()
self.max_pending_media_uploads = hs.config.media.max_pending_media_uploads
# A rate limiter for creating new media IDs.
self._create_media_rate_limiter = Ratelimiter(
store=hs.get_datastores().main,
clock=self.clock,
cfg=hs.config.ratelimiting.rc_media_create,
)
async def on_POST(self, request: SynapseRequest) -> None:
requester = await self.auth.get_user_by_req(request)
# If the create media requests for the user are over the limit, drop them.
await self._create_media_rate_limiter.ratelimit(requester)
(
reached_pending_limit,
first_expiration_ts,
) = await self.media_repo.reached_pending_media_limit(requester.user)
if reached_pending_limit:
raise LimitExceededError(
limiter_name="max_pending_media_uploads",
retry_after_ms=first_expiration_ts - self.clock.time_msec(),
)
content_uri, unused_expires_at = await self.media_repo.create_media_id(
requester.user
)
logger.info(
"Created Media URI %r that if unused will expire at %d",
content_uri,
unused_expires_at,
)
respond_with_json(
request,
200,
{
"content_uri": content_uri,
"unused_expires_at": unused_expires_at,
},
send_cors=True,
)

View File

@ -17,9 +17,13 @@ import re
from typing import TYPE_CHECKING, Optional
from synapse.http.server import set_corp_headers, set_cors_headers
from synapse.http.servlet import RestServlet, parse_boolean
from synapse.http.servlet import RestServlet, parse_boolean, parse_integer
from synapse.http.site import SynapseRequest
from synapse.media._base import respond_404
from synapse.media._base import (
DEFAULT_MAX_TIMEOUT_MS,
MAXIMUM_ALLOWED_MAX_TIMEOUT_MS,
respond_404,
)
from synapse.util.stringutils import parse_and_validate_server_name
if TYPE_CHECKING:
@ -65,12 +69,16 @@ class DownloadResource(RestServlet):
)
# Limited non-standard form of CSP for IE11
request.setHeader(b"X-Content-Security-Policy", b"sandbox;")
request.setHeader(
b"Referrer-Policy",
b"no-referrer",
request.setHeader(b"Referrer-Policy", b"no-referrer")
max_timeout_ms = parse_integer(
request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
)
max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
if self._is_mine_server_name(server_name):
await self.media_repo.get_local_media(request, media_id, file_name)
await self.media_repo.get_local_media(
request, media_id, file_name, max_timeout_ms
)
else:
allow_remote = parse_boolean(request, "allow_remote", default=True)
if not allow_remote:
@ -83,5 +91,5 @@ class DownloadResource(RestServlet):
return
await self.media_repo.get_remote_media(
request, server_name, media_id, file_name
request, server_name, media_id, file_name, max_timeout_ms
)
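With the clamp above, a client can ask a download to wait briefly for media that is still being uploaded asynchronously, without being able to hold the connection open past `MAXIMUM_ALLOWED_MAX_TIMEOUT_MS`. A sketch (URL, token, and media ID are placeholders):

import requests

BASE = "https://homeserver.example"  # placeholder
TOKEN = "syt_..."  # placeholder access token

# Ask the server to wait up to 20s for the media to become available; the
# server clamps this to MAXIMUM_ALLOWED_MAX_TIMEOUT_MS regardless.
resp = requests.get(
    f"{BASE}/_matrix/media/v3/download/homeserver.example/some_media_id",
    params={"timeout_ms": 20000},
    headers={"Authorization": f"Bearer {TOKEN}"},
)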

View File

@ -18,10 +18,11 @@ from synapse.config._base import ConfigError
from synapse.http.server import HttpServer, JsonResource
from .config_resource import MediaConfigResource
from .create_resource import CreateResource
from .download_resource import DownloadResource
from .preview_url_resource import PreviewUrlResource
from .thumbnail_resource import ThumbnailResource
from .upload_resource import UploadResource
from .upload_resource import AsyncUploadServlet, UploadServlet
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -91,8 +92,9 @@ class MediaRepositoryResource(JsonResource):
# Note that many of these should not exist as v1 endpoints, but empirically
# a lot of traffic still goes to them.
UploadResource(hs, media_repo).register(http_server)
CreateResource(hs, media_repo).register(http_server)
UploadServlet(hs, media_repo).register(http_server)
AsyncUploadServlet(hs, media_repo).register(http_server)
DownloadResource(hs, media_repo).register(http_server)
ThumbnailResource(hs, media_repo, media_repo.media_storage).register(
http_server

View File

@ -23,6 +23,8 @@ from synapse.http.server import respond_with_json, set_corp_headers, set_cors_he
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.media._base import (
DEFAULT_MAX_TIMEOUT_MS,
MAXIMUM_ALLOWED_MAX_TIMEOUT_MS,
FileInfo,
ThumbnailInfo,
respond_404,
@ -75,15 +77,19 @@ class ThumbnailResource(RestServlet):
method = parse_string(request, "method", "scale")
# TODO Parse the Accept header to get a prioritised list of thumbnail types.
m_type = "image/png"
max_timeout_ms = parse_integer(
request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
)
max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
if self._is_mine_server_name(server_name):
if self.dynamic_thumbnails:
await self._select_or_generate_local_thumbnail(
request, media_id, width, height, method, m_type
request, media_id, width, height, method, m_type, max_timeout_ms
)
else:
await self._respond_local_thumbnail(
request, media_id, width, height, method, m_type
request, media_id, width, height, method, m_type, max_timeout_ms
)
self.media_repo.mark_recently_accessed(None, media_id)
else:
@ -95,14 +101,21 @@ class ThumbnailResource(RestServlet):
respond_404(request)
return
if self.dynamic_thumbnails:
await self._select_or_generate_remote_thumbnail(
request, server_name, media_id, width, height, method, m_type
)
else:
await self._respond_remote_thumbnail(
request, server_name, media_id, width, height, method, m_type
)
remote_resp_function = (
self._select_or_generate_remote_thumbnail
if self.dynamic_thumbnails
else self._respond_remote_thumbnail
)
await remote_resp_function(
request,
server_name,
media_id,
width,
height,
method,
m_type,
max_timeout_ms,
)
self.media_repo.mark_recently_accessed(server_name, media_id)
async def _respond_local_thumbnail(
@ -113,15 +126,12 @@ class ThumbnailResource(RestServlet):
height: int,
method: str,
m_type: str,
max_timeout_ms: int,
) -> None:
media_info = await self.store.get_local_media(media_id)
media_info = await self.media_repo.get_local_media_info(
request, media_id, max_timeout_ms
)
if not media_info:
respond_404(request)
return
if media_info["quarantined_by"]:
logger.info("Media is quarantined")
respond_404(request)
return
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
@ -134,7 +144,7 @@ class ThumbnailResource(RestServlet):
thumbnail_infos,
media_id,
media_id,
url_cache=bool(media_info["url_cache"]),
url_cache=bool(media_info.url_cache),
server_name=None,
)
@ -146,15 +156,13 @@ class ThumbnailResource(RestServlet):
desired_height: int,
desired_method: str,
desired_type: str,
max_timeout_ms: int,
) -> None:
media_info = await self.store.get_local_media(media_id)
media_info = await self.media_repo.get_local_media_info(
request, media_id, max_timeout_ms
)
if not media_info:
respond_404(request)
return
if media_info["quarantined_by"]:
logger.info("Media is quarantined")
respond_404(request)
return
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
@ -168,7 +176,7 @@ class ThumbnailResource(RestServlet):
file_info = FileInfo(
server_name=None,
file_id=media_id,
url_cache=media_info["url_cache"],
url_cache=bool(media_info.url_cache),
thumbnail=info,
)
@ -188,7 +196,7 @@ class ThumbnailResource(RestServlet):
desired_height,
desired_method,
desired_type,
url_cache=bool(media_info["url_cache"]),
url_cache=bool(media_info.url_cache),
)
if file_path:
@ -206,14 +214,20 @@ class ThumbnailResource(RestServlet):
desired_height: int,
desired_method: str,
desired_type: str,
max_timeout_ms: int,
) -> None:
media_info = await self.media_repo.get_remote_media_info(server_name, media_id)
media_info = await self.media_repo.get_remote_media_info(
server_name, media_id, max_timeout_ms
)
if not media_info:
respond_404(request)
return
thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
)
file_id = media_info["filesystem_id"]
file_id = media_info.filesystem_id
for info in thumbnail_infos:
t_w = info.width == desired_width
@ -224,7 +238,7 @@ class ThumbnailResource(RestServlet):
if t_w and t_h and t_method and t_type:
file_info = FileInfo(
server_name=server_name,
file_id=media_info["filesystem_id"],
file_id=file_id,
thumbnail=info,
)
@ -263,11 +277,16 @@ class ThumbnailResource(RestServlet):
height: int,
method: str,
m_type: str,
max_timeout_ms: int,
) -> None:
# TODO: Don't download the whole remote file
# We should proxy the thumbnail from the remote server instead of
# downloading the remote file and generating our own thumbnails.
media_info = await self.media_repo.get_remote_media_info(server_name, media_id)
media_info = await self.media_repo.get_remote_media_info(
server_name, media_id, max_timeout_ms
)
if not media_info:
return
thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
@ -280,7 +299,7 @@ class ThumbnailResource(RestServlet):
m_type,
thumbnail_infos,
media_id,
media_info["filesystem_id"],
media_info.filesystem_id,
url_cache=False,
server_name=server_name,
)

View File

@ -15,7 +15,7 @@
import logging
import re
from typing import IO, TYPE_CHECKING, Dict, List, Optional
from typing import IO, TYPE_CHECKING, Dict, List, Optional, Tuple
from synapse.api.errors import Codes, SynapseError
from synapse.http.server import respond_with_json
@ -29,23 +29,24 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# The name of the lock to use when uploading media.
_UPLOAD_MEDIA_LOCK_NAME = "upload_media"
class UploadResource(RestServlet):
PATTERNS = [re.compile("/_matrix/media/(r0|v3|v1)/upload")]
class BaseUploadServlet(RestServlet):
def __init__(self, hs: "HomeServer", media_repo: "MediaRepository"):
super().__init__()
self.media_repo = media_repo
self.filepaths = media_repo.filepaths
self.store = hs.get_datastores().main
self.clock = hs.get_clock()
self.server_name = hs.hostname
self.auth = hs.get_auth()
self.max_upload_size = hs.config.media.max_upload_size
self.clock = hs.get_clock()
async def on_POST(self, request: SynapseRequest) -> None:
requester = await self.auth.get_user_by_req(request)
def _get_file_metadata(
self, request: SynapseRequest
) -> Tuple[int, Optional[str], str]:
raw_content_length = request.getHeader("Content-Length")
if raw_content_length is None:
raise SynapseError(msg="Request must specify a Content-Length", code=400)
@ -88,6 +89,16 @@ class UploadResource(RestServlet):
# disposition = headers.getRawHeaders(b"Content-Disposition")[0]
# TODO(markjh): parse content-disposition
return content_length, upload_name, media_type
class UploadServlet(BaseUploadServlet):
PATTERNS = [re.compile("/_matrix/media/(r0|v3|v1)/upload$")]
async def on_POST(self, request: SynapseRequest) -> None:
requester = await self.auth.get_user_by_req(request)
content_length, upload_name, media_type = self._get_file_metadata(request)
try:
content: IO = request.content # type: ignore
content_uri = await self.media_repo.create_content(
@ -103,3 +114,53 @@ class UploadResource(RestServlet):
respond_with_json(
request, 200, {"content_uri": str(content_uri)}, send_cors=True
)
class AsyncUploadServlet(BaseUploadServlet):
PATTERNS = [
re.compile(
"/_matrix/media/v3/upload/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
)
]
async def on_PUT(
self, request: SynapseRequest, server_name: str, media_id: str
) -> None:
requester = await self.auth.get_user_by_req(request)
if server_name != self.server_name:
raise SynapseError(
404,
"Non-local server name specified",
errcode=Codes.NOT_FOUND,
)
lock = await self.store.try_acquire_lock(_UPLOAD_MEDIA_LOCK_NAME, media_id)
if not lock:
raise SynapseError(
409,
"Media ID cannot be overwritten",
errcode=Codes.CANNOT_OVERWRITE_MEDIA,
)
async with lock:
await self.media_repo.verify_can_upload(media_id, requester.user)
content_length, upload_name, media_type = self._get_file_metadata(request)
try:
content: IO = request.content # type: ignore
await self.media_repo.update_content(
media_id,
media_type,
upload_name,
content,
content_length,
requester.user,
)
except SpamMediaException:
# For uploading of media we want to respond with a 400, instead of
# the default 404, as that would just be confusing.
raise SynapseError(400, "Bad content")
logger.info("Uploaded content for media ID %r", media_id)
respond_with_json(request, 200, {}, send_cors=True)
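Together, `CreateResource` and `AsyncUploadServlet` give a two-phase upload flow: reserve a media ID first, send the bytes later. A hedged end-to-end sketch (server name, token, and filename are placeholders):

import requests

BASE = "https://homeserver.example"  # placeholder
TOKEN = "syt_..."  # placeholder access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Reserve a media ID (rate limited via rc_media_create, and capped by
#    max_pending_media_uploads until reservations expire or are used).
created = requests.post(f"{BASE}/_matrix/media/v1/create", headers=HEADERS).json()
# e.g. {"content_uri": "mxc://homeserver.example/AbCdEf", "unused_expires_at": ...}
media_id = created["content_uri"].rsplit("/", 1)[1]

# 2. The mxc:// URI can already be referenced in events; the bytes follow
#    later via the new PUT endpoint, which locks the media ID while writing.
with open("photo.jpg", "rb") as f:
    requests.put(
        f"{BASE}/_matrix/media/v3/upload/homeserver.example/{media_id}",
        headers={**HEADERS, "Content-Type": "image/jpeg"},
        data=f,
    )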

View File

@ -178,6 +178,8 @@ class ServerNoticesManager:
"avatar_url": self._config.servernotices.server_notices_mxid_avatar_url,
}
# `ignore_forced_encryption` is used to bypass `encryption_enabled_by_default_for_room_type`
# setting if it is set, since the server notices will not be encrypted anyway.
room_id, _, _ = await self._room_creation_handler.create_room(
requester,
config={
@ -187,6 +189,7 @@ class ServerNoticesManager:
},
ratelimit=False,
creator_join_profile=join_profile,
ignore_forced_encryption=True,
)
self.maybe_get_notice_room_for_user.invalidate((user_id,))
@ -221,13 +224,27 @@ class ServerNoticesManager:
if room.room_id == room_id:
return
user_id_obj = UserID.from_string(user_id)
await self._room_member_handler.update_membership(
requester=requester,
target=UserID.from_string(user_id),
target=user_id_obj,
room_id=room_id,
action="invite",
ratelimit=False,
)
if self._config.servernotices.server_notices_auto_join:
user_requester = create_requester(
user_id, authenticated_entity=self._server_name
)
await self._room_member_handler.update_membership(
requester=user_requester,
target=user_id_obj,
room_id=room_id,
action="join",
ratelimit=False,
)
async def _update_notice_user_profile_if_changed(
self,
requester: Requester,
@ -268,5 +285,6 @@ class ServerNoticesManager:
target=UserID.from_string(self.server_notices_mxid),
room_id=room_id,
action="join",
ratelimit=False,
content={"displayname": display_name, "avatar_url": avatar_url},
)

View File

@ -28,6 +28,7 @@ from typing import (
Sequence,
Tuple,
Type,
cast,
)
import attr
@ -48,7 +49,11 @@ else:
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.database import DatabasePool, LoggingTransaction
from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
LoggingTransaction,
)
logger = logging.getLogger(__name__)
@ -488,14 +493,14 @@ class BackgroundUpdater:
True if we have finished running all the background updates, otherwise False
"""
def get_background_updates_txn(txn: Cursor) -> List[Dict[str, Any]]:
def get_background_updates_txn(txn: Cursor) -> List[Tuple[str, Optional[str]]]:
txn.execute(
"""
SELECT update_name, depends_on FROM background_updates
ORDER BY ordering, update_name
"""
)
return self.db_pool.cursor_to_dict(txn)
return cast(List[Tuple[str, Optional[str]]], txn.fetchall())
if not self._current_background_update:
all_pending_updates = await self.db_pool.runInteraction(
@ -507,14 +512,13 @@ class BackgroundUpdater:
return True
# find the first update which isn't dependent on another one in the queue.
pending = {update["update_name"] for update in all_pending_updates}
for upd in all_pending_updates:
depends_on = upd["depends_on"]
pending = {update_name for update_name, depends_on in all_pending_updates}
for update_name, depends_on in all_pending_updates:
if not depends_on or depends_on not in pending:
break
logger.info(
"Not starting on bg update %s until %s is done",
upd["update_name"],
update_name,
depends_on,
)
else:
@ -524,7 +528,7 @@ class BackgroundUpdater:
"another: dependency cycle?"
)
self._current_background_update = upd["update_name"]
self._current_background_update = update_name
# We have a background update to run, otherwise we would have returned
# early.
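The selection logic above is a for/else over the `(update_name, depends_on)` tuples. A self-contained version of just that selection, for clarity (the function name is ours, not Synapse's):

from typing import List, Optional, Tuple

def pick_next_update(updates: List[Tuple[str, Optional[str]]]) -> str:
    """Pick the first update whose dependency is not itself still pending.

    `updates` mirrors the rows returned by get_background_updates_txn,
    ordered by (ordering, update_name).
    """
    pending = {name for name, _ in updates}
    for name, depends_on in updates:
        if not depends_on or depends_on not in pending:
            return name
    raise Exception(
        "Unable to find a background update which doesn't depend on "
        "another: dependency cycle?"
    )

assert pick_next_update([("b", "a"), ("a", None)]) == "a"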
@ -746,10 +750,10 @@ class BackgroundUpdater:
The named index will be dropped upon completion of the new index.
"""
def create_index_psql(conn: Connection) -> None:
def create_index_psql(conn: "LoggingDatabaseConnection") -> None:
conn.rollback()
# postgres insists on autocommit for the index
conn.set_session(autocommit=True) # type: ignore
conn.engine.attempt_to_set_autocommit(conn.conn, True)
try:
c = conn.cursor()
@ -793,9 +797,9 @@ class BackgroundUpdater:
undo_timeout_sql = f"SET statement_timeout = {default_timeout}"
conn.cursor().execute(undo_timeout_sql)
conn.set_session(autocommit=False) # type: ignore
conn.engine.attempt_to_set_autocommit(conn.conn, False)
def create_index_sqlite(conn: Connection) -> None:
def create_index_sqlite(conn: "LoggingDatabaseConnection") -> None:
# Sqlite doesn't support concurrent creation of indexes.
#
# We assume that sqlite doesn't give us invalid indices; however
@ -825,7 +829,9 @@ class BackgroundUpdater:
c.execute(sql)
if isinstance(self.db_pool.engine, engines.PostgresEngine):
runner: Optional[Callable[[Connection], None]] = create_index_psql
runner: Optional[
Callable[[LoggingDatabaseConnection], None]
] = create_index_psql
elif psql_only:
runner = None
else:

View File

@ -542,13 +542,15 @@ class EventsPersistenceStorageController:
return await res.get_state(self._state_controller, StateFilter.all())
async def _persist_event_batch(
self, _room_id: str, task: _PersistEventsTask
self, room_id: str, task: _PersistEventsTask
) -> Dict[str, str]:
"""Callback for the _event_persist_queue
Calculates the change to current state and forward extremities, and
persists the given events along with those updates.
Assumes that we are only persisting events for one room at a time.
Returns:
A dictionary of event ID to event ID we didn't persist as we already
had another event persisted with the same TXN ID.
@ -594,140 +596,23 @@ class EventsPersistenceStorageController:
# We can't easily parallelize these since different chunks
# might contain the same event. :(
# NB: Assumes that we are only persisting events for one room
# at a time.
# map room_id->set[event_ids] giving the new forward
# extremities in each room
new_forward_extremities: Dict[str, Set[str]] = {}
# map room_id->(to_delete, to_insert) where to_delete is a list
# of type/state keys to remove from current state, and to_insert
# is a map (type,key)->event_id giving the state delta in each
# room
state_delta_for_room: Dict[str, DeltaState] = {}
new_forward_extremities = None
state_delta_for_room = None
if not backfilled:
with Measure(self._clock, "_calculate_state_and_extrem"):
# Work out the new "current state" for each room.
# Work out the new "current state" for the room.
# We do this by working out what the new extremities are and then
# calculating the state from that.
events_by_room: Dict[str, List[Tuple[EventBase, EventContext]]] = {}
for event, context in chunk:
events_by_room.setdefault(event.room_id, []).append(
(event, context)
)
for room_id, ev_ctx_rm in events_by_room.items():
latest_event_ids = (
await self.main_store.get_latest_event_ids_in_room(room_id)
)
new_latest_event_ids = await self._calculate_new_extremities(
room_id, ev_ctx_rm, latest_event_ids
)
if new_latest_event_ids == latest_event_ids:
# No change in extremities, so no change in state
continue
# there should always be at least one forward extremity.
# (except during the initial persistence of the send_join
# results, in which case there will be no existing
# extremities, so we'll `continue` above and skip this bit.)
assert new_latest_event_ids, "No forward extremities left!"
new_forward_extremities[room_id] = new_latest_event_ids
len_1 = (
len(latest_event_ids) == 1
and len(new_latest_event_ids) == 1
)
if len_1:
all_single_prev_not_state = all(
len(event.prev_event_ids()) == 1
and not event.is_state()
for event, ctx in ev_ctx_rm
)
# Don't bother calculating state if they're just
# a long chain of single ancestor non-state events.
if all_single_prev_not_state:
continue
state_delta_counter.inc()
if len(new_latest_event_ids) == 1:
state_delta_single_event_counter.inc()
# This is a fairly handwavey check to see if we could
# have guessed what the delta would have been when
# processing one of these events.
# What we're interested in is if the latest extremities
# were the same when we created the event as they are
# now. When this server creates a new event (as opposed
# to receiving it over federation) it will use the
# forward extremities as the prev_events, so we can
# guess this by looking at the prev_events and checking
# if they match the current forward extremities.
for ev, _ in ev_ctx_rm:
prev_event_ids = set(ev.prev_event_ids())
if latest_event_ids == prev_event_ids:
state_delta_reuse_delta_counter.inc()
break
logger.debug("Calculating state delta for room %s", room_id)
with Measure(
self._clock, "persist_events.get_new_state_after_events"
):
res = await self._get_new_state_after_events(
room_id,
ev_ctx_rm,
latest_event_ids,
new_latest_event_ids,
)
current_state, delta_ids, new_latest_event_ids = res
# there should always be at least one forward extremity.
# (except during the initial persistence of the send_join
# results, in which case there will be no existing
# extremities, so we'll `continue` above and skip this bit.)
assert new_latest_event_ids, "No forward extremities left!"
new_forward_extremities[room_id] = new_latest_event_ids
# If either are not None then there has been a change,
# and we need to work out the delta (or use that
# given)
delta = None
if delta_ids is not None:
# If there is a delta we know that we've
# only added or replaced state, never
# removed keys entirely.
delta = DeltaState([], delta_ids)
elif current_state is not None:
with Measure(
self._clock, "persist_events.calculate_state_delta"
):
delta = await self._calculate_state_delta(
room_id, current_state
)
if delta:
# If we have a change of state then lets check
# whether we're actually still a member of the room,
# or if our last user left. If we're no longer in
# the room then we delete the current state and
# extremities.
is_still_joined = await self._is_server_still_joined(
room_id,
ev_ctx_rm,
delta,
)
if not is_still_joined:
logger.info("Server no longer in room %s", room_id)
delta.no_longer_in_room = True
state_delta_for_room[room_id] = delta
(
new_forward_extremities,
state_delta_for_room,
) = await self._calculate_new_forward_extremities_and_state_delta(
room_id, chunk
)
await self.persist_events_store._persist_events_and_state_updates(
room_id,
chunk,
state_delta_for_room=state_delta_for_room,
new_forward_extremities=new_forward_extremities,
@ -737,6 +622,117 @@ class EventsPersistenceStorageController:
return replaced_events
async def _calculate_new_forward_extremities_and_state_delta(
self, room_id: str, ev_ctx_rm: List[Tuple[EventBase, EventContext]]
) -> Tuple[Optional[Set[str]], Optional[DeltaState]]:
"""Calculates the new forward extremities and state delta for a room
given events to persist.
Assumes that we are only persisting events for one room at a time.
Returns:
A tuple of:
A set of str giving the new forward extremities of the room
The state delta for the room.
"""
latest_event_ids = await self.main_store.get_latest_event_ids_in_room(room_id)
new_latest_event_ids = await self._calculate_new_extremities(
room_id, ev_ctx_rm, latest_event_ids
)
if new_latest_event_ids == latest_event_ids:
# No change in extremities, so no change in state
return (None, None)
# there should always be at least one forward extremity.
# (except during the initial persistence of the send_join
# results, in which case there will be no existing
# extremities, so we'll `continue` above and skip this bit.)
assert new_latest_event_ids, "No forward extremities left!"
new_forward_extremities = new_latest_event_ids
len_1 = len(latest_event_ids) == 1 and len(new_latest_event_ids) == 1
if len_1:
all_single_prev_not_state = all(
len(event.prev_event_ids()) == 1 and not event.is_state()
for event, ctx in ev_ctx_rm
)
# Don't bother calculating state if they're just
# a long chain of single ancestor non-state events.
if all_single_prev_not_state:
return (new_forward_extremities, None)
state_delta_counter.inc()
if len(new_latest_event_ids) == 1:
state_delta_single_event_counter.inc()
# This is a fairly handwavey check to see if we could
# have guessed what the delta would have been when
# processing one of these events.
# What we're interested in is if the latest extremities
# were the same when we created the event as they are
# now. When this server creates a new event (as opposed
# to receiving it over federation) it will use the
# forward extremities as the prev_events, so we can
# guess this by looking at the prev_events and checking
# if they match the current forward extremities.
for ev, _ in ev_ctx_rm:
prev_event_ids = set(ev.prev_event_ids())
if latest_event_ids == prev_event_ids:
state_delta_reuse_delta_counter.inc()
break
logger.debug("Calculating state delta for room %s", room_id)
with Measure(self._clock, "persist_events.get_new_state_after_events"):
res = await self._get_new_state_after_events(
room_id,
ev_ctx_rm,
latest_event_ids,
new_latest_event_ids,
)
current_state, delta_ids, new_latest_event_ids = res
# there should always be at least one forward extremity.
# (except during the initial persistence of the send_join
# results, in which case there will be no existing
# extremities, so we'll `continue` above and skip this bit.)
assert new_latest_event_ids, "No forward extremities left!"
new_forward_extremities = new_latest_event_ids
# If either are not None then there has been a change,
# and we need to work out the delta (or use that
# given)
delta = None
if delta_ids is not None:
# If there is a delta we know that we've
# only added or replaced state, never
# removed keys entirely.
delta = DeltaState([], delta_ids)
elif current_state is not None:
with Measure(self._clock, "persist_events.calculate_state_delta"):
delta = await self._calculate_state_delta(room_id, current_state)
if delta:
# If we have a change of state then lets check
# whether we're actually still a member of the room,
# or if our last user left. If we're no longer in
# the room then we delete the current state and
# extremities.
is_still_joined = await self._is_server_still_joined(
room_id,
ev_ctx_rm,
delta,
)
if not is_still_joined:
logger.info("Server no longer in room %s", room_id)
delta.no_longer_in_room = True
return (new_forward_extremities, delta)
async def _calculate_new_extremities(
self,
room_id: str,

View File

@ -18,7 +18,6 @@ import logging
import time
import types
from collections import defaultdict
from sys import intern
from time import monotonic as monotonic_time
from typing import (
TYPE_CHECKING,
@ -1042,20 +1041,6 @@ class DatabasePool:
self._db_pool.runWithConnection(inner_func, *args, **kwargs)
)
@staticmethod
def cursor_to_dict(cursor: Cursor) -> List[Dict[str, Any]]:
"""Converts a SQL cursor into an list of dicts.
Args:
cursor: The DBAPI cursor which has executed a query.
Returns:
A list of dicts where the key is the column header.
"""
assert cursor.description is not None, "cursor.description was None"
col_headers = [intern(str(column[0])) for column in cursor.description]
results = [dict(zip(col_headers, row)) for row in cursor]
return results
async def execute(self, desc: str, query: str, *args: Any) -> List[Tuple[Any, ...]]:
"""Runs a single query for a result set.
@ -1131,8 +1116,8 @@ class DatabasePool:
def simple_insert_many_txn(
txn: LoggingTransaction,
table: str,
keys: Collection[str],
values: Iterable[Iterable[Any]],
keys: Sequence[str],
values: Collection[Iterable[Any]],
) -> None:
"""Executes an INSERT query on the named table.
@ -1145,6 +1130,9 @@ class DatabasePool:
keys: list of column names
values: for each row, a list of values in the same order as `keys`
"""
# If there's nothing to insert, then skip executing the query.
if not values:
return
if isinstance(txn.database_engine, PostgresEngine):
# We use `execute_values` as it can be a lot faster than `execute_batch`,
@ -1416,12 +1404,12 @@ class DatabasePool:
allvalues.update(values)
latter = "UPDATE SET " + ", ".join(k + "=EXCLUDED." + k for k in values)
sql = "INSERT INTO %s (%s) VALUES (%s) ON CONFLICT (%s) %s DO %s" % (
sql = "INSERT INTO %s (%s) VALUES (%s) ON CONFLICT (%s) %sDO %s" % (
table,
", ".join(k for k in allvalues),
", ".join("?" for _ in allvalues),
", ".join(k for k in keyvalues),
f"WHERE {where_clause}" if where_clause else "",
f"WHERE {where_clause} " if where_clause else "",
latter,
)
txn.execute(sql, list(allvalues.values()))
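The one-character move here (`%s DO` plus `WHERE {where_clause}` becoming `%sDO` plus `WHERE {where_clause} `) only changes spacing when there is no where_clause. A quick reconstruction with example names:

def build_upsert_sql(where_clause):
    # Example table and columns; real call sites supply their own.
    table = "example_table"
    allvalues = {"k1": 1, "k2": 2, "v1": 3}
    keyvalues = {"k1": 1, "k2": 2}
    values = {"v1": 3}
    latter = "UPDATE SET " + ", ".join(k + "=EXCLUDED." + k for k in values)
    return "INSERT INTO %s (%s) VALUES (%s) ON CONFLICT (%s) %sDO %s" % (
        table,
        ", ".join(k for k in allvalues),
        ", ".join("?" for _ in allvalues),
        ", ".join(k for k in keyvalues),
        f"WHERE {where_clause} " if where_clause else "",
        latter,
    )

print(build_upsert_sql(None))
# ... ON CONFLICT (k1, k2) DO UPDATE SET v1=EXCLUDED.v1
# (previously "%s DO" with an empty fragment left a double space before DO)
print(build_upsert_sql("v1 IS NOT NULL"))
# ... ON CONFLICT (k1, k2) WHERE v1 IS NOT NULL DO UPDATE SET v1=EXCLUDED.v1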
@ -1470,7 +1458,7 @@ class DatabasePool:
key_names: Collection[str],
key_values: Collection[Iterable[Any]],
value_names: Collection[str],
value_values: Iterable[Iterable[Any]],
value_values: Collection[Iterable[Any]],
) -> None:
"""
Upsert, many times.
@ -1483,6 +1471,19 @@ class DatabasePool:
value_values: A list of each row's value column values.
Ignored if value_names is empty.
"""
# If there's nothing to upsert, then skip executing the query.
if not key_values:
return
# No value columns, therefore make a blank list so that the following
# zip() works correctly.
if not value_names:
value_values = [() for x in range(len(key_values))]
elif len(value_values) != len(key_values):
raise ValueError(
f"{len(key_values)} key rows and {len(value_values)} value rows: should be the same number."
)
if table not in self._unsafe_to_upsert_tables:
return self.simple_upsert_many_txn_native_upsert(
txn, table, key_names, key_values, value_names, value_values
@ -1517,10 +1518,6 @@ class DatabasePool:
value_values: A list of each row's value column values.
Ignored if value_names is empty.
"""
# No value columns, therefore make a blank list so that the following
# zip() works correctly.
if not value_names:
value_values = [() for x in range(len(key_values))]
# Lock the table just once, to prevent it being done once per row.
# Note that, according to Postgres' documentation, once obtained,
@ -1558,10 +1555,7 @@ class DatabasePool:
allnames.extend(value_names)
if not value_names:
# No value columns, therefore make a blank list so that the
# following zip() works correctly.
latter = "NOTHING"
value_values = [() for x in range(len(key_values))]
else:
latter = "UPDATE SET " + ", ".join(
k + "=EXCLUDED." + k for k in value_names
@ -1603,7 +1597,7 @@ class DatabasePool:
retcols: Collection[str],
allow_none: Literal[False] = False,
desc: str = "simple_select_one",
) -> Dict[str, Any]:
) -> Tuple[Any, ...]:
...
@overload
@ -1614,7 +1608,7 @@ class DatabasePool:
retcols: Collection[str],
allow_none: Literal[True] = True,
desc: str = "simple_select_one",
) -> Optional[Dict[str, Any]]:
) -> Optional[Tuple[Any, ...]]:
...
async def simple_select_one(
@ -1624,7 +1618,7 @@ class DatabasePool:
retcols: Collection[str],
allow_none: bool = False,
desc: str = "simple_select_one",
) -> Optional[Dict[str, Any]]:
) -> Optional[Tuple[Any, ...]]:
"""Executes a SELECT query on the named table, which is expected to
return a single row, returning multiple columns from it.
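`simple_select_one` now hands back a plain tuple rather than a dict, so callers unpack positionally in `retcols` order. A hedged sketch of an adapted caller (the function itself is hypothetical; `users` and its columns are existing Synapse schema):

from typing import Optional, Tuple

async def is_user_admin(db_pool, user_id: str) -> bool:
    # Hypothetical caller, showing the tuple-shaped return value.
    row: Optional[Tuple[bool, bool]] = await db_pool.simple_select_one(
        table="users",
        keyvalues={"name": user_id},
        retcols=("admin", "deactivated"),
        allow_none=True,
    )
    if row is None:
        return False
    admin, deactivated = row  # positional, in the same order as retcols
    return bool(admin) and not bool(deactivated)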
@ -1925,6 +1919,7 @@ class DatabasePool:
Returns:
The results as a list of tuples.
"""
# If there's nothing to select, then skip executing the query.
if not iterable:
return []
@ -2059,13 +2054,13 @@ class DatabasePool:
raise ValueError(
f"{len(key_values)} key rows and {len(value_values)} value rows: should be the same number."
)
# If there is nothing to update, then skip executing the query.
if not key_values:
return
# List of tuples of (value values, then key values)
# (This matches the order needed for the query)
args = [tuple(x) + tuple(y) for x, y in zip(value_values, key_values)]
for ks, vs in zip(key_values, value_values):
args.append(tuple(vs) + tuple(ks))
args = [tuple(vv) + tuple(kv) for vv, kv in zip(value_values, key_values)]
# 'col1 = ?, col2 = ?, ...'
set_clause = ", ".join(f"{n} = ?" for n in value_names)
@ -2077,9 +2072,7 @@ class DatabasePool:
where_clause = ""
# UPDATE mytable SET col1 = ?, col2 = ? WHERE col3 = ? AND col4 = ?
sql = f"""
UPDATE {table} SET {set_clause} {where_clause}
"""
sql = f"UPDATE {table} SET {set_clause} {where_clause}"
txn.execute_batch(sql, args)
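For `simple_update_many_txn`, each argument row puts the value columns first (for SET) and the key columns after (for WHERE). A worked example with illustrative column names:

key_names = ("user_id", "device_id")
value_names = ("display_name",)
key_values = [("@alice:example.com", "DEV1"), ("@bob:example.com", "DEV2")]
value_values = [("Alice's phone",), ("Bob's laptop",)]

# Value values first (SET ...), then key values (WHERE ...), matching the SQL:
args = [tuple(vv) + tuple(kv) for vv, kv in zip(value_values, key_values)]
# [("Alice's phone", "@alice:example.com", "DEV1"),
#  ("Bob's laptop", "@bob:example.com", "DEV2")]

set_clause = ", ".join(f"{n} = ?" for n in value_names)
where_clause = " AND ".join(f"{n} = ?" for n in key_names)
sql = f"UPDATE devices SET {set_clause} WHERE {where_clause}"
# UPDATE devices SET display_name = ? WHERE user_id = ? AND device_id = ?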
@ -2134,7 +2127,7 @@ class DatabasePool:
keyvalues: Dict[str, Any],
retcols: Collection[str],
allow_none: bool = False,
) -> Optional[Dict[str, Any]]:
) -> Optional[Tuple[Any, ...]]:
select_sql = "SELECT %s FROM %s" % (", ".join(retcols), table)
if keyvalues:
@ -2152,7 +2145,7 @@ class DatabasePool:
if txn.rowcount > 1:
raise StoreError(500, "More than one row matched (%s)" % (table,))
return dict(zip(retcols, row))
return row
async def simple_delete_one(
self, table: str, keyvalues: Dict[str, Any], desc: str = "simple_delete_one"
@ -2295,11 +2288,10 @@ class DatabasePool:
Returns:
Number rows deleted
"""
# If there's nothing to delete, then skip executing the query.
if not values:
return 0
sql = "DELETE FROM %s" % table
clause, values = make_in_list_sql_clause(txn.database_engine, column, values)
clauses = [clause]
@ -2307,8 +2299,7 @@ class DatabasePool:
clauses.append("%s = ?" % (key,))
values.append(value)
if clauses:
sql = "%s WHERE %s" % (sql, " AND ".join(clauses))
sql = "DELETE FROM %s WHERE %s" % (table, " AND ".join(clauses))
txn.execute(sql, values)
return txn.rowcount

View File

@ -45,7 +45,7 @@ class Databases(Generic[DataStoreT]):
"""
databases: List[DatabasePool]
main: "DataStore" # FIXME: #11165: actually an instance of `main_store_class`
main: "DataStore" # FIXME: https://github.com/matrix-org/synapse/issues/11165: actually an instance of `main_store_class`
state: StateGroupDataStore
persist_events: Optional[PersistEventsStore]

View File

@ -17,6 +17,8 @@
import logging
from typing import TYPE_CHECKING, List, Optional, Tuple, Union, cast
import attr
from synapse.api.constants import Direction
from synapse.config.homeserver import HomeServerConfig
from synapse.storage._base import make_in_list_sql_clause
@ -28,7 +30,7 @@ from synapse.storage.database import (
from synapse.storage.databases.main.stats import UserSortOrder
from synapse.storage.engines import BaseDatabaseEngine
from synapse.storage.types import Cursor
from synapse.types import JsonDict, get_domain_from_id
from synapse.types import get_domain_from_id
from .account_data import AccountDataStore
from .appservice import ApplicationServiceStore, ApplicationServiceTransactionStore
@ -82,6 +84,25 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class UserPaginateResponse:
"""This is very similar to UserInfo, but not quite the same."""
name: str
user_type: Optional[str]
is_guest: bool
admin: bool
deactivated: bool
shadow_banned: bool
displayname: Optional[str]
avatar_url: Optional[str]
creation_ts: Optional[int]
approved: bool
erased: bool
last_seen_ts: int
locked: bool
class DataStore(
EventsBackgroundUpdatesStore,
ExperimentalFeaturesStore,
@ -156,7 +177,7 @@ class DataStore(
approved: bool = True,
not_user_types: Optional[List[str]] = None,
locked: bool = False,
) -> Tuple[List[JsonDict], int]:
) -> Tuple[List[UserPaginateResponse], int]:
"""Function to retrieve a paginated list of users from
users list. This will return a json list of users and the
total number of users matching the filter criteria.
@ -182,7 +203,7 @@ class DataStore(
def get_users_paginate_txn(
txn: LoggingTransaction,
) -> Tuple[List[JsonDict], int]:
) -> Tuple[List[UserPaginateResponse], int]:
filters = []
args: list = []
@ -282,13 +303,24 @@ class DataStore(
"""
args += [limit, start]
txn.execute(sql, args)
users = self.db_pool.cursor_to_dict(txn)
# some of those boolean values are returned as integers when we're on SQLite
columns_to_boolify = ["erased"]
for user in users:
for column in columns_to_boolify:
user[column] = bool(user[column])
users = [
UserPaginateResponse(
name=row[0],
user_type=row[1],
is_guest=bool(row[2]),
admin=bool(row[3]),
deactivated=bool(row[4]),
shadow_banned=bool(row[5]),
displayname=row[6],
avatar_url=row[7],
creation_ts=row[8],
approved=bool(row[9]),
erased=bool(row[10]),
last_seen_ts=row[11],
locked=bool(row[12]),
)
for row in txn
]
return users, count

Some files were not shown because too many files have changed in this diff.