Merge branch 'release-v1.95' into matrix-org-hotfixes

commit 07b3b9a95e

@@ -56,6 +56,7 @@ jobs:
- 'pyproject.toml'
- 'poetry.lock'
- 'docker/**'
- 'scripts-dev/complement.sh'

linting:
- 'synapse/**'

@@ -280,7 +281,6 @@ jobs:
- check-lockfile
- lint-clippy
- lint-rustfmt
- check-signoff
runs-on: ubuntu-latest
steps:
- run: "true"

CHANGES.md

@@ -1,9 +1,75 @@

# Synapse 1.95.0rc1 (2023-10-17)

### Bugfixes

- Remove legacy unspecced `knock_state_events` field returned in some responses. ([\#16403](https://github.com/matrix-org/synapse/issues/16403))
- Fix possible `AttributeError` when `_matrix/client/v3/account/whoami` is called over a unix socket. Contributed by @Sir-Photch. ([\#16404](https://github.com/matrix-org/synapse/issues/16404))
- Properly return inline media when content types have parameters. ([\#16440](https://github.com/matrix-org/synapse/issues/16440))
- Prevent the purging of large rooms from timing out when Postgres is in use. The timeout which causes this issue was introduced in Synapse 1.88.0. ([\#16455](https://github.com/matrix-org/synapse/issues/16455))
- Improve the performance of purging rooms, particularly encrypted rooms. ([\#16457](https://github.com/matrix-org/synapse/issues/16457))
- Fix a bug introduced in Synapse 1.59.0 where servers would be incorrectly marked as available when a request resulted in an error. ([\#16506](https://github.com/matrix-org/synapse/issues/16506))

### Improved Documentation

- Document the internal background update mechanism. ([\#16420](https://github.com/matrix-org/synapse/issues/16420))
- Fix a typo in the SQL in the [useful SQL for admins document](https://matrix-org.github.io/synapse/latest/usage/administration/useful_sql_for_admins.html). ([\#16477](https://github.com/matrix-org/synapse/issues/16477))

### Internal Changes

- Bump pyo3 from 0.17.1 to 0.19.2. ([\#16162](https://github.com/matrix-org/synapse/issues/16162))
- Update registration of media repository URLs. ([\#16419](https://github.com/matrix-org/synapse/issues/16419))
- Improve type hints. ([\#16421](https://github.com/matrix-org/synapse/issues/16421), [\#16468](https://github.com/matrix-org/synapse/issues/16468), [\#16469](https://github.com/matrix-org/synapse/issues/16469), [\#16507](https://github.com/matrix-org/synapse/issues/16507))
- Refactor code adjacent to the receipts stream to simplify it and improve its typing. ([\#16426](https://github.com/matrix-org/synapse/issues/16426))
- Factor out `MultiWriter` token from `RoomStreamToken`. ([\#16427](https://github.com/matrix-org/synapse/issues/16427))
- Improve code comments. ([\#16428](https://github.com/matrix-org/synapse/issues/16428))
- Reduce memory allocations. ([\#16429](https://github.com/matrix-org/synapse/issues/16429), [\#16431](https://github.com/matrix-org/synapse/issues/16431), [\#16433](https://github.com/matrix-org/synapse/issues/16433), [\#16434](https://github.com/matrix-org/synapse/issues/16434), [\#16438](https://github.com/matrix-org/synapse/issues/16438), [\#16444](https://github.com/matrix-org/synapse/issues/16444))
- Remove an unused method. ([\#16435](https://github.com/matrix-org/synapse/issues/16435))
- Improve rate limiting logic. ([\#16441](https://github.com/matrix-org/synapse/issues/16441))
- Do not block running of CI behind the check for sign-off on PRs. ([\#16454](https://github.com/matrix-org/synapse/issues/16454))
- Update the release script to remind the releaser to check for special release notes. ([\#16461](https://github.com/matrix-org/synapse/issues/16461))
- Update complement.sh to match the new public API shape. ([\#16466](https://github.com/matrix-org/synapse/issues/16466))
- Clean up logging on event persister endpoints. ([\#16488](https://github.com/matrix-org/synapse/issues/16488))
- Remove a useless async job to delete device messages on sync, since we only deliver (and hence delete) up to 100 device messages at a time. ([\#16491](https://github.com/matrix-org/synapse/issues/16491))

### Updates to locked dependencies

* Bump bleach from 6.0.0 to 6.1.0. ([\#16451](https://github.com/matrix-org/synapse/issues/16451))
* Bump jsonschema from 4.19.0 to 4.19.1. ([\#16500](https://github.com/matrix-org/synapse/issues/16500))
* Bump netaddr from 0.8.0 to 0.9.0. ([\#16453](https://github.com/matrix-org/synapse/issues/16453))
* Bump packaging from 23.1 to 23.2. ([\#16497](https://github.com/matrix-org/synapse/issues/16497))
* Bump pillow from 10.0.1 to 10.1.0. ([\#16498](https://github.com/matrix-org/synapse/issues/16498))
* Bump psycopg2 from 2.9.8 to 2.9.9. ([\#16452](https://github.com/matrix-org/synapse/issues/16452))
* Bump pyo3-log from 0.8.3 to 0.8.4. ([\#16495](https://github.com/matrix-org/synapse/issues/16495))
* Bump ruff from 0.0.290 to 0.0.292. ([\#16449](https://github.com/matrix-org/synapse/issues/16449))
* Bump sentry-sdk from 1.31.0 to 1.32.0. ([\#16496](https://github.com/matrix-org/synapse/issues/16496))
* Bump serde from 1.0.188 to 1.0.189. ([\#16494](https://github.com/matrix-org/synapse/issues/16494))
* Bump types-bleach from 6.0.0.4 to 6.1.0.0. ([\#16450](https://github.com/matrix-org/synapse/issues/16450))
* Bump types-jsonschema from 4.17.0.10 to 4.19.0.3. ([\#16499](https://github.com/matrix-org/synapse/issues/16499))

# Synapse 1.94.0 (2023-10-10)

No significant changes since 1.94.0rc1.
However, please take note of the security advisory that follows.

## Security advisory

The following issue is fixed in 1.94.0 (and RC).

- [GHSA-5chr-wjw5-3gq4](https://github.com/matrix-org/synapse/security/advisories/GHSA-5chr-wjw5-3gq4) / [CVE-2023-45129](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-45129) — Moderate Severity

A malicious server ACL event can impact performance temporarily or permanently, leading to a persistent denial of service.

Homeservers running on a closed federation (which presumably do not need to use server ACLs) are not affected.

See the advisory for more details. If you have any questions, email security@matrix.org.

# Synapse 1.94.0rc1 (2023-10-03)

### Features

- Render plain, CSS, CSV, JSON and common image formats media content in the browser (inline) when requested through the /download endpoint. ([\#15988](https://github.com/matrix-org/synapse/issues/15988))
- Experimental support for [MSC4028](https://github.com/matrix-org/matrix-spec-proposals/pull/4028) to push all encrypted events to clients. ([\#16361](https://github.com/matrix-org/synapse/issues/16361))
- Render plain, CSS, CSV, JSON and common image formats in the browser (inline) when requested through the /download endpoint. ([\#15988](https://github.com/matrix-org/synapse/issues/15988))
- Add experimental support for [MSC4028](https://github.com/matrix-org/matrix-spec-proposals/pull/4028) to push all encrypted events to clients. ([\#16361](https://github.com/matrix-org/synapse/issues/16361))
- Minor performance improvement when sending presence to federated servers. ([\#16385](https://github.com/matrix-org/synapse/issues/16385))
- Minor performance improvement by caching server ACL checking. ([\#16360](https://github.com/matrix-org/synapse/issues/16360))

@@ -144,9 +144,9 @@ checksum = "8f232d6ef707e1956a43342693d2a31e72989554d58299d7a88738cc95b0d35c"

[[package]]
name = "memoffset"
version = "0.6.5"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5aa361d4faea93603064a027415f07bd8e1d5c88c9fbf68bf56a285428fd79ce"
checksum = "5a634b1c61a95585bd15607c6ab0c4e5b226e695ff2800ba0cdccddf208c406c"
dependencies = [
 "autocfg",
]

@@ -191,9 +191,9 @@ dependencies = [

[[package]]
name = "pyo3"
version = "0.17.3"
version = "0.19.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "268be0c73583c183f2b14052337465768c07726936a260f480f0857cb95ba543"
checksum = "e681a6cfdc4adcc93b4d3cf993749a4552018ee0a9b65fc0ccfad74352c72a38"
dependencies = [
 "anyhow",
 "cfg-if",

@@ -209,9 +209,9 @@ dependencies = [

[[package]]
name = "pyo3-build-config"
version = "0.17.3"
version = "0.19.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "28fcd1e73f06ec85bf3280c48c67e731d8290ad3d730f8be9dc07946923005c8"
checksum = "076c73d0bc438f7a4ef6fdd0c3bb4732149136abd952b110ac93e4edb13a6ba5"
dependencies = [
 "once_cell",
 "target-lexicon",

@@ -219,9 +219,9 @@ dependencies = [

[[package]]
name = "pyo3-ffi"
version = "0.17.3"
version = "0.19.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0f6cb136e222e49115b3c51c32792886defbfb0adead26a688142b346a0b9ffc"
checksum = "e53cee42e77ebe256066ba8aa77eff722b3bb91f3419177cf4cd0f304d3284d9"
dependencies = [
 "libc",
 "pyo3-build-config",

@@ -229,9 +229,9 @@ dependencies = [

[[package]]
name = "pyo3-log"
version = "0.8.3"
version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f47b0777feb17f61eea78667d61103758b243a871edc09a7786500a50467b605"
checksum = "c09c2b349b6538d8a73d436ca606dab6ce0aaab4dad9e6b7bdd57a4f556c3bc3"
dependencies = [
 "arc-swap",
 "log",

@@ -240,9 +240,9 @@ dependencies = [

[[package]]
name = "pyo3-macros"
version = "0.17.3"
version = "0.19.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94144a1266e236b1c932682136dc35a9dee8d3589728f68130c7c3861ef96b28"
checksum = "dfeb4c99597e136528c6dd7d5e3de5434d1ceaf487436a3f03b2d56b6fc9efd1"
dependencies = [
 "proc-macro2",
 "pyo3-macros-backend",

@@ -252,9 +252,9 @@ dependencies = [

[[package]]
name = "pyo3-macros-backend"
version = "0.17.3"
version = "0.19.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8df9be978a2d2f0cdebabb03206ed73b11314701a5bfe71b0d753b81997777f"
checksum = "947dc12175c254889edc0c02e399476c2f652b4b9ebd123aa655c224de259536"
dependencies = [
 "proc-macro2",
 "quote",

@@ -263,9 +263,9 @@ dependencies = [

[[package]]
name = "pythonize"
version = "0.17.0"
version = "0.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0f7f0c136f5fbc01868185eef462800e49659eb23acca83b9e884367a006acb6"
checksum = "8e35b716d430ace57e2d1b4afb51c9e5b7c46d2bce72926e07f9be6a98ced03e"
dependencies = [
 "pyo3",
 "serde",

@@ -332,18 +332,18 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"

[[package]]
name = "serde"
version = "1.0.188"
version = "1.0.189"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf9e0fcba69a370eed61bcf2b728575f726b50b55cba78064753d708ddc7549e"
checksum = "8e422a44e74ad4001bdc8eede9a4570ab52f71190e9c076d14369f38b9200537"
dependencies = [
 "serde_derive",
]

[[package]]
name = "serde_derive"
version = "1.0.188"
version = "1.0.189"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4eca7ac642d82aa35b60049a6eccb4be6be75e599bd2e9adb5f875a737654af2"
checksum = "1e48d1f918009ce3145511378cf68d613e3b3d9137d67272562080d68a2b32d5"
dependencies = [
 "proc-macro2",
 "quote",

@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.95.0~rc1) stable; urgency=medium

  * New Synapse release 1.95.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 17 Oct 2023 15:50:17 +0000

matrix-synapse-py3 (1.94.0) stable; urgency=medium

  * New Synapse release 1.94.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 10 Oct 2023 10:57:41 +0100

matrix-synapse-py3 (1.94.0~rc1) stable; urgency=medium

  * New Synapse release 1.94.0rc1.
@@ -150,6 +150,67 @@ def run_upgrade(
    ...
```

## Background updates

It is sometimes appropriate to perform database migrations as part of a background
process (instead of blocking Synapse until the migration is done). In particular,
this is useful for migrating data when adding new columns or tables.

Pending background updates are stored in the `background_updates` table and are denoted
by a unique name, the current status (stored in JSON), and some dependency information:

* Whether the update requires a previous update to be complete.
* A rough ordering for which to complete updates.

A new background update needs to be added to the `background_updates` table:

```sql
INSERT INTO background_updates (ordering, update_name, depends_on, progress_json) VALUES
  (7706, 'my_background_update', 'a_previous_background_update', '{}');
```

And it then needs an associated handler in the appropriate datastore:

```python
self.db_pool.updates.register_background_update_handler(
    "my_background_update",
    update_handler=self._my_background_update,
)
```

There are a few types of updates that can be performed; see the `BackgroundUpdater`:

* `register_background_update_handler`: A generic handler for custom SQL.
* `register_background_index_update`: Create an index in the background.
* `register_background_validate_constraint`: Validate a constraint in the background
  (PostgreSQL-only).
* `register_background_validate_constraint_and_delete_rows`: Similar to
  `register_background_validate_constraint`, but deletes rows which don't fit
  the constraint.
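
For instance, a background index creation might be registered as in the sketch
below. The helper name comes from the list above, but the exact keyword
arguments are assumed for illustration and may differ between Synapse versions:

```python
# A hypothetical registration: "my_background_update" must match the
# update_name inserted into the background_updates table; index_name,
# table and columns are illustrative values.
self.db_pool.updates.register_background_index_update(
    "my_background_update",
    index_name="events_my_new_index",
    table="events",
    columns=["room_id", "stream_ordering"],
    unique=False,
)
```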

For `register_background_update_handler`, the generic handler must track progress
and then finalize the background update:

```python
async def _my_background_update(self, progress: JsonDict, batch_size: int) -> int:
    def _do_something(txn: LoggingTransaction) -> int:
        ...
        self.db_pool.updates._background_update_progress_txn(
            txn, "my_background_update", {"last_processed": last_processed}
        )
        return last_processed - prev_last_processed

    num_processed = await self.db_pool.runInteraction("_do_something", _do_something)
    await self.db_pool.updates._end_background_update("my_background_update")

    return num_processed
```

Synapse will attempt to rate-limit how often background updates are run via the
given batch size and the returned number of processed entries (and how long the
function took to run). See
[background update controller callbacks](../modules/background_update_controller_callbacks.md).
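
The feedback loop can be pictured with a small, self-contained sketch (a
simplification, not Synapse's actual controller): scale the next batch so that
each run takes roughly a target duration.

```python
# A simplified, illustrative sketch of batch-size rate limiting.
def next_batch_size(
    items_processed: int, duration_ms: float, target_ms: float = 100.0
) -> int:
    """Pick a batch size aiming for roughly `target_ms` per run."""
    if items_processed <= 0 or duration_ms <= 0:
        return 100  # fall back to a default batch size
    items_per_ms = items_processed / duration_ms
    return max(1, int(items_per_ms * target_ms))
```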

## Boolean columns

Boolean columns require special treatment, since SQLite treats booleans the

@@ -193,7 +193,7 @@ SELECT rss.room_id, rss.name, rss.canonical_alias, rss.topic, rss.encryption,
  rsc.joined_members, rsc.local_users_in_room, rss.join_rules
FROM room_stats_state rss
LEFT JOIN room_stats_current rsc USING (room_id)
WHERE room_id IN ( WHERE room_id IN (
WHERE room_id IN (
  '!OGEhHVWSdvArJzumhm:matrix.org',
  '!YTvKGNlinIzlkMTVRl:matrix.org'
);

mypy.ini
@@ -32,6 +32,7 @@ files =
  docker/,
  scripts-dev/,
  synapse/,
  synmark/,
  tests/,
  build_rust.py

@@ -80,6 +81,9 @@ ignore_missing_imports = True
[mypy-pympler.*]
ignore_missing_imports = True

[mypy-pyperf.*]
ignore_missing_imports = True

[mypy-rust_python_jaeger_reporter.*]
ignore_missing_imports = True

@@ -208,13 +208,13 @@ uvloop = ["uvloop (>=0.15.2)"]

[[package]]
name = "bleach"
version = "6.0.0"
version = "6.1.0"
description = "An easy safelist-based HTML-sanitizing tool."
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
    {file = "bleach-6.0.0-py3-none-any.whl", hash = "sha256:33c16e3353dbd13028ab4799a0f89a83f113405c766e9c122df8a06f5b85b3f4"},
    {file = "bleach-6.0.0.tar.gz", hash = "sha256:1a1a85c1595e07d8db14c5f09f09e6433502c51c595970edc090551f0db99414"},
    {file = "bleach-6.1.0-py3-none-any.whl", hash = "sha256:3225f354cfc436b9789c66c4ee030194bee0568fbf9cbdad3bc8b5c26c5f12b6"},
    {file = "bleach-6.1.0.tar.gz", hash = "sha256:0a31f1837963c41d46bbf1331b8778e1308ea0791db03cc4e7357b97cf42a8fe"},
]

[package.dependencies]

@@ -222,7 +222,7 @@ six = ">=1.9.0"
webencodings = "*"

[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
css = ["tinycss2 (>=1.1.0,<1.3)"]

[[package]]
name = "canonicaljson"
@@ -767,6 +767,17 @@ files = [
    {file = "ijson-3.2.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:4a3a6a2fbbe7550ffe52d151cf76065e6b89cfb3e9d0463e49a7e322a25d0426"},
    {file = "ijson-3.2.3-cp311-cp311-win32.whl", hash = "sha256:6a4db2f7fb9acfb855c9ae1aae602e4648dd1f88804a0d5cfb78c3639bcf156c"},
    {file = "ijson-3.2.3-cp311-cp311-win_amd64.whl", hash = "sha256:ccd6be56335cbb845f3d3021b1766299c056c70c4c9165fb2fbe2d62258bae3f"},
    {file = "ijson-3.2.3-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:055b71bbc37af5c3c5861afe789e15211d2d3d06ac51ee5a647adf4def19c0ea"},
    {file = "ijson-3.2.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c075a547de32f265a5dd139ab2035900fef6653951628862e5cdce0d101af557"},
    {file = "ijson-3.2.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:457f8a5fc559478ac6b06b6d37ebacb4811f8c5156e997f0d87d708b0d8ab2ae"},
    {file = "ijson-3.2.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9788f0c915351f41f0e69ec2618b81ebfcf9f13d9d67c6d404c7f5afda3e4afb"},
    {file = "ijson-3.2.3-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa234ab7a6a33ed51494d9d2197fb96296f9217ecae57f5551a55589091e7853"},
    {file = "ijson-3.2.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bdd0dc5da4f9dc6d12ab6e8e0c57d8b41d3c8f9ceed31a99dae7b2baf9ea769a"},
    {file = "ijson-3.2.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c6beb80df19713e39e68dc5c337b5c76d36ccf69c30b79034634e5e4c14d6904"},
    {file = "ijson-3.2.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:a2973ce57afb142d96f35a14e9cfec08308ef178a2c76b8b5e1e98f3960438bf"},
    {file = "ijson-3.2.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:105c314fd624e81ed20f925271ec506523b8dd236589ab6c0208b8707d652a0e"},
    {file = "ijson-3.2.3-cp312-cp312-win32.whl", hash = "sha256:ac44781de5e901ce8339352bb5594fcb3b94ced315a34dbe840b4cff3450e23b"},
    {file = "ijson-3.2.3-cp312-cp312-win_amd64.whl", hash = "sha256:0567e8c833825b119e74e10a7c29761dc65fcd155f5d4cb10f9d3b8916ef9912"},
    {file = "ijson-3.2.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:eeb286639649fb6bed37997a5e30eefcacddac79476d24128348ec890b2a0ccb"},
    {file = "ijson-3.2.3-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:396338a655fb9af4ac59dd09c189885b51fa0eefc84d35408662031023c110d1"},
    {file = "ijson-3.2.3-cp36-cp36m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e0243d166d11a2a47c17c7e885debf3b19ed136be2af1f5d1c34212850236ac"},

@@ -987,13 +998,13 @@ i18n = ["Babel (>=2.7)"]

[[package]]
name = "jsonschema"
version = "4.19.0"
version = "4.19.1"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.8"
files = [
    {file = "jsonschema-4.19.0-py3-none-any.whl", hash = "sha256:043dc26a3845ff09d20e4420d6012a9c91c9aa8999fa184e7efcfeccb41e32cb"},
    {file = "jsonschema-4.19.0.tar.gz", hash = "sha256:6e1e7569ac13be8139b2dd2c21a55d350066ee3f80df06c608b398cdc6f30e8f"},
    {file = "jsonschema-4.19.1-py3-none-any.whl", hash = "sha256:cd5f1f9ed9444e554b38ba003af06c0a8c2868131e56bfbef0550fb450c0330e"},
    {file = "jsonschema-4.19.1.tar.gz", hash = "sha256:ec84cc37cfa703ef7cd4928db24f9cb31428a5d0fa77747b8b51a847458e0bbf"},
]

[package.dependencies]

@@ -1557,13 +1568,13 @@ testing-docutils = ["pygments", "pytest (>=7,<8)", "pytest-param-files (>=0.3.4,

[[package]]
name = "netaddr"
version = "0.8.0"
version = "0.9.0"
description = "A network address manipulation library for Python"
optional = false
python-versions = "*"
files = [
    {file = "netaddr-0.8.0-py2.py3-none-any.whl", hash = "sha256:9666d0232c32d2656e5e5f8d735f58fd6c7457ce52fc21c98d45f2af78f990ac"},
    {file = "netaddr-0.8.0.tar.gz", hash = "sha256:d6cc57c7a07b1d9d2e917aa8b36ae8ce61c35ba3fcd1b83ca31c5a0ee2b5a243"},
    {file = "netaddr-0.9.0-py3-none-any.whl", hash = "sha256:5148b1055679d2a1ec070c521b7db82137887fabd6d7e37f5199b44f775c3bb1"},
    {file = "netaddr-0.9.0.tar.gz", hash = "sha256:7b46fa9b1a2d71fd5de9e4a3784ef339700a53a08c8040f08baf5f1194da0128"},
]

[[package]]

@@ -1581,13 +1592,13 @@ tests = ["Sphinx", "doubles", "flake8", "flake8-quotes", "gevent", "mock", "pyte

[[package]]
name = "packaging"
version = "23.1"
version = "23.2"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.7"
files = [
    {file = "packaging-23.1-py3-none-any.whl", hash = "sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61"},
    {file = "packaging-23.1.tar.gz", hash = "sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f"},
    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
]

[[package]]

@@ -1628,65 +1639,65 @@ files = [

[[package]]
name = "pillow"
version = "10.0.1"
version = "10.1.0"
description = "Python Imaging Library (Fork)"
optional = false
python-versions = ">=3.8"
files = [
    {file = "Pillow-10.0.1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:8f06be50669087250f319b706decf69ca71fdecd829091a37cc89398ca4dc17a"},
    {file = "Pillow-10.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:50bd5f1ebafe9362ad622072a1d2f5850ecfa44303531ff14353a4059113b12d"},
    {file = "Pillow-10.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e6a90167bcca1216606223a05e2cf991bb25b14695c518bc65639463d7db722d"},
    {file = "Pillow-10.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f11c9102c56ffb9ca87134bd025a43d2aba3f1155f508eff88f694b33a9c6d19"},
    {file = "Pillow-10.0.1-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:186f7e04248103482ea6354af6d5bcedb62941ee08f7f788a1c7707bc720c66f"},
    {file = "Pillow-10.0.1-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:0462b1496505a3462d0f35dc1c4d7b54069747d65d00ef48e736acda2c8cbdff"},
    {file = "Pillow-10.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d889b53ae2f030f756e61a7bff13684dcd77e9af8b10c6048fb2c559d6ed6eaf"},
    {file = "Pillow-10.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:552912dbca585b74d75279a7570dd29fa43b6d93594abb494ebb31ac19ace6bd"},
    {file = "Pillow-10.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:787bb0169d2385a798888e1122c980c6eff26bf941a8ea79747d35d8f9210ca0"},
    {file = "Pillow-10.0.1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:fd2a5403a75b54661182b75ec6132437a181209b901446ee5724b589af8edef1"},
    {file = "Pillow-10.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2d7e91b4379f7a76b31c2dda84ab9e20c6220488e50f7822e59dac36b0cd92b1"},
    {file = "Pillow-10.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:19e9adb3f22d4c416e7cd79b01375b17159d6990003633ff1d8377e21b7f1b21"},
    {file = "Pillow-10.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:93139acd8109edcdeffd85e3af8ae7d88b258b3a1e13a038f542b79b6d255c54"},
    {file = "Pillow-10.0.1-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:92a23b0431941a33242b1f0ce6c88a952e09feeea9af4e8be48236a68ffe2205"},
    {file = "Pillow-10.0.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:cbe68deb8580462ca0d9eb56a81912f59eb4542e1ef8f987405e35a0179f4ea2"},
    {file = "Pillow-10.0.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:522ff4ac3aaf839242c6f4e5b406634bfea002469656ae8358644fc6c4856a3b"},
    {file = "Pillow-10.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:84efb46e8d881bb06b35d1d541aa87f574b58e87f781cbba8d200daa835b42e1"},
    {file = "Pillow-10.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:898f1d306298ff40dc1b9ca24824f0488f6f039bc0e25cfb549d3195ffa17088"},
    {file = "Pillow-10.0.1-cp312-cp312-macosx_10_10_x86_64.whl", hash = "sha256:bcf1207e2f2385a576832af02702de104be71301c2696d0012b1b93fe34aaa5b"},
    {file = "Pillow-10.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5d6c9049c6274c1bb565021367431ad04481ebb54872edecfcd6088d27edd6ed"},
    {file = "Pillow-10.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28444cb6ad49726127d6b340217f0627abc8732f1194fd5352dec5e6a0105635"},
    {file = "Pillow-10.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de596695a75496deb3b499c8c4f8e60376e0516e1a774e7bc046f0f48cd620ad"},
    {file = "Pillow-10.0.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:2872f2d7846cf39b3dbff64bc1104cc48c76145854256451d33c5faa55c04d1a"},
    {file = "Pillow-10.0.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:4ce90f8a24e1c15465048959f1e94309dfef93af272633e8f37361b824532e91"},
    {file = "Pillow-10.0.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ee7810cf7c83fa227ba9125de6084e5e8b08c59038a7b2c9045ef4dde61663b4"},
    {file = "Pillow-10.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:b1be1c872b9b5fcc229adeadbeb51422a9633abd847c0ff87dc4ef9bb184ae08"},
    {file = "Pillow-10.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:98533fd7fa764e5f85eebe56c8e4094db912ccbe6fbf3a58778d543cadd0db08"},
    {file = "Pillow-10.0.1-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:764d2c0daf9c4d40ad12fbc0abd5da3af7f8aa11daf87e4fa1b834000f4b6b0a"},
    {file = "Pillow-10.0.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:fcb59711009b0168d6ee0bd8fb5eb259c4ab1717b2f538bbf36bacf207ef7a68"},
    {file = "Pillow-10.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:697a06bdcedd473b35e50a7e7506b1d8ceb832dc238a336bd6f4f5aa91a4b500"},
    {file = "Pillow-10.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9f665d1e6474af9f9da5e86c2a3a2d2d6204e04d5af9c06b9d42afa6ebde3f21"},
    {file = "Pillow-10.0.1-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:2fa6dd2661838c66f1a5473f3b49ab610c98a128fc08afbe81b91a1f0bf8c51d"},
    {file = "Pillow-10.0.1-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:3a04359f308ebee571a3127fdb1bd01f88ba6f6fb6d087f8dd2e0d9bff43f2a7"},
    {file = "Pillow-10.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:723bd25051454cea9990203405fa6b74e043ea76d4968166dfd2569b0210886a"},
    {file = "Pillow-10.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:71671503e3015da1b50bd18951e2f9daf5b6ffe36d16f1eb2c45711a301521a7"},
    {file = "Pillow-10.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:44e7e4587392953e5e251190a964675f61e4dae88d1e6edbe9f36d6243547ff3"},
    {file = "Pillow-10.0.1-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:3855447d98cced8670aaa63683808df905e956f00348732448b5a6df67ee5849"},
    {file = "Pillow-10.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ed2d9c0704f2dc4fa980b99d565c0c9a543fe5101c25b3d60488b8ba80f0cce1"},
    {file = "Pillow-10.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f5bb289bb835f9fe1a1e9300d011eef4d69661bb9b34d5e196e5e82c4cb09b37"},
    {file = "Pillow-10.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a0d3e54ab1df9df51b914b2233cf779a5a10dfd1ce339d0421748232cea9876"},
    {file = "Pillow-10.0.1-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:2cc6b86ece42a11f16f55fe8903595eff2b25e0358dec635d0a701ac9586588f"},
    {file = "Pillow-10.0.1-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:ca26ba5767888c84bf5a0c1a32f069e8204ce8c21d00a49c90dabeba00ce0145"},
    {file = "Pillow-10.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:f0b4b06da13275bc02adfeb82643c4a6385bd08d26f03068c2796f60d125f6f2"},
    {file = "Pillow-10.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:bc2e3069569ea9dbe88d6b8ea38f439a6aad8f6e7a6283a38edf61ddefb3a9bf"},
    {file = "Pillow-10.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:8b451d6ead6e3500b6ce5c7916a43d8d8d25ad74b9102a629baccc0808c54971"},
    {file = "Pillow-10.0.1-pp310-pypy310_pp73-macosx_10_10_x86_64.whl", hash = "sha256:32bec7423cdf25c9038fef614a853c9d25c07590e1a870ed471f47fb80b244db"},
    {file = "Pillow-10.0.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b7cf63d2c6928b51d35dfdbda6f2c1fddbe51a6bc4a9d4ee6ea0e11670dd981e"},
    {file = "Pillow-10.0.1-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:f6d3d4c905e26354e8f9d82548475c46d8e0889538cb0657aa9c6f0872a37aa4"},
    {file = "Pillow-10.0.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:847e8d1017c741c735d3cd1883fa7b03ded4f825a6e5fcb9378fd813edee995f"},
    {file = "Pillow-10.0.1-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7f771e7219ff04b79e231d099c0a28ed83aa82af91fd5fa9fdb28f5b8d5addaf"},
    {file = "Pillow-10.0.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:459307cacdd4138edee3875bbe22a2492519e060660eaf378ba3b405d1c66317"},
    {file = "Pillow-10.0.1-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b059ac2c4c7a97daafa7dc850b43b2d3667def858a4f112d1aa082e5c3d6cf7d"},
    {file = "Pillow-10.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:d6caf3cd38449ec3cd8a68b375e0c6fe4b6fd04edb6c9766b55ef84a6e8ddf2d"},
    {file = "Pillow-10.0.1.tar.gz", hash = "sha256:d72967b06be9300fed5cfbc8b5bafceec48bf7cdc7dab66b1d2549035287191d"},
    {file = "Pillow-10.1.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1ab05f3db77e98f93964697c8efc49c7954b08dd61cff526b7f2531a22410106"},
    {file = "Pillow-10.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6932a7652464746fcb484f7fc3618e6503d2066d853f68a4bd97193a3996e273"},
    {file = "Pillow-10.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5f63b5a68daedc54c7c3464508d8c12075e56dcfbd42f8c1bf40169061ae666"},
    {file = "Pillow-10.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0949b55eb607898e28eaccb525ab104b2d86542a85c74baf3a6dc24002edec2"},
    {file = "Pillow-10.1.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:ae88931f93214777c7a3aa0a8f92a683f83ecde27f65a45f95f22d289a69e593"},
    {file = "Pillow-10.1.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:b0eb01ca85b2361b09480784a7931fc648ed8b7836f01fb9241141b968feb1db"},
    {file = "Pillow-10.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d27b5997bdd2eb9fb199982bb7eb6164db0426904020dc38c10203187ae2ff2f"},
    {file = "Pillow-10.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7df5608bc38bd37ef585ae9c38c9cd46d7c81498f086915b0f97255ea60c2818"},
    {file = "Pillow-10.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:41f67248d92a5e0a2076d3517d8d4b1e41a97e2df10eb8f93106c89107f38b57"},
    {file = "Pillow-10.1.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:1fb29c07478e6c06a46b867e43b0bcdb241b44cc52be9bc25ce5944eed4648e7"},
    {file = "Pillow-10.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2cdc65a46e74514ce742c2013cd4a2d12e8553e3a2563c64879f7c7e4d28bce7"},
    {file = "Pillow-10.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50d08cd0a2ecd2a8657bd3d82c71efd5a58edb04d9308185d66c3a5a5bed9610"},
    {file = "Pillow-10.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:062a1610e3bc258bff2328ec43f34244fcec972ee0717200cb1425214fe5b839"},
    {file = "Pillow-10.1.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:61f1a9d247317fa08a308daaa8ee7b3f760ab1809ca2da14ecc88ae4257d6172"},
    {file = "Pillow-10.1.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:a646e48de237d860c36e0db37ecaecaa3619e6f3e9d5319e527ccbc8151df061"},
    {file = "Pillow-10.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:47e5bf85b80abc03be7455c95b6d6e4896a62f6541c1f2ce77a7d2bb832af262"},
    {file = "Pillow-10.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:a92386125e9ee90381c3369f57a2a50fa9e6aa8b1cf1d9c4b200d41a7dd8e992"},
    {file = "Pillow-10.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:0f7c276c05a9767e877a0b4c5050c8bee6a6d960d7f0c11ebda6b99746068c2a"},
    {file = "Pillow-10.1.0-cp312-cp312-macosx_10_10_x86_64.whl", hash = "sha256:a89b8312d51715b510a4fe9fc13686283f376cfd5abca8cd1c65e4c76e21081b"},
    {file = "Pillow-10.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:00f438bb841382b15d7deb9a05cc946ee0f2c352653c7aa659e75e592f6fa17d"},
    {file = "Pillow-10.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d929a19f5469b3f4df33a3df2983db070ebb2088a1e145e18facbc28cae5b27"},
    {file = "Pillow-10.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a92109192b360634a4489c0c756364c0c3a2992906752165ecb50544c251312"},
    {file = "Pillow-10.1.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:0248f86b3ea061e67817c47ecbe82c23f9dd5d5226200eb9090b3873d3ca32de"},
    {file = "Pillow-10.1.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:9882a7451c680c12f232a422730f986a1fcd808da0fd428f08b671237237d651"},
    {file = "Pillow-10.1.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:1c3ac5423c8c1da5928aa12c6e258921956757d976405e9467c5f39d1d577a4b"},
    {file = "Pillow-10.1.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:806abdd8249ba3953c33742506fe414880bad78ac25cc9a9b1c6ae97bedd573f"},
    {file = "Pillow-10.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:eaed6977fa73408b7b8a24e8b14e59e1668cfc0f4c40193ea7ced8e210adf996"},
    {file = "Pillow-10.1.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:fe1e26e1ffc38be097f0ba1d0d07fcade2bcfd1d023cda5b29935ae8052bd793"},
    {file = "Pillow-10.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7a7e3daa202beb61821c06d2517428e8e7c1aab08943e92ec9e5755c2fc9ba5e"},
    {file = "Pillow-10.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:24fadc71218ad2b8ffe437b54876c9382b4a29e030a05a9879f615091f42ffc2"},
    {file = "Pillow-10.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa1d323703cfdac2036af05191b969b910d8f115cf53093125e4058f62012c9a"},
    {file = "Pillow-10.1.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:912e3812a1dbbc834da2b32299b124b5ddcb664ed354916fd1ed6f193f0e2d01"},
    {file = "Pillow-10.1.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:7dbaa3c7de82ef37e7708521be41db5565004258ca76945ad74a8e998c30af8d"},
    {file = "Pillow-10.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:9d7bc666bd8c5a4225e7ac71f2f9d12466ec555e89092728ea0f5c0c2422ea80"},
    {file = "Pillow-10.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:baada14941c83079bf84c037e2d8b7506ce201e92e3d2fa0d1303507a8538212"},
    {file = "Pillow-10.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:2ef6721c97894a7aa77723740a09547197533146fba8355e86d6d9a4a1056b14"},
    {file = "Pillow-10.1.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0a026c188be3b443916179f5d04548092e253beb0c3e2ee0a4e2cdad72f66099"},
    {file = "Pillow-10.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:04f6f6149f266a100374ca3cc368b67fb27c4af9f1cc8cb6306d849dcdf12616"},
    {file = "Pillow-10.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb40c011447712d2e19cc261c82655f75f32cb724788df315ed992a4d65696bb"},
    {file = "Pillow-10.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1a8413794b4ad9719346cd9306118450b7b00d9a15846451549314a58ac42219"},
    {file = "Pillow-10.1.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:c9aeea7b63edb7884b031a35305629a7593272b54f429a9869a4f63a1bf04c34"},
    {file = "Pillow-10.1.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:b4005fee46ed9be0b8fb42be0c20e79411533d1fd58edabebc0dd24626882cfd"},
    {file = "Pillow-10.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:4d0152565c6aa6ebbfb1e5d8624140a440f2b99bf7afaafbdbf6430426497f28"},
    {file = "Pillow-10.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:d921bc90b1defa55c9917ca6b6b71430e4286fc9e44c55ead78ca1a9f9eba5f2"},
    {file = "Pillow-10.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:cfe96560c6ce2f4c07d6647af2d0f3c54cc33289894ebd88cfbb3bcd5391e256"},
    {file = "Pillow-10.1.0-pp310-pypy310_pp73-macosx_10_10_x86_64.whl", hash = "sha256:937bdc5a7f5343d1c97dc98149a0be7eb9704e937fe3dc7140e229ae4fc572a7"},
    {file = "Pillow-10.1.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b1c25762197144e211efb5f4e8ad656f36c8d214d390585d1d21281f46d556ba"},
    {file = "Pillow-10.1.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:afc8eef765d948543a4775f00b7b8c079b3321d6b675dde0d02afa2ee23000b4"},
    {file = "Pillow-10.1.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:883f216eac8712b83a63f41b76ddfb7b2afab1b74abbb413c5df6680f071a6b9"},
    {file = "Pillow-10.1.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b920e4d028f6442bea9a75b7491c063f0b9a3972520731ed26c83e254302eb1e"},
    {file = "Pillow-10.1.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c41d960babf951e01a49c9746f92c5a7e0d939d1652d7ba30f6b3090f27e412"},
    {file = "Pillow-10.1.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:1fafabe50a6977ac70dfe829b2d5735fd54e190ab55259ec8aea4aaea412fa0b"},
    {file = "Pillow-10.1.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:3b834f4b16173e5b92ab6566f0473bfb09f939ba14b23b8da1f54fa63e4b623f"},
    {file = "Pillow-10.1.0.tar.gz", hash = "sha256:e6bf8de6c36ed96c86ea3b6e1d5273c53f46ef518a062464cd7ef5dd2cf92e38"},
]

[package.extras]

@@ -1749,22 +1760,22 @@ twisted = ["twisted"]

[[package]]
name = "psycopg2"
version = "2.9.8"
version = "2.9.9"
description = "psycopg2 - Python-PostgreSQL Database Adapter"
optional = true
python-versions = ">=3.6"
python-versions = ">=3.7"
files = [
    {file = "psycopg2-2.9.8-cp310-cp310-win32.whl", hash = "sha256:2f8594f92bbb5d8b59ffec04e2686c416401e2d4297de1193f8e75235937e71d"},
    {file = "psycopg2-2.9.8-cp310-cp310-win_amd64.whl", hash = "sha256:f9ecbf504c4eaff90139d5c9b95d47275f2b2651e14eba56392b4041fbf4c2b3"},
    {file = "psycopg2-2.9.8-cp311-cp311-win32.whl", hash = "sha256:65f81e72136d8b9ac8abf5206938d60f50da424149a43b6073f1546063c0565e"},
    {file = "psycopg2-2.9.8-cp311-cp311-win_amd64.whl", hash = "sha256:f7e62095d749359b7854143843f27edd7dccfcd3e1d833b880562aa5702d92b0"},
    {file = "psycopg2-2.9.8-cp37-cp37m-win32.whl", hash = "sha256:81b21424023a290a40884c7f8b0093ba6465b59bd785c18f757e76945f65594c"},
    {file = "psycopg2-2.9.8-cp37-cp37m-win_amd64.whl", hash = "sha256:67c2f32f3aba79afb15799575e77ee2db6b46b8acf943c21d34d02d4e1041d50"},
    {file = "psycopg2-2.9.8-cp38-cp38-win32.whl", hash = "sha256:287a64ef168ef7fb9f382964705ff664b342bfff47e7242bf0a04ef203269dd5"},
    {file = "psycopg2-2.9.8-cp38-cp38-win_amd64.whl", hash = "sha256:dcde3cad4920e29e74bf4e76c072649764914facb2069e6b7fa1ddbebcd49e9f"},
    {file = "psycopg2-2.9.8-cp39-cp39-win32.whl", hash = "sha256:d4ad050ea50a16731d219c3a85e8f2debf49415a070f0b8331ccc96c81700d9b"},
    {file = "psycopg2-2.9.8-cp39-cp39-win_amd64.whl", hash = "sha256:d39bb3959788b2c9d7bf5ff762e29f436172b241cd7b47529baac77746fd7918"},
    {file = "psycopg2-2.9.8.tar.gz", hash = "sha256:3da6488042a53b50933244085f3f91803f1b7271f970f3e5536efa69314f6a49"},
    {file = "psycopg2-2.9.9-cp310-cp310-win32.whl", hash = "sha256:38a8dcc6856f569068b47de286b472b7c473ac7977243593a288ebce0dc89516"},
    {file = "psycopg2-2.9.9-cp310-cp310-win_amd64.whl", hash = "sha256:426f9f29bde126913a20a96ff8ce7d73fd8a216cfb323b1f04da402d452853c3"},
    {file = "psycopg2-2.9.9-cp311-cp311-win32.whl", hash = "sha256:ade01303ccf7ae12c356a5e10911c9e1c51136003a9a1d92f7aa9d010fb98372"},
    {file = "psycopg2-2.9.9-cp311-cp311-win_amd64.whl", hash = "sha256:121081ea2e76729acfb0673ff33755e8703d45e926e416cb59bae3a86c6a4981"},
    {file = "psycopg2-2.9.9-cp37-cp37m-win32.whl", hash = "sha256:5e0d98cade4f0e0304d7d6f25bbfbc5bd186e07b38eac65379309c4ca3193efa"},
    {file = "psycopg2-2.9.9-cp37-cp37m-win_amd64.whl", hash = "sha256:7e2dacf8b009a1c1e843b5213a87f7c544b2b042476ed7755be813eaf4e8347a"},
    {file = "psycopg2-2.9.9-cp38-cp38-win32.whl", hash = "sha256:ff432630e510709564c01dafdbe996cb552e0b9f3f065eb89bdce5bd31fabf4c"},
    {file = "psycopg2-2.9.9-cp38-cp38-win_amd64.whl", hash = "sha256:bac58c024c9922c23550af2a581998624d6e02350f4ae9c5f0bc642c633a2d5e"},
    {file = "psycopg2-2.9.9-cp39-cp39-win32.whl", hash = "sha256:c92811b2d4c9b6ea0285942b2e7cac98a59e166d59c588fe5cfe1eda58e72d59"},
    {file = "psycopg2-2.9.9-cp39-cp39-win_amd64.whl", hash = "sha256:de80739447af31525feddeb8effd640782cf5998e1a4e9192ebdf829717e3913"},
    {file = "psycopg2-2.9.9.tar.gz", hash = "sha256:d1454bde93fb1e224166811694d600e746430c006fbb031ea06ecc2ea41bf156"},
]

[[package]]

@@ -2427,28 +2438,28 @@ files = [

[[package]]
name = "ruff"
version = "0.0.290"
version = "0.0.292"
description = "An extremely fast Python linter, written in Rust."
optional = false
python-versions = ">=3.7"
files = [
    {file = "ruff-0.0.290-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:0e2b09ac4213b11a3520221083866a5816616f3ae9da123037b8ab275066fbac"},
    {file = "ruff-0.0.290-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:4ca6285aa77b3d966be32c9a3cd531655b3d4a0171e1f9bf26d66d0372186767"},
    {file = "ruff-0.0.290-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35e3550d1d9f2157b0fcc77670f7bb59154f223bff281766e61bdd1dd854e0c5"},
    {file = "ruff-0.0.290-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d748c8bd97874f5751aed73e8dde379ce32d16338123d07c18b25c9a2796574a"},
    {file = "ruff-0.0.290-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:982af5ec67cecd099e2ef5e238650407fb40d56304910102d054c109f390bf3c"},
    {file = "ruff-0.0.290-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:bbd37352cea4ee007c48a44c9bc45a21f7ba70a57edfe46842e346651e2b995a"},
    {file = "ruff-0.0.290-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d9be6351b7889462912e0b8185a260c0219c35dfd920fb490c7f256f1d8313e"},
    {file = "ruff-0.0.290-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:75cdc7fe32dcf33b7cec306707552dda54632ac29402775b9e212a3c16aad5e6"},
    {file = "ruff-0.0.290-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eb07f37f7aecdbbc91d759c0c09870ce0fb3eed4025eebedf9c4b98c69abd527"},
    {file = "ruff-0.0.290-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:2ab41bc0ba359d3f715fc7b705bdeef19c0461351306b70a4e247f836b9350ed"},
    {file = "ruff-0.0.290-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:150bf8050214cea5b990945b66433bf9a5e0cef395c9bc0f50569e7de7540c86"},
    {file = "ruff-0.0.290-py3-none-musllinux_1_2_i686.whl", hash = "sha256:75386ebc15fe5467248c039f5bf6a0cfe7bfc619ffbb8cd62406cd8811815fca"},
    {file = "ruff-0.0.290-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:ac93eadf07bc4ab4c48d8bb4e427bf0f58f3a9c578862eb85d99d704669f5da0"},
    {file = "ruff-0.0.290-py3-none-win32.whl", hash = "sha256:461fbd1fb9ca806d4e3d5c745a30e185f7cf3ca77293cdc17abb2f2a990ad3f7"},
    {file = "ruff-0.0.290-py3-none-win_amd64.whl", hash = "sha256:f1f49f5ec967fd5778813780b12a5650ab0ebcb9ddcca28d642c689b36920796"},
    {file = "ruff-0.0.290-py3-none-win_arm64.whl", hash = "sha256:ae5a92dfbdf1f0c689433c223f8dac0782c2b2584bd502dfdbc76475669f1ba1"},
    {file = "ruff-0.0.290.tar.gz", hash = "sha256:949fecbc5467bb11b8db810a7fa53c7e02633856ee6bd1302b2f43adcd71b88d"},
    {file = "ruff-0.0.292-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:02f29db018c9d474270c704e6c6b13b18ed0ecac82761e4fcf0faa3728430c96"},
    {file = "ruff-0.0.292-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:69654e564342f507edfa09ee6897883ca76e331d4bbc3676d8a8403838e9fade"},
    {file = "ruff-0.0.292-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c3c91859a9b845c33778f11902e7b26440d64b9d5110edd4e4fa1726c41e0a4"},
    {file = "ruff-0.0.292-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f4476f1243af2d8c29da5f235c13dca52177117935e1f9393f9d90f9833f69e4"},
    {file = "ruff-0.0.292-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be8eb50eaf8648070b8e58ece8e69c9322d34afe367eec4210fdee9a555e4ca7"},
    {file = "ruff-0.0.292-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:9889bac18a0c07018aac75ef6c1e6511d8411724d67cb879103b01758e110a81"},
    {file = "ruff-0.0.292-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6bdfabd4334684a4418b99b3118793f2c13bb67bf1540a769d7816410402a205"},
    {file = "ruff-0.0.292-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:aa7c77c53bfcd75dbcd4d1f42d6cabf2485d2e1ee0678da850f08e1ab13081a8"},
    {file = "ruff-0.0.292-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e087b24d0d849c5c81516ec740bf4fd48bf363cfb104545464e0fca749b6af9"},
    {file = "ruff-0.0.292-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:f160b5ec26be32362d0774964e218f3fcf0a7da299f7e220ef45ae9e3e67101a"},
    {file = "ruff-0.0.292-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:ac153eee6dd4444501c4bb92bff866491d4bfb01ce26dd2fff7ca472c8df9ad0"},
    {file = "ruff-0.0.292-py3-none-musllinux_1_2_i686.whl", hash = "sha256:87616771e72820800b8faea82edd858324b29bb99a920d6aa3d3949dd3f88fb0"},
    {file = "ruff-0.0.292-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:b76deb3bdbea2ef97db286cf953488745dd6424c122d275f05836c53f62d4016"},
    {file = "ruff-0.0.292-py3-none-win32.whl", hash = "sha256:e854b05408f7a8033a027e4b1c7f9889563dd2aca545d13d06711e5c39c3d003"},
    {file = "ruff-0.0.292-py3-none-win_amd64.whl", hash = "sha256:f27282bedfd04d4c3492e5c3398360c9d86a295be00eccc63914438b4ac8a83c"},
    {file = "ruff-0.0.292-py3-none-win_arm64.whl", hash = "sha256:7f67a69c8f12fbc8daf6ae6d36705037bde315abf8b82b6e1f4c9e74eb750f68"},
    {file = "ruff-0.0.292.tar.gz", hash = "sha256:1093449e37dd1e9b813798f6ad70932b57cf614e5c2b5c51005bf67d55db33ac"},
]

[[package]]

@@ -2483,13 +2494,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]

[[package]]
name = "sentry-sdk"
version = "1.31.0"
version = "1.32.0"
description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = "*"
files = [
    {file = "sentry-sdk-1.31.0.tar.gz", hash = "sha256:6de2e88304873484207fed836388e422aeff000609b104c802749fd89d56ba5b"},
    {file = "sentry_sdk-1.31.0-py2.py3-none-any.whl", hash = "sha256:64a7141005fb775b9db298a30de93e3b83e0ddd1232dc6f36eb38aebc1553291"},
    {file = "sentry-sdk-1.32.0.tar.gz", hash = "sha256:935e8fbd7787a3702457393b74b13d89a5afb67185bc0af85c00cb27cbd42e7c"},
    {file = "sentry_sdk-1.32.0-py2.py3-none-any.whl", hash = "sha256:eeb0b3550536f3bbc05bb1c7e0feb3a78d74acb43b607159a606ed2ec0a33a4d"},
]

[package.dependencies]

@@ -3037,13 +3048,13 @@ twisted = "*"

[[package]]
name = "types-bleach"
version = "6.0.0.4"
version = "6.1.0.0"
description = "Typing stubs for bleach"
optional = false
python-versions = "*"
python-versions = ">=3.7"
files = [
    {file = "types-bleach-6.0.0.4.tar.gz", hash = "sha256:357b0226f65c4f20ab3b13ca8d78a6b91c78aad256d8ec168d4e90fc3303ebd4"},
    {file = "types_bleach-6.0.0.4-py3-none-any.whl", hash = "sha256:2b8767eb407c286b7f02803678732e522e04db8d56cbc9f1270bee49627eae92"},
    {file = "types-bleach-6.1.0.0.tar.gz", hash = "sha256:3cf0e55d4618890a00af1151f878b2e2a7a96433850b74e12bede7663d774532"},
    {file = "types_bleach-6.1.0.0-py3-none-any.whl", hash = "sha256:f0bc75d0f6475036ac69afebf37c41d116dfba78dae55db80437caf0fcd35c28"},
]

[[package]]

@@ -3059,15 +3070,18 @@ files = [

[[package]]
name = "types-jsonschema"
version = "4.17.0.10"
version = "4.19.0.3"
description = "Typing stubs for jsonschema"
optional = false
python-versions = "*"
python-versions = ">=3.8"
files = [
    {file = "types-jsonschema-4.17.0.10.tar.gz", hash = "sha256:8e979db34d69bc9f9b3d6e8b89bdbc60b3a41cfce4e1fb87bf191d205c7f5098"},
    {file = "types_jsonschema-4.17.0.10-py3-none-any.whl", hash = "sha256:3aa2a89afbd9eaa6ce0c15618b36f02692a621433889ce73014656f7d8caf971"},
    {file = "types-jsonschema-4.19.0.3.tar.gz", hash = "sha256:e0fc0f5d51fd0988bf193be42174a5376b0096820ff79505d9c1b66de23f0581"},
    {file = "types_jsonschema-4.19.0.3-py3-none-any.whl", hash = "sha256:5cedbb661e5ca88d95b94b79902423e3f97a389c245e5fe0ab384122f27d56b9"},
]

[package.dependencies]
referencing = "*"

[[package]]
name = "types-netaddr"
version = "0.9.0.1"

@@ -3444,4 +3458,4 @@ user-search = ["pyicu"]
[metadata]
lock-version = "2.0"
python-versions = "^3.8.0"
content-hash = "364c309486e9d93d4da8a1a3784d5ecd7d2a9734cf84dcd4a991f2cd54f0b5b5"
content-hash = "a08543c65f18cc7e9dea648e89c18ab88fc1747aa2e029aa208f777fc3db06dd"

@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
version = "1.94.0rc1"
version = "1.95.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"

@@ -321,7 +321,7 @@ all = [
# This helps prevent merge conflicts when running a batch of dependabot updates.
isort = ">=5.10.1"
black = ">=22.7.0"
ruff = "0.0.290"
ruff = "0.0.292"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"

@@ -25,14 +25,14 @@ name = "synapse.synapse_rust"
anyhow = "1.0.63"
lazy_static = "1.4.0"
log = "0.4.17"
pyo3 = { version = "0.17.1", features = [
pyo3 = { version = "0.19.2", features = [
    "macros",
    "anyhow",
    "abi3",
    "abi3-py37",
    "abi3-py38",
] }
pyo3-log = "0.8.1"
pythonize = "0.17.0"
pythonize = "0.19.0"
regex = "1.6.0"
serde = { version = "1.0.144", features = ["derive"] }
serde_json = "1.0.85"

@@ -105,6 +105,17 @@ impl PushRuleEvaluator {
    /// Create a new `PushRuleEvaluator`. See struct docstring for details.
    #[allow(clippy::too_many_arguments)]
    #[new]
    #[pyo3(signature = (
        flattened_keys,
        has_mentions,
        room_member_count,
        sender_power_level,
        notification_power_levels,
        related_events_flattened,
        related_event_match_enabled,
        room_version_feature_flags,
        msc3931_enabled,
    ))]
    pub fn py_new(
        flattened_keys: BTreeMap<String, JsonValue>,
        has_mentions: bool,
|
|||
|
||||
extra_test_args=()
|
||||
|
||||
test_tags="synapse_blacklist,msc3874,msc3890,msc3391,msc3930,faster_joins"
|
||||
test_packages="./tests/csapi ./tests ./tests/msc3874 ./tests/msc3890 ./tests/msc3391 ./tests/msc3930 ./tests/msc3902"
|
||||
|
||||
# All environment variables starting with PASS_ will be shared.
|
||||
# (The prefix is stripped off before reaching the container.)
|
||||
|
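
The `PASS_` convention in those comments amounts to the following idea, sketched
here in Python for illustration (the script's real mechanism lives in the
container tooling, not in this file):

```python
import os

# Collect PASS_-prefixed variables and strip the prefix before sharing
# them with the container, as the comments above describe.
shared_env = {
    name[len("PASS_"):]: value
    for name, value in os.environ.items()
    if name.startswith("PASS_")
}
```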

@@ -277,4 +277,4 @@ export PASS_SYNAPSE_LOG_TESTING=1
echo "Images built; running complement"
cd "$COMPLEMENT_DIR"

go test -v -tags $test_tags -count=1 "${extra_test_args[@]}" "$@" ./tests/...
go test -v -tags "synapse_blacklist" -count=1 "${extra_test_args[@]}" "$@" $test_packages
@@ -684,6 +684,10 @@ def full(gh_token: str) -> None:
    click.echo("1. If this is a security release, read the security wiki page.")
    click.echo("2. Check for any release blockers before proceeding.")
    click.echo("   https://github.com/matrix-org/synapse/labels/X-Release-Blocker")
    click.echo(
        "3. Check for any other special release notes, including announcements to add to the changelog or special deployment instructions."
    )
    click.echo("   See the 'Synapse Maintainer Report'.")

    click.confirm("Ready?", abort=True)
@@ -115,7 +115,7 @@ class InternalAuth(BaseAuth):
        Once get_user_by_req has set up the opentracing span, this does the actual work.
        """
        try:
            ip_addr = request.getClientAddress().host
            ip_addr = request.get_client_ip_if_available()
            user_agent = get_request_user_agent(request)

            access_token = self.get_access_token_from_request(request)
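
This change pairs with the `AttributeError` fix noted in the changelog
([\#16404]): over a unix socket, Twisted's `getClientAddress()` returns a
`UNIXAddress`, which has no `host` attribute. A minimal sketch of a defensive
accessor (assumed, not Synapse's exact implementation):

```python
from typing import Optional

from twisted.internet.address import IPv4Address, IPv6Address


def client_ip_if_available(request) -> Optional[str]:
    # Only IPv4Address/IPv6Address carry a `host`; a UNIXAddress does not.
    addr = request.getClientAddress()
    if isinstance(addr, (IPv4Address, IPv6Address)):
        return addr.host
    return None
```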

@@ -80,10 +80,6 @@ class UserPresenceState:
    def as_dict(self) -> JsonDict:
        return attr.asdict(self)

    @staticmethod
    def from_dict(d: JsonDict) -> "UserPresenceState":
        return UserPresenceState(**d)

    def copy_and_replace(self, **kwargs: Any) -> "UserPresenceState":
        return attr.evolve(self, **kwargs)
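
The removed `from_dict` was just keyword-expansion of the serialized dict,
which callers now do directly. A self-contained sketch of that round-trip
pattern (field names illustrative, not the full class):

```python
import attr


@attr.s(slots=True, frozen=True, auto_attribs=True)
class PresenceExample:
    user_id: str
    state: str


d = attr.asdict(PresenceExample("@alice:example.com", "online"))
restored = PresenceExample(**d)  # equivalent to the old from_dict helper
assert restored == PresenceExample("@alice:example.com", "online")
```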

@@ -1402,7 +1402,7 @@ class FederationClient(FederationBase):
            The remote homeserver returns some state from the room. The response
            dictionary is in the form:

            {"knock_state_events": [<state event dict>, ...]}
            {"knock_room_state": [<state event dict>, ...]}

            The list of state events may be empty.

@@ -1429,7 +1429,7 @@ class FederationClient(FederationBase):
            The remote homeserver can optionally return some state from the room. The response
            dictionary is in the form:

            {"knock_state_events": [<state event dict>, ...]}
            {"knock_room_state": [<state event dict>, ...]}

            The list of state events may be empty.
            """
@@ -850,14 +850,7 @@ class FederationServer(FederationBase):
                     context, self._room_prejoin_state_types
                 )
             )
-        return {
-            "knock_room_state": stripped_room_state,
-            # Since v1.37, Synapse incorrectly used "knock_state_events" for this field.
-            # Thus, we also populate a 'knock_state_events' with the same content to
-            # support old instances.
-            # See https://github.com/matrix-org/synapse/issues/14088.
-            "knock_state_events": stripped_room_state,
-        }
+        return {"knock_room_state": stripped_room_state}

     async def _on_send_membership_event(
         self, origin: str, content: JsonDict, membership_type: str, room_id: str
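
The server side now emits only the spec-compliant `knock_room_state` key, dropping the duplicate legacy `knock_state_events` payload. A hedged sketch of the corresponding read on the receiving side (mirroring the `FederationHandler` hunk further down, which removes its legacy fallback; `knock_response` is a plain dict from the remote server):

    # Reading a send_knock response after the legacy field was retired.
    def read_stripped_state(knock_response: dict) -> list:
        stripped = knock_response.get("knock_room_state")
        if stripped is None:
            raise KeyError("Missing 'knock_room_state' field in send_knock response")
        return stripped
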
@@ -395,7 +395,7 @@ class PresenceDestinationsRow(BaseFederationRow):
     @staticmethod
     def from_data(data: JsonDict) -> "PresenceDestinationsRow":
         return PresenceDestinationsRow(
-            state=UserPresenceState.from_dict(data["state"]), destinations=data["dests"]
+            state=UserPresenceState(**data["state"]), destinations=data["dests"]
         )

     def to_data(self) -> JsonDict:

@@ -67,7 +67,7 @@ The loop continues so long as there is anything to send. At each iteration of th

 When the `PerDestinationQueue` has the catch-up flag set, the *Catch-Up Transmission Loop*
 (`_catch_up_transmission_loop`) is used in lieu of the regular `_transaction_transmission_loop`.
-(Only once the catch-up mode has been exited can the regular tranaction transmission behaviour
+(Only once the catch-up mode has been exited can the regular transaction transmission behaviour
 be resumed.)

 *Catch-Up Mode*, entered upon Synapse startup or once a homeserver has fallen behind due to

@@ -431,7 +431,7 @@ class TransportLayerClient:
             The remote homeserver can optionally return some state from the room. The response
             dictionary is in the form:

-                {"knock_state_events": [<state event dict>, ...]}
+                {"knock_room_state": [<state event dict>, ...]}

             The list of state events may be empty.
             """

@@ -212,8 +212,8 @@ class AccountValidityHandler:

         addresses = []
         for threepid in threepids:
-            if threepid["medium"] == "email":
-                addresses.append(threepid["address"])
+            if threepid.medium == "email":
+                addresses.append(threepid.address)

         return addresses
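
Threepids move from untyped dicts to attribute access here and in several later hunks (`attr.asdict(t)` in the admin handler, `threepid.medium`/`threepid.address` in deactivation and the admin REST API). This implies an attrs result type in the storage layer; a hypothetical sketch of its shape, with field names taken from the surrounding diff:

    # Hypothetical shape of the result now returned by user_get_threepids;
    # only the fields exercised in this diff are modelled.
    import attr

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class ThreepidResult:
        medium: str   # e.g. "email" or "msisdn"
        address: str  # e.g. "alice@example.com"

    threepids = [ThreepidResult(medium="email", address="alice@example.com")]
    addresses = [t.address for t in threepids if t.medium == "email"]
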
@@ -16,6 +16,8 @@ import abc
 import logging
 from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Sequence, Set

+import attr
+
 from synapse.api.constants import Direction, Membership
 from synapse.events import EventBase
 from synapse.types import JsonMapping, RoomStreamToken, StateMap, UserID, UserInfo

@@ -93,7 +95,7 @@ class AdminHandler:
         ]
         user_info_dict["displayname"] = profile.display_name
         user_info_dict["avatar_url"] = profile.avatar_url
-        user_info_dict["threepids"] = threepids
+        user_info_dict["threepids"] = [attr.asdict(t) for t in threepids]
         user_info_dict["external_ids"] = external_ids
         user_info_dict["erased"] = await self._store.is_user_erased(user.to_string())

@@ -171,8 +173,8 @@ class AdminHandler:
         else:
             stream_ordering = room.stream_ordering

-        from_key = RoomStreamToken(0, 0)
-        to_key = RoomStreamToken(None, stream_ordering)
+        from_key = RoomStreamToken(topological=0, stream=0)
+        to_key = RoomStreamToken(stream=stream_ordering)

         # Events that we've processed in this room
         written_events: Set[str] = set()
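
The `RoomStreamToken` call sites switch from positional to keyword arguments, which reads unambiguously now that the multi-writer token has been factored out. A sketch of the construction under that assumption (the real class carries more machinery; this stand-in shows only the two fields visible in the diff):

    # Keyword-only construction the diff migrates to.
    import attr
    from typing import Optional

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class RoomStreamToken:
        topological: Optional[int] = None
        stream: int = 0

    from_key = RoomStreamToken(topological=0, stream=0)  # was RoomStreamToken(0, 0)
    to_key = RoomStreamToken(stream=42)                  # was RoomStreamToken(None, 42)
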
@@ -216,7 +216,7 @@ class ApplicationServicesHandler:

     def notify_interested_services_ephemeral(
         self,
-        stream_key: str,
+        stream_key: StreamKeyType,
         new_token: Union[int, RoomStreamToken],
         users: Collection[Union[str, UserID]],
     ) -> None:
@@ -326,7 +326,7 @@ class ApplicationServicesHandler:
     async def _notify_interested_services_ephemeral(
         self,
         services: List[ApplicationService],
-        stream_key: str,
+        stream_key: StreamKeyType,
         new_token: int,
         users: Collection[Union[str, UserID]],
     ) -> None:

@@ -117,9 +117,9 @@ class DeactivateAccountHandler:

         # Remove any local threepid associations for this account.
         local_threepids = await self.store.user_get_threepids(user_id)
-        for threepid in local_threepids:
+        for local_threepid in local_threepids:
             await self._auth_handler.delete_local_threepid(
-                user_id, threepid["medium"], threepid["address"]
+                user_id, local_threepid.medium, local_threepid.address
             )

         # delete any devices belonging to the user, which will also

@@ -14,17 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
-from typing import (
-    TYPE_CHECKING,
-    Any,
-    Dict,
-    Iterable,
-    List,
-    Mapping,
-    Optional,
-    Set,
-    Tuple,
-)
+from typing import TYPE_CHECKING, Dict, Iterable, List, Mapping, Optional, Set, Tuple

 from synapse.api import errors
 from synapse.api.constants import EduTypes, EventTypes
@@ -41,6 +31,7 @@ from synapse.metrics.background_process_metrics import (
     run_as_background_process,
     wrap_as_background_process,
 )
+from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
 from synapse.types import (
     JsonDict,
     JsonMapping,
@@ -845,7 +836,6 @@ class DeviceHandler(DeviceWorkerHandler):
                 else:
                     assert max_stream_id == stream_id
                     # Avoid moving `room_id` backwards.
-                    pass

                 if self._handle_new_device_update_new_data:
                     continue
@@ -1009,14 +999,14 @@ class DeviceHandler(DeviceWorkerHandler):


 def _update_device_from_client_ips(
-    device: JsonDict, client_ips: Mapping[Tuple[str, str], Mapping[str, Any]]
+    device: JsonDict, client_ips: Mapping[Tuple[str, str], DeviceLastConnectionInfo]
 ) -> None:
-    ip = client_ips.get((device["user_id"], device["device_id"]), {})
+    ip = client_ips.get((device["user_id"], device["device_id"]))
     device.update(
         {
-            "last_seen_user_agent": ip.get("user_agent"),
-            "last_seen_ts": ip.get("last_seen"),
-            "last_seen_ip": ip.get("ip"),
+            "last_seen_user_agent": ip.user_agent if ip else None,
+            "last_seen_ts": ip.last_seen if ip else None,
+            "last_seen_ip": ip.ip if ip else None,
         }
     )

@@ -868,19 +868,10 @@ class FederationHandler:
         # This is a bit of a hack and is cribbing off of invites. Basically we
         # store the room state here and retrieve it again when this event appears
         # in the invitee's sync stream. It is stripped out for all other local users.
-        stripped_room_state = (
-            knock_response.get("knock_room_state")
-            # Since v1.37, Synapse incorrectly used "knock_state_events" for this field.
-            # Thus, we also check for a 'knock_state_events' to support old instances.
-            # See https://github.com/matrix-org/synapse/issues/14088.
-            or knock_response.get("knock_state_events")
-        )
+        stripped_room_state = knock_response.get("knock_room_state")

         if stripped_room_state is None:
-            raise KeyError(
-                "Missing 'knock_room_state' (or legacy 'knock_state_events') field in "
-                "send_knock response"
-            )
+            raise KeyError("Missing 'knock_room_state' field in send_knock response")

         event.unsigned["knock_room_state"] = stripped_room_state
@@ -1506,7 +1497,6 @@ class FederationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass
         else:
             destinations = {x.split(":", 1)[-1] for x in (sender_user_id, room_id)}
@@ -1582,7 +1572,6 @@ class FederationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

     async def add_display_name_to_third_party_invite(
         self,

@@ -192,8 +192,7 @@ class InitialSyncHandler:
                     )
                 elif event.membership == Membership.LEAVE:
                     room_end_token = RoomStreamToken(
-                        None,
-                        event.stream_ordering,
+                        stream=event.stream_ordering,
                     )
                     deferred_room_state = run_in_background(
                         self._state_storage_controller.get_state_for_events,

@@ -1133,7 +1133,6 @@ class EventCreationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

         # we know it was persisted, so must have a stream ordering
         assert ev.internal_metadata.stream_ordering
@@ -2038,7 +2037,6 @@ class EventCreationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass
             return True
         except AuthError:
             logger.info(

@@ -110,6 +110,7 @@ from synapse.replication.http.streams import ReplicationGetStreamUpdates
 from synapse.replication.tcp.commands import ClearUserSyncsCommand
 from synapse.replication.tcp.streams import PresenceFederationStream, PresenceStream
 from synapse.storage.databases.main import DataStore
+from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.streams import EventSource
 from synapse.types import (
     JsonDict,
@@ -1499,9 +1500,9 @@ class PresenceHandler(BasePresenceHandler):
             # We may get multiple deltas for different rooms, but we want to
             # handle them on a room by room basis, so we batch them up by
             # room.
-            deltas_by_room: Dict[str, List[JsonDict]] = {}
+            deltas_by_room: Dict[str, List[StateDelta]] = {}
             for delta in deltas:
-                deltas_by_room.setdefault(delta["room_id"], []).append(delta)
+                deltas_by_room.setdefault(delta.room_id, []).append(delta)

             for room_id, deltas_for_room in deltas_by_room.items():
                 await self._handle_state_delta(room_id, deltas_for_room)
@@ -1513,7 +1514,7 @@ class PresenceHandler(BasePresenceHandler):
                 max_pos
             )

-    async def _handle_state_delta(self, room_id: str, deltas: List[JsonDict]) -> None:
+    async def _handle_state_delta(self, room_id: str, deltas: List[StateDelta]) -> None:
         """Process current state deltas for the room to find new joins that need
         to be handled.
         """
@@ -1524,31 +1525,30 @@ class PresenceHandler(BasePresenceHandler):
         newly_joined_users = set()

         for delta in deltas:
-            assert room_id == delta["room_id"]
+            assert room_id == delta.room_id

-            typ = delta["type"]
-            state_key = delta["state_key"]
-            event_id = delta["event_id"]
-            prev_event_id = delta["prev_event_id"]
-
-            logger.debug("Handling: %r %r, %s", typ, state_key, event_id)
+            logger.debug(
+                "Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
+            )

             # Drop any event that isn't a membership join
-            if typ != EventTypes.Member:
+            if delta.event_type != EventTypes.Member:
                 continue

-            if event_id is None:
+            if delta.event_id is None:
                 # state has been deleted, so this is not a join. We only care about
                 # joins.
                 continue

-            event = await self.store.get_event(event_id, allow_none=True)
+            event = await self.store.get_event(delta.event_id, allow_none=True)
             if not event or event.content.get("membership") != Membership.JOIN:
                 # We only care about joins
                 continue

-            if prev_event_id:
-                prev_event = await self.store.get_event(prev_event_id, allow_none=True)
+            if delta.prev_event_id:
+                prev_event = await self.store.get_event(
+                    delta.prev_event_id, allow_none=True
+                )
                 if (
                     prev_event
                     and prev_event.content.get("membership") == Membership.JOIN
@@ -1556,7 +1556,7 @@ class PresenceHandler(BasePresenceHandler):
                 # Ignore changes to join events.
                 continue

-            newly_joined_users.add(state_key)
+            newly_joined_users.add(delta.state_key)

         if not newly_joined_users:
             # If nobody has joined then there's nothing to do.
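
The presence, stats, room-forgetter and user-directory hunks all migrate from `delta["type"]`-style dict lookups to attribute access, implying an attrs `StateDelta` class in `synapse.storage.databases.main.state_deltas`. A hedged reconstruction of its shape; every field below appears as `delta.<field>` somewhere in these hunks, but the real class may differ in detail:

    import attr
    from typing import Optional

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class StateDelta:
        stream_id: int
        room_id: str
        event_type: str                # was delta["type"]
        state_key: str
        event_id: Optional[str]        # None if the state has been deleted
        prev_event_id: Optional[str]   # None for a brand-new state entry
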
@@ -19,7 +19,7 @@ from synapse.api.errors import SynapseError, UnrecognizedRequestError
 from synapse.push.clientformat import format_push_rules_for_user
 from synapse.storage.push_rule import RuleNotFoundException
 from synapse.synapse_rust.push import get_base_rule_ids
-from synapse.types import JsonDict, UserID
+from synapse.types import JsonDict, StreamKeyType, UserID

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -114,7 +114,9 @@ class PushRulesHandler:
             user_id: the user ID the change is for.
         """
         stream_id = self._main_store.get_max_push_rules_stream_id()
-        self._notifier.on_new_event("push_rules_key", stream_id, users=[user_id])
+        self._notifier.on_new_event(
+            StreamKeyType.PUSH_RULES, stream_id, users=[user_id]
+        )

     async def push_rules_for_user(
         self, user: UserID

@@ -130,11 +130,10 @@ class ReceiptsHandler:

     async def _handle_new_receipts(self, receipts: List[ReadReceipt]) -> bool:
         """Takes a list of receipts, stores them and informs the notifier."""
-        min_batch_id: Optional[int] = None
-        max_batch_id: Optional[int] = None

+        receipts_persisted: List[ReadReceipt] = []
         for receipt in receipts:
-            res = await self.store.insert_receipt(
+            stream_id = await self.store.insert_receipt(
                 receipt.room_id,
                 receipt.receipt_type,
                 receipt.user_id,
@@ -143,30 +142,26 @@ class ReceiptsHandler:
                 receipt.data,
             )

-            if not res:
-                # res will be None if this receipt is 'old'
+            if stream_id is None:
+                # stream_id will be None if this receipt is 'old'
                 continue

-            stream_id, max_persisted_id = res
+            receipts_persisted.append(receipt)

-            if min_batch_id is None or stream_id < min_batch_id:
-                min_batch_id = stream_id
-            if max_batch_id is None or max_persisted_id > max_batch_id:
-                max_batch_id = max_persisted_id
-
-        # Either both of these should be None or neither.
-        if min_batch_id is None or max_batch_id is None:
+        if not receipts_persisted:
             # no new receipts
             return False

-        affected_room_ids = list({r.room_id for r in receipts})
+        max_batch_id = self.store.get_max_receipt_stream_id()
+
+        affected_room_ids = list({r.room_id for r in receipts_persisted})

         self.notifier.on_new_event(
             StreamKeyType.RECEIPT, max_batch_id, rooms=affected_room_ids
         )
-        # Note that the min here shouldn't be relied upon to be accurate.
         await self.hs.get_pusherpool().on_new_receipts(
-            min_batch_id, max_batch_id, affected_room_ids
+            {r.user_id for r in receipts_persisted}
         )

         return True

@@ -261,7 +261,6 @@ class RoomCreationHandler:
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

         # This is to satisfy mypy and should never happen
         raise PartialStateConflictError()
@@ -1708,7 +1707,7 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):

         if from_key.topological:
             logger.warning("Stream has topological part!!!! %r", from_key)
-            from_key = RoomStreamToken(None, from_key.stream)
+            from_key = RoomStreamToken(stream=from_key.stream)

         app_service = self.store.get_app_service_by_user_id(user.to_string())
         if app_service:

@@ -16,7 +16,7 @@ import abc
 import logging
 import random
 from http import HTTPStatus
-from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple
+from typing import TYPE_CHECKING, Iterable, List, Optional, Set, Tuple

 from synapse import types
 from synapse.api.constants import (
@@ -44,6 +44,7 @@ from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
 from synapse.logging import opentracing
 from synapse.metrics import event_processing_positions
 from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.types import (
     JsonDict,
     Requester,
@@ -382,8 +383,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         and persist a new event for the new membership change.

         Args:
-            requester:
-            target:
+            requester: User requesting the membership change, i.e. the sender of the
+                desired membership event.
+            target: User whose membership should change, i.e. the state_key of the
+                desired membership event.
             room_id:
             membership:

@@ -415,7 +418,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         Returns:
             Tuple of event ID and stream ordering position
         """
-
         user_id = target.to_string()

         if content is None:
@@ -475,21 +477,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
                 (EventTypes.Member, user_id), None
             )

-            if event.membership == Membership.JOIN:
-                newly_joined = True
-                if prev_member_event_id:
-                    prev_member_event = await self.store.get_event(
-                        prev_member_event_id
-                    )
-                    newly_joined = prev_member_event.membership != Membership.JOIN
-
-                # Only rate-limit if the user actually joined the room, otherwise we'll end
-                # up blocking profile updates.
-                if newly_joined and ratelimit:
-                    await self._join_rate_limiter_local.ratelimit(requester)
-                    await self._join_rate_per_room_limiter.ratelimit(
-                        requester, key=room_id, update=False
-                    )
             with opentracing.start_active_span("handle_new_client_event"):
                 result_event = (
                     await self.event_creation_handler.handle_new_client_event(
@@ -514,7 +501,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

         # we know it was persisted, so should have a stream ordering
         assert result_event.internal_metadata.stream_ordering
@@ -618,6 +604,25 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         Raises:
             ShadowBanError if a shadow-banned requester attempts to send an invite.
         """
+        if ratelimit:
+            if action == Membership.JOIN:
+                # Only rate-limit if the user isn't already joined to the room, otherwise
+                # we'll end up blocking profile updates.
+                (
+                    current_membership,
+                    _,
+                ) = await self.store.get_local_current_membership_for_user_in_room(
+                    requester.user.to_string(),
+                    room_id,
+                )
+                if current_membership != Membership.JOIN:
+                    await self._join_rate_limiter_local.ratelimit(requester)
+                    await self._join_rate_per_room_limiter.ratelimit(
+                        requester, key=room_id, update=False
+                    )
+            elif action == Membership.INVITE:
+                await self.ratelimit_invite(requester, room_id, target.to_string())
+
         if action == Membership.INVITE and requester.shadow_banned:
             # We randomly sleep a bit just to annoy the requester.
             await self.clock.sleep(random.randint(1, 10))
@@ -808,8 +813,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):

         if effective_membership_state == Membership.INVITE:
             target_id = target.to_string()
-            if ratelimit:
-                await self.ratelimit_invite(requester, room_id, target_id)

             # block any attempts to invite the server notices mxid
             if target_id == self._server_notices_mxid:
@@ -2016,7 +2019,6 @@ class RoomMemberMasterHandler(RoomMemberHandler):
                 # in the meantime and context needs to be recomputed, so let's do so.
                 if i == max_retries - 1:
                     raise e
-                pass

         # we know it was persisted, so must have a stream ordering
         assert result_event.internal_metadata.stream_ordering
@@ -2159,24 +2161,18 @@ class RoomForgetterHandler(StateDeltasHandler):

         await self._store.update_room_forgetter_stream_pos(max_pos)

-    async def _handle_deltas(self, deltas: List[Dict[str, Any]]) -> None:
+    async def _handle_deltas(self, deltas: List[StateDelta]) -> None:
         """Called with the state deltas to process"""
         for delta in deltas:
-            typ = delta["type"]
-            state_key = delta["state_key"]
-            room_id = delta["room_id"]
-            event_id = delta["event_id"]
-            prev_event_id = delta["prev_event_id"]
-
-            if typ != EventTypes.Member:
+            if delta.event_type != EventTypes.Member:
                 continue

-            if not self._hs.is_mine_id(state_key):
+            if not self._hs.is_mine_id(delta.state_key):
                 continue

             change = await self._get_key_change(
-                prev_event_id,
-                event_id,
+                delta.prev_event_id,
+                delta.event_id,
                 key_name="membership",
                 public_value=Membership.JOIN,
             )
@@ -2185,7 +2181,7 @@ class RoomForgetterHandler(StateDeltasHandler):
             if is_leave:
                 try:
                     await self._room_member_handler.forget(
-                        UserID.from_string(state_key), room_id
+                        UserID.from_string(delta.state_key), delta.room_id
                     )
                 except SynapseError as e:
                     if e.code == 400:
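
The net effect of the room-member hunks is that join rate-limiting moves out of the event-persistence path and into `update_membership`, applied only when the requester is not already joined, so profile updates (which are join-to-join transitions) are never throttled. A simplified, hedged stand-in for that guard, not the real handler:

    # Sketch of the relocated guard; store/limiter names mirror the diff,
    # "join" stands in for Membership.JOIN.
    async def maybe_ratelimit_join(store, limiter, requester, room_id: str) -> None:
        membership, _ = await store.get_local_current_membership_for_user_in_room(
            requester.user.to_string(), room_id
        )
        if membership != "join":
            # Only genuinely new joins are rate-limited.
            await limiter.ratelimit(requester)
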
@@ -27,6 +27,7 @@ from typing import (
 from synapse.api.constants import EventContentFields, EventTypes, Membership
 from synapse.metrics import event_processing_positions
 from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.types import JsonDict

 if TYPE_CHECKING:
@@ -142,7 +143,7 @@ class StatsHandler:
         self.pos = max_pos

     async def _handle_deltas(
-        self, deltas: Iterable[JsonDict]
+        self, deltas: Iterable[StateDelta]
     ) -> Tuple[Dict[str, CounterType[str]], Dict[str, CounterType[str]]]:
         """Called with the state deltas to process

@@ -157,51 +158,50 @@ class StatsHandler:
         room_to_state_updates: Dict[str, Dict[str, Any]] = {}

         for delta in deltas:
-            typ = delta["type"]
-            state_key = delta["state_key"]
-            room_id = delta["room_id"]
-            event_id = delta["event_id"]
-            stream_id = delta["stream_id"]
-            prev_event_id = delta["prev_event_id"]
-
-            logger.debug("Handling: %r, %r %r, %s", room_id, typ, state_key, event_id)
+            logger.debug(
+                "Handling: %r, %r %r, %s",
+                delta.room_id,
+                delta.event_type,
+                delta.state_key,
+                delta.event_id,
+            )

-            token = await self.store.get_earliest_token_for_stats("room", room_id)
+            token = await self.store.get_earliest_token_for_stats("room", delta.room_id)

             # If the earliest token to begin from is larger than our current
             # stream ID, skip processing this delta.
-            if token is not None and token >= stream_id:
+            if token is not None and token >= delta.stream_id:
                 logger.debug(
                     "Ignoring: %s as earlier than this room's initial ingestion event",
-                    event_id,
+                    delta.event_id,
                 )
                 continue

-            if event_id is None and prev_event_id is None:
+            if delta.event_id is None and delta.prev_event_id is None:
                 logger.error(
                     "event ID is None and so is the previous event ID. stream_id: %s",
-                    stream_id,
+                    delta.stream_id,
                 )
                 continue

             event_content: JsonDict = {}

-            if event_id is not None:
-                event = await self.store.get_event(event_id, allow_none=True)
+            if delta.event_id is not None:
+                event = await self.store.get_event(delta.event_id, allow_none=True)
                 if event:
                     event_content = event.content or {}

             # All the values in this dict are deltas (RELATIVE changes)
-            room_stats_delta = room_to_stats_deltas.setdefault(room_id, Counter())
+            room_stats_delta = room_to_stats_deltas.setdefault(delta.room_id, Counter())

-            room_state = room_to_state_updates.setdefault(room_id, {})
+            room_state = room_to_state_updates.setdefault(delta.room_id, {})

-            if prev_event_id is None:
+            if delta.prev_event_id is None:
                 # this state event doesn't overwrite another,
                 # so it is a new effective/current state event
                 room_stats_delta["current_state_events"] += 1

-            if typ == EventTypes.Member:
+            if delta.event_type == EventTypes.Member:
                 # we could use StateDeltasHandler._get_key_change here but it's
                 # a bit inefficient given we're not testing for a specific
                 # result; might as well just grab the prev_membership and
@@ -210,9 +210,9 @@ class StatsHandler:
                 # in the absence of a previous event because we do not want to
                 # reduce the leave count when a new-to-the-room user joins.
                 prev_membership = None
-                if prev_event_id is not None:
+                if delta.prev_event_id is not None:
                     prev_event = await self.store.get_event(
-                        prev_event_id, allow_none=True
+                        delta.prev_event_id, allow_none=True
                     )
                     if prev_event:
                         prev_event_content = prev_event.content
@@ -256,7 +256,7 @@ class StatsHandler:
                 else:
                     raise ValueError("%r is not a valid membership" % (membership,))

-                user_id = state_key
+                user_id = delta.state_key
                 if self.is_mine_id(user_id):
                     # this accounts for transitions like leave → ban and so on.
                     has_changed_joinedness = (prev_membership == Membership.JOIN) != (
@@ -272,30 +272,30 @@ class StatsHandler:

                     room_stats_delta["local_users_in_room"] += membership_delta

-            elif typ == EventTypes.Create:
+            elif delta.event_type == EventTypes.Create:
                 room_state["is_federatable"] = (
                     event_content.get(EventContentFields.FEDERATE, True) is True
                 )
                 room_type = event_content.get(EventContentFields.ROOM_TYPE)
                 if isinstance(room_type, str):
                     room_state["room_type"] = room_type
-            elif typ == EventTypes.JoinRules:
+            elif delta.event_type == EventTypes.JoinRules:
                 room_state["join_rules"] = event_content.get("join_rule")
-            elif typ == EventTypes.RoomHistoryVisibility:
+            elif delta.event_type == EventTypes.RoomHistoryVisibility:
                 room_state["history_visibility"] = event_content.get(
                     "history_visibility"
                 )
-            elif typ == EventTypes.RoomEncryption:
+            elif delta.event_type == EventTypes.RoomEncryption:
                 room_state["encryption"] = event_content.get("algorithm")
-            elif typ == EventTypes.Name:
+            elif delta.event_type == EventTypes.Name:
                 room_state["name"] = event_content.get("name")
-            elif typ == EventTypes.Topic:
+            elif delta.event_type == EventTypes.Topic:
                 room_state["topic"] = event_content.get("topic")
-            elif typ == EventTypes.RoomAvatar:
+            elif delta.event_type == EventTypes.RoomAvatar:
                 room_state["avatar"] = event_content.get("url")
-            elif typ == EventTypes.CanonicalAlias:
+            elif delta.event_type == EventTypes.CanonicalAlias:
                 room_state["canonical_alias"] = event_content.get("alias")
-            elif typ == EventTypes.GuestAccess:
+            elif delta.event_type == EventTypes.GuestAccess:
                 room_state["guest_access"] = event_content.get(
                     EventContentFields.GUEST_ACCESS
                 )

@@ -40,7 +40,6 @@ from synapse.api.filtering import FilterCollection
 from synapse.api.presence import UserPresenceState
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
 from synapse.events import EventBase
-from synapse.handlers.device import DELETE_DEVICE_MSGS_TASK_NAME
 from synapse.handlers.relations import BundledAggregations
 from synapse.logging import issue9533_logger
 from synapse.logging.context import current_context
@@ -363,36 +362,15 @@ class SyncHandler:
         # (since we now know that the device has received them)
         if since_token is not None:
             since_stream_id = since_token.to_device_key
-            # Fast path: delete a limited number of to-device messages up front.
-            # We do this to avoid the overhead of scheduling a task for every
-            # sync.
-            device_deletion_limit = 100
             deleted = await self.store.delete_messages_for_device(
                 sync_config.user.to_string(),
                 sync_config.device_id,
                 since_stream_id,
-                limit=device_deletion_limit,
             )
             logger.debug(
                 "Deleted %d to-device messages up to %d", deleted, since_stream_id
             )
-
-            # If we hit the limit, schedule a background task to delete the rest.
-            if deleted >= device_deletion_limit:
-                await self._task_scheduler.schedule_task(
-                    DELETE_DEVICE_MSGS_TASK_NAME,
-                    resource_id=sync_config.device_id,
-                    params={
-                        "user_id": sync_config.user.to_string(),
-                        "device_id": sync_config.device_id,
-                        "up_to_stream_id": since_stream_id,
-                    },
-                )
-                logger.debug(
-                    "Deletion of to-device messages up to %d scheduled",
-                    since_stream_id,
-                )

         if timeout == 0 or since_token is None or full_state:
             # we are going to return immediately, so don't bother calling
             # notifier.wait_for_events.
@@ -2333,7 +2311,7 @@ class SyncHandler:
                     continue

                 leave_token = now_token.copy_and_replace(
-                    StreamKeyType.ROOM, RoomStreamToken(None, event.stream_ordering)
+                    StreamKeyType.ROOM, RoomStreamToken(stream=event.stream_ordering)
                 )
                 room_entries.append(
                     RoomSyncResultBuilder(

@@ -14,7 +14,7 @@

 import logging
 from http import HTTPStatus
-from typing import TYPE_CHECKING, Any, Dict, List, Optional, Set, Tuple
+from typing import TYPE_CHECKING, List, Optional, Set, Tuple

 from twisted.internet.interfaces import IDelayedCall
@@ -23,6 +23,7 @@ from synapse.api.constants import EventTypes, HistoryVisibility, JoinRules, Memb
 from synapse.api.errors import Codes, SynapseError
 from synapse.handlers.state_deltas import MatchChange, StateDeltasHandler
 from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.storage.databases.main.user_directory import SearchResult
 from synapse.storage.roommember import ProfileInfo
 from synapse.types import UserID
@@ -247,32 +248,31 @@ class UserDirectoryHandler(StateDeltasHandler):

         await self.store.update_user_directory_stream_pos(max_pos)

-    async def _handle_deltas(self, deltas: List[Dict[str, Any]]) -> None:
+    async def _handle_deltas(self, deltas: List[StateDelta]) -> None:
         """Called with the state deltas to process"""
         for delta in deltas:
-            typ = delta["type"]
-            state_key = delta["state_key"]
-            room_id = delta["room_id"]
-            event_id: Optional[str] = delta["event_id"]
-            prev_event_id: Optional[str] = delta["prev_event_id"]
-
-            logger.debug("Handling: %r %r, %s", typ, state_key, event_id)
+            logger.debug(
+                "Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
+            )

             # For join rule and visibility changes we need to check if the room
             # may have become public or not and add/remove the users in said room
-            if typ in (EventTypes.RoomHistoryVisibility, EventTypes.JoinRules):
+            if delta.event_type in (
+                EventTypes.RoomHistoryVisibility,
+                EventTypes.JoinRules,
+            ):
                 await self._handle_room_publicity_change(
-                    room_id, prev_event_id, event_id, typ
+                    delta.room_id, delta.prev_event_id, delta.event_id, delta.event_type
                 )
-            elif typ == EventTypes.Member:
+            elif delta.event_type == EventTypes.Member:
                 await self._handle_room_membership_event(
-                    room_id,
-                    prev_event_id,
-                    event_id,
-                    state_key,
+                    delta.room_id,
+                    delta.prev_event_id,
+                    delta.event_id,
+                    delta.state_key,
                 )
             else:
-                logger.debug("Ignoring irrelevant type: %r", typ)
+                logger.debug("Ignoring irrelevant type: %r", delta.event_type)

     async def _handle_room_publicity_change(
         self,

@@ -266,7 +266,7 @@ class HttpServer(Protocol):
     def register_paths(
         self,
         method: str,
-        path_patterns: Iterable[Pattern],
+        path_patterns: Iterable[Pattern[str]],
         callback: ServletCallback,
         servlet_classname: str,
     ) -> None:

@@ -26,11 +26,11 @@ from twisted.internet.interfaces import IConsumer
 from twisted.protocols.basic import FileSender
 from twisted.web.server import Request

-from synapse.api.errors import Codes, SynapseError, cs_error
+from synapse.api.errors import Codes, cs_error
 from synapse.http.server import finish_request, respond_with_json
 from synapse.http.site import SynapseRequest
 from synapse.logging.context import make_deferred_yieldable
-from synapse.util.stringutils import is_ascii, parse_and_validate_server_name
+from synapse.util.stringutils import is_ascii

 logger = logging.getLogger(__name__)
@@ -84,52 +84,12 @@ INLINE_CONTENT_TYPES = [
 ]


-def parse_media_id(request: Request) -> Tuple[str, str, Optional[str]]:
-    """Parses the server name, media ID and optional file name from the request URI
-
-    Also performs some rough validation on the server name.
-
-    Args:
-        request: The `Request`.
-
-    Returns:
-        A tuple containing the parsed server name, media ID and optional file name.
-
-    Raises:
-        SynapseError(404): if parsing or validation fail for any reason
-    """
-    try:
-        # The type on postpath seems incorrect in Twisted 21.2.0.
-        postpath: List[bytes] = request.postpath  # type: ignore
-        assert postpath
-
-        # This allows users to append e.g. /test.png to the URL. Useful for
-        # clients that parse the URL to see content type.
-        server_name_bytes, media_id_bytes = postpath[:2]
-        server_name = server_name_bytes.decode("utf-8")
-        media_id = media_id_bytes.decode("utf8")
-
-        # Validate the server name, raising if invalid
-        parse_and_validate_server_name(server_name)
-
-        file_name = None
-        if len(postpath) > 2:
-            try:
-                file_name = urllib.parse.unquote(postpath[-1].decode("utf-8"))
-            except UnicodeDecodeError:
-                pass
-        return server_name, media_id, file_name
-    except Exception:
-        raise SynapseError(
-            404, "Invalid media id token %r" % (request.postpath,), Codes.UNKNOWN
-        )
-
-
 def respond_404(request: SynapseRequest) -> None:
+    assert request.path is not None
     respond_with_json(
         request,
         404,
-        cs_error("Not found %r" % (request.postpath,), code=Codes.NOT_FOUND),
+        cs_error("Not found '%s'" % (request.path.decode(),), code=Codes.NOT_FOUND),
         send_cors=True,
     )
@@ -188,7 +148,9 @@ def add_file_headers(

     # A strict subset of content types is allowed to be inlined so that they may
     # be viewed directly in a browser. Other file types are forced to be downloads.
-    if media_type.lower() in INLINE_CONTENT_TYPES:
+    #
+    # Only the type & subtype are important, parameters can be ignored.
+    if media_type.lower().split(";", 1)[0] in INLINE_CONTENT_TYPES:
         disposition = "inline"
     else:
         disposition = "attachment"
@@ -372,7 +334,7 @@ class ThumbnailInfo:
     # Content type of thumbnail, e.g. image/png
     type: str
     # The size of the media file, in bytes.
-    length: Optional[int] = None
+    length: int


 @attr.s(slots=True, frozen=True, auto_attribs=True)
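
The `add_file_headers` fix above is the "inline media with content-type parameters" bugfix: the allow-list check now strips parameters before comparing, so `image/png; charset=utf-8` is still served inline. The check from the hunk, exercised in isolation with an abridged allow-list:

    INLINE_CONTENT_TYPES = ["image/png", "text/plain"]  # abridged for the example

    def disposition_for(media_type: str) -> str:
        essence = media_type.lower().split(";", 1)[0].strip()
        return "inline" if essence in INLINE_CONTENT_TYPES else "attachment"

    assert disposition_for("image/png; charset=utf-8") == "inline"
    assert disposition_for("application/pdf") == "attachment"
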
@@ -48,6 +48,7 @@ from synapse.media.filepath import MediaFilePaths
 from synapse.media.media_storage import MediaStorage
 from synapse.media.storage_provider import StorageProviderWrapper
 from synapse.media.thumbnailer import Thumbnailer, ThumbnailError
+from synapse.media.url_previewer import UrlPreviewer
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.types import UserID
 from synapse.util.async_helpers import Linearizer
@@ -114,7 +115,7 @@ class MediaRepository:
             )
             storage_providers.append(provider)

-        self.media_storage = MediaStorage(
+        self.media_storage: MediaStorage = MediaStorage(
             self.hs, self.primary_base_path, self.filepaths, storage_providers
         )
@@ -142,6 +143,13 @@ class MediaRepository:
             MEDIA_RETENTION_CHECK_PERIOD_MS,
         )

+        if hs.config.media.url_preview_enabled:
+            self.url_previewer: Optional[UrlPreviewer] = UrlPreviewer(
+                hs, self, self.media_storage
+            )
+        else:
+            self.url_previewer = None
+
     def _start_update_recently_accessed(self) -> Deferred:
         return run_as_background_process(
             "update_recently_accessed_media", self._update_recently_accessed
@@ -616,6 +624,7 @@ class MediaRepository:
                     height=t_height,
                     method=t_method,
                     type=t_type,
+                    length=t_byte_source.tell(),
                 ),
             )
@@ -686,6 +695,7 @@ class MediaRepository:
                     height=t_height,
                     method=t_method,
                     type=t_type,
+                    length=t_byte_source.tell(),
                 ),
             )
@@ -831,6 +841,7 @@ class MediaRepository:
                     height=t_height,
                     method=t_method,
                     type=t_type,
+                    length=t_byte_source.tell(),
                 ),
             )

@@ -678,7 +678,7 @@ class ModuleApi:
             "msisdn" for phone numbers, and an "address" key which value is the
             threepid's address.
         """
-        return await self._store.user_get_threepids(user_id)
+        return [attr.asdict(t) for t in await self._store.user_get_threepids(user_id)]

     def check_user_exists(self, user_id: str) -> "defer.Deferred[Optional[str]]":
         """Check if user exists.

@@ -126,7 +126,7 @@ class _NotifierUserStream:

     def notify(
         self,
-        stream_key: str,
+        stream_key: StreamKeyType,
         stream_id: Union[int, RoomStreamToken],
         time_now_ms: int,
     ) -> None:
@@ -454,7 +454,7 @@ class Notifier:

     def on_new_event(
         self,
-        stream_key: str,
+        stream_key: StreamKeyType,
         new_token: Union[int, RoomStreamToken],
         users: Optional[Collection[Union[str, UserID]]] = None,
         rooms: Optional[StrCollection] = None,
@@ -655,30 +655,29 @@ class Notifier:
                 events: List[Union[JsonDict, EventBase]] = []
                 end_token = from_token

-                for name, source in self.event_sources.sources.get_sources():
-                    keyname = "%s_key" % name
-                    before_id = getattr(before_token, keyname)
-                    after_id = getattr(after_token, keyname)
+                for keyname, source in self.event_sources.sources.get_sources():
+                    before_id = before_token.get_field(keyname)
+                    after_id = after_token.get_field(keyname)
                     if before_id == after_id:
                         continue

                     new_events, new_key = await source.get_new_events(
                         user=user,
-                        from_key=getattr(from_token, keyname),
+                        from_key=from_token.get_field(keyname),
                         limit=limit,
                         is_guest=is_peeking,
                         room_ids=room_ids,
                         explicit_room_id=explicit_room_id,
                     )

-                    if name == "room":
+                    if keyname == StreamKeyType.ROOM:
                         new_events = await filter_events_for_client(
                             self._storage_controllers,
                             user.to_string(),
                             new_events,
                             is_peeking=is_peeking,
                         )
-                    elif name == "presence":
+                    elif keyname == StreamKeyType.PRESENCE:
                         now = self.clock.time_msec()
                         new_events[:] = [
                             {
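
The notifier stops building `"%s_key" % name` strings and reflecting over token attributes with `getattr`; stream keys become a `StreamKeyType` enum resolved through `token.get_field(...)`. A stand-in sketch of that lookup pattern (the real `StreamToken`/`StreamKeyType` live in `synapse.types` and differ in detail):

    from enum import Enum

    class StreamKeyType(Enum):
        ROOM = "room_key"
        PRESENCE = "presence_key"
        RECEIPT = "receipt_key"

    class StreamToken:
        def __init__(self, **fields: int) -> None:
            self._fields = fields

        def get_field(self, key: StreamKeyType) -> int:
            # Enum-keyed lookup replacing getattr(token, "<name>_key").
            return self._fields[key.value]

    token = StreamToken(room_key=10, presence_key=3, receipt_key=7)
    assert token.get_field(StreamKeyType.ROOM) == 10
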
@@ -101,7 +101,7 @@ if TYPE_CHECKING:
 class PusherConfig:
     """Parameters necessary to configure a pusher."""

-    id: Optional[str]
+    id: Optional[int]
     user_name: str

     profile_tag: str
@@ -182,7 +182,7 @@ class Pusher(metaclass=abc.ABCMeta):
         raise NotImplementedError()

     @abc.abstractmethod
-    def on_new_receipts(self, min_stream_id: int, max_stream_id: int) -> None:
+    def on_new_receipts(self) -> None:
         raise NotImplementedError()

     @abc.abstractmethod

@@ -99,7 +99,7 @@ class EmailPusher(Pusher):
                 pass
             self.timed_call = None

-    def on_new_receipts(self, min_stream_id: int, max_stream_id: int) -> None:
+    def on_new_receipts(self) -> None:
         # We could wake up and cancel the timer but there tend to be quite a
         # lot of read receipts so it's probably less work to just let the
         # timer fire

@@ -165,7 +165,7 @@ class HttpPusher(Pusher):
         if should_check_for_notifs:
             self._start_processing()

-    def on_new_receipts(self, min_stream_id: int, max_stream_id: int) -> None:
+    def on_new_receipts(self) -> None:
         # Note that the min here shouldn't be relied upon to be accurate.

         # We could check the receipts are actually m.read receipts here,

@@ -292,20 +292,12 @@ class PusherPool:
         except Exception:
             logger.exception("Exception in pusher on_new_notifications")

-    async def on_new_receipts(
-        self, min_stream_id: int, max_stream_id: int, affected_room_ids: Iterable[str]
-    ) -> None:
+    async def on_new_receipts(self, users_affected: StrCollection) -> None:
         if not self.pushers:
             # nothing to do here.
             return

         try:
-            # Need to subtract 1 from the minimum because the lower bound here
-            # is not inclusive
-            users_affected = await self.store.get_users_sent_receipts_between(
-                min_stream_id - 1, max_stream_id
-            )
-
             for u in users_affected:
                 # Don't push if the user account has expired
                 expired = await self._account_validity_handler.is_user_expired(u)
@@ -314,7 +306,7 @@ class PusherPool:

                 if u in self.pushers:
                     for p in self.pushers[u].values():
-                        p.on_new_receipts(min_stream_id, max_stream_id)
+                        p.on_new_receipts()

         except Exception:
             logger.exception("Exception in pusher on_new_receipts")

@@ -138,7 +138,11 @@ class ReplicationFederationSendEventsRestServlet(ReplicationEndpoint):

                 event_and_contexts.append((event, context))

-        logger.info("Got %d events from federation", len(event_and_contexts))
+        logger.info(
+            "Got batch of %i events to persist to room %s",
+            len(event_and_contexts),
+            room_id,
+        )

         max_stream_id = await self.federation_event_handler.persist_events_and_notify(
             room_id, event_and_contexts, backfilled

@@ -118,6 +118,7 @@ class ReplicationSendEventsRestServlet(ReplicationEndpoint):
         with Measure(self.clock, "repl_send_events_parse"):
             events_and_context = []
             events = payload["events"]
+            rooms = set()

             for event_payload in events:
                 event_dict = event_payload["event"]
@@ -144,11 +145,13 @@ class ReplicationSendEventsRestServlet(ReplicationEndpoint):
                     UserID.from_string(u) for u in event_payload["extra_users"]
                 ]

-                logger.info(
-                    "Got batch of events to send, last ID of batch is: %s, sending into room: %s",
-                    event.event_id,
-                    event.room_id,
-                )
+                # all the rooms *should* be the same, but we'll log separately to be
+                # sure.
+                rooms.add(event.room_id)
+
+            logger.info(
+                "Got batch of %i events to persist to rooms %s", len(events), rooms
+            )

             last_event = (
                 await self.event_creation_handler.persist_and_notify_client_events(

@@ -129,9 +129,7 @@ class ReplicationDataHandler:
             self.notifier.on_new_event(
                 StreamKeyType.RECEIPT, token, rooms=[row.room_id for row in rows]
             )
-            await self._pusher_pool.on_new_receipts(
-                token, token, {row.room_id for row in rows}
-            )
+            await self._pusher_pool.on_new_receipts({row.user_id for row in rows})
         elif stream_name == ToDeviceStream.NAME:
             entities = [row.entity for row in rows if row.entity.startswith("@")]
             if entities:

@@ -18,7 +18,7 @@ allowed to be sent by which side.
 """
 import abc
 import logging
-from typing import Optional, Tuple, Type, TypeVar
+from typing import List, Optional, Tuple, Type, TypeVar

 from synapse.replication.tcp.streams._base import StreamRow
 from synapse.util import json_decoder, json_encoder
@@ -74,6 +74,8 @@ SC = TypeVar("SC", bound="_SimpleCommand")
 class _SimpleCommand(Command):
     """An implementation of Command whose argument is just a 'data' string."""

+    __slots__ = ["data"]
+
     def __init__(self, data: str):
         self.data = data
@@ -122,6 +124,8 @@ class RdataCommand(Command):
         RDATA presence master 59 ["@baz:example.com", "online", ...]
     """

+    __slots__ = ["stream_name", "instance_name", "token", "row"]
+
     NAME = "RDATA"

     def __init__(
@@ -179,6 +183,8 @@ class PositionCommand(Command):
     of the stream.
     """

+    __slots__ = ["stream_name", "instance_name", "prev_token", "new_token"]
+
     NAME = "POSITION"

     def __init__(
@@ -235,6 +241,8 @@ class ReplicateCommand(Command):
         REPLICATE
     """

+    __slots__: List[str] = []
+
     NAME = "REPLICATE"

     def __init__(self) -> None:
@@ -264,6 +272,8 @@ class UserSyncCommand(Command):
     Where <state> is either "start" or "end"
     """

+    __slots__ = ["instance_id", "user_id", "device_id", "is_syncing", "last_sync_ms"]
+
     NAME = "USER_SYNC"

     def __init__(
@@ -316,6 +326,8 @@ class ClearUserSyncsCommand(Command):
         CLEAR_USER_SYNC <instance_id>
     """

+    __slots__ = ["instance_id"]
+
     NAME = "CLEAR_USER_SYNC"

     def __init__(self, instance_id: str):
@@ -343,6 +355,8 @@ class FederationAckCommand(Command):
         FEDERATION_ACK <instance_name> <token>
     """

+    __slots__ = ["instance_name", "token"]
+
     NAME = "FEDERATION_ACK"

     def __init__(self, instance_name: str, token: int):
@@ -368,6 +382,15 @@ class UserIpCommand(Command):
         USER_IP <user_id>, <access_token>, <ip>, <device_id>, <last_seen>, <user_agent>
     """

+    __slots__ = [
+        "user_id",
+        "access_token",
+        "ip",
+        "user_agent",
+        "device_id",
+        "last_seen",
+    ]
+
     NAME = "USER_IP"

     def __init__(
@@ -423,8 +446,6 @@ class RemoteServerUpCommand(_SimpleCommand):
     """Sent when a worker has detected that a remote server is no longer
     "down" and retry timings should be reset.

-    If sent from a client the server will relay to all other workers.
-
     Format::

         REMOTE_SERVER_UP <server>
@@ -441,6 +462,8 @@ class LockReleasedCommand(Command):
         LOCK_RELEASED ["<instance_name>", "<lock_name>", "<lock_key>"]
     """

+    __slots__ = ["instance_name", "lock_name", "lock_key"]
+
     NAME = "LOCK_RELEASED"

     def __init__(
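
Adding `__slots__` to every replication command class is part of the memory-allocation work in this release: slotted instances carry no per-instance `__dict__`, which matters when many small command objects are created per second. A self-contained illustration of the effect (a generic stand-in, not the real `Command` hierarchy):

    import sys

    class PlainCommand:
        def __init__(self, data: str) -> None:
            self.data = data

    class SlottedCommand:
        __slots__ = ["data"]
        def __init__(self, data: str) -> None:
            self.data = data

    plain, slotted = PlainCommand("x"), SlottedCommand("x")
    assert not hasattr(slotted, "__dict__")   # no per-instance dict allocated
    print(sys.getsizeof(plain) + sys.getsizeof(plain.__dict__))  # larger
    print(sys.getsizeof(slotted))                                # smaller
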
@@ -146,7 +146,7 @@ class PurgeHistoryRestServlet(RestServlet):
             # RoomStreamToken expects [int] not Optional[int]
             assert event.internal_metadata.stream_ordering is not None
             room_token = RoomStreamToken(
-                event.depth, event.internal_metadata.stream_ordering
+                topological=event.depth, stream=event.internal_metadata.stream_ordering
             )
             token = await room_token.to_string(self.store)

@@ -198,7 +198,13 @@ class DestinationMembershipRestServlet(RestServlet):
         rooms, total = await self._store.get_destination_rooms_paginate(
             destination, start, limit, direction
         )
-        response = {"rooms": rooms, "total": total}
+        response = {
+            "rooms": [
+                {"room_id": room_id, "stream_ordering": stream_ordering}
+                for room_id, stream_ordering in rooms
+            ],
+            "total": total,
+        }
         if (start + limit) < total:
             response["next_token"] = str(start + len(rooms))

@@ -329,9 +329,8 @@ class UserRestServletV2(RestServlet):

             if threepids is not None:
                 # get changed threepids (added and removed)
-                # convert List[Dict[str, Any]] into Set[Tuple[str, str]]
                 cur_threepids = {
-                    (threepid["medium"], threepid["address"])
+                    (threepid.medium, threepid.address)
                     for threepid in await self.store.user_get_threepids(user_id)
                 }
                 add_threepids = new_threepids - cur_threepids
@@ -842,7 +841,18 @@ class SearchUsersRestServlet(RestServlet):
         logger.info("term: %s ", term)

         ret = await self.store.search_users(term)
-        return HTTPStatus.OK, ret
+        results = [
+            {
+                "name": name,
+                "password_hash": password_hash,
+                "is_guest": bool(is_guest),
+                "admin": bool(admin),
+                "user_type": user_type,
+            }
+            for name, password_hash, is_guest, admin, user_type in ret
+        ]
+
+        return HTTPStatus.OK, results


 class UserAdminServlet(RestServlet):

@@ -24,6 +24,8 @@ if TYPE_CHECKING or HAS_PYDANTIC_V2:
     from pydantic.v1 import StrictBool, StrictStr, constr
 else:
     from pydantic import StrictBool, StrictStr, constr
+
+import attr
 from typing_extensions import Literal

 from twisted.web.server import Request
@@ -595,7 +597,7 @@ class ThreepidRestServlet(RestServlet):

         threepids = await self.datastore.user_get_threepids(requester.user.to_string())

-        return 200, {"threepids": threepids}
+        return 200, {"threepids": [attr.asdict(t) for t in threepids]}

     # NOTE(dmr): I have chosen not to use Pydantic to parse this request's body, because
     # the endpoint is deprecated. (If you really want to, you could do this by reusing

@@ -14,17 +14,19 @@
 # limitations under the License.
 #

+import re
 from typing import TYPE_CHECKING

-from synapse.http.server import DirectServeJsonResource, respond_with_json
+from synapse.http.server import respond_with_json
+from synapse.http.servlet import RestServlet
 from synapse.http.site import SynapseRequest

 if TYPE_CHECKING:
     from synapse.server import HomeServer


-class MediaConfigResource(DirectServeJsonResource):
-    isLeaf = True
+class MediaConfigResource(RestServlet):
+    PATTERNS = [re.compile("/_matrix/media/(r0|v3|v1)/config$")]

     def __init__(self, hs: "HomeServer"):
         super().__init__()
@@ -33,9 +35,6 @@ class MediaConfigResource(DirectServeJsonResource):
         self.auth = hs.get_auth()
         self.limits_dict = {"m.upload.size": config.media.max_upload_size}

-    async def _async_render_GET(self, request: SynapseRequest) -> None:
+    async def on_GET(self, request: SynapseRequest) -> None:
         await self.auth.get_user_by_req(request)
         respond_with_json(request, 200, self.limits_dict, send_cors=True)
-
-    async def _async_render_OPTIONS(self, request: SynapseRequest) -> None:
-        respond_with_json(request, 200, {}, send_cors=True)

@@ -13,16 +13,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
-from typing import TYPE_CHECKING
+import re
+from typing import TYPE_CHECKING, Optional

-from synapse.http.server import (
-    DirectServeJsonResource,
-    set_corp_headers,
-    set_cors_headers,
-)
-from synapse.http.servlet import parse_boolean
+from synapse.http.server import set_corp_headers, set_cors_headers
+from synapse.http.servlet import RestServlet, parse_boolean
 from synapse.http.site import SynapseRequest
-from synapse.media._base import parse_media_id, respond_404
+from synapse.media._base import respond_404
+from synapse.util.stringutils import parse_and_validate_server_name

 if TYPE_CHECKING:
     from synapse.media.media_repository import MediaRepository
@@ -31,15 +29,28 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)


-class DownloadResource(DirectServeJsonResource):
-    isLeaf = True
+class DownloadResource(RestServlet):
+    PATTERNS = [
+        re.compile(
+            "/_matrix/media/(r0|v3|v1)/download/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)(/(?P<file_name>[^/]*))?$"
+        )
+    ]

     def __init__(self, hs: "HomeServer", media_repo: "MediaRepository"):
         super().__init__()
         self.media_repo = media_repo
         self._is_mine_server_name = hs.is_mine_server_name

-    async def _async_render_GET(self, request: SynapseRequest) -> None:
+    async def on_GET(
+        self,
+        request: SynapseRequest,
+        server_name: str,
+        media_id: str,
+        file_name: Optional[str] = None,
+    ) -> None:
+        # Validate the server name, raising if invalid
+        parse_and_validate_server_name(server_name)
+
         set_cors_headers(request)
         set_corp_headers(request)
         request.setHeader(
@@ -58,9 +69,8 @@ class DownloadResource(DirectServeJsonResource):
             b"Referrer-Policy",
             b"no-referrer",
         )
-        server_name, media_id, name = parse_media_id(request)
         if self._is_mine_server_name(server_name):
-            await self.media_repo.get_local_media(request, media_id, name)
+            await self.media_repo.get_local_media(request, media_id, file_name)
         else:
            allow_remote = parse_boolean(request, "allow_remote", default=True)
            if not allow_remote:
@@ -72,4 +82,6 @@ class DownloadResource(DirectServeJsonResource):
                 respond_404(request)
                 return

-            await self.media_repo.get_remote_media(request, server_name, media_id, name)
+            await self.media_repo.get_remote_media(
+                request, server_name, media_id, file_name
+            )
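
The download endpoint now parses its URL through a servlet pattern instead of the hand-rolled (and now deleted) `parse_media_id` helper: the named groups feed `on_GET`'s `server_name`/`media_id`/`file_name` parameters directly. The regex from the hunk, exercised in isolation:

    import re

    pattern = re.compile(
        "/_matrix/media/(r0|v3|v1)/download/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)(/(?P<file_name>[^/]*))?$"
    )

    m = pattern.match("/_matrix/media/v3/download/example.com/abc123/cat.png")
    assert m is not None
    assert m.group("server_name") == "example.com"
    assert m.group("media_id") == "abc123"
    assert m.group("file_name") == "cat.png"

    # The trailing file name is optional.
    m2 = pattern.match("/_matrix/media/v3/download/example.com/abc123")
    assert m2 is not None and m2.group("file_name") is None
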
@@ -15,7 +15,7 @@
 from typing import TYPE_CHECKING

 from synapse.config._base import ConfigError
-from synapse.http.server import UnrecognizedRequestResource
+from synapse.http.server import HttpServer, JsonResource

 from .config_resource import MediaConfigResource
 from .download_resource import DownloadResource
@@ -27,7 +27,7 @@ if TYPE_CHECKING:
     from synapse.server import HomeServer


-class MediaRepositoryResource(UnrecognizedRequestResource):
+class MediaRepositoryResource(JsonResource):
     """File uploading and downloading.

     Uploads are POSTed to a resource which returns a token which is used to GET
@@ -70,6 +70,11 @@ class MediaRepositoryResource(UnrecognizedRequestResource):
     width and height are close to the requested size and the aspect matches
     the requested size. The client should scale the image if it needs to fit
     within a given rectangle.
+
+    This gets mounted at various points under /_matrix/media, including:
+       * /_matrix/media/r0
+       * /_matrix/media/v1
+       * /_matrix/media/v3
     """

     def __init__(self, hs: "HomeServer"):
@@ -77,17 +82,23 @@ class MediaRepositoryResource(UnrecognizedRequestResource):
         if not hs.config.media.can_load_media_repo:
             raise ConfigError("Synapse is not configured to use a media repo.")

-        super().__init__()
+        JsonResource.__init__(self, hs, canonical_json=False)
+        self.register_servlets(self, hs)
+
+    @staticmethod
+    def register_servlets(http_server: HttpServer, hs: "HomeServer") -> None:
         media_repo = hs.get_media_repository()

-        self.putChild(b"upload", UploadResource(hs, media_repo))
-        self.putChild(b"download", DownloadResource(hs, media_repo))
-        self.putChild(
-            b"thumbnail", ThumbnailResource(hs, media_repo, media_repo.media_storage)
-        )
+        # Note that many of these should not exist as v1 endpoints, but empirically
+        # a lot of traffic still goes to them.
+        UploadResource(hs, media_repo).register(http_server)
+        DownloadResource(hs, media_repo).register(http_server)
+        ThumbnailResource(hs, media_repo, media_repo.media_storage).register(
+            http_server
+        )
         if hs.config.media.url_preview_enabled:
-            self.putChild(
-                b"preview_url",
-                PreviewUrlResource(hs, media_repo, media_repo.media_storage),
-            )
+            PreviewUrlResource(hs, media_repo, media_repo.media_storage).register(
+                http_server
+            )
-        self.putChild(b"config", MediaConfigResource(hs))
+        MediaConfigResource(hs).register(http_server)

@@ -13,24 +13,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import re
from typing import TYPE_CHECKING

from synapse.http.server import (
    DirectServeJsonResource,
    respond_with_json,
    respond_with_json_bytes,
)
from synapse.http.servlet import parse_integer, parse_string
from synapse.http.server import respond_with_json_bytes
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.media.media_storage import MediaStorage
from synapse.media.url_previewer import UrlPreviewer

if TYPE_CHECKING:
    from synapse.media.media_repository import MediaRepository
    from synapse.server import HomeServer


class PreviewUrlResource(DirectServeJsonResource):
class PreviewUrlResource(RestServlet):
    """
    The `GET /_matrix/media/r0/preview_url` endpoint provides a generic preview API
    for URLs which outputs Open Graph (https://ogp.me/) responses (with some Matrix

@@ -48,7 +44,7 @@ class PreviewUrlResource(DirectServeJsonResource):
     * Matrix cannot be used to distribute the metadata between homeservers.
    """

    isLeaf = True
    PATTERNS = [re.compile("/_matrix/media/(r0|v3|v1)/preview_url$")]

    def __init__(
        self,

@@ -62,14 +58,10 @@ class PreviewUrlResource(DirectServeJsonResource):
        self.clock = hs.get_clock()
        self.media_repo = media_repo
        self.media_storage = media_storage
        assert self.media_repo.url_previewer is not None
        self.url_previewer = self.media_repo.url_previewer

        self._url_previewer = UrlPreviewer(hs, media_repo, media_storage)

    async def _async_render_OPTIONS(self, request: SynapseRequest) -> None:
        request.setHeader(b"Allow", b"OPTIONS, GET")
        respond_with_json(request, 200, {}, send_cors=True)

    async def _async_render_GET(self, request: SynapseRequest) -> None:
    async def on_GET(self, request: SynapseRequest) -> None:
        # XXX: if get_user_by_req fails, what should we do in an async render?
        requester = await self.auth.get_user_by_req(request)
        url = parse_string(request, "url", required=True)

@@ -77,5 +69,5 @@ class PreviewUrlResource(DirectServeJsonResource):
        if ts is None:
            ts = self.clock.time_msec()

        og = await self._url_previewer.preview(url, requester.user, ts)
        og = await self.url_previewer.preview(url, requester.user, ts)
        respond_with_json_bytes(request, 200, og, send_cors=True)

@@ -13,29 +13,24 @@
# See the License for the specific language governing permissions and
# limitations under the License.


import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
import re
from typing import TYPE_CHECKING, List, Optional, Tuple

from synapse.api.errors import Codes, SynapseError, cs_error
from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
from synapse.http.server import (
    DirectServeJsonResource,
    respond_with_json,
    set_corp_headers,
    set_cors_headers,
)
from synapse.http.servlet import parse_integer, parse_string
from synapse.http.server import respond_with_json, set_corp_headers, set_cors_headers
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.media._base import (
    FileInfo,
    ThumbnailInfo,
    parse_media_id,
    respond_404,
    respond_with_file,
    respond_with_responder,
)
from synapse.media.media_storage import MediaStorage
from synapse.util.stringutils import parse_and_validate_server_name

if TYPE_CHECKING:
    from synapse.media.media_repository import MediaRepository

@@ -44,8 +39,12 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)


class ThumbnailResource(DirectServeJsonResource):
    isLeaf = True
class ThumbnailResource(RestServlet):
    PATTERNS = [
        re.compile(
            "/_matrix/media/(r0|v3|v1)/thumbnail/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
        )
    ]

    def __init__(
        self,

@@ -60,12 +59,17 @@ class ThumbnailResource(DirectServeJsonResource):
        self.media_storage = media_storage
        self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
        self._is_mine_server_name = hs.is_mine_server_name
        self._server_name = hs.hostname
        self.prevent_media_downloads_from = hs.config.media.prevent_media_downloads_from

    async def _async_render_GET(self, request: SynapseRequest) -> None:
    async def on_GET(
        self, request: SynapseRequest, server_name: str, media_id: str
    ) -> None:
        # Validate the server name, raising if invalid
        parse_and_validate_server_name(server_name)

        set_cors_headers(request)
        set_corp_headers(request)
        server_name, media_id, _ = parse_media_id(request)
        width = parse_integer(request, "width", required=True)
        height = parse_integer(request, "height", required=True)
        method = parse_string(request, "method", "scale")

@@ -155,30 +159,24 @@ class ThumbnailResource(DirectServeJsonResource):

        thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
        for info in thumbnail_infos:
            t_w = info["thumbnail_width"] == desired_width
            t_h = info["thumbnail_height"] == desired_height
            t_method = info["thumbnail_method"] == desired_method
            t_type = info["thumbnail_type"] == desired_type
            t_w = info.width == desired_width
            t_h = info.height == desired_height
            t_method = info.method == desired_method
            t_type = info.type == desired_type

            if t_w and t_h and t_method and t_type:
                file_info = FileInfo(
                    server_name=None,
                    file_id=media_id,
                    url_cache=media_info["url_cache"],
                    thumbnail=ThumbnailInfo(
                        width=info["thumbnail_width"],
                        height=info["thumbnail_height"],
                        type=info["thumbnail_type"],
                        method=info["thumbnail_method"],
                    ),
                    thumbnail=info,
                )

                t_type = file_info.thumbnail_type
                t_length = info["thumbnail_length"]

                responder = await self.media_storage.fetch_media(file_info)
                if responder:
                    await respond_with_responder(request, responder, t_type, t_length)
                    await respond_with_responder(
                        request, responder, info.type, info.length
                    )
                    return

        logger.debug("We don't have a thumbnail of that size. Generating")

@@ -218,29 +216,23 @@ class ThumbnailResource(DirectServeJsonResource):
            file_id = media_info["filesystem_id"]

        for info in thumbnail_infos:
            t_w = info["thumbnail_width"] == desired_width
            t_h = info["thumbnail_height"] == desired_height
            t_method = info["thumbnail_method"] == desired_method
            t_type = info["thumbnail_type"] == desired_type
            t_w = info.width == desired_width
            t_h = info.height == desired_height
            t_method = info.method == desired_method
            t_type = info.type == desired_type

            if t_w and t_h and t_method and t_type:
                file_info = FileInfo(
                    server_name=server_name,
                    file_id=media_info["filesystem_id"],
                    thumbnail=ThumbnailInfo(
                        width=info["thumbnail_width"],
                        height=info["thumbnail_height"],
                        type=info["thumbnail_type"],
                        method=info["thumbnail_method"],
                    ),
                    thumbnail=info,
                )

                t_type = file_info.thumbnail_type
                t_length = info["thumbnail_length"]

                responder = await self.media_storage.fetch_media(file_info)
                if responder:
                    await respond_with_responder(request, responder, t_type, t_length)
                    await respond_with_responder(
                        request, responder, info.type, info.length
                    )
                    return

        logger.debug("We don't have a thumbnail of that size. Generating")

@@ -300,7 +292,7 @@ class ThumbnailResource(DirectServeJsonResource):
        desired_height: int,
        desired_method: str,
        desired_type: str,
        thumbnail_infos: List[Dict[str, Any]],
        thumbnail_infos: List[ThumbnailInfo],
        media_id: str,
        file_id: str,
        url_cache: bool,

@@ -315,7 +307,7 @@ class ThumbnailResource(DirectServeJsonResource):
            desired_height: The desired height, the returned thumbnail may be larger than this.
            desired_method: The desired method used to generate the thumbnail.
            desired_type: The desired content-type of the thumbnail.
            thumbnail_infos: A list of dictionaries of candidate thumbnails.
            thumbnail_infos: A list of thumbnail info of candidate thumbnails.
            file_id: The ID of the media that a thumbnail is being requested for.
            url_cache: True if this is from a URL cache.
            server_name: The server name, if this is a remote thumbnail.

@@ -418,13 +410,14 @@ class ThumbnailResource(DirectServeJsonResource):
            # `dynamic_thumbnails` is disabled.
            logger.info("Failed to find any generated thumbnails")

            assert request.path is not None
            respond_with_json(
                request,
                400,
                cs_error(
                    "Cannot find any thumbnails for the requested media (%r). This might mean the media is not a supported_media_format=(%s) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)"
                    "Cannot find any thumbnails for the requested media ('%s'). This might mean the media is not a supported_media_format=(%s) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)"
                    % (
                        request.postpath,
                        request.path.decode(),
                        ", ".join(THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP.keys()),
                    ),
                    code=Codes.UNKNOWN,

@@ -438,7 +431,7 @@ class ThumbnailResource(DirectServeJsonResource):
        desired_height: int,
        desired_method: str,
        desired_type: str,
        thumbnail_infos: List[Dict[str, Any]],
        thumbnail_infos: List[ThumbnailInfo],
        file_id: str,
        url_cache: bool,
        server_name: Optional[str],

@@ -451,7 +444,7 @@ class ThumbnailResource(DirectServeJsonResource):
            desired_height: The desired height, the returned thumbnail may be larger than this.
            desired_method: The desired method used to generate the thumbnail.
            desired_type: The desired content-type of the thumbnail.
            thumbnail_infos: A list of dictionaries of candidate thumbnails.
            thumbnail_infos: A list of thumbnail infos of candidate thumbnails.
            file_id: The ID of the media that a thumbnail is being requested for.
            url_cache: True if this is from a URL cache.
            server_name: The server name, if this is a remote thumbnail.

@@ -469,21 +462,25 @@ class ThumbnailResource(DirectServeJsonResource):

        if desired_method == "crop":
            # Thumbnails that match equal or larger sizes of desired width/height.
            crop_info_list: List[Tuple[int, int, int, bool, int, Dict[str, Any]]] = []
            crop_info_list: List[
                Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
            ] = []
            # Other thumbnails.
            crop_info_list2: List[Tuple[int, int, int, bool, int, Dict[str, Any]]] = []
            crop_info_list2: List[
                Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
            ] = []
            for info in thumbnail_infos:
                # Skip thumbnails generated with different methods.
                if info["thumbnail_method"] != "crop":
                if info.method != "crop":
                    continue

                t_w = info["thumbnail_width"]
                t_h = info["thumbnail_height"]
                t_w = info.width
                t_h = info.height
                aspect_quality = abs(d_w * t_h - d_h * t_w)
                min_quality = 0 if d_w <= t_w and d_h <= t_h else 1
                size_quality = abs((d_w - t_w) * (d_h - t_h))
                type_quality = desired_type != info["thumbnail_type"]
                length_quality = info["thumbnail_length"]
                type_quality = desired_type != info.type
                length_quality = info.length
                if t_w >= d_w or t_h >= d_h:
                    crop_info_list.append(
                        (

@@ -508,7 +505,7 @@ class ThumbnailResource(DirectServeJsonResource):
                    )
            # Pick the most appropriate thumbnail. Some values of `desired_width` and
            # `desired_height` may result in a tie, in which case we avoid comparing on
            # the thumbnail info dictionary and pick the thumbnail that appears earlier
            # the thumbnail info and pick the thumbnail that appears earlier
            # in the list of candidates.
            if crop_info_list:
                thumbnail_info = min(crop_info_list, key=lambda t: t[:-1])[-1]

@@ -516,20 +513,20 @@ class ThumbnailResource(DirectServeJsonResource):
                thumbnail_info = min(crop_info_list2, key=lambda t: t[:-1])[-1]
        elif desired_method == "scale":
            # Thumbnails that match equal or larger sizes of desired width/height.
            info_list: List[Tuple[int, bool, int, Dict[str, Any]]] = []
            info_list: List[Tuple[int, bool, int, ThumbnailInfo]] = []
            # Other thumbnails.
            info_list2: List[Tuple[int, bool, int, Dict[str, Any]]] = []
            info_list2: List[Tuple[int, bool, int, ThumbnailInfo]] = []

            for info in thumbnail_infos:
                # Skip thumbnails generated with different methods.
                if info["thumbnail_method"] != "scale":
                if info.method != "scale":
                    continue

                t_w = info["thumbnail_width"]
                t_h = info["thumbnail_height"]
                t_w = info.width
                t_h = info.height
                size_quality = abs((d_w - t_w) * (d_h - t_h))
                type_quality = desired_type != info["thumbnail_type"]
                length_quality = info["thumbnail_length"]
                type_quality = desired_type != info.type
                length_quality = info.length
                if t_w >= d_w or t_h >= d_h:
                    info_list.append((size_quality, type_quality, length_quality, info))
                else:

@@ -538,7 +535,7 @@ class ThumbnailResource(DirectServeJsonResource):
            )
            # Pick the most appropriate thumbnail. Some values of `desired_width` and
            # `desired_height` may result in a tie, in which case we avoid comparing on
            # the thumbnail info dictionary and pick the thumbnail that appears earlier
            # the thumbnail info and pick the thumbnail that appears earlier
            # in the list of candidates.
            if info_list:
                thumbnail_info = min(info_list, key=lambda t: t[:-1])[-1]

@@ -550,13 +547,7 @@ class ThumbnailResource(DirectServeJsonResource):
                file_id=file_id,
                url_cache=url_cache,
                server_name=server_name,
                thumbnail=ThumbnailInfo(
                    width=thumbnail_info["thumbnail_width"],
                    height=thumbnail_info["thumbnail_height"],
                    type=thumbnail_info["thumbnail_type"],
                    method=thumbnail_info["thumbnail_method"],
                ),
                thumbnail=thumbnail_info,
            )

        # No matching thumbnail was found.

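The thumbnail-selection hunks above rely on a tie-breaking idiom worth spelling out: each candidate is a tuple whose quality fields come first and whose payload (the `ThumbnailInfo`) comes last, so `min(..., key=lambda t: t[:-1])[-1]` ranks on every field except the payload and then returns the payload of the best candidate. A hedged, self-contained sketch (the `Thumb` class and weights are illustrative stand-ins; `attrs` is assumed available, as it is for Synapse):

from typing import List, Tuple

import attr


@attr.s(slots=True, frozen=True, auto_attribs=True)
class Thumb:
    width: int
    height: int
    length: int


def pick_best(candidates: List[Tuple[int, bool, int, Thumb]]) -> Thumb:
    # min() compares only (size_quality, type_quality, length_quality) and
    # never touches the Thumb itself, so ties resolve to the earliest list
    # entry (min() is stable) instead of trying to order attrs objects.
    return min(candidates, key=lambda t: t[:-1])[-1]


thumbs = [
    (100, False, 2048, Thumb(64, 64, 2048)),
    (100, False, 1024, Thumb(64, 64, 1024)),
]
assert pick_best(thumbs).length == 1024  # smaller file wins the tie
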
@@ -14,11 +14,12 @@
# limitations under the License.

import logging
import re
from typing import IO, TYPE_CHECKING, Dict, List, Optional

from synapse.api.errors import Codes, SynapseError
from synapse.http.server import DirectServeJsonResource, respond_with_json
from synapse.http.servlet import parse_bytes_from_args
from synapse.http.server import respond_with_json
from synapse.http.servlet import RestServlet, parse_bytes_from_args
from synapse.http.site import SynapseRequest
from synapse.media.media_storage import SpamMediaException


@@ -29,8 +30,8 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)


class UploadResource(DirectServeJsonResource):
    isLeaf = True
class UploadResource(RestServlet):
    PATTERNS = [re.compile("/_matrix/media/(r0|v3|v1)/upload")]

    def __init__(self, hs: "HomeServer", media_repo: "MediaRepository"):
        super().__init__()

@@ -43,10 +44,7 @@ class UploadResource(DirectServeJsonResource):
        self.max_upload_size = hs.config.media.max_upload_size
        self.clock = hs.get_clock()

    async def _async_render_OPTIONS(self, request: SynapseRequest) -> None:
        respond_with_json(request, 200, {}, send_cors=True)

    async def _async_render_POST(self, request: SynapseRequest) -> None:
    async def on_POST(self, request: SynapseRequest) -> None:
        requester = await self.auth.get_user_by_req(request)
        raw_content_length = request.getHeader("Content-Length")
        if raw_content_length is None:

@@ -16,7 +16,6 @@ from itertools import chain
from typing import (
    TYPE_CHECKING,
    AbstractSet,
    Any,
    Callable,
    Collection,
    Dict,

@@ -32,6 +31,7 @@ from typing import (
from synapse.api.constants import EventTypes, Membership
from synapse.events import EventBase
from synapse.logging.opentracing import tag_args, trace
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.storage.roommember import ProfileInfo
from synapse.storage.util.partial_state_events_tracker import (
    PartialCurrentStateTracker,

@@ -531,19 +531,9 @@ class StateStorageController:
    @tag_args
    async def get_current_state_deltas(
        self, prev_stream_id: int, max_stream_id: int
    ) -> Tuple[int, List[Dict[str, Any]]]:
    ) -> Tuple[int, List[StateDelta]]:
        """Fetch a list of room state changes since the given stream id

        Each entry in the result contains the following fields:
        - stream_id (int)
        - room_id (str)
        - type (str): event type
        - state_key (str):
        - event_id (str|None): new event_id for this state key. None if the
          state has been deleted.
        - prev_event_id (str|None): previous event_id for this state key. None
          if it's new state.

        Args:
            prev_stream_id: point to get changes since (exclusive)
            max_stream_id: the point that we know has been correctly persisted

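The hunk above replaces the ad-hoc per-row dict (whose shape the deleted docstring listed) with a typed `StateDelta` row. As a hedged sketch of what such a row looks like - field names follow the old docstring, while the class layout is an assumption modelled on Synapse's usual attrs conventions, not the actual definition in `state_deltas.py`:

from typing import Optional

import attr


@attr.s(slots=True, frozen=True, auto_attribs=True)
class StateDeltaSketch:
    stream_id: int
    room_id: str
    event_type: str
    state_key: str
    event_id: Optional[str]  # None if the state has been deleted
    prev_event_id: Optional[str]  # None if it's new state


delta = StateDeltaSketch(7, "!room:example.org", "m.room.name", "", "$ev", None)
assert delta.event_id == "$ev"  # attribute access replaces delta["event_id"]
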
@@ -1874,9 +1874,9 @@ class DatabasePool:
        keyvalues: Optional[Dict[str, Any]] = None,
        desc: str = "simple_select_many_batch",
        batch_size: int = 100,
    ) -> List[Dict[str, Any]]:
    ) -> List[Tuple[Any, ...]]:
        """Executes a SELECT query on the named table, which may return zero or
        more rows, returning the result as a list of dicts.
        more rows.

        Filters rows by whether the value of `column` is in `iterable`.

@@ -1888,10 +1888,13 @@ class DatabasePool:
            keyvalues: dict of column names and values to select the rows with
            desc: description of the transaction, for logging and metrics
            batch_size: the number of rows for each select query

        Returns:
            The results as a list of tuples.
        """
        keyvalues = keyvalues or {}

        results: List[Dict[str, Any]] = []
        results: List[Tuple[Any, ...]] = []

        for chunk in batch_iter(iterable, batch_size):
            rows = await self.runInteraction(

@@ -1918,9 +1921,9 @@ class DatabasePool:
        iterable: Collection[Any],
        keyvalues: Dict[str, Any],
        retcols: Iterable[str],
    ) -> List[Dict[str, Any]]:
    ) -> List[Tuple[Any, ...]]:
        """Executes a SELECT query on the named table, which may return zero or
        more rows, returning the result as a list of dicts.
        more rows.

        Filters rows by whether the value of `column` is in `iterable`.

@@ -1931,6 +1934,9 @@ class DatabasePool:
            iterable: list
            keyvalues: dict of column names and values to select the rows with
            retcols: list of strings giving the names of the columns to return

        Returns:
            The results as a list of tuples.
        """
        if not iterable:
            return []

@@ -1949,7 +1955,7 @@ class DatabasePool:
        )

        txn.execute(sql, values)
        return cls.cursor_to_dict(txn)
        return txn.fetchall()

    async def simple_update(
        self,

@@ -2418,7 +2424,7 @@ class DatabasePool:
        keyvalues: Optional[Dict[str, Any]] = None,
        exclude_keyvalues: Optional[Dict[str, Any]] = None,
        order_direction: str = "ASC",
    ) -> List[Dict[str, Any]]:
    ) -> List[Tuple[Any, ...]]:
        """
        Executes a SELECT query on the named table with start and limit,
        of row numbers, which may return zero or number of rows from start to limit,

@@ -2447,7 +2453,7 @@ class DatabasePool:
            order_direction: Whether the results should be ordered "ASC" or "DESC".

        Returns:
            The result as a list of dictionaries.
            The result as a list of tuples.
        """
        if order_direction not in ["ASC", "DESC"]:
            raise ValueError("order_direction must be one of 'ASC' or 'DESC'.")

@@ -2474,69 +2480,7 @@ class DatabasePool:
        )
        txn.execute(sql, arg_list + [limit, start])

        return cls.cursor_to_dict(txn)

    async def simple_search_list(
        self,
        table: str,
        term: Optional[str],
        col: str,
        retcols: Collection[str],
        desc: str = "simple_search_list",
    ) -> Optional[List[Dict[str, Any]]]:
        """Executes a SELECT query on the named table, which may return zero or
        more rows, returning the result as a list of dicts.

        Args:
            table: the table name
            term: term for searching the table matched to a column.
            col: column to query term should be matched to
            retcols: the names of the columns to return

        Returns:
            A list of dictionaries or None.
        """

        return await self.runInteraction(
            desc,
            self.simple_search_list_txn,
            table,
            term,
            col,
            retcols,
            db_autocommit=True,
        )

    @classmethod
    def simple_search_list_txn(
        cls,
        txn: LoggingTransaction,
        table: str,
        term: Optional[str],
        col: str,
        retcols: Iterable[str],
    ) -> Optional[List[Dict[str, Any]]]:
        """Executes a SELECT query on the named table, which may return zero or
        more rows, returning the result as a list of dicts.

        Args:
            txn: Transaction object
            table: the table name
            term: term for searching the table matched to a column.
            col: column to query term should be matched to
            retcols: the names of the columns to return

        Returns:
            None if no term is given, otherwise a list of dictionaries.
        """
        if term:
            sql = "SELECT %s FROM %s WHERE %s LIKE ?" % (", ".join(retcols), table, col)
            termvalues = ["%%" + term + "%%"]
            txn.execute(sql, termvalues)
        else:
            return None

        return cls.cursor_to_dict(txn)
        return txn.fetchall()


def make_in_list_sql_clause(

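The `DatabasePool` hunks above change the row shape from dicts to plain tuples ordered by `retcols`. For callers the practical difference looks roughly like this - a hedged sketch with illustrative data, not Synapse code:

from typing import Any, List, Tuple

# Rows as the tuple-returning helpers now yield them, in retcols order.
rows: List[Tuple[Any, ...]] = [
    ("@alice:example.org", "DEV1", 1697500000000),
    ("@bob:example.org", "DEV2", 1697500001000),
]
retcols = ("user_id", "device_id", "last_seen")

# Before: row["user_id"]; after: unpack positionally...
for user_id, device_id, last_seen in rows:
    pass

# ...or rebuild a dict per row where name-based access is still wanted.
as_dicts = [dict(zip(retcols, row)) for row in rows]
assert as_dicts[0]["device_id"] == "DEV1"
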
@@ -15,7 +15,7 @@
# limitations under the License.

import logging
from typing import TYPE_CHECKING, List, Optional, Tuple, cast
from typing import TYPE_CHECKING, List, Optional, Tuple, Union, cast

from synapse.api.constants import Direction
from synapse.config.homeserver import HomeServerConfig

@@ -142,26 +142,6 @@ class DataStore(

        super().__init__(database, db_conn, hs)

    async def get_users(self) -> List[JsonDict]:
        """Function to retrieve a list of users in users table.

        Returns:
            A list of dictionaries representing users.
        """
        return await self.db_pool.simple_select_list(
            table="users",
            keyvalues={},
            retcols=[
                "name",
                "password_hash",
                "is_guest",
                "admin",
                "user_type",
                "deactivated",
            ],
            desc="get_users",
        )

    async def get_users_paginate(
        self,
        start: int,

@@ -316,7 +296,11 @@ class DataStore(
            "get_users_paginate_txn", get_users_paginate_txn
        )

    async def search_users(self, term: str) -> Optional[List[JsonDict]]:
    async def search_users(
        self, term: str
    ) -> List[
        Tuple[str, Optional[str], Union[int, bool], Union[int, bool], Optional[str]]
    ]:
        """Function to search users list for one or more users with
        the matched term.

@@ -324,15 +308,37 @@ class DataStore(
            term: search term

        Returns:
            A list of dictionaries or None.
            A list of tuples of name, password_hash, is_guest, admin, user_type or None.
        """
        return await self.db_pool.simple_search_list(
            table="users",
            term=term,
            col="name",
            retcols=["name", "password_hash", "is_guest", "admin", "user_type"],
            desc="search_users",
        )

        def search_users(
            txn: LoggingTransaction,
        ) -> List[
            Tuple[str, Optional[str], Union[int, bool], Union[int, bool], Optional[str]]
        ]:
            search_term = "%%" + term + "%%"

            sql = """
            SELECT name, password_hash, is_guest, admin, user_type
            FROM users
            WHERE name LIKE ?
            """
            txn.execute(sql, (search_term,))

            return cast(
                List[
                    Tuple[
                        str,
                        Optional[str],
                        Union[int, bool],
                        Union[int, bool],
                        Optional[str],
                    ]
                ],
                txn.fetchall(),
            )

        return await self.db_pool.runInteraction("search_users", search_users)


    def check_database_before_upgrade(

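One easily misread detail in the rewritten `search_users`: the pattern is built as `"%%" + term + "%%"`, and since this is a plain Python string (not printf-style formatting), that is literally two percent signs on each side. SQL `LIKE` treats consecutive `%` wildcards like a single one, so the query still means "contains term". A quick, hedged sanity check with in-memory SQLite (illustrative only, not Synapse code):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?)",
    [("@bob:example.org",), ("@alice:example.org",)],
)

term = "bob"
rows = conn.execute(
    "SELECT name FROM users WHERE name LIKE ?", ("%%" + term + "%%",)
).fetchall()
assert rows == [("@bob:example.org",)]  # "%%bob%%" matches like "%bob%"
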
@@ -103,6 +103,13 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
            "AccountDataAndTagsChangeCache", account_max
        )

        self.db_pool.updates.register_background_index_update(
            update_name="room_account_data_index_room_id",
            index_name="room_account_data_room_id",
            table="room_account_data",
            columns=("room_id",),
        )

        self.db_pool.updates.register_background_update_handler(
            "delete_account_data_for_deactivated_users",
            self._delete_account_data_for_deactivated_users,

@@ -151,10 +158,10 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
                sql += " AND content != '{}'"

            txn.execute(sql, (user_id,))
            rows = self.db_pool.cursor_to_dict(txn)

            return {
                row["account_data_type"]: db_to_json(row["content"]) for row in rows
                account_data_type: db_to_json(content)
                for account_data_type, content in txn
            }

        return await self.db_pool.runInteraction(

@@ -196,13 +203,12 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
                sql += " AND content != '{}'"

            txn.execute(sql, (user_id,))
            rows = self.db_pool.cursor_to_dict(txn)

            by_room: Dict[str, Dict[str, JsonDict]] = {}
            for row in rows:
                room_data = by_room.setdefault(row["room_id"], {})
            for room_id, account_data_type, content in txn:
                room_data = by_room.setdefault(room_id, {})

                room_data[row["account_data_type"]] = db_to_json(row["content"])
                room_data[account_data_type] = db_to_json(content)

            return by_room

@@ -14,17 +14,7 @@
# limitations under the License.
import logging
import re
from typing import (
    TYPE_CHECKING,
    Any,
    Dict,
    List,
    Optional,
    Pattern,
    Sequence,
    Tuple,
    cast,
)
from typing import TYPE_CHECKING, List, Optional, Pattern, Sequence, Tuple, cast

from synapse.appservice import (
    ApplicationService,

@@ -353,21 +343,15 @@ class ApplicationServiceTransactionWorkerStore(

        def _get_oldest_unsent_txn(
            txn: LoggingTransaction,
        ) -> Optional[Dict[str, Any]]:
        ) -> Optional[Tuple[int, str]]:
            # Monotonically increasing txn ids, so just select the smallest
            # one in the txns table (we delete them when they are sent)
            txn.execute(
                "SELECT * FROM application_services_txns WHERE as_id=?"
                "SELECT txn_id, event_ids FROM application_services_txns WHERE as_id=?"
                " ORDER BY txn_id ASC LIMIT 1",
                (service.id,),
            )
            rows = self.db_pool.cursor_to_dict(txn)
            if not rows:
                return None

            entry = rows[0]

            return entry
            return cast(Optional[Tuple[int, str]], txn.fetchone())

        entry = await self.db_pool.runInteraction(
            "get_oldest_unsent_appservice_txn", _get_oldest_unsent_txn

@@ -376,8 +360,9 @@ class ApplicationServiceTransactionWorkerStore(
        if not entry:
            return None

        event_ids = db_to_json(entry["event_ids"])
        txn_id, event_ids_str = entry

        event_ids = db_to_json(event_ids_str)
        events = await self.get_events_as_list(event_ids)

        # TODO: to-device messages, one-time key counts, device list summaries and unused

@@ -385,7 +370,7 @@ class ApplicationServiceTransactionWorkerStore(
        # We likely want to populate those for reliability.
        return AppServiceTransaction(
            service=service,
            id=entry["txn_id"],
            id=txn_id,
            events=events,
            ephemeral=[],
            to_device_messages=[],

@@ -15,6 +15,7 @@
import logging
from typing import TYPE_CHECKING, Dict, List, Mapping, Optional, Tuple, Union, cast

import attr
from typing_extensions import TypedDict

from synapse.metrics.background_process_metrics import wrap_as_background_process

@@ -42,7 +43,8 @@ logger = logging.getLogger(__name__)
LAST_SEEN_GRANULARITY = 10 * 60 * 1000


class DeviceLastConnectionInfo(TypedDict):
@attr.s(slots=True, frozen=True, auto_attribs=True)
class DeviceLastConnectionInfo:
    """Metadata for the last connection seen for a user and device combination"""

    # These types must match the columns in the `devices` table

@@ -499,24 +501,29 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
            device_id: If None fetches all devices for the user

        Returns:
            A dictionary mapping a tuple of (user_id, device_id) to dicts, with
            keys giving the column names from the devices table.
            A dictionary mapping a tuple of (user_id, device_id) to DeviceLastConnectionInfo.
        """

        keyvalues = {"user_id": user_id}
        if device_id is not None:
            keyvalues["device_id"] = device_id

        res = cast(
            List[DeviceLastConnectionInfo],
            await self.db_pool.simple_select_list(
                table="devices",
                keyvalues=keyvalues,
                retcols=("user_id", "ip", "user_agent", "device_id", "last_seen"),
            ),
        res = await self.db_pool.simple_select_list(
            table="devices",
            keyvalues=keyvalues,
            retcols=("user_id", "ip", "user_agent", "device_id", "last_seen"),
        )

        return {(d["user_id"], d["device_id"]): d for d in res}
        return {
            (d["user_id"], d["device_id"]): DeviceLastConnectionInfo(
                user_id=d["user_id"],
                device_id=d["device_id"],
                ip=d["ip"],
                user_agent=d["user_agent"],
                last_seen=d["last_seen"],
            )
            for d in res
        }

    async def _get_user_ip_and_agents_from_database(
        self, user: UserID, since_ts: int = 0

@@ -683,8 +690,7 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
            device_id: If None fetches all devices for the user

        Returns:
            A dictionary mapping a tuple of (user_id, device_id) to dicts, with
            keys giving the column names from the devices table.
            A dictionary mapping a tuple of (user_id, device_id) to DeviceLastConnectionInfo.
        """
        ret = await self._get_last_client_ip_by_device_from_database(user_id, device_id)

@@ -705,13 +711,13 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
                continue

            if not device_id or did == device_id:
                ret[(user_id, did)] = {
                    "user_id": user_id,
                    "ip": ip,
                    "user_agent": user_agent,
                    "device_id": did,
                    "last_seen": last_seen,
                }
                ret[(user_id, did)] = DeviceLastConnectionInfo(
                    user_id=user_id,
                    ip=ip,
                    user_agent=user_agent,
                    device_id=did,
                    last_seen=last_seen,
                )
        return ret

    async def get_user_ip_and_agents(

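The hunk above swaps a `TypedDict` (still a plain dict at runtime) for a frozen attrs class, which makes each connection record immutable and gives it attribute access. A hedged, standalone sketch of the difference - the field names mirror the `devices` columns, but this class is illustrative, not the real one:

from typing import Optional

import attr


@attr.s(slots=True, frozen=True, auto_attribs=True)
class LastConnection:
    user_id: str
    device_id: str
    ip: Optional[str]
    user_agent: Optional[str]
    last_seen: Optional[int]


info = LastConnection("@a:example.org", "DEV1", "10.0.0.1", "curl", 0)
assert info.ip == "10.0.0.1"  # attribute access instead of info["ip"]
# Mutation such as `info.ip = "x"` raises attr.exceptions.FrozenInstanceError.
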
@@ -344,18 +344,19 @@ class DeviceInboxWorkerStore(SQLBaseStore):
            # Note that this is more efficient than just dropping `device_id` from the query,
            # since device_inbox has an index on `(user_id, device_id, stream_id)`
            if not device_ids_to_query:
                user_device_dicts = self.db_pool.simple_select_many_txn(
                    txn,
                    table="devices",
                    column="user_id",
                    iterable=user_ids_to_query,
                    keyvalues={"hidden": False},
                    retcols=("device_id",),
                user_device_dicts = cast(
                    List[Tuple[str]],
                    self.db_pool.simple_select_many_txn(
                        txn,
                        table="devices",
                        column="user_id",
                        iterable=user_ids_to_query,
                        keyvalues={"hidden": False},
                        retcols=("device_id",),
                    ),
                )

                device_ids_to_query.update(
                    {row["device_id"] for row in user_device_dicts}
                )
                device_ids_to_query.update({row[0] for row in user_device_dicts})

            if not device_ids_to_query:
                # We've ended up with no devices to query.

@@ -449,7 +450,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
        user_id: str,
        device_id: Optional[str],
        up_to_stream_id: int,
        limit: int,
        limit: Optional[int] = None,
    ) -> int:
        """
        Args:

@@ -480,11 +481,12 @@ class DeviceInboxWorkerStore(SQLBaseStore):
        ROW_ID_NAME = self.database_engine.row_id_name

        def delete_messages_for_device_txn(txn: LoggingTransaction) -> int:
            limit_statement = "" if limit is None else f"LIMIT {limit}"
            sql = f"""
                DELETE FROM device_inbox WHERE {ROW_ID_NAME} IN (
                  SELECT {ROW_ID_NAME} FROM device_inbox
                  WHERE user_id = ? AND device_id = ? AND stream_id <= ?
                  LIMIT {limit}
                  {limit_statement}
                )
                """
            txn.execute(sql, (user_id, device_id, up_to_stream_id))

@@ -849,20 +851,21 @@ class DeviceInboxWorkerStore(SQLBaseStore):

            # We exclude hidden devices (such as cross-signing keys) here as they are
            # not expected to receive to-device messages.
            rows = self.db_pool.simple_select_many_txn(
                txn,
                table="devices",
                keyvalues={"user_id": user_id, "hidden": False},
                column="device_id",
                iterable=devices,
                retcols=("device_id",),
            rows = cast(
                List[Tuple[str]],
                self.db_pool.simple_select_many_txn(
                    txn,
                    table="devices",
                    keyvalues={"user_id": user_id, "hidden": False},
                    column="device_id",
                    iterable=devices,
                    retcols=("device_id",),
                ),
            )

            for row in rows:
            for (device_id,) in rows:
                # Only insert into the local inbox if the device exists on
                # this server
                device_id = row["device_id"]

                with start_active_span("serialise_to_device_message"):
                    msg = messages_by_device[device_id]
                    set_tag(SynapseTags.TO_DEVICE_TYPE, msg["type"])

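The `for (device_id,) in rows:` loop above uses single-element tuple unpacking: with `retcols=("device_id",)` each row is a 1-tuple, and the parenthesised (or trailing-comma) target pulls the lone column out directly. A small illustrative sketch with made-up data:

from typing import List, Tuple

rows: List[Tuple[str]] = [("DEV1",), ("DEV2",)]

ids = set()
for (device_id,) in rows:  # note the comma: unpacks each 1-tuple
    ids.add(device_id)

assert ids == {row[0] for row in rows} == {"DEV1", "DEV2"}
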
@@ -1054,16 +1054,19 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
    async def get_device_list_last_stream_id_for_remotes(
        self, user_ids: Iterable[str]
    ) -> Mapping[str, Optional[str]]:
        rows = await self.db_pool.simple_select_many_batch(
            table="device_lists_remote_extremeties",
            column="user_id",
            iterable=user_ids,
            retcols=("user_id", "stream_id"),
            desc="get_device_list_last_stream_id_for_remotes",
        rows = cast(
            List[Tuple[str, str]],
            await self.db_pool.simple_select_many_batch(
                table="device_lists_remote_extremeties",
                column="user_id",
                iterable=user_ids,
                retcols=("user_id", "stream_id"),
                desc="get_device_list_last_stream_id_for_remotes",
            ),
        )

        results: Dict[str, Optional[str]] = {user_id: None for user_id in user_ids}
        results.update({row["user_id"]: row["stream_id"] for row in rows})
        results.update(rows)

        return results

@@ -1079,22 +1082,30 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
            The IDs of users whose device lists need resync.
        """
        if user_ids:
            rows = await self.db_pool.simple_select_many_batch(
                table="device_lists_remote_resync",
                column="user_id",
                iterable=user_ids,
                retcols=("user_id",),
                desc="get_user_ids_requiring_device_list_resync_with_iterable",
            )
        else:
            rows = await self.db_pool.simple_select_list(
                table="device_lists_remote_resync",
                keyvalues=None,
                retcols=("user_id",),
                desc="get_user_ids_requiring_device_list_resync",
            row_tuples = cast(
                List[Tuple[str]],
                await self.db_pool.simple_select_many_batch(
                    table="device_lists_remote_resync",
                    column="user_id",
                    iterable=user_ids,
                    retcols=("user_id",),
                    desc="get_user_ids_requiring_device_list_resync_with_iterable",
                ),
            )

            return {row["user_id"] for row in rows}
            return {row[0] for row in row_tuples}
        else:
            rows = cast(
                List[Dict[str, str]],
                await self.db_pool.simple_select_list(
                    table="device_lists_remote_resync",
                    keyvalues=None,
                    retcols=("user_id",),
                    desc="get_user_ids_requiring_device_list_resync",
                ),
            )

            return {row["user_id"] for row in rows}

    async def mark_remote_users_device_caches_as_stale(
        self, user_ids: StrCollection

@@ -1415,13 +1426,13 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):

        def get_devices_not_accessed_since_txn(
            txn: LoggingTransaction,
        ) -> List[Dict[str, str]]:
        ) -> List[Tuple[str, str]]:
            sql = """
                SELECT user_id, device_id
                FROM devices WHERE last_seen < ? AND hidden = FALSE
            """
            txn.execute(sql, (since_ms,))
            return self.db_pool.cursor_to_dict(txn)
            return cast(List[Tuple[str, str]], txn.fetchall())

        rows = await self.db_pool.runInteraction(
            "get_devices_not_accessed_since",

@@ -1429,11 +1440,11 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
        )

        devices: Dict[str, List[str]] = {}
        for row in rows:
        for user_id, device_id in rows:
            # Remote devices are never stale from our point of view.
            if self.hs.is_mine_id(row["user_id"]):
                user_devices = devices.setdefault(row["user_id"], [])
                user_devices.append(row["device_id"])
            if self.hs.is_mine_id(user_id):
                user_devices = devices.setdefault(user_id, [])
                user_devices.append(device_id)

        return devices

@@ -53,6 +53,13 @@ class EndToEndRoomKeyBackgroundStore(SQLBaseStore):
    ):
        super().__init__(database, db_conn, hs)

        self.db_pool.updates.register_background_index_update(
            update_name="e2e_room_keys_index_room_id",
            index_name="e2e_room_keys_room_id",
            table="e2e_room_keys",
            columns=("room_id",),
        )

        self.db_pool.updates.register_background_update_handler(
            "delete_e2e_backup_keys_for_deactivated_users",
            self._delete_e2e_backup_keys_for_deactivated_users,

@@ -208,7 +215,7 @@ class EndToEndRoomKeyStore(EndToEndRoomKeyBackgroundStore):
                "message": "Set room key",
                "room_id": room_id,
                "session_id": session_id,
                StreamKeyType.ROOM: room_key,
                StreamKeyType.ROOM.value: room_key,
            }
        )

|
@ -493,15 +493,18 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
|
|||
A map from (algorithm, key_id) to json string for key
|
||||
"""
|
||||
|
||||
rows = await self.db_pool.simple_select_many_batch(
|
||||
table="e2e_one_time_keys_json",
|
||||
column="key_id",
|
||||
iterable=key_ids,
|
||||
retcols=("algorithm", "key_id", "key_json"),
|
||||
keyvalues={"user_id": user_id, "device_id": device_id},
|
||||
desc="add_e2e_one_time_keys_check",
|
||||
rows = cast(
|
||||
List[Tuple[str, str, str]],
|
||||
await self.db_pool.simple_select_many_batch(
|
||||
table="e2e_one_time_keys_json",
|
||||
column="key_id",
|
||||
iterable=key_ids,
|
||||
retcols=("algorithm", "key_id", "key_json"),
|
||||
keyvalues={"user_id": user_id, "device_id": device_id},
|
||||
desc="add_e2e_one_time_keys_check",
|
||||
),
|
||||
)
|
||||
result = {(row["algorithm"], row["key_id"]): row["key_json"] for row in rows}
|
||||
result = {(algorithm, key_id): key_json for algorithm, key_id, key_json in rows}
|
||||
log_kv({"message": "Fetched one time keys for user", "one_time_keys": result})
|
||||
return result
|
||||
|
||||
|
@ -921,14 +924,10 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
|
|||
}
|
||||
|
||||
txn.execute(sql, params)
|
||||
rows = self.db_pool.cursor_to_dict(txn)
|
||||
|
||||
for row in rows:
|
||||
user_id = row["user_id"]
|
||||
key_type = row["keytype"]
|
||||
key = db_to_json(row["keydata"])
|
||||
for user_id, key_type, key_data, _ in txn:
|
||||
user_keys = result.setdefault(user_id, {})
|
||||
user_keys[key_type] = key
|
||||
user_keys[key_type] = db_to_json(key_data)
|
||||
|
||||
return result
|
||||
|
||||
|
@ -988,13 +987,9 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
|
|||
query_params.extend(item)
|
||||
|
||||
txn.execute(sql, query_params)
|
||||
rows = self.db_pool.cursor_to_dict(txn)
|
||||
|
||||
# and add the signatures to the appropriate keys
|
||||
for row in rows:
|
||||
key_id: str = row["key_id"]
|
||||
target_user_id: str = row["target_user_id"]
|
||||
target_device_id: str = row["target_device_id"]
|
||||
for target_user_id, target_device_id, key_id, signature in txn:
|
||||
key_type = devices[(target_user_id, target_device_id)]
|
||||
# We need to copy everything, because the result may have come
|
||||
# from the cache. dict.copy only does a shallow copy, so we
|
||||
|
@ -1012,13 +1007,11 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
|
|||
].copy()
|
||||
if from_user_id in signatures:
|
||||
user_sigs = signatures[from_user_id] = signatures[from_user_id]
|
||||
user_sigs[key_id] = row["signature"]
|
||||
user_sigs[key_id] = signature
|
||||
else:
|
||||
signatures[from_user_id] = {key_id: row["signature"]}
|
||||
signatures[from_user_id] = {key_id: signature}
|
||||
else:
|
||||
target_user_key["signatures"] = {
|
||||
from_user_id: {key_id: row["signature"]}
|
||||
}
|
||||
target_user_key["signatures"] = {from_user_id: {key_id: signature}}
|
||||
|
||||
return keys
|
||||
|
||||
|
|
|
@ -1049,15 +1049,18 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
|
|||
Args:
|
||||
event_ids: The event IDs to calculate the max depth of.
|
||||
"""
|
||||
rows = await self.db_pool.simple_select_many_batch(
|
||||
table="events",
|
||||
column="event_id",
|
||||
iterable=event_ids,
|
||||
retcols=(
|
||||
"event_id",
|
||||
"depth",
|
||||
rows = cast(
|
||||
List[Tuple[str, int]],
|
||||
await self.db_pool.simple_select_many_batch(
|
||||
table="events",
|
||||
column="event_id",
|
||||
iterable=event_ids,
|
||||
retcols=(
|
||||
"event_id",
|
||||
"depth",
|
||||
),
|
||||
desc="get_max_depth_of",
|
||||
),
|
||||
desc="get_max_depth_of",
|
||||
)
|
||||
|
||||
if not rows:
|
||||
|
@ -1065,10 +1068,10 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
|
|||
else:
|
||||
max_depth_event_id = ""
|
||||
current_max_depth = 0
|
||||
for row in rows:
|
||||
if row["depth"] > current_max_depth:
|
||||
max_depth_event_id = row["event_id"]
|
||||
current_max_depth = row["depth"]
|
||||
for event_id, depth in rows:
|
||||
if depth > current_max_depth:
|
||||
max_depth_event_id = event_id
|
||||
current_max_depth = depth
|
||||
|
||||
return max_depth_event_id, current_max_depth
|
||||
|
||||
|
@ -1078,15 +1081,18 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
|
|||
Args:
|
||||
event_ids: The event IDs to calculate the max depth of.
|
||||
"""
|
||||
rows = await self.db_pool.simple_select_many_batch(
|
||||
table="events",
|
||||
column="event_id",
|
||||
iterable=event_ids,
|
||||
retcols=(
|
||||
"event_id",
|
||||
"depth",
|
||||
rows = cast(
|
||||
List[Tuple[str, int]],
|
||||
await self.db_pool.simple_select_many_batch(
|
||||
table="events",
|
||||
column="event_id",
|
||||
iterable=event_ids,
|
||||
retcols=(
|
||||
"event_id",
|
||||
"depth",
|
||||
),
|
||||
desc="get_min_depth_of",
|
||||
),
|
||||
desc="get_min_depth_of",
|
||||
)
|
||||
|
||||
if not rows:
|
||||
|
@ -1094,10 +1100,10 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
|
|||
else:
|
||||
min_depth_event_id = ""
|
||||
current_min_depth = MAX_DEPTH
|
||||
for row in rows:
|
||||
if row["depth"] < current_min_depth:
|
||||
min_depth_event_id = row["event_id"]
|
||||
current_min_depth = row["depth"]
|
||||
for event_id, depth in rows:
|
||||
if depth < current_min_depth:
|
||||
min_depth_event_id = event_id
|
||||
current_min_depth = depth
|
||||
|
||||
return min_depth_event_id, current_min_depth
|
||||
|
||||
|
@ -1553,19 +1559,18 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
|
|||
A filtered down list of `event_ids` that have previous failed pull attempts.
|
||||
"""
|
||||
|
||||
rows = await self.db_pool.simple_select_many_batch(
|
||||
table="event_failed_pull_attempts",
|
||||
column="event_id",
|
||||
iterable=event_ids,
|
||||
keyvalues={},
|
||||
retcols=("event_id",),
|
||||
desc="get_event_ids_with_failed_pull_attempts",
|
||||
rows = cast(
|
||||
List[Tuple[str]],
|
||||
await self.db_pool.simple_select_many_batch(
|
||||
table="event_failed_pull_attempts",
|
||||
column="event_id",
|
||||
iterable=event_ids,
|
||||
keyvalues={},
|
||||
retcols=("event_id",),
|
||||
desc="get_event_ids_with_failed_pull_attempts",
|
||||
),
|
||||
)
|
||||
event_ids_with_failed_pull_attempts: Set[str] = {
|
||||
row["event_id"] for row in rows
|
||||
}
|
||||
|
||||
return event_ids_with_failed_pull_attempts
|
||||
return {row[0] for row in rows}
|
||||
|
||||
@trace
|
||||
async def get_event_ids_to_not_pull_from_backoff(
|
||||
|
@ -1585,32 +1590,34 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
|
|||
A dictionary of event_ids that should not be attempted to be pulled and the
|
||||
next timestamp at which we may try pulling them again.
|
||||
"""
|
||||
event_failed_pull_attempts = await self.db_pool.simple_select_many_batch(
|
||||
table="event_failed_pull_attempts",
|
||||
column="event_id",
|
||||
iterable=event_ids,
|
||||
keyvalues={},
|
||||
retcols=(
|
||||
"event_id",
|
||||
"last_attempt_ts",
|
||||
"num_attempts",
|
||||
event_failed_pull_attempts = cast(
|
||||
List[Tuple[str, int, int]],
|
||||
await self.db_pool.simple_select_many_batch(
|
||||
table="event_failed_pull_attempts",
|
||||
column="event_id",
|
||||
iterable=event_ids,
|
||||
keyvalues={},
|
||||
retcols=(
|
||||
"event_id",
|
||||
"last_attempt_ts",
|
||||
"num_attempts",
|
||||
),
|
||||
desc="get_event_ids_to_not_pull_from_backoff",
|
||||
),
|
||||
desc="get_event_ids_to_not_pull_from_backoff",
|
||||
)
|
||||
|
||||
current_time = self._clock.time_msec()
|
||||
|
||||
event_ids_with_backoff = {}
|
||||
for event_failed_pull_attempt in event_failed_pull_attempts:
|
||||
event_id = event_failed_pull_attempt["event_id"]
|
||||
for event_id, last_attempt_ts, num_attempts in event_failed_pull_attempts:
|
||||
# Exponential back-off (up to the upper bound) so we don't try to
|
||||
# pull the same event over and over. ex. 2hr, 4hr, 8hr, 16hr, etc.
|
||||
backoff_end_time = (
|
||||
event_failed_pull_attempt["last_attempt_ts"]
|
||||
last_attempt_ts
|
||||
+ (
|
||||
2
|
||||
** min(
|
||||
event_failed_pull_attempt["num_attempts"],
|
||||
num_attempts,
|
||||
BACKFILL_EVENT_EXPONENTIAL_BACKOFF_MAXIMUM_DOUBLING_STEPS,
|
||||
)
|
||||
)
|
||||
|
|
|
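The back-off arithmetic in the last hunk doubles the wait per failed pull attempt, capped at a maximum number of doubling steps. A worked, hedged sketch of the formula - the one-hour base step and the cap value are assumptions for illustration, and the hunk is cut off before the multiplier Synapse actually applies after the power:

STEP_MS = 60 * 60 * 1000  # assumed 1h base step (illustrative)
MAX_DOUBLING_STEPS = 8  # illustrative cap


def backoff_end_time(last_attempt_ts: int, num_attempts: int) -> int:
    # Wait 2^n base steps after the last attempt, with n capped so the
    # window stops growing after MAX_DOUBLING_STEPS failures.
    return last_attempt_ts + STEP_MS * 2 ** min(num_attempts, MAX_DOUBLING_STEPS)


# 1 attempt -> +2h, 3 attempts -> +8h; past the cap the window is constant.
assert backoff_end_time(0, 1) == 2 * STEP_MS
assert backoff_end_time(0, 3) == 8 * STEP_MS
assert backoff_end_time(0, 50) == 2**8 * STEP_MS
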
@@ -27,6 +27,7 @@ from typing import (
    Optional,
    Set,
    Tuple,
    Union,
    cast,
)

@@ -501,16 +502,19 @@ class PersistEventsStore:

        # We ignore legacy rooms that we aren't filling the chain cover index
        # for.
        rows = self.db_pool.simple_select_many_txn(
            txn,
            table="rooms",
            column="room_id",
            iterable={event.room_id for event in events if event.is_state()},
            keyvalues={},
            retcols=("room_id", "has_auth_chain_index"),
        rows = cast(
            List[Tuple[str, Optional[Union[int, bool]]]],
            self.db_pool.simple_select_many_txn(
                txn,
                table="rooms",
                column="room_id",
                iterable={event.room_id for event in events if event.is_state()},
                keyvalues={},
                retcols=("room_id", "has_auth_chain_index"),
            ),
        )
        rooms_using_chain_index = {
            row["room_id"] for row in rows if row["has_auth_chain_index"]
            room_id for room_id, has_auth_chain_index in rows if has_auth_chain_index
        }

        state_events = {

@@ -571,19 +575,18 @@ class PersistEventsStore:
        # We check if there are any events that need to be handled in the rooms
        # we're looking at. These should just be out of band memberships, where
        # we didn't have the auth chain when we first persisted.
        rows = db_pool.simple_select_many_txn(
            txn,
            table="event_auth_chain_to_calculate",
            keyvalues={},
            column="room_id",
            iterable=set(event_to_room_id.values()),
            retcols=("event_id", "type", "state_key"),
        auth_chain_to_calc_rows = cast(
            List[Tuple[str, str, str]],
            db_pool.simple_select_many_txn(
                txn,
                table="event_auth_chain_to_calculate",
                keyvalues={},
                column="room_id",
                iterable=set(event_to_room_id.values()),
                retcols=("event_id", "type", "state_key"),
            ),
        )
        for row in rows:
            event_id = row["event_id"]
            event_type = row["type"]
            state_key = row["state_key"]

        for event_id, event_type, state_key in auth_chain_to_calc_rows:
            # (We could pull out the auth events for all rows at once using
            # simple_select_many, but this case happens rarely and almost always
            # with a single row.)

@@ -753,23 +756,31 @@ class PersistEventsStore:
        # Step 1, fetch all existing links from all the chains we've seen
        # referenced.
        chain_links = _LinkMap()
        rows = db_pool.simple_select_many_txn(
            txn,
            table="event_auth_chain_links",
            column="origin_chain_id",
            iterable={chain_id for chain_id, _ in chain_map.values()},
            keyvalues={},
            retcols=(
                "origin_chain_id",
                "origin_sequence_number",
                "target_chain_id",
                "target_sequence_number",
        auth_chain_rows = cast(
            List[Tuple[int, int, int, int]],
            db_pool.simple_select_many_txn(
                txn,
                table="event_auth_chain_links",
                column="origin_chain_id",
                iterable={chain_id for chain_id, _ in chain_map.values()},
                keyvalues={},
                retcols=(
                    "origin_chain_id",
                    "origin_sequence_number",
                    "target_chain_id",
                    "target_sequence_number",
                ),
            ),
        )
        for row in rows:
        for (
            origin_chain_id,
            origin_sequence_number,
            target_chain_id,
            target_sequence_number,
        ) in auth_chain_rows:
            chain_links.add_link(
                (row["origin_chain_id"], row["origin_sequence_number"]),
                (row["target_chain_id"], row["target_sequence_number"]),
                (origin_chain_id, origin_sequence_number),
                (target_chain_id, target_sequence_number),
                new=False,
            )

@@ -1654,8 +1665,6 @@ class PersistEventsStore:
    ) -> None:
        to_prefill = []

        rows = []

        ev_map = {e.event_id: e for e, _ in events_and_contexts}
        if not ev_map:
            return

@@ -1676,10 +1685,9 @@ class PersistEventsStore:
        )

        txn.execute(sql + clause, args)
        rows = self.db_pool.cursor_to_dict(txn)
        for row in rows:
            event = ev_map[row["event_id"]]
            if not row["rejects"] and not row["redacts"]:
        for event_id, redacts, rejects in txn:
            event = ev_map[event_id]
            if not rejects and not redacts:
                to_prefill.append(EventCacheEntry(event=event, redacted_event=None))

        async def external_prefill() -> None:

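The chain-links hunk above unpacks four columns per row, splitting the tuple target across lines in the `for` header. A short illustrative sketch of the same style with made-up link data:

rows = [
    (1, 5, 2, 3),
    (2, 7, 4, 1),
]

links = {}
for (
    origin_chain_id,
    origin_sequence_number,
    target_chain_id,
    target_sequence_number,
) in rows:
    # Each (origin chain, sequence) pair maps to its target link.
    links[(origin_chain_id, origin_sequence_number)] = (
        target_chain_id,
        target_sequence_number,
    )

assert links[(1, 5)] == (2, 3)
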
@@ -369,18 +369,20 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):

            chunks = [event_ids[i : i + 100] for i in range(0, len(event_ids), 100)]
            for chunk in chunks:
                ev_rows = self.db_pool.simple_select_many_txn(
                    txn,
                    table="event_json",
                    column="event_id",
                    iterable=chunk,
                    retcols=["event_id", "json"],
                    keyvalues={},
                ev_rows = cast(
                    List[Tuple[str, str]],
                    self.db_pool.simple_select_many_txn(
                        txn,
                        table="event_json",
                        column="event_id",
                        iterable=chunk,
                        retcols=["event_id", "json"],
                        keyvalues={},
                    ),
                )

                for row in ev_rows:
                    event_id = row["event_id"]
                    event_json = db_to_json(row["json"])
                for event_id, json in ev_rows:
                    event_json = db_to_json(json)
                    try:
                        origin_server_ts = event_json["origin_server_ts"]
                    except (KeyError, AttributeError):

@@ -563,15 +565,18 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):

            if deleted:
                # We now need to invalidate the caches of these rooms
                rows = self.db_pool.simple_select_many_txn(
                    txn,
                    table="events",
                    column="event_id",
                    iterable=to_delete,
                    keyvalues={},
                    retcols=("room_id",),
                rows = cast(
                    List[Tuple[str]],
                    self.db_pool.simple_select_many_txn(
                        txn,
                        table="events",
                        column="event_id",
                        iterable=to_delete,
                        keyvalues={},
                        retcols=("room_id",),
                    ),
                )
                room_ids = {row["room_id"] for row in rows}
                room_ids = {row[0] for row in rows}
                for room_id in room_ids:
                    txn.call_after(
                        self.get_latest_event_ids_in_room.invalidate, (room_id,)  # type: ignore[attr-defined]

@@ -1038,18 +1043,21 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
            count = len(rows)

            # We also need to fetch the auth events for them.
            auth_events = self.db_pool.simple_select_many_txn(
                txn,
                table="event_auth",
                column="event_id",
                iterable=event_to_room_id,
                keyvalues={},
                retcols=("event_id", "auth_id"),
            auth_events = cast(
                List[Tuple[str, str]],
                self.db_pool.simple_select_many_txn(
                    txn,
                    table="event_auth",
                    column="event_id",
                    iterable=event_to_room_id,
                    keyvalues={},
                    retcols=("event_id", "auth_id"),
                ),
            )

            event_to_auth_chain: Dict[str, List[str]] = {}
            for row in auth_events:
                event_to_auth_chain.setdefault(row["event_id"], []).append(row["auth_id"])
            for event_id, auth_id in auth_events:
                event_to_auth_chain.setdefault(event_id, []).append(auth_id)

            # Calculate and persist the chain cover index for this set of events.
            #

@@ -1584,16 +1584,19 @@ class EventsWorkerStore(SQLBaseStore):
         """Given a list of event ids, check if we have already processed and
         stored them as non outliers.
         """
-        rows = await self.db_pool.simple_select_many_batch(
-            table="events",
-            retcols=("event_id",),
-            column="event_id",
-            iterable=list(event_ids),
-            keyvalues={"outlier": False},
-            desc="have_events_in_timeline",
+        rows = cast(
+            List[Tuple[str]],
+            await self.db_pool.simple_select_many_batch(
+                table="events",
+                retcols=("event_id",),
+                column="event_id",
+                iterable=list(event_ids),
+                keyvalues={"outlier": False},
+                desc="have_events_in_timeline",
+            ),
        )

-        return {r["event_id"] for r in rows}
+        return {r[0] for r in rows}

     @trace
     @tag_args
@@ -2340,15 +2343,18 @@ class EventsWorkerStore(SQLBaseStore):
         a dict mapping from event id to partial-stateness. We return True for
         any of the events which are unknown (or are outliers).
         """
-        result = await self.db_pool.simple_select_many_batch(
-            table="partial_state_events",
-            column="event_id",
-            iterable=event_ids,
-            retcols=["event_id"],
-            desc="get_partial_state_events",
+        result = cast(
+            List[Tuple[str]],
+            await self.db_pool.simple_select_many_batch(
+                table="partial_state_events",
+                column="event_id",
+                iterable=event_ids,
+                retcols=["event_id"],
+                desc="get_partial_state_events",
+            ),
         )
         # convert the result to a dict, to make @cachedList work
-        partial = {r["event_id"] for r in result}
+        partial = {r[0] for r in result}
         return {e_id: e_id in partial for e_id in event_ids}

     @cached()
@@ -16,7 +16,7 @@
 import itertools
 import json
 import logging
-from typing import Dict, Iterable, Mapping, Optional, Tuple
+from typing import Dict, Iterable, List, Mapping, Optional, Tuple, Union, cast

 from canonicaljson import encode_canonical_json
 from signedjson.key import decode_verify_key_bytes
@@ -205,35 +205,39 @@ class KeyStore(CacheInvalidationWorkerStore):

         If we have multiple entries for a given key ID, returns the most recent.
         """
-        rows = await self.db_pool.simple_select_many_batch(
-            table="server_keys_json",
-            column="key_id",
-            iterable=key_ids,
-            keyvalues={"server_name": server_name},
-            retcols=(
-                "key_id",
-                "from_server",
-                "ts_added_ms",
-                "ts_valid_until_ms",
-                "key_json",
+        rows = cast(
+            List[Tuple[str, str, int, int, Union[bytes, memoryview]]],
+            await self.db_pool.simple_select_many_batch(
+                table="server_keys_json",
+                column="key_id",
+                iterable=key_ids,
+                keyvalues={"server_name": server_name},
+                retcols=(
+                    "key_id",
+                    "from_server",
+                    "ts_added_ms",
+                    "ts_valid_until_ms",
+                    "key_json",
+                ),
+                desc="get_server_keys_json_for_remote",
             ),
-            desc="get_server_keys_json_for_remote",
         )

         if not rows:
             return {}

-        # We sort the rows so that the most recently added entry is picked up.
-        rows.sort(key=lambda r: r["ts_added_ms"])
+        # We sort the rows by ts_added_ms so that the most recently added entry
+        # will stomp over older entries in the dictionary.
+        rows.sort(key=lambda r: r[2])

         return {
-            row["key_id"]: FetchKeyResultForRemote(
+            key_id: FetchKeyResultForRemote(
                 # Cast to bytes since postgresql returns a memoryview.
-                key_json=bytes(row["key_json"]),
-                valid_until_ts=row["ts_valid_until_ms"],
-                added_ts=row["ts_added_ms"],
+                key_json=bytes(key_json),
+                valid_until_ts=ts_valid_until_ms,
+                added_ts=ts_added_ms,
             )
-            for row in rows
+            for key_id, from_server, ts_added_ms, ts_valid_until_ms, key_json in rows
         }

     async def get_all_server_keys_json_for_remote(
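The "stomp" comment relies on dict comprehensions keeping the last value seen for a duplicate key, so sorting ascending by `ts_added_ms` leaves the newest entry in place. A tiny synthetic check of that behaviour:

```python
# Duplicate key IDs with different timestamps (illustrative data only).
rows = [("ed25519:a", 100), ("ed25519:a", 300), ("ed25519:a", 200)]
rows.sort(key=lambda r: r[1])  # oldest first
# Later (newer) entries overwrite earlier ones while building the dict.
latest = {key_id: ts for key_id, ts in rows}
assert latest == {"ed25519:a": 300}
```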
@@ -260,6 +264,8 @@ class KeyStore(CacheInvalidationWorkerStore):
         if not rows:
             return {}

+        # We sort the rows by ts_added_ms so that the most recently added entry
+        # will stomp over older entries in the dictionary.
         rows.sort(key=lambda r: r["ts_added_ms"])

         return {
@@ -28,6 +28,7 @@ from typing import (

 from synapse.api.constants import Direction
 from synapse.logging.opentracing import trace
+from synapse.media._base import ThumbnailInfo
 from synapse.storage._base import SQLBaseStore
 from synapse.storage.database import (
     DatabasePool,
@@ -435,8 +436,8 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
             desc="store_url_cache",
         )

-    async def get_local_media_thumbnails(self, media_id: str) -> List[Dict[str, Any]]:
-        return await self.db_pool.simple_select_list(
+    async def get_local_media_thumbnails(self, media_id: str) -> List[ThumbnailInfo]:
+        rows = await self.db_pool.simple_select_list(
             "local_media_repository_thumbnails",
             {"media_id": media_id},
             (
@@ -448,6 +449,16 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
             ),
             desc="get_local_media_thumbnails",
         )
+        return [
+            ThumbnailInfo(
+                width=row["thumbnail_width"],
+                height=row["thumbnail_height"],
+                method=row["thumbnail_method"],
+                type=row["thumbnail_type"],
+                length=row["thumbnail_length"],
+            )
+            for row in rows
+        ]

     @trace
     async def store_local_thumbnail(
@@ -556,8 +567,8 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):

     async def get_remote_media_thumbnails(
         self, origin: str, media_id: str
-    ) -> List[Dict[str, Any]]:
-        return await self.db_pool.simple_select_list(
+    ) -> List[ThumbnailInfo]:
+        rows = await self.db_pool.simple_select_list(
             "remote_media_cache_thumbnails",
             {"media_origin": origin, "media_id": media_id},
             (
@@ -566,10 +577,19 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
             "thumbnail_method",
             "thumbnail_type",
             "thumbnail_length",
-            "filesystem_id",
             ),
             desc="get_remote_media_thumbnails",
         )
+        return [
+            ThumbnailInfo(
+                width=row["thumbnail_width"],
+                height=row["thumbnail_height"],
+                method=row["thumbnail_method"],
+                type=row["thumbnail_type"],
+                length=row["thumbnail_length"],
+            )
+            for row in rows
+        ]

     @trace
     async def get_remote_media_thumbnail(
@@ -20,6 +20,7 @@ from typing import (
     Mapping,
     Optional,
     Tuple,
+    Union,
     cast,
 )

@@ -260,27 +261,40 @@ class PresenceStore(PresenceBackgroundUpdateStore, CacheInvalidationWorkerStore)
     async def get_presence_for_users(
         self, user_ids: Iterable[str]
     ) -> Mapping[str, UserPresenceState]:
-        rows = await self.db_pool.simple_select_many_batch(
-            table="presence_stream",
-            column="user_id",
-            iterable=user_ids,
-            keyvalues={},
-            retcols=(
-                "user_id",
-                "state",
-                "last_active_ts",
-                "last_federation_update_ts",
-                "last_user_sync_ts",
-                "status_msg",
-                "currently_active",
+        # TODO All these columns are nullable, but we don't expect that:
+        #      https://github.com/matrix-org/synapse/issues/16467
+        rows = cast(
+            List[Tuple[str, str, int, int, int, Optional[str], Union[int, bool]]],
+            await self.db_pool.simple_select_many_batch(
+                table="presence_stream",
+                column="user_id",
+                iterable=user_ids,
+                keyvalues={},
+                retcols=(
+                    "user_id",
+                    "state",
+                    "last_active_ts",
+                    "last_federation_update_ts",
+                    "last_user_sync_ts",
+                    "status_msg",
+                    "currently_active",
+                ),
+                desc="get_presence_for_users",
             ),
-            desc="get_presence_for_users",
         )

-        for row in rows:
-            row["currently_active"] = bool(row["currently_active"])
-
-        return {row["user_id"]: UserPresenceState(**row) for row in rows}
+        return {
+            user_id: UserPresenceState(
+                user_id=user_id,
+                state=state,
+                last_active_ts=last_active_ts,
+                last_federation_update_ts=last_federation_update_ts,
+                last_user_sync_ts=last_user_sync_ts,
+                status_msg=status_msg,
+                currently_active=bool(currently_active),
+            )
+            for user_id, state, last_active_ts, last_federation_update_ts, last_user_sync_ts, status_msg, currently_active in rows
+        }

     async def should_user_receive_full_presence_with_token(
         self,
@@ -385,28 +399,49 @@ class PresenceStore(PresenceBackgroundUpdateStore, CacheInvalidationWorkerStore)
         limit = 100
         offset = 0
         while True:
-            rows = await self.db_pool.runInteraction(
-                "get_presence_for_all_users",
-                self.db_pool.simple_select_list_paginate_txn,
-                "presence_stream",
-                orderby="stream_id",
-                start=offset,
-                limit=limit,
-                exclude_keyvalues=exclude_keyvalues,
-                retcols=(
-                    "user_id",
-                    "state",
-                    "last_active_ts",
-                    "last_federation_update_ts",
-                    "last_user_sync_ts",
-                    "status_msg",
-                    "currently_active",
+            # TODO All these columns are nullable, but we don't expect that:
+            #      https://github.com/matrix-org/synapse/issues/16467
+            rows = cast(
+                List[Tuple[str, str, int, int, int, Optional[str], Union[int, bool]]],
+                await self.db_pool.runInteraction(
+                    "get_presence_for_all_users",
+                    self.db_pool.simple_select_list_paginate_txn,
+                    "presence_stream",
+                    orderby="stream_id",
+                    start=offset,
+                    limit=limit,
+                    exclude_keyvalues=exclude_keyvalues,
+                    retcols=(
+                        "user_id",
+                        "state",
+                        "last_active_ts",
+                        "last_federation_update_ts",
+                        "last_user_sync_ts",
+                        "status_msg",
+                        "currently_active",
+                    ),
+                    order_direction="ASC",
                 ),
-                order_direction="ASC",
             )

-            for row in rows:
-                users_to_state[row["user_id"]] = UserPresenceState(**row)
+            for (
+                user_id,
+                state,
+                last_active_ts,
+                last_federation_update_ts,
+                last_user_sync_ts,
+                status_msg,
+                currently_active,
+            ) in rows:
+                users_to_state[user_id] = UserPresenceState(
+                    user_id=user_id,
+                    state=state,
+                    last_active_ts=last_active_ts,
+                    last_federation_update_ts=last_federation_update_ts,
+                    last_user_sync_ts=last_user_sync_ts,
+                    status_msg=status_msg,
+                    currently_active=bool(currently_active),
+                )

             # We've run out of updates to query
             if len(rows) < limit:
@@ -434,13 +469,21 @@ class PresenceStore(PresenceBackgroundUpdateStore, CacheInvalidationWorkerStore)

         txn = db_conn.cursor()
         txn.execute(sql, (PresenceState.OFFLINE,))
-        rows = self.db_pool.cursor_to_dict(txn)
+        rows = txn.fetchall()
         txn.close()

-        for row in rows:
-            row["currently_active"] = bool(row["currently_active"])
-
-        return [UserPresenceState(**row) for row in rows]
+        return [
+            UserPresenceState(
+                user_id=user_id,
+                state=state,
+                last_active_ts=last_active_ts,
+                last_federation_update_ts=last_federation_update_ts,
+                last_user_sync_ts=last_user_sync_ts,
+                status_msg=status_msg,
+                currently_active=bool(currently_active),
+            )
+            for user_id, state, last_active_ts, last_federation_update_ts, last_user_sync_ts, status_msg, currently_active in rows
+        ]

     def take_presence_startup_info(self) -> List[UserPresenceState]:
         active_on_startup = self._presence_on_startup
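The presence loop above pages through the table 100 rows at a time and stops when a short page comes back. A generic sketch of that offset/limit pattern (`fetch_page` is a hypothetical stand-in for `simple_select_list_paginate_txn`):

```python
from typing import List, Tuple

def fetch_page(offset: int, limit: int) -> List[Tuple[str, str]]:
    # Hypothetical stand-in for a paginated DB query.
    data = [("@u%d:hs" % i, "online") for i in range(250)]
    return data[offset : offset + limit]

limit, offset = 100, 0
seen = []
while True:
    rows = fetch_page(offset, limit)
    seen.extend(rows)
    # A short (or empty) page means we've run out of rows to query.
    if len(rows) < limit:
        break
    offset += limit
assert len(seen) == 250
```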
@@ -89,6 +89,11 @@ class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore):
         # furthermore, we might already have the table from a previous (failed)
         # purge attempt, so let's drop the table first.

+        if isinstance(self.database_engine, PostgresEngine):
+            # Disable statement timeouts for this transaction; purging rooms can
+            # take a while!
+            txn.execute("SET LOCAL statement_timeout = 0")
+
         txn.execute("DROP TABLE IF EXISTS events_to_purge")

         txn.execute(
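Postgres's `SET LOCAL` scopes a setting to the current transaction only, so the timeout is restored automatically on commit or rollback and other queries are unaffected. A minimal psycopg2-style sketch (the DSN is a placeholder):

```python
import psycopg2  # assumed driver; Synapse talks to Postgres via psycopg2

conn = psycopg2.connect("dbname=synapse")  # placeholder connection string
with conn:
    with conn.cursor() as cur:
        # Applies only inside this transaction: it reverts on COMMIT or
        # ROLLBACK, so the server-wide statement_timeout stays in force
        # for everything else.
        cur.execute("SET LOCAL statement_timeout = 0")
        cur.execute("DROP TABLE IF EXISTS events_to_purge")
```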
@@ -62,20 +62,34 @@ logger = logging.getLogger(__name__)


 def _load_rules(
-    rawrules: List[JsonDict],
+    rawrules: List[Tuple[str, int, str, str]],
     enabled_map: Dict[str, bool],
     experimental_config: ExperimentalConfig,
 ) -> FilteredPushRules:
     """Take the DB rows returned from the DB and convert them into a full
     `FilteredPushRules` object.

+    Args:
+        rawrules: List of tuples of:
+            * rule ID
+            * Priority class
+            * Conditions (as serialized JSON)
+            * Actions (as serialized JSON)
+        enabled_map: A dictionary of rule ID to a boolean of whether the rule is
+            enabled. This might not include all rule IDs from rawrules.
+        experimental_config: The `experimental_features` section of the Synapse
+            config. (Used to check if various features are enabled.)
+
+    Returns:
+        A new FilteredPushRules object.
     """

     ruleslist = [
         PushRule.from_db(
-            rule_id=rawrule["rule_id"],
-            priority_class=rawrule["priority_class"],
-            conditions=rawrule["conditions"],
-            actions=rawrule["actions"],
+            rule_id=rawrule[0],
+            priority_class=rawrule[1],
+            conditions=rawrule[2],
+            actions=rawrule[3],
         )
         for rawrule in rawrules
     ]
@@ -183,7 +197,19 @@ class PushRulesWorkerStore(

         enabled_map = await self.get_push_rules_enabled_for_user(user_id)

-        return _load_rules(rows, enabled_map, self.hs.config.experimental)
+        return _load_rules(
+            [
+                (
+                    row["rule_id"],
+                    row["priority_class"],
+                    row["conditions"],
+                    row["actions"],
+                )
+                for row in rows
+            ],
+            enabled_map,
+            self.hs.config.experimental,
+        )

     async def get_push_rules_enabled_for_user(self, user_id: str) -> Dict[str, bool]:
         results = await self.db_pool.simple_select_list(
@@ -221,21 +247,36 @@ class PushRulesWorkerStore(
         if not user_ids:
             return {}

-        raw_rules: Dict[str, List[JsonDict]] = {user_id: [] for user_id in user_ids}
+        raw_rules: Dict[str, List[Tuple[str, int, str, str]]] = {
+            user_id: [] for user_id in user_ids
+        }

-        rows = await self.db_pool.simple_select_many_batch(
-            table="push_rules",
-            column="user_name",
-            iterable=user_ids,
-            retcols=("*",),
-            desc="bulk_get_push_rules",
-            batch_size=1000,
+        rows = cast(
+            List[Tuple[str, str, int, int, str, str]],
+            await self.db_pool.simple_select_many_batch(
+                table="push_rules",
+                column="user_name",
+                iterable=user_ids,
+                retcols=(
+                    "user_name",
+                    "rule_id",
+                    "priority_class",
+                    "priority",
+                    "conditions",
+                    "actions",
+                ),
+                desc="bulk_get_push_rules",
+                batch_size=1000,
+            ),
         )

-        rows.sort(key=lambda row: (-int(row["priority_class"]), -int(row["priority"])))
+        # Sort by highest priority_class, then highest priority.
+        rows.sort(key=lambda row: (-int(row[2]), -int(row[3])))

-        for row in rows:
-            raw_rules.setdefault(row["user_name"], []).append(row)
+        for user_name, rule_id, priority_class, _, conditions, actions in rows:
+            raw_rules.setdefault(user_name, []).append(
+                (rule_id, priority_class, conditions, actions)
+            )

         enabled_map_by_user = await self.bulk_get_push_rules_enabled(user_ids)

@@ -256,17 +297,19 @@ class PushRulesWorkerStore(

         results: Dict[str, Dict[str, bool]] = {user_id: {} for user_id in user_ids}

-        rows = await self.db_pool.simple_select_many_batch(
-            table="push_rules_enable",
-            column="user_name",
-            iterable=user_ids,
-            retcols=("user_name", "rule_id", "enabled"),
-            desc="bulk_get_push_rules_enabled",
-            batch_size=1000,
+        rows = cast(
+            List[Tuple[str, str, Optional[int]]],
+            await self.db_pool.simple_select_many_batch(
+                table="push_rules_enable",
+                column="user_name",
+                iterable=user_ids,
+                retcols=("user_name", "rule_id", "enabled"),
+                desc="bulk_get_push_rules_enabled",
+                batch_size=1000,
+            ),
         )
-        for row in rows:
-            enabled = bool(row["enabled"])
-            results.setdefault(row["user_name"], {})[row["rule_id"]] = enabled
+        for user_name, rule_id, enabled in rows:
+            results.setdefault(user_name, {})[rule_id] = bool(enabled)
         return results

     async def get_all_push_rule_updates(
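The new sort key negates the two integer columns so that the highest priority class, then the highest priority within it, comes first. A quick synthetic check of that ordering:

```python
# Rows shaped like (user_name, rule_id, priority_class, priority).
rows = [
    ("@a:hs", "rule1", 1, 5),
    ("@a:hs", "rule2", 5, 0),
    ("@a:hs", "rule3", 1, 10),
]
# Sort by highest priority_class, then highest priority, as in the diff.
rows.sort(key=lambda row: (-int(row[2]), -int(row[3])))
assert [r[1] for r in rows] == ["rule2", "rule3", "rule1"]
```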
@@ -47,6 +47,27 @@ if TYPE_CHECKING:

 logger = logging.getLogger(__name__)

+# The type of a row in the pushers table.
+PusherRow = Tuple[
+    int,  # id
+    str,  # user_name
+    Optional[int],  # access_token
+    str,  # profile_tag
+    str,  # kind
+    str,  # app_id
+    str,  # app_display_name
+    str,  # device_display_name
+    str,  # pushkey
+    int,  # ts
+    str,  # lang
+    str,  # data
+    int,  # last_stream_ordering
+    int,  # last_success
+    int,  # failing_since
+    bool,  # enabled
+    str,  # device_id
+]
+

 class PusherWorkerStore(SQLBaseStore):
     def __init__(
@@ -83,30 +104,66 @@ class PusherWorkerStore(SQLBaseStore):
             self._remove_deleted_email_pushers,
         )

-    def _decode_pushers_rows(self, rows: Iterable[dict]) -> Iterator[PusherConfig]:
+    def _decode_pushers_rows(
+        self,
+        rows: Iterable[PusherRow],
+    ) -> Iterator[PusherConfig]:
         """JSON-decode the data in the rows returned from the `pushers` table

         Drops any rows whose data cannot be decoded
         """
-        for r in rows:
-            data_json = r["data"]
+        for (
+            id,
+            user_name,
+            access_token,
+            profile_tag,
+            kind,
+            app_id,
+            app_display_name,
+            device_display_name,
+            pushkey,
+            ts,
+            lang,
+            data,
+            last_stream_ordering,
+            last_success,
+            failing_since,
+            enabled,
+            device_id,
+        ) in rows:
             try:
-                r["data"] = db_to_json(data_json)
+                data_json = db_to_json(data)
             except Exception as e:
                 logger.warning(
                     "Invalid JSON in data for pusher %d: %s, %s",
-                    r["id"],
-                    data_json,
+                    id,
+                    data,
                     e.args[0],
                 )
                 continue

-            # If we're using SQLite, then boolean values are integers. This is
-            # troublesome since some code using the return value of this method might
-            # expect it to be a boolean, or will expose it to clients (in responses).
-            r["enabled"] = bool(r["enabled"])
-
-            yield PusherConfig(**r)
+            yield PusherConfig(
+                id=id,
+                user_name=user_name,
+                profile_tag=profile_tag,
+                kind=kind,
+                app_id=app_id,
+                app_display_name=app_display_name,
+                device_display_name=device_display_name,
+                pushkey=pushkey,
+                ts=ts,
+                lang=lang,
+                data=data_json,
+                last_stream_ordering=last_stream_ordering,
+                last_success=last_success,
+                failing_since=failing_since,
+                # If we're using SQLite, then boolean values are integers. This is
+                # troublesome since some code using the return value of this method might
+                # expect it to be a boolean, or will expose it to clients (in responses).
+                enabled=bool(enabled),
+                device_id=device_id,
+                access_token=access_token,
+            )

     def get_pushers_stream_token(self) -> int:
         return self._pushers_id_gen.get_current_token()
@@ -136,7 +193,7 @@ class PusherWorkerStore(SQLBaseStore):
         The pushers for which the given columns have the given values.
         """

-        def get_pushers_by_txn(txn: LoggingTransaction) -> List[Dict[str, Any]]:
+        def get_pushers_by_txn(txn: LoggingTransaction) -> List[PusherRow]:
             # We could technically use simple_select_list here, but we need to call
             # COALESCE on the 'enabled' column. While it is technically possible to give
             # simple_select_list the whole `COALESCE(...) AS ...` as a column name, it
@@ -154,7 +211,7 @@ class PusherWorkerStore(SQLBaseStore):

             txn.execute(sql, list(keyvalues.values()))

-            return self.db_pool.cursor_to_dict(txn)
+            return cast(List[PusherRow], txn.fetchall())

         ret = await self.db_pool.runInteraction(
             desc="get_pushers_by",
@@ -164,14 +221,22 @@ class PusherWorkerStore(SQLBaseStore):
         return self._decode_pushers_rows(ret)

     async def get_enabled_pushers(self) -> Iterator[PusherConfig]:
-        def get_enabled_pushers_txn(txn: LoggingTransaction) -> Iterator[PusherConfig]:
-            txn.execute("SELECT * FROM pushers WHERE COALESCE(enabled, TRUE)")
-            rows = self.db_pool.cursor_to_dict(txn)
+        def get_enabled_pushers_txn(txn: LoggingTransaction) -> List[PusherRow]:
+            txn.execute(
+                """
+                SELECT id, user_name, access_token, profile_tag, kind, app_id,
+                    app_display_name, device_display_name, pushkey, ts, lang, data,
+                    last_stream_ordering, last_success, failing_since,
+                    enabled, device_id
+                FROM pushers WHERE COALESCE(enabled, TRUE)
+                """
+            )
+            return cast(List[PusherRow], txn.fetchall())

-            return self._decode_pushers_rows(rows)
-
-        return await self.db_pool.runInteraction(
-            "get_enabled_pushers", get_enabled_pushers_txn
+        return self._decode_pushers_rows(
+            await self.db_pool.runInteraction(
+                "get_enabled_pushers", get_enabled_pushers_txn
+            )
         )

     async def get_all_updated_pushers_rows(
@@ -304,7 +369,7 @@ class PusherWorkerStore(SQLBaseStore):
         )

     async def get_throttle_params_by_room(
-        self, pusher_id: str
+        self, pusher_id: int
     ) -> Dict[str, ThrottleParams]:
         res = await self.db_pool.simple_select_list(
             "pusher_throttle",
@@ -323,7 +388,7 @@ class PusherWorkerStore(SQLBaseStore):
         return params_by_room

     async def set_throttle_params(
-        self, pusher_id: str, room_id: str, params: ThrottleParams
+        self, pusher_id: int, room_id: str, params: ThrottleParams
     ) -> None:
         await self.db_pool.simple_upsert(
             "pusher_throttle",
@@ -534,7 +599,7 @@ class PusherBackgroundUpdatesStore(SQLBaseStore):
                 (last_pusher_id, batch_size),
             )

-            rows = self.db_pool.cursor_to_dict(txn)
+            rows = txn.fetchall()
             if len(rows) == 0:
                 return 0

@@ -550,19 +615,19 @@ class PusherBackgroundUpdatesStore(SQLBaseStore):
                 txn=txn,
                 table="pushers",
                 key_names=("id",),
-                key_values=[(row["pusher_id"],) for row in rows],
+                key_values=[row[0] for row in rows],
                 value_names=("device_id", "access_token"),
                 # If there was already a device_id on the pusher, we only want to clear
                 # the access_token column, so we keep the existing device_id. Otherwise,
                 # we set the device_id we got from joining the access_tokens table.
                 value_values=[
-                    (row["pusher_device_id"] or row["token_device_id"], None)
-                    for row in rows
+                    (pusher_device_id or token_device_id, None)
+                    for _, pusher_device_id, token_device_id in rows
                 ],
             )

             self.db_pool.updates._background_update_progress_txn(
-                txn, "set_device_id_for_pushers", {"pusher_id": rows[-1]["pusher_id"]}
+                txn, "set_device_id_for_pushers", {"pusher_id": rows[-1][0]}
             )

             return len(rows)
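`_decode_pushers_rows` is a generator that logs and skips rows whose payload fails to decode instead of failing the whole batch. A self-contained sketch of the same defensive pattern (synthetic data, `json.loads` standing in for `db_to_json`):

```python
import json
import logging
from typing import Iterable, Iterator, Tuple

logger = logging.getLogger(__name__)

def decode_rows(rows: Iterable[Tuple[int, str]]) -> Iterator[dict]:
    # Bad JSON is logged and skipped rather than raising out of the loop.
    for row_id, data in rows:
        try:
            yield json.loads(data)
        except Exception as e:
            logger.warning("Invalid JSON in data for pusher %d: %s", row_id, e)
            continue

good = (1, '{"url": "https://push.example"}')
bad = (2, "{not json")
assert len(list(decode_rows([good, bad]))) == 1
```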
@@ -313,25 +313,25 @@ class ReceiptsWorkerStore(SQLBaseStore):
     ) -> Sequence[JsonMapping]:
         """See get_linearized_receipts_for_room"""

-        def f(txn: LoggingTransaction) -> List[Dict[str, Any]]:
+        def f(txn: LoggingTransaction) -> List[Tuple[str, str, str, str]]:
             if from_key:
                 sql = (
-                    "SELECT * FROM receipts_linearized WHERE"
+                    "SELECT receipt_type, user_id, event_id, data"
+                    " FROM receipts_linearized WHERE"
                     " room_id = ? AND stream_id > ? AND stream_id <= ?"
                 )

                 txn.execute(sql, (room_id, from_key, to_key))
             else:
                 sql = (
-                    "SELECT * FROM receipts_linearized WHERE"
+                    "SELECT receipt_type, user_id, event_id, data"
+                    " FROM receipts_linearized WHERE"
                     " room_id = ? AND stream_id <= ?"
                 )

                 txn.execute(sql, (room_id, to_key))

-            rows = self.db_pool.cursor_to_dict(txn)
-
-            return rows
+            return cast(List[Tuple[str, str, str, str]], txn.fetchall())

         rows = await self.db_pool.runInteraction("get_linearized_receipts_for_room", f)

@@ -339,10 +339,10 @@ class ReceiptsWorkerStore(SQLBaseStore):
             return []

         content: JsonDict = {}
-        for row in rows:
-            content.setdefault(row["event_id"], {}).setdefault(row["receipt_type"], {})[
-                row["user_id"]
-            ] = db_to_json(row["data"])
+        for receipt_type, user_id, event_id, data in rows:
+            content.setdefault(event_id, {}).setdefault(receipt_type, {})[
+                user_id
+            ] = db_to_json(data)

         return [{"type": EduTypes.RECEIPT, "room_id": room_id, "content": content}]

@@ -357,10 +357,13 @@ class ReceiptsWorkerStore(SQLBaseStore):
         if not room_ids:
             return {}

-        def f(txn: LoggingTransaction) -> List[Dict[str, Any]]:
+        def f(
+            txn: LoggingTransaction,
+        ) -> List[Tuple[str, str, str, str, Optional[str], str]]:
             if from_key:
                 sql = """
-                    SELECT * FROM receipts_linearized WHERE
+                    SELECT room_id, receipt_type, user_id, event_id, thread_id, data
+                    FROM receipts_linearized WHERE
                     stream_id > ? AND stream_id <= ? AND
                 """
                 clause, args = make_in_list_sql_clause(
@@ -370,7 +373,8 @@ class ReceiptsWorkerStore(SQLBaseStore):
                 txn.execute(sql + clause, [from_key, to_key] + list(args))
             else:
                 sql = """
-                    SELECT * FROM receipts_linearized WHERE
+                    SELECT room_id, receipt_type, user_id, event_id, thread_id, data
+                    FROM receipts_linearized WHERE
                     stream_id <= ? AND
                 """

@@ -380,29 +384,31 @@ class ReceiptsWorkerStore(SQLBaseStore):

                 txn.execute(sql + clause, [to_key] + list(args))

-            return self.db_pool.cursor_to_dict(txn)
+            return cast(
+                List[Tuple[str, str, str, str, Optional[str], str]], txn.fetchall()
+            )

         txn_results = await self.db_pool.runInteraction(
             "_get_linearized_receipts_for_rooms", f
         )

         results: JsonDict = {}
-        for row in txn_results:
+        for room_id, receipt_type, user_id, event_id, thread_id, data in txn_results:
             # We want a single event per room, since we want to batch the
             # receipts by room, event and type.
             room_event = results.setdefault(
-                row["room_id"],
-                {"type": EduTypes.RECEIPT, "room_id": row["room_id"], "content": {}},
+                room_id,
+                {"type": EduTypes.RECEIPT, "room_id": room_id, "content": {}},
             )

             # The content is of the form:
             # {"$foo:bar": { "read": { "@user:host": <receipt> }, .. }, .. }
-            event_entry = room_event["content"].setdefault(row["event_id"], {})
-            receipt_type = event_entry.setdefault(row["receipt_type"], {})
+            event_entry = room_event["content"].setdefault(event_id, {})
+            receipt_type_dict = event_entry.setdefault(receipt_type, {})

-            receipt_type[row["user_id"]] = db_to_json(row["data"])
-            if row["thread_id"]:
-                receipt_type[row["user_id"]]["thread_id"] = row["thread_id"]
+            receipt_type_dict[user_id] = db_to_json(data)
+            if thread_id:
+                receipt_type_dict[user_id]["thread_id"] = thread_id

         results = {
             room_id: [results[room_id]] if room_id in results else []
@@ -428,10 +434,11 @@ class ReceiptsWorkerStore(SQLBaseStore):
             A dictionary of roomids to a list of receipts.
         """

-        def f(txn: LoggingTransaction) -> List[Dict[str, Any]]:
+        def f(txn: LoggingTransaction) -> List[Tuple[str, str, str, str, str]]:
             if from_key:
                 sql = """
-                    SELECT * FROM receipts_linearized WHERE
+                    SELECT room_id, receipt_type, user_id, event_id, data
+                    FROM receipts_linearized WHERE
                     stream_id > ? AND stream_id <= ?
                     ORDER BY stream_id DESC
                     LIMIT 100
@@ -439,7 +446,8 @@ class ReceiptsWorkerStore(SQLBaseStore):
                 txn.execute(sql, [from_key, to_key])
             else:
                 sql = """
-                    SELECT * FROM receipts_linearized WHERE
+                    SELECT room_id, receipt_type, user_id, event_id, data
+                    FROM receipts_linearized WHERE
                     stream_id <= ?
                     ORDER BY stream_id DESC
                     LIMIT 100
@@ -447,27 +455,27 @@ class ReceiptsWorkerStore(SQLBaseStore):

             txn.execute(sql, [to_key])

-            return self.db_pool.cursor_to_dict(txn)
+            return cast(List[Tuple[str, str, str, str, str]], txn.fetchall())

         txn_results = await self.db_pool.runInteraction(
             "get_linearized_receipts_for_all_rooms", f
         )

         results: JsonDict = {}
-        for row in txn_results:
+        for room_id, receipt_type, user_id, event_id, data in txn_results:
             # We want a single event per room, since we want to batch the
             # receipts by room, event and type.
             room_event = results.setdefault(
-                row["room_id"],
-                {"type": EduTypes.RECEIPT, "room_id": row["room_id"], "content": {}},
+                room_id,
+                {"type": EduTypes.RECEIPT, "room_id": room_id, "content": {}},
             )

             # The content is of the form:
             # {"$foo:bar": { "read": { "@user:host": <receipt> }, .. }, .. }
-            event_entry = room_event["content"].setdefault(row["event_id"], {})
-            receipt_type = event_entry.setdefault(row["receipt_type"], {})
+            event_entry = room_event["content"].setdefault(event_id, {})
+            receipt_type_dict = event_entry.setdefault(receipt_type, {})

-            receipt_type[row["user_id"]] = db_to_json(row["data"])
+            receipt_type_dict[user_id] = db_to_json(data)

         return results

@@ -742,7 +750,7 @@ class ReceiptsWorkerStore(SQLBaseStore):
         event_ids: List[str],
         thread_id: Optional[str],
         data: dict,
-    ) -> Optional[Tuple[int, int]]:
+    ) -> Optional[int]:
         """Insert a receipt, either from local client or remote server.

         Automatically does conversion between linearized and graph
@@ -804,9 +812,7 @@ class ReceiptsWorkerStore(SQLBaseStore):
             data,
         )

-        max_persisted_id = self._receipts_id_gen.get_current_token()
-
-        return stream_id, max_persisted_id
+        return stream_id

     async def _insert_graph_receipt(
         self,
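The `setdefault` chain above builds the nested receipt shape documented in the comment, `{event_id: {receipt_type: {user_id: data}}}`. A runnable miniature of the same nesting (synthetic rows):

```python
import json

rows = [
    ("m.read", "@alice:hs", "$event1", "{}"),
    ("m.read", "@bob:hs", "$event1", "{}"),
]
content: dict = {}
# {"$foo:bar": { "read": { "@user:host": <receipt> }, .. }, .. }
for receipt_type, user_id, event_id, data in rows:
    content.setdefault(event_id, {}).setdefault(receipt_type, {})[
        user_id
    ] = json.loads(data)
assert set(content["$event1"]["m.read"]) == {"@alice:hs", "@bob:hs"}
```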
@@ -143,6 +143,14 @@ class LoginTokenLookupResult:
     """The session ID advertised by the SSO Identity Provider."""


+@attr.s(frozen=True, slots=True, auto_attribs=True)
+class ThreepidResult:
+    medium: str
+    address: str
+    validated_at: int
+    added_at: int
+
+
 class RegistrationWorkerStore(CacheInvalidationWorkerStore):
     def __init__(
         self,
@@ -195,7 +203,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
     async def get_user_by_id(self, user_id: str) -> Optional[UserInfo]:
         """Returns info about the user account, if it exists."""

-        def get_user_by_id_txn(txn: LoggingTransaction) -> Optional[Dict[str, Any]]:
+        def get_user_by_id_txn(txn: LoggingTransaction) -> Optional[UserInfo]:
             # We could technically use simple_select_one here, but it would not perform
             # the COALESCEs (unless hacked into the column names), which could yield
             # confusing results.
@@ -213,35 +221,46 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
                 (user_id,),
             )

-            rows = self.db_pool.cursor_to_dict(txn)
-
-            if len(rows) == 0:
+            row = txn.fetchone()
+            if not row:
                 return None

-            return rows[0]
+            (
+                name,
+                is_guest,
+                admin,
+                consent_version,
+                consent_ts,
+                consent_server_notice_sent,
+                appservice_id,
+                creation_ts,
+                user_type,
+                deactivated,
+                shadow_banned,
+                approved,
+                locked,
+            ) = row

-        row = await self.db_pool.runInteraction(
+            return UserInfo(
+                appservice_id=appservice_id,
+                consent_server_notice_sent=consent_server_notice_sent,
+                consent_version=consent_version,
+                consent_ts=consent_ts,
+                creation_ts=creation_ts,
+                is_admin=bool(admin),
+                is_deactivated=bool(deactivated),
+                is_guest=bool(is_guest),
+                is_shadow_banned=bool(shadow_banned),
+                user_id=UserID.from_string(name),
+                user_type=user_type,
+                approved=bool(approved),
+                locked=bool(locked),
+            )
+
+        return await self.db_pool.runInteraction(
             desc="get_user_by_id",
             func=get_user_by_id_txn,
         )
-        if row is None:
-            return None
-
-        return UserInfo(
-            appservice_id=row["appservice_id"],
-            consent_server_notice_sent=row["consent_server_notice_sent"],
-            consent_version=row["consent_version"],
-            consent_ts=row["consent_ts"],
-            creation_ts=row["creation_ts"],
-            is_admin=bool(row["admin"]),
-            is_deactivated=bool(row["deactivated"]),
-            is_guest=bool(row["is_guest"]),
-            is_shadow_banned=bool(row["shadow_banned"]),
-            user_id=UserID.from_string(row["name"]),
-            user_type=row["user_type"],
-            approved=bool(row["approved"]),
-            locked=bool(row["locked"]),
-        )

     async def is_trial_user(self, user_id: str) -> bool:
         """Checks if user is in the "trial" period, i.e. within the first
@@ -579,16 +598,31 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
         """

         txn.execute(sql, (token,))
-        rows = self.db_pool.cursor_to_dict(txn)
+        row = txn.fetchone()

-        if rows:
-            row = rows[0]
+        if row:
+            (
+                user_id,
+                is_guest,
+                shadow_banned,
+                token_id,
+                device_id,
+                valid_until_ms,
+                token_owner,
+                token_used,
+            ) = row

-            # This field is nullable, ensure it comes out as a boolean
-            if row["token_used"] is None:
-                row["token_used"] = False
-
-            return TokenLookupResult(**row)
+            return TokenLookupResult(
+                user_id=user_id,
+                is_guest=is_guest,
+                shadow_banned=shadow_banned,
+                token_id=token_id,
+                device_id=device_id,
+                valid_until_ms=valid_until_ms,
+                token_owner=token_owner,
+                # This field is nullable, ensure it comes out as a boolean
+                token_used=bool(token_used),
+            )

         return None

@@ -833,11 +867,10 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
         """Counts all users registered on the homeserver."""

         def _count_users(txn: LoggingTransaction) -> int:
-            txn.execute("SELECT COUNT(*) AS users FROM users")
-            rows = self.db_pool.cursor_to_dict(txn)
-            if rows:
-                return rows[0]["users"]
-            return 0
+            txn.execute("SELECT COUNT(*) FROM users")
+            row = txn.fetchone()
+            assert row is not None
+            return row[0]

         return await self.db_pool.runInteraction("count_users", _count_users)

@@ -891,11 +924,10 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
         """Counts all users without a special user_type registered on the homeserver."""

         def _count_users(txn: LoggingTransaction) -> int:
-            txn.execute("SELECT COUNT(*) AS users FROM users where user_type is null")
-            rows = self.db_pool.cursor_to_dict(txn)
-            if rows:
-                return rows[0]["users"]
-            return 0
+            txn.execute("SELECT COUNT(*) FROM users where user_type is null")
+            row = txn.fetchone()
+            assert row is not None
+            return row[0]

         return await self.db_pool.runInteraction("count_real_users", _count_users)

@@ -964,13 +996,14 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
             {"user_id": user_id, "validated_at": validated_at, "added_at": added_at},
         )

-    async def user_get_threepids(self, user_id: str) -> List[Dict[str, Any]]:
-        return await self.db_pool.simple_select_list(
+    async def user_get_threepids(self, user_id: str) -> List[ThreepidResult]:
+        results = await self.db_pool.simple_select_list(
             "user_threepids",
-            {"user_id": user_id},
-            ["medium", "address", "validated_at", "added_at"],
-            "user_get_threepids",
+            keyvalues={"user_id": user_id},
+            retcols=["medium", "address", "validated_at", "added_at"],
+            desc="user_get_threepids",
         )
+        return [ThreepidResult(**r) for r in results]

     async def user_delete_threepid(
         self, user_id: str, medium: str, address: str
@@ -1252,12 +1285,8 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
             )
             txn.execute(sql, [])

-            res = self.db_pool.cursor_to_dict(txn)
-            if res:
-                for user in res:
-                    self.set_expiration_date_for_user_txn(
-                        txn, user["name"], use_delta=True
-                    )
+            for (name,) in txn.fetchall():
+                self.set_expiration_date_for_user_txn(txn, name, use_delta=True)

         await self.db_pool.runInteraction(
             "get_users_with_no_expiration_date",
@@ -1963,11 +1992,12 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
                 (user_id,),
             )

-            rows = self.db_pool.cursor_to_dict(txn)
+            row = txn.fetchone()
+            assert row is not None

             # We cast to bool because the value returned by the database engine might
             # be an integer if we're using SQLite.
-            return bool(rows[0]["approved"])
+            return bool(row[0])

         return await self.db_pool.runInteraction(
             desc="is_user_pending_approval",
@@ -2045,22 +2075,22 @@ class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
                 (last_user, batch_size),
             )

-            rows = self.db_pool.cursor_to_dict(txn)
+            rows = txn.fetchall()

             if not rows:
                 return True, 0

             rows_processed_nb = 0

-            for user in rows:
-                if not user["count_tokens"] and not user["count_threepids"]:
-                    self.set_user_deactivated_status_txn(txn, user["name"], True)
+            for name, count_tokens, count_threepids in rows:
+                if not count_tokens and not count_threepids:
+                    self.set_user_deactivated_status_txn(txn, name, True)
                     rows_processed_nb += 1

             logger.info("Marked %d rows as deactivated", rows_processed_nb)

             self.db_pool.updates._background_update_progress_txn(
-                txn, "users_set_deactivated_flag", {"user_id": rows[-1]["name"]}
+                txn, "users_set_deactivated_flag", {"user_id": rows[-1][0]}
             )

             if batch_size > len(rows):

@@ -349,16 +349,19 @@ class RelationsWorkerStore(SQLBaseStore):
         def get_all_relation_ids_for_event_with_types_txn(
             txn: LoggingTransaction,
         ) -> List[str]:
-            rows = self.db_pool.simple_select_many_txn(
-                txn=txn,
-                table="event_relations",
-                column="relation_type",
-                iterable=relation_types,
-                keyvalues={"relates_to_id": event_id},
-                retcols=["event_id"],
+            rows = cast(
+                List[Tuple[str]],
+                self.db_pool.simple_select_many_txn(
+                    txn=txn,
+                    table="event_relations",
+                    column="relation_type",
+                    iterable=relation_types,
+                    keyvalues={"relates_to_id": event_id},
+                    retcols=["event_id"],
+                ),
             )

-            return [row["event_id"] for row in rows]
+            return [row[0] for row in rows]

         return await self.db_pool.runInteraction(
             desc="get_all_relation_ids_for_event_with_types",

@@ -831,7 +831,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):

         def get_retention_policy_for_room_txn(
             txn: LoggingTransaction,
-        ) -> List[Dict[str, Optional[int]]]:
+        ) -> Optional[Tuple[Optional[int], Optional[int]]]:
             txn.execute(
                 """
                 SELECT min_lifetime, max_lifetime FROM room_retention
@@ -841,7 +841,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
                 (room_id,),
             )

-            return self.db_pool.cursor_to_dict(txn)
+            return cast(Optional[Tuple[Optional[int], Optional[int]]], txn.fetchone())

         ret = await self.db_pool.runInteraction(
             "get_retention_policy_for_room",
@@ -856,8 +856,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
                 max_lifetime=self.config.retention.retention_default_max_lifetime,
             )

-        min_lifetime = ret[0]["min_lifetime"]
-        max_lifetime = ret[0]["max_lifetime"]
+        min_lifetime, max_lifetime = ret

         # If one of the room's policy's attributes isn't defined, use the matching
         # attribute from the default policy.
@@ -1162,14 +1161,13 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):

             txn.execute(sql, args)

-            rows = self.db_pool.cursor_to_dict(txn)
-            rooms_dict = {}
-
-            for row in rows:
-                rooms_dict[row["room_id"]] = RetentionPolicy(
-                    min_lifetime=row["min_lifetime"],
-                    max_lifetime=row["max_lifetime"],
+            rooms_dict = {
+                room_id: RetentionPolicy(
+                    min_lifetime=min_lifetime,
+                    max_lifetime=max_lifetime,
                 )
+                for room_id, min_lifetime, max_lifetime in txn
+            }

             if include_null:
                 # If required, do a second query that retrieves all of the rooms we know
@@ -1178,13 +1176,11 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):

                 txn.execute(sql)

-                rows = self.db_pool.cursor_to_dict(txn)
-
                 # If a room isn't already in the dict (i.e. it doesn't have a retention
                 # policy in its state), add it with a null policy.
-                for row in rows:
-                    if row["room_id"] not in rooms_dict:
-                        rooms_dict[row["room_id"]] = RetentionPolicy()
+                for (room_id,) in txn:
+                    if room_id not in rooms_dict:
+                        rooms_dict[room_id] = RetentionPolicy()

             return rooms_dict

@@ -1300,14 +1296,17 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
            complete.
         """

-        rows: List[Dict[str, str]] = await self.db_pool.simple_select_many_batch(
-            table="partial_state_rooms",
-            column="room_id",
-            iterable=room_ids,
-            retcols=("room_id",),
-            desc="is_partial_state_room_batched",
+        rows = cast(
+            List[Tuple[str]],
+            await self.db_pool.simple_select_many_batch(
+                table="partial_state_rooms",
+                column="room_id",
+                iterable=room_ids,
+                retcols=("room_id",),
+                desc="is_partial_state_room_batched",
+            ),
         )
-        partial_state_rooms = {row_dict["room_id"] for row_dict in rows}
+        partial_state_rooms = {row[0] for row in rows}
         return {room_id: room_id in partial_state_rooms for room_id in room_ids}

     async def get_join_event_id_and_device_lists_stream_id_for_partial_state(
@@ -1703,24 +1702,24 @@ class RoomBackgroundUpdateStore(SQLBaseStore):
                 (last_room, batch_size),
             )

-            rows = self.db_pool.cursor_to_dict(txn)
+            rows = txn.fetchall()

             if not rows:
                 return True

-            for row in rows:
-                if not row["json"]:
+            for room_id, event_id, json in rows:
+                if not json:
                     retention_policy = {}
                 else:
-                    ev = db_to_json(row["json"])
+                    ev = db_to_json(json)
                     retention_policy = ev["content"]

                 self.db_pool.simple_insert_txn(
                     txn=txn,
                     table="room_retention",
                     values={
-                        "room_id": row["room_id"],
-                        "event_id": row["event_id"],
+                        "room_id": room_id,
+                        "event_id": event_id,
                         "min_lifetime": retention_policy.get("min_lifetime"),
                         "max_lifetime": retention_policy.get("max_lifetime"),
                     },
@@ -1729,7 +1728,7 @@ class RoomBackgroundUpdateStore(SQLBaseStore):
             logger.info("Inserted %d rows into room_retention", len(rows))

             self.db_pool.updates._background_update_progress_txn(
-                txn, "insert_room_retention", {"room_id": rows[-1]["room_id"]}
+                txn, "insert_room_retention", {"room_id": rows[-1][0]}
             )

             if batch_size > len(rows):
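Several hunks normalise flag columns with `bool(...)` because SQLite has no native boolean type and hands back integers. A runnable stdlib demonstration of why the coercion is needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, deactivated BOOLEAN)")
conn.execute("INSERT INTO users VALUES ('@a:hs', 1)")
row = conn.execute("SELECT deactivated FROM users").fetchone()
# SQLite returns the flag as an int, so callers wrap it in bool().
assert row[0] == 1 and isinstance(row[0], int)
assert bool(row[0]) is True
```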
@@ -27,6 +27,7 @@ from typing import (
     Set,
     Tuple,
     Union,
+    cast,
 )

 import attr

@@ -683,25 +684,28 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
             Map from user_id to set of rooms that is currently in.
         """

-        rows = await self.db_pool.simple_select_many_batch(
-            table="current_state_events",
-            column="state_key",
-            iterable=user_ids,
-            retcols=(
-                "state_key",
-                "room_id",
+        rows = cast(
+            List[Tuple[str, str]],
+            await self.db_pool.simple_select_many_batch(
+                table="current_state_events",
+                column="state_key",
+                iterable=user_ids,
+                retcols=(
+                    "state_key",
+                    "room_id",
+                ),
+                keyvalues={
+                    "type": EventTypes.Member,
+                    "membership": Membership.JOIN,
+                },
+                desc="get_rooms_for_users",
             ),
-            keyvalues={
-                "type": EventTypes.Member,
-                "membership": Membership.JOIN,
-            },
-            desc="get_rooms_for_users",
         )

         user_rooms: Dict[str, Set[str]] = {user_id: set() for user_id in user_ids}

-        for row in rows:
-            user_rooms[row["state_key"]].add(row["room_id"])
+        for state_key, room_id in rows:
+            user_rooms[state_key].add(room_id)

         return {key: frozenset(rooms) for key, rooms in user_rooms.items()}

@@ -892,17 +896,20 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
             Map from event ID to `user_id`, or None if event is not a join.
         """

-        rows = await self.db_pool.simple_select_many_batch(
-            table="room_memberships",
-            column="event_id",
-            iterable=event_ids,
-            retcols=("user_id", "event_id"),
-            keyvalues={"membership": Membership.JOIN},
-            batch_size=1000,
-            desc="_get_user_ids_from_membership_event_ids",
+        rows = cast(
+            List[Tuple[str, str]],
+            await self.db_pool.simple_select_many_batch(
+                table="room_memberships",
+                column="event_id",
+                iterable=event_ids,
+                retcols=("event_id", "user_id"),
+                keyvalues={"membership": Membership.JOIN},
+                batch_size=1000,
+                desc="_get_user_ids_from_membership_event_ids",
+            ),
         )

-        return {row["event_id"]: row["user_id"] for row in rows}
+        return dict(rows)

     @cached(max_entries=10000)
     async def is_host_joined(self, room_id: str, host: str) -> bool:
@@ -1202,21 +1209,22 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
             membership event, otherwise the value is None.
         """

-        rows = await self.db_pool.simple_select_many_batch(
-            table="room_memberships",
-            column="event_id",
-            iterable=member_event_ids,
-            retcols=("user_id", "membership", "event_id"),
-            keyvalues={},
-            batch_size=500,
-            desc="get_membership_from_event_ids",
+        rows = cast(
+            List[Tuple[str, str, str]],
+            await self.db_pool.simple_select_many_batch(
+                table="room_memberships",
+                column="event_id",
+                iterable=member_event_ids,
+                retcols=("user_id", "membership", "event_id"),
+                keyvalues={},
+                batch_size=500,
+                desc="get_membership_from_event_ids",
+            ),
         )

         return {
-            row["event_id"]: EventIdMembership(
-                membership=row["membership"], user_id=row["user_id"]
-            )
-            for row in rows
+            event_id: EventIdMembership(membership=membership, user_id=user_id)
+            for user_id, membership, event_id in rows
         }

     async def is_local_host_in_room_ignoring_users(
@@ -1349,18 +1357,16 @@ class RoomMemberBackgroundUpdateStore(SQLBaseStore):

             txn.execute(sql, (target_min_stream_id, max_stream_id, batch_size))

-            rows = self.db_pool.cursor_to_dict(txn)
+            rows = txn.fetchall()
             if not rows:
                 return 0

-            min_stream_id = rows[-1]["stream_ordering"]
+            min_stream_id = rows[-1][0]

             to_update = []
-            for row in rows:
-                event_id = row["event_id"]
-                room_id = row["room_id"]
+            for _, event_id, room_id, json in rows:
                 try:
-                    event_json = db_to_json(row["json"])
+                    event_json = db_to_json(json)
                     content = event_json["content"]
                 except Exception:
                     continue
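Reordering `retcols` to `("event_id", "user_id")` lets the list of 2-tuples feed `dict()` directly, replacing the per-row dict comprehension. A one-liner demonstration with synthetic rows:

```python
rows = [("$event1", "@alice:hs"), ("$event2", "@bob:hs")]
# dict() consumes (key, value) pairs, so column order does the mapping.
assert dict(rows) == {"$event1": "@alice:hs", "$event2": "@bob:hs"}
```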
@@ -179,22 +179,24 @@ class SearchBackgroundUpdateStore(SearchWorkerStore):
            # store_search_entries_txn with a generator function, but that
            # would mean having two cursors open on the database at once.
            # Instead we just build a list of results.
-            rows = self.db_pool.cursor_to_dict(txn)
+            rows = txn.fetchall()
            if not rows:
                return 0

-            min_stream_id = rows[-1]["stream_ordering"]
+            min_stream_id = rows[-1][0]

             event_search_rows = []
-            for row in rows:
+            for (
+                stream_ordering,
+                event_id,
+                room_id,
+                etype,
+                json,
+                origin_server_ts,
+            ) in rows:
                 try:
-                    event_id = row["event_id"]
-                    room_id = row["room_id"]
-                    etype = row["type"]
-                    stream_ordering = row["stream_ordering"]
-                    origin_server_ts = row["origin_server_ts"]
-                    try:
-                        event_json = db_to_json(row["json"])
+                    event_json = db_to_json(json)
                     content = event_json["content"]
                 except Exception:
                     continue
@@ -20,10 +20,12 @@ from typing import (
     Collection,
     Dict,
     Iterable,
+    List,
     Mapping,
     Optional,
     Set,
     Tuple,
+    cast,
 )

 import attr
@@ -388,16 +390,19 @@ class StateGroupWorkerStore(EventsWorkerStore, SQLBaseStore):
         Raises:
             RuntimeError if the state is unknown at any of the given events
         """
-        rows = await self.db_pool.simple_select_many_batch(
-            table="event_to_state_groups",
-            column="event_id",
-            iterable=event_ids,
-            keyvalues={},
-            retcols=("event_id", "state_group"),
-            desc="_get_state_group_for_events",
+        rows = cast(
+            List[Tuple[str, int]],
+            await self.db_pool.simple_select_many_batch(
+                table="event_to_state_groups",
+                column="event_id",
+                iterable=event_ids,
+                keyvalues={},
+                retcols=("event_id", "state_group"),
+                desc="_get_state_group_for_events",
+            ),
         )

-        res = {row["event_id"]: row["state_group"] for row in rows}
+        res = dict(rows)
         for e in event_ids:
             if e not in res:
                 raise RuntimeError("No state group for unknown or outlier event %s" % e)
@@ -415,16 +420,19 @@ class StateGroupWorkerStore(EventsWorkerStore, SQLBaseStore):
             The subset of state groups that are referenced.
         """

-        rows = await self.db_pool.simple_select_many_batch(
-            table="event_to_state_groups",
-            column="state_group",
-            iterable=state_groups,
-            keyvalues={},
-            retcols=("DISTINCT state_group",),
-            desc="get_referenced_state_groups",
+        rows = cast(
+            List[Tuple[int]],
+            await self.db_pool.simple_select_many_batch(
+                table="event_to_state_groups",
+                column="state_group",
+                iterable=state_groups,
+                keyvalues={},
+                retcols=("DISTINCT state_group",),
+                desc="get_referenced_state_groups",
+            ),
        )

-        return {row["state_group"] for row in rows}
+        return {row[0] for row in rows}

     async def update_state_for_partial_state_event(
         self,
@@ -624,16 +632,22 @@ class MainStateBackgroundUpdateStore(RoomMemberWorkerStore):
             # potentially stale, since there may have been a period where the
             # server didn't share a room with the remote user and therefore may
             # have missed any device updates.
-            rows = self.db_pool.simple_select_many_txn(
-                txn,
-                table="current_state_events",
-                column="room_id",
-                iterable=to_delete,
-                keyvalues={"type": EventTypes.Member, "membership": Membership.JOIN},
-                retcols=("state_key",),
+            rows = cast(
+                List[Tuple[str]],
+                self.db_pool.simple_select_many_txn(
+                    txn,
+                    table="current_state_events",
+                    column="room_id",
+                    iterable=to_delete,
+                    keyvalues={
+                        "type": EventTypes.Member,
+                        "membership": Membership.JOIN,
+                    },
+                    retcols=("state_key",),
+                ),
             )

-            potentially_left_users = {row["state_key"] for row in rows}
+            potentially_left_users = {row[0] for row in rows}

             # Now lets actually delete the rooms from the DB.
             self.db_pool.simple_delete_many_txn(
@@ -13,7 +13,9 @@
 # limitations under the License.
 
 import logging
-from typing import Any, Dict, List, Tuple
+from typing import List, Optional, Tuple
+
+import attr
 
 from synapse.storage._base import SQLBaseStore
 from synapse.storage.database import LoggingTransaction

@@ -22,6 +24,20 @@ from synapse.util.caches.stream_change_cache import StreamChangeCache
 logger = logging.getLogger(__name__)
 
 
+@attr.s(slots=True, frozen=True, auto_attribs=True)
+class StateDelta:
+    stream_id: int
+    room_id: str
+    event_type: str
+    state_key: str
+
+    event_id: Optional[str]
+    """new event_id for this state key. None if the state has been deleted."""
+
+    prev_event_id: Optional[str]
+    """previous event_id for this state key. None if it's new state."""
+
+
 class StateDeltasStore(SQLBaseStore):
     # This class must be mixed in with a child class which provides the following
     # attribute. TODO: can we get static analysis to enforce this?

@@ -29,31 +45,21 @@ class StateDeltasStore(SQLBaseStore):
 
     async def get_partial_current_state_deltas(
         self, prev_stream_id: int, max_stream_id: int
-    ) -> Tuple[int, List[Dict[str, Any]]]:
+    ) -> Tuple[int, List[StateDelta]]:
         """Fetch a list of room state changes since the given stream id
 
-        Each entry in the result contains the following fields:
-        - stream_id (int)
-        - room_id (str)
-        - type (str): event type
-        - state_key (str):
-        - event_id (str|None): new event_id for this state key. None if the
-          state has been deleted.
-        - prev_event_id (str|None): previous event_id for this state key. None
-          if it's new state.
-
         This may be the partial state if we're lazy joining the room.
 
         Args:
             prev_stream_id: point to get changes since (exclusive)
             max_stream_id: the point that we know has been correctly persisted
               - ie, an upper limit to return changes from.
 
         Returns:
             A tuple consisting of:
                - the stream id which these results go up to
                - list of current_state_delta_stream rows. If it is empty, we are
                  up to date.
         """
         prev_stream_id = int(prev_stream_id)
 

@@ -72,7 +78,7 @@ class StateDeltasStore(SQLBaseStore):
 
         def get_current_state_deltas_txn(
             txn: LoggingTransaction,
-        ) -> Tuple[int, List[Dict[str, Any]]]:
+        ) -> Tuple[int, List[StateDelta]]:
             # First we calculate the max stream id that will give us less than
             # N results.
             # We arbitrarily limit to 100 stream_id entries to ensure we don't

@@ -112,7 +118,17 @@ class StateDeltasStore(SQLBaseStore):
             ORDER BY stream_id ASC
             """
             txn.execute(sql, (prev_stream_id, clipped_stream_id))
-            return clipped_stream_id, self.db_pool.cursor_to_dict(txn)
+            return clipped_stream_id, [
+                StateDelta(
+                    stream_id=row[0],
+                    room_id=row[1],
+                    event_type=row[2],
+                    state_key=row[3],
+                    event_id=row[4],
+                    prev_event_id=row[5],
+                )
+                for row in txn.fetchall()
+            ]
 
         return await self.db_pool.runInteraction(
             "get_current_state_deltas", get_current_state_deltas_txn
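
A hedged usage sketch of the new attrs row: the anonymous dicts that `cursor_to_dict` produced become typed attribute access (`row["type"]` becomes `.event_type`, and so on). The helper below is hypothetical; `StateDelta` refers to the class added in the hunk above:

    from typing import Iterable, List

    def membership_changes(deltas: Iterable["StateDelta"]) -> List[str]:
        # Filter typed deltas down to membership events. event_id is None when
        # the state entry was deleted; prev_event_id is None when it is new.
        return [
            f"{d.room_id}/{d.state_key}"
            for d in deltas
            if d.event_type == "m.room.member"
        ]
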
@@ -506,25 +506,28 @@ class StatsStore(StateDeltasStore):
         ) -> Tuple[List[str], Dict[str, int], int, List[str], int]:
             pos = self.get_room_max_stream_ordering()  # type: ignore[attr-defined]
 
-            rows = self.db_pool.simple_select_many_txn(
-                txn,
-                table="current_state_events",
-                column="type",
-                iterable=[
-                    EventTypes.Create,
-                    EventTypes.JoinRules,
-                    EventTypes.RoomHistoryVisibility,
-                    EventTypes.RoomEncryption,
-                    EventTypes.Name,
-                    EventTypes.Topic,
-                    EventTypes.RoomAvatar,
-                    EventTypes.CanonicalAlias,
-                ],
-                keyvalues={"room_id": room_id, "state_key": ""},
-                retcols=["event_id"],
+            rows = cast(
+                List[Tuple[str]],
+                self.db_pool.simple_select_many_txn(
+                    txn,
+                    table="current_state_events",
+                    column="type",
+                    iterable=[
+                        EventTypes.Create,
+                        EventTypes.JoinRules,
+                        EventTypes.RoomHistoryVisibility,
+                        EventTypes.RoomEncryption,
+                        EventTypes.Name,
+                        EventTypes.Topic,
+                        EventTypes.RoomAvatar,
+                        EventTypes.CanonicalAlias,
+                    ],
+                    keyvalues={"room_id": room_id, "state_key": ""},
+                    retcols=["event_id"],
+                ),
             )
 
-            event_ids = cast(List[str], [row["event_id"] for row in rows])
+            event_ids = [row[0] for row in rows]
 
             txn.execute(
                 """
@@ -266,7 +266,7 @@ def generate_next_token(
         # when we are going backwards so we subtract one from the
         # stream part.
         last_stream_ordering -= 1
-    return RoomStreamToken(last_topo_ordering, last_stream_ordering)
+    return RoomStreamToken(topological=last_topo_ordering, stream=last_stream_ordering)
 
 
 def _make_generic_sql_bound(

@@ -558,7 +558,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
             if p > min_pos
         }
 
-        return RoomStreamToken(None, min_pos, immutabledict(positions))
+        return RoomStreamToken(stream=min_pos, instance_map=immutabledict(positions))
 
     async def get_room_events_stream_for_rooms(
         self,

@@ -708,7 +708,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
             ret.reverse()
 
             if rows:
-                key = RoomStreamToken(None, min(r.stream_ordering for r in rows))
+                key = RoomStreamToken(stream=min(r.stream_ordering for r in rows))
             else:
                 # Assume we didn't get anything because there was nothing to
                 # get.

@@ -969,7 +969,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
         topo = await self.db_pool.runInteraction(
             "_get_max_topological_txn", self._get_max_topological_txn, room_id
         )
-        return RoomStreamToken(topo, stream_ordering)
+        return RoomStreamToken(topological=topo, stream=stream_ordering)
 
     @overload
     def get_stream_id_for_event_txn(

@@ -1033,7 +1033,9 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
             retcols=("stream_ordering", "topological_ordering"),
             desc="get_topological_token_for_event",
         )
-        return RoomStreamToken(row["topological_ordering"], row["stream_ordering"])
+        return RoomStreamToken(
+            topological=row["topological_ordering"], stream=row["stream_ordering"]
+        )
 
     async def get_current_topological_token(self, room_id: str, stream_key: int) -> int:
         """Gets the topological token in a room after or at the given stream

@@ -1114,8 +1116,8 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
             else:
                 topo = None
             internal = event.internal_metadata
-            internal.before = RoomStreamToken(topo, stream - 1)
-            internal.after = RoomStreamToken(topo, stream)
+            internal.before = RoomStreamToken(topological=topo, stream=stream - 1)
+            internal.after = RoomStreamToken(topological=topo, stream=stream)
             internal.order = (int(topo) if topo else 0, int(stream))
 
     async def get_events_around(

@@ -1191,11 +1193,13 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
         # Paginating backwards includes the event at the token, but paginating
         # forward doesn't.
         before_token = RoomStreamToken(
-            results["topological_ordering"] - 1, results["stream_ordering"]
+            topological=results["topological_ordering"] - 1,
+            stream=results["stream_ordering"],
         )
 
         after_token = RoomStreamToken(
-            results["topological_ordering"], results["stream_ordering"]
+            topological=results["topological_ordering"],
+            stream=results["stream_ordering"],
         )
 
         rows, start_token = self._paginate_room_events_txn(
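
Every call site above moves from positional to keyword-only construction. A hedged sketch of why (this assumes the kw_only fields introduced later in this branch's `synapse.types`; values are illustrative):

    from synapse.types import RoomStreamToken

    # A live token only has a stream position; it serializes as "s42".
    live = RoomStreamToken(stream=42)

    # A historical token also carries a topological ordering: "t7-42".
    historical = RoomStreamToken(topological=7, stream=42)

    # With two adjacent integer fields, RoomStreamToken(7, 42) used to accept
    # a silently transposed (stream, topological) pair; it now raises TypeError.
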
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from typing import TYPE_CHECKING, Any, Dict, List, Optional
+from typing import TYPE_CHECKING, Any, List, Optional, Tuple, cast
 
 from synapse.storage._base import SQLBaseStore, db_to_json
 from synapse.storage.database import (

@@ -27,6 +27,8 @@ from synapse.util import json_encoder
 if TYPE_CHECKING:
     from synapse.server import HomeServer
 
+ScheduledTaskRow = Tuple[str, str, str, int, str, str, str, str]
+
 
 class TaskSchedulerWorkerStore(SQLBaseStore):
     def __init__(

@@ -38,13 +40,18 @@ class TaskSchedulerWorkerStore(SQLBaseStore):
         super().__init__(database, db_conn, hs)
 
     @staticmethod
-    def _convert_row_to_task(row: Dict[str, Any]) -> ScheduledTask:
-        row["status"] = TaskStatus(row["status"])
-        if row["params"] is not None:
-            row["params"] = db_to_json(row["params"])
-        if row["result"] is not None:
-            row["result"] = db_to_json(row["result"])
-        return ScheduledTask(**row)
+    def _convert_row_to_task(row: ScheduledTaskRow) -> ScheduledTask:
+        task_id, action, status, timestamp, resource_id, params, result, error = row
+        return ScheduledTask(
+            id=task_id,
+            action=action,
+            status=TaskStatus(status),
+            timestamp=timestamp,
+            resource_id=resource_id,
+            params=db_to_json(params) if params is not None else None,
+            result=db_to_json(result) if result is not None else None,
+            error=error,
+        )
 
     async def get_scheduled_tasks(
         self,

@@ -68,7 +75,7 @@ class TaskSchedulerWorkerStore(SQLBaseStore):
         Returns: a list of `ScheduledTask`, ordered by increasing timestamps
         """
 
-        def get_scheduled_tasks_txn(txn: LoggingTransaction) -> List[Dict[str, Any]]:
+        def get_scheduled_tasks_txn(txn: LoggingTransaction) -> List[ScheduledTaskRow]:
             clauses: List[str] = []
             args: List[Any] = []
             if resource_id:

@@ -101,7 +108,7 @@ class TaskSchedulerWorkerStore(SQLBaseStore):
             args.append(limit)
 
             txn.execute(sql, args)
-            return self.db_pool.cursor_to_dict(txn)
+            return cast(List[ScheduledTaskRow], txn.fetchall())
 
         rows = await self.db_pool.runInteraction(
             "get_scheduled_tasks", get_scheduled_tasks_txn

@@ -193,7 +200,22 @@ class TaskSchedulerWorkerStore(SQLBaseStore):
             desc="get_scheduled_task",
         )
 
-        return TaskSchedulerWorkerStore._convert_row_to_task(row) if row else None
+        return (
+            TaskSchedulerWorkerStore._convert_row_to_task(
+                (
+                    row["id"],
+                    row["action"],
+                    row["status"],
+                    row["timestamp"],
+                    row["resource_id"],
+                    row["params"],
+                    row["result"],
+                    row["error"],
+                )
+            )
+            if row
+            else None
+        )
 
     async def delete_scheduled_task(self, id: str) -> None:
         """Delete a specific task from its id.
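
Naming the raw row shape once (`ScheduledTaskRow`) keeps positional indices from spreading through the file: the tuple is unpacked in exactly one place, `_convert_row_to_task`. A self-contained sketch of the same technique with a hypothetical two-column row:

    from typing import List, Tuple, cast

    import attr

    # Hypothetical row shape: (user_id, is_admin) as it comes off the cursor.
    UserRow = Tuple[str, int]

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class User:
        user_id: str
        is_admin: bool

    def convert(row: UserRow) -> User:
        # The only place positional indices appear; everything downstream
        # works with named attributes.
        user_id, is_admin = row
        return User(user_id=user_id, is_admin=bool(is_admin))

    rows = cast(List[UserRow], [("@alice:example.org", 1)])  # e.g. txn.fetchall()
    print([convert(r) for r in rows])
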
@@ -211,18 +211,28 @@ class TransactionWorkerStore(CacheInvalidationWorkerStore):
     async def get_destination_retry_timings_batch(
         self, destinations: StrCollection
     ) -> Mapping[str, Optional[DestinationRetryTimings]]:
-        rows = await self.db_pool.simple_select_many_batch(
-            table="destinations",
-            iterable=destinations,
-            column="destination",
-            retcols=("destination", "failure_ts", "retry_last_ts", "retry_interval"),
-            desc="get_destination_retry_timings_batch",
+        rows = cast(
+            List[Tuple[str, Optional[int], Optional[int], Optional[int]]],
+            await self.db_pool.simple_select_many_batch(
+                table="destinations",
+                iterable=destinations,
+                column="destination",
+                retcols=(
+                    "destination",
+                    "failure_ts",
+                    "retry_last_ts",
+                    "retry_interval",
+                ),
+                desc="get_destination_retry_timings_batch",
+            ),
         )
 
         return {
-            row.pop("destination"): DestinationRetryTimings(**row)
-            for row in rows
-            if row["retry_last_ts"] and row["failure_ts"] and row["retry_interval"]
+            destination: DestinationRetryTimings(
+                failure_ts, retry_last_ts, retry_interval
+            )
+            for destination, failure_ts, retry_last_ts, retry_interval in rows
+            if retry_last_ts and failure_ts and retry_interval
         }
 
     async def set_destination_retry_timings(

@@ -526,7 +536,7 @@ class TransactionWorkerStore(CacheInvalidationWorkerStore):
         start: int,
         limit: int,
         direction: Direction = Direction.FORWARDS,
-    ) -> Tuple[List[JsonDict], int]:
+    ) -> Tuple[List[Tuple[str, int]], int]:
         """Function to retrieve a paginated list of destination's rooms.
         This will return a json list of rooms and the
         total number of rooms.

@@ -537,12 +547,14 @@ class TransactionWorkerStore(CacheInvalidationWorkerStore):
             limit: number of rows to retrieve
             direction: sort ascending or descending by room_id
         Returns:
-            A tuple of a dict of rooms and a count of total rooms.
+            A tuple of a list of room tuples and a count of total rooms.
+
+            Each room tuple is room_id, stream_ordering.
         """
 
         def get_destination_rooms_paginate_txn(
             txn: LoggingTransaction,
-        ) -> Tuple[List[JsonDict], int]:
+        ) -> Tuple[List[Tuple[str, int]], int]:
             if direction == Direction.BACKWARDS:
                 order = "DESC"
             else:

@@ -556,14 +568,17 @@ class TransactionWorkerStore(CacheInvalidationWorkerStore):
             txn.execute(sql, [destination])
             count = cast(Tuple[int], txn.fetchone())[0]
 
-            rooms = self.db_pool.simple_select_list_paginate_txn(
-                txn=txn,
-                table="destination_rooms",
-                orderby="room_id",
-                start=start,
-                limit=limit,
-                retcols=("room_id", "stream_ordering"),
-                order_direction=order,
+            rooms = cast(
+                List[Tuple[str, int]],
+                self.db_pool.simple_select_list_paginate_txn(
+                    txn=txn,
+                    table="destination_rooms",
+                    orderby="room_id",
+                    start=start,
+                    limit=limit,
+                    retcols=("room_id", "stream_ordering"),
+                    order_direction=order,
+                ),
             )
             return rooms, count
@@ -337,13 +337,16 @@ class UIAuthWorkerStore(SQLBaseStore):
 
         # If a registration token was used, decrement the pending counter
         # before deleting the session.
-        rows = self.db_pool.simple_select_many_txn(
-            txn,
-            table="ui_auth_sessions_credentials",
-            column="session_id",
-            iterable=session_ids,
-            keyvalues={"stage_type": LoginType.REGISTRATION_TOKEN},
-            retcols=["result"],
+        rows = cast(
+            List[Tuple[str]],
+            self.db_pool.simple_select_many_txn(
+                txn,
+                table="ui_auth_sessions_credentials",
+                column="session_id",
+                iterable=session_ids,
+                keyvalues={"stage_type": LoginType.REGISTRATION_TOKEN},
+                retcols=["result"],
+            ),
         )
 
         # Get the tokens used and how much pending needs to be decremented by.

@@ -353,23 +356,25 @@ class UIAuthWorkerStore(SQLBaseStore):
             # registration token stage for that session will be True.
             # If a token was used to authenticate, but registration was
             # never completed, the result will be the token used.
-            token = db_to_json(r["result"])
+            token = db_to_json(r[0])
             if isinstance(token, str):
                 token_counts[token] = token_counts.get(token, 0) + 1
 
         # Update the `pending` counters.
         if len(token_counts) > 0:
-            token_rows = self.db_pool.simple_select_many_txn(
-                txn,
-                table="registration_tokens",
-                column="token",
-                iterable=list(token_counts.keys()),
-                keyvalues={},
-                retcols=["token", "pending"],
+            token_rows = cast(
+                List[Tuple[str, int]],
+                self.db_pool.simple_select_many_txn(
+                    txn,
+                    table="registration_tokens",
+                    column="token",
+                    iterable=list(token_counts.keys()),
+                    keyvalues={},
+                    retcols=["token", "pending"],
+                ),
             )
-            for token_row in token_rows:
-                token = token_row["token"]
-                new_pending = token_row["pending"] - token_counts[token]
+            for token, pending in token_rows:
+                new_pending = pending - token_counts[token]
                 self.db_pool.simple_update_one_txn(
                     txn,
                     table="registration_tokens",
@@ -410,25 +410,24 @@ class UserDirectoryBackgroundUpdateStore(StateDeltasStore):
             )
 
             # Next fetch their profiles. Note that not all users have profiles.
-            profile_rows = self.db_pool.simple_select_many_txn(
-                txn,
-                table="profiles",
-                column="full_user_id",
-                iterable=list(users_to_insert),
-                retcols=(
-                    "full_user_id",
-                    "displayname",
-                    "avatar_url",
+            profile_rows = cast(
+                List[Tuple[str, Optional[str], Optional[str]]],
+                self.db_pool.simple_select_many_txn(
+                    txn,
+                    table="profiles",
+                    column="full_user_id",
+                    iterable=list(users_to_insert),
+                    retcols=(
+                        "full_user_id",
+                        "displayname",
+                        "avatar_url",
+                    ),
+                    keyvalues={},
                 ),
-                keyvalues={},
             )
             profiles = {
-                row["full_user_id"]: _UserDirProfile(
-                    row["full_user_id"],
-                    row["displayname"],
-                    row["avatar_url"],
-                )
-                for row in profile_rows
+                full_user_id: _UserDirProfile(full_user_id, displayname, avatar_url)
+                for full_user_id, displayname, avatar_url in profile_rows
             }
 
             profiles_to_insert = [

@@ -517,18 +516,21 @@ class UserDirectoryBackgroundUpdateStore(StateDeltasStore):
                 and not self.get_if_app_services_interested_in_user(user)  # type: ignore[attr-defined]
             ]
 
-            rows = self.db_pool.simple_select_many_txn(
-                txn,
-                table="users",
-                column="name",
-                iterable=users,
-                keyvalues={
-                    "deactivated": 0,
-                },
-                retcols=("name", "user_type"),
+            rows = cast(
+                List[Tuple[str, Optional[str]]],
+                self.db_pool.simple_select_many_txn(
+                    txn,
+                    table="users",
+                    column="name",
+                    iterable=users,
+                    keyvalues={
+                        "deactivated": 0,
+                    },
+                    retcols=("name", "user_type"),
+                ),
             )
 
-            return [row["name"] for row in rows if row["user_type"] != UserTypes.SUPPORT]
+            return [name for name, user_type in rows if user_type != UserTypes.SUPPORT]
 
     async def is_room_world_readable_or_publicly_joinable(self, room_id: str) -> bool:
         """Check if the room is either world_readable or publicly joinable"""
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from typing import Iterable, Mapping
+from typing import Iterable, List, Mapping, Tuple, cast
 
 from synapse.storage.database import LoggingTransaction
 from synapse.storage.databases.main import CacheInvalidationWorkerStore

@@ -50,14 +50,17 @@ class UserErasureWorkerStore(CacheInvalidationWorkerStore):
         Returns:
             for each user, whether the user has requested erasure.
         """
-        rows = await self.db_pool.simple_select_many_batch(
-            table="erased_users",
-            column="user_id",
-            iterable=user_ids,
-            retcols=("user_id",),
-            desc="are_users_erased",
+        rows = cast(
+            List[Tuple[str]],
+            await self.db_pool.simple_select_many_batch(
+                table="erased_users",
+                column="user_id",
+                iterable=user_ids,
+                retcols=("user_id",),
+                desc="are_users_erased",
+            ),
         )
-        erased_users = {row["user_id"] for row in rows}
+        erased_users = {row[0] for row in rows}
 
         return {u: u in erased_users for u in user_ids}
@@ -13,7 +13,17 @@
 # limitations under the License.
 
 import logging
-from typing import TYPE_CHECKING, Collection, Dict, Iterable, List, Optional, Set, Tuple
+from typing import (
+    TYPE_CHECKING,
+    Collection,
+    Dict,
+    Iterable,
+    List,
+    Optional,
+    Set,
+    Tuple,
+    cast,
+)
 
 import attr

@@ -730,19 +740,22 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
             "[purge] found %i state groups to delete", len(state_groups_to_delete)
         )
 
-        rows = self.db_pool.simple_select_many_txn(
-            txn,
-            table="state_group_edges",
-            column="prev_state_group",
-            iterable=state_groups_to_delete,
-            keyvalues={},
-            retcols=("state_group",),
+        rows = cast(
+            List[Tuple[int]],
+            self.db_pool.simple_select_many_txn(
+                txn,
+                table="state_group_edges",
+                column="prev_state_group",
+                iterable=state_groups_to_delete,
+                keyvalues={},
+                retcols=("state_group",),
+            ),
         )
 
         remaining_state_groups = {
-            row["state_group"]
-            for row in rows
-            if row["state_group"] not in state_groups_to_delete
+            state_group
+            for state_group, in rows
+            if state_group not in state_groups_to_delete
         }
 
         logger.info(

@@ -799,16 +812,19 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
             A mapping from state group to previous state group.
         """
 
-        rows = await self.db_pool.simple_select_many_batch(
-            table="state_group_edges",
-            column="prev_state_group",
-            iterable=state_groups,
-            keyvalues={},
-            retcols=("prev_state_group", "state_group"),
-            desc="get_previous_state_groups",
+        rows = cast(
+            List[Tuple[int, int]],
+            await self.db_pool.simple_select_many_batch(
+                table="state_group_edges",
+                column="prev_state_group",
+                iterable=state_groups,
+                keyvalues={},
+                retcols=("state_group", "prev_state_group"),
+                desc="get_previous_state_groups",
+            ),
        )
 
-        return {row["state_group"]: row["prev_state_group"] for row in rows}
+        return dict(rows)
 
     async def purge_room_state(
         self, room_id: str, state_groups_to_delete: Collection[int]
@@ -0,0 +1,20 @@
+/* Copyright 2023 The Matrix.org Foundation C.I.C
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
+  (8204, 'e2e_room_keys_index_room_id', '{}');
+
+INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
+  (8204, 'room_account_data_index_room_id', '{}');
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from typing import TYPE_CHECKING, Iterator, Tuple
+from typing import TYPE_CHECKING, Sequence, Tuple
 
 import attr

@@ -23,7 +23,7 @@ from synapse.handlers.room import RoomEventSource
 from synapse.handlers.typing import TypingNotificationEventSource
 from synapse.logging.opentracing import trace
 from synapse.streams import EventSource
-from synapse.types import StreamToken
+from synapse.types import StreamKeyType, StreamToken
 
 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -37,9 +37,14 @@ class _EventSourcesInner:
     receipt: ReceiptEventSource
     account_data: AccountDataEventSource
 
-    def get_sources(self) -> Iterator[Tuple[str, EventSource]]:
-        for attribute in attr.fields(_EventSourcesInner):
-            yield attribute.name, getattr(self, attribute.name)
+    def get_sources(self) -> Sequence[Tuple[StreamKeyType, EventSource]]:
+        return [
+            (StreamKeyType.ROOM, self.room),
+            (StreamKeyType.PRESENCE, self.presence),
+            (StreamKeyType.TYPING, self.typing),
+            (StreamKeyType.RECEIPT, self.receipt),
+            (StreamKeyType.ACCOUNT_DATA, self.account_data),
+        ]
 
 
 class EventSources:
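
Swapping the `attr.fields()` reflection loop for an explicit list trades a little repetition for static types: the key is now an enum member and the pairing is visible to the type checker. A self-contained sketch of the design choice, with stand-in names rather than the real classes:

    from enum import Enum
    from typing import Sequence, Tuple

    class Key(Enum):
        ROOM = "room_key"
        PRESENCE = "presence_key"

    class Sources:
        def __init__(self) -> None:
            self.room = "room-source"          # stand-in for RoomEventSource
            self.presence = "presence-source"  # stand-in for PresenceEventSource

        def get_sources(self) -> Sequence[Tuple[Key, str]]:
            # Explicit pairs: adding a stream without listing it here is a
            # visible omission, rather than being silently picked up by
            # reflection under a bare str key.
            return [(Key.ROOM, self.room), (Key.PRESENCE, self.presence)]

    for key, source in Sources().get_sources():
        print(key, source)
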
@@ -22,8 +22,8 @@ from typing import (
     Any,
     ClassVar,
     Dict,
-    Final,
     List,
+    Literal,
     Mapping,
     Match,
     MutableMapping,

@@ -34,6 +34,7 @@ from typing import (
     Type,
     TypeVar,
     Union,
+    overload,
 )
 
 import attr

@@ -60,6 +61,8 @@ from synapse.util.cancellation import cancellable
 from synapse.util.stringutils import parse_and_validate_server_name
 
 if TYPE_CHECKING:
+    from typing_extensions import Self
+
     from synapse.appservice.api import ApplicationService
     from synapse.storage.databases.main import DataStore, PurgeEventsStore
     from synapse.storage.databases.main.appservice import ApplicationServiceWorkerStore
@@ -436,7 +439,78 @@ def map_username_to_mxid_localpart(
 
 
 @attr.s(frozen=True, slots=True, order=False)
-class RoomStreamToken:
+class AbstractMultiWriterStreamToken(metaclass=abc.ABCMeta):
+    """An abstract stream token class for streams that support multiple
+    writers.
+
+    This works by keeping track of the stream position of each writer,
+    represented by a default `stream` attribute and a map of instance name to
+    stream position of any writers that are ahead of the default stream
+    position.
+    """
+
+    stream: int = attr.ib(validator=attr.validators.instance_of(int), kw_only=True)
+
+    instance_map: "immutabledict[str, int]" = attr.ib(
+        factory=immutabledict,
+        validator=attr.validators.deep_mapping(
+            key_validator=attr.validators.instance_of(str),
+            value_validator=attr.validators.instance_of(int),
+            mapping_validator=attr.validators.instance_of(immutabledict),
+        ),
+        kw_only=True,
+    )
+
+    @classmethod
+    @abc.abstractmethod
+    async def parse(cls, store: "DataStore", string: str) -> "Self":
+        """Parse the string representation of the token."""
+        ...
+
+    @abc.abstractmethod
+    async def to_string(self, store: "DataStore") -> str:
+        """Serialize the token into its string representation."""
+        ...
+
+    def copy_and_advance(self, other: "Self") -> "Self":
+        """Return a new token such that if an event is after both this token and
+        the other token, then it's after the returned token too.
+        """
+
+        max_stream = max(self.stream, other.stream)
+
+        instance_map = {
+            instance: max(
+                self.instance_map.get(instance, self.stream),
+                other.instance_map.get(instance, other.stream),
+            )
+            for instance in set(self.instance_map).union(other.instance_map)
+        }
+
+        return attr.evolve(
+            self, stream=max_stream, instance_map=immutabledict(instance_map)
+        )
+
+    def get_max_stream_pos(self) -> int:
+        """Get the maximum stream position referenced in this token.
+
+        The corresponding "min" position is, by definition, just `self.stream`.
+
+        This is used to handle tokens that have a non-empty `instance_map`, and
+        so reference stream positions after the `self.stream` position.
+        """
+        return max(self.instance_map.values(), default=self.stream)
+
+    def get_stream_pos_for_instance(self, instance_name: str) -> int:
+        """Get the stream position that the given writer was at, as of this token."""
+
+        # If we don't have an entry for the instance we can assume that it was
+        # at `self.stream`.
+        return self.instance_map.get(instance_name, self.stream)
+
+
+@attr.s(frozen=True, slots=True, order=False)
+class RoomStreamToken(AbstractMultiWriterStreamToken):
     """Tokens are positions between events. The token "s1" comes after event 1.
 
             s0    s1
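
A worked example of the `copy_and_advance` merge rule, re-implemented standalone for illustration (the function and values below are not part of the codebase): each writer's position defaults to the token's base `stream`, and the merge takes the per-writer maximum.

    from immutabledict import immutabledict

    def merge(stream_a: int, map_a: dict, stream_b: int, map_b: dict):
        # Mirrors copy_and_advance(): the base stream advances to the max, and
        # each named writer advances to the max of its two defaulted positions.
        max_stream = max(stream_a, stream_b)
        merged = {
            inst: max(map_a.get(inst, stream_a), map_b.get(inst, stream_b))
            for inst in sorted(set(map_a) | set(map_b))
        }
        return max_stream, immutabledict(merged)

    # Token A: base 5, writer2 ahead at 8. Token B: base 6, writer3 ahead at 9.
    print(merge(5, {"writer2": 8}, 6, {"writer3": 9}))
    # -> (6, immutabledict({'writer2': 8, 'writer3': 9}))
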
@@ -513,16 +587,8 @@ class RoomStreamToken:
 
     topological: Optional[int] = attr.ib(
         validator=attr.validators.optional(attr.validators.instance_of(int)),
+        kw_only=True,
+        default=None,
     )
-    stream: int = attr.ib(validator=attr.validators.instance_of(int))
-
-    instance_map: "immutabledict[str, int]" = attr.ib(
-        factory=immutabledict,
-        validator=attr.validators.deep_mapping(
-            key_validator=attr.validators.instance_of(str),
-            value_validator=attr.validators.instance_of(int),
-            mapping_validator=attr.validators.instance_of(immutabledict),
-        ),
-        kw_only=True,
-    )
 
     def __attrs_post_init__(self) -> None:

@@ -582,17 +648,7 @@ class RoomStreamToken:
         if self.topological or other.topological:
             raise Exception("Can't advance topological tokens")
 
-        max_stream = max(self.stream, other.stream)
-
-        instance_map = {
-            instance: max(
-                self.instance_map.get(instance, self.stream),
-                other.instance_map.get(instance, other.stream),
-            )
-            for instance in set(self.instance_map).union(other.instance_map)
-        }
-
-        return RoomStreamToken(None, max_stream, immutabledict(instance_map))
+        return super().copy_and_advance(other)
 
     def as_historical_tuple(self) -> Tuple[int, int]:
         """Returns a tuple of `(topological, stream)` for historical tokens.

@@ -618,16 +674,6 @@ class RoomStreamToken:
         # at `self.stream`.
         return self.instance_map.get(instance_name, self.stream)
 
-    def get_max_stream_pos(self) -> int:
-        """Get the maximum stream position referenced in this token.
-
-        The corresponding "min" position is, by definition just `self.stream`.
-
-        This is used to handle tokens that have non-empty `instance_map`, and so
-        reference stream positions after the `self.stream` position.
-        """
-        return max(self.instance_map.values(), default=self.stream)
-
     async def to_string(self, store: "DataStore") -> str:
         if self.topological is not None:
             return "t%d-%d" % (self.topological, self.stream)

@@ -649,20 +695,20 @@ class RoomStreamToken:
         return "s%d" % (self.stream,)
 
 
-class StreamKeyType:
+class StreamKeyType(Enum):
     """Known stream types.
 
     A stream is a list of entities ordered by an incrementing "stream token".
     """
 
-    ROOM: Final = "room_key"
-    PRESENCE: Final = "presence_key"
-    TYPING: Final = "typing_key"
-    RECEIPT: Final = "receipt_key"
-    ACCOUNT_DATA: Final = "account_data_key"
-    PUSH_RULES: Final = "push_rules_key"
-    TO_DEVICE: Final = "to_device_key"
-    DEVICE_LIST: Final = "device_list_key"
+    ROOM = "room_key"
+    PRESENCE = "presence_key"
+    TYPING = "typing_key"
+    RECEIPT = "receipt_key"
+    ACCOUNT_DATA = "account_data_key"
+    PUSH_RULES = "push_rules_key"
+    TO_DEVICE = "to_device_key"
+    DEVICE_LIST = "device_list_key"
     UN_PARTIAL_STATED_ROOMS = "un_partial_stated_rooms_key"

@@ -784,7 +830,7 @@ class StreamToken:
     def room_stream_id(self) -> int:
         return self.room_key.stream
 
-    def copy_and_advance(self, key: str, new_value: Any) -> "StreamToken":
+    def copy_and_advance(self, key: StreamKeyType, new_value: Any) -> "StreamToken":
         """Advance the given key in the token to a new value if and only if the
         new value is after the old value.
 

@@ -797,35 +843,68 @@ class StreamToken:
             return new_token
 
         new_token = self.copy_and_replace(key, new_value)
-        new_id = int(getattr(new_token, key))
-        old_id = int(getattr(self, key))
+        new_id = new_token.get_field(key)
+        old_id = self.get_field(key)
 
         if old_id < new_id:
             return new_token
         else:
             return self
 
-    def copy_and_replace(self, key: str, new_value: Any) -> "StreamToken":
-        return attr.evolve(self, **{key: new_value})
+    def copy_and_replace(self, key: StreamKeyType, new_value: Any) -> "StreamToken":
+        return attr.evolve(self, **{key.value: new_value})
+
+    @overload
+    def get_field(self, key: Literal[StreamKeyType.ROOM]) -> RoomStreamToken:
+        ...
+
+    @overload
+    def get_field(
+        self,
+        key: Literal[
+            StreamKeyType.ACCOUNT_DATA,
+            StreamKeyType.DEVICE_LIST,
+            StreamKeyType.PRESENCE,
+            StreamKeyType.PUSH_RULES,
+            StreamKeyType.RECEIPT,
+            StreamKeyType.TO_DEVICE,
+            StreamKeyType.TYPING,
+            StreamKeyType.UN_PARTIAL_STATED_ROOMS,
+        ],
+    ) -> int:
+        ...
+
+    @overload
+    def get_field(self, key: StreamKeyType) -> Union[int, RoomStreamToken]:
+        ...
+
+    def get_field(self, key: StreamKeyType) -> Union[int, RoomStreamToken]:
+        """Returns the stream ID for the given key."""
+        return getattr(self, key.value)
 
 
-StreamToken.START = StreamToken(RoomStreamToken(None, 0), 0, 0, 0, 0, 0, 0, 0, 0, 0)
+StreamToken.START = StreamToken(RoomStreamToken(stream=0), 0, 0, 0, 0, 0, 0, 0, 0, 0)
 
 
 @attr.s(slots=True, frozen=True, auto_attribs=True)
-class PersistedEventPosition:
+class PersistedPosition:
+    """Position of a newly persisted row with instance that persisted it."""
+
+    instance_name: str
+    stream: int
+
+    def persisted_after(self, token: AbstractMultiWriterStreamToken) -> bool:
+        return token.get_stream_pos_for_instance(self.instance_name) < self.stream
+
+
+@attr.s(slots=True, frozen=True, auto_attribs=True)
+class PersistedEventPosition(PersistedPosition):
     """Position of a newly persisted event with instance that persisted it.
 
     This can be used to test whether the event is persisted before or after a
     RoomStreamToken.
     """
 
-    instance_name: str
-    stream: int
-
-    def persisted_after(self, token: RoomStreamToken) -> bool:
-        return token.get_stream_pos_for_instance(self.instance_name) < self.stream
-
     def to_room_stream_token(self) -> RoomStreamToken:
         """Converts the position to a room stream token such that events
         persisted in the same room after this position will be after the

@@ -836,7 +915,7 @@ class PersistedEventPosition:
         """
         # Doing the naive thing satisfies the desired properties described in
         # the docstring.
-        return RoomStreamToken(None, self.stream)
+        return RoomStreamToken(stream=self.stream)
 
 
 @attr.s(slots=True, frozen=True, auto_attribs=True)
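
The `getattr(self, key.value)` dispatch is what lets the enum double as an attribute name; the overloads then refine the return type per key for the type checker. A self-contained sketch of the idea, with stand-in names rather than the real `StreamToken`:

    from enum import Enum

    class Key(Enum):
        TYPING = "typing_key"  # the enum value doubles as the attribute name

    class Token:
        def __init__(self, typing_key: int) -> None:
            self.typing_key = typing_key

        def get_field(self, key: Key) -> int:
            # The same lookup StreamToken.get_field() performs.
            return getattr(self, key.value)

    print(Token(7).get_field(Key.TYPING))  # -> 7
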
@@ -170,10 +170,10 @@ class RetryDestinationLimiter:
                 database in milliseconds, or zero if the last request was
                 successful.
             backoff_on_404: Back off if we get a 404
-
             backoff_on_failure: set to False if we should not increase the
                 retry interval on a failure.
-
             notifier: A notifier used to mark servers as up.
             replication_client: A replication client used to mark servers as up.
+            backoff_on_all_error_codes: Whether we should back off on any
+                error code.
         """

@@ -237,6 +237,9 @@ class RetryDestinationLimiter:
         else:
             valid_err_code = False
 
+        # Whether previous requests to the destination had been failing.
+        previously_failing = bool(self.failure_ts)
+
         if success:
             # We connected successfully.
             if not self.retry_interval:

@@ -282,6 +285,9 @@ class RetryDestinationLimiter:
             if self.failure_ts is None:
                 self.failure_ts = retry_last_ts
 
+        # Whether the current request to the destination is failing.
+        currently_failing = bool(self.failure_ts)
+
         async def store_retry_timings() -> None:
             try:
                 await self.store.set_destination_retry_timings(

@@ -291,17 +297,15 @@ class RetryDestinationLimiter:
                     self.retry_interval,
                 )
 
-                if self.notifier:
-                    # Inform the relevant places that the remote server is back up.
-                    self.notifier.notify_remote_server_up(self.destination)
+                # If the server was previously failing, but is no longer.
+                if previously_failing and not currently_failing:
+                    if self.notifier:
+                        # Inform the relevant places that the remote server is back up.
+                        self.notifier.notify_remote_server_up(self.destination)
 
-                if self.replication_client:
-                    # If we're on a worker we try and inform master about this. The
-                    # replication client doesn't hook into the notifier to avoid
-                    # infinite loops where we send a `REMOTE_SERVER_UP` command to
-                    # master, which then echoes it back to us which in turn pokes
-                    # the notifier.
-                    self.replication_client.send_remote_server_up(self.destination)
+                    if self.replication_client:
+                        # Inform other workers that the remote server is up.
+                        self.replication_client.send_remote_server_up(self.destination)
 
             except Exception:
                 logger.exception("Failed to store destination_retry_timings")
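
The effect of the two new booleans is a simple edge trigger: the "server is back up" notifications now fire only on the failing-to-healthy transition, rather than after every successful write of retry timings. A minimal sketch of that predicate:

    def should_notify_server_up(previously_failing: bool, currently_failing: bool) -> bool:
        # Fire only on the edge: the destination was failing before this
        # request and is no longer failing after it.
        return previously_failing and not currently_failing

    assert should_notify_server_up(True, False)       # recovered -> notify
    assert not should_notify_server_up(False, False)  # was never failing -> no-op
    assert not should_notify_server_up(True, True)    # still failing -> no-op
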
@@ -13,15 +13,18 @@
 # limitations under the License.
 
 import sys
+from typing import cast
+
+from synapse.types import ISynapseReactor
 
 try:
     from twisted.internet.epollreactor import EPollReactor as Reactor
 except ImportError:
-    from twisted.internet.pollreactor import PollReactor as Reactor
+    from twisted.internet.pollreactor import PollReactor as Reactor  # type: ignore[assignment]
 from twisted.internet.main import installReactor
 
 
-def make_reactor():
+def make_reactor() -> ISynapseReactor:
     """
     Instantiate and install a Twisted reactor suitable for testing (i.e. not the
     default global one).

@@ -32,4 +35,4 @@ def make_reactor():
     del sys.modules["twisted.internet.reactor"]
     installReactor(reactor)
 
-    return reactor
+    return cast(ISynapseReactor, reactor)