Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes
commit
e808df9e2e
|
@ -7,3 +7,4 @@ root = true
|
|||
[*.py]
|
||||
indent_style = space
|
||||
indent_size = 4
|
||||
max_line_length = 88
|
||||
|
|
|
@ -0,0 +1 @@
|
|||
Add a helpful example bash script to the contrib directory for creating multiple worker configuration files of the same type. Contributed by @villepeh.
|
|
@ -0,0 +1 @@
|
|||
Support temporary experimental return values for spam checker module callbacks.
|
|
@ -0,0 +1 @@
|
|||
Reduce memory consumption when processing incoming events in large rooms.
|
|
@ -0,0 +1 @@
|
|||
Document how the Synapse team does reviews.
|
|
@ -0,0 +1 @@
|
|||
Faster room joins: fix race in recalculation of current room state.
|
|
@ -0,0 +1 @@
|
|||
Add support to `complement.sh` for skipping the docker build.
|
|
@ -0,0 +1 @@
|
|||
Make use of the more robust `get_current_state` in `_get_state_map_for_room` to avoid breakages.
|
|
@ -0,0 +1 @@
|
|||
Check that `auto_vacuum` is disabled when porting a SQLite database to Postgres, as `VACUUM`s must not be performed between runs of the script.
|
|
@ -0,0 +1 @@
|
|||
Fix exception when using experimental [MSC3030](https://github.com/matrix-org/matrix-spec-proposals/pull/3030) `/timestamp_to_event` endpoint to look for remote federated imported events before room creation.
|
|
@ -0,0 +1 @@
|
|||
Remove the obsolete `RoomEventsStoreTestCase`, which has been unused for 8 years. Contributed by @arkamar.
|
|
@ -0,0 +1 @@
|
|||
Bump the version of `lxml` in the matrix.org Docker images and Debian packages from 4.8.0 to 4.9.1.
|
|
@ -0,0 +1 @@
|
|||
Reduce number of queries used to get profile information. Contributed by Nick @ Beeper (@fizzadar).
|
|
@ -0,0 +1 @@
|
|||
Reduce number of events queried during room creation. Contributed by Nick @ Beeper (@fizzadar).
|
|
@ -0,0 +1 @@
|
|||
More aggressively rotate push actions.
|
|
@ -0,0 +1 @@
|
|||
Add documentation for the existing `databases` option in the homeserver configuration manual.
|
|
@ -0,0 +1 @@
|
|||
Reduce memory consumption when processing incoming events in large rooms.
|
|
@ -0,0 +1 @@
|
|||
Fix bug where notification counts would get stuck after a highlighted message. Broke in v1.62.0.
|
|
@ -0,0 +1 @@
|
|||
Fix a long-standing bug where the `synapse_port_db` script could fail to copy rows with negative row ids.
|
|
@ -0,0 +1 @@
|
|||
Add `max_line_length` setting for Python files to the `.editorconfig`. Contributed by @sumnerevans @ Beeper.
|
|
@ -0,0 +1 @@
|
|||
Fix MSC3202-enabled appservices not receiving to-device messages, preventing messages from being decrypted.
|
|
@ -0,0 +1 @@
|
|||
Fix appservices not receiving room-less EDUs, like presence, if enabled.
|
|
@ -0,0 +1,31 @@
|
|||
# Creating multiple workers with a bash script
|
||||
|
||||
Setting up multiple worker configuration files manually can be time-consuming.
|
||||
You can alternatively create multiple worker configuration files with a simple `bash` script. For example:
|
||||
|
||||
```sh
|
||||
#!/bin/bash
|
||||
for i in {1..5}
|
||||
do
|
||||
cat << EOF >> generic_worker$i.yaml
|
||||
worker_app: synapse.app.generic_worker
|
||||
worker_name: generic_worker$i
|
||||
|
||||
# The replication listener on the main synapse process.
|
||||
worker_replication_host: 127.0.0.1
|
||||
worker_replication_http_port: 9093
|
||||
|
||||
worker_listeners:
|
||||
- type: http
|
||||
port: 808$i
|
||||
resources:
|
||||
- names: [client, federation]
|
||||
|
||||
worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
|
||||
EOF
|
||||
done
|
||||
```
|
||||
|
||||
This would create five generic workers, each in its own file with a unique `worker_name`, listening on ports 8081-8085.
|
||||
|
||||
Customise the script to your needs.
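
If you prefer Python, the following is a minimal sketch that produces the same files as the `bash` example above (the file names, ports, and log config path simply mirror that example and should be adapted to your deployment):

```python
#!/usr/bin/env python3
# Writes generic_worker1.yaml .. generic_worker5.yaml, mirroring the bash
# example above. Adjust the range, ports, and paths to suit your setup.
TEMPLATE = """\
worker_app: synapse.app.generic_worker
worker_name: generic_worker{i}

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 808{i}
    resources:
      - names: [client, federation]

worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
"""

for i in range(1, 6):
    with open(f"generic_worker{i}.yaml", "w") as f:
        f.write(TEMPLATE.format(i=i))
```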
|
|
@ -81,6 +81,7 @@
|
|||
# Development
|
||||
- [Contributing Guide](development/contributing_guide.md)
|
||||
- [Code Style](code_style.md)
|
||||
- [Reviewing Code](development/reviews.md)
|
||||
- [Release Cycle](development/releases.md)
|
||||
- [Git Usage](development/git.md)
|
||||
- [Testing]()
|
||||
|
|
|
@ -351,7 +351,7 @@ To prepare a Pull Request, please:
|
|||
3. `git push` your commit to your fork of Synapse;
|
||||
4. on GitHub, [create the Pull Request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request);
|
||||
5. add a [changelog entry](#changelog) and push it to your Pull Request;
|
||||
6. for most contributors, that's all - however, if you are a member of the organization `matrix-org`, on GitHub, please request a review from `matrix.org / Synapse Core`.
|
||||
6. that's it for now: a non-draft pull request will automatically request a review from the team;
|
||||
7. if you need to update your PR, please avoid rebasing and just add new commits to your branch.
|
||||
|
||||
|
||||
|
@ -527,10 +527,13 @@ From this point, you should:
|
|||
1. Look at the results of the CI pipeline.
|
||||
- If there is any error, fix the error.
|
||||
2. If a developer has requested changes, make these changes and let us know if it is ready for a developer to review again.
|
||||
- A pull request is a conversation: if you disagree with the suggestions, please respond and discuss them.
|
||||
3. Create a new commit with the changes.
|
||||
- Please do NOT overwrite the history. New commits make the reviewer's life easier.
|
||||
- Push these commits to your Pull Request.
|
||||
4. Go back to step 1.
|
||||
5. Once the pull request is ready for review again, please re-request review from whichever developer did your initial
|
||||
review (or leave a comment on the pull request noting that you believe all required changes have been made).
|
||||
|
||||
Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!
|
||||
|
||||
|
|
|
@ -0,0 +1,41 @@
|
|||
Some notes on how we do reviews
|
||||
===============================
|
||||
|
||||
The Synapse team works off a shared review queue -- any new pull request for
|
||||
Synapse (or related projects) has a review requested from the entire team. Team
|
||||
members should process this queue using the following rules:
|
||||
|
||||
* Any high-urgency pull requests (e.g. fixes for broken continuous integration
|
||||
or fixes for release blockers);
|
||||
* Follow-up reviews for pull requests which have previously received reviews;
|
||||
* Any remaining pull requests.
|
||||
|
||||
For the latter two categories above, older pull requests should be prioritised.
|
||||
|
||||
It is explicit that there is no priority given to pull requests from the team
|
||||
(vs from the community). If a pull request requires a quick turnaround, please
|
||||
explicitly communicate this via [#synapse-dev:matrix.org](https://matrix.to/#/#synapse-dev:matrix.org)
|
||||
or as a comment on the pull request.
|
||||
|
||||
Once an initial review has been completed and the author has made additional changes,
|
||||
follow-up reviews should go back to the same reviewer. This helps build a shared
|
||||
context and conversation between author and reviewer.
|
||||
|
||||
As a team we aim to keep the number of inflight pull requests to a minimum to ensure
|
||||
that ongoing work is finished before starting new work.
|
||||
|
||||
Performing a review
|
||||
-------------------
|
||||
|
||||
To communicate to the rest of the team the status of each pull request, team
|
||||
members should do the following:
|
||||
|
||||
* Assign themselves to the pull request (they should be left assigned to the
|
||||
pull request until it is merged, closed, or they are no longer the reviewer);
|
||||
* Review the pull request by leaving comments, questions, and suggestions;
|
||||
* Mark the pull request appropriately (as needing changes or accepted).
|
||||
|
||||
If you are unsure about a particular part of the pull request (or are not confident
|
||||
in your understanding of part of the code) then ask questions or request review
|
||||
from the team again. When requesting review from the team, be sure to leave a comment
|
||||
explaining why you're putting it back in the queue.
|
|
@ -143,6 +143,14 @@ to do step 2.
|
|||
|
||||
It is safe to kill the port script at any time and restart it.
|
||||
|
||||
However, under no circumstances should the SQLite database be `VACUUM`ed between
|
||||
multiple runs of the script. Doing so can lead to an inconsistent copy of your database
|
||||
being imported into Postgres.
|
||||
To avoid accidental error, the script will check that SQLite's `auto_vacuum` mechanism
|
||||
is disabled, but the script is not able to protect against a manual `VACUUM` operation
|
||||
performed either by the administrator or by any automated task that the administrator
|
||||
may have configured.
|
||||
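
As a quick pre-flight check, an administrator can inspect the pragma directly before running the port script. This is a minimal sketch (the `homeserver.db` path is an assumption; point it at your actual SQLite database):

```python
import sqlite3

# Path to the Synapse SQLite database -- adjust to your deployment.
conn = sqlite3.connect("homeserver.db")
(auto_vacuum,) = conn.execute("PRAGMA auto_vacuum").fetchone()
conn.close()

# 0 = off (required by the port script), 1 = full, 2 = incremental.
print("auto_vacuum is", "disabled" if auto_vacuum == 0 else "enabled")
```

If it reports enabled, stop Synapse and run `PRAGMA auto_vacuum = off;` followed by `VACUUM;` before attempting the port.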
|
||||
Note that the database may take up significantly more (25% to 100%)
|
||||
space on disk after porting to Postgres.
|
||||
|
||||
|
|
|
@ -1257,6 +1257,98 @@ database:
|
|||
cp_max: 10
|
||||
```
|
||||
---
|
||||
### `databases`
|
||||
|
||||
The `databases` option allows specifying a mapping between certain database tables and
|
||||
database host details, spreading the load of a single Synapse instance across multiple
|
||||
database backends. This is often referred to as "database sharding". This option is only
|
||||
supported for PostgreSQL database backends.
|
||||
|
||||
**Important note:** This is a supported option, but is not currently used in production by the
|
||||
Matrix.org Foundation. Proceed with caution and always make backups.
|
||||
|
||||
`databases` is a dictionary of arbitrarily-named database entries. Each entry is equivalent
|
||||
to the value of the `database` homeserver config option (see above), with the addition of
|
||||
a `data_stores` key. `data_stores` is an array of strings that specifies the data store(s)
|
||||
(a defined label for a set of tables) that should be stored on the associated database
|
||||
backend entry.
|
||||
|
||||
The currently defined values for `data_stores` are:
|
||||
|
||||
* `"state"`: Data that relates to state groups will be stored in this database.
|
||||
|
||||
Specifically, that means the following tables:
|
||||
* `state_groups`
|
||||
* `state_group_edges`
|
||||
* `state_groups_state`
|
||||
|
||||
And the following sequences:
|
||||
* `state_groups_seq_id`
|
||||
|
||||
* `"main"`: All other database tables and sequences.
|
||||
|
||||
All databases will end up with additional tables used for tracking database schema migrations
|
||||
and any pending background updates. Synapse will create these automatically on startup when checking for
|
||||
and/or performing database schema migrations.
|
||||
|
||||
To migrate an existing database configuration (e.g. all tables on a single database) to a different
|
||||
configuration (e.g. the "main" data store on one database, and "state" on another), do the following:
|
||||
|
||||
1. Take a backup of your existing database. Things can and do go wrong and database corruption is no joke!
|
||||
2. Ensure all pending database migrations have been applied and background updates have run. The simplest
|
||||
way to do this is to use the `update_synapse_database` script supplied with your Synapse installation.
|
||||
|
||||
```sh
|
||||
update_synapse_database --database-config homeserver.yaml --run-background-updates
|
||||
```
|
||||
|
||||
3. Copy over the necessary tables and sequences from one database to the other. Tables relating to database
|
||||
migrations, schemas, schema versions and background updates should **not** be copied.
|
||||
|
||||
As an example, say that you'd like to split out the "state" data store from an existing database which
|
||||
currently contains all data stores.
|
||||
|
||||
Simply copy the tables and sequences defined above for the "state" datastore from the existing database
|
||||
to the secondary database. As noted above, additional tables will be created in the secondary database
|
||||
when Synapse is started.
|
||||
|
||||
4. Modify/create the `databases` option in your `homeserver.yaml` to match the desired database configuration.
|
||||
5. Start Synapse. Check that it starts up successfully and that things generally seem to be working.
|
||||
6. Drop the old tables that were copied in step 3.
|
||||
|
||||
Only one of the options `database` or `databases` may be specified in your config, but not both.
|
||||
|
||||
Example configuration:
|
||||
|
||||
```yaml
|
||||
databases:
|
||||
basement_box:
|
||||
name: psycopg2
|
||||
txn_limit: 10000
|
||||
data_stores: ["main"]
|
||||
args:
|
||||
user: synapse_user
|
||||
password: secretpassword
|
||||
database: synapse_main
|
||||
host: localhost
|
||||
port: 5432
|
||||
cp_min: 5
|
||||
cp_max: 10
|
||||
|
||||
my_other_database:
|
||||
name: psycopg2
|
||||
txn_limit: 10000
|
||||
data_stores: ["state"]
|
||||
args:
|
||||
user: synapse_user
|
||||
password: secretpassword
|
||||
database: synapse_state
|
||||
host: localhost
|
||||
port: 5432
|
||||
cp_min: 5
|
||||
cp_max: 10
|
||||
```
|
||||
---
|
||||
## Logging ##
|
||||
Config options related to logging.
|
||||
|
||||
|
|
|
@ -502,7 +502,7 @@ pyasn1 = ">=0.4.6"
|
|||
|
||||
[[package]]
|
||||
name = "lxml"
|
||||
version = "4.8.0"
|
||||
version = "4.9.1"
|
||||
description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
|
||||
category = "main"
|
||||
optional = true
|
||||
|
@ -1937,67 +1937,76 @@ ldap3 = [
|
|||
{file = "ldap3-2.9.1.tar.gz", hash = "sha256:f3e7fc4718e3f09dda568b57100095e0ce58633bcabbed8667ce3f8fbaa4229f"},
|
||||
]
|
||||
lxml = [
|
||||
{file = "lxml-4.8.0-cp27-cp27m-macosx_10_14_x86_64.whl", hash = "sha256:e1ab2fac607842ac36864e358c42feb0960ae62c34aa4caaf12ada0a1fb5d99b"},
|
||||
{file = "lxml-4.8.0-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:28d1af847786f68bec57961f31221125c29d6f52d9187c01cd34dc14e2b29430"},
|
||||
{file = "lxml-4.8.0-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b92d40121dcbd74831b690a75533da703750f7041b4bf951befc657c37e5695a"},
|
||||
{file = "lxml-4.8.0-cp27-cp27m-win32.whl", hash = "sha256:e01f9531ba5420838c801c21c1b0f45dbc9607cb22ea2cf132844453bec863a5"},
|
||||
{file = "lxml-4.8.0-cp27-cp27m-win_amd64.whl", hash = "sha256:6259b511b0f2527e6d55ad87acc1c07b3cbffc3d5e050d7e7bcfa151b8202df9"},
|
||||
{file = "lxml-4.8.0-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1010042bfcac2b2dc6098260a2ed022968dbdfaf285fc65a3acf8e4eb1ffd1bc"},
|
||||
{file = "lxml-4.8.0-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:fa56bb08b3dd8eac3a8c5b7d075c94e74f755fd9d8a04543ae8d37b1612dd170"},
|
||||
{file = "lxml-4.8.0-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:31ba2cbc64516dcdd6c24418daa7abff989ddf3ba6d3ea6f6ce6f2ed6e754ec9"},
|
||||
{file = "lxml-4.8.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:31499847fc5f73ee17dbe1b8e24c6dafc4e8d5b48803d17d22988976b0171f03"},
|
||||
{file = "lxml-4.8.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:5f7d7d9afc7b293147e2d506a4596641d60181a35279ef3aa5778d0d9d9123fe"},
|
||||
{file = "lxml-4.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:a3c5f1a719aa11866ffc530d54ad965063a8cbbecae6515acbd5f0fae8f48eaa"},
|
||||
{file = "lxml-4.8.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6268e27873a3d191849204d00d03f65c0e343b3bcb518a6eaae05677c95621d1"},
|
||||
{file = "lxml-4.8.0-cp310-cp310-win32.whl", hash = "sha256:330bff92c26d4aee79c5bc4d9967858bdbe73fdbdbacb5daf623a03a914fe05b"},
|
||||
{file = "lxml-4.8.0-cp310-cp310-win_amd64.whl", hash = "sha256:b2582b238e1658c4061ebe1b4df53c435190d22457642377fd0cb30685cdfb76"},
|
||||
{file = "lxml-4.8.0-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a2bfc7e2a0601b475477c954bf167dee6d0f55cb167e3f3e7cefad906e7759f6"},
|
||||
{file = "lxml-4.8.0-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:a1547ff4b8a833511eeaceacbcd17b043214fcdb385148f9c1bc5556ca9623e2"},
|
||||
{file = "lxml-4.8.0-cp35-cp35m-win32.whl", hash = "sha256:a9f1c3489736ff8e1c7652e9dc39f80cff820f23624f23d9eab6e122ac99b150"},
|
||||
{file = "lxml-4.8.0-cp35-cp35m-win_amd64.whl", hash = "sha256:530f278849031b0eb12f46cca0e5db01cfe5177ab13bd6878c6e739319bae654"},
|
||||
{file = "lxml-4.8.0-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:078306d19a33920004addeb5f4630781aaeabb6a8d01398045fcde085091a169"},
|
||||
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:86545e351e879d0b72b620db6a3b96346921fa87b3d366d6c074e5a9a0b8dadb"},
|
||||
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:24f5c5ae618395ed871b3d8ebfcbb36e3f1091fd847bf54c4de623f9107942f3"},
|
||||
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:bbab6faf6568484707acc052f4dfc3802bdb0cafe079383fbaa23f1cdae9ecd4"},
|
||||
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7993232bd4044392c47779a3c7e8889fea6883be46281d45a81451acfd704d7e"},
|
||||
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6d6483b1229470e1d8835e52e0ff3c6973b9b97b24cd1c116dca90b57a2cc613"},
|
||||
{file = "lxml-4.8.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:ad4332a532e2d5acb231a2e5d33f943750091ee435daffca3fec0a53224e7e33"},
|
||||
{file = "lxml-4.8.0-cp36-cp36m-win32.whl", hash = "sha256:db3535733f59e5605a88a706824dfcb9bd06725e709ecb017e165fc1d6e7d429"},
|
||||
{file = "lxml-4.8.0-cp36-cp36m-win_amd64.whl", hash = "sha256:5f148b0c6133fb928503cfcdfdba395010f997aa44bcf6474fcdd0c5398d9b63"},
|
||||
{file = "lxml-4.8.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:8a31f24e2a0b6317f33aafbb2f0895c0bce772980ae60c2c640d82caac49628a"},
|
||||
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:719544565c2937c21a6f76d520e6e52b726d132815adb3447ccffbe9f44203c4"},
|
||||
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c0b88ed1ae66777a798dc54f627e32d3b81c8009967c63993c450ee4cbcbec15"},
|
||||
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:fa9b7c450be85bfc6cd39f6df8c5b8cbd76b5d6fc1f69efec80203f9894b885f"},
|
||||
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e9f84ed9f4d50b74fbc77298ee5c870f67cb7e91dcdc1a6915cb1ff6a317476c"},
|
||||
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1d650812b52d98679ed6c6b3b55cbb8fe5a5460a0aef29aeb08dc0b44577df85"},
|
||||
{file = "lxml-4.8.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:80bbaddf2baab7e6de4bc47405e34948e694a9efe0861c61cdc23aa774fcb141"},
|
||||
{file = "lxml-4.8.0-cp37-cp37m-win32.whl", hash = "sha256:6f7b82934c08e28a2d537d870293236b1000d94d0b4583825ab9649aef7ddf63"},
|
||||
{file = "lxml-4.8.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e1fd7d2fe11f1cb63d3336d147c852f6d07de0d0020d704c6031b46a30b02ca8"},
|
||||
{file = "lxml-4.8.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:5045ee1ccd45a89c4daec1160217d363fcd23811e26734688007c26f28c9e9e7"},
|
||||
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:0c1978ff1fd81ed9dcbba4f91cf09faf1f8082c9d72eb122e92294716c605428"},
|
||||
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:52cbf2ff155b19dc4d4100f7442f6a697938bf4493f8d3b0c51d45568d5666b5"},
|
||||
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:ce13d6291a5f47c1c8dbd375baa78551053bc6b5e5c0e9bb8e39c0a8359fd52f"},
|
||||
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e11527dc23d5ef44d76fef11213215c34f36af1608074561fcc561d983aeb870"},
|
||||
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:60d2f60bd5a2a979df28ab309352cdcf8181bda0cca4529769a945f09aba06f9"},
|
||||
{file = "lxml-4.8.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:62f93eac69ec0f4be98d1b96f4d6b964855b8255c345c17ff12c20b93f247b68"},
|
||||
{file = "lxml-4.8.0-cp38-cp38-win32.whl", hash = "sha256:20b8a746a026017acf07da39fdb10aa80ad9877046c9182442bf80c84a1c4696"},
|
||||
{file = "lxml-4.8.0-cp38-cp38-win_amd64.whl", hash = "sha256:891dc8f522d7059ff0024cd3ae79fd224752676447f9c678f2a5c14b84d9a939"},
|
||||
{file = "lxml-4.8.0-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:b6fc2e2fb6f532cf48b5fed57567ef286addcef38c28874458a41b7837a57807"},
|
||||
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:74eb65ec61e3c7c019d7169387d1b6ffcfea1b9ec5894d116a9a903636e4a0b1"},
|
||||
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:627e79894770783c129cc5e89b947e52aa26e8e0557c7e205368a809da4b7939"},
|
||||
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:545bd39c9481f2e3f2727c78c169425efbfb3fbba6e7db4f46a80ebb249819ca"},
|
||||
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5a58d0b12f5053e270510bf12f753a76aaf3d74c453c00942ed7d2c804ca845c"},
|
||||
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ec4b4e75fc68da9dc0ed73dcdb431c25c57775383fec325d23a770a64e7ebc87"},
|
||||
{file = "lxml-4.8.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5804e04feb4e61babf3911c2a974a5b86f66ee227cc5006230b00ac6d285b3a9"},
|
||||
{file = "lxml-4.8.0-cp39-cp39-win32.whl", hash = "sha256:aa0cf4922da7a3c905d000b35065df6184c0dc1d866dd3b86fd961905bbad2ea"},
|
||||
{file = "lxml-4.8.0-cp39-cp39-win_amd64.whl", hash = "sha256:dd10383f1d6b7edf247d0960a3db274c07e96cf3a3fc7c41c8448f93eac3fb1c"},
|
||||
{file = "lxml-4.8.0-pp37-pypy37_pp73-macosx_10_14_x86_64.whl", hash = "sha256:2403a6d6fb61c285969b71f4a3527873fe93fd0abe0832d858a17fe68c8fa507"},
|
||||
{file = "lxml-4.8.0-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:986b7a96228c9b4942ec420eff37556c5777bfba6758edcb95421e4a614b57f9"},
|
||||
{file = "lxml-4.8.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:6fe4ef4402df0250b75ba876c3795510d782def5c1e63890bde02d622570d39e"},
|
||||
{file = "lxml-4.8.0-pp38-pypy38_pp73-macosx_10_14_x86_64.whl", hash = "sha256:f10ce66fcdeb3543df51d423ede7e238be98412232fca5daec3e54bcd16b8da0"},
|
||||
{file = "lxml-4.8.0-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:730766072fd5dcb219dd2b95c4c49752a54f00157f322bc6d71f7d2a31fecd79"},
|
||||
{file = "lxml-4.8.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:8b99ec73073b37f9ebe8caf399001848fced9c08064effdbfc4da2b5a8d07b93"},
|
||||
{file = "lxml-4.8.0.tar.gz", hash = "sha256:f63f62fc60e6228a4ca9abae28228f35e1bd3ce675013d1dfb828688d50c6e23"},
|
||||
{file = "lxml-4.9.1-cp27-cp27m-macosx_10_15_x86_64.whl", hash = "sha256:98cafc618614d72b02185ac583c6f7796202062c41d2eeecdf07820bad3295ed"},
|
||||
{file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c62e8dd9754b7debda0c5ba59d34509c4688f853588d75b53c3791983faa96fc"},
|
||||
{file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:21fb3d24ab430fc538a96e9fbb9b150029914805d551deeac7d7822f64631dfc"},
|
||||
{file = "lxml-4.9.1-cp27-cp27m-win32.whl", hash = "sha256:86e92728ef3fc842c50a5cb1d5ba2bc66db7da08a7af53fb3da79e202d1b2cd3"},
|
||||
{file = "lxml-4.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:4cfbe42c686f33944e12f45a27d25a492cc0e43e1dc1da5d6a87cbcaf2e95627"},
|
||||
{file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dad7b164905d3e534883281c050180afcf1e230c3d4a54e8038aa5cfcf312b84"},
|
||||
{file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:a614e4afed58c14254e67862456d212c4dcceebab2eaa44d627c2ca04bf86837"},
|
||||
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:f9ced82717c7ec65a67667bb05865ffe38af0e835cdd78728f1209c8fffe0cad"},
|
||||
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d9fc0bf3ff86c17348dfc5d322f627d78273eba545db865c3cd14b3f19e57fa5"},
|
||||
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:e5f66bdf0976ec667fc4594d2812a00b07ed14d1b44259d19a41ae3fff99f2b8"},
|
||||
{file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:fe17d10b97fdf58155f858606bddb4e037b805a60ae023c009f760d8361a4eb8"},
|
||||
{file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8caf4d16b31961e964c62194ea3e26a0e9561cdf72eecb1781458b67ec83423d"},
|
||||
{file = "lxml-4.9.1-cp310-cp310-win32.whl", hash = "sha256:4780677767dd52b99f0af1f123bc2c22873d30b474aa0e2fc3fe5e02217687c7"},
|
||||
{file = "lxml-4.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:b122a188cd292c4d2fcd78d04f863b789ef43aa129b233d7c9004de08693728b"},
|
||||
{file = "lxml-4.9.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:be9eb06489bc975c38706902cbc6888f39e946b81383abc2838d186f0e8b6a9d"},
|
||||
{file = "lxml-4.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f1be258c4d3dc609e654a1dc59d37b17d7fef05df912c01fc2e15eb43a9735f3"},
|
||||
{file = "lxml-4.9.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:927a9dd016d6033bc12e0bf5dee1dde140235fc8d0d51099353c76081c03dc29"},
|
||||
{file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9232b09f5efee6a495a99ae6824881940d6447debe272ea400c02e3b68aad85d"},
|
||||
{file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:04da965dfebb5dac2619cb90fcf93efdb35b3c6994fea58a157a834f2f94b318"},
|
||||
{file = "lxml-4.9.1-cp35-cp35m-win32.whl", hash = "sha256:4d5bae0a37af799207140652a700f21a85946f107a199bcb06720b13a4f1f0b7"},
|
||||
{file = "lxml-4.9.1-cp35-cp35m-win_amd64.whl", hash = "sha256:4878e667ebabe9b65e785ac8da4d48886fe81193a84bbe49f12acff8f7a383a4"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-macosx_10_15_x86_64.whl", hash = "sha256:1355755b62c28950f9ce123c7a41460ed9743c699905cbe664a5bcc5c9c7c7fb"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:bcaa1c495ce623966d9fc8a187da80082334236a2a1c7e141763ffaf7a405067"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6eafc048ea3f1b3c136c71a86db393be36b5b3d9c87b1c25204e7d397cee9536"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:13c90064b224e10c14dcdf8086688d3f0e612db53766e7478d7754703295c7c8"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:206a51077773c6c5d2ce1991327cda719063a47adc02bd703c56a662cdb6c58b"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e8f0c9d65da595cfe91713bc1222af9ecabd37971762cb830dea2fc3b3bb2acf"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:8f0a4d179c9a941eb80c3a63cdb495e539e064f8054230844dcf2fcb812b71d3"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:830c88747dce8a3e7525defa68afd742b4580df6aa2fdd6f0855481e3994d391"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-win32.whl", hash = "sha256:1e1cf47774373777936c5aabad489fef7b1c087dcd1f426b621fda9dcc12994e"},
|
||||
{file = "lxml-4.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:5974895115737a74a00b321e339b9c3f45c20275d226398ae79ac008d908bff7"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:1423631e3d51008871299525b541413c9b6c6423593e89f9c4cfbe8460afc0a2"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:2aaf6a0a6465d39b5ca69688fce82d20088c1838534982996ec46633dc7ad6cc"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:9f36de4cd0c262dd9927886cc2305aa3f2210db437aa4fed3fb4940b8bf4592c"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:ae06c1e4bc60ee076292e582a7512f304abdf6c70db59b56745cca1684f875a4"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:57e4d637258703d14171b54203fd6822fda218c6c2658a7d30816b10995f29f3"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6d279033bf614953c3fc4a0aa9ac33a21e8044ca72d4fa8b9273fe75359d5cca"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:a60f90bba4c37962cbf210f0188ecca87daafdf60271f4c6948606e4dabf8785"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:6ca2264f341dd81e41f3fffecec6e446aa2121e0b8d026fb5130e02de1402785"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-win32.whl", hash = "sha256:27e590352c76156f50f538dbcebd1925317a0f70540f7dc8c97d2931c595783a"},
|
||||
{file = "lxml-4.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:eea5d6443b093e1545ad0210e6cf27f920482bfcf5c77cdc8596aec73523bb7e"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:f05251bbc2145349b8d0b77c0d4e5f3b228418807b1ee27cefb11f69ed3d233b"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:487c8e61d7acc50b8be82bda8c8d21d20e133c3cbf41bd8ad7eb1aaeb3f07c97"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:8d1a92d8e90b286d491e5626af53afef2ba04da33e82e30744795c71880eaa21"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:b570da8cd0012f4af9fa76a5635cd31f707473e65a5a335b186069d5c7121ff2"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ef87fca280fb15342726bd5f980f6faf8b84a5287fcc2d4962ea8af88b35130"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:93e414e3206779ef41e5ff2448067213febf260ba747fc65389a3ddaa3fb8715"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6653071f4f9bac46fbc30f3c7838b0e9063ee335908c5d61fb7a4a86c8fd2036"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:32a73c53783becdb7eaf75a2a1525ea8e49379fb7248c3eeefb9412123536387"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-win32.whl", hash = "sha256:1a7c59c6ffd6ef5db362b798f350e24ab2cfa5700d53ac6681918f314a4d3b94"},
|
||||
{file = "lxml-4.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:1436cf0063bba7888e43f1ba8d58824f085410ea2025befe81150aceb123e345"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4beea0f31491bc086991b97517b9683e5cfb369205dac0148ef685ac12a20a67"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:41fb58868b816c202e8881fd0f179a4644ce6e7cbbb248ef0283a34b73ec73bb"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:bd34f6d1810d9354dc7e35158aa6cc33456be7706df4420819af6ed966e85448"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:edffbe3c510d8f4bf8640e02ca019e48a9b72357318383ca60e3330c23aaffc7"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6d949f53ad4fc7cf02c44d6678e7ff05ec5f5552b235b9e136bd52e9bf730b91"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:079b68f197c796e42aa80b1f739f058dcee796dc725cc9a1be0cdb08fc45b000"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9c3a88d20e4fe4a2a4a84bf439a5ac9c9aba400b85244c63a1ab7088f85d9d25"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4e285b5f2bf321fc0857b491b5028c5f276ec0c873b985d58d7748ece1d770dd"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-win32.whl", hash = "sha256:ef72013e20dd5ba86a8ae1aed7f56f31d3374189aa8b433e7b12ad182c0d2dfb"},
|
||||
{file = "lxml-4.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:10d2017f9150248563bb579cd0d07c61c58da85c922b780060dcc9a3aa9f432d"},
|
||||
{file = "lxml-4.9.1-pp37-pypy37_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0538747a9d7827ce3e16a8fdd201a99e661c7dee3c96c885d8ecba3c35d1032c"},
|
||||
{file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:0645e934e940107e2fdbe7c5b6fb8ec6232444260752598bc4d09511bd056c0b"},
|
||||
{file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:6daa662aba22ef3258934105be2dd9afa5bb45748f4f702a3b39a5bf53a1f4dc"},
|
||||
{file = "lxml-4.9.1-pp38-pypy38_pp73-macosx_10_15_x86_64.whl", hash = "sha256:603a464c2e67d8a546ddaa206d98e3246e5db05594b97db844c2f0a1af37cf5b"},
|
||||
{file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c4b2e0559b68455c085fb0f6178e9752c4be3bba104d6e881eb5573b399d1eb2"},
|
||||
{file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0f3f0059891d3254c7b5fb935330d6db38d6519ecd238ca4fce93c234b4a0f73"},
|
||||
{file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c852b1530083a620cb0de5f3cd6826f19862bafeaf77586f1aef326e49d95f0c"},
|
||||
{file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:287605bede6bd36e930577c5925fcea17cb30453d96a7b4c63c14a257118dbb9"},
|
||||
{file = "lxml-4.9.1.tar.gz", hash = "sha256:fe749b052bb7233fe5d072fcb549221a8cb1a16725c47c37e42b0b9cb3ff2c3f"},
|
||||
]
|
||||
markupsafe = [
|
||||
{file = "MarkupSafe-2.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:3028252424c72b2602a323f70fbf50aa80a5d3aa616ea6add4ba21ae9cc9da4c"},
|
||||
|
|
|
@ -44,8 +44,14 @@ usage() {
|
|||
Usage: $0 [-f] <go test arguments>...
|
||||
Run the complement test suite on Synapse.
|
||||
|
||||
-f Skip rebuilding the docker images, and just use the most recent
|
||||
'complement-synapse:latest' image
|
||||
-f, --fast
|
||||
Skip rebuilding the docker images, and just use the most recent
|
||||
'complement-synapse:latest' image.
|
||||
Conflicts with --build-only.
|
||||
|
||||
--build-only
|
||||
Only build the Docker images. Don't actually run Complement.
|
||||
Conflicts with -f/--fast.
|
||||
|
||||
For help on arguments to 'go test', run 'go help testflag'.
|
||||
EOF
|
||||
|
@ -53,6 +59,7 @@ EOF
|
|||
|
||||
# parse our arguments
|
||||
skip_docker_build=""
|
||||
skip_complement_run=""
|
||||
while [ $# -ge 1 ]; do
|
||||
arg=$1
|
||||
case "$arg" in
|
||||
|
@ -60,9 +67,12 @@ while [ $# -ge 1 ]; do
|
|||
usage
|
||||
exit 1
|
||||
;;
|
||||
"-f")
|
||||
"-f"|"--fast")
|
||||
skip_docker_build=1
|
||||
;;
|
||||
"--build-only")
|
||||
skip_complement_run=1
|
||||
;;
|
||||
*)
|
||||
# unknown arg: presumably an argument to gotest. break the loop.
|
||||
break
|
||||
|
@ -106,6 +116,11 @@ if [ -z "$skip_docker_build" ]; then
|
|||
echo_if_github "::endgroup::"
|
||||
fi
|
||||
|
||||
if [ -n "$skip_complement_run" ]; then
|
||||
echo "Skipping Complement run as requested."
|
||||
exit
|
||||
fi
|
||||
|
||||
export COMPLEMENT_BASE_IMAGE=complement-synapse
|
||||
|
||||
extra_test_args=()
|
||||
|
|
|
@ -418,12 +418,15 @@ class Porter:
|
|||
self.progress.update(table, table_size) # Mark table as done
|
||||
return
|
||||
|
||||
# We sweep over rowids in two directions: one forwards (rowids 1, 2, 3, ...)
|
||||
# and another backwards (rowids 0, -1, -2, ...).
|
||||
forward_select = (
|
||||
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?" % (table,)
|
||||
)
|
||||
|
||||
backward_select = (
|
||||
"SELECT rowid, * FROM %s WHERE rowid <= ? ORDER BY rowid LIMIT ?" % (table,)
|
||||
"SELECT rowid, * FROM %s WHERE rowid <= ? ORDER BY rowid DESC LIMIT ?"
|
||||
% (table,)
|
||||
)
|
||||
|
||||
do_forward = [True]
|
||||
|
@ -621,6 +624,25 @@ class Porter:
|
|||
self.postgres_store.db_pool.updates.has_completed_background_updates()
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def _is_sqlite_autovacuum_enabled(txn: LoggingTransaction) -> bool:
|
||||
"""
|
||||
Returns true if auto_vacuum is enabled in SQLite.
|
||||
https://www.sqlite.org/pragma.html#pragma_auto_vacuum
|
||||
|
||||
Vacuuming changes the rowids on rows in the database.
|
||||
Auto-vacuuming is therefore dangerous when used in conjunction with this script.
|
||||
|
||||
Note that the auto_vacuum setting can't be changed without performing
|
||||
a VACUUM after trying to change the pragma.
|
||||
"""
|
||||
txn.execute("PRAGMA auto_vacuum")
|
||||
row = txn.fetchone()
|
||||
assert row is not None, "`PRAGMA auto_vacuum` did not give a row."
|
||||
(autovacuum_setting,) = row
|
||||
# 0 means off. 1 means full. 2 means incremental.
|
||||
return autovacuum_setting != 0
|
||||
|
||||
async def run(self) -> None:
|
||||
"""Ports the SQLite database to a PostgreSQL database.
|
||||
|
||||
|
@ -637,6 +659,21 @@ class Porter:
|
|||
allow_outdated_version=True,
|
||||
)
|
||||
|
||||
# For safety, ensure auto_vacuums are disabled.
|
||||
if await self.sqlite_store.db_pool.runInteraction(
|
||||
"is_sqlite_autovacuum_enabled", self._is_sqlite_autovacuum_enabled
|
||||
):
|
||||
end_error = (
|
||||
"auto_vacuum is enabled in the SQLite database."
|
||||
" (This is not the default configuration.)\n"
|
||||
" This script relies on rowids being consistent and must not"
|
||||
" be used if the database could be vacuumed between re-runs.\n"
|
||||
" To disable auto_vacuum, you need to stop Synapse and run the following SQL:\n"
|
||||
" PRAGMA auto_vacuum=off;\n"
|
||||
" VACUUM;"
|
||||
)
|
||||
return
|
||||
|
||||
# Check if all background updates are done, abort if not.
|
||||
updates_complete = (
|
||||
await self.sqlite_store.db_pool.updates.has_completed_background_updates()
|
||||
|
|
|
@ -297,8 +297,14 @@ class AuthError(SynapseError):
|
|||
other poorly-defined times.
|
||||
"""
|
||||
|
||||
def __init__(self, code: int, msg: str, errcode: str = Codes.FORBIDDEN):
|
||||
super().__init__(code, msg, errcode)
|
||||
def __init__(
|
||||
self,
|
||||
code: int,
|
||||
msg: str,
|
||||
errcode: str = Codes.FORBIDDEN,
|
||||
additional_fields: Optional[dict] = None,
|
||||
):
|
||||
super().__init__(code, msg, errcode, additional_fields)
|
||||
|
||||
|
||||
class InvalidClientCredentialsError(SynapseError):
|
||||
|
|
|
@ -39,6 +39,7 @@ from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
|
|||
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
|
||||
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
|
||||
from synapse.storage.databases.main.room import RoomWorkerStore
|
||||
from synapse.types import StateMap
|
||||
from synapse.util import SYNAPSE_VERSION
|
||||
|
@ -60,7 +61,17 @@ class AdminCmdSlavedStore(
|
|||
BaseSlavedStore,
|
||||
RoomWorkerStore,
|
||||
):
|
||||
pass
|
||||
def __init__(
|
||||
self,
|
||||
database: DatabasePool,
|
||||
db_conn: LoggingDatabaseConnection,
|
||||
hs: "HomeServer",
|
||||
):
|
||||
super().__init__(database, db_conn, hs)
|
||||
|
||||
# Annoyingly `filter_events_for_client` assumes that this exists. We
|
||||
# should refactor it to take a `Clock` directly.
|
||||
self.clock = hs.get_clock()
|
||||
|
||||
|
||||
class AdminCmdServer(HomeServer):
|
||||
|
|
|
@ -319,7 +319,9 @@ class _ServiceQueuer:
|
|||
rooms_of_interesting_users.update(event.room_id for event in events)
|
||||
# EDUs
|
||||
rooms_of_interesting_users.update(
|
||||
ephemeral["room_id"] for ephemeral in ephemerals
|
||||
ephemeral["room_id"]
|
||||
for ephemeral in ephemerals
|
||||
if ephemeral.get("room_id") is not None
|
||||
)
|
||||
|
||||
# Look up the AS users in those rooms
|
||||
|
@ -329,8 +331,9 @@ class _ServiceQueuer:
|
|||
)
|
||||
|
||||
# Add recipients of to-device messages.
|
||||
# device_message["to_user_id"] is the ID of the recipient.
|
||||
users.update(device_message["user_id"] for device_message in to_device_messages)
|
||||
users.update(
|
||||
device_message["to_user_id"] for device_message in to_device_messages
|
||||
)
|
||||
|
||||
# Compute and return the counts / fallback key usage states
|
||||
otk_counts = await self._store.count_bulk_e2e_one_time_keys_for_as(users)
|
||||
|
|
|
@ -21,7 +21,6 @@ from typing import (
|
|||
Awaitable,
|
||||
Callable,
|
||||
Collection,
|
||||
Dict,
|
||||
List,
|
||||
Optional,
|
||||
Tuple,
|
||||
|
@ -32,10 +31,11 @@ from typing import (
|
|||
from typing_extensions import Literal
|
||||
|
||||
import synapse
|
||||
from synapse.api.errors import Codes
|
||||
from synapse.rest.media.v1._base import FileInfo
|
||||
from synapse.rest.media.v1.media_storage import ReadableFileWrapper
|
||||
from synapse.spam_checker_api import RegistrationBehaviour
|
||||
from synapse.types import RoomAlias, UserProfile
|
||||
from synapse.types import JsonDict, RoomAlias, UserProfile
|
||||
from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
|
||||
from synapse.util.metrics import Measure
|
||||
|
||||
|
@ -50,12 +50,12 @@ CHECK_EVENT_FOR_SPAM_CALLBACK = Callable[
|
|||
Awaitable[
|
||||
Union[
|
||||
str,
|
||||
"synapse.api.errors.Codes",
|
||||
Codes,
|
||||
# Highly experimental, not officially part of the spamchecker API, may
|
||||
# disappear without warning depending on the results of ongoing
|
||||
# experiments.
|
||||
# Use this to return additional information as part of an error.
|
||||
Tuple["synapse.api.errors.Codes", Dict],
|
||||
Tuple[Codes, JsonDict],
|
||||
# Deprecated
|
||||
bool,
|
||||
]
|
||||
|
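
To illustrate the new experimental tuple return, here is a minimal sketch of a module callback that rejects an event with an error code plus additional fields (the module class, the banned-word rule, and the `reason` field are illustrative assumptions, not a stable API):

```python
from typing import Tuple, Union

from synapse.api.errors import Codes
from synapse.module_api import NOT_SPAM, ModuleApi


class ExampleSpamChecker:
    """Hypothetical module that rejects events containing a banned word."""

    def __init__(self, config: dict, api: ModuleApi):
        api.register_spam_checker_callbacks(
            check_event_for_spam=self.check_event_for_spam,
        )

    async def check_event_for_spam(self, event) -> Union[Tuple[Codes, dict], str]:
        if "forbidden-word" in event.content.get("body", ""):
            # Experimental: return a Codes value together with extra fields
            # that are passed back to the client alongside the error.
            return Codes.FORBIDDEN, {"reason": "message contains a banned word"}
        return NOT_SPAM
```

Returning `NOT_SPAM` lets the event through; returning a `(Codes, dict)` pair rejects it, with the dict surfaced as additional fields on the error response.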
@ -70,7 +70,12 @@ USER_MAY_JOIN_ROOM_CALLBACK = Callable[
|
|||
Awaitable[
|
||||
Union[
|
||||
Literal["NOT_SPAM"],
|
||||
"synapse.api.errors.Codes",
|
||||
Codes,
|
||||
# Highly experimental, not officially part of the spamchecker API, may
|
||||
# disappear without warning depending on the results of ongoing
|
||||
# experiments.
|
||||
# Use this to return additional information as part of an error.
|
||||
Tuple[Codes, JsonDict],
|
||||
# Deprecated
|
||||
bool,
|
||||
]
|
||||
|
@ -81,7 +86,12 @@ USER_MAY_INVITE_CALLBACK = Callable[
|
|||
Awaitable[
|
||||
Union[
|
||||
Literal["NOT_SPAM"],
|
||||
"synapse.api.errors.Codes",
|
||||
Codes,
|
||||
# Highly experimental, not officially part of the spamchecker API, may
|
||||
# disappear without warning depending on the results of ongoing
|
||||
# experiments.
|
||||
# Use this to return additional information as part of an error.
|
||||
Tuple[Codes, JsonDict],
|
||||
# Deprecated
|
||||
bool,
|
||||
]
|
||||
|
@ -92,7 +102,12 @@ USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[
|
|||
Awaitable[
|
||||
Union[
|
||||
Literal["NOT_SPAM"],
|
||||
"synapse.api.errors.Codes",
|
||||
Codes,
|
||||
# Highly experimental, not officially part of the spamchecker API, may
|
||||
# disappear without warning depending on the results of ongoing
|
||||
# experiments.
|
||||
# Use this to return additional information as part of an error.
|
||||
Tuple[Codes, JsonDict],
|
||||
# Deprecated
|
||||
bool,
|
||||
]
|
||||
|
@ -103,7 +118,12 @@ USER_MAY_CREATE_ROOM_CALLBACK = Callable[
|
|||
Awaitable[
|
||||
Union[
|
||||
Literal["NOT_SPAM"],
|
||||
"synapse.api.errors.Codes",
|
||||
Codes,
|
||||
# Highly experimental, not officially part of the spamchecker API, may
|
||||
# disappear without warning depending on the results of ongoing
|
||||
# experiments.
|
||||
# Use this to return additional information as part of an error.
|
||||
Tuple[Codes, JsonDict],
|
||||
# Deprecated
|
||||
bool,
|
||||
]
|
||||
|
@ -114,7 +134,12 @@ USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[
|
|||
Awaitable[
|
||||
Union[
|
||||
Literal["NOT_SPAM"],
|
||||
"synapse.api.errors.Codes",
|
||||
Codes,
|
||||
# Highly experimental, not officially part of the spamchecker API, may
|
||||
# disappear without warning depending on the results of ongoing
|
||||
# experiments.
|
||||
# Use this to return additional information as part of an error.
|
||||
Tuple[Codes, JsonDict],
|
||||
# Deprecated
|
||||
bool,
|
||||
]
|
||||
|
@ -125,7 +150,12 @@ USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[
|
|||
Awaitable[
|
||||
Union[
|
||||
Literal["NOT_SPAM"],
|
||||
"synapse.api.errors.Codes",
|
||||
Codes,
|
||||
# Highly experimental, not officially part of the spamchecker API, may
|
||||
# disappear without warning depending on the results of ongoing
|
||||
# experiments.
|
||||
# Use this to return additional information as part of an error.
|
||||
Tuple[Codes, JsonDict],
|
||||
# Deprecated
|
||||
bool,
|
||||
]
|
||||
|
@ -154,7 +184,12 @@ CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK = Callable[
|
|||
Awaitable[
|
||||
Union[
|
||||
Literal["NOT_SPAM"],
|
||||
"synapse.api.errors.Codes",
|
||||
Codes,
|
||||
# Highly experimental, not officially part of the spamchecker API, may
|
||||
# disappear without warning depending on the results of ongoing
|
||||
# experiments.
|
||||
# Use this to return additional information as part of an error.
|
||||
Tuple[Codes, JsonDict],
|
||||
# Deprecated
|
||||
bool,
|
||||
]
|
||||
|
@ -345,7 +380,7 @@ class SpamChecker:
|
|||
|
||||
async def check_event_for_spam(
|
||||
self, event: "synapse.events.EventBase"
|
||||
) -> Union[Tuple["synapse.api.errors.Codes", Dict], str]:
|
||||
) -> Union[Tuple[Codes, JsonDict], str]:
|
||||
"""Checks if a given event is considered "spammy" by this server.
|
||||
|
||||
If the server considers an event spammy, then it will be rejected if
|
||||
|
@ -376,7 +411,16 @@ class SpamChecker:
|
|||
elif res is True:
|
||||
# This spam-checker rejects the event with deprecated
|
||||
# return value `True`
|
||||
return (synapse.api.errors.Codes.FORBIDDEN, {})
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
elif (
|
||||
isinstance(res, tuple)
|
||||
and len(res) == 2
|
||||
and isinstance(res[0], synapse.api.errors.Codes)
|
||||
and isinstance(res[1], dict)
|
||||
):
|
||||
return res
|
||||
elif isinstance(res, synapse.api.errors.Codes):
|
||||
return res, {}
|
||||
elif not isinstance(res, str):
|
||||
# mypy complains that we can't reach this code because of the
|
||||
# return type in CHECK_EVENT_FOR_SPAM_CALLBACK, but we don't know
|
||||
|
@ -422,7 +466,7 @@ class SpamChecker:
|
|||
|
||||
async def user_may_join_room(
|
||||
self, user_id: str, room_id: str, is_invited: bool
|
||||
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
|
||||
) -> Union[Tuple[Codes, JsonDict], Literal["NOT_SPAM"]]:
|
||||
"""Checks if a given user is allowed to join a room.
|
||||
Not called when a user creates a room.
|
||||
|
||||
|
@ -432,7 +476,7 @@ class SpamChecker:
|
|||
is_invited: Whether the user is invited into the room
|
||||
|
||||
Returns:
|
||||
NOT_SPAM if the operation is permitted, Codes otherwise.
|
||||
NOT_SPAM if the operation is permitted, a (Codes, dict) pair otherwise.
|
||||
"""
|
||||
for callback in self._user_may_join_room_callbacks:
|
||||
with Measure(
|
||||
|
@ -443,21 +487,28 @@ class SpamChecker:
|
|||
if res is True or res is self.NOT_SPAM:
|
||||
continue
|
||||
elif res is False:
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
elif isinstance(res, synapse.api.errors.Codes):
|
||||
return res, {}
|
||||
elif (
|
||||
isinstance(res, tuple)
|
||||
and len(res) == 2
|
||||
and isinstance(res[0], synapse.api.errors.Codes)
|
||||
and isinstance(res[1], dict)
|
||||
):
|
||||
return res
|
||||
else:
|
||||
logger.warning(
|
||||
"Module returned invalid value, rejecting join as spam"
|
||||
)
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
|
||||
# No spam-checker has rejected the request, let it pass.
|
||||
return self.NOT_SPAM
|
||||
|
||||
async def user_may_invite(
|
||||
self, inviter_userid: str, invitee_userid: str, room_id: str
|
||||
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
|
||||
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
|
||||
"""Checks if a given user may send an invite
|
||||
|
||||
Args:
|
||||
|
@ -479,21 +530,28 @@ class SpamChecker:
|
|||
if res is True or res is self.NOT_SPAM:
|
||||
continue
|
||||
elif res is False:
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
elif isinstance(res, synapse.api.errors.Codes):
|
||||
return res, {}
|
||||
elif (
|
||||
isinstance(res, tuple)
|
||||
and len(res) == 2
|
||||
and isinstance(res[0], synapse.api.errors.Codes)
|
||||
and isinstance(res[1], dict)
|
||||
):
|
||||
return res
|
||||
else:
|
||||
logger.warning(
|
||||
"Module returned invalid value, rejecting invite as spam"
|
||||
)
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
|
||||
# No spam-checker has rejected the request, let it pass.
|
||||
return self.NOT_SPAM
|
||||
|
||||
async def user_may_send_3pid_invite(
|
||||
self, inviter_userid: str, medium: str, address: str, room_id: str
|
||||
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
|
||||
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
|
||||
"""Checks if a given user may invite a given threepid into the room
|
||||
|
||||
Note that if the threepid is already associated with a Matrix user ID, Synapse
|
||||
|
@ -519,20 +577,27 @@ class SpamChecker:
|
|||
if res is True or res is self.NOT_SPAM:
|
||||
continue
|
||||
elif res is False:
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
elif isinstance(res, synapse.api.errors.Codes):
|
||||
return res, {}
|
||||
elif (
|
||||
isinstance(res, tuple)
|
||||
and len(res) == 2
|
||||
and isinstance(res[0], synapse.api.errors.Codes)
|
||||
and isinstance(res[1], dict)
|
||||
):
|
||||
return res
|
||||
else:
|
||||
logger.warning(
|
||||
"Module returned invalid value, rejecting 3pid invite as spam"
|
||||
)
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
|
||||
return self.NOT_SPAM
|
||||
|
||||
async def user_may_create_room(
|
||||
self, userid: str
|
||||
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
|
||||
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
|
||||
"""Checks if a given user may create a room
|
||||
|
||||
Args:
|
||||
|
@ -546,20 +611,27 @@ class SpamChecker:
|
|||
if res is True or res is self.NOT_SPAM:
|
||||
continue
|
||||
elif res is False:
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
elif isinstance(res, synapse.api.errors.Codes):
|
||||
return res, {}
|
||||
elif (
|
||||
isinstance(res, tuple)
|
||||
and len(res) == 2
|
||||
and isinstance(res[0], synapse.api.errors.Codes)
|
||||
and isinstance(res[1], dict)
|
||||
):
|
||||
return res
|
||||
else:
|
||||
logger.warning(
|
||||
"Module returned invalid value, rejecting room creation as spam"
|
||||
)
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
|
||||
return self.NOT_SPAM
|
||||
|
||||
async def user_may_create_room_alias(
|
||||
self, userid: str, room_alias: RoomAlias
|
||||
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
|
||||
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
|
||||
"""Checks if a given user may create a room alias
|
||||
|
||||
Args:
|
||||
|
@ -575,20 +647,27 @@ class SpamChecker:
|
|||
if res is True or res is self.NOT_SPAM:
|
||||
continue
|
||||
elif res is False:
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
elif isinstance(res, synapse.api.errors.Codes):
|
||||
return res, {}
|
||||
elif (
|
||||
isinstance(res, tuple)
|
||||
and len(res) == 2
|
||||
and isinstance(res[0], synapse.api.errors.Codes)
|
||||
and isinstance(res[1], dict)
|
||||
):
|
||||
return res
|
||||
else:
|
||||
logger.warning(
|
||||
"Module returned invalid value, rejecting room create as spam"
|
||||
)
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
|
||||
return self.NOT_SPAM
|
||||
|
||||
async def user_may_publish_room(
|
||||
self, userid: str, room_id: str
|
||||
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
|
||||
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
|
||||
"""Checks if a given user may publish a room to the directory
|
||||
|
||||
Args:
|
||||
|
@ -603,14 +682,21 @@ class SpamChecker:
|
|||
if res is True or res is self.NOT_SPAM:
|
||||
continue
|
||||
elif res is False:
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
elif isinstance(res, synapse.api.errors.Codes):
|
||||
return res, {}
|
||||
elif (
|
||||
isinstance(res, tuple)
|
||||
and len(res) == 2
|
||||
and isinstance(res[0], synapse.api.errors.Codes)
|
||||
and isinstance(res[1], dict)
|
||||
):
|
||||
return res
|
||||
else:
|
||||
logger.warning(
|
||||
"Module returned invalid value, rejecting room publication as spam"
|
||||
)
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
|
||||
return self.NOT_SPAM
|
||||
|
||||
|
@ -678,7 +764,7 @@ class SpamChecker:
|
|||
|
||||
async def check_media_file_for_spam(
|
||||
self, file_wrapper: ReadableFileWrapper, file_info: FileInfo
|
||||
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
|
||||
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
|
||||
"""Checks if a piece of newly uploaded media should be blocked.
|
||||
|
||||
This will be called for local uploads, downloads of remote media, each
|
||||
|
@ -715,13 +801,20 @@ class SpamChecker:
|
|||
if res is False or res is self.NOT_SPAM:
|
||||
continue
|
||||
elif res is True:
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
elif isinstance(res, synapse.api.errors.Codes):
|
||||
return res, {}
|
||||
elif (
|
||||
isinstance(res, tuple)
|
||||
and len(res) == 2
|
||||
and isinstance(res[0], synapse.api.errors.Codes)
|
||||
and isinstance(res[1], dict)
|
||||
):
|
||||
return res
|
||||
else:
|
||||
logger.warning(
|
||||
"Module returned invalid value, rejecting media file as spam"
|
||||
)
|
||||
return synapse.api.errors.Codes.FORBIDDEN
|
||||
return synapse.api.errors.Codes.FORBIDDEN, {}
|
||||
|
||||
return self.NOT_SPAM
|
||||
|
|
|
@ -464,14 +464,7 @@ class ThirdPartyEventRules:
|
|||
Returns:
|
||||
A dict mapping (event type, state key) to state event.
|
||||
"""
|
||||
state_ids = await self._storage_controllers.state.get_current_state_ids(room_id)
|
||||
room_state_events = await self.store.get_events(state_ids.values())
|
||||
|
||||
state_events = {}
|
||||
for key, event_id in state_ids.items():
|
||||
state_events[key] = room_state_events[event_id]
|
||||
|
||||
return state_events
|
||||
return await self._storage_controllers.state.get_current_state(room_id)
|
||||
|
||||
async def on_profile_update(
|
||||
self, user_id: str, new_profile: ProfileInfo, by_admin: bool, deactivation: bool
|
||||
|
|
|
@ -149,7 +149,8 @@ class DirectoryHandler:
|
|||
raise AuthError(
|
||||
403,
|
||||
"This user is not permitted to create this alias",
|
||||
spam_check,
|
||||
errcode=spam_check[0],
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
if not self.config.roomdirectory.is_alias_creation_allowed(
|
||||
|
@ -441,7 +442,8 @@ class DirectoryHandler:
|
|||
raise AuthError(
|
||||
403,
|
||||
"This user is not permitted to publish rooms to the room list",
|
||||
spam_check,
|
||||
errcode=spam_check[0],
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
if requester.is_guest:
|
||||
|
|
|
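With `errcode=spam_check[0]` and `additional_fields=spam_check[1]`, whatever dict the spam-checker module returned travels with the error. Assuming `additional_fields` is merged into the serialised error body (an assumption here, suggested by the parameter name), a rejected alias creation from a module that returned `(Codes.FORBIDDEN, {"org.example.reason": "too many aliases"})` would surface to the client roughly as:

```python
# Approximate JSON body of the 403 response; the "org.example.reason" key is a
# hypothetical module-supplied field, not something defined by Synapse itself.
rejected_alias_response = {
    "errcode": "M_FORBIDDEN",
    "error": "This user is not permitted to create this alias",
    "org.example.reason": "too many aliases",
}
```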
@ -844,7 +844,8 @@ class FederationHandler:
|
|||
raise SynapseError(
|
||||
403,
|
||||
"This user is not permitted to send invites to this server/user",
|
||||
spam_check,
|
||||
errcode=spam_check[0],
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
membership = event.content.get("membership")
|
||||
|
@ -1559,14 +1560,9 @@ class FederationHandler:
|
|||
# all the events are updated, so we can update current state and
|
||||
# clear the lazy-loading flag.
|
||||
logger.info("Updating current state for %s", room_id)
|
||||
# TODO(faster_joins): support workers
|
||||
# TODO(faster_joins): notify workers in notify_room_un_partial_stated
|
||||
# https://github.com/matrix-org/synapse/issues/12994
|
||||
assert (
|
||||
self._storage_controllers.persistence is not None
|
||||
), "worker-mode deployments not currently supported here"
|
||||
await self._storage_controllers.persistence.update_current_state(
|
||||
room_id
|
||||
)
|
||||
await self.state_handler.update_current_state(room_id)
|
||||
|
||||
logger.info("Clearing partial-state flag for %s", room_id)
|
||||
success = await self.store.clear_partial_state_room(room_id)
|
||||
|
|
|
@ -67,19 +67,14 @@ class ProfileHandler:
|
|||
target_user = UserID.from_string(user_id)
|
||||
|
||||
if self.hs.is_mine(target_user):
|
||||
try:
|
||||
displayname = await self.store.get_profile_displayname(
|
||||
target_user.localpart
|
||||
)
|
||||
avatar_url = await self.store.get_profile_avatar_url(
|
||||
target_user.localpart
|
||||
)
|
||||
except StoreError as e:
|
||||
if e.code == 404:
|
||||
raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
|
||||
raise
|
||||
profileinfo = await self.store.get_profileinfo(target_user.localpart)
|
||||
if profileinfo.display_name is None:
|
||||
raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
|
||||
|
||||
return {"displayname": displayname, "avatar_url": avatar_url}
|
||||
return {
|
||||
"displayname": profileinfo.display_name,
|
||||
"avatar_url": profileinfo.avatar_url,
|
||||
}
|
||||
else:
|
||||
try:
|
||||
result = await self.federation.make_query(
|
||||
|
|
|
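The profile hunk above collapses the two separate displayname/avatar queries into one cached `get_profileinfo` call and uses a `None` display name as the "not found" signal. A small sketch of the same lookup shape; `store` is assumed to expose `get_profileinfo` as shown in the diff:

```python
from typing import Any, Dict


async def get_local_profile(store: Any, localpart: str) -> Dict[str, Any]:
    """One round-trip profile lookup, mirroring the ProfileHandler change above."""
    profileinfo = await store.get_profileinfo(localpart)
    if profileinfo.display_name is None:
        # The handler raises a 404 SynapseError here; a plain exception stands in.
        raise LookupError("Profile was not found")
    return {
        "displayname": profileinfo.display_name,
        "avatar_url": profileinfo.avatar_url,
    }
```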
@ -440,7 +440,12 @@ class RoomCreationHandler:
|
|||
|
||||
spam_check = await self.spam_checker.user_may_create_room(user_id)
|
||||
if spam_check != NOT_SPAM:
|
||||
raise SynapseError(403, "You are not permitted to create rooms", spam_check)
|
||||
raise SynapseError(
|
||||
403,
|
||||
"You are not permitted to create rooms",
|
||||
errcode=spam_check[0],
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
creation_content: JsonDict = {
|
||||
"room_version": new_room_version.identifier,
|
||||
|
@ -731,7 +736,10 @@ class RoomCreationHandler:
|
|||
spam_check = await self.spam_checker.user_may_create_room(user_id)
|
||||
if spam_check != NOT_SPAM:
|
||||
raise SynapseError(
|
||||
403, "You are not permitted to create rooms", spam_check
|
||||
403,
|
||||
"You are not permitted to create rooms",
|
||||
errcode=spam_check[0],
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
if ratelimit:
|
||||
|
@ -1011,6 +1019,8 @@ class RoomCreationHandler:
|
|||
|
||||
event_keys = {"room_id": room_id, "sender": creator_id, "state_key": ""}
|
||||
|
||||
last_sent_event_id: Optional[str] = None
|
||||
|
||||
def create(etype: str, content: JsonDict, **kwargs: Any) -> JsonDict:
|
||||
e = {"type": etype, "content": content}
|
||||
|
||||
|
@ -1020,19 +1030,27 @@ class RoomCreationHandler:
|
|||
return e
|
||||
|
||||
async def send(etype: str, content: JsonDict, **kwargs: Any) -> int:
|
||||
nonlocal last_sent_event_id
|
||||
|
||||
event = create(etype, content, **kwargs)
|
||||
logger.debug("Sending %s in new room", etype)
|
||||
# Allow these events to be sent even if the user is shadow-banned to
|
||||
# allow the room creation to complete.
|
||||
(
|
||||
_,
|
||||
sent_event,
|
||||
last_stream_id,
|
||||
) = await self.event_creation_handler.create_and_send_nonmember_event(
|
||||
creator,
|
||||
event,
|
||||
ratelimit=False,
|
||||
ignore_shadow_ban=True,
|
||||
# Note: we don't pass state_event_ids here because this triggers
|
||||
# an additional query per event to look them up from the events table.
|
||||
prev_event_ids=[last_sent_event_id] if last_sent_event_id else [],
|
||||
)
|
||||
|
||||
last_sent_event_id = sent_event.event_id
|
||||
|
||||
return last_stream_id
|
||||
|
||||
try:
|
||||
|
@ -1046,7 +1064,9 @@ class RoomCreationHandler:
|
|||
await send(etype=EventTypes.Create, content=creation_content)
|
||||
|
||||
logger.debug("Sending %s in new room", EventTypes.Member)
|
||||
await self.room_member_handler.update_membership(
|
||||
# Room create event must exist at this point
|
||||
assert last_sent_event_id is not None
|
||||
member_event_id, _ = await self.room_member_handler.update_membership(
|
||||
creator,
|
||||
creator.user,
|
||||
room_id,
|
||||
|
@ -1054,7 +1074,9 @@ class RoomCreationHandler:
|
|||
ratelimit=ratelimit,
|
||||
content=creator_join_profile,
|
||||
new_room=True,
|
||||
prev_event_ids=[last_sent_event_id],
|
||||
)
|
||||
last_sent_event_id = member_event_id
|
||||
|
||||
# We treat the power levels override specially as this needs to be one
|
||||
# of the first events that get sent into a room.
|
||||
|
@ -1375,6 +1397,7 @@ class TimestampLookupHandler:
|
|||
# the timestamp given and the event we were able to find locally
|
||||
is_event_next_to_backward_gap = False
|
||||
is_event_next_to_forward_gap = False
|
||||
local_event = None
|
||||
if local_event_id:
|
||||
local_event = await self.store.get_event(
|
||||
local_event_id, allow_none=False, allow_rejected=False
|
||||
|
@ -1461,7 +1484,10 @@ class TimestampLookupHandler:
|
|||
ex.args,
|
||||
)
|
||||
|
||||
if not local_event_id:
|
||||
# To appease mypy, we have to add both of these conditions to check for
|
||||
# `None`. We only expect `local_event` to be `None` when
|
||||
# `local_event_id` is `None`, but mypy isn't as smart as us and can't assume that.
|
||||
if not local_event_id or not local_event:
|
||||
raise SynapseError(
|
||||
404,
|
||||
"Unable to find event from %s in direction %s" % (timestamp, direction),
|
||||
|
|
|
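Both room-creation hunks above thread `last_sent_event_id` through each send, so every new event's `prev_event_ids` is already known and no per-event database lookup is needed. A stripped-down illustration of that chaining pattern; the `send_event` callable is a stand-in for Synapse's event-creation machinery, not a real API:

```python
from typing import Awaitable, Callable, List, Optional


async def send_chain(
    send_event: Callable[[str, List[str]], Awaitable[str]],
    event_types: List[str],
) -> Optional[str]:
    """Send events one after another, feeding each event id into the next
    event's prev_events so no forward-extremity lookup is needed."""
    last_sent_event_id: Optional[str] = None
    for etype in event_types:
        prev_event_ids = [last_sent_event_id] if last_sent_event_id else []
        last_sent_event_id = await send_event(etype, prev_event_ids)
    return last_sent_event_id
```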
@ -699,7 +699,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
|||
if target_id == self._server_notices_mxid:
|
||||
raise SynapseError(HTTPStatus.FORBIDDEN, "Cannot invite this user")
|
||||
|
||||
block_invite_code = None
|
||||
block_invite_result = None
|
||||
|
||||
if (
|
||||
self._server_notices_mxid is not None
|
||||
|
@ -717,18 +717,21 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
|||
"Blocking invite: user is not admin and non-admin "
|
||||
"invites disabled"
|
||||
)
|
||||
block_invite_code = Codes.FORBIDDEN
|
||||
block_invite_result = (Codes.FORBIDDEN, {})
|
||||
|
||||
spam_check = await self.spam_checker.user_may_invite(
|
||||
requester.user.to_string(), target_id, room_id
|
||||
)
|
||||
if spam_check != NOT_SPAM:
|
||||
logger.info("Blocking invite due to spam checker")
|
||||
block_invite_code = spam_check
|
||||
block_invite_result = spam_check
|
||||
|
||||
if block_invite_code is not None:
|
||||
if block_invite_result is not None:
|
||||
raise SynapseError(
|
||||
403, "Invites have been disabled on this server", block_invite_code
|
||||
403,
|
||||
"Invites have been disabled on this server",
|
||||
errcode=block_invite_result[0],
|
||||
additional_fields=block_invite_result[1],
|
||||
)
|
||||
|
||||
# An empty prev_events list is allowed as long as the auth_event_ids are present
|
||||
|
@ -842,7 +845,12 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
|||
target.to_string(), room_id, is_invited=inviter is not None
|
||||
)
|
||||
if spam_check != NOT_SPAM:
|
||||
raise SynapseError(403, "Not allowed to join this room", spam_check)
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Not allowed to join this room",
|
||||
errcode=spam_check[0],
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
# Check if a remote join should be performed.
|
||||
remote_join, remote_room_hosts = await self._should_perform_remote_join(
|
||||
|
@ -1401,7 +1409,12 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
|||
room_id=room_id,
|
||||
)
|
||||
if spam_check != NOT_SPAM:
|
||||
raise SynapseError(403, "Cannot send threepid invite", spam_check)
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Cannot send threepid invite",
|
||||
errcode=spam_check[0],
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
stream_id = await self._make_and_store_3pid_invite(
|
||||
requester,
|
||||
|
|
|
@ -35,6 +35,7 @@ from typing_extensions import ParamSpec
|
|||
from twisted.internet import defer
|
||||
from twisted.web.resource import Resource
|
||||
|
||||
from synapse.api import errors
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.events import EventBase
|
||||
from synapse.events.presence_router import (
|
||||
|
|
|
@ -17,7 +17,6 @@ import itertools
|
|||
import logging
|
||||
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple, Union
|
||||
|
||||
import attr
|
||||
from prometheus_client import Counter
|
||||
|
||||
from synapse.api.constants import EventTypes, Membership, RelationTypes
|
||||
|
@ -26,13 +25,11 @@ from synapse.events import EventBase, relation_from_event
|
|||
from synapse.events.snapshot import EventContext
|
||||
from synapse.state import POWER_KEY
|
||||
from synapse.storage.databases.main.roommember import EventIdMembership
|
||||
from synapse.util.async_helpers import Linearizer
|
||||
from synapse.util.caches import CacheMetric, register_cache
|
||||
from synapse.util.caches.descriptors import lru_cache
|
||||
from synapse.util.caches.lrucache import LruCache
|
||||
from synapse.storage.state import StateFilter
|
||||
from synapse.util.caches import register_cache
|
||||
from synapse.util.metrics import measure_func
|
||||
from synapse.visibility import filter_event_for_clients_with_state
|
||||
|
||||
from ..storage.state import StateFilter
|
||||
from .push_rule_evaluator import PushRuleEvaluatorForEvent
|
||||
|
||||
if TYPE_CHECKING:
|
||||
|
@ -48,15 +45,6 @@ push_rules_state_size_counter = Counter(
|
|||
"synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter", ""
|
||||
)
|
||||
|
||||
# Measures whether we use the fast path of using state deltas, or if we have to
|
||||
# recalculate from scratch
|
||||
push_rules_delta_state_cache_metric = register_cache(
|
||||
"cache",
|
||||
"push_rules_delta_state_cache_metric",
|
||||
cache=[], # Meaningless size, as this isn't a cache that stores values
|
||||
resizable=False,
|
||||
)
|
||||
|
||||
|
||||
STATE_EVENT_TYPES_TO_MARK_UNREAD = {
|
||||
EventTypes.Topic,
|
||||
|
@ -111,10 +99,6 @@ class BulkPushRuleEvaluator:
|
|||
self.clock = hs.get_clock()
|
||||
self._event_auth_handler = hs.get_event_auth_handler()
|
||||
|
||||
# Used by `RulesForRoom` to ensure only one thing mutates the cache at a
|
||||
# time. Keyed off room_id.
|
||||
self._rules_linearizer = Linearizer(name="rules_for_room")
|
||||
|
||||
self.room_push_rule_cache_metrics = register_cache(
|
||||
"cache",
|
||||
"room_push_rule_cache",
|
||||
|
@ -126,48 +110,48 @@ class BulkPushRuleEvaluator:
|
|||
self._relations_match_enabled = self.hs.config.experimental.msc3772_enabled
|
||||
|
||||
async def _get_rules_for_event(
|
||||
self, event: EventBase, context: EventContext
|
||||
self,
|
||||
event: EventBase,
|
||||
) -> Dict[str, List[Dict[str, Any]]]:
|
||||
"""This gets the rules for all users in the room at the time of the event,
|
||||
as well as the push rules for the invitee if the event is an invite.
|
||||
"""Get the push rules for all users who may need to be notified about
|
||||
the event.
|
||||
|
||||
Note: this does not check if the user is allowed to see the event.
|
||||
|
||||
Returns:
|
||||
dict of user_id -> push_rules
|
||||
Mapping of user ID to their push rules.
|
||||
"""
|
||||
room_id = event.room_id
|
||||
# We get the users who may need to be notified by first fetching the
|
||||
# local users currently in the room, finding those that have push rules,
|
||||
# and *then* checking which users are actually allowed to see the event.
|
||||
#
|
||||
# The alternative is to first fetch all users that were joined at the
|
||||
# event, but that requires fetching the full state at the event, which
|
||||
# may be expensive for large rooms with few local users.
|
||||
|
||||
rules_for_room_data = self._get_rules_for_room(room_id)
|
||||
rules_for_room = RulesForRoom(
|
||||
hs=self.hs,
|
||||
room_id=room_id,
|
||||
rules_for_room_cache=self._get_rules_for_room.cache,
|
||||
room_push_rule_cache_metrics=self.room_push_rule_cache_metrics,
|
||||
linearizer=self._rules_linearizer,
|
||||
cached_data=rules_for_room_data,
|
||||
)
|
||||
|
||||
rules_by_user = await rules_for_room.get_rules(event, context)
|
||||
local_users = await self.store.get_local_users_in_room(event.room_id)
|
||||
|
||||
# if this event is an invite event, we may need to run rules for the user
|
||||
# who's been invited, otherwise they won't get told they've been invited
|
||||
if event.type == "m.room.member" and event.content["membership"] == "invite":
|
||||
if event.type == EventTypes.Member and event.membership == Membership.INVITE:
|
||||
invited = event.state_key
|
||||
if invited and self.hs.is_mine_id(invited):
|
||||
rules_by_user = dict(rules_by_user)
|
||||
rules_by_user[invited] = await self.store.get_push_rules_for_user(
|
||||
invited
|
||||
)
|
||||
if invited and self.hs.is_mine_id(invited) and invited not in local_users:
|
||||
local_users = list(local_users)
|
||||
local_users.append(invited)
|
||||
|
||||
rules_by_user = await self.store.bulk_get_push_rules(local_users)
|
||||
|
||||
logger.debug("Users in room: %s", local_users)
|
||||
|
||||
if logger.isEnabledFor(logging.DEBUG):
|
||||
logger.debug(
|
||||
"Returning push rules for %r %r",
|
||||
event.room_id,
|
||||
list(rules_by_user.keys()),
|
||||
)
|
||||
|
||||
return rules_by_user
|
||||
|
||||
@lru_cache()
|
||||
def _get_rules_for_room(self, room_id: str) -> "RulesForRoomData":
|
||||
"""Get the current RulesForRoomData object for the given room id"""
|
||||
# It's important that the RulesForRoomData object gets added to self._get_rules_for_room.cache
|
||||
# before any lookup methods get called on it as otherwise there may be
|
||||
# a race if invalidate_all gets called (which assumes it's in the cache)
|
||||
return RulesForRoomData()
|
||||
|
||||
async def _get_power_levels_and_sender_level(
|
||||
self, event: EventBase, context: EventContext
|
||||
) -> Tuple[dict, int]:
|
||||
|
@ -262,10 +246,12 @@ class BulkPushRuleEvaluator:
|
|||
|
||||
count_as_unread = _should_count_as_unread(event, context)
|
||||
|
||||
rules_by_user = await self._get_rules_for_event(event, context)
|
||||
rules_by_user = await self._get_rules_for_event(event)
|
||||
actions_by_user: Dict[str, List[Union[dict, str]]] = {}
|
||||
|
||||
room_members = await self.store.get_joined_users_from_context(event, context)
|
||||
room_member_count = await self.store.get_number_joined_users_in_room(
|
||||
event.room_id
|
||||
)
|
||||
|
||||
(
|
||||
power_levels,
|
||||
|
@ -278,30 +264,36 @@ class BulkPushRuleEvaluator:
|
|||
|
||||
evaluator = PushRuleEvaluatorForEvent(
|
||||
event,
|
||||
len(room_members),
|
||||
room_member_count,
|
||||
sender_power_level,
|
||||
power_levels,
|
||||
relations,
|
||||
self._relations_match_enabled,
|
||||
)
|
||||
|
||||
# If the event is not a state event check if any users ignore the sender.
|
||||
if not event.is_state():
|
||||
ignorers = await self.store.ignored_by(event.sender)
|
||||
else:
|
||||
ignorers = frozenset()
|
||||
users = rules_by_user.keys()
|
||||
profiles = await self.store.get_subset_users_in_room_with_profiles(
|
||||
event.room_id, users
|
||||
)
|
||||
|
||||
# This is a check for the case where a user joins a room without being
|
||||
# allowed to see history, and then the server receives a delayed event
|
||||
# from before the user joined, which they should not be pushed for
|
||||
uids_with_visibility = await filter_event_for_clients_with_state(
|
||||
self.store, users, event, context
|
||||
)
|
||||
|
||||
for uid, rules in rules_by_user.items():
|
||||
if event.sender == uid:
|
||||
continue
|
||||
|
||||
if uid in ignorers:
|
||||
if uid not in uids_with_visibility:
|
||||
continue
|
||||
|
||||
display_name = None
|
||||
profile_info = room_members.get(uid)
|
||||
if profile_info:
|
||||
display_name = profile_info.display_name
|
||||
profile = profiles.get(uid)
|
||||
if profile:
|
||||
display_name = profile.display_name
|
||||
|
||||
if not display_name:
|
||||
# Handle the case where we are pushing a membership event to
|
||||
|
@ -346,283 +338,3 @@ MemberMap = Dict[str, Optional[EventIdMembership]]
|
|||
Rule = Dict[str, dict]
|
||||
RulesByUser = Dict[str, List[Rule]]
|
||||
StateGroup = Union[object, int]
|
||||
|
||||
|
||||
@attr.s(slots=True, auto_attribs=True)
|
||||
class RulesForRoomData:
|
||||
"""The data stored in the cache by `RulesForRoom`.
|
||||
|
||||
We don't store `RulesForRoom` directly in the cache as we want our caches to
|
||||
*only* include data, and not references to e.g. the data stores.
|
||||
"""
|
||||
|
||||
# event_id -> EventIdMembership
|
||||
member_map: MemberMap = attr.Factory(dict)
|
||||
# user_id -> rules
|
||||
rules_by_user: RulesByUser = attr.Factory(dict)
|
||||
|
||||
# The last state group we updated the caches for. If the state_group of
|
||||
# a new event comes along, we know that we can just return the cached
|
||||
# result.
|
||||
# On invalidation of the rules themselves (if the user changes them),
|
||||
# we invalidate everything and set state_group to `object()`
|
||||
state_group: StateGroup = attr.Factory(object)
|
||||
|
||||
# A sequence number to keep track of when we're allowed to update the
|
||||
# cache. We bump the sequence number when we invalidate the cache. If
|
||||
# the sequence number changes while we're calculating stuff we should
|
||||
# not update the cache with it.
|
||||
sequence: int = 0
|
||||
|
||||
# A cache of user_ids that we *know* aren't interesting, e.g. user_ids
|
||||
# owned by AS's, or remote users, etc. (I.e. users we will never need to
|
||||
# calculate push for)
|
||||
# These never need to be invalidated as we will never set up push for
|
||||
# them.
|
||||
uninteresting_user_set: Set[str] = attr.Factory(set)
|
||||
|
||||
|
||||
class RulesForRoom:
|
||||
"""Caches push rules for users in a room.
|
||||
|
||||
This efficiently handles users joining/leaving the room by not invalidating
|
||||
the entire cache for the room.
|
||||
|
||||
A new instance is constructed for each call to
|
||||
`BulkPushRuleEvaluator._get_rules_for_event`, with the cached data from
|
||||
previous calls passed in.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
hs: "HomeServer",
|
||||
room_id: str,
|
||||
rules_for_room_cache: LruCache,
|
||||
room_push_rule_cache_metrics: CacheMetric,
|
||||
linearizer: Linearizer,
|
||||
cached_data: RulesForRoomData,
|
||||
):
|
||||
"""
|
||||
Args:
|
||||
hs: The HomeServer object.
|
||||
room_id: The room ID.
|
||||
rules_for_room_cache: The cache object that caches these
|
||||
RoomsForUser objects.
|
||||
room_push_rule_cache_metrics: The metrics object
|
||||
linearizer: The linearizer used to ensure only one thing mutates
|
||||
the cache at a time. Keyed off room_id
|
||||
cached_data: Cached data from previous calls to `self.get_rules`,
|
||||
can be mutated.
|
||||
"""
|
||||
self.room_id = room_id
|
||||
self.is_mine_id = hs.is_mine_id
|
||||
self.store = hs.get_datastores().main
|
||||
self.room_push_rule_cache_metrics = room_push_rule_cache_metrics
|
||||
|
||||
# Used to ensure only one thing mutates the cache at a time. Keyed off
|
||||
# room_id.
|
||||
self.linearizer = linearizer
|
||||
|
||||
self.data = cached_data
|
||||
|
||||
# We need to be clever on the invalidating caches callbacks, as
|
||||
# otherwise the invalidation callback holds a reference to the object,
|
||||
# potentially causing it to leak.
|
||||
# To get around this we pass a function that, on invalidation, looks up
|
||||
# the RoomsForUser entry in the cache, rather than keeping a reference
|
||||
# to self around in the callback.
|
||||
self.invalidate_all_cb = _Invalidation(rules_for_room_cache, room_id)
|
||||
|
||||
async def get_rules(
|
||||
self, event: EventBase, context: EventContext
|
||||
) -> Dict[str, List[Dict[str, dict]]]:
|
||||
"""Given an event context return the rules for all users who are
|
||||
currently in the room.
|
||||
"""
|
||||
state_group = context.state_group
|
||||
|
||||
if state_group and self.data.state_group == state_group:
|
||||
logger.debug("Using cached rules for %r", self.room_id)
|
||||
self.room_push_rule_cache_metrics.inc_hits()
|
||||
return self.data.rules_by_user
|
||||
|
||||
async with self.linearizer.queue(self.room_id):
|
||||
if state_group and self.data.state_group == state_group:
|
||||
logger.debug("Using cached rules for %r", self.room_id)
|
||||
self.room_push_rule_cache_metrics.inc_hits()
|
||||
return self.data.rules_by_user
|
||||
|
||||
self.room_push_rule_cache_metrics.inc_misses()
|
||||
|
||||
ret_rules_by_user = {}
|
||||
missing_member_event_ids = {}
|
||||
if state_group and self.data.state_group == context.prev_group:
|
||||
# If we have a simple delta then we can reuse most of the previous
|
||||
# results.
|
||||
ret_rules_by_user = self.data.rules_by_user
|
||||
current_state_ids = context.delta_ids
|
||||
|
||||
push_rules_delta_state_cache_metric.inc_hits()
|
||||
else:
|
||||
current_state_ids = await context.get_current_state_ids()
|
||||
push_rules_delta_state_cache_metric.inc_misses()
|
||||
# Ensure the state IDs exist.
|
||||
assert current_state_ids is not None
|
||||
|
||||
push_rules_state_size_counter.inc(len(current_state_ids))
|
||||
|
||||
logger.debug(
|
||||
"Looking for member changes in %r %r", state_group, current_state_ids
|
||||
)
|
||||
|
||||
# Loop through to see which member events we've seen and have rules
|
||||
# for and which we need to fetch
|
||||
for key in current_state_ids:
|
||||
typ, user_id = key
|
||||
if typ != EventTypes.Member:
|
||||
continue
|
||||
|
||||
if user_id in self.data.uninteresting_user_set:
|
||||
continue
|
||||
|
||||
if not self.is_mine_id(user_id):
|
||||
self.data.uninteresting_user_set.add(user_id)
|
||||
continue
|
||||
|
||||
if self.store.get_if_app_services_interested_in_user(user_id):
|
||||
self.data.uninteresting_user_set.add(user_id)
|
||||
continue
|
||||
|
||||
event_id = current_state_ids[key]
|
||||
|
||||
res = self.data.member_map.get(event_id, None)
|
||||
if res:
|
||||
if res.membership == Membership.JOIN:
|
||||
rules = self.data.rules_by_user.get(res.user_id, None)
|
||||
if rules:
|
||||
ret_rules_by_user[res.user_id] = rules
|
||||
continue
|
||||
|
||||
# If a user has left a room we remove their push rule. If they
|
||||
# joined then we re-add it later in _update_rules_with_member_event_ids
|
||||
ret_rules_by_user.pop(user_id, None)
|
||||
missing_member_event_ids[user_id] = event_id
|
||||
|
||||
if missing_member_event_ids:
|
||||
# If we have some member events we haven't seen, look them up
|
||||
# and fetch push rules for them if appropriate.
|
||||
logger.debug("Found new member events %r", missing_member_event_ids)
|
||||
await self._update_rules_with_member_event_ids(
|
||||
ret_rules_by_user, missing_member_event_ids, state_group, event
|
||||
)
|
||||
else:
|
||||
# The push rules didn't change but lets update the cache anyway
|
||||
self.update_cache(
|
||||
self.data.sequence,
|
||||
members={}, # There were no membership changes
|
||||
rules_by_user=ret_rules_by_user,
|
||||
state_group=state_group,
|
||||
)
|
||||
|
||||
if logger.isEnabledFor(logging.DEBUG):
|
||||
logger.debug(
|
||||
"Returning push rules for %r %r", self.room_id, ret_rules_by_user.keys()
|
||||
)
|
||||
return ret_rules_by_user
|
||||
|
||||
async def _update_rules_with_member_event_ids(
|
||||
self,
|
||||
ret_rules_by_user: Dict[str, list],
|
||||
member_event_ids: Dict[str, str],
|
||||
state_group: Optional[int],
|
||||
event: EventBase,
|
||||
) -> None:
|
||||
"""Update the partially filled rules_by_user dict by fetching rules for
|
||||
any newly joined users in the `member_event_ids` list.
|
||||
|
||||
Args:
|
||||
ret_rules_by_user: Partially filled dict of push rules. Gets
|
||||
updated with any new rules.
|
||||
member_event_ids: Dict of user id to event id for membership events
|
||||
that have happened since the last time we filled rules_by_user
|
||||
state_group: The state group we are currently computing push rules
|
||||
for. Used when updating the cache.
|
||||
event: The event we are currently computing push rules for.
|
||||
"""
|
||||
sequence = self.data.sequence
|
||||
|
||||
members = await self.store.get_membership_from_event_ids(
|
||||
member_event_ids.values()
|
||||
)
|
||||
|
||||
# If the event is a join event then it will be in current state events
|
||||
# map but not in the DB, so we have to explicitly insert it.
|
||||
if event.type == EventTypes.Member:
|
||||
for event_id in member_event_ids.values():
|
||||
if event_id == event.event_id:
|
||||
members[event_id] = EventIdMembership(
|
||||
user_id=event.state_key, membership=event.membership
|
||||
)
|
||||
|
||||
if logger.isEnabledFor(logging.DEBUG):
|
||||
logger.debug("Found members %r: %r", self.room_id, members.values())
|
||||
|
||||
joined_user_ids = {
|
||||
entry.user_id
|
||||
for entry in members.values()
|
||||
if entry and entry.membership == Membership.JOIN
|
||||
}
|
||||
|
||||
logger.debug("Joined: %r", joined_user_ids)
|
||||
|
||||
# Previously we only considered users with pushers or read receipts in that
|
||||
# room. We can't do this anymore because we use push actions to calculate unread
|
||||
# counts, which don't rely on the user having pushers or sent a read receipt into
|
||||
# the room. Therefore we just need to filter for local users here.
|
||||
user_ids = list(filter(self.is_mine_id, joined_user_ids))
|
||||
|
||||
rules_by_user = await self.store.bulk_get_push_rules(
|
||||
user_ids, on_invalidate=self.invalidate_all_cb
|
||||
)
|
||||
|
||||
ret_rules_by_user.update(
|
||||
item for item in rules_by_user.items() if item[0] is not None
|
||||
)
|
||||
|
||||
self.update_cache(sequence, members, ret_rules_by_user, state_group)
|
||||
|
||||
def update_cache(
|
||||
self,
|
||||
sequence: int,
|
||||
members: MemberMap,
|
||||
rules_by_user: RulesByUser,
|
||||
state_group: StateGroup,
|
||||
) -> None:
|
||||
if sequence == self.data.sequence:
|
||||
self.data.member_map.update(members)
|
||||
self.data.rules_by_user = rules_by_user
|
||||
self.data.state_group = state_group
|
||||
|
||||
|
||||
@attr.attrs(slots=True, frozen=True, auto_attribs=True)
|
||||
class _Invalidation:
|
||||
# _Invalidation is passed as an `on_invalidate` callback to bulk_get_push_rules,
|
||||
# which means that it is stored on the bulk_get_push_rules cache entry. In order
|
||||
# to ensure that we don't accumulate lots of redundant callbacks on the cache entry,
|
||||
# we need to ensure that two _Invalidation objects are "equal" if they refer to the
|
||||
# same `cache` and `room_id`.
|
||||
#
|
||||
# attrs provides suitable __hash__ and __eq__ methods, provided we remember to
|
||||
# set `frozen=True`.
|
||||
|
||||
cache: LruCache
|
||||
room_id: str
|
||||
|
||||
def __call__(self) -> None:
|
||||
rules_data = self.cache.get(self.room_id, None, update_metrics=False)
|
||||
if rules_data:
|
||||
rules_data.sequence += 1
|
||||
rules_data.state_group = object()
|
||||
rules_data.member_map = {}
|
||||
rules_data.rules_by_user = {}
|
||||
push_rules_invalidation_counter.inc()
|
||||
|
|
|
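The evaluator now starts from the (cached) list of local users in the room, adds an invited local user if necessary, pulls their rules in one `bulk_get_push_rules` call, and only afterwards filters by event visibility. A condensed sketch of that flow, with the store methods assumed to behave as defined elsewhere in this diff:

```python
from typing import Any, Dict, List


async def rules_for_event(store: Any, hs: Any, event: Any) -> Dict[str, List[Any]]:
    """Condensed version of the new _get_rules_for_event logic above."""
    local_users = await store.get_local_users_in_room(event.room_id)

    # An invitee may not be joined yet but still needs to be told about the invite.
    if event.type == "m.room.member" and event.membership == "invite":
        invited = event.state_key
        if invited and hs.is_mine_id(invited) and invited not in local_users:
            local_users = list(local_users)
            local_users.append(invited)

    # One bulk query instead of per-user lookups; visibility filtering happens later.
    return await store.bulk_get_push_rules(local_users)
```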
@ -25,6 +25,7 @@ from synapse.replication.http import (
|
|||
push,
|
||||
register,
|
||||
send_event,
|
||||
state,
|
||||
streams,
|
||||
)
|
||||
|
||||
|
@ -48,6 +49,7 @@ class ReplicationRestResource(JsonResource):
|
|||
streams.register_servlets(hs, self)
|
||||
account_data.register_servlets(hs, self)
|
||||
push.register_servlets(hs, self)
|
||||
state.register_servlets(hs, self)
|
||||
|
||||
# The following can't currently be instantiated on workers.
|
||||
if hs.config.worker.worker_app is None:
|
||||
|
|
|
@ -0,0 +1,75 @@
|
|||
# Copyright 2022 The Matrix.org Foundation C.I.C.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Tuple
|
||||
|
||||
from twisted.web.server import Request
|
||||
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.http.server import HttpServer
|
||||
from synapse.replication.http._base import ReplicationEndpoint
|
||||
from synapse.types import JsonDict
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class ReplicationUpdateCurrentStateRestServlet(ReplicationEndpoint):
|
||||
"""Recalculates the current state for a room, and persists it.
|
||||
|
||||
The API looks like:
|
||||
|
||||
POST /_synapse/replication/update_current_state/:room_id
|
||||
|
||||
{}
|
||||
|
||||
200 OK
|
||||
|
||||
{}
|
||||
"""
|
||||
|
||||
NAME = "update_current_state"
|
||||
PATH_ARGS = ("room_id",)
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
super().__init__(hs)
|
||||
|
||||
self._state_handler = hs.get_state_handler()
|
||||
self._events_shard_config = hs.config.worker.events_shard_config
|
||||
self._instance_name = hs.get_instance_name()
|
||||
|
||||
@staticmethod
|
||||
async def _serialize_payload(room_id: str) -> JsonDict: # type: ignore[override]
|
||||
return {}
|
||||
|
||||
async def _handle_request( # type: ignore[override]
|
||||
self, request: Request, room_id: str
|
||||
) -> Tuple[int, JsonDict]:
|
||||
writer_instance = self._events_shard_config.get_instance(room_id)
|
||||
if writer_instance != self._instance_name:
|
||||
raise SynapseError(
|
||||
400, "/update_current_state request was routed to the wrong worker"
|
||||
)
|
||||
|
||||
await self._state_handler.update_current_state(room_id)
|
||||
|
||||
return 200, {}
|
||||
|
||||
|
||||
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
|
||||
if hs.get_instance_name() in hs.config.worker.writers.events:
|
||||
ReplicationUpdateCurrentStateRestServlet(hs).register(http_server)
|
|
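Only the events stream writer registers this servlet, so any other worker must ask it to do the recalculation. A condensed sketch of the calling side (mirroring the `StateHandler` change later in this diff); the `client` argument is assumed to have been built with `ReplicationUpdateCurrentStateRestServlet.make_client(hs)`:

```python
from typing import Any, Callable


async def update_current_state(hs: Any, client: Callable, room_id: str) -> None:
    """Recalculate a room's current state on whichever worker persists its events."""
    writer_instance = hs.config.worker.events_shard_config.get_instance(room_id)
    if writer_instance != hs.get_instance_name():
        # Not the events writer for this room: hand the work over via replication.
        await client(instance_name=writer_instance, room_id=room_id)
        return

    # We are the writer: recalculate and persist the state locally.
    await hs.get_storage_controllers().persistence.update_current_state(room_id)
```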
@ -154,7 +154,9 @@ class MediaStorage:
|
|||
# Note that we'll delete the stored media, due to the
|
||||
# try/except below. The media also won't be stored in
|
||||
# the DB.
|
||||
raise SpamMediaException(errcode=spam_check)
|
||||
# We currently ignore any additional field returned by
|
||||
# the spam-check API.
|
||||
raise SpamMediaException(errcode=spam_check[0])
|
||||
|
||||
for provider in self.storage_providers:
|
||||
await provider.store_file(path, file_info)
|
||||
|
|
|
@ -43,6 +43,7 @@ from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, StateResolutionVersio
|
|||
from synapse.events import EventBase
|
||||
from synapse.events.snapshot import EventContext
|
||||
from synapse.logging.context import ContextResourceUsage
|
||||
from synapse.replication.http.state import ReplicationUpdateCurrentStateRestServlet
|
||||
from synapse.state import v1, v2
|
||||
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
|
||||
from synapse.storage.roommember import ProfileInfo
|
||||
|
@ -129,6 +130,12 @@ class StateHandler:
|
|||
self.hs = hs
|
||||
self._state_resolution_handler = hs.get_state_resolution_handler()
|
||||
self._storage_controllers = hs.get_storage_controllers()
|
||||
self._events_shard_config = hs.config.worker.events_shard_config
|
||||
self._instance_name = hs.get_instance_name()
|
||||
|
||||
self._update_current_state_client = (
|
||||
ReplicationUpdateCurrentStateRestServlet.make_client(hs)
|
||||
)
|
||||
|
||||
async def get_current_state_ids(
|
||||
self,
|
||||
|
@ -423,6 +430,24 @@ class StateHandler:
|
|||
|
||||
return {key: state_map[ev_id] for key, ev_id in new_state.items()}
|
||||
|
||||
async def update_current_state(self, room_id: str) -> None:
|
||||
"""Recalculates the current state for a room, and persists it.
|
||||
|
||||
Raises:
|
||||
SynapseError(502): if all attempts to connect to the event persister worker
|
||||
fail
|
||||
"""
|
||||
writer_instance = self._events_shard_config.get_instance(room_id)
|
||||
if writer_instance != self._instance_name:
|
||||
await self._update_current_state_client(
|
||||
instance_name=writer_instance,
|
||||
room_id=room_id,
|
||||
)
|
||||
return
|
||||
|
||||
assert self._storage_controllers.persistence is not None
|
||||
await self._storage_controllers.persistence.update_current_state(room_id)
|
||||
|
||||
|
||||
@attr.s(slots=True, auto_attribs=True)
|
||||
class _StateResMetrics:
|
||||
|
|
|
@ -75,6 +75,15 @@ class SQLBaseStore(metaclass=ABCMeta):
|
|||
self._attempt_to_invalidate_cache(
|
||||
"get_users_in_room_with_profiles", (room_id,)
|
||||
)
|
||||
self._attempt_to_invalidate_cache(
|
||||
"get_number_joined_users_in_room", (room_id,)
|
||||
)
|
||||
self._attempt_to_invalidate_cache("get_local_users_in_room", (room_id,))
|
||||
|
||||
for user_id in members_changed:
|
||||
self._attempt_to_invalidate_cache(
|
||||
"get_user_in_room_with_profile", (room_id, user_id)
|
||||
)
|
||||
|
||||
# Purge other caches based on room state.
|
||||
self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
|
||||
|
|
|
@ -22,6 +22,7 @@ from typing import (
|
|||
Any,
|
||||
Awaitable,
|
||||
Callable,
|
||||
ClassVar,
|
||||
Collection,
|
||||
Deque,
|
||||
Dict,
|
||||
|
@ -33,6 +34,7 @@ from typing import (
|
|||
Set,
|
||||
Tuple,
|
||||
TypeVar,
|
||||
Union,
|
||||
)
|
||||
|
||||
import attr
|
||||
|
@ -111,9 +113,43 @@ times_pruned_extremities = Counter(
|
|||
|
||||
|
||||
@attr.s(auto_attribs=True, slots=True)
|
||||
class _EventPersistQueueItem:
|
||||
class _PersistEventsTask:
|
||||
"""A batch of events to persist."""
|
||||
|
||||
name: ClassVar[str] = "persist_event_batch" # used for opentracing
|
||||
|
||||
events_and_contexts: List[Tuple[EventBase, EventContext]]
|
||||
backfilled: bool
|
||||
|
||||
def try_merge(self, task: "_EventPersistQueueTask") -> bool:
|
||||
"""Batches events with the same backfilled option together."""
|
||||
if (
|
||||
not isinstance(task, _PersistEventsTask)
|
||||
or self.backfilled != task.backfilled
|
||||
):
|
||||
return False
|
||||
|
||||
self.events_and_contexts.extend(task.events_and_contexts)
|
||||
return True
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True, slots=True)
|
||||
class _UpdateCurrentStateTask:
|
||||
"""A room whose current state needs recalculating."""
|
||||
|
||||
name: ClassVar[str] = "update_current_state" # used for opentracing
|
||||
|
||||
def try_merge(self, task: "_EventPersistQueueTask") -> bool:
|
||||
"""Deduplicates consecutive recalculations of current state."""
|
||||
return isinstance(task, _UpdateCurrentStateTask)
|
||||
|
||||
|
||||
_EventPersistQueueTask = Union[_PersistEventsTask, _UpdateCurrentStateTask]
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True, slots=True)
|
||||
class _EventPersistQueueItem:
|
||||
task: _EventPersistQueueTask
|
||||
deferred: ObservableDeferred
|
||||
|
||||
parent_opentracing_span_contexts: List = attr.ib(factory=list)
|
||||
|
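The two task types above decide for themselves whether an incoming task can be folded into the one at the tail of the queue: event batches merge only when their `backfilled` flag matches, and consecutive current-state recalculations collapse into a single item. A self-contained restatement of those merge rules, with events stubbed out as plain strings:

```python
from dataclasses import dataclass
from typing import Any, List, Tuple, Union


@dataclass
class PersistEventsTask:
    events_and_contexts: List[Tuple[Any, Any]]
    backfilled: bool

    def try_merge(self, task: "QueueTask") -> bool:
        # Only batch events together when the backfilled flag matches.
        if not isinstance(task, PersistEventsTask) or self.backfilled != task.backfilled:
            return False
        self.events_and_contexts.extend(task.events_and_contexts)
        return True


@dataclass
class UpdateCurrentStateTask:
    def try_merge(self, task: "QueueTask") -> bool:
        # Consecutive recalculations of current state are deduplicated.
        return isinstance(task, UpdateCurrentStateTask)


QueueTask = Union[PersistEventsTask, UpdateCurrentStateTask]

batch = PersistEventsTask([("ev1", "ctx1")], backfilled=False)
assert batch.try_merge(PersistEventsTask([("ev2", "ctx2")], backfilled=False))
assert batch.events_and_contexts == [("ev1", "ctx1"), ("ev2", "ctx2")]
assert not batch.try_merge(PersistEventsTask([("ev3", "ctx3")], backfilled=True))
assert UpdateCurrentStateTask().try_merge(UpdateCurrentStateTask())
```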
@ -127,14 +163,16 @@ _PersistResult = TypeVar("_PersistResult")
|
|||
|
||||
|
||||
class _EventPeristenceQueue(Generic[_PersistResult]):
|
||||
"""Queues up events so that they can be persisted in bulk with only one
|
||||
concurrent transaction per room.
|
||||
"""Queues up tasks so that they can be processed with only one concurrent
|
||||
transaction per room.
|
||||
|
||||
Tasks can be bulk persistence of events or recalculation of a room's current state.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
per_item_callback: Callable[
|
||||
[List[Tuple[EventBase, EventContext]], bool],
|
||||
[str, _EventPersistQueueTask],
|
||||
Awaitable[_PersistResult],
|
||||
],
|
||||
):
|
||||
|
@ -150,18 +188,17 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
|
|||
async def add_to_queue(
|
||||
self,
|
||||
room_id: str,
|
||||
events_and_contexts: Iterable[Tuple[EventBase, EventContext]],
|
||||
backfilled: bool,
|
||||
task: _EventPersistQueueTask,
|
||||
) -> _PersistResult:
|
||||
"""Add events to the queue, with the given persist_event options.
|
||||
"""Add a task to the queue.
|
||||
|
||||
If we are not already processing events in this room, starts off a background
|
||||
If we are not already processing tasks in this room, starts off a background
|
||||
process to do so, calling the per_item_callback for each item.
|
||||
|
||||
Args:
|
||||
room_id (str):
|
||||
events_and_contexts (list[(EventBase, EventContext)]):
|
||||
backfilled (bool):
|
||||
task (_EventPersistQueueTask): A _PersistEventsTask or
|
||||
_UpdateCurrentStateTask to process.
|
||||
|
||||
Returns:
|
||||
the result returned by the `_per_item_callback` passed to
|
||||
|
@ -169,26 +206,20 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
|
|||
"""
|
||||
queue = self._event_persist_queues.setdefault(room_id, deque())
|
||||
|
||||
# if the last item in the queue has the same `backfilled` setting,
|
||||
# we can just add these new events to that item.
|
||||
if queue and queue[-1].backfilled == backfilled:
|
||||
if queue and queue[-1].task.try_merge(task):
|
||||
# the new task has been merged into the last task in the queue
|
||||
end_item = queue[-1]
|
||||
else:
|
||||
# need to make a new queue item
|
||||
deferred: ObservableDeferred[_PersistResult] = ObservableDeferred(
|
||||
defer.Deferred(), consumeErrors=True
|
||||
)
|
||||
|
||||
end_item = _EventPersistQueueItem(
|
||||
events_and_contexts=[],
|
||||
backfilled=backfilled,
|
||||
task=task,
|
||||
deferred=deferred,
|
||||
)
|
||||
queue.append(end_item)
|
||||
|
||||
# add our events to the queue item
|
||||
end_item.events_and_contexts.extend(events_and_contexts)
|
||||
|
||||
# also add our active opentracing span to the item so that we get a link back
|
||||
span = opentracing.active_span()
|
||||
if span:
|
||||
|
@ -202,7 +233,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
|
|||
|
||||
# add another opentracing span which links to the persist trace.
|
||||
with opentracing.start_active_span_follows_from(
|
||||
"persist_event_batch_complete", (end_item.opentracing_span_context,)
|
||||
f"{task.name}_complete", (end_item.opentracing_span_context,)
|
||||
):
|
||||
pass
|
||||
|
||||
|
@ -234,16 +265,14 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
|
|||
for item in queue:
|
||||
try:
|
||||
with opentracing.start_active_span_follows_from(
|
||||
"persist_event_batch",
|
||||
item.task.name,
|
||||
item.parent_opentracing_span_contexts,
|
||||
inherit_force_tracing=True,
|
||||
) as scope:
|
||||
if scope:
|
||||
item.opentracing_span_context = scope.span.context
|
||||
|
||||
ret = await self._per_item_callback(
|
||||
item.events_and_contexts, item.backfilled
|
||||
)
|
||||
ret = await self._per_item_callback(room_id, item.task)
|
||||
except Exception:
|
||||
with PreserveLoggingContext():
|
||||
item.deferred.errback()
|
||||
|
@ -292,9 +321,32 @@ class EventsPersistenceStorageController:
|
|||
self._clock = hs.get_clock()
|
||||
self._instance_name = hs.get_instance_name()
|
||||
self.is_mine_id = hs.is_mine_id
|
||||
self._event_persist_queue = _EventPeristenceQueue(self._persist_event_batch)
|
||||
self._event_persist_queue = _EventPeristenceQueue(
|
||||
self._process_event_persist_queue_task
|
||||
)
|
||||
self._state_resolution_handler = hs.get_state_resolution_handler()
|
||||
|
||||
async def _process_event_persist_queue_task(
|
||||
self,
|
||||
room_id: str,
|
||||
task: _EventPersistQueueTask,
|
||||
) -> Dict[str, str]:
|
||||
"""Callback for the _event_persist_queue
|
||||
|
||||
Returns:
|
||||
A dictionary of event ID to event ID we didn't persist as we already
|
||||
had another event persisted with the same TXN ID.
|
||||
"""
|
||||
if isinstance(task, _PersistEventsTask):
|
||||
return await self._persist_event_batch(room_id, task)
|
||||
elif isinstance(task, _UpdateCurrentStateTask):
|
||||
await self._update_current_state(room_id, task)
|
||||
return {}
|
||||
else:
|
||||
raise AssertionError(
|
||||
f"Found an unexpected task type in event persistence queue: {task}"
|
||||
)
|
||||
|
||||
@opentracing.trace
|
||||
async def persist_events(
|
||||
self,
|
||||
|
@ -329,7 +381,8 @@ class EventsPersistenceStorageController:
|
|||
) -> Dict[str, str]:
|
||||
room_id, evs_ctxs = item
|
||||
return await self._event_persist_queue.add_to_queue(
|
||||
room_id, evs_ctxs, backfilled=backfilled
|
||||
room_id,
|
||||
_PersistEventsTask(events_and_contexts=evs_ctxs, backfilled=backfilled),
|
||||
)
|
||||
|
||||
ret_vals = await yieldable_gather_results(enqueue, partitioned.items())
|
||||
|
@ -376,7 +429,10 @@ class EventsPersistenceStorageController:
|
|||
# event was deduplicated. (The dict may also include other entries if
|
||||
# the event was persisted in a batch with other events.)
|
||||
replaced_events = await self._event_persist_queue.add_to_queue(
|
||||
event.room_id, [(event, context)], backfilled=backfilled
|
||||
event.room_id,
|
||||
_PersistEventsTask(
|
||||
events_and_contexts=[(event, context)], backfilled=backfilled
|
||||
),
|
||||
)
|
||||
replaced_event = replaced_events.get(event.event_id)
|
||||
if replaced_event:
|
||||
|
@ -391,20 +447,22 @@ class EventsPersistenceStorageController:
|
|||
|
||||
async def update_current_state(self, room_id: str) -> None:
|
||||
"""Recalculate the current state for a room, and persist it"""
|
||||
await self._event_persist_queue.add_to_queue(
|
||||
room_id,
|
||||
_UpdateCurrentStateTask(),
|
||||
)
|
||||
|
||||
async def _update_current_state(
|
||||
self, room_id: str, _task: _UpdateCurrentStateTask
|
||||
) -> None:
|
||||
"""Callback for the _event_persist_queue
|
||||
|
||||
Recalculates the current state for a room, and persists it.
|
||||
"""
|
||||
state = await self._calculate_current_state(room_id)
|
||||
delta = await self._calculate_state_delta(room_id, state)
|
||||
|
||||
# TODO(faster_joins): get a real stream ordering, to make this work correctly
|
||||
# across workers.
|
||||
# https://github.com/matrix-org/synapse/issues/12994
|
||||
#
|
||||
# TODO(faster_joins): this can race against event persistence, in which case we
|
||||
# will end up with incorrect state. Perhaps we should make this a job we
|
||||
# farm out to the event persister thread, somehow.
|
||||
# https://github.com/matrix-org/synapse/issues/13007
|
||||
#
|
||||
stream_id = self.main_store.get_room_max_stream_ordering()
|
||||
await self.persist_events_store.update_current_state(room_id, delta, stream_id)
|
||||
await self.persist_events_store.update_current_state(room_id, delta)
|
||||
|
||||
async def _calculate_current_state(self, room_id: str) -> StateMap[str]:
|
||||
"""Calculate the current state of a room, based on the forward extremities
|
||||
|
@ -449,9 +507,7 @@ class EventsPersistenceStorageController:
|
|||
return res.state
|
||||
|
||||
async def _persist_event_batch(
|
||||
self,
|
||||
events_and_contexts: List[Tuple[EventBase, EventContext]],
|
||||
backfilled: bool = False,
|
||||
self, _room_id: str, task: _PersistEventsTask
|
||||
) -> Dict[str, str]:
|
||||
"""Callback for the _event_persist_queue
|
||||
|
||||
|
@ -466,6 +522,9 @@ class EventsPersistenceStorageController:
|
|||
PartialStateConflictError: if attempting to persist a partial state event in
|
||||
a room that has been un-partial stated.
|
||||
"""
|
||||
events_and_contexts = task.events_and_contexts
|
||||
backfilled = task.backfilled
|
||||
|
||||
replaced_events: Dict[str, str] = {}
|
||||
if not events_and_contexts:
|
||||
return replaced_events
|
||||
|
|
|
@ -143,7 +143,6 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
|
|||
self._find_stream_orderings_for_times, 10 * 60 * 1000
|
||||
)
|
||||
|
||||
self._rotate_delay = 3
|
||||
self._rotate_count = 10000
|
||||
self._doing_notif_rotation = False
|
||||
if hs.config.worker.run_background_tasks:
|
||||
|
@ -847,7 +846,6 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
|
|||
)
|
||||
if caught_up:
|
||||
break
|
||||
await self.hs.get_clock().sleep(self._rotate_delay)
|
||||
|
||||
# Finally we clear out old event push actions.
|
||||
await self._remove_old_push_actions_that_have_rotated()
|
||||
|
@ -1016,9 +1014,14 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
|
|||
upd.stream_ordering
|
||||
FROM (
|
||||
SELECT user_id, room_id, count(*) as cnt,
|
||||
max(stream_ordering) as stream_ordering
|
||||
FROM event_push_actions
|
||||
WHERE ? < stream_ordering AND stream_ordering <= ?
|
||||
max(ea.stream_ordering) as stream_ordering
|
||||
FROM event_push_actions AS ea
|
||||
LEFT JOIN event_push_summary AS old USING (user_id, room_id)
|
||||
WHERE ? < ea.stream_ordering AND ea.stream_ordering <= ?
|
||||
AND (
|
||||
old.last_receipt_stream_ordering IS NULL
|
||||
OR old.last_receipt_stream_ordering < ea.stream_ordering
|
||||
)
|
||||
AND %s = 1
|
||||
GROUP BY user_id, room_id
|
||||
) AS upd
|
||||
|
@ -1109,7 +1112,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
|
|||
) -> bool:
|
||||
# We don't want to clear out too much at a time, so we bound our
|
||||
# deletes.
|
||||
batch_size = 10000
|
||||
batch_size = self._rotate_count
|
||||
|
||||
txn.execute(
|
||||
"""
|
||||
|
|
|
@ -1007,16 +1007,16 @@ class PersistEventsStore:
|
|||
self,
|
||||
room_id: str,
|
||||
state_delta: DeltaState,
|
||||
stream_id: int,
|
||||
) -> None:
|
||||
"""Update the current state stored in the datatabase for the given room"""
|
||||
|
||||
await self.db_pool.runInteraction(
|
||||
"update_current_state",
|
||||
self._update_current_state_txn,
|
||||
state_delta_by_room={room_id: state_delta},
|
||||
stream_id=stream_id,
|
||||
)
|
||||
async with self._stream_id_gen.get_next() as stream_ordering:
|
||||
await self.db_pool.runInteraction(
|
||||
"update_current_state",
|
||||
self._update_current_state_txn,
|
||||
state_delta_by_room={room_id: state_delta},
|
||||
stream_id=stream_ordering,
|
||||
)
|
||||
|
||||
def _update_current_state_txn(
|
||||
self,
|
||||
|
@ -1797,6 +1797,18 @@ class PersistEventsStore:
|
|||
self.store.get_invited_rooms_for_local_user.invalidate,
|
||||
(event.state_key,),
|
||||
)
|
||||
txn.call_after(
|
||||
self.store.get_local_users_in_room.invalidate,
|
||||
(event.room_id,),
|
||||
)
|
||||
txn.call_after(
|
||||
self.store.get_number_joined_users_in_room.invalidate,
|
||||
(event.room_id,),
|
||||
)
|
||||
txn.call_after(
|
||||
self.store.get_user_in_room_with_profile.invalidate,
|
||||
(event.room_id, event.state_key),
|
||||
)
|
||||
|
||||
# The `_get_membership_from_event_id` is immutable, except for the
|
||||
# case where we look up an event *before* persisting it.
|
||||
|
|
|
@ -212,6 +212,60 @@ class RoomMemberWorkerStore(EventsWorkerStore):
|
|||
txn.execute(sql, (room_id, Membership.JOIN))
|
||||
return [r[0] for r in txn]
|
||||
|
||||
@cached()
|
||||
def get_user_in_room_with_profile(
|
||||
self, room_id: str, user_id: str
|
||||
) -> Dict[str, ProfileInfo]:
|
||||
raise NotImplementedError()
|
||||
|
||||
@cachedList(
|
||||
cached_method_name="get_user_in_room_with_profile", list_name="user_ids"
|
||||
)
|
||||
async def get_subset_users_in_room_with_profiles(
|
||||
self, room_id: str, user_ids: Collection[str]
|
||||
) -> Dict[str, ProfileInfo]:
|
||||
"""Get a mapping from user ID to profile information for a list of users
|
||||
in a given room.
|
||||
|
||||
The profile information comes directly from this room's `m.room.member`
|
||||
events, and so may be specific to this room rather than part of a user's
|
||||
global profile. To avoid privacy leaks, the profile data should only be
|
||||
revealed to users who are already in this room.
|
||||
|
||||
Args:
|
||||
room_id: The ID of the room to retrieve the users of.
|
||||
user_ids: a list of users in the room to run the query for
|
||||
|
||||
Returns:
|
||||
A mapping from user ID to ProfileInfo.
|
||||
"""
|
||||
|
||||
def _get_subset_users_in_room_with_profiles(
|
||||
txn: LoggingTransaction,
|
||||
) -> Dict[str, ProfileInfo]:
|
||||
clause, ids = make_in_list_sql_clause(
|
||||
self.database_engine, "m.user_id", user_ids
|
||||
)
|
||||
|
||||
sql = """
|
||||
SELECT state_key, display_name, avatar_url FROM room_memberships as m
|
||||
INNER JOIN current_state_events as c
|
||||
ON m.event_id = c.event_id
|
||||
AND m.room_id = c.room_id
|
||||
AND m.user_id = c.state_key
|
||||
WHERE c.type = 'm.room.member' AND c.room_id = ? AND m.membership = ? AND %s
|
||||
""" % (
|
||||
clause,
|
||||
)
|
||||
txn.execute(sql, (room_id, Membership.JOIN, *ids))
|
||||
|
||||
return {r[0]: ProfileInfo(display_name=r[1], avatar_url=r[2]) for r in txn}
|
||||
|
||||
return await self.db_pool.runInteraction(
|
||||
"get_subset_users_in_room_with_profiles",
|
||||
_get_subset_users_in_room_with_profiles,
|
||||
)
|
||||
|
||||
@cached(max_entries=100000, iterable=True)
|
||||
async def get_users_in_room_with_profiles(
|
||||
self, room_id: str
|
||||
|
@ -337,6 +391,15 @@ class RoomMemberWorkerStore(EventsWorkerStore):
|
|||
"get_room_summary", _get_room_summary_txn
|
||||
)
|
||||
|
||||
@cached()
|
||||
async def get_number_joined_users_in_room(self, room_id: str) -> int:
|
||||
return await self.db_pool.simple_select_one_onecol(
|
||||
table="current_state_events",
|
||||
keyvalues={"room_id": room_id, "membership": Membership.JOIN},
|
||||
retcol="COUNT(*)",
|
||||
desc="get_number_joined_users_in_room",
|
||||
)
|
||||
|
||||
@cached()
|
||||
async def get_invited_rooms_for_local_user(
|
||||
self, user_id: str
|
||||
|
@ -416,6 +479,17 @@ class RoomMemberWorkerStore(EventsWorkerStore):
|
|||
user_id: str,
|
||||
membership_list: List[str],
|
||||
) -> List[RoomsForUser]:
|
||||
"""Get all the rooms for this *local* user where the membership for this user
|
||||
matches one in the membership list.
|
||||
|
||||
Args:
|
||||
user_id: The user ID.
|
||||
membership_list: A list of synapse.api.constants.Membership
|
||||
values which the user must be in.
|
||||
|
||||
Returns:
|
||||
The RoomsForUser entries for each room where the user's membership matches one of the given types.
|
||||
"""
|
||||
# Paranoia check.
|
||||
if not self.hs.is_mine_id(user_id):
|
||||
raise Exception(
|
||||
|
@ -444,6 +518,18 @@ class RoomMemberWorkerStore(EventsWorkerStore):
|
|||
|
||||
return results
|
||||
|
||||
@cached(iterable=True)
|
||||
async def get_local_users_in_room(self, room_id: str) -> List[str]:
|
||||
"""
|
||||
Retrieves a list of the current room members who are local to the server.
|
||||
"""
|
||||
return await self.db_pool.simple_select_onecol(
|
||||
table="local_current_membership",
|
||||
keyvalues={"room_id": room_id, "membership": Membership.JOIN},
|
||||
retcol="user_id",
|
||||
desc="get_local_users_in_room",
|
||||
)
|
||||
|
||||
async def get_local_current_membership_for_user_in_room(
|
||||
self, user_id: str, room_id: str
|
||||
) -> Tuple[Optional[str], Optional[str]]:
|
||||
|
|
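Taken together, the new accessors give the push-rule evaluator cheap, cached answers: who the local members are, how many users are joined, and room-specific profiles for just the users it cares about. A toy helper showing how they compose; the `store` argument is assumed to be the main datastore exposing the methods defined above:

```python
from typing import Any, Dict


async def summarise_local_members(store: Any, room_id: str) -> Dict[str, str]:
    """Map each local member to their room-specific display name, using the
    cached helpers defined above."""
    local_users = await store.get_local_users_in_room(room_id)

    # Room-scoped profiles for just these users; backed by a single cachedList query.
    profiles = await store.get_subset_users_in_room_with_profiles(room_id, local_users)

    return {user_id: (p.display_name or user_id) for user_id, p in profiles.items()}
```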
|
@ -13,16 +13,21 @@
|
|||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import logging
|
||||
from enum import Enum, auto
|
||||
from typing import Collection, Dict, FrozenSet, List, Optional, Tuple
|
||||
|
||||
import attr
|
||||
from typing_extensions import Final
|
||||
|
||||
from synapse.api.constants import EventTypes, HistoryVisibility, Membership
|
||||
from synapse.events import EventBase
|
||||
from synapse.events.snapshot import EventContext
|
||||
from synapse.events.utils import prune_event
|
||||
from synapse.storage.controllers import StorageControllers
|
||||
from synapse.storage.databases.main import DataStore
|
||||
from synapse.storage.state import StateFilter
|
||||
from synapse.types import RetentionPolicy, StateMap, get_domain_from_id
|
||||
from synapse.util import Clock
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
@ -102,153 +107,18 @@ async def filter_events_for_client(
|
|||
] = await storage.main.get_retention_policy_for_room(room_id)
|
||||
|
||||
def allowed(event: EventBase) -> Optional[EventBase]:
|
||||
"""
|
||||
Args:
|
||||
event: event to check
|
||||
|
||||
Returns:
|
||||
None if the user cannot see this event at all
|
||||
|
||||
a redacted copy of the event if they can only see a redacted
|
||||
version
|
||||
|
||||
the original event if they can see it as normal.
|
||||
"""
|
||||
# Only run these extra checks if the events are about to be sent to clients. If
|
||||
# they are not, we are probably only checking whether users can see events in
|
||||
# the room at that point in the DAG, and that should not be decided by these
|
||||
# checks.
|
||||
if filter_send_to_client:
|
||||
if event.type == EventTypes.Dummy:
|
||||
return None
|
||||
|
||||
if not event.is_state() and event.sender in ignore_list:
|
||||
return None
|
||||
|
||||
# Until MSC2261 has landed we can't redact malicious alias events, so for
|
||||
# now we temporarily filter out m.room.aliases entirely to mitigate
|
||||
# abuse, while we spec a better solution to advertising aliases
|
||||
# on rooms.
|
||||
if event.type == EventTypes.Aliases:
|
||||
return None
|
||||
|
||||
# Don't try to apply the room's retention policy if the event is a state
|
||||
# event, as MSC1763 states that retention is only considered for non-state
|
||||
# events.
|
||||
if not event.is_state():
|
||||
retention_policy = retention_policies[event.room_id]
|
||||
max_lifetime = retention_policy.max_lifetime
|
||||
|
||||
if max_lifetime is not None:
|
||||
oldest_allowed_ts = storage.main.clock.time_msec() - max_lifetime
|
||||
|
||||
if event.origin_server_ts < oldest_allowed_ts:
|
||||
return None
|
||||
|
||||
if event.event_id in always_include_ids:
|
||||
return event
|
||||
|
||||
# we need to handle outliers separately, since we don't have the room state.
|
||||
if event.internal_metadata.outlier:
|
||||
# Normally these can't be seen by clients, but we make an exception
|
||||
# for out-of-band membership events (eg, incoming invites, or rejections of
|
||||
# said invite) for the user themselves.
|
||||
if event.type == EventTypes.Member and event.state_key == user_id:
|
||||
logger.debug("Returning out-of-band-membership event %s", event)
|
||||
return event
|
||||
|
||||
return None
|
||||
|
||||
state = event_id_to_state[event.event_id]
|
||||
|
||||
# get the room_visibility at the time of the event.
|
||||
visibility = get_effective_room_visibility_from_state(state)
|
||||
|
||||
# Always allow history visibility events on boundaries. This is done
|
||||
# by setting the effective visibility to the least restrictive
|
||||
# of the old vs new.
|
||||
if event.type == EventTypes.RoomHistoryVisibility:
|
||||
prev_content = event.unsigned.get("prev_content", {})
|
||||
prev_visibility = prev_content.get("history_visibility", None)
|
||||
|
||||
if prev_visibility not in VISIBILITY_PRIORITY:
|
||||
prev_visibility = HistoryVisibility.SHARED
|
||||
|
||||
new_priority = VISIBILITY_PRIORITY.index(visibility)
|
||||
old_priority = VISIBILITY_PRIORITY.index(prev_visibility)
|
||||
if old_priority < new_priority:
|
||||
visibility = prev_visibility
|
||||
|
||||
# likewise, if the event is the user's own membership event, use
|
||||
# the 'most joined' membership
|
||||
membership = None
|
||||
if event.type == EventTypes.Member and event.state_key == user_id:
|
||||
membership = event.content.get("membership", None)
|
||||
if membership not in MEMBERSHIP_PRIORITY:
|
||||
membership = "leave"
|
||||
|
||||
prev_content = event.unsigned.get("prev_content", {})
|
||||
prev_membership = prev_content.get("membership", None)
|
||||
if prev_membership not in MEMBERSHIP_PRIORITY:
|
||||
prev_membership = "leave"
|
||||
|
||||
# Always allow the user to see their own leave events, otherwise
|
||||
# they won't see the room disappear if they reject the invite
|
||||
#
|
||||
# (Note this doesn't work for out-of-band invite rejections, which don't
|
||||
# have prev_state populated. They are handled above in the outlier code.)
|
||||
if membership == "leave" and (
|
||||
prev_membership == "join" or prev_membership == "invite"
|
||||
):
|
||||
return event
|
||||
|
||||
new_priority = MEMBERSHIP_PRIORITY.index(membership)
|
||||
old_priority = MEMBERSHIP_PRIORITY.index(prev_membership)
|
||||
if old_priority < new_priority:
|
||||
membership = prev_membership
|
||||
|
||||
# otherwise, get the user's membership at the time of the event.
|
||||
if membership is None:
|
||||
membership_event = state.get((EventTypes.Member, user_id), None)
|
||||
if membership_event:
|
||||
membership = membership_event.membership
|
||||
|
||||
# if the user was a member of the room at the time of the event,
|
||||
# they can see it.
|
||||
if membership == Membership.JOIN:
|
||||
return event
|
||||
|
||||
# otherwise, it depends on the room visibility.
|
||||
|
||||
if visibility == HistoryVisibility.JOINED:
|
||||
# we weren't a member at the time of the event, so we can't
|
||||
# see this event.
|
||||
return None
|
||||
|
||||
elif visibility == HistoryVisibility.INVITED:
|
||||
# user can also see the event if they were *invited* at the time
|
||||
# of the event.
|
||||
return event if membership == Membership.INVITE else None
|
||||
|
||||
elif visibility == HistoryVisibility.SHARED and is_peeking:
|
||||
# if the visibility is shared, users cannot see the event unless
|
||||
# they have *subsequently* joined the room (or were members at the
|
||||
# time, of course)
|
||||
#
|
||||
# XXX: if the user has subsequently joined and then left again,
|
||||
# ideally we would share history up to the point they left. But
|
||||
# we don't know when they left. We just treat it as though they
|
||||
# never joined, and restrict access.
|
||||
return None
|
||||
|
||||
# the visibility is either shared or world_readable, and the user was
|
||||
# not a member at the time. We allow it, provided the original sender
|
||||
# has not requested their data to be erased, in which case, we return
|
||||
# a redacted version.
|
||||
if erased_senders[event.sender]:
|
||||
return prune_event(event)
|
||||
|
||||
return event
|
||||
return _check_client_allowed_to_see_event(
|
||||
user_id=user_id,
|
||||
event=event,
|
||||
clock=storage.main.clock,
|
||||
filter_send_to_client=filter_send_to_client,
|
||||
sender_ignored=event.sender in ignore_list,
|
||||
always_include_ids=always_include_ids,
|
||||
retention_policy=retention_policies[room_id],
|
||||
state=event_id_to_state.get(event.event_id),
|
||||
is_peeking=is_peeking,
|
||||
sender_erased=erased_senders.get(event.sender, False),
|
||||
)
|
||||
|
||||
# Check each event: gives an iterable of None or (a potentially modified)
|
||||
# EventBase.
|
||||
|
@@ -258,9 +128,389 @@ async def filter_events_for_client(
    return [ev for ev in filtered_events if ev]


async def filter_event_for_clients_with_state(
    store: DataStore,
    user_ids: Collection[str],
    event: EventBase,
    context: EventContext,
    is_peeking: bool = False,
    filter_send_to_client: bool = True,
) -> Collection[str]:
    """
    Checks to see if an event is visible to the users in the list at the time of
    the event.

    Note: This does *not* check if the sender of the event was erased.

    Args:
        store: databases
        user_ids: user_ids to be checked
        event: the event to be checked
        context: EventContext for the event to be checked
        is_peeking: Whether the users are peeking into the room, ie not
            currently joined
        filter_send_to_client: Whether we're checking an event that's going to be
            sent to a client. This might not always be the case since this function can
            also be called to check whether a user can see the state at a given point.

    Returns:
        Collection of user IDs for whom the event is visible
    """
    # None of the users should see the event if it is soft_failed
    if event.internal_metadata.is_soft_failed():
        return []

    # Make a set for all user IDs that haven't been filtered out by a check.
    allowed_user_ids = set(user_ids)

    # Only run some checks if these events aren't about to be sent to clients. This is
    # because, if this is not the case, we're probably only checking if the users can
    # see events in the room at that point in the DAG, and that shouldn't be decided
    # on those checks.
    if filter_send_to_client:
        ignored_by = await store.ignored_by(event.sender)
        retention_policy = await store.get_retention_policy_for_room(event.room_id)

        for user_id in user_ids:
            if (
                _check_filter_send_to_client(
                    event,
                    store.clock,
                    retention_policy,
                    sender_ignored=user_id in ignored_by,
                )
                == _CheckFilter.DENIED
            ):
                allowed_user_ids.discard(user_id)

    if event.internal_metadata.outlier:
        # Normally these can't be seen by clients, but we make an exception for
        # for out-of-band membership events (eg, incoming invites, or rejections of
        # said invite) for the user themselves.
        if event.type == EventTypes.Member and event.state_key in allowed_user_ids:
            logger.debug("Returning out-of-band-membership event %s", event)
            return {event.state_key}

        return set()

    # First we get just the history visibility in case its shared/world-readable
    # room.
    visibility_state_map = await _get_state_map(
        store, event, context, StateFilter.from_types([_HISTORY_VIS_KEY])
    )

    visibility = get_effective_room_visibility_from_state(visibility_state_map)
    if (
        _check_history_visibility(event, visibility, is_peeking=is_peeking)
        == _CheckVisibility.ALLOWED
    ):
        return allowed_user_ids

    # The history visibility isn't lax, so we now need to fetch the membership
    # events of all the users.

    filter_list = []
    for user_id in allowed_user_ids:
        filter_list.append((EventTypes.Member, user_id))
    filter_list.append((EventTypes.RoomHistoryVisibility, ""))

    state_filter = StateFilter.from_types(filter_list)
    state_map = await _get_state_map(store, event, context, state_filter)

    # Now we check whether the membership allows each user to see the event.
    return {
        user_id
        for user_id in allowed_user_ids
        if _check_membership(user_id, event, visibility, state_map, is_peeking).allowed
    }


async def _get_state_map(
    store: DataStore, event: EventBase, context: EventContext, state_filter: StateFilter
) -> StateMap[EventBase]:
    """Helper function for getting a `StateMap[EventBase]` from an `EventContext`"""
    state_map = await context.get_prev_state_ids(state_filter)

    # Use events rather than event ids as content from the events are needed in
    # _check_visibility
    event_map = await store.get_events(state_map.values(), get_prev_content=False)

    updated_state_map = {}
    for state_key, event_id in state_map.items():
        state_event = event_map.get(event_id)
        if state_event:
            updated_state_map[state_key] = state_event

    if event.is_state():
        current_state_key = (event.type, event.state_key)
        # Add current event to updated_state_map, we need to do this here as it
        # may not have been persisted to the db yet
        updated_state_map[current_state_key] = event

    return updated_state_map


def _check_client_allowed_to_see_event(
    user_id: str,
    event: EventBase,
    clock: Clock,
    filter_send_to_client: bool,
    is_peeking: bool,
    always_include_ids: FrozenSet[str],
    sender_ignored: bool,
    retention_policy: RetentionPolicy,
    state: Optional[StateMap[EventBase]],
    sender_erased: bool,
) -> Optional[EventBase]:
    """Check with the given user is allowed to see the given event

    See `filter_events_for_client` for details about args

    Args:
        user_id
        event
        clock
        filter_send_to_client
        is_peeking
        always_include_ids
        sender_ignored: Whether the user is ignoring the event sender
        retention_policy: The retention policy of the room
        state: The state at the event, unless its an outlier
        sender_erased: Whether the event sender has been marked as "erased"

    Returns:
        None if the user cannot see this event at all

        a redacted copy of the event if they can only see a redacted
        version

        the original event if they can see it as normal.
    """
    # Only run some checks if these events aren't about to be sent to clients. This is
    # because, if this is not the case, we're probably only checking if the users can
    # see events in the room at that point in the DAG, and that shouldn't be decided
    # on those checks.
    if filter_send_to_client:
        if (
            _check_filter_send_to_client(event, clock, retention_policy, sender_ignored)
            == _CheckFilter.DENIED
        ):
            return None

    if event.event_id in always_include_ids:
        return event

    # we need to handle outliers separately, since we don't have the room state.
    if event.internal_metadata.outlier:
        # Normally these can't be seen by clients, but we make an exception for
        # for out-of-band membership events (eg, incoming invites, or rejections of
        # said invite) for the user themselves.
        if event.type == EventTypes.Member and event.state_key == user_id:
            logger.debug("Returning out-of-band-membership event %s", event)
            return event

        return None

    if state is None:
        raise Exception("Missing state for non-outlier event")

    # get the room_visibility at the time of the event.
    visibility = get_effective_room_visibility_from_state(state)

    # Check if the room has lax history visibility, allowing us to skip
    # membership checks.
    #
    # We can only do this check if the sender has *not* been erased, as if they
    # have we need to check the user's membership.
    if (
        not sender_erased
        and _check_history_visibility(event, visibility, is_peeking)
        == _CheckVisibility.ALLOWED
    ):
        return event

    membership_result = _check_membership(user_id, event, visibility, state, is_peeking)
    if not membership_result.allowed:
        return None

    # If the sender has been erased and the user was not joined at the time, we
    # must only return the redacted form.
    if sender_erased and not membership_result.joined:
        event = prune_event(event)

    return event


@attr.s(frozen=True, slots=True, auto_attribs=True)
class _CheckMembershipReturn:
    "Return value of _check_membership"
    allowed: bool
    joined: bool


def _check_membership(
    user_id: str,
    event: EventBase,
    visibility: str,
    state: StateMap[EventBase],
    is_peeking: bool,
) -> _CheckMembershipReturn:
    """Check whether the user can see the event due to their membership

    Returns:
        True if they can, False if they can't, plus the membership of the user
        at the event.
    """
    # If the event is the user's own membership event, use the 'most joined'
    # membership
    membership = None
    if event.type == EventTypes.Member and event.state_key == user_id:
        membership = event.content.get("membership", None)
        if membership not in MEMBERSHIP_PRIORITY:
            membership = "leave"

        prev_content = event.unsigned.get("prev_content", {})
        prev_membership = prev_content.get("membership", None)
        if prev_membership not in MEMBERSHIP_PRIORITY:
            prev_membership = "leave"

        # Always allow the user to see their own leave events, otherwise
        # they won't see the room disappear if they reject the invite
        #
        # (Note this doesn't work for out-of-band invite rejections, which don't
        # have prev_state populated. They are handled above in the outlier code.)
        if membership == "leave" and (
            prev_membership == "join" or prev_membership == "invite"
        ):
            return _CheckMembershipReturn(True, membership == Membership.JOIN)

        new_priority = MEMBERSHIP_PRIORITY.index(membership)
        old_priority = MEMBERSHIP_PRIORITY.index(prev_membership)
        if old_priority < new_priority:
            membership = prev_membership

    # otherwise, get the user's membership at the time of the event.
    if membership is None:
        membership_event = state.get((EventTypes.Member, user_id), None)
        if membership_event:
            membership = membership_event.membership

    # if the user was a member of the room at the time of the event,
    # they can see it.
    if membership == Membership.JOIN:
        return _CheckMembershipReturn(True, True)

    # otherwise, it depends on the room visibility.

    if visibility == HistoryVisibility.JOINED:
        # we weren't a member at the time of the event, so we can't
        # see this event.
        return _CheckMembershipReturn(False, False)

    elif visibility == HistoryVisibility.INVITED:
        # user can also see the event if they were *invited* at the time
        # of the event.
        return _CheckMembershipReturn(membership == Membership.INVITE, False)

    elif visibility == HistoryVisibility.SHARED and is_peeking:
        # if the visibility is shared, users cannot see the event unless
        # they have *subsequently* joined the room (or were members at the
        # time, of course)
        #
        # XXX: if the user has subsequently joined and then left again,
        # ideally we would share history up to the point they left. But
        # we don't know when they left. We just treat it as though they
        # never joined, and restrict access.
        return _CheckMembershipReturn(False, False)

    # The visibility is either shared or world_readable, and the user was
    # not a member at the time. We allow it.
    return _CheckMembershipReturn(True, False)


class _CheckFilter(Enum):
    MAYBE_ALLOWED = auto()
    DENIED = auto()


def _check_filter_send_to_client(
    event: EventBase,
    clock: Clock,
    retention_policy: RetentionPolicy,
    sender_ignored: bool,
) -> _CheckFilter:
    """Apply checks for sending events to client

    Returns:
        True if might be allowed to be sent to clients, False if definitely not.
    """

    if event.type == EventTypes.Dummy:
        return _CheckFilter.DENIED

    if not event.is_state() and sender_ignored:
        return _CheckFilter.DENIED

    # Until MSC2261 has landed we can't redact malicious alias events, so for
    # now we temporarily filter out m.room.aliases entirely to mitigate
    # abuse, while we spec a better solution to advertising aliases
    # on rooms.
    if event.type == EventTypes.Aliases:
        return _CheckFilter.DENIED

    # Don't try to apply the room's retention policy if the event is a state
    # event, as MSC1763 states that retention is only considered for non-state
    # events.
    if not event.is_state():
        max_lifetime = retention_policy.max_lifetime

        if max_lifetime is not None:
            oldest_allowed_ts = clock.time_msec() - max_lifetime

            if event.origin_server_ts < oldest_allowed_ts:
                return _CheckFilter.DENIED

    return _CheckFilter.MAYBE_ALLOWED


class _CheckVisibility(Enum):
    ALLOWED = auto()
    MAYBE_DENIED = auto()


def _check_history_visibility(
    event: EventBase, visibility: str, is_peeking: bool
) -> _CheckVisibility:
    """Check if event is allowed to be seen due to lax history visibility.

    Returns:
        True if user can definitely see the event, False if maybe not.
    """
    # Always allow history visibility events on boundaries. This is done
    # by setting the effective visibility to the least restrictive
    # of the old vs new.
    if event.type == EventTypes.RoomHistoryVisibility:
        prev_content = event.unsigned.get("prev_content", {})
        prev_visibility = prev_content.get("history_visibility", None)

        if prev_visibility not in VISIBILITY_PRIORITY:
            prev_visibility = HistoryVisibility.SHARED

        new_priority = VISIBILITY_PRIORITY.index(visibility)
        old_priority = VISIBILITY_PRIORITY.index(prev_visibility)
        if old_priority < new_priority:
            visibility = prev_visibility

    if visibility == HistoryVisibility.SHARED and not is_peeking:
        return _CheckVisibility.ALLOWED
    elif visibility == HistoryVisibility.WORLD_READABLE:
        return _CheckVisibility.ALLOWED

    return _CheckVisibility.MAYBE_DENIED


def get_effective_room_visibility_from_state(state: StateMap[EventBase]) -> str:
    """Get the actual history vis, from a state map including the history_visibility event

    Handles missing and invalid history visibility events.
    """
    visibility_event = state.get(_HISTORY_VIS_KEY, None)

@@ -22,7 +22,7 @@ from typing import Any, Dict, Iterable, List, Optional, Tuple, Union
from unittest.mock import Mock, call
from urllib import parse as urlparse

# `Literal` appears with Python 3.8.
from parameterized import param, parameterized
from typing_extensions import Literal

from twisted.test.proto_helpers import MemoryReactor

@@ -708,6 +708,21 @@ class RoomsCreateTestCase(RoomBase):

        self.assertEqual(200, channel.code, channel.result)
        self.assertTrue("room_id" in channel.json_body)
        assert channel.resource_usage is not None
        self.assertEqual(37, channel.resource_usage.db_txn_count)

    def test_post_room_initial_state(self) -> None:
        # POST with initial_state config key, expect new room id
        channel = self.make_request(
            "POST",
            "/createRoom",
            b'{"initial_state":[{"type": "m.bridge", "content": {}}]}',
        )

        self.assertEqual(200, channel.code, channel.result)
        self.assertTrue("room_id" in channel.json_body)
        assert channel.resource_usage is not None
        self.assertEqual(41, channel.resource_usage.db_txn_count)

    def test_post_room_visibility_key(self) -> None:
        # POST with visibility config key, expect new room id

@@ -815,14 +830,14 @@ class RoomsCreateTestCase(RoomBase):
        In this test, we use the more recent API in which callbacks return a `Union[Codes, Literal["NOT_SPAM"]]`.
        """

        async def user_may_join_room(
        async def user_may_join_room_codes(
            mxid: str,
            room_id: str,
            is_invite: bool,
        ) -> Codes:
            return Codes.CONSENT_NOT_GIVEN

        join_mock = Mock(side_effect=user_may_join_room)
        join_mock = Mock(side_effect=user_may_join_room_codes)
        self.hs.get_spam_checker()._user_may_join_room_callbacks.append(join_mock)

        channel = self.make_request(

@@ -834,6 +849,25 @@ class RoomsCreateTestCase(RoomBase):

        self.assertEqual(join_mock.call_count, 0)

        # Now change the return value of the callback to deny any join. Since we're
        # creating the room, despite the return value, we should be able to join.
        async def user_may_join_room_tuple(
            mxid: str,
            room_id: str,
            is_invite: bool,
        ) -> Tuple[Codes, dict]:
            return Codes.INCOMPATIBLE_ROOM_VERSION, {}

        join_mock.side_effect = user_may_join_room_tuple

        channel = self.make_request(
            "POST",
            "/createRoom",
            {},
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        self.assertEqual(join_mock.call_count, 0)


class RoomTopicTestCase(RoomBase):
    """Tests /rooms/$room_id/topic REST events."""

@@ -1113,13 +1147,15 @@ class RoomJoinTestCase(RoomBase):
        """

        # Register a dummy callback. Make it allow all room joins for now.
        return_value: Union[Literal["NOT_SPAM"], Codes] = synapse.module_api.NOT_SPAM
        return_value: Union[
            Literal["NOT_SPAM"], Tuple[Codes, dict], Codes
        ] = synapse.module_api.NOT_SPAM

        async def user_may_join_room(
            userid: str,
            room_id: str,
            is_invited: bool,
        ) -> Union[Literal["NOT_SPAM"], Codes]:
        ) -> Union[Literal["NOT_SPAM"], Tuple[Codes, dict], Codes]:
            return return_value

        # `spec` argument is needed for this function mock to have `__qualname__`, which

@@ -1163,8 +1199,28 @@ class RoomJoinTestCase(RoomBase):
        )

        # Now make the callback deny all room joins, and check that a join actually fails.
        # We pick an arbitrary Codes rather than the default `Codes.FORBIDDEN`.
        return_value = Codes.CONSENT_NOT_GIVEN
        self.helper.join(self.room3, self.user2, expect_code=403, tok=self.tok2)
        self.helper.invite(self.room3, self.user1, self.user2, tok=self.tok1)
        self.helper.join(
            self.room3,
            self.user2,
            expect_code=403,
            expect_errcode=return_value,
            tok=self.tok2,
        )

        # Now make the callback deny all room joins, and check that a join actually fails.
        # As above, with the experimental extension that lets us return dictionaries.
        return_value = (Codes.BAD_ALIAS, {"another_field": "12345"})
        self.helper.join(
            self.room3,
            self.user2,
            expect_code=403,
            expect_errcode=return_value[0],
            tok=self.tok2,
            expect_additional_fields=return_value[1],
        )


class RoomJoinRatelimitTestCase(RoomBase):

@@ -1314,6 +1370,97 @@ class RoomMessagesTestCase(RoomBase):
        channel = self.make_request("PUT", path, content)
        self.assertEqual(200, channel.code, msg=channel.result["body"])

    @parameterized.expand(
        [
            # Allow
            param(
                name="NOT_SPAM", value="NOT_SPAM", expected_code=200, expected_fields={}
            ),
            param(name="False", value=False, expected_code=200, expected_fields={}),
            # Block
            param(
                name="scalene string",
                value="ANY OTHER STRING",
                expected_code=403,
                expected_fields={"errcode": "M_FORBIDDEN"},
            ),
            param(
                name="True",
                value=True,
                expected_code=403,
                expected_fields={"errcode": "M_FORBIDDEN"},
            ),
            param(
                name="Code",
                value=Codes.LIMIT_EXCEEDED,
                expected_code=403,
                expected_fields={"errcode": "M_LIMIT_EXCEEDED"},
            ),
            param(
                name="Tuple",
                value=(Codes.SERVER_NOT_TRUSTED, {"additional_field": "12345"}),
                expected_code=403,
                expected_fields={
                    "errcode": "M_SERVER_NOT_TRUSTED",
                    "additional_field": "12345",
                },
            ),
        ]
    )
    def test_spam_checker_check_event_for_spam(
        self,
        name: str,
        value: Union[str, bool, Codes, Tuple[Codes, JsonDict]],
        expected_code: int,
        expected_fields: dict,
    ) -> None:
        class SpamCheck:
            mock_return_value: Union[
                str, bool, Codes, Tuple[Codes, JsonDict], bool
            ] = "NOT_SPAM"
            mock_content: Optional[JsonDict] = None

            async def check_event_for_spam(
                self,
                event: synapse.events.EventBase,
            ) -> Union[str, Codes, Tuple[Codes, JsonDict], bool]:
                self.mock_content = event.content
                return self.mock_return_value

        spam_checker = SpamCheck()

        self.hs.get_spam_checker()._check_event_for_spam_callbacks.append(
            spam_checker.check_event_for_spam
        )

        # Inject `value` as mock_return_value
        spam_checker.mock_return_value = value
        path = "/rooms/%s/send/m.room.message/check_event_for_spam_%s" % (
            urlparse.quote(self.room_id),
            urlparse.quote(name),
        )
        body = "test-%s" % name
        content = '{"body":"%s","msgtype":"m.text"}' % body
        channel = self.make_request("PUT", path, content)

        # Check that the callback has witnessed the correct event.
        self.assertIsNotNone(spam_checker.mock_content)
        if (
            spam_checker.mock_content is not None
        ):  # Checked just above, but mypy doesn't know about that.
            self.assertEqual(
                spam_checker.mock_content["body"], body, spam_checker.mock_content
            )

        # Check that we have the correct result.
        self.assertEqual(expected_code, channel.code, msg=channel.result["body"])
        for expected_key, expected_value in expected_fields.items():
            self.assertEqual(
                channel.json_body.get(expected_key, None),
                expected_value,
                "Field %s absent or invalid " % expected_key,
            )


class RoomPowerLevelOverridesTestCase(RoomBase):
    """Tests that the power levels can be overridden with server config."""

@@ -3235,7 +3382,8 @@ class ThreepidInviteTestCase(unittest.HomeserverTestCase):
        make_invite_mock.assert_called_once()

        # Now change the return value of the callback to deny any invite and test that
        # we can't send the invite.
        # we can't send the invite. We pick an arbitrary error code to be able to check
        # that the same code has been returned
        mock.return_value = make_awaitable(Codes.CONSENT_NOT_GIVEN)
        channel = self.make_request(
            method="POST",

@@ -3249,6 +3397,27 @@ class ThreepidInviteTestCase(unittest.HomeserverTestCase):
            access_token=self.tok,
        )
        self.assertEqual(channel.code, 403)
        self.assertEqual(channel.json_body["errcode"], Codes.CONSENT_NOT_GIVEN)

        # Also check that it stopped before calling _make_and_store_3pid_invite.
        make_invite_mock.assert_called_once()

        # Run variant with `Tuple[Codes, dict]`.
        mock.return_value = make_awaitable((Codes.EXPIRED_ACCOUNT, {"field": "value"}))
        channel = self.make_request(
            method="POST",
            path="/rooms/" + self.room_id + "/invite",
            content={
                "id_server": "example.com",
                "id_access_token": "sometoken",
                "medium": "email",
                "address": email_to_invite,
            },
            access_token=self.tok,
        )
        self.assertEqual(channel.code, 403)
        self.assertEqual(channel.json_body["errcode"], Codes.EXPIRED_ACCOUNT)
        self.assertEqual(channel.json_body["field"], "value")

        # Also check that it stopped before calling _make_and_store_3pid_invite.
        make_invite_mock.assert_called_once()

@@ -41,6 +41,7 @@ from twisted.web.resource import Resource
from twisted.web.server import Site

from synapse.api.constants import Membership
from synapse.api.errors import Codes
from synapse.server import HomeServer
from synapse.types import JsonDict

@@ -171,6 +172,8 @@ class RestHelper:
        expect_code: int = HTTPStatus.OK,
        tok: Optional[str] = None,
        appservice_user_id: Optional[str] = None,
        expect_errcode: Optional[Codes] = None,
        expect_additional_fields: Optional[dict] = None,
    ) -> None:
        self.change_membership(
            room=room,

@@ -180,6 +183,8 @@ class RestHelper:
            appservice_user_id=appservice_user_id,
            membership=Membership.JOIN,
            expect_code=expect_code,
            expect_errcode=expect_errcode,
            expect_additional_fields=expect_additional_fields,
        )

    def knock(

@@ -263,6 +268,7 @@ class RestHelper:
        appservice_user_id: Optional[str] = None,
        expect_code: int = HTTPStatus.OK,
        expect_errcode: Optional[str] = None,
        expect_additional_fields: Optional[dict] = None,
    ) -> None:
        """
        Send a membership state event into a room.

@@ -323,6 +329,21 @@ class RestHelper:
                channel.result["body"],
            )

        if expect_additional_fields is not None:
            for expect_key, expect_value in expect_additional_fields.items():
                assert expect_key in channel.json_body, "Expected field %s, got %s" % (
                    expect_key,
                    channel.json_body,
                )
                assert (
                    channel.json_body[expect_key] == expect_value
                ), "Expected: %s at %s, got: %s, resp: %s" % (
                    expect_value,
                    expect_key,
                    channel.json_body[expect_key],
                    channel.json_body,
                )

        self.auth_user_id = temp_id

    def send(

@@ -23,11 +23,13 @@ from urllib import parse
import attr
from parameterized import parameterized, parameterized_class
from PIL import Image as Image
from typing_extensions import Literal

from twisted.internet import defer
from twisted.internet.defer import Deferred
from twisted.test.proto_helpers import MemoryReactor

from synapse.api.errors import Codes
from synapse.events import EventBase
from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.logging.context import make_deferred_yieldable

@@ -570,9 +572,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
        )


class TestSpamChecker:
class TestSpamCheckerLegacy:
    """A spam checker module that rejects all media that includes the bytes
    `evil`.

    Uses the legacy Spam-Checker API.
    """

    def __init__(self, config: Dict[str, Any], api: ModuleApi) -> None:

@@ -613,7 +617,7 @@ class TestSpamChecker:
        return b"evil" in buf.getvalue()


class SpamCheckerTestCase(unittest.HomeserverTestCase):
class SpamCheckerTestCaseLegacy(unittest.HomeserverTestCase):
    servlets = [
        login.register_servlets,
        admin.register_servlets,

@@ -637,7 +641,8 @@ class SpamCheckerTestCase(unittest.HomeserverTestCase):
            {
                "spam_checker": [
                    {
                        "module": TestSpamChecker.__module__ + ".TestSpamChecker",
                        "module": TestSpamCheckerLegacy.__module__
                        + ".TestSpamCheckerLegacy",
                        "config": {},
                    }
                ]

@@ -662,3 +667,62 @@ class SpamCheckerTestCase(unittest.HomeserverTestCase):
        self.helper.upload_media(
            self.upload_resource, data, tok=self.tok, expect_code=400
        )


EVIL_DATA = b"Some evil data"
EVIL_DATA_EXPERIMENT = b"Some evil data to trigger the experimental tuple API"


class SpamCheckerTestCase(unittest.HomeserverTestCase):
    servlets = [
        login.register_servlets,
        admin.register_servlets,
    ]

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.user = self.register_user("user", "pass")
        self.tok = self.login("user", "pass")

        # Allow for uploading and downloading to/from the media repo
        self.media_repo = hs.get_media_repository_resource()
        self.download_resource = self.media_repo.children[b"download"]
        self.upload_resource = self.media_repo.children[b"upload"]

        hs.get_module_api().register_spam_checker_callbacks(
            check_media_file_for_spam=self.check_media_file_for_spam
        )

    async def check_media_file_for_spam(
        self, file_wrapper: ReadableFileWrapper, file_info: FileInfo
    ) -> Union[Codes, Literal["NOT_SPAM"]]:
        buf = BytesIO()
        await file_wrapper.write_chunks_to(buf.write)

        if buf.getvalue() == EVIL_DATA:
            return Codes.FORBIDDEN
        elif buf.getvalue() == EVIL_DATA_EXPERIMENT:
            return (Codes.FORBIDDEN, {})
        else:
            return "NOT_SPAM"

    def test_upload_innocent(self) -> None:
        """Attempt to upload some innocent data that should be allowed."""
        self.helper.upload_media(
            self.upload_resource, SMALL_PNG, tok=self.tok, expect_code=200
        )

    def test_upload_ban(self) -> None:
        """Attempt to upload some data that includes bytes "evil", which should
        get rejected by the spam checker.
        """

        self.helper.upload_media(
            self.upload_resource, EVIL_DATA, tok=self.tok, expect_code=400
        )

        self.helper.upload_media(
            self.upload_resource,
            EVIL_DATA_EXPERIMENT,
            tok=self.tok,
            expect_code=400,
        )

@@ -196,6 +196,13 @@ class EventPushActionsStoreTestCase(HomeserverTestCase):
        _mark_read(10, 10)
        _assert_counts(0, 0)

        _inject_actions(11, HIGHLIGHT)
        _assert_counts(1, 1)
        _mark_read(11, 11)
        _assert_counts(0, 0)
        _rotate(11)
        _assert_counts(0, 0)

    def test_find_first_stream_ordering_after_ts(self) -> None:
        def add_event(so: int, ts: int) -> None:
            self.get_success(

@@ -12,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from synapse.api.constants import EventTypes
from synapse.api.room_versions import RoomVersions
from synapse.types import RoomAlias, RoomID, UserID

@@ -65,71 +64,3 @@ class RoomStoreTestCase(HomeserverTestCase):
        self.assertIsNone(
            (self.get_success(self.store.get_room_with_stats("!uknown:test"))),
        )


class RoomEventsStoreTestCase(HomeserverTestCase):
    def prepare(self, reactor, clock, hs):
        # Room events need the full datastore, for persist_event() and
        # get_room_state()
        self.store = hs.get_datastores().main
        self._storage_controllers = hs.get_storage_controllers()
        self.event_factory = hs.get_event_factory()

        self.room = RoomID.from_string("!abcde:test")

        self.get_success(
            self.store.store_room(
                self.room.to_string(),
                room_creator_user_id="@creator:text",
                is_public=True,
                room_version=RoomVersions.V1,
            )
        )

    def inject_room_event(self, **kwargs):
        self.get_success(
            self._storage_controllers.persistence.persist_event(
                self.event_factory.create_event(room_id=self.room.to_string(), **kwargs)
            )
        )

    def STALE_test_room_name(self):
        name = "A-Room-Name"

        self.inject_room_event(
            etype=EventTypes.Name, name=name, content={"name": name}, depth=1
        )

        state = self.get_success(
            self._storage_controllers.state.get_current_state(
                room_id=self.room.to_string()
            )
        )

        self.assertEqual(1, len(state))
        self.assertObjectHasAttributes(
            {"type": "m.room.name", "room_id": self.room.to_string(), "name": name},
            state[0],
        )

    def STALE_test_room_topic(self):
        topic = "A place for things"

        self.inject_room_event(
            etype=EventTypes.Topic, topic=topic, content={"topic": topic}, depth=1
        )

        state = self.get_success(
            self._storage_controllers.state.get_current_state(
                room_id=self.room.to_string()
            )
        )

        self.assertEqual(1, len(state))
        self.assertObjectHasAttributes(
            {"type": "m.room.topic", "room_id": self.room.to_string(), "topic": topic},
            state[0],
        )

    # Not testing the various 'level' methods for now because there's lots
    # of them and need coalescing; see JIRA SPEC-11

@@ -195,6 +195,8 @@ class StateTestCase(unittest.TestCase):
                "get_state_resolution_handler",
                "get_account_validity_handler",
                "get_macaroon_generator",
                "get_instance_name",
                "get_simple_http_client",
                "hostname",
            ]
        )