Merge branch 'develop' into matrix-org-hotfixes
commit cb79a2b785
@@ -5,7 +5,7 @@ name: Build docker images
 on:
   push:
     tags: ["v*"]
-    branches: [ master, main ]
+    branches: [ master, main, develop ]
   workflow_dispatch:
 
 permissions:
@@ -38,6 +38,9 @@ jobs:
         id: set-tag
         run: |
           case "${GITHUB_REF}" in
+            refs/heads/develop)
+              tag=develop
+              ;;
            refs/heads/master|refs/heads/main)
              tag=latest
              ;;
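For reference, a minimal shell sketch of how the extended `case` statement resolves an image tag; the `refs/tags/*` arm and the image name are assumptions for illustration and are not shown in the hunk above:

```sh
#!/usr/bin/env bash
# Hypothetical ref for local testing; in CI this is provided by GitHub Actions.
GITHUB_REF="refs/heads/develop"

case "${GITHUB_REF}" in
    refs/heads/develop)
        tag=develop            # new in this change: the develop branch gets a 'develop' tag
        ;;
    refs/heads/master|refs/heads/main)
        tag=latest
        ;;
    refs/tags/*)               # assumed tag-handling arm, not part of the hunk above
        tag="${GITHUB_REF#refs/tags/}"
        ;;
esac

echo "would publish matrixdotorg/synapse:${tag}"   # image name is illustrative
```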
@@ -8717,14 +8717,14 @@ General:
 
 Federation:
 
-- Add key distribution mechanisms for fetching public keys of unavailable remote home servers. See [Retrieving Server Keys](https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys) in the spec.
+- Add key distribution mechanisms for fetching public keys of unavailable remote homeservers. See [Retrieving Server Keys](https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys) in the spec.
 
 Configuration:
 
 - Add support for multiple config files.
 - Add support for dictionaries in config files.
 - Remove support for specifying config options on the command line, except for:
-  - `--daemonize` - Daemonize the home server.
+  - `--daemonize` - Daemonize the homeserver.
   - `--manhole` - Turn on the twisted telnet manhole service on the given port.
   - `--database-path` - The path to a sqlite database to use.
   - `--verbose` - The verbosity level.
@@ -8929,7 +8929,7 @@ This version adds support for using a TURN server. See docs/turn-howto.rst on ho
 Homeserver:
 
 - Add support for redaction of messages.
-- Fix bug where inviting a user on a remote home server could take up to 20-30s.
+- Fix bug where inviting a user on a remote homeserver could take up to 20-30s.
 - Implement a get current room state API.
 - Add support specifying and retrieving turn server configuration.
 
@@ -9019,7 +9019,7 @@ Changes in synapse 0.2.3 (2014-09-12)
 
 Homeserver:
 
-- Fix bug where we stopped sending events to remote home servers if a user from that home server left, even if there were some still in the room.
+- Fix bug where we stopped sending events to remote homeservers if a user from that homeserver left, even if there were some still in the room.
 - Fix bugs in the state conflict resolution where it was incorrectly rejecting events.
 
 Webclient:
@@ -0,0 +1 @@
+Add type annotations to `synapse.metrics`.
@@ -0,0 +1 @@
+Experimental support for the thread relation defined in [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440).
@@ -0,0 +1 @@
+Add a new version of delete room admin API `DELETE /_synapse/admin/v2/rooms/<room_id>` to run it in background. Contributed by @dklimpel.
@@ -0,0 +1 @@
+Allow the admin [Delete Room API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#delete-room-api) to block a room without the need to join it.
@@ -0,0 +1,2 @@
+Fix a long-standing bug wherein display names or avatar URLs containing null bytes cause an internal server error
+when stored in the DB.
@@ -0,0 +1 @@
+Support filtering by relation senders & types per [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440).
@@ -0,0 +1 @@
+Split out federated PDU retrieval function into a non-cached version.
@@ -0,0 +1 @@
+Clean up code relating to to-device messages and sending ephemeral events to application services.
@@ -0,0 +1 @@
+Prevent [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) historical state events from being pushed to an application service via `/transactions`.
@@ -0,0 +1 @@
+Fix a small typo in the error response when a relation type other than 'm.annotation' is passed to `GET /rooms/{room_id}/aggregations/{event_id}`.
@@ -0,0 +1 @@
+Drop unused db tables `room_stats_historical` and `user_stats_historical`.
@@ -0,0 +1 @@
+Suggest users of the Debian packages add configuration to `/etc/matrix-synapse/conf.d/` to prevent, upon upgrade, being asked to choose between their configuration and the maintainer's.
@@ -0,0 +1 @@
+Require all files in synapse/ and tests/ to pass mypy unless specifically excluded.
@@ -0,0 +1 @@
+Require all files in synapse/ and tests/ to pass mypy unless specifically excluded.
@@ -0,0 +1 @@
+Fix typo in the word `available` and fix HTTP method (should be `GET`) for the `username_available` admin API. Contributed by Stanislav Motylkov.
@@ -0,0 +1 @@
+Add missing type hints to `synapse.app`.
@@ -0,0 +1 @@
+Fix a long-standing bug where uploading extremely thin images (e.g. 1000x1) would fail. Contributed by @Neeeflix.
@@ -0,0 +1 @@
+Remove unused parameters on `FederationEventHandler._check_event_auth`.
@@ -0,0 +1 @@
+Add type hints to `synapse._scripts`.
@@ -0,0 +1 @@
+Add Single Sign-On, SAML and CAS pages to the documentation.
@@ -0,0 +1 @@
+Fix an issue which prevented the 'remove deleted devices from device_inbox column' background process from running when updating from a recent Synapse version.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Add support for the `/_matrix/client/v3` APIs from Matrix v1.1.
@@ -0,0 +1 @@
+Changed the word 'Home server' as one word 'homeserver' in documentation.
@@ -0,0 +1 @@
+Add type hints to `synapse.util`.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Improve type annotations in Synapse's test suite.
@@ -0,0 +1 @@
+Add dedicated admin API for blocking a room.
@@ -0,0 +1 @@
+Test that room alias deletion works as intended.
@@ -0,0 +1 @@
+Add type hints to `synapse.util`.
@@ -0,0 +1 @@
+Improve type annotations in Synapse's test suite.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Remove deprecated `trust_identity_server_for_password_resets` configuration flag.
@@ -0,0 +1 @@
+Support the stable version of [MSC2778](https://github.com/matrix-org/matrix-doc/pull/2778): the `m.login.application_service` login type. Contributed by @tulir.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Fix a bug, introduced in Synapse 1.46.0, which caused the `check_3pid_auth` and `on_logged_out` callbacks in legacy password authentication provider modules to not be registered. Modules using the generic module API were not affected.
@@ -0,0 +1 @@
+Add type annotations for some methods and properties in the module API.
@@ -0,0 +1 @@
+Add type hints to storage classes.
@@ -0,0 +1 @@
+Add admin API to un-shadow-ban a user.
@@ -0,0 +1 @@
+Add admin API to run background jobs.
@@ -0,0 +1 @@
+Fix a bug introduced in 1.41.0 where space hierarchy responses would be incorrectly reused if multiple users were to make the same request at the same time.
@@ -0,0 +1 @@
+Require all files in synapse/ and tests/ to pass mypy unless specifically excluded.
@@ -0,0 +1 @@
+Update the JWT login type to support a custom `sub` claim.
@@ -0,0 +1 @@
+Fix running `scripts-dev/complement.sh`, which was broken in v1.47.0rc1.
@@ -0,0 +1 @@
+Rename `get_access_token_for_user_id` to `create_access_token_for_user_id` to better reflect what it does.
@@ -0,0 +1 @@
+Rename `get_refresh_token_for_user_id` to `create_refresh_token_for_user_id` to better describe what it does.
@@ -0,0 +1 @@
+Add support for the `/_matrix/media/v3` APIs from Matrix v1.1.
@@ -0,0 +1 @@
+Fix a bug introduced in v1.45.0 where the `read_templates` method of the module API would error.
@@ -0,0 +1 @@
+Add type hints to configuration classes.
@@ -0,0 +1 @@
+Fix an issue introduced in v1.47.0 which prevented servers re-joining rooms they had previously left, if their signing keys were replaced.
@@ -0,0 +1 @@
+Publish a `develop` image to dockerhub.
@@ -0,0 +1 @@
+Fix missing quotes for wildcard domains in `federation_certificate_verification_whitelist`.
@@ -0,0 +1 @@
+Keep fallback key marked as used if it's re-uploaded.
@@ -0,0 +1 @@
+Use `auto_attribs` on the `attrs` class `RefreshTokenLookupResult`.
@@ -0,0 +1 @@
+Rename unstable `access_token_lifetime` configuration option to `refreshable_access_token_lifetime` to make it clear it only concerns refreshable access tokens.
@@ -0,0 +1 @@
+Do not run the broken MSC2716 tests when running `scripts-dev/complement.sh`.
@@ -0,0 +1 @@
+Store and allow querying of arbitrary event relations.
@@ -0,0 +1 @@
+Fix a bug introduced in v1.13.0 where creating and publishing a room could cause errors if `room_list_publication_rules` is configured.
@@ -0,0 +1 @@
+Remove dead code from supporting ACME.
@@ -0,0 +1 @@
+Remove deprecated `trust_identity_server_for_password_resets` configuration flag.
@@ -0,0 +1 @@
+Refactor including the bundled relations when serializing an event.
@@ -0,0 +1 @@
+Improve performance of various background database schema updates.
@@ -0,0 +1 @@
+Improve performance of various background database schema updates.
@@ -148,14 +148,6 @@ bcrypt_rounds: 12
 allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
 enable_group_creation: true
 
-# The list of identity servers trusted to verify third party
-# identifiers by this server.
-#
-# Also defines the ID server which will be called when an account is
-# deactivated (one will be picked arbitrarily).
-trusted_third_party_id_servers:
-    - matrix.org
-    - vector.im
 
 ## Metrics ###
 
@@ -48,7 +48,7 @@ WORKERS_CONFIG = {
        "app": "synapse.app.user_dir",
        "listener_resources": ["client"],
        "endpoint_patterns": [
-           "^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$"
+           "^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$"
        ],
        "shared_extra_conf": {"update_user_directory": False},
        "worker_extra_conf": "",
@@ -85,10 +85,10 @@ WORKERS_CONFIG = {
        "app": "synapse.app.generic_worker",
        "listener_resources": ["client"],
        "endpoint_patterns": [
-           "^/_matrix/client/(v2_alpha|r0)/sync$",
-           "^/_matrix/client/(api/v1|v2_alpha|r0)/events$",
-           "^/_matrix/client/(api/v1|r0)/initialSync$",
-           "^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$",
+           "^/_matrix/client/(v2_alpha|r0|v3)/sync$",
+           "^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$",
+           "^/_matrix/client/(api/v1|r0|v3)/initialSync$",
+           "^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$",
        ],
        "shared_extra_conf": {},
        "worker_extra_conf": "",
@@ -146,11 +146,11 @@ WORKERS_CONFIG = {
        "app": "synapse.app.generic_worker",
        "listener_resources": ["client"],
        "endpoint_patterns": [
-           "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact",
-           "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
-           "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
-           "^/_matrix/client/(api/v1|r0|unstable)/join/",
-           "^/_matrix/client/(api/v1|r0|unstable)/profile/",
+           "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact",
+           "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send",
+           "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
+           "^/_matrix/client/(api/v1|r0|v3|unstable)/join/",
+           "^/_matrix/client/(api/v1|r0|v3|unstable)/profile/",
        ],
        "shared_extra_conf": {},
        "worker_extra_conf": "",
@@ -158,7 +158,7 @@ WORKERS_CONFIG = {
    "frontend_proxy": {
        "app": "synapse.app.frontend_proxy",
        "listener_resources": ["client", "replication"],
-       "endpoint_patterns": ["^/_matrix/client/(api/v1|r0|unstable)/keys/upload"],
+       "endpoint_patterns": ["^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload"],
        "shared_extra_conf": {},
        "worker_extra_conf": (
            "worker_main_http_uri: http://127.0.0.1:%d"
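A quick, illustrative way to sanity-check the widened endpoint patterns (plain `grep -E`, nothing Synapse-specific; the sample paths are hypothetical):

```sh
pattern='^/_matrix/client/(v2_alpha|r0|v3)/sync$'
for path in /_matrix/client/r0/sync /_matrix/client/v3/sync /_matrix/client/v1/sync; do
    if printf '%s\n' "$path" | grep -Eq "$pattern"; then
        echo "routed to worker: $path"
    else
        echo "not matched:      $path"
    fi
done
```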
@@ -50,8 +50,10 @@ build the documentation with:
 mdbook build
 ```
 
-The rendered contents will be outputted to a new `book/` directory at the root of the repository. You can
-browse the book by opening `book/index.html` in a web browser.
+The rendered contents will be outputted to a new `book/` directory at the root of the repository. Please note that
+index.html is not built by default, it is created by copying over the file `welcome_and_overview.html` to `index.html`
+during deployment. Thus, when running `mdbook serve` locally the book will initially show a 404 in place of the index
+due to the above. Do not be alarmed!
 
 You can also have mdbook host the docs on a local webserver with hot-reload functionality via:
 
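A hedged example of previewing the docs locally, given the note above (port 3000 is mdbook's default):

```sh
mdbook serve
# index.html is only generated at deploy time, so open the welcome page directly:
#   http://localhost:3000/welcome_and_overview.html
```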
@@ -23,10 +23,10 @@
   - [Structured Logging](structured_logging.md)
   - [Templates](templates.md)
   - [User Authentication](usage/configuration/user_authentication/README.md)
-    - [Single-Sign On]()
+    - [Single-Sign On](usage/configuration/user_authentication/single_sign_on/README.md)
       - [OpenID Connect](openid.md)
-      - [SAML]()
-      - [CAS]()
+      - [SAML](usage/configuration/user_authentication/single_sign_on/saml.md)
+      - [CAS](usage/configuration/user_authentication/single_sign_on/cas.md)
       - [SSO Mapping Providers](sso_mapping_providers.md)
     - [Password Auth Providers](password_auth_providers.md)
     - [JSON Web Tokens](jwt.md)
@@ -70,6 +70,8 @@ This API returns a JSON body like the following:
 
 The status will be one of `active`, `complete`, or `failed`.
 
+If `status` is `failed` there will be a string `error` with the error message.
+
 ## Reclaim disk space (Postgres)
 
 To reclaim the disk space and return it to the operating system, you need to run
@@ -3,7 +3,11 @@
 - [Room Details API](#room-details-api)
 - [Room Members API](#room-members-api)
 - [Room State API](#room-state-api)
+- [Block Room API](#block-room-api)
 - [Delete Room API](#delete-room-api)
+  * [Version 1 (old version)](#version-1-old-version)
+  * [Version 2 (new version)](#version-2-new-version)
+  * [Status of deleting rooms](#status-of-deleting-rooms)
   * [Undoing room shutdowns](#undoing-room-shutdowns)
 - [Make Room Admin API](#make-room-admin-api)
 - [Forward Extremities Admin API](#forward-extremities-admin-api)
@@ -383,6 +387,83 @@ A response body like the following is returned:
 }
 ```
 
+# Block Room API
+The Block Room admin API allows server admins to block and unblock rooms,
+and query to see if a given room is blocked.
+This API can be used to pre-emptively block a room, even if it's unknown to this
+homeserver. Users will be prevented from joining a blocked room.
+
+## Block or unblock a room
+
+The API is:
+
+```
+PUT /_synapse/admin/v1/rooms/<room_id>/block
+```
+
+with a body of:
+
+```json
+{
+    "block": true
+}
+```
+
+A response body like the following is returned:
+
+```json
+{
+    "block": true
+}
+```
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- `room_id` - The ID of the room.
+
+The following JSON body parameters are available:
+
+- `block` - If `true` the room will be blocked and if `false` the room will be unblocked.
+
+**Response**
+
+The following fields are possible in the JSON response body:
+
+- `block` - A boolean. `true` if the room is blocked, otherwise `false`
+
+## Get block status
+
+The API is:
+
+```
+GET /_synapse/admin/v1/rooms/<room_id>/block
+```
+
+A response body like the following is returned:
+
+```json
+{
+    "block": true,
+    "user_id": "<user_id>"
+}
+```
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- `room_id` - The ID of the room.
+
+**Response**
+
+The following fields are possible in the JSON response body:
+
+- `block` - A boolean. `true` if the room is blocked, otherwise `false`
+- `user_id` - An optional string. If the room is blocked (`block` is `true`) shows
+  the user who has added the room to the blocking list. Otherwise it is not displayed.
+
 # Delete Room API
 
 The Delete Room admin API allows server admins to remove rooms from the server
|
||||||
as room administrator and will contain a message explaining what happened. Users invited
|
as room administrator and will contain a message explaining what happened. Users invited
|
||||||
to the new room will have power level `-10` by default, and thus be unable to speak.
|
to the new room will have power level `-10` by default, and thus be unable to speak.
|
||||||
|
|
||||||
If `block` is `True` it prevents new joins to the old room.
|
If `block` is `true`, users will be prevented from joining the old room.
|
||||||
|
This option can in [Version 1](#version-1-old-version) also be used to pre-emptively
|
||||||
|
block a room, even if it's unknown to this homeserver. In this case, the room will be
|
||||||
|
blocked, and no further action will be taken. If `block` is `false`, attempting to
|
||||||
|
delete an unknown room is invalid and will be rejected as a bad request.
|
||||||
|
|
||||||
This API will remove all trace of the old room from your database after removing
|
This API will remove all trace of the old room from your database after removing
|
||||||
all local users. If `purge` is `true` (the default), all traces of the old room will
|
all local users. If `purge` is `true` (the default), all traces of the old room will
|
||||||
be removed from your database after removing all local users. If you do not want
|
be removed from your database after removing all local users. If you do not want
|
||||||
this to happen, set `purge` to `false`.
|
this to happen, set `purge` to `false`.
|
||||||
Depending on the amount of history being purged a call to the API may take
|
Depending on the amount of history being purged, a call to the API may take
|
||||||
several minutes or longer.
|
several minutes or longer.
|
||||||
|
|
||||||
The local server will only have the power to move local user and room aliases to
|
The local server will only have the power to move local user and room aliases to
|
||||||
the new room. Users on other servers will be unaffected.
|
the new room. Users on other servers will be unaffected.
|
||||||
|
|
||||||
|
To use it, you will need to authenticate by providing an ``access_token`` for a
|
||||||
|
server admin: see [Admin API](../usage/administration/admin_api).
|
||||||
|
|
||||||
|
## Version 1 (old version)
|
||||||
|
|
||||||
|
This version works synchronously. That means you only get the response once the server has
|
||||||
|
finished the action, which may take a long time. If you request the same action
|
||||||
|
a second time, and the server has not finished the first one, the second request will block.
|
||||||
|
This is fixed in version 2 of this API. The parameters are the same in both APIs.
|
||||||
|
This API will become deprecated in the future.
|
||||||
|
|
||||||
The API is:
|
The API is:
|
||||||
|
|
||||||
```
|
```
|
||||||
|
@@ -426,9 +522,6 @@ with a body of:
 }
 ```
 
-To use it, you will need to authenticate by providing an ``access_token`` for a
-server admin: see [Admin API](../usage/administration/admin_api).
-
 A response body like the following is returned:
 
 ```json
@@ -445,6 +538,44 @@ A response body like the following is returned:
 }
 ```
 
+The parameters and response values have the same format as
+[version 2](#version-2-new-version) of the API.
+
+## Version 2 (new version)
+
+**Note**: This API is new, experimental and "subject to change".
+
+This version works asynchronously, meaning you get the response from server immediately
+while the server works on that task in background. You can then request the status of the action
+to check if it has completed.
+
+The API is:
+
+```
+DELETE /_synapse/admin/v2/rooms/<room_id>
+```
+
+with a body of:
+
+```json
+{
+    "new_room_user_id": "@someuser:example.com",
+    "room_name": "Content Violation Notification",
+    "message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
+    "block": true,
+    "purge": true
+}
+```
+
+The API starts the shut down and purge running, and returns immediately with a JSON body with
+a purge id:
+
+```json
+{
+    "delete_id": "<opaque id>"
+}
+```
+
 **Parameters**
 
 The following parameters should be set in the URL:
@@ -464,8 +595,10 @@ The following JSON body parameters are available:
   `new_room_user_id` in the new room. Ideally this will clearly convey why the
   original room was shut down. Defaults to `Sharing illegal content on this server
   is not permitted and rooms in violation will be blocked.`
-* `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing
-  future attempts to join the room. Defaults to `false`.
+* `block` - Optional. If set to `true`, this room will be added to a blocking list,
+  preventing future attempts to join the room. Rooms can be blocked
+  even if they're not yet known to the homeserver (only with
+  [Version 1](#version-1-old-version) of the API). Defaults to `false`.
 * `purge` - Optional. If set to `true`, it will remove all traces of the room from your database.
   Defaults to `true`.
 * `force_purge` - Optional, and ignored unless `purge` is `true`. If set to `true`, it
@@ -475,16 +608,124 @@ The following JSON body parameters are available:
 
 The JSON body must not be empty. The body must be at least `{}`.
 
-**Response**
+## Status of deleting rooms
+
+**Note**: This API is new, experimental and "subject to change".
+
+It is possible to query the status of the background task for deleting rooms.
+The status can be queried up to 24 hours after completion of the task,
+or until Synapse is restarted (whichever happens first).
+
+### Query by `room_id`
+
+With this API you can get the status of all active deletion tasks, and all those completed in the last 24h,
+for the given `room_id`.
+
+The API is:
+
+```
+GET /_synapse/admin/v2/rooms/<room_id>/delete_status
+```
+
+A response body like the following is returned:
+
+```json
+{
+    "results": [
+        {
+            "delete_id": "delete_id1",
+            "status": "failed",
+            "error": "error message",
+            "shutdown_room": {
+                "kicked_users": [],
+                "failed_to_kick_users": [],
+                "local_aliases": [],
+                "new_room_id": null
+            }
+        }, {
+            "delete_id": "delete_id2",
+            "status": "purging",
+            "shutdown_room": {
+                "kicked_users": [
+                    "@foobar:example.com"
+                ],
+                "failed_to_kick_users": [],
+                "local_aliases": [
+                    "#badroom:example.com",
+                    "#evilsaloon:example.com"
+                ],
+                "new_room_id": "!newroomid:example.com"
+            }
+        }
+    ]
+}
+```
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+* `room_id` - The ID of the room.
+
+### Query by `delete_id`
+
+With this API you can get the status of one specific task by `delete_id`.
+
+The API is:
+
+```
+GET /_synapse/admin/v2/rooms/delete_status/<delete_id>
+```
+
+A response body like the following is returned:
+
+```json
+{
+    "status": "purging",
+    "shutdown_room": {
+        "kicked_users": [
+            "@foobar:example.com"
+        ],
+        "failed_to_kick_users": [],
+        "local_aliases": [
+            "#badroom:example.com",
+            "#evilsaloon:example.com"
+        ],
+        "new_room_id": "!newroomid:example.com"
+    }
+}
+```
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+* `delete_id` - The ID for this delete.
+
+### Response
 
 The following fields are returned in the JSON response body:
 
-* `kicked_users` - An array of users (`user_id`) that were kicked.
-* `failed_to_kick_users` - An array of users (`user_id`) that that were not kicked.
-* `local_aliases` - An array of strings representing the local aliases that were migrated from
-  the old room to the new.
-* `new_room_id` - A string representing the room ID of the new room.
+- `results` - An array of objects, each containing information about one task.
+  This field is omitted from the result when you query by `delete_id`.
+  Task objects contain the following fields:
+  - `delete_id` - The ID for this purge if you query by `room_id`.
+  - `status` - The status will be one of:
+    - `shutting_down` - The process is removing users from the room.
+    - `purging` - The process is purging the room and event data from database.
+    - `complete` - The process has completed successfully.
+    - `failed` - The process is aborted, an error has occurred.
+  - `error` - A string that shows an error message if `status` is `failed`.
+    Otherwise this field is hidden.
+  - `shutdown_room` - An object containing information about the result of shutting down the room.
+    *Note:* The result is shown after removing the room members.
+    The delete process can still be running. Please pay attention to the `status`.
+    - `kicked_users` - An array of users (`user_id`) that were kicked.
+    - `failed_to_kick_users` - An array of users (`user_id`) that were not kicked.
+    - `local_aliases` - An array of strings representing the local aliases that were
+      migrated from the old room to the new.
+    - `new_room_id` - A string representing the room ID of the new room, or `null` if
+      no such room was created.
 
 ## Undoing room deletions
 
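An illustrative end-to-end use of the new asynchronous API and its status endpoint (all URLs, tokens and IDs are placeholders):

```sh
# Start the background deletion; the response contains a delete_id.
curl -X DELETE "http://localhost:8008/_synapse/admin/v2/rooms/%21badroom%3Aexample.com" \
     -H "Authorization: Bearer <admin_access_token>" \
     -d '{"block": true, "purge": true}'
# -> {"delete_id": "<opaque id>"}

# Poll the task until its status is "complete" (or "failed").
curl "http://localhost:8008/_synapse/admin/v2/rooms/delete_status/<opaque id>" \
     -H "Authorization: Bearer <admin_access_token>"
```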
@@ -948,7 +948,7 @@ The following fields are returned in the JSON response body:
 See also the
 [Client-Server API Spec on pushers](https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers).
 
-## Shadow-banning users
+## Controlling whether a user is shadow-banned
 
 Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
 A shadow-banned users receives successful responses to their client-server API requests,
@@ -961,16 +961,22 @@ or broken behaviour for the client. A shadow-banned user will not receive any
 notification and it is generally more appropriate to ban or kick abusive users.
 A shadow-banned user will be unable to contact anyone on the server.
 
-The API is:
+To shadow-ban a user the API is:
 
 ```
 POST /_synapse/admin/v1/users/<user_id>/shadow_ban
 ```
 
+To un-shadow-ban a user the API is:
+
+```
+DELETE /_synapse/admin/v1/users/<user_id>/shadow_ban
+```
+
 To use it, you will need to authenticate by providing an `access_token` for a
 server admin: [Admin API](../usage/administration/admin_api)
 
-An empty JSON dict is returned.
+An empty JSON dict is returned in both cases.
 
 **Parameters**
 
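For illustration, shadow-banning and then un-shadow-banning a user with the endpoints above (placeholders throughout):

```sh
curl -X POST "http://localhost:8008/_synapse/admin/v1/users/@spammer:example.com/shadow_ban" \
     -H "Authorization: Bearer <admin_access_token>" -d '{}'

curl -X DELETE "http://localhost:8008/_synapse/admin/v1/users/@spammer:example.com/shadow_ban" \
     -H "Authorization: Bearer <admin_access_token>"
```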
@@ -1107,7 +1113,7 @@ This endpoint will work even if registration is disabled on the server, unlike
 The API is:
 
 ```
-POST /_synapse/admin/v1/username_availabile?username=$localpart
+GET /_synapse/admin/v1/username_available?username=$localpart
 ```
 
 The request and response format is the same as the [/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
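A hedged example call using the corrected method and spelling (host and token are placeholders):

```sh
curl "http://localhost:8008/_synapse/admin/v1/username_available?username=alice" \
     -H "Authorization: Bearer <admin_access_token>"
# -> {"available": true} when the localpart is unused
```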
@@ -7,7 +7,7 @@
 
 ## Server to Server Stack
 
-To use the server to server stack, home servers should only need to
+To use the server to server stack, homeservers should only need to
 interact with the Messaging layer.
 
 The server to server side of things is designed into 4 distinct layers:
@@ -23,7 +23,7 @@ Server with a domain specific API.
 
 1. **Messaging Layer**
 
-   This is what the rest of the Home Server hits to send messages, join rooms,
+   This is what the rest of the homeserver hits to send messages, join rooms,
    etc. It also allows you to register callbacks for when it get's notified by
    lower levels that e.g. a new message has been received.
 
@@ -45,7 +45,7 @@ Server with a domain specific API.
 
    For incoming PDUs, it has to check the PDUs it references to see
   if we have missed any. If we have go and ask someone (another
-   home server) for it.
+   homeserver) for it.
 
 3. **Transaction Layer**
 
@@ -22,8 +22,9 @@ will be removed in a future version of Synapse.
 
 The `token` field should include the JSON web token with the following claims:
 
-* The `sub` (subject) claim is required and should encode the local part of the
-  user ID.
+* A claim that encodes the local part of the user ID is required. By default,
+  the `sub` (subject) claim is used, or a custom claim can be set in the
+  configuration file.
 * The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
   claims are optional, but validated if present.
 * The issuer (`iss`) claim is optional, but required and validated if configured.
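An illustrative login request using a JWT whose configured claim carries the localpart; `<jwt>` must be signed with the secret and algorithm from the homeserver config, and the `org.matrix.login.jwt` type name used here is an assumption, not shown in the hunk:

```sh
curl -X POST "http://localhost:8008/_matrix/client/r0/login" \
     -d '{"type": "org.matrix.login.jwt", "token": "<jwt>"}'
```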
@@ -1,7 +1,7 @@
 <h2 style="color:red">
 This page of the Synapse documentation is now deprecated. For up to date
 documentation on setting up or writing a password auth provider module, please see
-<a href="modules.md">this page</a>.
+<a href="modules/index.md">this page</a>.
 </h2>
 
 # Password auth provider modules
@@ -647,8 +647,8 @@ retention:
 #
 #federation_certificate_verification_whitelist:
 #  - lon.example.com
-#  - *.domain.com
-#  - *.onion
+#  - "*.domain.com"
+#  - "*.onion"
 
 # List of custom certificate authorities for federation traffic.
 #
@@ -2039,6 +2039,12 @@ sso:
 #
 #algorithm: "provided-by-your-issuer"
 
+# Name of the claim containing a unique identifier for the user.
+#
+# Optional, defaults to `sub`.
+#
+#subject_claim: "sub"
+
 # The issuer to validate the "iss" claim against.
 #
 # Optional, if provided the "iss" claim will be required and
@@ -2360,8 +2366,8 @@ user_directory:
 # indexes were (re)built was before Synapse 1.44, you'll have to
 # rebuild the indexes in order to search through all known users.
 # These indexes are built the first time Synapse starts; admins can
-# manually trigger a rebuild following the instructions at
-# https://matrix-org.github.io/synapse/latest/user_directory.html
+# manually trigger a rebuild via API following the instructions at
+# https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
 #
 # Uncomment to return search results containing all known users, even if that
 # user does not share a room with the requester.
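A small, illustrative check of why the wildcard entries are now quoted: an unquoted leading `*` starts a YAML alias and fails to parse (this uses PyYAML purely for demonstration):

```sh
python3 - <<'EOF'
import yaml

quoted   = 'federation_certificate_verification_whitelist:\n  - "*.domain.com"\n'
unquoted = 'federation_certificate_verification_whitelist:\n  - *.domain.com\n'

print(yaml.safe_load(quoted))           # parses to an ordinary list of strings
try:
    yaml.safe_load(unquoted)
except yaml.YAMLError as exc:
    print("unquoted wildcard rejected:", type(exc).__name__)
EOF
```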
@@ -76,6 +76,12 @@ The fingerprint of the repository signing key (as shown by `gpg
 /usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
 `AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
 
+When installing with Debian packages, you might prefer to place files in
+`/etc/matrix-synapse/conf.d/` to override your configuration without editing
+the main configuration file at `/etc/matrix-synapse/homeserver.yaml`.
+By doing that, you won't be asked if you want to replace your configuration
+file when you upgrade the Debian package to a later version.
+
 ##### Downstream Debian packages
 
 We do not recommend using the packages from the default Debian `buster`
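An illustrative override file as suggested above (the keys shown are examples of ordinary homeserver.yaml settings, not a recommendation):

```sh
sudo tee /etc/matrix-synapse/conf.d/99-local-overrides.yaml > /dev/null <<'EOF'
# Local overrides kept out of /etc/matrix-synapse/homeserver.yaml so that
# package upgrades never prompt about replacing the main config file.
report_stats: false
max_upload_size: 50M
EOF
sudo systemctl restart matrix-synapse
```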
@@ -1,12 +1,12 @@
 # Overview
 
-This document explains how to enable VoIP relaying on your Home Server with
+This document explains how to enable VoIP relaying on your homeserver with
 TURN.
 
-The synapse Matrix Home Server supports integration with TURN server via the
+The synapse Matrix homeserver supports integration with TURN server via the
 [TURN server REST API](<https://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This
-allows the Home Server to generate credentials that are valid for use on the
-TURN server through the use of a secret shared between the Home Server and the
+allows the homeserver to generate credentials that are valid for use on the
+TURN server through the use of a secret shared between the homeserver and the
 TURN server.
 
 The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API) and integrate it with synapse.
@@ -165,18 +165,18 @@ This will install and start a systemd service called `coturn`.
 
 ## Synapse setup
 
-Your home server configuration file needs the following extra keys:
+Your homeserver configuration file needs the following extra keys:
 
 1. "`turn_uris`": This needs to be a yaml list of public-facing URIs
    for your TURN server to be given out to your clients. Add separate
    entries for each transport your TURN server supports.
 2. "`turn_shared_secret`": This is the secret shared between your
-   Home server and your TURN server, so you should set it to the same
+   homeserver and your TURN server, so you should set it to the same
    string you used in turnserver.conf.
 3. "`turn_user_lifetime`": This is the amount of time credentials
-   generated by your Home Server are valid for (in milliseconds).
+   generated by your homeserver are valid for (in milliseconds).
    Shorter times offer less potential for abuse at the expense of
-   increased traffic between web clients and your home server to
+   increased traffic between web clients and your homeserver to
    refresh credentials. The TURN REST API specification recommends
    one day (86400000).
 4. "`turn_allow_guests`": Whether to allow guest users to use the
@@ -220,7 +220,7 @@ Here are a few things to try:
   anyone who has successfully set this up.
 
 * Check that you have opened your firewall to allow TCP and UDP traffic to the
-  TURN ports (normally 3478 and 5479).
+  TURN ports (normally 3478 and 5349).
 
 * Check that you have opened your firewall to allow UDP traffic to the UDP
   relay ports (49152-65535 by default).
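For illustration, the extra keys from the `## Synapse setup` hunk above as they might be appended to the config (all values are placeholders to be adapted to your coturn deployment):

```sh
cat >> /etc/matrix-synapse/homeserver.yaml <<'EOF'
turn_uris:
  - "turn:turn.example.com:3478?transport=udp"
  - "turn:turn.example.com:3478?transport=tcp"
turn_shared_secret: "<same secret as static-auth-secret in turnserver.conf>"
turn_user_lifetime: 86400000
turn_allow_guests: true
EOF
```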
@@ -42,7 +42,6 @@ For each update:
 `average_items_per_ms` how many items are processed per millisecond based on an exponential average.
 
 
-
 ## Enabled
 
 This API allow pausing background updates.
@@ -82,3 +81,29 @@ The API returns the `enabled` param.
 ```
 
 There is also a `GET` version which returns the `enabled` state.
+
+
+## Run
+
+This API schedules a specific background update to run. The job starts immediately after calling the API.
+
+
+The API is:
+
+```
+POST /_synapse/admin/v1/background_updates/start_job
+```
+
+with the following body:
+
+```json
+{
+    "job_name": "populate_stats_process_rooms"
+}
+```
+
+The following JSON body parameters are available:
+
+- `job_name` - A string which job to run. Valid values are:
+  - `populate_stats_process_rooms` - Recalculate the stats for all rooms.
+  - `regenerate_directory` - Recalculate the [user directory](../../../user_directory.md) if it is stale or out of sync.
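An illustrative call to the new endpoint, scheduling the user-directory rebuild job (host and token are placeholders):

```sh
curl -X POST "http://localhost:8008/_synapse/admin/v1/background_updates/start_job" \
     -H "Authorization: Bearer <admin_access_token>" \
     -d '{"job_name": "regenerate_directory"}'
```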
@@ -0,0 +1,5 @@
+# Single Sign-On
+
+Synapse supports single sign-on through the SAML, Open ID Connect or CAS protocols.
+LDAP and other login methods are supported through first and third-party password
+auth provider modules.
@@ -0,0 +1,8 @@
+# CAS
+
+Synapse supports authenticating users via the [Central Authentication
+Service protocol](https://en.wikipedia.org/wiki/Central_Authentication_Service)
+(CAS) natively.
+
+Please see the `cas_config` and `sso` sections of the [Synapse configuration
+file](../../../configuration/homeserver_sample_config.md) for more details.
@@ -0,0 +1,8 @@
+# SAML
+
+Synapse supports authenticating users via the [Security Assertion
+Markup Language](https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language)
+(SAML) protocol natively.
+
+Please see the `saml2_config` and `sso` sections of the [Synapse configuration
+file](../../../configuration/homeserver_sample_config.md) for more details.
@@ -6,9 +6,9 @@ on this particular server - i.e. ones which your account shares a room with, or
 who are present in a publicly viewable room present on the server.
 
 The directory info is stored in various tables, which can (typically after
 DB corruption) get stale or out of sync. If this happens, for now the
-solution to fix it is to execute the SQL [here](https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/main/delta/53/user_dir_populate.sql)
-and then restart synapse. This should then start a background task to
+solution to fix it is to use the [admin API](usage/administration/admin_api/background_updates.md#run)
+and execute the job `regenerate_directory`. This should then start a background task to
 flush the current tables and regenerate the directory.
 
 Data model
@@ -182,10 +182,10 @@ This worker can handle API requests matching the following regular
 expressions:
 
     # Sync requests
-    ^/_matrix/client/(v2_alpha|r0)/sync$
-    ^/_matrix/client/(api/v1|v2_alpha|r0)/events$
-    ^/_matrix/client/(api/v1|r0)/initialSync$
-    ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
+    ^/_matrix/client/(v2_alpha|r0|v3)/sync$
+    ^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$
+    ^/_matrix/client/(api/v1|r0|v3)/initialSync$
+    ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
 
     # Federation requests
     ^/_matrix/federation/v1/event/
@@ -216,40 +216,40 @@ expressions:
     ^/_matrix/federation/v1/send/
 
     # Client API requests
-    ^/_matrix/client/(api/v1|r0|unstable)/createRoom$
-    ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/createRoom$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicRooms$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/joined_members$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
     ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
     ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/hierarchy$
     ^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
-    ^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
-    ^/_matrix/client/(api/v1|r0|unstable)/devices$
-    ^/_matrix/client/(api/v1|r0|unstable)/keys/query$
-    ^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/devices$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/query$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/changes$
     ^/_matrix/client/versions$
-    ^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
-    ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
-    ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
-    ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/
-    ^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$
-    ^/_matrix/client/(api/v1|r0|unstable)/search$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_groups$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/search$
 
     # Registration/login requests
-    ^/_matrix/client/(api/v1|r0|unstable)/login$
-    ^/_matrix/client/(r0|unstable)/register$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/login$
+    ^/_matrix/client/(r0|v3|unstable)/register$
     ^/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity$
 
     # Event sending requests
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
-    ^/_matrix/client/(api/v1|r0|unstable)/join/
-    ^/_matrix/client/(api/v1|r0|unstable)/profile/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/join/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/profile/
 
 
 Additionally, the following REST endpoints can be handled for GET requests:
@@ -261,14 +261,14 @@ room must be routed to the same instance. Additionally, care must be taken to
 ensure that the purge history admin API is not used while pagination requests
 for the room are in flight:
 
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/messages$
 
 Additionally, the following endpoints should be included if Synapse is configured
 to use SSO (you only need to include the ones for whichever SSO provider you're
 using):
 
     # for all SSO providers
-    ^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/login/sso/redirect
     ^/_synapse/client/pick_idp$
     ^/_synapse/client/pick_username
     ^/_synapse/client/new_user_consent$
@@ -281,7 +281,7 @@ using):
     ^/_synapse/client/saml2/authn_response$
 
     # CAS requests.
-    ^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/login/cas/ticket$
 
 Ensure that all SSO logins go to a single process.
 For multiple workers not handling the SSO endpoints properly, see
@ -465,7 +465,7 @@ Note that if a reverse proxy is used , then `/_matrix/media/` must be routed for
|
||||||
Handles searches in the user directory. It can handle REST endpoints matching
|
Handles searches in the user directory. It can handle REST endpoints matching
|
||||||
the following regular expressions:
|
the following regular expressions:
|
||||||
|
|
||||||
^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
|
^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$
|
||||||
|
|
||||||
When using this worker you must also set `update_user_directory: False` in the
|
When using this worker you must also set `update_user_directory: False` in the
|
||||||
shared configuration file to stop the main synapse running background
|
shared configuration file to stop the main synapse running background
|
||||||
|
@ -477,12 +477,12 @@ Proxies some frequently-requested client endpoints to add caching and remove
|
||||||
load from the main synapse. It can handle REST endpoints matching the following
|
load from the main synapse. It can handle REST endpoints matching the following
|
||||||
regular expressions:
|
regular expressions:
|
||||||
|
|
||||||
^/_matrix/client/(api/v1|r0|unstable)/keys/upload
|
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload
|
||||||
|
|
||||||
If `use_presence` is False in the homeserver config, it can also handle REST
|
If `use_presence` is False in the homeserver config, it can also handle REST
|
||||||
endpoints matching the following regular expressions:
|
endpoints matching the following regular expressions:
|
||||||
|
|
||||||
^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
|
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/[^/]+/status
|
||||||
|
|
||||||
This "stub" presence handler will pass through `GET` request but make the
|
This "stub" presence handler will pass through `GET` request but make the
|
||||||
`PUT` effectively a no-op.
|
`PUT` effectively a no-op.
|
||||||
|
|
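The change to these worker routing patterns is simply to accept the spec's `v3` client API prefix alongside the older ones. A minimal standalone sketch (the paths below are invented for illustration) showing that one of the updated regexes matches both the legacy `r0` form and the new `v3` form:

    import re

    # One of the updated routing patterns from the diff above.
    send_event = re.compile(r"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send")

    # Both the legacy r0 path and the v3 path match, so a reverse proxy keyed on
    # this regex keeps routing event-send requests to the same worker.
    for path in (
        "/_matrix/client/r0/rooms/!room:example.org/send/m.room.message/txn1",
        "/_matrix/client/v3/rooms/!room:example.org/send/m.room.message/txn1",
    ):
        assert send_event.match(path)
    print("r0 and v3 event-send paths both match")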
mypy.ini (326 changed lines)

@ -10,86 +10,150 @@ warn_unreachable = True
  local_partial_types = True
  no_implicit_optional = True

- # To find all folders that pass mypy you run:
- #
- # find synapse/* -type d -not -name __pycache__ -exec bash -c "mypy '{}' > /dev/null" \; -print
-
  files =
    scripts-dev/sign_json,
-   synapse/__init__.py,
-   synapse/api,
-   synapse/appservice,
-   synapse/config,
-   synapse/crypto,
-   synapse/event_auth.py,
-   synapse/events,
-   synapse/federation,
-   synapse/groups,
-   synapse/handlers,
-   synapse/http,
-   synapse/logging,
-   synapse/metrics,
-   synapse/module_api,
-   synapse/notifier.py,
-   synapse/push,
-   synapse/replication,
-   synapse/rest,
-   synapse/server.py,
-   synapse/server_notices,
-   synapse/spam_checker_api,
-   synapse/state,
-   synapse/storage/__init__.py,
-   synapse/storage/_base.py,
-   synapse/storage/background_updates.py,
-   synapse/storage/databases/main/appservice.py,
-   synapse/storage/databases/main/client_ips.py,
-   synapse/storage/databases/main/events.py,
-   synapse/storage/databases/main/keys.py,
-   synapse/storage/databases/main/pusher.py,
-   synapse/storage/databases/main/registration.py,
-   synapse/storage/databases/main/relations.py,
-   synapse/storage/databases/main/session.py,
-   synapse/storage/databases/main/stream.py,
-   synapse/storage/databases/main/ui_auth.py,
-   synapse/storage/databases/state,
-   synapse/storage/database.py,
-   synapse/storage/engines,
-   synapse/storage/keys.py,
-   synapse/storage/persist_events.py,
-   synapse/storage/prepare_database.py,
-   synapse/storage/purge_events.py,
-   synapse/storage/push_rule.py,
-   synapse/storage/relations.py,
-   synapse/storage/roommember.py,
-   synapse/storage/state.py,
-   synapse/storage/types.py,
-   synapse/storage/util,
-   synapse/streams,
-   synapse/types.py,
-   synapse/util,
-   synapse/visibility.py,
-   tests/replication,
-   tests/test_event_auth.py,
-   tests/test_utils,
-   tests/handlers/test_password_providers.py,
-   tests/handlers/test_room.py,
-   tests/handlers/test_room_summary.py,
-   tests/handlers/test_send_email.py,
-   tests/handlers/test_sync.py,
-   tests/handlers/test_user_directory.py,
-   tests/rest/client/test_login.py,
-   tests/rest/client/test_auth.py,
-   tests/rest/client/test_relations.py,
-   tests/rest/media/v1/test_filepath.py,
-   tests/rest/media/v1/test_oembed.py,
-   tests/storage/test_state.py,
-   tests/storage/test_user_directory.py,
-   tests/util/test_itertools.py,
-   tests/util/test_stream_change_cache.py
+   setup.py,
+   synapse/,
+   tests/
+
+ # Note: Better exclusion syntax coming in mypy > 0.910
+ # https://github.com/python/mypy/pull/11329
+ #
+ # For now, set the (?x) flag enable "verbose" regexes
+ # https://docs.python.org/3/library/re.html#re.X
+ exclude = (?x)
+   ^(
+   |synapse/storage/databases/__init__.py
+   |synapse/storage/databases/main/__init__.py
+   |synapse/storage/databases/main/account_data.py
+   |synapse/storage/databases/main/cache.py
+   |synapse/storage/databases/main/devices.py
+   |synapse/storage/databases/main/e2e_room_keys.py
+   |synapse/storage/databases/main/end_to_end_keys.py
+   |synapse/storage/databases/main/event_federation.py
+   |synapse/storage/databases/main/event_push_actions.py
+   |synapse/storage/databases/main/events_bg_updates.py
+   |synapse/storage/databases/main/events_worker.py
+   |synapse/storage/databases/main/group_server.py
+   |synapse/storage/databases/main/metrics.py
+   |synapse/storage/databases/main/monthly_active_users.py
+   |synapse/storage/databases/main/presence.py
+   |synapse/storage/databases/main/purge_events.py
+   |synapse/storage/databases/main/push_rule.py
+   |synapse/storage/databases/main/receipts.py
+   |synapse/storage/databases/main/room.py
+   |synapse/storage/databases/main/roommember.py
+   |synapse/storage/databases/main/search.py
+   |synapse/storage/databases/main/state.py
+   |synapse/storage/databases/main/stats.py
+   |synapse/storage/databases/main/transactions.py
+   |synapse/storage/databases/main/user_directory.py
+   |synapse/storage/schema/
+
+   |tests/api/test_auth.py
+   |tests/api/test_ratelimiting.py
+   |tests/app/test_openid_listener.py
+   |tests/appservice/test_scheduler.py
+   |tests/config/test_cache.py
+   |tests/config/test_tls.py
+   |tests/crypto/test_keyring.py
+   |tests/events/test_presence_router.py
+   |tests/events/test_utils.py
+   |tests/federation/test_federation_catch_up.py
+   |tests/federation/test_federation_sender.py
+   |tests/federation/test_federation_server.py
+   |tests/federation/transport/test_knocking.py
+   |tests/federation/transport/test_server.py
+   |tests/handlers/test_cas.py
+   |tests/handlers/test_directory.py
+   |tests/handlers/test_e2e_keys.py
+   |tests/handlers/test_federation.py
+   |tests/handlers/test_oidc.py
+   |tests/handlers/test_presence.py
+   |tests/handlers/test_profile.py
+   |tests/handlers/test_saml.py
+   |tests/handlers/test_typing.py
+   |tests/http/federation/test_matrix_federation_agent.py
+   |tests/http/federation/test_srv_resolver.py
+   |tests/http/test_fedclient.py
+   |tests/http/test_proxyagent.py
+   |tests/http/test_servlet.py
+   |tests/http/test_site.py
+   |tests/logging/__init__.py
+   |tests/logging/test_terse_json.py
+   |tests/module_api/test_api.py
+   |tests/push/test_email.py
+   |tests/push/test_http.py
+   |tests/push/test_presentable_names.py
+   |tests/push/test_push_rule_evaluator.py
+   |tests/rest/admin/test_admin.py
+   |tests/rest/admin/test_device.py
+   |tests/rest/admin/test_media.py
+   |tests/rest/admin/test_server_notice.py
+   |tests/rest/admin/test_user.py
+   |tests/rest/admin/test_username_available.py
+   |tests/rest/client/test_account.py
+   |tests/rest/client/test_events.py
+   |tests/rest/client/test_filter.py
+   |tests/rest/client/test_groups.py
+   |tests/rest/client/test_register.py
+   |tests/rest/client/test_report_event.py
+   |tests/rest/client/test_rooms.py
+   |tests/rest/client/test_third_party_rules.py
+   |tests/rest/client/test_transactions.py
+   |tests/rest/client/test_typing.py
+   |tests/rest/client/utils.py
+   |tests/rest/key/v2/test_remote_key_resource.py
+   |tests/rest/media/v1/test_base.py
+   |tests/rest/media/v1/test_media_storage.py
+   |tests/rest/media/v1/test_url_preview.py
+   |tests/scripts/test_new_matrix_user.py
+   |tests/server.py
+   |tests/server_notices/test_resource_limits_server_notices.py
+   |tests/state/test_v2.py
+   |tests/storage/test_account_data.py
+   |tests/storage/test_appservice.py
+   |tests/storage/test_background_update.py
+   |tests/storage/test_base.py
+   |tests/storage/test_client_ips.py
+   |tests/storage/test_database.py
+   |tests/storage/test_event_federation.py
+   |tests/storage/test_id_generators.py
+   |tests/storage/test_roommember.py
+   |tests/test_metrics.py
+   |tests/test_phone_home.py
+   |tests/test_server.py
+   |tests/test_state.py
+   |tests/test_terms_auth.py
+   |tests/test_visibility.py
+   |tests/unittest.py
+   |tests/util/caches/test_cached_call.py
+   |tests/util/caches/test_deferred_cache.py
+   |tests/util/caches/test_descriptors.py
+   |tests/util/caches/test_response_cache.py
+   |tests/util/caches/test_ttlcache.py
+   |tests/util/test_async_helpers.py
+   |tests/util/test_batching_queue.py
+   |tests/util/test_dict_cache.py
+   |tests/util/test_expiring_cache.py
+   |tests/util/test_file_consumer.py
+   |tests/util/test_linearizer.py
+   |tests/util/test_logcontext.py
+   |tests/util/test_lrucache.py
+   |tests/util/test_rwlock.py
+   |tests/util/test_wheel_timer.py
+   |tests/utils.py
+   )$

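The new `exclude` value above is a single regular expression compiled with the `(?x)` / `re.X` "verbose" flag, which is what lets it be written one path per line. A small standalone sketch of that behaviour, using two paths taken from the list above (mypy applies its exclude pattern to file and directory paths in a similar way, but this snippet only demonstrates the verbose-regex mechanics):

    import re

    # With re.X, whitespace is ignored, so each "|path" line is just another
    # alternative inside the ^( ... )$ group; the leading "|" also creates a
    # harmless empty first alternative, mirroring the config above.
    exclude = re.compile(
        r"""^(
        |synapse/storage/schema/
        |tests/utils.py
        )$""",
        re.X,
    )

    assert exclude.match("synapse/storage/schema/")
    assert exclude.match("tests/utils.py")
    assert not exclude.match("synapse/api/filtering.py")
    print("verbose exclude regex behaves as expected")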
  [mypy-synapse.api.*]
  disallow_untyped_defs = True

+ [mypy-synapse.app.*]
+ disallow_untyped_defs = True
+
+ [mypy-synapse.config._base]
+ disallow_untyped_defs = True
+
  [mypy-synapse.crypto.*]
  disallow_untyped_defs = True

@ -99,6 +163,9 @@ disallow_untyped_defs = True
  [mypy-synapse.handlers.*]
  disallow_untyped_defs = True

+ [mypy-synapse.metrics.*]
+ disallow_untyped_defs = True
+
  [mypy-synapse.push.*]
  disallow_untyped_defs = True

@ -114,105 +181,45 @@ disallow_untyped_defs = True
  [mypy-synapse.storage.databases.main.client_ips]
  disallow_untyped_defs = True

+ [mypy-synapse.storage.databases.main.directory]
+ disallow_untyped_defs = True
+
+ [mypy-synapse.storage.databases.main.room_batch]
+ disallow_untyped_defs = True
+
+ [mypy-synapse.storage.databases.main.profile]
+ disallow_untyped_defs = True
+
+ [mypy-synapse.storage.databases.main.state_deltas]
+ disallow_untyped_defs = True
+
+ [mypy-synapse.storage.databases.main.user_erasure_store]
+ disallow_untyped_defs = True
+
  [mypy-synapse.storage.util.*]
  disallow_untyped_defs = True

  [mypy-synapse.streams.*]
  disallow_untyped_defs = True

- [mypy-synapse.util.batching_queue]
+ [mypy-synapse.util.*]
  disallow_untyped_defs = True

- [mypy-synapse.util.caches.cached_call]
- disallow_untyped_defs = True
+ [mypy-synapse.util.caches.treecache]
+ disallow_untyped_defs = False

- [mypy-synapse.util.caches.dictionary_cache]
- disallow_untyped_defs = True
- [mypy-synapse.util.caches.lrucache]
- disallow_untyped_defs = True
- [mypy-synapse.util.caches.response_cache]
- disallow_untyped_defs = True
- [mypy-synapse.util.caches.stream_change_cache]
- disallow_untyped_defs = True
- [mypy-synapse.util.caches.ttl_cache]
- disallow_untyped_defs = True
- [mypy-synapse.util.daemonize]
- disallow_untyped_defs = True
- [mypy-synapse.util.file_consumer]
- disallow_untyped_defs = True
- [mypy-synapse.util.frozenutils]
- disallow_untyped_defs = True
- [mypy-synapse.util.hash]
- disallow_untyped_defs = True
- [mypy-synapse.util.httpresourcetree]
- disallow_untyped_defs = True
- [mypy-synapse.util.iterutils]
- disallow_untyped_defs = True
- [mypy-synapse.util.linked_list]
- disallow_untyped_defs = True
- [mypy-synapse.util.logcontext]
- disallow_untyped_defs = True
- [mypy-synapse.util.logformatter]
- disallow_untyped_defs = True
- [mypy-synapse.util.macaroons]
- disallow_untyped_defs = True
- [mypy-synapse.util.manhole]
- disallow_untyped_defs = True
- [mypy-synapse.util.module_loader]
- disallow_untyped_defs = True
- [mypy-synapse.util.msisdn]
- disallow_untyped_defs = True
- [mypy-synapse.util.patch_inline_callbacks]
- disallow_untyped_defs = True
- [mypy-synapse.util.ratelimitutils]
- disallow_untyped_defs = True
- [mypy-synapse.util.retryutils]
- disallow_untyped_defs = True
- [mypy-synapse.util.rlimit]
- disallow_untyped_defs = True
- [mypy-synapse.util.stringutils]
- disallow_untyped_defs = True
- [mypy-synapse.util.templates]
- disallow_untyped_defs = True
- [mypy-synapse.util.threepids]
- disallow_untyped_defs = True
- [mypy-synapse.util.wheel_timer]
- disallow_untyped_defs = True
- [mypy-synapse.util.versionstring]
- disallow_untyped_defs = True

  [mypy-tests.handlers.test_user_directory]
  disallow_untyped_defs = True

+ [mypy-tests.storage.test_profile]
+ disallow_untyped_defs = True
+
  [mypy-tests.storage.test_user_directory]
  disallow_untyped_defs = True

+ [mypy-tests.rest.client.test_directory]
+ disallow_untyped_defs = True
+
  ;; Dependencies without annotations
  ;; Before ignoring a module, check to see if type stubs are available.
  ;; The `typeshed` project maintains stubs here:

@ -272,6 +279,9 @@ ignore_missing_imports = True
  [mypy-opentracing]
  ignore_missing_imports = True

+ [mypy-parameterized.*]
+ ignore_missing_imports = True
+
  [mypy-phonenumbers.*]
  ignore_missing_imports = True

@ -24,7 +24,7 @@
  set -e

  # Change to the repository root
- cd "$(dirname "$0")/.."
+ cd "$(dirname $0)/.."

  # Check for a user-specified Complement checkout
  if [[ -z "$COMPLEMENT_DIR" ]]; then

@ -61,8 +61,8 @@ cd "$COMPLEMENT_DIR"
  EXTRA_COMPLEMENT_ARGS=""
  if [[ -n "$1" ]]; then
      # A test name regex has been set, supply it to Complement
-     EXTRA_COMPLEMENT_ARGS=(-run "$1")
+     EXTRA_COMPLEMENT_ARGS+="-run $1 "
  fi

  # Run the tests!
- go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403,msc2716 -count=1 "${EXTRA_COMPLEMENT_ARGS[@]}" ./tests/...
+ go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...

setup.py (6 changed lines)

@ -17,6 +17,7 @@
  # limitations under the License.
  import glob
  import os
+ from typing import Any, Dict

  from setuptools import Command, find_packages, setup

@ -49,8 +50,6 @@ here = os.path.abspath(os.path.dirname(__file__))
  # [1]: http://tox.readthedocs.io/en/2.5.0/example/basic.html#integration-with-setup-py-test-command
  # [2]: https://pypi.python.org/pypi/setuptools_trial
  class TestCommand(Command):
-     user_options = []
-
      def initialize_options(self):
          pass

@ -75,7 +74,7 @@ def read_file(path_segments):

  def exec_file(path_segments):
      """Execute a single python file to get the variables defined in it"""
-     result = {}
+     result: Dict[str, Any] = {}
      code = read_file(path_segments)
      exec(code, result)
      return result

@ -111,6 +110,7 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
      "types-Pillow>=8.3.4",
      "types-pyOpenSSL>=20.0.7",
      "types-PyYAML>=5.4.10",
+     "types-requests>=2.26.0",
      "types-setuptools>=57.4.0",
  ]
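The `result: Dict[str, Any]` annotation in `exec_file` is what keeps mypy happy about a dict that `exec()` fills with arbitrary values. A simplified standalone variant of the same pattern (not the project's exact helper, which takes path segments rather than a single path; the example path in the comment is hypothetical):

    from typing import Any, Dict

    def exec_file(path: str) -> Dict[str, Any]:
        # Execute a python file and return the names it defines. exec() populates
        # the dict with values mypy cannot narrow, so Dict[str, Any] is the honest
        # type for it.
        result: Dict[str, Any] = {}
        with open(path) as f:
            exec(f.read(), result)
        return result

    # e.g. read a package's __version__ without importing it (hypothetical path):
    # version = exec_file("synapse/__init__.py")["__version__"]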
@ -1,5 +1,6 @@
  # Copyright 2015, 2016 OpenMarket Ltd
  # Copyright 2018 New Vector
+ # Copyright 2021 The Matrix.org Foundation C.I.C.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.

@ -19,22 +20,23 @@ import hashlib
  import hmac
  import logging
  import sys
+ from typing import Callable, Optional

  import requests as _requests
  import yaml


  def request_registration(
-     user,
-     password,
-     server_location,
-     shared_secret,
-     admin=False,
-     user_type=None,
+     user: str,
+     password: str,
+     server_location: str,
+     shared_secret: str,
+     admin: bool = False,
+     user_type: Optional[str] = None,
      requests=_requests,
-     _print=print,
-     exit=sys.exit,
- ):
+     _print: Callable[[str], None] = print,
+     exit: Callable[[int], None] = sys.exit,
+ ) -> None:

      url = "%s/_synapse/admin/v1/register" % (server_location.rstrip("/"),)

@ -65,13 +67,13 @@ def request_registration(
          mac.update(b"\x00")
          mac.update(user_type.encode("utf8"))

-     mac = mac.hexdigest()
+     hex_mac = mac.hexdigest()

      data = {
          "nonce": nonce,
          "username": user,
          "password": password,
-         "mac": mac,
+         "mac": hex_mac,
          "admin": admin,
          "user_type": user_type,
      }

@ -91,10 +93,17 @@ def request_registration(
      _print("Success!")


- def register_new_user(user, password, server_location, shared_secret, admin, user_type):
+ def register_new_user(
+     user: str,
+     password: str,
+     server_location: str,
+     shared_secret: str,
+     admin: Optional[bool],
+     user_type: Optional[str],
+ ) -> None:
      if not user:
          try:
-             default_user = getpass.getuser()
+             default_user: Optional[str] = getpass.getuser()
          except Exception:
              default_user = None

@ -123,8 +132,8 @@ def register_new_user(user, password, server_location, shared_secret, admin, use
          sys.exit(1)

      if admin is None:
-         admin = input("Make admin [no]: ")
-         if admin in ("y", "yes", "true"):
+         admin_inp = input("Make admin [no]: ")
+         if admin_inp in ("y", "yes", "true"):
              admin = True
          else:
              admin = False

@ -134,7 +143,7 @@ def register_new_user(user, password, server_location, shared_secret, admin, use
      )


- def main():
+ def main() -> None:

      logging.captureWarnings(True)
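For context on what the renamed `hex_mac` holds: the script computes a shared-secret registration MAC, an HMAC-SHA1 keyed with the homeserver's registration shared secret over NUL-separated fields, whose hex digest is sent as the "mac" field. A hedged, self-contained sketch of that construction (field order as documented for Synapse's admin registration API; all values below are invented):

    import hashlib
    import hmac

    def registration_mac(shared_secret: str, nonce: str, user: str,
                         password: str, admin: bool) -> str:
        # Same shape as the script above: NUL-separated fields fed into an
        # HMAC-SHA1 keyed with the registration shared secret.
        mac = hmac.new(shared_secret.encode("utf8"), digestmod=hashlib.sha1)
        mac.update(nonce.encode("utf8"))
        mac.update(b"\x00")
        mac.update(user.encode("utf8"))
        mac.update(b"\x00")
        mac.update(password.encode("utf8"))
        mac.update(b"\x00")
        mac.update(b"admin" if admin else b"notadmin")
        return mac.hexdigest()

    # illustrative values only
    print(registration_mac("secret", "abcdef", "alice", "wonderland", False))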
@ -92,7 +92,7 @@ def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
      return user_infos


- def main():
+ def main() -> None:
      parser = argparse.ArgumentParser()
      parser.add_argument(
          "-c",

@ -142,7 +142,8 @@ def main():
      engine = create_engine(database_config.config)

      with make_conn(database_config, engine, "review_recent_signups") as db_conn:
-         user_infos = get_recent_users(db_conn.cursor(), since_ms)
+         # This generates a type of Cursor, not LoggingTransaction.
+         user_infos = get_recent_users(db_conn.cursor(), since_ms)  # type: ignore[arg-type]

      for user_info in user_infos:
          if exclude_users_with_email and user_info.emails:

@ -1,7 +1,7 @@
  # Copyright 2015, 2016 OpenMarket Ltd
  # Copyright 2017 Vector Creations Ltd
  # Copyright 2018-2019 New Vector Ltd
- # Copyright 2019 The Matrix.org Foundation C.I.C.
+ # Copyright 2019-2021 The Matrix.org Foundation C.I.C.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.

@ -86,6 +86,9 @@ ROOM_EVENT_FILTER_SCHEMA = {
      # cf https://github.com/matrix-org/matrix-doc/pull/2326
      "org.matrix.labels": {"type": "array", "items": {"type": "string"}},
      "org.matrix.not_labels": {"type": "array", "items": {"type": "string"}},
+     # MSC3440, filtering by event relations.
+     "io.element.relation_senders": {"type": "array", "items": {"type": "string"}},
+     "io.element.relation_types": {"type": "array", "items": {"type": "string"}},
  },
  }

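The two new schema fields let a client's room event filter restrict results by who sent a relating event and by relation type. A small illustrative filter using the unstable field names from the schema above (the user ID, label, and relation type values here are made up):

    # A room event filter that, when MSC3440 support is enabled, keeps only
    # events which have a relation from one of the listed senders.
    room_event_filter = {
        "limit": 10,
        "org.matrix.labels": ["#fun"],
        "io.element.relation_senders": ["@alice:example.org"],
        "io.element.relation_types": ["m.reference"],
    }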
@ -146,14 +149,16 @@ def matrix_user_id_validator(user_id_str: str) -> UserID:

  class Filtering:
      def __init__(self, hs: "HomeServer"):
-         super().__init__()
+         self._hs = hs
          self.store = hs.get_datastore()

+         self.DEFAULT_FILTER_COLLECTION = FilterCollection(hs, {})
+
      async def get_user_filter(
          self, user_localpart: str, filter_id: Union[int, str]
      ) -> "FilterCollection":
          result = await self.store.get_user_filter(user_localpart, filter_id)
-         return FilterCollection(result)
+         return FilterCollection(self._hs, result)

      def add_user_filter(
          self, user_localpart: str, user_filter: JsonDict

@ -191,21 +196,22 @@ FilterEvent = TypeVar("FilterEvent", EventBase, UserPresenceState, JsonDict)


  class FilterCollection:
-     def __init__(self, filter_json: JsonDict):
+     def __init__(self, hs: "HomeServer", filter_json: JsonDict):
          self._filter_json = filter_json

          room_filter_json = self._filter_json.get("room", {})

          self._room_filter = Filter(
-             {k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")}
+             hs,
+             {k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")},
          )

-         self._room_timeline_filter = Filter(room_filter_json.get("timeline", {}))
-         self._room_state_filter = Filter(room_filter_json.get("state", {}))
-         self._room_ephemeral_filter = Filter(room_filter_json.get("ephemeral", {}))
-         self._room_account_data = Filter(room_filter_json.get("account_data", {}))
-         self._presence_filter = Filter(filter_json.get("presence", {}))
-         self._account_data = Filter(filter_json.get("account_data", {}))
+         self._room_timeline_filter = Filter(hs, room_filter_json.get("timeline", {}))
+         self._room_state_filter = Filter(hs, room_filter_json.get("state", {}))
+         self._room_ephemeral_filter = Filter(hs, room_filter_json.get("ephemeral", {}))
+         self._room_account_data = Filter(hs, room_filter_json.get("account_data", {}))
+         self._presence_filter = Filter(hs, filter_json.get("presence", {}))
+         self._account_data = Filter(hs, filter_json.get("account_data", {}))

          self.include_leave = filter_json.get("room", {}).get("include_leave", False)
          self.event_fields = filter_json.get("event_fields", [])

@ -232,25 +238,37 @@ class FilterCollection:
      def include_redundant_members(self) -> bool:
          return self._room_state_filter.include_redundant_members

-     def filter_presence(
+     async def filter_presence(
          self, events: Iterable[UserPresenceState]
      ) -> List[UserPresenceState]:
-         return self._presence_filter.filter(events)
+         return await self._presence_filter.filter(events)

-     def filter_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
-         return self._account_data.filter(events)
+     async def filter_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
+         return await self._account_data.filter(events)

-     def filter_room_state(self, events: Iterable[EventBase]) -> List[EventBase]:
-         return self._room_state_filter.filter(self._room_filter.filter(events))
+     async def filter_room_state(self, events: Iterable[EventBase]) -> List[EventBase]:
+         return await self._room_state_filter.filter(
+             await self._room_filter.filter(events)
+         )

-     def filter_room_timeline(self, events: Iterable[EventBase]) -> List[EventBase]:
-         return self._room_timeline_filter.filter(self._room_filter.filter(events))
+     async def filter_room_timeline(
+         self, events: Iterable[EventBase]
+     ) -> List[EventBase]:
+         return await self._room_timeline_filter.filter(
+             await self._room_filter.filter(events)
+         )

-     def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> List[JsonDict]:
-         return self._room_ephemeral_filter.filter(self._room_filter.filter(events))
+     async def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> List[JsonDict]:
+         return await self._room_ephemeral_filter.filter(
+             await self._room_filter.filter(events)
+         )

-     def filter_room_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
-         return self._room_account_data.filter(self._room_filter.filter(events))
+     async def filter_room_account_data(
+         self, events: Iterable[JsonDict]
+     ) -> List[JsonDict]:
+         return await self._room_account_data.filter(
+             await self._room_filter.filter(events)
+         )

      def blocks_all_presence(self) -> bool:
          return (

@ -274,7 +292,9 @@ class FilterCollection:


  class Filter:
-     def __init__(self, filter_json: JsonDict):
+     def __init__(self, hs: "HomeServer", filter_json: JsonDict):
+         self._hs = hs
+         self._store = hs.get_datastore()
          self.filter_json = filter_json

          self.limit = filter_json.get("limit", 10)

@ -297,6 +317,20 @@ class Filter:
          self.labels = filter_json.get("org.matrix.labels", None)
          self.not_labels = filter_json.get("org.matrix.not_labels", [])

+         # Ideally these would be rejected at the endpoint if they were provided
+         # and not supported, but that would involve modifying the JSON schema
+         # based on the homeserver configuration.
+         if hs.config.experimental.msc3440_enabled:
+             self.relation_senders = self.filter_json.get(
+                 "io.element.relation_senders", None
+             )
+             self.relation_types = self.filter_json.get(
+                 "io.element.relation_types", None
+             )
+         else:
+             self.relation_senders = None
+             self.relation_types = None
+
      def filters_all_types(self) -> bool:
          return "*" in self.not_types

@ -306,7 +340,7 @@ class Filter:
      def filters_all_rooms(self) -> bool:
          return "*" in self.not_rooms

-     def check(self, event: FilterEvent) -> bool:
+     def _check(self, event: FilterEvent) -> bool:
          """Checks whether the filter matches the given event.

          Args:

@ -420,8 +454,30 @@ class Filter:

          return room_ids

-     def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
-         return list(filter(self.check, events))
+     async def _check_event_relations(
+         self, events: Iterable[FilterEvent]
+     ) -> List[FilterEvent]:
+         # The event IDs to check, mypy doesn't understand the ifinstance check.
+         event_ids = [event.event_id for event in events if isinstance(event, EventBase)]  # type: ignore[attr-defined]
+         event_ids_to_keep = set(
+             await self._store.events_have_relations(
+                 event_ids, self.relation_senders, self.relation_types
+             )
+         )
+
+         return [
+             event
+             for event in events
+             if not isinstance(event, EventBase) or event.event_id in event_ids_to_keep
+         ]
+
+     async def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
+         result = [event for event in events if self._check(event)]
+
+         if self.relation_senders or self.relation_types:
+             return await self._check_event_relations(result)
+
+         return result

      def with_room_ids(self, room_ids: Iterable[str]) -> "Filter":
          """Returns a new filter with the given room IDs appended.

@ -433,7 +489,7 @@ class Filter:
          filter: A new filter including the given rooms and the old
              filter's rooms.
          """
-         newFilter = Filter(self.filter_json)
+         newFilter = Filter(self._hs, self.filter_json)
          newFilter.rooms += room_ids
          return newFilter

@ -444,6 +500,3 @@ def _matches_wildcard(actual_value: Optional[str], filter_value: str) -> bool:
          return actual_value.startswith(type_prefix)
      else:
          return actual_value == filter_value
-
-
- DEFAULT_FILTER_COLLECTION = FilterCollection({})

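Because relation-based filtering may need a database lookup (`events_have_relations`), `Filter.filter` and the `FilterCollection` methods above become coroutines, so every caller has to switch from a plain call to an awaited one. A hedged, self-contained sketch of that caller-side change (the class and events below are stand-ins, not Synapse's real call sites):

    import asyncio

    class FakeFilter:
        # Stand-in for the async Filter above: filtering may now await the DB.
        async def filter(self, events):
            return [e for e in events if e.get("type") != "m.room.redaction"]

    async def main() -> None:
        # Callers that previously wrote `f.filter(events)` now have to await it.
        filtered = await FakeFilter().filter(
            [{"type": "m.room.message"}, {"type": "m.room.redaction"}]
        )
        print(filtered)

    asyncio.run(main())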
@ -30,7 +30,8 @@ FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
  STATIC_PREFIX = "/_matrix/static"
  WEB_CLIENT_PREFIX = "/_matrix/client"
  SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
- MEDIA_PREFIX = "/_matrix/media/r0"
+ MEDIA_R0_PREFIX = "/_matrix/media/r0"
+ MEDIA_V3_PREFIX = "/_matrix/media/v3"
  LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"

@ -13,6 +13,7 @@
  # limitations under the License.
  import logging
  import sys
+ from typing import Container

  from synapse import python_dependencies  # noqa: E402

@ -27,7 +28,9 @@ except python_dependencies.DependencyException as e:
      sys.exit(1)


- def check_bind_error(e, address, bind_addresses):
+ def check_bind_error(
+     e: Exception, address: str, bind_addresses: Container[str]
+ ) -> None:
      """
      This method checks an exception occurred while binding on 0.0.0.0.
      If :: is specified in the bind addresses a warning is shown.

@ -38,9 +41,9 @@ def check_bind_error(e, address, bind_addresses):
      When binding on 0.0.0.0 after :: this can safely be ignored.

      Args:
-         e (Exception): Exception that was caught.
-         address (str): Address on which binding was attempted.
-         bind_addresses (list): Addresses on which the service listens.
+         e: Exception that was caught.
+         address: Address on which binding was attempted.
+         bind_addresses: Addresses on which the service listens.
      """
      if address == "0.0.0.0" and "::" in bind_addresses:
          logger.warning(

@ -22,13 +22,27 @@ import socket
  import sys
  import traceback
  import warnings
- from typing import TYPE_CHECKING, Awaitable, Callable, Iterable
+ from typing import (
+     TYPE_CHECKING,
+     Any,
+     Awaitable,
+     Callable,
+     Collection,
+     Dict,
+     Iterable,
+     List,
+     NoReturn,
+     Tuple,
+     cast,
+ )

  from cryptography.utils import CryptographyDeprecationWarning
- from typing_extensions import NoReturn

  import twisted
- from twisted.internet import defer, error, reactor
+ from twisted.internet import defer, error, reactor as _reactor
+ from twisted.internet.interfaces import IOpenSSLContextFactory, IReactorSSL, IReactorTCP
+ from twisted.internet.protocol import ServerFactory
+ from twisted.internet.tcp import Port
  from twisted.logger import LoggingFile, LogLevel
  from twisted.protocols.tls import TLSMemoryBIOFactory
  from twisted.python.threadpool import ThreadPool

@ -48,6 +62,7 @@ from synapse.logging.context import PreserveLoggingContext
  from synapse.metrics import register_threadpool
  from synapse.metrics.background_process_metrics import wrap_as_background_process
  from synapse.metrics.jemalloc import setup_jemalloc_stats
+ from synapse.types import ISynapseReactor
  from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
  from synapse.util.daemonize import daemonize_process
  from synapse.util.gai_resolver import GAIResolver

@ -57,33 +72,44 @@ from synapse.util.versionstring import get_version_string
  if TYPE_CHECKING:
      from synapse.server import HomeServer

+ # Twisted injects the global reactor to make it easier to import, this confuses
+ # mypy which thinks it is a module. Tell it that it a more proper type.
+ reactor = cast(ISynapseReactor, _reactor)


  logger = logging.getLogger(__name__)

  # list of tuples of function, args list, kwargs dict
- _sighup_callbacks = []
+ _sighup_callbacks: List[
+     Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]]
+ ] = []


- def register_sighup(func, *args, **kwargs):
+ def register_sighup(func: Callable[..., None], *args: Any, **kwargs: Any) -> None:
      """
      Register a function to be called when a SIGHUP occurs.

      Args:
-         func (function): Function to be called when sent a SIGHUP signal.
+         func: Function to be called when sent a SIGHUP signal.
          *args, **kwargs: args and kwargs to be passed to the target function.
      """
      _sighup_callbacks.append((func, args, kwargs))

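The newly typed `_sighup_callbacks` list is a registry of (callback, args, kwargs) tuples that the SIGHUP handler replays. A standalone sketch of the same pattern, outside Synapse, with an invented callback just to show the shape the annotation describes:

    from typing import Any, Callable, Dict, List, Tuple

    # Typed registry of (callback, args, kwargs) tuples, mirroring the diff above.
    _callbacks: List[Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]]] = []

    def register(func: Callable[..., None], *args: Any, **kwargs: Any) -> None:
        _callbacks.append((func, args, kwargs))

    def fire_all() -> None:
        # What a SIGHUP handler would do: call each registered callback in turn.
        for func, args, kwargs in _callbacks:
            func(*args, **kwargs)

    register(print, "reloading config")  # illustrative callback
    fire_all()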
- def start_worker_reactor(appname, config, run_command=reactor.run):
+ def start_worker_reactor(
+     appname: str,
+     config: HomeServerConfig,
+     run_command: Callable[[], None] = reactor.run,
+ ) -> None:
      """Run the reactor in the main process

      Daemonizes if necessary, and then configures some resources, before starting
      the reactor. Pulls configuration from the 'worker' settings in 'config'.

      Args:
-         appname (str): application name which will be sent to syslog
-         config (synapse.config.Config): config object
-         run_command (Callable[]): callable that actually runs the reactor
+         appname: application name which will be sent to syslog
+         config: config object
+         run_command: callable that actually runs the reactor
      """

      logger = logging.getLogger(config.worker.worker_app)

@ -101,32 +127,32 @@ def start_worker_reactor(appname, config, run_command=reactor.run):


  def start_reactor(
-     appname,
-     soft_file_limit,
-     gc_thresholds,
-     pid_file,
-     daemonize,
-     print_pidfile,
-     logger,
-     run_command=reactor.run,
- ):
+     appname: str,
+     soft_file_limit: int,
+     gc_thresholds: Tuple[int, int, int],
+     pid_file: str,
+     daemonize: bool,
+     print_pidfile: bool,
+     logger: logging.Logger,
+     run_command: Callable[[], None] = reactor.run,
+ ) -> None:
      """Run the reactor in the main process

      Daemonizes if necessary, and then configures some resources, before starting
      the reactor

      Args:
-         appname (str): application name which will be sent to syslog
-         soft_file_limit (int):
+         appname: application name which will be sent to syslog
+         soft_file_limit:
          gc_thresholds:
-         pid_file (str): name of pid file to write to if daemonize is True
-         daemonize (bool): true to run the reactor in a background process
-         print_pidfile (bool): whether to print the pid file, if daemonize is True
-         logger (logging.Logger): logger instance to pass to Daemonize
-         run_command (Callable[]): callable that actually runs the reactor
+         pid_file: name of pid file to write to if daemonize is True
+         daemonize: true to run the reactor in a background process
+         print_pidfile: whether to print the pid file, if daemonize is True
+         logger: logger instance to pass to Daemonize
+         run_command: callable that actually runs the reactor
      """

-     def run():
+     def run() -> None:
          logger.info("Running")
          setup_jemalloc_stats()
          change_resource_limit(soft_file_limit)

@ -185,7 +211,7 @@ def redirect_stdio_to_logs() -> None:
      print("Redirected stdout/stderr to logs")


- def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
+ def register_start(cb: Callable[..., Awaitable], *args: Any, **kwargs: Any) -> None:
      """Register a callback with the reactor, to be called once it is running

      This can be used to initialise parts of the system which require an asynchronous

@ -195,7 +221,7 @@ def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
      will exit.
      """

-     async def wrapper():
+     async def wrapper() -> None:
          try:
              await cb(*args, **kwargs)
          except Exception:

@ -224,7 +250,7 @@ def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
      reactor.callWhenRunning(lambda: defer.ensureDeferred(wrapper()))


- def listen_metrics(bind_addresses, port):
+ def listen_metrics(bind_addresses: Iterable[str], port: int) -> None:
      """
      Start Prometheus metrics server.
      """

@ -236,11 +262,11 @@ def listen_metrics(bind_addresses, port):


  def listen_manhole(
-     bind_addresses: Iterable[str],
+     bind_addresses: Collection[str],
      port: int,
      manhole_settings: ManholeConfig,
      manhole_globals: dict,
- ):
+ ) -> None:
      # twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
      # warning. It's fixed by https://github.com/twisted/twisted/pull/1522), so
      # suppress the warning for now.

@ -259,12 +285,18 @@ def listen_manhole(
      )


- def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
+ def listen_tcp(
+     bind_addresses: Collection[str],
+     port: int,
+     factory: ServerFactory,
+     reactor: IReactorTCP = reactor,
+     backlog: int = 50,
+ ) -> List[Port]:
      """
      Create a TCP socket for a port and several addresses

      Returns:
-         list[twisted.internet.tcp.Port]: listening for TCP connections
+         list of twisted.internet.tcp.Port listening for TCP connections
      """
      r = []
      for address in bind_addresses:

@ -273,12 +305,19 @@ def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
          except error.CannotListenError as e:
              check_bind_error(e, address, bind_addresses)

-     return r
+     # IReactorTCP returns an object implementing IListeningPort from listenTCP,
+     # but we know it will be a Port instance.
+     return r  # type: ignore[return-value]


  def listen_ssl(
-     bind_addresses, port, factory, context_factory, reactor=reactor, backlog=50
- ):
+     bind_addresses: Collection[str],
+     port: int,
+     factory: ServerFactory,
+     context_factory: IOpenSSLContextFactory,
+     reactor: IReactorSSL = reactor,
+     backlog: int = 50,
+ ) -> List[Port]:
      """
      Create an TLS-over-TCP socket for a port and several addresses

@ -294,10 +333,13 @@ def listen_ssl(
          except error.CannotListenError as e:
              check_bind_error(e, address, bind_addresses)

-     return r
+     # IReactorSSL incorrectly declares that an int is returned from listenSSL,
+     # it actually returns an object implementing IListeningPort, but we know it
+     # will be a Port instance.
+     return r  # type: ignore[return-value]


- def refresh_certificate(hs: "HomeServer"):
+ def refresh_certificate(hs: "HomeServer") -> None:
      """
      Refresh the TLS certificates that Synapse is using by re-reading them from
      disk and updating the TLS context factories to use them.

@ -329,7 +371,7 @@ def refresh_certificate(hs: "HomeServer"):
      logger.info("Context factories updated.")


- async def start(hs: "HomeServer"):
+ async def start(hs: "HomeServer") -> None:
      """
      Start a Synapse server or worker.

@ -360,7 +402,7 @@ async def start(hs: "HomeServer"):
      if hasattr(signal, "SIGHUP"):

          @wrap_as_background_process("sighup")
-         def handle_sighup(*args, **kwargs):
+         async def handle_sighup(*args: Any, **kwargs: Any) -> None:
              # Tell systemd our state, if we're using it. This will silently fail if
              # we're not using systemd.
              sdnotify(b"RELOADING=1")

@ -373,7 +415,7 @@ async def start(hs: "HomeServer"):
          # We defer running the sighup handlers until next reactor tick. This
          # is so that we're in a sane state, e.g. flushing the logs may fail
          # if the sighup happens in the middle of writing a log entry.
-         def run_sighup(*args, **kwargs):
+         def run_sighup(*args: Any, **kwargs: Any) -> None:
              # `callFromThread` should be "signal safe" as well as thread
              # safe.
              reactor.callFromThread(handle_sighup, *args, **kwargs)

@ -436,12 +478,8 @@ async def start(hs: "HomeServer"):
      atexit.register(gc.freeze)


- def setup_sentry(hs: "HomeServer"):
-     """Enable sentry integration, if enabled in configuration
-
-     Args:
-         hs
-     """
+ def setup_sentry(hs: "HomeServer") -> None:
+     """Enable sentry integration, if enabled in configuration"""

      if not hs.config.metrics.sentry_enabled:
          return

@ -466,7 +504,7 @@ def setup_sentry(hs: "HomeServer"):
      scope.set_tag("worker_name", name)


- def setup_sdnotify(hs: "HomeServer"):
+ def setup_sdnotify(hs: "HomeServer") -> None:
      """Adds process state hooks to tell systemd what we are up to."""

      # Tell systemd our state, if we're using it. This will silently fail if

@ -481,7 +519,7 @@ def setup_sdnotify(hs: "HomeServer"):
      sdnotify_sockaddr = os.getenv("NOTIFY_SOCKET")


- def sdnotify(state):
+ def sdnotify(state: bytes) -> None:
      """
      Send a notification to systemd, if the NOTIFY_SOCKET env var is set.

@ -490,7 +528,7 @@ def sdnotify(state):
      package which many OSes don't include as a matter of principle.

      Args:
-         state (bytes): notification to send
+         state: notification to send
      """
      if not isinstance(state, bytes):
          raise TypeError("sdnotify should be called with a bytes")
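The `sdnotify(state: bytes)` annotation matches how the systemd notification protocol works: the process writes a short datagram such as b"READY=1" or b"RELOADING=1" to the Unix socket named in NOTIFY_SOCKET. A hedged standalone sketch of that mechanism (this is the general protocol, not Synapse's exact implementation, and it assumes a platform with AF_UNIX sockets):

    import os
    import socket

    def sdnotify(state: bytes) -> None:
        # Write a notification datagram to the socket systemd supplied, if any.
        # A leading "@" in NOTIFY_SOCKET denotes an abstract socket (NUL prefix).
        addr = os.getenv("NOTIFY_SOCKET")
        if not addr:
            return
        if addr.startswith("@"):
            addr = "\0" + addr[1:]
        with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
            sock.connect(addr)
            sock.sendall(state)

    sdnotify(b"READY=1")  # no-op unless systemd provided NOTIFY_SOCKET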
@@ -17,6 +17,7 @@ import logging
 import os
 import sys
 import tempfile
+from typing import List, Optional

 from twisted.internet import defer, task

@@ -25,6 +26,7 @@ from synapse.app import _base
 from synapse.config._base import ConfigError
 from synapse.config.homeserver import HomeServerConfig
 from synapse.config.logger import setup_logging
+from synapse.events import EventBase
 from synapse.handlers.admin import ExfiltrationWriter
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -40,6 +42,7 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
 from synapse.server import HomeServer
 from synapse.storage.databases.main.room import RoomWorkerStore
+from synapse.types import StateMap
 from synapse.util.logcontext import LoggingContext
 from synapse.util.versionstring import get_version_string

@@ -65,16 +68,11 @@ class AdminCmdSlavedStore(


 class AdminCmdServer(HomeServer):
-    DATASTORE_CLASS = AdminCmdSlavedStore
+    DATASTORE_CLASS = AdminCmdSlavedStore  # type: ignore


-async def export_data_command(hs: HomeServer, args):
-    """Export data for a user.
-
-    Args:
-        hs
-        args (argparse.Namespace)
-    """
+async def export_data_command(hs: HomeServer, args: argparse.Namespace) -> None:
+    """Export data for a user."""

     user_id = args.user_id
     directory = args.output_directory
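The pattern in these hunks is consistent: types that previously lived only in docstrings (`args (argparse.Namespace)`, `directory (str|None)`) become PEP 484 annotations that a type checker can enforce. A toy illustration of the same move, using a hypothetical function that is not part of this diff:

import argparse
from typing import Optional


def build_parser(description: Optional[str] = None) -> argparse.ArgumentParser:
    # The annotations replace docstring lines such as "description (str|None)"
    # and "Returns: argparse.ArgumentParser"; mypy can now check this function
    # and its callers instead of trusting prose.
    parser = argparse.ArgumentParser(description=description)
    parser.add_argument("--user-id", required=True)
    return parser


if __name__ == "__main__":
    args = build_parser("demo").parse_args(["--user-id", "@alice:example.com"])
    print(args.user_id)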
@@ -92,12 +90,12 @@ class FileExfiltrationWriter(ExfiltrationWriter):
     Note: This writes to disk on the main reactor thread.

     Args:
-        user_id (str): The user whose data is being exfiltrated.
-        directory (str|None): The directory to write the data to, if None then
-            will write to a temporary directory.
+        user_id: The user whose data is being exfiltrated.
+        directory: The directory to write the data to, if None then will write
+            to a temporary directory.
     """

-    def __init__(self, user_id, directory=None):
+    def __init__(self, user_id: str, directory: Optional[str] = None):
         self.user_id = user_id

         if directory:
@@ -111,7 +109,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         if list(os.listdir(self.base_directory)):
             raise Exception("Directory must be empty")

-    def write_events(self, room_id, events):
+    def write_events(self, room_id: str, events: List[EventBase]) -> None:
         room_directory = os.path.join(self.base_directory, "rooms", room_id)
         os.makedirs(room_directory, exist_ok=True)
         events_file = os.path.join(room_directory, "events")
@@ -120,7 +118,9 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         for event in events:
             print(json.dumps(event.get_pdu_json()), file=f)

-    def write_state(self, room_id, event_id, state):
+    def write_state(
+        self, room_id: str, event_id: str, state: StateMap[EventBase]
+    ) -> None:
         room_directory = os.path.join(self.base_directory, "rooms", room_id)
         state_directory = os.path.join(room_directory, "state")
         os.makedirs(state_directory, exist_ok=True)
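As the hunk above shows, `write_events` writes one JSON-encoded PDU per line to a per-room `events` file. A hypothetical helper for reading such an export back in (the function is an illustration and not part of Synapse):

import json
import os
from typing import Any, Dict, List


def read_exported_events(base_directory: str, room_id: str) -> List[Dict[str, Any]]:
    """Load the JSON-lines "events" file the exporter wrote for one room."""
    events_file = os.path.join(base_directory, "rooms", room_id, "events")
    with open(events_file) as f:
        return [json.loads(line) for line in f if line.strip()]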
@@ -131,7 +131,9 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         for event in state.values():
             print(json.dumps(event.get_pdu_json()), file=f)

-    def write_invite(self, room_id, event, state):
+    def write_invite(
+        self, room_id: str, event: EventBase, state: StateMap[EventBase]
+    ) -> None:
         self.write_events(room_id, [event])

         # We write the invite state somewhere else as they aren't full events
@@ -145,7 +147,9 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         for event in state.values():
             print(json.dumps(event), file=f)

-    def write_knock(self, room_id, event, state):
+    def write_knock(
+        self, room_id: str, event: EventBase, state: StateMap[EventBase]
+    ) -> None:
         self.write_events(room_id, [event])

         # We write the knock state somewhere else as they aren't full events
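The `StateMap[EventBase]` annotations used for the `state` parameters come from `synapse.types`: a `StateMap` is, roughly, a mapping keyed by `(event type, state key)` pairs. A simplified sketch of the alias (the real definition lives in Synapse and may differ in detail):

from typing import Mapping, Tuple, TypeVar

T = TypeVar("T")

# (event_type, state_key) -> value; in write_state/write_invite/write_knock the
# values are EventBase instances.
StateKey = Tuple[str, str]
StateMap = Mapping[StateKey, T]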
@@ -159,11 +163,11 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         for event in state.values():
             print(json.dumps(event), file=f)

-    def finished(self):
+    def finished(self) -> str:
         return self.base_directory


-def start(config_options):
+def start(config_options: List[str]) -> None:
     parser = argparse.ArgumentParser(description="Synapse Admin Command")
     HomeServerConfig.add_arguments_to_parser(parser)

@@ -231,7 +235,7 @@ def start(config_options):
     # We also make sure that `_base.start` gets run before we actually run the
     # command.

-    async def run():
+    async def run() -> None:
         with LoggingContext("command"):
             await _base.start(ss)
             await args.func(ss, args)
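The `run` coroutine above is ultimately driven by Twisted rather than asyncio, which is why `defer` and `task` are imported near the top of the file. A minimal, standalone sketch of that pattern, with the Synapse-specific `start`/`_base` plumbing omitted:

from twisted.internet import defer, task


async def run() -> None:
    print("admin command running")


def main() -> None:
    # task.react starts the reactor, waits for the Deferred returned by the
    # callback, and stops the reactor once it fires; defer.ensureDeferred
    # adapts the coroutine into that Deferred.
    task.react(lambda reactor: defer.ensureDeferred(run()))


if __name__ == "__main__":
    main()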