Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes
commit 881855d4e3
@@ -314,8 +314,9 @@ jobs:
      # There aren't wheels for some of the older deps, so we need to install
      # their build dependencies
      - run: |
          sudo apt update
          sudo apt-get -qq install build-essential libffi-dev python-dev \
            libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev

      - uses: actions/setup-python@v4
        with:
CHANGES.md
@@ -1,3 +1,14 @@
Synapse 1.84.0 (2023-05-23)
===========================

The `worker_replication_*` configuration settings have been deprecated in favour of configuring the main process consistently with other instances in the `instance_map`. The deprecated settings will be removed in Synapse v1.88.0, but changing your configuration in advance is recommended. See the [upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.84/docs/upgrade.md#upgrading-to-v1840) for more information.

Bugfixes
--------

- Fix a bug introduced in Synapse 1.84.0rc1 where errors during startup were not reported correctly on Python < 3.10. ([\#15599](https://github.com/matrix-org/synapse/issues/15599))


Synapse 1.84.0rc1 (2023-05-16)
==============================

@@ -23,6 +34,12 @@ Bugfixes
- Require at least poetry-core v1.1.0. ([\#15566](https://github.com/matrix-org/synapse/issues/15566), [\#15571](https://github.com/matrix-org/synapse/issues/15571))


Deprecations and Removals
-------------------------

- Remove need for `worker_replication_*` based settings in worker configuration yaml by placing this data directly on the `instance_map` instead. ([\#15491](https://github.com/matrix-org/synapse/issues/15491))


Updates to the Docker image
---------------------------

@@ -42,7 +59,6 @@ Internal Changes

- Use oEmbed to generate URL previews for YouTube Shorts. ([\#15025](https://github.com/matrix-org/synapse/issues/15025))
- Create new `Client` for use with HTTP Replication between workers. Contributed by Jason Little. ([\#15470](https://github.com/matrix-org/synapse/issues/15470))
- Remove need for `worker_replication_*` based settings in worker configuration yaml by placing this data directly on the `instance_map` instead. ([\#15491](https://github.com/matrix-org/synapse/issues/15491))
- Bump pyicu from 2.10.2 to 2.11. ([\#15509](https://github.com/matrix-org/synapse/issues/15509))
- Remove references to supporting per-user flag for [MSC2654](https://github.com/matrix-org/matrix-spec-proposals/pull/2654). ([\#15522](https://github.com/matrix-org/synapse/issues/15522))
- Don't use a trusted key server when running the demo scripts. ([\#15527](https://github.com/matrix-org/synapse/issues/15527))
@@ -0,0 +1 @@
Remove the old version of the R30 (30-day retained users) phone-home metric.
@@ -0,0 +1 @@
Fix a long-standing bug where setting the read marker could fail when using message retention. Contributed by Nick @ Beeper (@fizzadar).
@@ -0,0 +1 @@
Add not null constraint to column full_user_id of tables profiles and user_filters.
@@ -0,0 +1 @@
Allow connecting to HTTP Replication Endpoints by using `worker_name` when constructing the request.
@@ -0,0 +1 @@
Print full error and stack-trace of any exception that occurs during startup/initialization.
@@ -0,0 +1 @@
Fix a long-standing bug where the `url_preview_url_blacklist` configuration setting was not applied to oEmbed or image URLs found while previewing a URL.
@@ -0,0 +1 @@
Run mypy type checking with the minimum supported Python version to catch new usage that isn't backwards-compatible.
@@ -0,0 +1 @@
Fix subscriptable type usage in Python <3.9.
@@ -0,0 +1 @@
Update internal terminology.
@@ -0,0 +1 @@
Fix a long-standing bug where filters with multiple backslashes were rejected.
@@ -0,0 +1 @@
Instrument `state` and `state_group` storage-related operations to better picture what's happening when tracing.
@@ -0,0 +1 @@
Add a new admin API to create a new device for a user.
@@ -0,0 +1 @@
Warn users that at least 3.75GB of space is needed for the nix Synapse development environment.
@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.82.0 where the error message displayed when validation of the `app_service_config_files` config option fails would be incorrectly formatted.
@@ -0,0 +1 @@
Re-type config paths in `ConfigError`s to be `StrSequence`s instead of `Iterable[str]`s.
@@ -0,0 +1 @@
Update internal terminology.
@@ -0,0 +1 @@
Update Mutual Rooms (MSC2666) implementation to match new proposal text.
@@ -0,0 +1 @@
Fix a long-standing bug where deactivated users were still able to login using the custom `org.matrix.login.jwt` login type (if enabled).
@@ -0,0 +1 @@
Remove the unstable identifiers from faster joins ([MSC3706](https://github.com/matrix-org/matrix-spec-proposals/pull/3706)).
@@ -0,0 +1 @@
Fix the olddeps CI.
@@ -0,0 +1 @@
Fix two memory leaks in `trial` test runs.
@@ -0,0 +1 @@
Trace how many new events from the backfill response we need to process.
@@ -0,0 +1 @@
Fix a long-standing bug where deactivated users were able to login in uncommon situations.
@@ -0,0 +1 @@
Remove duplicate timestamp from test logs (`_trial_temp/test.log`).
@@ -0,0 +1 @@
Bump types-setuptools from 67.7.0.2 to 67.8.0.0.
@@ -0,0 +1 @@
Bump types-pillow from 9.5.0.2 to 9.5.0.4.
@@ -0,0 +1 @@
Bump sphinx from 6.1.3 to 6.2.1.
@@ -0,0 +1 @@
Bump furo from 2023.3.27 to 2023.5.20.
@@ -0,0 +1 @@
Bump pygithub from 1.58.1 to 1.58.2.
@@ -0,0 +1 @@
Limit the size of the `HomeServerConfig` cache in trial test runs.
@@ -0,0 +1 @@
Instrument `state` and `state_group` storage-related operations to better picture what's happening when tracing.
@@ -0,0 +1 @@
Remove outdated comment from the generated and sample homeserver log configs.
@@ -0,0 +1 @@
Bump requests from 2.28.2 to 2.31.0.
@@ -0,0 +1 @@
Improve type hints.
@@ -0,0 +1 @@
Improve type hints.
@@ -0,0 +1 @@
Speed up rebuilding of the user directory for local users.
@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.84.0) stable; urgency=medium

  * New Synapse release 1.84.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 23 May 2023 10:57:22 +0100

matrix-synapse-py3 (1.84.0~rc1) stable; urgency=medium

  * New Synapse release 1.84.0rc1.
@@ -813,6 +813,33 @@ The following fields are returned in the JSON response body:

- `total` - Total number of user's devices.

### Create a device

Creates a new device for a specific `user_id` and `device_id`. Does nothing if the `device_id`
exists already.

The API is:

```
POST /_synapse/admin/v2/users/<user_id>/devices

{
  "device_id": "QBUAZIFURK"
}
```

An empty JSON dict is returned.

**Parameters**

The following parameters should be set in the URL:

- `user_id` - fully qualified: for example, `@user:server.com`.

The following fields are required in the JSON request body:

- `device_id` - The device ID to create.
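For illustration, the request can be made with any HTTP client; here is a minimal Python sketch (the homeserver URL, admin access token, user ID and device ID below are placeholders, not values taken from this page):

```python
import requests

BASE_URL = "https://synapse.example.com"   # hypothetical homeserver
ADMIN_TOKEN = "<admin access token>"       # access token of a server admin
USER_ID = "@user:server.com"

# Create the device "QBUAZIFURK" for USER_ID (a no-op if it already exists).
response = requests.post(
    f"{BASE_URL}/_synapse/admin/v2/users/{USER_ID}/devices",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={"device_id": "QBUAZIFURK"},
)
response.raise_for_status()
print(response.json())  # an empty JSON dict on success
```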
### Delete multiple devices
Deletes the given devices for a specific `user_id`, and invalidates
any access token associated with them.
@@ -46,6 +46,9 @@ instead.

If the authentication is unsuccessful, the module must return `None`.

Note that the user is not automatically registered; the `register_user(..)` method of
the [module API](writing_a_module.html) can be used to lazily create users.

If multiple modules register an auth checker for the same login type but with different
fields, Synapse will refuse to start.
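As a rough sketch of that lazy-registration pattern (the `com.example.password` login type, field name and credential check below are invented for illustration; the `ModuleApi` method names are the standard ones, but this is not code from the docs page itself):

```python
from typing import Any, Optional, Tuple

import synapse.module_api


class MyAuthProvider:
    def __init__(self, config: dict, api: synapse.module_api.ModuleApi):
        self.api = api
        # Register an auth checker for our custom login type and its single field.
        api.register_password_auth_provider_callbacks(
            auth_checkers={("com.example.password", ("secret",)): self.check_login},
        )

    async def check_login(
        self, username: str, login_type: str, login_dict: dict
    ) -> Optional[Tuple[str, Optional[Any]]]:
        if login_dict.get("secret") != "letmein":  # stand-in for a real credential check
            return None  # authentication unsuccessful

        # The user is not registered automatically, so create the account on
        # first successful login if it does not exist yet.
        user_id = self.api.get_qualified_user_id(username)
        if await self.api.check_user_exists(user_id) is None:
            localpart = user_id[1:].split(":", 1)[0]
            user_id = await self.api.register_user(localpart)
        return user_id, None
```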
@@ -30,12 +30,6 @@ minimal.

See [the TCP replication documentation](tcp_replication.md).

### The Slaved DataStore

There are read-only version of the synapse storage layer in
`synapse/replication/slave/storage` that use the response of the
replication API to invalidate their caches.

### The TCP Replication Module
Information about how the tcp replication module is structured, including how
the classes interact, can be found in
@@ -68,9 +68,7 @@ root:
    # Write logs to the `buffer` handler, which will buffer them together in memory,
    # then write them to a file.
    #
    # Replace "buffer" with "console" to log to stderr instead. (Note that you'll
    # also need to update the configuration for the `twisted` logger above, in
    # this case.)
    # Replace "buffer" with "console" to log to stderr instead.
    #
    handlers: [buffer]
@@ -92,15 +92,22 @@ process, for example:

## Deprecation of `worker_replication_*` configuration settings

When using workers,
When using workers,

* `worker_replication_host`
* `worker_replication_http_port`
* `worker_replication_http_tls`

can now be removed from individual worker YAML configuration ***if*** you add the main process to the `instance_map` in the shared YAML configuration,
using the name `main`.
should now be removed from individual worker YAML configurations and the main process should instead be added to the `instance_map`
in the shared YAML configuration, using the name `main`.

The old `worker_replication_*` settings are now considered deprecated and are expected to be removed in Synapse v1.88.0.

### Example change

#### Before:
### Before:
Shared YAML
```yaml
instance_map:
@@ -109,6 +116,7 @@ instance_map:
  port: 5678
  tls: false
```

Worker YAML
```yaml
worker_app: synapse.app.generic_worker
@@ -130,7 +138,10 @@ worker_listeners:

worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
```
### After:

#### After:

Shared YAML
```yaml
instance_map:
@@ -143,6 +154,7 @@ instance_map:
  port: 5678
  tls: false
```

Worker YAML
```yaml
worker_app: synapse.app.generic_worker
@@ -165,7 +177,6 @@ Notes:
* `tls` is optional but mirrors the functionality of `worker_replication_http_tls`


# Upgrading to v1.81.0

## Application service path & authentication deprecations
@@ -42,11 +42,6 @@ The following statistics are sent to the configured reporting endpoint:

| `daily_e2ee_messages` | int | The number of (state) events with the type `m.room.encrypted` seen in the last 24 hours. |
| `daily_sent_messages` | int | The number of (state) events sent by a local user with the type `m.room.message` seen in the last 24 hours. |
| `daily_sent_e2ee_messages` | int | The number of (state) events sent by a local user with the type `m.room.encrypted` seen in the last 24 hours. |
| `r30_users_all` | int | The number of 30 day retained users, defined as users who have created their accounts more than 30 days ago, where they were last seen at most 30 days ago and where those two timestamps are over 30 days apart. Includes clients that do not fit into the below r30 client types. |
| `r30_users_android` | int | The number of 30 day retained users, as defined above. Filtered only to clients with "Android" in the user agent string. |
| `r30_users_ios` | int | The number of 30 day retained users, as defined above. Filtered only to clients with "iOS" in the user agent string. |
| `r30_users_electron` | int | The number of 30 day retained users, as defined above. Filtered only to clients with "Electron" in the user agent string. |
| `r30_users_web` | int | The number of 30 day retained users, as defined above. Filtered only to clients with "Mozilla" or "Gecko" in the user agent string. |
| `r30v2_users_all` | int | The number of 30 day retained users, with a revised algorithm. Defined as users that appear more than once in the past 60 days, and have more than 30 days between the most and least recent appearances in the past 60 days. Includes clients that do not fit into the below r30 client types. |
| `r30v2_users_android` | int | The number of 30 day retained users, as defined above. Filtered only to clients with ("riot" or "element") and "android" (case-insensitive) in the user agent string. |
| `r30v2_users_ios` | int | The number of 30 day retained users, as defined above. Filtered only to clients with ("riot" or "element") and "ios" (case-insensitive) in the user agent string. |
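As a rough restatement of the `r30v2_users_all` definition above (illustrative Python only; the real metric is computed in SQL inside Synapse, and this function is not part of the codebase):

```python
from datetime import datetime, timedelta
from typing import Dict, List


def r30v2_users_all(appearances: Dict[str, List[datetime]], now: datetime) -> int:
    """Count users who appear more than once in the past 60 days and whose most
    and least recent appearances in that window are more than 30 days apart."""
    window_start = now - timedelta(days=60)
    retained = 0
    for _user_id, seen_at in appearances.items():
        recent = [t for t in seen_at if t >= window_start]
        if len(recent) > 1 and (max(recent) - min(recent)) > timedelta(days=30):
            retained += 1
    return retained
```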
flake.nix
@@ -1,35 +1,30 @@
# A nix flake that sets up a complete Synapse development environment. Dependencies
# A Nix flake that sets up a complete Synapse development environment. Dependencies
# for the SyTest (https://github.com/matrix-org/sytest) and Complement
# (https://github.com/matrix-org/complement) Matrix homeserver test suites are also
# installed automatically.
#
# You must have already installed nix (https://nixos.org) on your system to use this.
# nix can be installed on Linux or MacOS; NixOS is not required. Windows is not
# directly supported, but nix can be installed inside of WSL2 or even Docker
# You must have already installed Nix (https://nixos.org) on your system to use this.
# Nix can be installed on Linux or MacOS; NixOS is not required. Windows is not
# directly supported, but Nix can be installed inside of WSL2 or even Docker
# containers. Please refer to https://nixos.org/download for details.
#
# You must also enable support for flakes in Nix. See the following for how to
# do so permanently: https://nixos.wiki/wiki/Flakes#Enable_flakes
#
# Be warned: you'll need over 3.75 GB of free space to download all the dependencies.
#
# Usage:
#
# With nix installed, navigate to the directory containing this flake and run
# With Nix installed, navigate to the directory containing this flake and run
# `nix develop --impure`. The `--impure` is necessary in order to store state
# locally from "services", such as PostgreSQL and Redis.
#
# You should now be dropped into a new shell with all programs and dependencies
# available to you!
#
# You can start up pre-configured, local PostgreSQL and Redis instances by
# You can start up pre-configured local Synapse, PostgreSQL and Redis instances by
# running: `devenv up`. To stop them, use Ctrl-C.
#
# A PostgreSQL database called 'synapse' will be set up for you, along with
# a PostgreSQL user named 'synapse_user'.
# The 'host' can be found by running `echo $PGHOST` with the development
# shell activated. Use these values to configure your Synapse to connect
# to the local PostgreSQL database. You do not need to specify a password.
# https://matrix-org.github.io/synapse/latest/postgres
#
# All state (the venv, postgres and redis data and config) are stored in
# .devenv/state. Deleting a file from here and then re-entering the shell
# will recreate these files from scratch.
@@ -66,7 +61,7 @@
let
  pkgs = nixpkgs.legacyPackages.${system};
in {
  # Everything is configured via devenv - a nix module for creating declarative
  # Everything is configured via devenv - a Nix module for creating declarative
  # developer environments. See https://devenv.sh/reference/options/ for a list
  # of all possible options.
  default = devenv.lib.mkShell {
@ -153,11 +148,39 @@
|
|||
# Redis is needed in order to run Synapse in worker mode.
|
||||
services.redis.enable = true;
|
||||
|
||||
# Configure and start Synapse. Before starting Synapse, this shell code:
|
||||
# * generates a default homeserver.yaml config file if one does not exist, and
|
||||
# * ensures a directory containing two additional homeserver config files exists;
|
||||
# one to configure using the development environment's PostgreSQL as the
|
||||
# database backend and another for enabling Redis support.
|
||||
process.before = ''
|
||||
python -m synapse.app.homeserver -c homeserver.yaml --generate-config --server-name=synapse.dev --report-stats=no
|
||||
mkdir -p homeserver-config-overrides.d
|
||||
cat > homeserver-config-overrides.d/database.yaml << EOF
|
||||
## Do not edit this file. This file is generated by flake.nix
|
||||
database:
|
||||
name: psycopg2
|
||||
args:
|
||||
user: synapse_user
|
||||
database: synapse
|
||||
host: $PGHOST
|
||||
cp_min: 5
|
||||
cp_max: 10
|
||||
EOF
|
||||
cat > homeserver-config-overrides.d/redis.yaml << EOF
|
||||
## Do not edit this file. This file is generated by flake.nix
|
||||
redis:
|
||||
enabled: true
|
||||
EOF
|
||||
'';
|
||||
# Start synapse when `devenv up` is run.
|
||||
processes.synapse.exec = "poetry run python -m synapse.app.homeserver -c homeserver.yaml --config-directory homeserver-config-overrides.d";
|
||||
|
||||
# Define the perl modules we require to run SyTest.
|
||||
#
|
||||
# This list was compiled by cross-referencing https://metacpan.org/
|
||||
# with the modules defined in './cpanfile' and then finding the
|
||||
# corresponding nix packages on https://search.nixos.org/packages.
|
||||
# corresponding Nix packages on https://search.nixos.org/packages.
|
||||
#
|
||||
# This was done until `./install-deps.pl --dryrun` produced no output.
|
||||
env.PERL5LIB = "${with pkgs.perl536Packages; makePerlPath [
|
||||
|
mypy.ini
@@ -13,6 +13,9 @@ no_implicit_optional = True
disallow_untyped_defs = True
strict_equality = True
warn_redundant_casts = True
# Run mypy type checking with the minimum supported Python version to catch new usage
# that isn't backwards-compatible (types, overloads, etc).
python_version = 3.8

files =
    docker/,
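To illustrate the kind of issue this catches (an invented snippet, not taken from the Synapse codebase): with `python_version = 3.8`, mypy rejects signatures that subscript the built-in generics, which would raise a `TypeError` at import time on Python 3.8 even though they work on 3.9+:

```python
from typing import Dict, List


def count_by_sender(events: List[dict]) -> Dict[str, int]:
    """Count events per sender using typing generics, which are valid on Python 3.8."""
    counts: Dict[str, int] = {}
    for event in events:
        counts[event["sender"]] = counts.get(event["sender"], 0) + 1
    return counts


# Writing the signature as "def count_by_sender(events: list[dict]) -> dict[str, int]:"
# is only valid at runtime on Python >= 3.9 (unless the module uses
# "from __future__ import annotations"), which is exactly the sort of
# backwards-incompatible usage that checking against 3.8 flags.
```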
@ -29,9 +32,6 @@ warn_unused_ignores = False
|
|||
[mypy-synapse.util.caches.treecache]
|
||||
disallow_untyped_defs = False
|
||||
|
||||
[mypy-tests.util.caches.test_descriptors]
|
||||
disallow_untyped_defs = False
|
||||
|
||||
;; Dependencies without annotations
|
||||
;; Before ignoring a module, check to see if type stubs are available.
|
||||
;; The `typeshed` project maintains stubs here:
|
||||
|
|
|
@ -580,20 +580,20 @@ dev = ["Sphinx", "coverage", "flake8", "lxml", "lxml-stubs", "memory-profiler",
|
|||
|
||||
[[package]]
|
||||
name = "furo"
|
||||
version = "2023.3.27"
|
||||
version = "2023.5.20"
|
||||
description = "A clean customisable Sphinx documentation theme."
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "furo-2023.3.27-py3-none-any.whl", hash = "sha256:4ab2be254a2d5e52792d0ca793a12c35582dd09897228a6dd47885dabd5c9521"},
|
||||
{file = "furo-2023.3.27.tar.gz", hash = "sha256:b99e7867a5cc833b2b34d7230631dd6558c7a29f93071fdbb5709634bb33c5a5"},
|
||||
{file = "furo-2023.5.20-py3-none-any.whl", hash = "sha256:594a8436ddfe0c071f3a9e9a209c314a219d8341f3f1af33fdf7c69544fab9e6"},
|
||||
{file = "furo-2023.5.20.tar.gz", hash = "sha256:40e09fa17c6f4b22419d122e933089226dcdb59747b5b6c79363089827dea16f"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
beautifulsoup4 = "*"
|
||||
pygments = ">=2.7"
|
||||
sphinx = ">=5.0,<7.0"
|
||||
sphinx = ">=6.0,<8.0"
|
||||
sphinx-basic-ng = "*"
|
||||
|
||||
[[package]]
|
||||
|
@ -1940,14 +1940,14 @@ email = ["email-validator (>=1.0.3)"]
|
|||
|
||||
[[package]]
|
||||
name = "pygithub"
|
||||
version = "1.58.1"
|
||||
version = "1.58.2"
|
||||
description = "Use the full Github API v3"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "PyGithub-1.58.1-py3-none-any.whl", hash = "sha256:4e7fe9c3ec30d5fde5b4fbb97f18821c9dbf372bf6df337fe66f6689a65e0a83"},
|
||||
{file = "PyGithub-1.58.1.tar.gz", hash = "sha256:7d528b4ad92bc13122129fafd444ce3d04c47d2d801f6446b6e6ee2d410235b3"},
|
||||
{file = "PyGithub-1.58.2-py3-none-any.whl", hash = "sha256:f435884af617c6debaa76cbc355372d1027445a56fbc39972a3b9ed4968badc8"},
|
||||
{file = "PyGithub-1.58.2.tar.gz", hash = "sha256:1e6b1b7afe31f75151fb81f7ab6b984a7188a852bdb123dbb9ae90023c3ce60f"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
|
@ -2251,21 +2251,21 @@ md = ["cmarkgfm (>=0.8.0)"]
|
|||
|
||||
[[package]]
|
||||
name = "requests"
|
||||
version = "2.28.2"
|
||||
version = "2.31.0"
|
||||
description = "Python HTTP for Humans."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.7, <4"
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "requests-2.28.2-py3-none-any.whl", hash = "sha256:64299f4909223da747622c030b781c0d7811e359c37124b4bd368fb8c6518baa"},
|
||||
{file = "requests-2.28.2.tar.gz", hash = "sha256:98b1b2782e3c6c4904938b84c0eb932721069dfdb9134313beff7c83c2df24bf"},
|
||||
{file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
|
||||
{file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
certifi = ">=2017.4.17"
|
||||
charset-normalizer = ">=2,<4"
|
||||
idna = ">=2.5,<4"
|
||||
urllib3 = ">=1.21.1,<1.27"
|
||||
urllib3 = ">=1.21.1,<3"
|
||||
|
||||
[package.extras]
|
||||
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
|
||||
|
@ -2565,21 +2565,21 @@ files = [
|
|||
|
||||
[[package]]
|
||||
name = "sphinx"
|
||||
version = "6.1.3"
|
||||
version = "6.2.1"
|
||||
description = "Python documentation generator"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "Sphinx-6.1.3.tar.gz", hash = "sha256:0dac3b698538ffef41716cf97ba26c1c7788dba73ce6f150c1ff5b4720786dd2"},
|
||||
{file = "sphinx-6.1.3-py3-none-any.whl", hash = "sha256:807d1cb3d6be87eb78a381c3e70ebd8d346b9a25f3753e9947e866b2786865fc"},
|
||||
{file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
|
||||
{file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
alabaster = ">=0.7,<0.8"
|
||||
babel = ">=2.9"
|
||||
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
|
||||
docutils = ">=0.18,<0.20"
|
||||
docutils = ">=0.18.1,<0.20"
|
||||
imagesize = ">=1.3"
|
||||
importlib-metadata = {version = ">=4.8", markers = "python_version < \"3.10\""}
|
||||
Jinja2 = ">=3.0"
|
||||
|
@ -2597,7 +2597,7 @@ sphinxcontrib-serializinghtml = ">=1.1.5"
|
|||
[package.extras]
|
||||
docs = ["sphinxcontrib-websupport"]
|
||||
lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
|
||||
test = ["cython", "html5lib", "pytest (>=4.6)"]
|
||||
test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
|
||||
|
||||
[[package]]
|
||||
name = "sphinx-autodoc2"
|
||||
|
@ -3058,14 +3058,14 @@ files = [
|
|||
|
||||
[[package]]
|
||||
name = "types-pillow"
|
||||
version = "9.5.0.2"
|
||||
version = "9.5.0.4"
|
||||
description = "Typing stubs for Pillow"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
files = [
|
||||
{file = "types-Pillow-9.5.0.2.tar.gz", hash = "sha256:b3f9f621f259566c19c1deca21901017c8b1e3e200ed2e49e0a2d83c0a5175db"},
|
||||
{file = "types_Pillow-9.5.0.2-py3-none-any.whl", hash = "sha256:58fdebd0ffa2353ecccdd622adde23bce89da5c0c8b96c34f2d1eca7b7e42d0e"},
|
||||
{file = "types-Pillow-9.5.0.4.tar.gz", hash = "sha256:f1b6af47abd151847ee25911ffeba784899bc7dc7f9eba8ca6a5aac522b012ef"},
|
||||
{file = "types_Pillow-9.5.0.4-py3-none-any.whl", hash = "sha256:69427d9fa4320ff6e30f00fb9c0dd71185dc0a16de4757774220104759483466"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
|
@ -3124,14 +3124,14 @@ types-urllib3 = "*"
|
|||
|
||||
[[package]]
|
||||
name = "types-setuptools"
|
||||
version = "67.7.0.2"
|
||||
version = "67.8.0.0"
|
||||
description = "Typing stubs for setuptools"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
files = [
|
||||
{file = "types-setuptools-67.7.0.2.tar.gz", hash = "sha256:155789e85e79d5682b0d341919d4beb6140408ae52bac922af25b54e36ab25c0"},
|
||||
{file = "types_setuptools-67.7.0.2-py3-none-any.whl", hash = "sha256:bd30f6dbe9b83f0a7e6e3eab6d2df748aa4f55700d54e9f077d3aa30cc019445"},
|
||||
{file = "types-setuptools-67.8.0.0.tar.gz", hash = "sha256:95c9ed61871d6c0e258433373a4e1753c0a7c3627a46f4d4058c7b5a08ab844f"},
|
||||
{file = "types_setuptools-67.8.0.0-py3-none-any.whl", hash = "sha256:6df73340d96b238a4188b7b7668814b37e8018168aef1eef94a3b1872e3f60ff"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
|
|
|
@ -89,7 +89,7 @@ manifest-path = "rust/Cargo.toml"
|
|||
|
||||
[tool.poetry]
|
||||
name = "matrix-synapse"
|
||||
version = "1.84.0rc1"
|
||||
version = "1.84.0"
|
||||
description = "Homeserver for the Matrix decentralised comms protocol"
|
||||
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
|
||||
license = "Apache-2.0"
|
||||
|
|
|
@ -18,10 +18,11 @@ can crop up, e.g the cache descriptors.
|
|||
|
||||
from typing import Callable, Optional, Type
|
||||
|
||||
from mypy.erasetype import remove_instance_last_known_values
|
||||
from mypy.nodes import ARG_NAMED_OPT
|
||||
from mypy.plugin import MethodSigContext, Plugin
|
||||
from mypy.typeops import bind_self
|
||||
from mypy.types import CallableType, NoneType, UnionType
|
||||
from mypy.types import CallableType, Instance, NoneType, UnionType
|
||||
|
||||
|
||||
class SynapsePlugin(Plugin):
|
||||
|
@ -92,10 +93,41 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
|
|||
arg_names.append("on_invalidate")
|
||||
arg_kinds.append(ARG_NAMED_OPT) # Arg is an optional kwarg.
|
||||
|
||||
# Finally we ensure the return type is a Deferred.
|
||||
if (
|
||||
isinstance(signature.ret_type, Instance)
|
||||
and signature.ret_type.type.fullname == "twisted.internet.defer.Deferred"
|
||||
):
|
||||
# If it is already a Deferred, nothing to do.
|
||||
ret_type = signature.ret_type
|
||||
else:
|
||||
ret_arg = None
|
||||
if isinstance(signature.ret_type, Instance):
|
||||
# If a coroutine, wrap the coroutine's return type in a Deferred.
|
||||
if signature.ret_type.type.fullname == "typing.Coroutine":
|
||||
ret_arg = signature.ret_type.args[2]
|
||||
|
||||
# If an awaitable, wrap the awaitable's final value in a Deferred.
|
||||
elif signature.ret_type.type.fullname == "typing.Awaitable":
|
||||
ret_arg = signature.ret_type.args[0]
|
||||
|
||||
# Otherwise, wrap the return value in a Deferred.
|
||||
if ret_arg is None:
|
||||
ret_arg = signature.ret_type
|
||||
|
||||
# This should be able to use ctx.api.named_generic_type, but that doesn't seem
|
||||
# to find the correct symbol for anything more than 1 module deep.
|
||||
#
|
||||
# modules is not part of CheckerPluginInterface. The following is a combination
|
||||
# of TypeChecker.named_generic_type and TypeChecker.lookup_typeinfo.
|
||||
sym = ctx.api.modules["twisted.internet.defer"].names.get("Deferred") # type: ignore[attr-defined]
|
||||
ret_type = Instance(sym.node, [remove_instance_last_known_values(ret_arg)])
|
||||
|
||||
signature = signature.copy_modified(
|
||||
arg_types=arg_types,
|
||||
arg_names=arg_names,
|
||||
arg_kinds=arg_kinds,
|
||||
ret_type=ret_type,
|
||||
)
|
||||
|
||||
return signature
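# Illustration (added commentary, not part of the patch): for a method decorated with
# Synapse's @cached() descriptor, e.g. "async def get_thing(self, key: str) -> int",
# this plugin rewrites the signature mypy sees so that calling the method yields a
# "Deferred[int]", matching what the cache descriptor actually returns at runtime.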
|
||||
|
|
|
@ -128,20 +128,7 @@ USER_FILTER_SCHEMA = {
|
|||
"account_data": {"$ref": "#/definitions/filter"},
|
||||
"room": {"$ref": "#/definitions/room_filter"},
|
||||
"event_format": {"type": "string", "enum": ["client", "federation"]},
|
||||
"event_fields": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "string",
|
||||
# Don't allow '\\' in event field filters. This makes matching
|
||||
# events a lot easier as we can then use a negative lookbehind
|
||||
# assertion to split '\.' If we allowed \\ then it would
|
||||
# incorrectly split '\\.' See synapse.events.utils.serialize_event
|
||||
#
|
||||
# Note that because this is a regular expression, we have to escape
|
||||
# each backslash in the pattern.
|
||||
"pattern": r"^((?!\\\\).)*$",
|
||||
},
|
||||
},
|
||||
"event_fields": {"type": "array", "items": {"type": "string"}},
|
||||
},
|
||||
"additionalProperties": True, # Allow new fields for forward compatibility
|
||||
}
|
||||
|
|
|
@ -214,7 +214,7 @@ def handle_startup_exception(e: Exception) -> NoReturn:
|
|||
# the reactor are written to the logs, followed by a summary to stderr.
|
||||
logger.exception("Exception during startup")
|
||||
|
||||
error_string = "".join(traceback.format_exception(e))
|
||||
error_string = "".join(traceback.format_exception(type(e), e, e.__traceback__))
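# (Added note: the one-argument form traceback.format_exception(e) only exists on
# Python >= 3.10, which is why the explicit (type, value, traceback) form is used here.)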
|
||||
indented_error_string = indent(error_string, " ")
|
||||
|
||||
quit_with_error(
|
||||
|
|
|
@ -64,7 +64,7 @@ from synapse.util.logcontext import LoggingContext
|
|||
logger = logging.getLogger("synapse.app.admin_cmd")
|
||||
|
||||
|
||||
class AdminCmdSlavedStore(
|
||||
class AdminCmdStore(
|
||||
FilteringWorkerStore,
|
||||
ClientIpWorkerStore,
|
||||
DeviceWorkerStore,
|
||||
|
@ -103,7 +103,7 @@ class AdminCmdSlavedStore(
|
|||
|
||||
|
||||
class AdminCmdServer(HomeServer):
|
||||
DATASTORE_CLASS = AdminCmdSlavedStore # type: ignore
|
||||
DATASTORE_CLASS = AdminCmdStore # type: ignore
|
||||
|
||||
|
||||
async def export_data_command(hs: HomeServer, args: argparse.Namespace) -> None:
|
||||
|
|
|
@ -102,7 +102,7 @@ from synapse.util.httpresourcetree import create_resource_tree
|
|||
logger = logging.getLogger("synapse.app.generic_worker")
|
||||
|
||||
|
||||
class GenericWorkerSlavedStore(
|
||||
class GenericWorkerStore(
|
||||
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
|
||||
# rather than going via the correct worker.
|
||||
UserDirectoryStore,
|
||||
|
@ -154,7 +154,7 @@ class GenericWorkerSlavedStore(
|
|||
|
||||
|
||||
class GenericWorkerServer(HomeServer):
|
||||
DATASTORE_CLASS = GenericWorkerSlavedStore # type: ignore
|
||||
DATASTORE_CLASS = GenericWorkerStore # type: ignore
|
||||
|
||||
def _listen_http(self, listener_config: ListenerConfig) -> None:
|
||||
assert listener_config.http_options is not None
|
||||
|
|
|
@ -127,10 +127,6 @@ async def phone_stats_home(
|
|||
daily_sent_messages = await store.count_daily_sent_messages()
|
||||
stats["daily_sent_messages"] = daily_sent_messages
|
||||
|
||||
r30_results = await store.count_r30_users()
|
||||
for name, count in r30_results.items():
|
||||
stats["r30_users_" + name] = count
|
||||
|
||||
r30v2_results = await store.count_r30v2_users()
|
||||
for name, count in r30v2_results.items():
|
||||
stats["r30v2_users_" + name] = count
|
||||
|
|
|
@ -86,6 +86,7 @@ class ApplicationService:
|
|||
url.rstrip("/") if isinstance(url, str) else None
|
||||
) # url must not end with a slash
|
||||
self.hs_token = hs_token
|
||||
# The full Matrix ID for this application service's sender.
|
||||
self.sender = sender
|
||||
self.namespaces = self._check_namespaces(namespaces)
|
||||
self.id = id
|
||||
|
@ -212,7 +213,7 @@ class ApplicationService:
|
|||
True if the application service is interested in the user, False if not.
|
||||
"""
|
||||
return (
|
||||
# User is the appservice's sender_localpart user
|
||||
# User is the appservice's configured sender_localpart user
|
||||
user_id == self.sender
|
||||
# User is in the appservice's user namespace
|
||||
or self.is_user_in_namespace(user_id)
|
||||
|
|
|
@ -44,6 +44,7 @@ import jinja2
|
|||
import pkg_resources
|
||||
import yaml
|
||||
|
||||
from synapse.types import StrSequence
|
||||
from synapse.util.templates import _create_mxc_to_http_filter, _format_ts_filter
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
@ -58,7 +59,7 @@ class ConfigError(Exception):
|
|||
the problem lies.
|
||||
"""
|
||||
|
||||
def __init__(self, msg: str, path: Optional[Iterable[str]] = None):
|
||||
def __init__(self, msg: str, path: Optional[StrSequence] = None):
|
||||
self.msg = msg
|
||||
self.path = path
|
||||
|
||||
|
|
|
@ -61,9 +61,10 @@ from synapse.config import ( # noqa: F401
|
|||
voip,
|
||||
workers,
|
||||
)
|
||||
from synapse.types import StrSequence
|
||||
|
||||
class ConfigError(Exception):
|
||||
def __init__(self, msg: str, path: Optional[Iterable[str]] = None):
|
||||
def __init__(self, msg: str, path: Optional[StrSequence] = None):
|
||||
self.msg = msg
|
||||
self.path = path
|
||||
|
||||
|
|
|
@ -11,17 +11,17 @@
|
|||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
from typing import Any, Dict, Iterable, Type, TypeVar
|
||||
from typing import Any, Dict, Type, TypeVar
|
||||
|
||||
import jsonschema
|
||||
from pydantic import BaseModel, ValidationError, parse_obj_as
|
||||
|
||||
from synapse.config._base import ConfigError
|
||||
from synapse.types import JsonDict
|
||||
from synapse.types import JsonDict, StrSequence
|
||||
|
||||
|
||||
def validate_config(
|
||||
json_schema: JsonDict, config: Any, config_path: Iterable[str]
|
||||
json_schema: JsonDict, config: Any, config_path: StrSequence
|
||||
) -> None:
|
||||
"""Validates a config setting against a JsonSchema definition
|
||||
|
||||
|
@ -45,7 +45,7 @@ def validate_config(
|
|||
|
||||
|
||||
def json_error_to_config_error(
|
||||
e: jsonschema.ValidationError, config_path: Iterable[str]
|
||||
e: jsonschema.ValidationError, config_path: StrSequence
|
||||
) -> ConfigError:
|
||||
"""Converts a json validation error to a user-readable ConfigError
|
||||
|
||||
|
|
|
@ -36,11 +36,10 @@ class AppServiceConfig(Config):
|
|||
if not isinstance(self.app_service_config_files, list) or not all(
|
||||
type(x) is str for x in self.app_service_config_files
|
||||
):
|
||||
# type-ignore: this function gets arbitrary json value; we do use this path.
|
||||
raise ConfigError(
|
||||
"Expected '%s' to be a list of AS config files:"
|
||||
% (self.app_service_config_files),
|
||||
"app_service_config_files",
|
||||
("app_service_config_files",),
|
||||
)
|
||||
|
||||
self.track_appservice_user_ips = config.get("track_appservice_user_ips", False)
|
||||
|
|
|
@ -84,18 +84,6 @@ class ExperimentalConfig(Config):
|
|||
"msc3984_appservice_key_query", False
|
||||
)
|
||||
|
||||
# MSC3706 (server-side support for partial state in /send_join responses)
|
||||
# Synapse will always serve partial state responses to requests using the stable
|
||||
# query parameter `omit_members`. If this flag is set, Synapse will also serve
|
||||
# partial state responses to requests using the unstable query parameter
|
||||
# `org.matrix.msc3706.partial_state`.
|
||||
self.msc3706_enabled: bool = experimental.get("msc3706_enabled", False)
|
||||
|
||||
# experimental support for faster joins over federation
|
||||
# (MSC2775, MSC3706, MSC3895)
|
||||
# requires a target server that can provide a partial join response (MSC3706)
|
||||
self.faster_joins_enabled: bool = experimental.get("faster_joins", True)
|
||||
|
||||
# MSC3720 (Account status endpoint)
|
||||
self.msc3720_enabled: bool = experimental.get("msc3720_enabled", False)
|
||||
|
||||
|
|
|
@ -117,9 +117,7 @@ root:
|
|||
# Write logs to the `buffer` handler, which will buffer them together in memory,
|
||||
# then write them to a file.
|
||||
#
|
||||
# Replace "buffer" with "console" to log to stderr instead. (Note that you'll
|
||||
# also need to update the configuration for the `twisted` logger above, in
|
||||
# this case.)
|
||||
# Replace "buffer" with "console" to log to stderr instead.
|
||||
#
|
||||
handlers: [buffer]
|
||||
|
||||
|
|
|
@ -19,7 +19,7 @@ from urllib import parse as urlparse
|
|||
import attr
|
||||
import pkg_resources
|
||||
|
||||
from synapse.types import JsonDict
|
||||
from synapse.types import JsonDict, StrSequence
|
||||
|
||||
from ._base import Config, ConfigError
|
||||
from ._util import validate_config
|
||||
|
@ -80,7 +80,7 @@ class OembedConfig(Config):
|
|||
)
|
||||
|
||||
def _parse_and_validate_provider(
|
||||
self, providers: List[JsonDict], config_path: Iterable[str]
|
||||
self, providers: List[JsonDict], config_path: StrSequence
|
||||
) -> Iterable[OEmbedEndpointConfig]:
|
||||
# Ensure it is the proper form.
|
||||
validate_config(
|
||||
|
@ -112,7 +112,7 @@ class OembedConfig(Config):
|
|||
api_endpoint, patterns, endpoint.get("formats")
|
||||
)
|
||||
|
||||
def _glob_to_pattern(self, glob: str, config_path: Iterable[str]) -> Pattern:
|
||||
def _glob_to_pattern(self, glob: str, config_path: StrSequence) -> Pattern:
|
||||
"""
|
||||
Convert the glob into a sane regular expression to match against. The
|
||||
rules followed will be slightly different for the domain portion vs.
|
||||
|
|
|
@ -224,20 +224,20 @@ class ContentRepositoryConfig(Config):
|
|||
if "http" in proxy_env or "https" in proxy_env:
|
||||
logger.warning("".join(HTTP_PROXY_SET_WARNING))
|
||||
|
||||
# we always blacklist '0.0.0.0' and '::', which are supposed to be
|
||||
# we always block '0.0.0.0' and '::', which are supposed to be
|
||||
# unroutable addresses.
|
||||
self.url_preview_ip_range_blacklist = generate_ip_set(
|
||||
self.url_preview_ip_range_blocklist = generate_ip_set(
|
||||
config["url_preview_ip_range_blacklist"],
|
||||
["0.0.0.0", "::"],
|
||||
config_path=("url_preview_ip_range_blacklist",),
|
||||
)
|
||||
|
||||
self.url_preview_ip_range_whitelist = generate_ip_set(
|
||||
self.url_preview_ip_range_allowlist = generate_ip_set(
|
||||
config.get("url_preview_ip_range_whitelist", ()),
|
||||
config_path=("url_preview_ip_range_whitelist",),
|
||||
)
|
||||
|
||||
self.url_preview_url_blacklist = config.get("url_preview_url_blacklist", ())
|
||||
self.url_preview_url_blocklist = config.get("url_preview_url_blacklist", ())
|
||||
|
||||
self.url_preview_accept_language = config.get(
|
||||
"url_preview_accept_language"
|
||||
|
|
|
@ -27,7 +27,7 @@ from netaddr import AddrFormatError, IPNetwork, IPSet
|
|||
from twisted.conch.ssh.keys import Key
|
||||
|
||||
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
|
||||
from synapse.types import JsonDict
|
||||
from synapse.types import JsonDict, StrSequence
|
||||
from synapse.util.module_loader import load_module
|
||||
from synapse.util.stringutils import parse_and_validate_server_name
|
||||
|
||||
|
@ -73,7 +73,7 @@ def _6to4(network: IPNetwork) -> IPNetwork:
|
|||
def generate_ip_set(
|
||||
ip_addresses: Optional[Iterable[str]],
|
||||
extra_addresses: Optional[Iterable[str]] = None,
|
||||
config_path: Optional[Iterable[str]] = None,
|
||||
config_path: Optional[StrSequence] = None,
|
||||
) -> IPSet:
|
||||
"""
|
||||
Generate an IPSet from a list of IP addresses or CIDRs.
|
||||
|
@ -115,7 +115,7 @@ def generate_ip_set(
|
|||
|
||||
|
||||
# IP ranges that are considered private / unroutable / don't make sense.
|
||||
DEFAULT_IP_RANGE_BLACKLIST = [
|
||||
DEFAULT_IP_RANGE_BLOCKLIST = [
|
||||
# Localhost
|
||||
"127.0.0.0/8",
|
||||
# Private networks.
|
||||
|
@ -501,36 +501,36 @@ class ServerConfig(Config):
|
|||
# due to resource constraints
|
||||
self.admin_contact = config.get("admin_contact", None)
|
||||
|
||||
ip_range_blacklist = config.get(
|
||||
"ip_range_blacklist", DEFAULT_IP_RANGE_BLACKLIST
|
||||
ip_range_blocklist = config.get(
|
||||
"ip_range_blacklist", DEFAULT_IP_RANGE_BLOCKLIST
|
||||
)
|
||||
|
||||
# Attempt to create an IPSet from the given ranges
|
||||
|
||||
# Always blacklist 0.0.0.0, ::
|
||||
self.ip_range_blacklist = generate_ip_set(
|
||||
ip_range_blacklist, ["0.0.0.0", "::"], config_path=("ip_range_blacklist",)
|
||||
# Always block 0.0.0.0, ::
|
||||
self.ip_range_blocklist = generate_ip_set(
|
||||
ip_range_blocklist, ["0.0.0.0", "::"], config_path=("ip_range_blacklist",)
|
||||
)
|
||||
|
||||
self.ip_range_whitelist = generate_ip_set(
|
||||
self.ip_range_allowlist = generate_ip_set(
|
||||
config.get("ip_range_whitelist", ()), config_path=("ip_range_whitelist",)
|
||||
)
|
||||
# The federation_ip_range_blacklist is used for backwards-compatibility
|
||||
# and only applies to federation and identity servers.
|
||||
if "federation_ip_range_blacklist" in config:
|
||||
# Always blacklist 0.0.0.0, ::
|
||||
self.federation_ip_range_blacklist = generate_ip_set(
|
||||
# Always block 0.0.0.0, ::
|
||||
self.federation_ip_range_blocklist = generate_ip_set(
|
||||
config["federation_ip_range_blacklist"],
|
||||
["0.0.0.0", "::"],
|
||||
config_path=("federation_ip_range_blacklist",),
|
||||
)
|
||||
# 'federation_ip_range_whitelist' was never a supported configuration option.
|
||||
self.federation_ip_range_whitelist = None
|
||||
self.federation_ip_range_allowlist = None
|
||||
else:
|
||||
# No backwards-compatibility required, as federation_ip_range_blacklist
|
||||
# is not given. Default to ip_range_blacklist and ip_range_whitelist.
|
||||
self.federation_ip_range_blacklist = self.ip_range_blacklist
|
||||
self.federation_ip_range_whitelist = self.ip_range_whitelist
|
||||
self.federation_ip_range_blocklist = self.ip_range_blocklist
|
||||
self.federation_ip_range_allowlist = self.ip_range_allowlist
|
||||
|
||||
# (undocumented) option for torturing the worker-mode replication a bit,
|
||||
# for testing. The value defines the number of milliseconds to pause before
|
||||
|
|
|
@ -19,6 +19,7 @@ from immutabledict import immutabledict
|
|||
|
||||
from synapse.appservice import ApplicationService
|
||||
from synapse.events import EventBase
|
||||
from synapse.logging.opentracing import tag_args, trace
|
||||
from synapse.types import JsonDict, StateMap
|
||||
|
||||
if TYPE_CHECKING:
|
||||
|
@ -242,6 +243,8 @@ class EventContext(UnpersistedEventContextBase):
|
|||
|
||||
return self._state_group
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_current_state_ids(
|
||||
self, state_filter: Optional["StateFilter"] = None
|
||||
) -> Optional[StateMap[str]]:
|
||||
|
@ -275,6 +278,8 @@ class EventContext(UnpersistedEventContextBase):
|
|||
|
||||
return prev_state_ids
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_prev_state_ids(
|
||||
self, state_filter: Optional["StateFilter"] = None
|
||||
) -> StateMap[str]:
|
||||
|
|
|
@ -22,6 +22,7 @@ from typing import (
|
|||
Iterable,
|
||||
List,
|
||||
Mapping,
|
||||
Match,
|
||||
MutableMapping,
|
||||
Optional,
|
||||
Union,
|
||||
|
@ -46,12 +47,10 @@ if TYPE_CHECKING:
|
|||
from synapse.handlers.relations import BundledAggregations
|
||||
|
||||
|
||||
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
|
||||
# (?<!stuff) matches if the current position in the string is not preceded
|
||||
# by a match for 'stuff'.
|
||||
# TODO: This is fast, but fails to handle "foo\\.bar" which should be treated as
|
||||
# the literal fields "foo\" and "bar" but will instead be treated as "foo\\.bar"
|
||||
SPLIT_FIELD_REGEX = re.compile(r"(?<!\\)\.")
|
||||
# Split strings on "." but not "\." (or "\\\.").
|
||||
SPLIT_FIELD_REGEX = re.compile(r"\\*\.")
|
||||
# Find escaped characters, e.g. those with a \ in front of them.
|
||||
ESCAPE_SEQUENCE_PATTERN = re.compile(r"\\(.)")
|
||||
|
||||
CANONICALJSON_MAX_INT = (2**53) - 1
|
||||
CANONICALJSON_MIN_INT = -CANONICALJSON_MAX_INT
|
||||
|
@ -253,6 +252,57 @@ def _copy_field(src: JsonDict, dst: JsonDict, field: List[str]) -> None:
|
|||
sub_out_dict[key_to_move] = sub_dict[key_to_move]
|
||||
|
||||
|
||||
def _escape_slash(m: Match[str]) -> str:
|
||||
"""
|
||||
Replacement function; replace a backslash-backslash or backslash-dot with the
|
||||
second character. Leaves any other string alone.
|
||||
"""
|
||||
if m.group(1) in ("\\", "."):
|
||||
return m.group(1)
|
||||
return m.group(0)
|
||||
|
||||
|
||||
def _split_field(field: str) -> List[str]:
|
||||
"""
|
||||
Splits strings on unescaped dots and removes escaping.
|
||||
|
||||
Args:
|
||||
field: A string representing a path to a field.
|
||||
|
||||
Returns:
|
||||
A list of nested fields to traverse.
|
||||
"""
|
||||
|
||||
# Convert the field and remove escaping:
|
||||
#
|
||||
# 1. "content.body.thing\.with\.dots"
|
||||
# 2. ["content", "body", "thing\.with\.dots"]
|
||||
# 3. ["content", "body", "thing.with.dots"]
|
||||
|
||||
# Find all dots (and their preceding backslashes). If the dot is unescaped
|
||||
# then emit a new field part.
|
||||
result = []
|
||||
prev_start = 0
|
||||
for match in SPLIT_FIELD_REGEX.finditer(field):
|
||||
# If the match is an *even* number of characters then the dot was escaped.
|
||||
if len(match.group()) % 2 == 0:
|
||||
continue
|
||||
|
||||
# Add a new part (up to the dot, exclusive) after escaping.
|
||||
result.append(
|
||||
ESCAPE_SEQUENCE_PATTERN.sub(
|
||||
_escape_slash, field[prev_start : match.end() - 1]
|
||||
)
|
||||
)
|
||||
prev_start = match.end()
|
||||
|
||||
# Add any part of the field after the last unescaped dot. (Note that if the
|
||||
# character is a dot this correctly adds a blank string.)
|
||||
result.append(re.sub(r"\\(.)", _escape_slash, field[prev_start:]))
|
||||
|
||||
return result
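# Illustration (added commentary, not part of the patch), following the docstring above:
#   _split_field("content.body")                      -> ["content", "body"]
#   _split_field(r"content.body.thing\.with\.dots")   -> ["content", "body", "thing.with.dots"]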
|
||||
|
||||
|
||||
def only_fields(dictionary: JsonDict, fields: List[str]) -> JsonDict:
|
||||
"""Return a new dict with only the fields in 'dictionary' which are present
|
||||
in 'fields'.
|
||||
|
@ -260,7 +310,7 @@ def only_fields(dictionary: JsonDict, fields: List[str]) -> JsonDict:
|
|||
If there are no event fields specified then all fields are included.
|
||||
The entries may include '.' characters to indicate sub-fields.
|
||||
So ['content.body'] will include the 'body' field of the 'content' object.
|
||||
A literal '.' character in a field name may be escaped using a '\'.
|
||||
A literal '.' or '\' character in a field name may be escaped using a '\'.
|
||||
|
||||
Args:
|
||||
dictionary: The dictionary to read from.
|
||||
|
@ -275,13 +325,7 @@ def only_fields(dictionary: JsonDict, fields: List[str]) -> JsonDict:
|
|||
|
||||
# for each field, convert it:
|
||||
# ["content.body.thing\.with\.dots"] => [["content", "body", "thing\.with\.dots"]]
|
||||
split_fields = [SPLIT_FIELD_REGEX.split(f) for f in fields]
|
||||
|
||||
# for each element of the output array of arrays:
|
||||
# remove escaping so we can use the right key names.
|
||||
split_fields[:] = [
|
||||
[f.replace(r"\.", r".") for f in field_array] for field_array in split_fields
|
||||
]
|
||||
split_fields = [_split_field(f) for f in fields]
|
||||
|
||||
output: JsonDict = {}
|
||||
for field_array in split_fields:
|
||||
|
|
|
@ -739,12 +739,10 @@ class FederationServer(FederationBase):
|
|||
"event": event_json,
|
||||
"state": [p.get_pdu_json(time_now) for p in state_events],
|
||||
"auth_chain": [p.get_pdu_json(time_now) for p in auth_chain_events],
|
||||
"org.matrix.msc3706.partial_state": caller_supports_partial_state,
|
||||
"members_omitted": caller_supports_partial_state,
|
||||
}
|
||||
|
||||
if servers_in_room is not None:
|
||||
resp["org.matrix.msc3706.servers_in_room"] = list(servers_in_room)
|
||||
resp["servers_in_room"] = list(servers_in_room)
|
||||
|
||||
return resp
|
||||
|
|
|
@ -59,7 +59,6 @@ class TransportLayerClient:
|
|||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
self.client = hs.get_federation_http_client()
|
||||
self._faster_joins_enabled = hs.config.experimental.faster_joins_enabled
|
||||
self._is_mine_server_name = hs.is_mine_server_name
|
||||
|
||||
async def get_room_state_ids(
|
||||
|
@ -363,12 +362,8 @@ class TransportLayerClient:
|
|||
) -> "SendJoinResponse":
|
||||
path = _create_v2_path("/send_join/%s/%s", room_id, event_id)
|
||||
query_params: Dict[str, str] = {}
|
||||
if self._faster_joins_enabled:
|
||||
# lazy-load state on join
|
||||
query_params["org.matrix.msc3706.partial_state"] = (
|
||||
"true" if omit_members else "false"
|
||||
)
|
||||
query_params["omit_members"] = "true" if omit_members else "false"
|
||||
# lazy-load state on join
|
||||
query_params["omit_members"] = "true" if omit_members else "false"
|
||||
|
||||
return await self.client.put_json(
|
||||
destination=destination,
|
||||
|
@ -902,9 +897,7 @@ def _members_omitted_parser(response: SendJoinResponse) -> Generator[None, Any,
|
|||
while True:
|
||||
val = yield
|
||||
if not isinstance(val, bool):
|
||||
raise TypeError(
|
||||
"members_omitted (formerly org.matrix.msc370c.partial_state) must be a boolean"
|
||||
)
|
||||
raise TypeError("members_omitted must be a boolean")
|
||||
response.members_omitted = val
|
||||
|
||||
|
||||
|
@ -964,14 +957,6 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
|
|||
]
|
||||
|
||||
if not v1_api:
|
||||
self._coros.append(
|
||||
ijson.items_coro(
|
||||
_members_omitted_parser(self._response),
|
||||
"org.matrix.msc3706.partial_state",
|
||||
use_float="True",
|
||||
)
|
||||
)
|
||||
# The stable field name comes last, so it "wins" if the fields disagree
|
||||
self._coros.append(
|
||||
ijson.items_coro(
|
||||
_members_omitted_parser(self._response),
|
||||
|
@ -980,14 +965,6 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
|
|||
)
|
||||
)
|
||||
|
||||
self._coros.append(
|
||||
ijson.items_coro(
|
||||
_servers_in_room_parser(self._response),
|
||||
"org.matrix.msc3706.servers_in_room",
|
||||
use_float="True",
|
||||
)
|
||||
)
|
||||
|
||||
# Again, stable field name comes last
|
||||
self._coros.append(
|
||||
ijson.items_coro(
|
||||
|
|
|
@ -440,7 +440,6 @@ class FederationV2SendJoinServlet(BaseFederationServerServlet):
|
|||
server_name: str,
|
||||
):
|
||||
super().__init__(hs, authenticator, ratelimiter, server_name)
|
||||
self._read_msc3706_query_param = hs.config.experimental.msc3706_enabled
|
||||
|
||||
async def on_PUT(
|
||||
self,
|
||||
|
@ -453,16 +452,7 @@ class FederationV2SendJoinServlet(BaseFederationServerServlet):
|
|||
# TODO(paul): assert that event_id parsed from path actually
|
||||
# match those given in content
|
||||
|
||||
partial_state = False
|
||||
# The stable query parameter wins, if it disagrees with the unstable
|
||||
# parameter for some reason.
|
||||
stable_param = parse_boolean_from_args(query, "omit_members", default=None)
|
||||
if stable_param is not None:
|
||||
partial_state = stable_param
|
||||
elif self._read_msc3706_query_param:
|
||||
partial_state = parse_boolean_from_args(
|
||||
query, "org.matrix.msc3706.partial_state", default=False
|
||||
)
|
||||
partial_state = parse_boolean_from_args(query, "omit_members", default=False)
|
||||
|
||||
result = await self.handler.on_send_join_request(
|
||||
origin, content, room_id, caller_supports_partial_state=partial_state
|
||||
|
|
|
@ -52,7 +52,6 @@ from synapse.api.errors import (
|
|||
NotFoundError,
|
||||
StoreError,
|
||||
SynapseError,
|
||||
UserDeactivatedError,
|
||||
)
|
||||
from synapse.api.ratelimiting import Ratelimiter
|
||||
from synapse.handlers.ui_auth import (
|
||||
|
@ -1419,12 +1418,6 @@ class AuthHandler:
|
|||
return None
|
||||
(user_id, password_hash) = lookupres
|
||||
|
||||
# If the password hash is None, the account has likely been deactivated
|
||||
if not password_hash:
|
||||
deactivated = await self.store.get_user_deactivated_status(user_id)
|
||||
if deactivated:
|
||||
raise UserDeactivatedError("This account has been deactivated")
|
||||
|
||||
result = await self.validate_hash(password, password_hash)
|
||||
if not result:
|
||||
logger.warning("Failed password login for user %s", user_id)
|
||||
|
@ -1749,8 +1742,11 @@ class AuthHandler:
|
|||
registered.
|
||||
auth_provider_session_id: The session ID from the SSO IdP received during login.
|
||||
"""
|
||||
# If the account has been deactivated, do not proceed with the login
|
||||
# flow.
|
||||
# If the account has been deactivated, do not proceed with the login.
|
||||
#
|
||||
# This gets checked again when the token is submitted but this lets us
|
||||
# provide an HTML error page to the user (instead of issuing a token and
|
||||
# having it error later).
|
||||
deactivated = await self.store.get_user_deactivated_status(registered_user_id)
|
||||
if deactivated:
|
||||
respond_with_html(request, 403, self._sso_account_deactivated_template)
|
||||
|
|
|
@ -148,7 +148,7 @@ class FederationHandler:
|
|||
self._event_auth_handler = hs.get_event_auth_handler()
|
||||
self._server_notices_mxid = hs.config.servernotices.server_notices_mxid
|
||||
self.config = hs.config
|
||||
self.http_client = hs.get_proxied_blacklisted_http_client()
|
||||
self.http_client = hs.get_proxied_blocklisted_http_client()
|
||||
self._replication = hs.get_replication_data_handler()
|
||||
self._federation_event_handler = hs.get_federation_event_handler()
|
||||
self._device_handler = hs.get_device_handler()
|
||||
|
|
|
@ -890,6 +890,11 @@ class FederationEventHandler:
|
|||
# Continue on with the events that are new to us.
|
||||
new_events.append(event)
|
||||
|
||||
set_tag(
|
||||
SynapseTags.RESULT_PREFIX + "new_events.length",
|
||||
str(len(new_events)),
|
||||
)
|
||||
|
||||
# We want to sort these by depth so we process them and
|
||||
# tell clients about them in order.
|
||||
sorted_events = sorted(new_events, key=lambda x: x.depth)
|
||||
|
|
|
@ -52,10 +52,10 @@ class IdentityHandler:
|
|||
# An HTTP client for contacting trusted URLs.
|
||||
self.http_client = SimpleHttpClient(hs)
|
||||
# An HTTP client for contacting identity servers specified by clients.
|
||||
self.blacklisting_http_client = SimpleHttpClient(
|
||||
self._http_client = SimpleHttpClient(
|
||||
hs,
|
||||
ip_blacklist=hs.config.server.federation_ip_range_blacklist,
|
||||
ip_whitelist=hs.config.server.federation_ip_range_whitelist,
|
||||
ip_blocklist=hs.config.server.federation_ip_range_blocklist,
|
||||
ip_allowlist=hs.config.server.federation_ip_range_allowlist,
|
||||
)
|
||||
self.federation_http_client = hs.get_federation_http_client()
|
||||
self.hs = hs
|
||||
|
@ -197,7 +197,7 @@ class IdentityHandler:
|
|||
try:
|
||||
# Use the blacklisting http client as this call is only to identity servers
|
||||
# provided by a client
|
||||
data = await self.blacklisting_http_client.post_json_get_json(
|
||||
data = await self._http_client.post_json_get_json(
|
||||
bind_url, bind_data, headers=headers
|
||||
)
|
||||
|
||||
|
@ -308,9 +308,7 @@ class IdentityHandler:
|
|||
try:
|
||||
# Use the blacklisting http client as this call is only to identity servers
|
||||
# provided by a client
|
||||
await self.blacklisting_http_client.post_json_get_json(
|
||||
url, content, headers
|
||||
)
|
||||
await self._http_client.post_json_get_json(url, content, headers)
|
||||
changed = True
|
||||
except HttpResponseException as e:
|
||||
changed = False
|
||||
|
@ -579,7 +577,7 @@ class IdentityHandler:
|
|||
"""
|
||||
# Check what hashing details are supported by this identity server
|
||||
try:
|
||||
hash_details = await self.blacklisting_http_client.get_json(
|
||||
hash_details = await self._http_client.get_json(
|
||||
"%s%s/_matrix/identity/v2/hash_details" % (id_server_scheme, id_server),
|
||||
{"access_token": id_access_token},
|
||||
)
|
||||
|
@ -646,7 +644,7 @@ class IdentityHandler:
|
|||
headers = {"Authorization": create_id_access_token_header(id_access_token)}
|
||||
|
||||
try:
|
||||
lookup_results = await self.blacklisting_http_client.post_json_get_json(
|
||||
lookup_results = await self._http_client.post_json_get_json(
|
||||
"%s%s/_matrix/identity/v2/lookup" % (id_server_scheme, id_server),
|
||||
{
|
||||
"addresses": [lookup_value],
|
||||
|
@@ -752,7 +750,7 @@ class IdentityHandler:
 
         url = "%s%s/_matrix/identity/v2/store-invite" % (id_server_scheme, id_server)
         try:
-            data = await self.blacklisting_http_client.post_json_get_json(
+            data = await self._http_client.post_json_get_json(
                 url,
                 invite_config,
                 {"Authorization": create_id_access_token_header(id_access_token)},
@@ -0,0 +1,105 @@
+# Copyright 2023 Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from typing import TYPE_CHECKING
+
+from authlib.jose import JsonWebToken, JWTClaims
+from authlib.jose.errors import BadSignatureError, InvalidClaimError, JoseError
+
+from synapse.api.errors import Codes, LoginError
+from synapse.types import JsonDict, UserID
+
+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
+
+class JwtHandler:
+    def __init__(self, hs: "HomeServer"):
+        self.hs = hs
+
+        self.jwt_secret = hs.config.jwt.jwt_secret
+        self.jwt_subject_claim = hs.config.jwt.jwt_subject_claim
+        self.jwt_algorithm = hs.config.jwt.jwt_algorithm
+        self.jwt_issuer = hs.config.jwt.jwt_issuer
+        self.jwt_audiences = hs.config.jwt.jwt_audiences
+
+    def validate_login(self, login_submission: JsonDict) -> str:
+        """
+        Authenticates the user for the /login API
+
+        Args:
+            login_submission: the whole of the login submission
+                (including 'type' and other relevant fields)
+
+        Returns:
+            The user ID that is logging in.
+
+        Raises:
+            LoginError if there was an authentication problem.
+        """
+        token = login_submission.get("token", None)
+        if token is None:
+            raise LoginError(
+                403, "Token field for JWT is missing", errcode=Codes.FORBIDDEN
+            )
+
+        jwt = JsonWebToken([self.jwt_algorithm])
+        claim_options = {}
+        if self.jwt_issuer is not None:
+            claim_options["iss"] = {"value": self.jwt_issuer, "essential": True}
+        if self.jwt_audiences is not None:
+            claim_options["aud"] = {"values": self.jwt_audiences, "essential": True}
+
+        try:
+            claims = jwt.decode(
+                token,
+                key=self.jwt_secret,
+                claims_cls=JWTClaims,
+                claims_options=claim_options,
+            )
+        except BadSignatureError:
+            # We handle this case separately to provide a better error message
+            raise LoginError(
+                403,
+                "JWT validation failed: Signature verification failed",
+                errcode=Codes.FORBIDDEN,
+            )
+        except JoseError as e:
+            # A JWT error occurred, return some info back to the client.
+            raise LoginError(
+                403,
+                "JWT validation failed: %s" % (str(e),),
+                errcode=Codes.FORBIDDEN,
+            )
+
+        try:
+            claims.validate(leeway=120)  # allows 2 min of clock skew
+
+            # Enforce the old behavior which is rolled out in productive
+            # servers: if the JWT contains an 'aud' claim but none is
+            # configured, the login attempt will fail
+            if claims.get("aud") is not None:
+                if self.jwt_audiences is None or len(self.jwt_audiences) == 0:
+                    raise InvalidClaimError("aud")
+        except JoseError as e:
+            raise LoginError(
+                403,
+                "JWT validation failed: %s" % (str(e),),
+                errcode=Codes.FORBIDDEN,
+            )
+
+        user = claims.get(self.jwt_subject_claim, None)
+        if user is None:
+            raise LoginError(403, "Invalid JWT", errcode=Codes.FORBIDDEN)
+
+        return UserID(user, self.hs.hostname).to_string()
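For context, the new handler above (imported later as `synapse.handlers.jwt`) only validates tokens minted by an external issuer. A hedged sketch of producing a token it would accept, assuming an HS256 shared secret and the default "sub" subject claim; the secret, localpart and issuer values are illustrative, not taken from this diff:

```python
from typing import Optional

from authlib.jose import jwt  # pip install authlib


def mint_login_token(secret: str, localpart: str, issuer: Optional[str] = None) -> str:
    # "HS256" and the "sub" claim name are assumptions for this sketch; they must
    # match the homeserver's jwt_algorithm and jwt_subject_claim settings.
    header = {"alg": "HS256"}
    claims = {"sub": localpart}
    if issuer is not None:
        claims["iss"] = issuer
    # authlib returns bytes; the Matrix /login endpoint expects a string token.
    return jwt.encode(header, claims, secret).decode("ascii")


if __name__ == "__main__":
    print(mint_login_token("shared-secret", "alice"))
```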
@@ -16,6 +16,7 @@ import logging
 from typing import TYPE_CHECKING
 
 from synapse.api.constants import ReceiptTypes
+from synapse.api.errors import SynapseError
 from synapse.util.async_helpers import Linearizer
 
 if TYPE_CHECKING:
@@ -47,12 +48,21 @@ class ReadMarkerHandler:
         )
 
         should_update = True
+        # Get event ordering, this also ensures we know about the event
+        event_ordering = await self.store.get_event_ordering(event_id)
 
         if existing_read_marker:
-            # Only update if the new marker is ahead in the stream
-            should_update = await self.store.is_event_after(
-                event_id, existing_read_marker["event_id"]
-            )
+            try:
+                old_event_ordering = await self.store.get_event_ordering(
+                    existing_read_marker["event_id"]
+                )
+            except SynapseError:
+                # Old event no longer exists, assume new is ahead. This may
+                # happen if the old event was removed due to retention.
+                pass
+            else:
+                # Only update if the new marker is ahead in the stream
+                should_update = event_ordering > old_event_ordering
 
         if should_update:
             content = {"event_id": event_id}
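The change above treats a read marker whose previous event has been purged (for example by message retention) as safe to advance. A minimal standalone sketch of that comparison, with hypothetical function and argument names:

```python
from typing import Optional


def should_move_read_marker(
    new_event_ordering: int, old_event_ordering: Optional[int]
) -> bool:
    if old_event_ordering is None:
        # The old event no longer exists, so assume the new marker is ahead.
        return True
    # Only move forward if the new marker is later in the stream.
    return new_event_ordering > old_event_ordering


assert should_move_read_marker(10, None) is True
assert should_move_read_marker(10, 12) is False
```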
@@ -204,7 +204,7 @@ class SsoHandler:
         self._media_repo = (
             hs.get_media_repository() if hs.config.media.can_load_media_repo else None
         )
-        self._http_client = hs.get_proxied_blacklisted_http_client()
+        self._http_client = hs.get_proxied_blocklisted_http_client()
 
         # The following template is shown after a successful user interactive
         # authentication session. It tells the user they can close the window.
@@ -117,22 +117,22 @@ RawHeaderValue = Union[
 ]
 
 
-def check_against_blacklist(
-    ip_address: IPAddress, ip_whitelist: Optional[IPSet], ip_blacklist: IPSet
+def _is_ip_blocked(
+    ip_address: IPAddress, allowlist: Optional[IPSet], blocklist: IPSet
 ) -> bool:
     """
     Compares an IP address to allowed and disallowed IP sets.
 
     Args:
         ip_address: The IP address to check
-        ip_whitelist: Allowed IP addresses.
-        ip_blacklist: Disallowed IP addresses.
+        allowlist: Allowed IP addresses.
+        blocklist: Disallowed IP addresses.
 
     Returns:
-        True if the IP address is in the blacklist and not in the whitelist.
+        True if the IP address is in the blocklist and not in the allowlist.
     """
-    if ip_address in ip_blacklist:
-        if ip_whitelist is None or ip_address not in ip_whitelist:
+    if ip_address in blocklist:
+        if allowlist is None or ip_address not in allowlist:
             return True
     return False
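A runnable sketch of the same allow/block decision using netaddr (the library behind Synapse's IPSet); the example ranges are purely illustrative:

```python
from netaddr import IPAddress, IPSet

# Illustrative ranges only; real deployments configure these themselves.
blocklist = IPSet(["10.0.0.0/8", "192.168.0.0/16"])
allowlist = IPSet(["10.1.2.3/32"])


def is_ip_blocked(ip: str) -> bool:
    # Same rule as _is_ip_blocked above: blocked unless explicitly allowed.
    ip_address = IPAddress(ip)
    return ip_address in blocklist and ip_address not in allowlist


assert is_ip_blocked("10.99.0.1")        # caught by the blocklist
assert not is_ip_blocked("10.1.2.3")     # the allowlist wins
assert not is_ip_blocked("8.8.8.8")      # not blocked at all
```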
@@ -154,27 +154,27 @@ def _make_scheduler(
     return _scheduler
 
 
-class _IPBlacklistingResolver:
+class _IPBlockingResolver:
     """
-    A proxy for reactor.nameResolver which only produces non-blacklisted IP
-    addresses, preventing DNS rebinding attacks on URL preview.
+    A proxy for reactor.nameResolver which only produces non-blocklisted IP
+    addresses, preventing DNS rebinding attacks.
     """
 
     def __init__(
         self,
         reactor: IReactorPluggableNameResolver,
-        ip_whitelist: Optional[IPSet],
-        ip_blacklist: IPSet,
+        ip_allowlist: Optional[IPSet],
+        ip_blocklist: IPSet,
     ):
         """
         Args:
             reactor: The twisted reactor.
-            ip_whitelist: IP addresses to allow.
-            ip_blacklist: IP addresses to disallow.
+            ip_allowlist: IP addresses to allow.
+            ip_blocklist: IP addresses to disallow.
         """
         self._reactor = reactor
-        self._ip_whitelist = ip_whitelist
-        self._ip_blacklist = ip_blacklist
+        self._ip_allowlist = ip_allowlist
+        self._ip_blocklist = ip_blocklist
 
     def resolveHostName(
         self, recv: IResolutionReceiver, hostname: str, portNumber: int = 0
@@ -191,16 +191,13 @@ class _IPBlacklistingResolver:
 
             ip_address = IPAddress(address.host)
 
-            if check_against_blacklist(
-                ip_address, self._ip_whitelist, self._ip_blacklist
-            ):
+            if _is_ip_blocked(ip_address, self._ip_allowlist, self._ip_blocklist):
                 logger.info(
-                    "Dropped %s from DNS resolution to %s due to blacklist"
-                    % (ip_address, hostname)
+                    "Blocked %s from DNS resolution to %s" % (ip_address, hostname)
                 )
                 has_bad_ip = True
 
-        # if we have a blacklisted IP, we'd like to raise an error to block the
+        # if we have a blocked IP, we'd like to raise an error to block the
         # request, but all we can really do from here is claim that there were no
         # valid results.
         if not has_bad_ip:
@@ -232,24 +229,24 @@ class _IPBlacklistingResolver:
 # ISynapseReactor implies IReactorCore, but explicitly marking it this as an implementer
 # of IReactorCore seems to keep mypy-zope happier.
 @implementer(IReactorCore, ISynapseReactor)
-class BlacklistingReactorWrapper:
+class BlocklistingReactorWrapper:
     """
-    A Reactor wrapper which will prevent DNS resolution to blacklisted IP
+    A Reactor wrapper which will prevent DNS resolution to blocked IP
     addresses, to prevent DNS rebinding.
     """
 
     def __init__(
         self,
         reactor: IReactorPluggableNameResolver,
-        ip_whitelist: Optional[IPSet],
-        ip_blacklist: IPSet,
+        ip_allowlist: Optional[IPSet],
+        ip_blocklist: IPSet,
     ):
         self._reactor = reactor
 
-        # We need to use a DNS resolver which filters out blacklisted IP
+        # We need to use a DNS resolver which filters out blocked IP
         # addresses, to prevent DNS rebinding.
-        self._nameResolver = _IPBlacklistingResolver(
-            self._reactor, ip_whitelist, ip_blacklist
+        self._nameResolver = _IPBlockingResolver(
+            self._reactor, ip_allowlist, ip_blocklist
         )
 
     def __getattr__(self, attr: str) -> Any:
@@ -260,7 +257,7 @@ class BlacklistingReactorWrapper:
         return getattr(self._reactor, attr)
 
 
-class BlacklistingAgentWrapper(Agent):
+class BlocklistingAgentWrapper(Agent):
     """
     An Agent wrapper which will prevent access to IP addresses being accessed
     directly (without an IP address lookup).
@@ -269,18 +266,18 @@ class BlacklistingAgentWrapper(Agent):
     def __init__(
         self,
         agent: IAgent,
-        ip_blacklist: IPSet,
-        ip_whitelist: Optional[IPSet] = None,
+        ip_blocklist: IPSet,
+        ip_allowlist: Optional[IPSet] = None,
     ):
         """
         Args:
             agent: The Agent to wrap.
-            ip_whitelist: IP addresses to allow.
-            ip_blacklist: IP addresses to disallow.
+            ip_allowlist: IP addresses to allow.
+            ip_blocklist: IP addresses to disallow.
         """
         self._agent = agent
-        self._ip_whitelist = ip_whitelist
-        self._ip_blacklist = ip_blacklist
+        self._ip_allowlist = ip_allowlist
+        self._ip_blocklist = ip_blocklist
 
     def request(
         self,
@@ -299,13 +296,9 @@ class BlacklistingAgentWrapper(Agent):
             # Not an IP
             pass
         else:
-            if check_against_blacklist(
-                ip_address, self._ip_whitelist, self._ip_blacklist
-            ):
-                logger.info("Blocking access to %s due to blacklist" % (ip_address,))
-                e = SynapseError(
-                    HTTPStatus.FORBIDDEN, "IP address blocked by IP blacklist entry"
-                )
+            if _is_ip_blocked(ip_address, self._ip_allowlist, self._ip_blocklist):
+                logger.info("Blocking access to %s" % (ip_address,))
+                e = SynapseError(HTTPStatus.FORBIDDEN, "IP address blocked")
                 return defer.fail(Failure(e))
 
         return self._agent.request(
@ -763,10 +756,9 @@ class SimpleHttpClient(BaseHttpClient):
|
|||
Args:
|
||||
hs: The HomeServer instance to pass in
|
||||
treq_args: Extra keyword arguments to be given to treq.request.
|
||||
ip_blacklist: The IP addresses that are blacklisted that
|
||||
we may not request.
|
||||
ip_whitelist: The whitelisted IP addresses, that we can
|
||||
request if it were otherwise caught in a blacklist.
|
||||
ip_blocklist: The IP addresses that we may not request.
|
||||
ip_allowlist: The allowed IP addresses, that we can
|
||||
request if it were otherwise caught in a blocklist.
|
||||
use_proxy: Whether proxy settings should be discovered and used
|
||||
from conventional environment variables.
|
||||
"""
|
||||
|
@ -775,19 +767,19 @@ class SimpleHttpClient(BaseHttpClient):
|
|||
self,
|
||||
hs: "HomeServer",
|
||||
treq_args: Optional[Dict[str, Any]] = None,
|
||||
ip_whitelist: Optional[IPSet] = None,
|
||||
ip_blacklist: Optional[IPSet] = None,
|
||||
ip_allowlist: Optional[IPSet] = None,
|
||||
ip_blocklist: Optional[IPSet] = None,
|
||||
use_proxy: bool = False,
|
||||
):
|
||||
super().__init__(hs, treq_args=treq_args)
|
||||
self._ip_whitelist = ip_whitelist
|
||||
self._ip_blacklist = ip_blacklist
|
||||
self._ip_allowlist = ip_allowlist
|
||||
self._ip_blocklist = ip_blocklist
|
||||
|
||||
if self._ip_blacklist:
|
||||
# If we have an IP blacklist, we need to use a DNS resolver which
|
||||
# filters out blacklisted IP addresses, to prevent DNS rebinding.
|
||||
self.reactor: ISynapseReactor = BlacklistingReactorWrapper(
|
||||
self.reactor, self._ip_whitelist, self._ip_blacklist
|
||||
if self._ip_blocklist:
|
||||
# If we have an IP blocklist, we need to use a DNS resolver which
|
||||
# filters out blocked IP addresses, to prevent DNS rebinding.
|
||||
self.reactor: ISynapseReactor = BlocklistingReactorWrapper(
|
||||
self.reactor, self._ip_allowlist, self._ip_blocklist
|
||||
)
|
||||
|
||||
# the pusher makes lots of concurrent SSL connections to Sygnal, and tends to
|
||||
|
@ -809,14 +801,13 @@ class SimpleHttpClient(BaseHttpClient):
|
|||
use_proxy=use_proxy,
|
||||
)
|
||||
|
||||
if self._ip_blacklist:
|
||||
# If we have an IP blacklist, we then install the blacklisting Agent
|
||||
# which prevents direct access to IP addresses, that are not caught
|
||||
# by the DNS resolution.
|
||||
self.agent = BlacklistingAgentWrapper(
|
||||
if self._ip_blocklist:
|
||||
# If we have an IP blocklist, we then install the Agent which prevents
|
||||
# direct access to IP addresses, that are not caught by the DNS resolution.
|
||||
self.agent = BlocklistingAgentWrapper(
|
||||
self.agent,
|
||||
ip_blacklist=self._ip_blacklist,
|
||||
ip_whitelist=self._ip_whitelist,
|
||||
ip_blocklist=self._ip_blocklist,
|
||||
ip_allowlist=self._ip_allowlist,
|
||||
)
|
||||
|
||||
|
||||
|
@ -844,6 +835,7 @@ class ReplicationClient(BaseHttpClient):
|
|||
|
||||
self.agent: IAgent = ReplicationAgent(
|
||||
hs.get_reactor(),
|
||||
hs.config.worker.instance_map,
|
||||
contextFactory=hs.get_http_client_context_factory(),
|
||||
pool=pool,
|
||||
)
|
||||
|
|
|
@ -36,7 +36,7 @@ from twisted.web.iweb import IAgent, IAgentEndpointFactory, IBodyProducer, IResp
|
|||
|
||||
from synapse.crypto.context_factory import FederationPolicyForHTTPS
|
||||
from synapse.http import proxyagent
|
||||
from synapse.http.client import BlacklistingAgentWrapper, BlacklistingReactorWrapper
|
||||
from synapse.http.client import BlocklistingAgentWrapper, BlocklistingReactorWrapper
|
||||
from synapse.http.connectproxyclient import HTTPConnectProxyEndpoint
|
||||
from synapse.http.federation.srv_resolver import Server, SrvResolver
|
||||
from synapse.http.federation.well_known_resolver import WellKnownResolver
|
||||
|
@ -65,12 +65,12 @@ class MatrixFederationAgent:
|
|||
user_agent:
|
||||
The user agent header to use for federation requests.
|
||||
|
||||
ip_whitelist: Allowed IP addresses.
|
||||
ip_allowlist: Allowed IP addresses.
|
||||
|
||||
ip_blacklist: Disallowed IP addresses.
|
||||
ip_blocklist: Disallowed IP addresses.
|
||||
|
||||
proxy_reactor: twisted reactor to use for connections to the proxy server
|
||||
reactor might have some blacklisting applied (i.e. for DNS queries),
|
||||
reactor might have some blocking applied (i.e. for DNS queries),
|
||||
but we need unblocked access to the proxy.
|
||||
|
||||
_srv_resolver:
|
||||
|
@ -87,17 +87,17 @@ class MatrixFederationAgent:
|
|||
reactor: ISynapseReactor,
|
||||
tls_client_options_factory: Optional[FederationPolicyForHTTPS],
|
||||
user_agent: bytes,
|
||||
ip_whitelist: Optional[IPSet],
|
||||
ip_blacklist: IPSet,
|
||||
ip_allowlist: Optional[IPSet],
|
||||
ip_blocklist: IPSet,
|
||||
_srv_resolver: Optional[SrvResolver] = None,
|
||||
_well_known_resolver: Optional[WellKnownResolver] = None,
|
||||
):
|
||||
# proxy_reactor is not blacklisted
|
||||
# proxy_reactor is not blocklisting reactor
|
||||
proxy_reactor = reactor
|
||||
|
||||
# We need to use a DNS resolver which filters out blacklisted IP
|
||||
# We need to use a DNS resolver which filters out blocked IP
|
||||
# addresses, to prevent DNS rebinding.
|
||||
reactor = BlacklistingReactorWrapper(reactor, ip_whitelist, ip_blacklist)
|
||||
reactor = BlocklistingReactorWrapper(reactor, ip_allowlist, ip_blocklist)
|
||||
|
||||
self._clock = Clock(reactor)
|
||||
self._pool = HTTPConnectionPool(reactor)
|
||||
|
@ -120,7 +120,7 @@ class MatrixFederationAgent:
|
|||
if _well_known_resolver is None:
|
||||
_well_known_resolver = WellKnownResolver(
|
||||
reactor,
|
||||
agent=BlacklistingAgentWrapper(
|
||||
agent=BlocklistingAgentWrapper(
|
||||
ProxyAgent(
|
||||
reactor,
|
||||
proxy_reactor,
|
||||
|
@ -128,7 +128,7 @@ class MatrixFederationAgent:
|
|||
contextFactory=tls_client_options_factory,
|
||||
use_proxy=True,
|
||||
),
|
||||
ip_blacklist=ip_blacklist,
|
||||
ip_blocklist=ip_blocklist,
|
||||
),
|
||||
user_agent=self.user_agent,
|
||||
)
|
||||
|
@ -256,7 +256,7 @@ class MatrixHostnameEndpoint:
|
|||
Args:
|
||||
reactor: twisted reactor to use for underlying requests
|
||||
proxy_reactor: twisted reactor to use for connections to the proxy server.
|
||||
'reactor' might have some blacklisting applied (i.e. for DNS queries),
|
||||
'reactor' might have some blocking applied (i.e. for DNS queries),
|
||||
but we need unblocked access to the proxy.
|
||||
tls_client_options_factory:
|
||||
factory to use for fetching client tls options, or none to disable TLS.
|
||||
|
|
|
@ -64,7 +64,7 @@ from synapse.api.errors import (
|
|||
from synapse.crypto.context_factory import FederationPolicyForHTTPS
|
||||
from synapse.http import QuieterFileBodyProducer
|
||||
from synapse.http.client import (
|
||||
BlacklistingAgentWrapper,
|
||||
BlocklistingAgentWrapper,
|
||||
BodyExceededMaxSize,
|
||||
ByteWriteable,
|
||||
_make_scheduler,
|
||||
|
@ -392,15 +392,15 @@ class MatrixFederationHttpClient:
|
|||
self.reactor,
|
||||
tls_client_options_factory,
|
||||
user_agent.encode("ascii"),
|
||||
hs.config.server.federation_ip_range_whitelist,
|
||||
hs.config.server.federation_ip_range_blacklist,
|
||||
hs.config.server.federation_ip_range_allowlist,
|
||||
hs.config.server.federation_ip_range_blocklist,
|
||||
)
|
||||
|
||||
# Use a BlacklistingAgentWrapper to prevent circumventing the IP
|
||||
# blacklist via IP literals in server names
|
||||
self.agent = BlacklistingAgentWrapper(
|
||||
# Use a BlocklistingAgentWrapper to prevent circumventing the IP
|
||||
# blocking via IP literals in server names
|
||||
self.agent = BlocklistingAgentWrapper(
|
||||
federation_agent,
|
||||
ip_blacklist=hs.config.server.federation_ip_range_blacklist,
|
||||
ip_blocklist=hs.config.server.federation_ip_range_blocklist,
|
||||
)
|
||||
|
||||
self.clock = hs.get_clock()
|
||||
|
|
|
@ -53,7 +53,7 @@ class ProxyAgent(_AgentBase):
|
|||
connections.
|
||||
|
||||
proxy_reactor: twisted reactor to use for connections to the proxy server
|
||||
reactor might have some blacklisting applied (i.e. for DNS queries),
|
||||
reactor might have some blocking applied (i.e. for DNS queries),
|
||||
but we need unblocked access to the proxy.
|
||||
|
||||
contextFactory: A factory for TLS contexts, to control the
|
||||
|
|
|
@ -13,7 +13,7 @@
|
|||
# limitations under the License.
|
||||
|
||||
import logging
|
||||
from typing import Optional
|
||||
from typing import Dict, Optional
|
||||
|
||||
from zope.interface import implementer
|
||||
|
||||
|
@ -32,6 +32,7 @@ from twisted.web.iweb import (
|
|||
IResponse,
|
||||
)
|
||||
|
||||
from synapse.config.workers import InstanceLocationConfig
|
||||
from synapse.types import ISynapseReactor
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
@@ -44,9 +45,11 @@ class ReplicationEndpointFactory:
     def __init__(
         self,
         reactor: ISynapseReactor,
+        instance_map: Dict[str, InstanceLocationConfig],
        context_factory: IPolicyForHTTPS,
     ) -> None:
         self.reactor = reactor
+        self.instance_map = instance_map
         self.context_factory = context_factory
 
     def endpointForURI(self, uri: URI) -> IStreamClientEndpoint:
@@ -58,15 +61,29 @@ class ReplicationEndpointFactory:
 
         Returns: The correct client endpoint object
         """
-        if uri.scheme in (b"http", b"https"):
-            endpoint = HostnameEndpoint(self.reactor, uri.host, uri.port)
-            if uri.scheme == b"https":
+        # The given URI has a special scheme and includes the worker name. The
+        # actual connection details are pulled from the instance map.
+        worker_name = uri.netloc.decode("utf-8")
+        scheme = self.instance_map[worker_name].scheme()
+
+        if scheme in ("http", "https"):
+            endpoint = HostnameEndpoint(
+                self.reactor,
+                self.instance_map[worker_name].host,
+                self.instance_map[worker_name].port,
+            )
+            if scheme == "https":
                 endpoint = wrapClientTLS(
-                    self.context_factory.creatorForNetloc(uri.host, uri.port), endpoint
+                    # The 'port' argument below isn't actually used by the function
+                    self.context_factory.creatorForNetloc(
+                        self.instance_map[worker_name].host,
+                        self.instance_map[worker_name].port,
+                    ),
+                    endpoint,
                 )
             return endpoint
         else:
-            raise SchemeNotSupported(f"Unsupported scheme: {uri.scheme!r}")
+            raise SchemeNotSupported(f"Unsupported scheme: {scheme}")
 
 
 @implementer(IAgent)
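To make the new lookup concrete: the netloc of a `synapse-replication://` URI is a worker name, and the real scheme, host and port come from the instance map. A simplified, self-contained sketch; `InstanceLocation` is a stand-in for Synapse's `InstanceLocationConfig`, and the worker name and host are placeholders:

```python
from dataclasses import dataclass
from typing import Dict, Tuple
from urllib.parse import urlsplit


@dataclass
class InstanceLocation:
    # Stand-in for InstanceLocationConfig, with just the fields used here.
    host: str
    port: int
    tls: bool = False

    def scheme(self) -> str:
        return "https" if self.tls else "http"


instance_map: Dict[str, InstanceLocation] = {
    "worker1": InstanceLocation(host="replication-worker1.internal", port=9093),
}


def resolve(uri: str) -> Tuple[str, str, int]:
    # The "host" part of a synapse-replication:// URI is really the worker name.
    worker_name = urlsplit(uri).netloc
    location = instance_map[worker_name]
    return location.scheme(), location.host, location.port


print(resolve("synapse-replication://worker1/_synapse/replication/send_event/abc"))
# ('http', 'replication-worker1.internal', 9093)
```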
@ -80,6 +97,7 @@ class ReplicationAgent(_AgentBase):
|
|||
def __init__(
|
||||
self,
|
||||
reactor: ISynapseReactor,
|
||||
instance_map: Dict[str, InstanceLocationConfig],
|
||||
contextFactory: IPolicyForHTTPS,
|
||||
connectTimeout: Optional[float] = None,
|
||||
bindAddress: Optional[bytes] = None,
|
||||
|
@ -102,7 +120,9 @@ class ReplicationAgent(_AgentBase):
|
|||
created.
|
||||
"""
|
||||
_AgentBase.__init__(self, reactor, pool)
|
||||
endpoint_factory = ReplicationEndpointFactory(reactor, contextFactory)
|
||||
endpoint_factory = ReplicationEndpointFactory(
|
||||
reactor, instance_map, contextFactory
|
||||
)
|
||||
self._endpointFactory = endpoint_factory
|
||||
|
||||
def request(
|
||||
|
|
|
@ -105,7 +105,7 @@ class UrlPreviewer:
|
|||
|
||||
When Synapse is asked to preview a URL it does the following:
|
||||
|
||||
1. Checks against a URL blacklist (defined as `url_preview_url_blacklist` in the
|
||||
1. Checks against a URL blocklist (defined as `url_preview_url_blacklist` in the
|
||||
config).
|
||||
2. Checks the URL against an in-memory cache and returns the result if it exists. (This
|
||||
is also used to de-duplicate processing of multiple in-flight requests at once.)
|
||||
|
@ -113,7 +113,7 @@ class UrlPreviewer:
|
|||
1. Checks URL and timestamp against the database cache and returns the result if it
|
||||
has not expired and was successful (a 2xx return code).
|
||||
2. Checks if the URL matches an oEmbed (https://oembed.com/) pattern. If it
|
||||
does, update the URL to download.
|
||||
does and the new URL is not blocked, update the URL to download.
|
||||
3. Downloads the URL and stores it into a file via the media storage provider
|
||||
and saves the local media metadata.
|
||||
4. If the media is an image:
|
||||
|
@ -127,14 +127,14 @@ class UrlPreviewer:
|
|||
and saves the local media metadata.
|
||||
2. Convert the oEmbed response to an Open Graph response.
|
||||
3. Override any Open Graph data from the HTML with data from oEmbed.
|
||||
4. If an image exists in the Open Graph response:
|
||||
4. If an image URL exists in the Open Graph response:
|
||||
1. Downloads the URL and stores it into a file via the media storage
|
||||
provider and saves the local media metadata.
|
||||
2. Generates thumbnails.
|
||||
3. Updates the Open Graph response based on image properties.
|
||||
6. If the media is JSON and an oEmbed URL was found:
|
||||
6. If an oEmbed URL was found and the media is JSON:
|
||||
1. Convert the oEmbed response to an Open Graph response.
|
||||
2. If a thumbnail or image is in the oEmbed response:
|
||||
2. If an image URL is in the oEmbed response:
|
||||
1. Downloads the URL and stores it into a file via the media storage
|
||||
provider and saves the local media metadata.
|
||||
2. Generates thumbnails.
|
||||
|
@ -144,7 +144,8 @@ class UrlPreviewer:
|
|||
|
||||
If any additional requests (e.g. from oEmbed autodiscovery, step 5.3 or
|
||||
image thumbnailing, step 5.4 or 6.4) fails then the URL preview as a whole
|
||||
does not fail. As much information as possible is returned.
|
||||
does not fail. If any of them are blocked, then those additional requests
|
||||
are skipped. As much information as possible is returned.
|
||||
|
||||
The in-memory cache expires after 1 hour.
|
||||
|
||||
|
@ -166,8 +167,8 @@ class UrlPreviewer:
|
|||
self.client = SimpleHttpClient(
|
||||
hs,
|
||||
treq_args={"browser_like_redirects": True},
|
||||
ip_whitelist=hs.config.media.url_preview_ip_range_whitelist,
|
||||
ip_blacklist=hs.config.media.url_preview_ip_range_blacklist,
|
||||
ip_allowlist=hs.config.media.url_preview_ip_range_allowlist,
|
||||
ip_blocklist=hs.config.media.url_preview_ip_range_blocklist,
|
||||
use_proxy=True,
|
||||
)
|
||||
self.media_repo = media_repo
|
||||
|
@ -185,7 +186,7 @@ class UrlPreviewer:
|
|||
or instance_running_jobs == hs.get_instance_name()
|
||||
)
|
||||
|
||||
self.url_preview_url_blacklist = hs.config.media.url_preview_url_blacklist
|
||||
self.url_preview_url_blocklist = hs.config.media.url_preview_url_blocklist
|
||||
self.url_preview_accept_language = hs.config.media.url_preview_accept_language
|
||||
|
||||
# memory cache mapping urls to an ObservableDeferred returning
|
||||
|
@ -203,48 +204,14 @@ class UrlPreviewer:
|
|||
)
|
||||
|
||||
async def preview(self, url: str, user: UserID, ts: int) -> bytes:
|
||||
# XXX: we could move this into _do_preview if we wanted.
|
||||
url_tuple = urlsplit(url)
|
||||
for entry in self.url_preview_url_blacklist:
|
||||
match = True
|
||||
for attrib in entry:
|
||||
pattern = entry[attrib]
|
||||
value = getattr(url_tuple, attrib)
|
||||
logger.debug(
|
||||
"Matching attrib '%s' with value '%s' against pattern '%s'",
|
||||
attrib,
|
||||
value,
|
||||
pattern,
|
||||
)
|
||||
|
||||
if value is None:
|
||||
match = False
|
||||
continue
|
||||
|
||||
# Some attributes might not be parsed as strings by urlsplit (such as the
|
||||
# port, which is parsed as an int). Because we use match functions that
|
||||
# expect strings, we want to make sure that's what we give them.
|
||||
value_str = str(value)
|
||||
|
||||
if pattern.startswith("^"):
|
||||
if not re.match(pattern, value_str):
|
||||
match = False
|
||||
continue
|
||||
else:
|
||||
if not fnmatch.fnmatch(value_str, pattern):
|
||||
match = False
|
||||
continue
|
||||
if match:
|
||||
logger.warning("URL %s blocked by url_blacklist entry %s", url, entry)
|
||||
raise SynapseError(
|
||||
403, "URL blocked by url pattern blacklist entry", Codes.UNKNOWN
|
||||
)
|
||||
|
||||
# the in-memory cache:
|
||||
# * ensures that only one request is active at a time
|
||||
# * ensures that only one request to a URL is active at a time
|
||||
# * takes load off the DB for the thundering herds
|
||||
# * also caches any failures (unlike the DB) so we don't keep
|
||||
# requesting the same endpoint
|
||||
# requesting the same endpoint
|
||||
#
|
||||
# Note that autodiscovered oEmbed URLs and pre-caching of images
|
||||
# are not captured in the in-memory cache.
|
||||
|
||||
observable = self._cache.get(url)
|
||||
|
||||
|
@ -283,7 +250,7 @@ class UrlPreviewer:
|
|||
og = og.encode("utf8")
|
||||
return og
|
||||
|
||||
# If this URL can be accessed via oEmbed, use that instead.
|
||||
# If this URL can be accessed via an allowed oEmbed, use that instead.
|
||||
url_to_download = url
|
||||
oembed_url = self._oembed.get_oembed_url(url)
|
||||
if oembed_url:
|
||||
|
@ -329,6 +296,7 @@ class UrlPreviewer:
|
|||
# defer to that.
|
||||
oembed_url = self._oembed.autodiscover_from_html(tree)
|
||||
og_from_oembed: JsonDict = {}
|
||||
# Only download to the oEmbed URL if it is allowed.
|
||||
if oembed_url:
|
||||
try:
|
||||
oembed_info = await self._handle_url(
|
||||
|
@@ -411,6 +379,59 @@ class UrlPreviewer:
 
         return jsonog.encode("utf8")
 
+    def _is_url_blocked(self, url: str) -> bool:
+        """
+        Check whether the URL is allowed to be previewed (according to the homeserver
+        configuration).
+
+        Args:
+            url: The requested URL.
+
+        Return:
+            True if the URL is blocked, False if it is allowed.
+        """
+        url_tuple = urlsplit(url)
+        for entry in self.url_preview_url_blocklist:
+            match = True
+            # Iterate over each entry. If *all* attributes of that entry match
+            # the current URL, then reject it.
+            for attrib, pattern in entry.items():
+                value = getattr(url_tuple, attrib)
+                logger.debug(
+                    "Matching attrib '%s' with value '%s' against pattern '%s'",
+                    attrib,
+                    value,
+                    pattern,
+                )
+
+                if value is None:
+                    match = False
+                    break
+
+                # Some attributes might not be parsed as strings by urlsplit (such as the
+                # port, which is parsed as an int). Because we use match functions that
+                # expect strings, we want to make sure that's what we give them.
+                value_str = str(value)
+
+                # Check the value against the pattern as either a regular expression or
+                # a glob. If it doesn't match, the entry doesn't match.
+                if pattern.startswith("^"):
+                    if not re.match(pattern, value_str):
+                        match = False
+                        break
+                else:
+                    if not fnmatch.fnmatch(value_str, pattern):
+                        match = False
+                        break
+
+            # All fields matched, return true (the URL is blocked).
+            if match:
+                logger.warning("URL %s blocked by entry %s", url, entry)
+                return match
+
+        # No matches were found, the URL is allowed.
+        return False
+
     async def _download_url(self, url: str, output_stream: BinaryIO) -> DownloadResult:
         """
         Fetches a remote URL and parses the headers.
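A standalone sketch of how one of these blocklist entries (the `url_preview_url_blacklist` config format) is matched: a pattern starting with `^` is treated as a regular expression, anything else as a shell-style glob, and every attribute in the entry must match for the URL to be blocked. The example entries are illustrative only:

```python
import fnmatch
import re
from urllib.parse import urlsplit

# Illustrative entries in the url_preview_url_blacklist config format.
blocklist_entries = [
    {"netloc": "*.example.com"},
    {"scheme": "http", "netloc": "^internal\\."},
]


def _attrib_matches(value: object, pattern: str) -> bool:
    if value is None:
        return False
    # Patterns starting with "^" are regular expressions, everything else is a glob.
    if pattern.startswith("^"):
        return re.match(pattern, str(value)) is not None
    return fnmatch.fnmatch(str(value), pattern)


def is_url_blocked(url: str) -> bool:
    url_tuple = urlsplit(url)
    for entry in blocklist_entries:
        # An entry blocks the URL only if *all* of its attributes match.
        if all(
            _attrib_matches(getattr(url_tuple, attrib), pattern)
            for attrib, pattern in entry.items()
        ):
            return True
    return False


assert is_url_blocked("https://media.example.com/cat.png")
assert is_url_blocked("http://internal.corp/secret")
assert not is_url_blocked("https://matrix.org/blog")
```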
@ -451,7 +472,7 @@ class UrlPreviewer:
|
|||
except DNSLookupError:
|
||||
# DNS lookup returned no results
|
||||
# Note: This will also be the case if one of the resolved IP
|
||||
# addresses is blacklisted
|
||||
# addresses is blocked.
|
||||
raise SynapseError(
|
||||
502,
|
||||
"DNS resolution failure during URL preview generation",
|
||||
|
@ -547,8 +568,16 @@ class UrlPreviewer:
|
|||
|
||||
Returns:
|
||||
A MediaInfo object describing the fetched content.
|
||||
|
||||
Raises:
|
||||
SynapseError if the URL is blocked.
|
||||
"""
|
||||
|
||||
if self._is_url_blocked(url):
|
||||
raise SynapseError(
|
||||
403, "URL blocked by url pattern blocklist entry", Codes.UNKNOWN
|
||||
)
|
||||
|
||||
# TODO: we should probably honour robots.txt... except in practice
|
||||
# we're most likely being explicitly triggered by a human rather than a
|
||||
# bot, so are we really a robot?
|
||||
|
@ -624,7 +653,7 @@ class UrlPreviewer:
|
|||
return
|
||||
|
||||
# The image URL from the HTML might be relative to the previewed page,
|
||||
# convert it to an URL which can be requested directly.
|
||||
# convert it to a URL which can be requested directly.
|
||||
url_parts = urlparse(image_url)
|
||||
if url_parts.scheme != "data":
|
||||
image_url = urljoin(media_info.uri, image_url)
|
||||
|
|
|
@ -134,7 +134,7 @@ from synapse.util.caches.descriptors import CachedFunction, cached as _cached
|
|||
from synapse.util.frozenutils import freeze
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.app.generic_worker import GenericWorkerSlavedStore
|
||||
from synapse.app.generic_worker import GenericWorkerStore
|
||||
from synapse.server import HomeServer
|
||||
|
||||
|
||||
|
@ -237,9 +237,7 @@ class ModuleApi:
|
|||
|
||||
# TODO: Fix this type hint once the types for the data stores have been ironed
|
||||
# out.
|
||||
self._store: Union[
|
||||
DataStore, "GenericWorkerSlavedStore"
|
||||
] = hs.get_datastores().main
|
||||
self._store: Union[DataStore, "GenericWorkerStore"] = hs.get_datastores().main
|
||||
self._storage_controllers = hs.get_storage_controllers()
|
||||
self._auth = hs.get_auth()
|
||||
self._auth_handler = auth_handler
|
||||
|
|
|
@ -148,7 +148,7 @@ class HttpPusher(Pusher):
|
|||
)
|
||||
|
||||
self.url = url
|
||||
self.http_client = hs.get_proxied_blacklisted_http_client()
|
||||
self.http_client = hs.get_proxied_blocklisted_http_client()
|
||||
self.data_minus_url = {}
|
||||
self.data_minus_url.update(self.data)
|
||||
del self.data_minus_url["url"]
|
||||
|
|
|
@@ -219,11 +219,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
             with outgoing_gauge.track_inprogress():
                 if instance_name == local_instance_name:
                     raise Exception("Trying to send HTTP request to self")
-                if instance_name in instance_map:
-                    host = instance_map[instance_name].host
-                    port = instance_map[instance_name].port
-                    tls = instance_map[instance_name].tls
-                else:
+                if instance_name not in instance_map:
                     raise Exception(
                         "Instance %r not in 'instance_map' config" % (instance_name,)
                     )
@@ -271,13 +267,11 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
                     "Unknown METHOD on %s replication endpoint" % (cls.NAME,)
                 )
 
-                # Here the protocol is hard coded to be http by default or https in case the replication
-                # port is set to have tls true.
-                scheme = "https" if tls else "http"
-                uri = "%s://%s:%s/_synapse/replication/%s/%s" % (
-                    scheme,
-                    host,
-                    port,
+                # Hard code a special scheme to show this only used for replication. The
+                # instance_name will be passed into the ReplicationEndpointFactory to
+                # determine connection details from the instance_map.
+                uri = "synapse-replication://%s/_synapse/replication/%s/%s" % (
+                    instance_name,
                     cls.NAME,
                     "/".join(url_args),
                 )
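A tiny sketch of the URI shape introduced above: the worker's instance name becomes the "authority" of a `synapse-replication://` URI and is resolved later by `ReplicationEndpointFactory`. The endpoint and instance names here are placeholders:

```python
def build_replication_uri(instance_name: str, endpoint_name: str, *url_args: str) -> str:
    # Mirrors the format string used in the hunk above.
    return "synapse-replication://%s/_synapse/replication/%s/%s" % (
        instance_name,
        endpoint_name,
        "/".join(url_args),
    )


assert (
    build_replication_uri("worker1", "fed_send_events", "abc123")
    == "synapse-replication://worker1/_synapse/replication/fed_send_events/abc123"
)
```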
@ -60,7 +60,7 @@ _WAIT_FOR_REPLICATION_TIMEOUT_SECONDS = 5
|
|||
class ReplicationDataHandler:
|
||||
"""Handles incoming stream updates from replication.
|
||||
|
||||
This instance notifies the slave data store about updates. Can be subclassed
|
||||
This instance notifies the data store about updates. Can be subclassed
|
||||
to handle updates in additional ways.
|
||||
"""
|
||||
|
||||
|
@ -91,7 +91,7 @@ class ReplicationDataHandler:
|
|||
) -> None:
|
||||
"""Called to handle a batch of replication data with a given stream token.
|
||||
|
||||
By default this just pokes the slave store. Can be overridden in subclasses to
|
||||
By default, this just pokes the data store. Can be overridden in subclasses to
|
||||
handle more.
|
||||
|
||||
Args:
|
||||
|
|
|
@@ -137,6 +137,35 @@ class DevicesRestServlet(RestServlet):
         devices = await self.device_handler.get_devices_by_user(target_user.to_string())
         return HTTPStatus.OK, {"devices": devices, "total": len(devices)}
 
+    async def on_POST(
+        self, request: SynapseRequest, user_id: str
+    ) -> Tuple[int, JsonDict]:
+        """Creates a new device for the user."""
+        await assert_requester_is_admin(self.auth, request)
+
+        target_user = UserID.from_string(user_id)
+        if not self.is_mine(target_user):
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Can only create devices for local users"
+            )
+
+        u = await self.store.get_user_by_id(target_user.to_string())
+        if u is None:
+            raise NotFoundError("Unknown user")
+
+        body = parse_json_object_from_request(request)
+        device_id = body.get("device_id")
+        if not device_id:
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Missing device_id")
+        if not isinstance(device_id, str):
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "device_id must be a string")
+
+        await self.device_handler.check_device_registered(
+            user_id=user_id, device_id=device_id
+        )
+
+        return HTTPStatus.CREATED, {}
+
 
 class DeleteDevicesRestServlet(RestServlet):
     """
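A hedged usage sketch for the new admin handler, assuming it is reachable at the existing DevicesRestServlet path `/_synapse/admin/v2/users/<user_id>/devices`; the homeserver URL, user ID and token below are placeholders:

```python
import requests

BASE_URL = "https://synapse.example.com"   # assumed homeserver URL
ADMIN_TOKEN = "syt_admin_token"            # assumed admin access token

resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v2/users/@alice:example.com/devices",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={"device_id": "MYDEVICE"},
    timeout=10,
)
# Per the handler above, a successful call returns 201 Created with an empty JSON body.
print(resp.status_code, resp.json())
```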
@ -35,6 +35,7 @@ from synapse.api.errors import (
|
|||
LoginError,
|
||||
NotApprovedError,
|
||||
SynapseError,
|
||||
UserDeactivatedError,
|
||||
)
|
||||
from synapse.api.ratelimiting import Ratelimiter
|
||||
from synapse.api.urls import CLIENT_API_PREFIX
|
||||
|
@ -84,14 +85,10 @@ class LoginRestServlet(RestServlet):
|
|||
def __init__(self, hs: "HomeServer"):
|
||||
super().__init__()
|
||||
self.hs = hs
|
||||
self._main_store = hs.get_datastores().main
|
||||
|
||||
# JWT configuration variables.
|
||||
self.jwt_enabled = hs.config.jwt.jwt_enabled
|
||||
self.jwt_secret = hs.config.jwt.jwt_secret
|
||||
self.jwt_subject_claim = hs.config.jwt.jwt_subject_claim
|
||||
self.jwt_algorithm = hs.config.jwt.jwt_algorithm
|
||||
self.jwt_issuer = hs.config.jwt.jwt_issuer
|
||||
self.jwt_audiences = hs.config.jwt.jwt_audiences
|
||||
|
||||
# SSO configuration.
|
||||
self.saml2_enabled = hs.config.saml2.saml2_enabled
|
||||
|
@ -117,13 +114,13 @@ class LoginRestServlet(RestServlet):
|
|||
|
||||
self._well_known_builder = WellKnownBuilder(hs)
|
||||
self._address_ratelimiter = Ratelimiter(
|
||||
store=hs.get_datastores().main,
|
||||
store=self._main_store,
|
||||
clock=hs.get_clock(),
|
||||
rate_hz=self.hs.config.ratelimiting.rc_login_address.per_second,
|
||||
burst_count=self.hs.config.ratelimiting.rc_login_address.burst_count,
|
||||
)
|
||||
self._account_ratelimiter = Ratelimiter(
|
||||
store=hs.get_datastores().main,
|
||||
store=self._main_store,
|
||||
clock=hs.get_clock(),
|
||||
rate_hz=self.hs.config.ratelimiting.rc_login_account.per_second,
|
||||
burst_count=self.hs.config.ratelimiting.rc_login_account.burst_count,
|
||||
|
@ -285,6 +282,9 @@ class LoginRestServlet(RestServlet):
|
|||
login_submission,
|
||||
ratelimit=appservice.is_rate_limited(),
|
||||
should_issue_refresh_token=should_issue_refresh_token,
|
||||
# The user represented by an appservice's configured sender_localpart
|
||||
# is not actually created in Synapse.
|
||||
should_check_deactivated=qualified_user_id != appservice.sender,
|
||||
)
|
||||
|
||||
async def _do_other_login(
|
||||
|
@ -331,6 +331,7 @@ class LoginRestServlet(RestServlet):
|
|||
auth_provider_id: Optional[str] = None,
|
||||
should_issue_refresh_token: bool = False,
|
||||
auth_provider_session_id: Optional[str] = None,
|
||||
should_check_deactivated: bool = True,
|
||||
) -> LoginResponse:
|
||||
"""Called when we've successfully authed the user and now need to
|
||||
actually login them in (e.g. create devices). This gets called on
|
||||
|
@ -350,6 +351,11 @@ class LoginRestServlet(RestServlet):
|
|||
should_issue_refresh_token: True if this login should issue
|
||||
a refresh token alongside the access token.
|
||||
auth_provider_session_id: The session ID got during login from the SSO IdP.
|
||||
should_check_deactivated: True if the user should be checked for
|
||||
deactivation status before logging in.
|
||||
|
||||
This exists purely for appservice's configured sender_localpart
|
||||
which doesn't have an associated user in the database.
|
||||
|
||||
Returns:
|
||||
Dictionary of account information after successful login.
|
||||
|
@ -369,6 +375,12 @@ class LoginRestServlet(RestServlet):
|
|||
)
|
||||
user_id = canonical_uid
|
||||
|
||||
# If the account has been deactivated, do not proceed with the login.
|
||||
if should_check_deactivated:
|
||||
deactivated = await self._main_store.get_user_deactivated_status(user_id)
|
||||
if deactivated:
|
||||
raise UserDeactivatedError("This account has been deactivated")
|
||||
|
||||
device_id = login_submission.get("device_id")
|
||||
|
||||
# If device_id is present, check that device_id is not longer than a reasonable 512 characters
|
||||
|
@ -427,7 +439,7 @@ class LoginRestServlet(RestServlet):
|
|||
self, login_submission: JsonDict, should_issue_refresh_token: bool = False
|
||||
) -> LoginResponse:
|
||||
"""
|
||||
Handle the final stage of SSO login.
|
||||
Handle token login.
|
||||
|
||||
Args:
|
||||
login_submission: The JSON request body.
|
||||
|
@ -452,72 +464,24 @@ class LoginRestServlet(RestServlet):
|
|||
async def _do_jwt_login(
|
||||
self, login_submission: JsonDict, should_issue_refresh_token: bool = False
|
||||
) -> LoginResponse:
|
||||
token = login_submission.get("token", None)
|
||||
if token is None:
|
||||
raise LoginError(
|
||||
403, "Token field for JWT is missing", errcode=Codes.FORBIDDEN
|
||||
)
|
||||
"""
|
||||
Handle the custom JWT login.
|
||||
|
||||
from authlib.jose import JsonWebToken, JWTClaims
|
||||
from authlib.jose.errors import BadSignatureError, InvalidClaimError, JoseError
|
||||
Args:
|
||||
login_submission: The JSON request body.
|
||||
should_issue_refresh_token: True if this login should issue
|
||||
a refresh token alongside the access token.
|
||||
|
||||
jwt = JsonWebToken([self.jwt_algorithm])
|
||||
claim_options = {}
|
||||
if self.jwt_issuer is not None:
|
||||
claim_options["iss"] = {"value": self.jwt_issuer, "essential": True}
|
||||
if self.jwt_audiences is not None:
|
||||
claim_options["aud"] = {"values": self.jwt_audiences, "essential": True}
|
||||
|
||||
try:
|
||||
claims = jwt.decode(
|
||||
token,
|
||||
key=self.jwt_secret,
|
||||
claims_cls=JWTClaims,
|
||||
claims_options=claim_options,
|
||||
)
|
||||
except BadSignatureError:
|
||||
# We handle this case separately to provide a better error message
|
||||
raise LoginError(
|
||||
403,
|
||||
"JWT validation failed: Signature verification failed",
|
||||
errcode=Codes.FORBIDDEN,
|
||||
)
|
||||
except JoseError as e:
|
||||
# A JWT error occurred, return some info back to the client.
|
||||
raise LoginError(
|
||||
403,
|
||||
"JWT validation failed: %s" % (str(e),),
|
||||
errcode=Codes.FORBIDDEN,
|
||||
)
|
||||
|
||||
try:
|
||||
claims.validate(leeway=120) # allows 2 min of clock skew
|
||||
|
||||
# Enforce the old behavior which is rolled out in productive
|
||||
# servers: if the JWT contains an 'aud' claim but none is
|
||||
# configured, the login attempt will fail
|
||||
if claims.get("aud") is not None:
|
||||
if self.jwt_audiences is None or len(self.jwt_audiences) == 0:
|
||||
raise InvalidClaimError("aud")
|
||||
except JoseError as e:
|
||||
raise LoginError(
|
||||
403,
|
||||
"JWT validation failed: %s" % (str(e),),
|
||||
errcode=Codes.FORBIDDEN,
|
||||
)
|
||||
|
||||
user = claims.get(self.jwt_subject_claim, None)
|
||||
if user is None:
|
||||
raise LoginError(403, "Invalid JWT", errcode=Codes.FORBIDDEN)
|
||||
|
||||
user_id = UserID(user, self.hs.hostname).to_string()
|
||||
result = await self._complete_login(
|
||||
Returns:
|
||||
The body of the JSON response.
|
||||
"""
|
||||
user_id = self.hs.get_jwt_handler().validate_login(login_submission)
|
||||
return await self._complete_login(
|
||||
user_id,
|
||||
login_submission,
|
||||
create_non_existent_users=True,
|
||||
should_issue_refresh_token=should_issue_refresh_token,
|
||||
)
|
||||
return result
|
||||
|
||||
|
||||
def _get_auth_flow_dict_for_idp(idp: SsoIdentityProvider) -> JsonDict:
|
||||
|
|
|
@ -12,13 +12,14 @@
|
|||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Tuple
|
||||
from http import HTTPStatus
|
||||
from typing import TYPE_CHECKING, Dict, List, Tuple
|
||||
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.http.server import HttpServer
|
||||
from synapse.http.servlet import RestServlet
|
||||
from synapse.http.servlet import RestServlet, parse_strings_from_args
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.types import JsonDict, UserID
|
||||
from synapse.types import JsonDict
|
||||
|
||||
from ._base import client_patterns
|
||||
|
||||
|
@ -30,11 +31,11 @@ logger = logging.getLogger(__name__)
|
|||
|
||||
class UserMutualRoomsServlet(RestServlet):
|
||||
"""
|
||||
GET /uk.half-shot.msc2666/user/mutual_rooms/{user_id} HTTP/1.1
|
||||
GET /uk.half-shot.msc2666/user/mutual_rooms?user_id={user_id} HTTP/1.1
|
||||
"""
|
||||
|
||||
PATTERNS = client_patterns(
|
||||
"/uk.half-shot.msc2666/user/mutual_rooms/(?P<user_id>[^/]*)",
|
||||
"/uk.half-shot.msc2666/user/mutual_rooms$",
|
||||
releases=(), # This is an unstable feature
|
||||
)
|
||||
|
||||
|
@@ -43,17 +44,35 @@ class UserMutualRoomsServlet(RestServlet):
         self.auth = hs.get_auth()
         self.store = hs.get_datastores().main
 
-    async def on_GET(
-        self, request: SynapseRequest, user_id: str
-    ) -> Tuple[int, JsonDict]:
-        UserID.from_string(user_id)
+    async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        # twisted.web.server.Request.args is incorrectly defined as Optional[Any]
+        args: Dict[bytes, List[bytes]] = request.args  # type: ignore
+
+        user_ids = parse_strings_from_args(args, "user_id", required=True)
+
+        if len(user_ids) > 1:
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Duplicate user_id query parameter",
+                errcode=Codes.INVALID_PARAM,
+            )
+
+        # We don't do batching, so a batch token is illegal by default
+        if b"batch_token" in args:
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Unknown batch_token",
+                errcode=Codes.INVALID_PARAM,
+            )
+
+        user_id = user_ids[0]
+
         requester = await self.auth.get_user_by_req(request)
         if user_id == requester.user.to_string():
             raise SynapseError(
-                code=400,
-                msg="You cannot request a list of shared rooms with yourself",
-                errcode=Codes.FORBIDDEN,
+                HTTPStatus.UNPROCESSABLE_ENTITY,
+                "You cannot request a list of shared rooms with yourself",
+                errcode=Codes.INVALID_PARAM,
             )
 
         rooms = await self.store.get_mutual_rooms_between_users(
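A hedged usage sketch of the reworked unstable endpoint: the peer is now supplied as a `?user_id=` query parameter rather than a path segment. The base URL, token and example response are placeholders:

```python
import requests

BASE_URL = "https://synapse.example.com"   # assumed homeserver URL
ACCESS_TOKEN = "syt_user_token"            # assumed user access token

resp = requests.get(
    f"{BASE_URL}/_matrix/client/unstable/uk.half-shot.msc2666/user/mutual_rooms",
    params={"user_id": "@bob:example.com"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
print(resp.json())  # e.g. {"joined": ["!room:example.com"]}
```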
@@ -91,7 +91,7 @@ class VersionsRestServlet(RestServlet):
                     # Implements additional endpoints as described in MSC2432
                     "org.matrix.msc2432": True,
                     # Implements additional endpoints as described in MSC2666
-                    "uk.half-shot.msc2666.mutual_rooms": True,
+                    "uk.half-shot.msc2666.query_mutual_rooms": True,
                     # Whether new rooms will be set to encrypted or not (based on presets).
                     "io.element.e2ee_forced.public": self.e2ee_forced_public,
                     "io.element.e2ee_forced.private": self.e2ee_forced_private,
@ -147,6 +147,7 @@ logger = logging.getLogger(__name__)
|
|||
if TYPE_CHECKING:
|
||||
from txredisapi import ConnectionHandler
|
||||
|
||||
from synapse.handlers.jwt import JwtHandler
|
||||
from synapse.handlers.oidc import OidcHandler
|
||||
from synapse.handlers.saml import SamlHandler
|
||||
|
||||
|
@ -453,15 +454,15 @@ class HomeServer(metaclass=abc.ABCMeta):
|
|||
return SimpleHttpClient(self, use_proxy=True)
|
||||
|
||||
@cache_in_self
|
||||
def get_proxied_blacklisted_http_client(self) -> SimpleHttpClient:
|
||||
def get_proxied_blocklisted_http_client(self) -> SimpleHttpClient:
|
||||
"""
|
||||
An HTTP client that uses configured HTTP(S) proxies and blacklists IPs
|
||||
based on the IP range blacklist/whitelist.
|
||||
An HTTP client that uses configured HTTP(S) proxies and blocks IPs
|
||||
based on the configured IP ranges.
|
||||
"""
|
||||
return SimpleHttpClient(
|
||||
self,
|
||||
ip_whitelist=self.config.server.ip_range_whitelist,
|
||||
ip_blacklist=self.config.server.ip_range_blacklist,
|
||||
ip_allowlist=self.config.server.ip_range_allowlist,
|
||||
ip_blocklist=self.config.server.ip_range_blocklist,
|
||||
use_proxy=True,
|
||||
)
|
||||
|
||||
|
@@ -533,6 +534,12 @@ class HomeServer(metaclass=abc.ABCMeta):
     def get_sso_handler(self) -> SsoHandler:
         return SsoHandler(self)
 
+    @cache_in_self
+    def get_jwt_handler(self) -> "JwtHandler":
+        from synapse.handlers.jwt import JwtHandler
+
+        return JwtHandler(self)
+
     @cache_in_self
     def get_sync_handler(self) -> SyncHandler:
         return SyncHandler(self)
@ -45,6 +45,7 @@ from synapse.events.snapshot import (
|
|||
UnpersistedEventContextBase,
|
||||
)
|
||||
from synapse.logging.context import ContextResourceUsage
|
||||
from synapse.logging.opentracing import tag_args, trace
|
||||
from synapse.replication.http.state import ReplicationUpdateCurrentStateRestServlet
|
||||
from synapse.state import v1, v2
|
||||
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
|
||||
|
@ -270,6 +271,8 @@ class StateHandler:
|
|||
state = await entry.get_state(self._state_storage_controller, StateFilter.all())
|
||||
return await self.store.get_joined_hosts(room_id, state, entry)
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def calculate_context_info(
|
||||
self,
|
||||
event: EventBase,
|
||||
|
@ -465,6 +468,7 @@ class StateHandler:
|
|||
|
||||
return await unpersisted_context.persist(event)
|
||||
|
||||
@trace
|
||||
@measure_func()
|
||||
async def resolve_state_groups_for_events(
|
||||
self, room_id: str, event_ids: Collection[str], await_full_state: bool = True
|
||||
|
|
|
@ -16,7 +16,6 @@ from typing import (
|
|||
TYPE_CHECKING,
|
||||
AbstractSet,
|
||||
Any,
|
||||
Awaitable,
|
||||
Callable,
|
||||
Collection,
|
||||
Dict,
|
||||
|
@ -67,6 +66,8 @@ class StateStorageController:
|
|||
"""
|
||||
self._partial_state_room_tracker.notify_un_partial_stated(room_id)
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_state_group_delta(
|
||||
self, state_group: int
|
||||
) -> Tuple[Optional[int], Optional[StateMap[str]]]:
|
||||
|
@ -84,6 +85,8 @@ class StateStorageController:
|
|||
state_group_delta = await self.stores.state.get_state_group_delta(state_group)
|
||||
return state_group_delta.prev_group, state_group_delta.delta_ids
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_state_groups_ids(
|
||||
self, _room_id: str, event_ids: Collection[str], await_full_state: bool = True
|
||||
) -> Dict[int, MutableStateMap[str]]:
|
||||
|
@ -114,6 +117,8 @@ class StateStorageController:
|
|||
|
||||
return group_to_state
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_state_ids_for_group(
|
||||
self, state_group: int, state_filter: Optional[StateFilter] = None
|
||||
) -> StateMap[str]:
|
||||
|
@ -130,6 +135,8 @@ class StateStorageController:
|
|||
|
||||
return group_to_state[state_group]
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_state_groups(
|
||||
self, room_id: str, event_ids: Collection[str]
|
||||
) -> Dict[int, List[EventBase]]:
|
||||
|
@ -165,9 +172,11 @@ class StateStorageController:
|
|||
for group, event_id_map in group_to_ids.items()
|
||||
}
|
||||
|
||||
def _get_state_groups_from_groups(
|
||||
@trace
|
||||
@tag_args
|
||||
async def _get_state_groups_from_groups(
|
||||
self, groups: List[int], state_filter: StateFilter
|
||||
) -> Awaitable[Dict[int, StateMap[str]]]:
|
||||
) -> Dict[int, StateMap[str]]:
|
||||
"""Returns the state groups for a given set of groups, filtering on
|
||||
types of state events.
|
||||
|
||||
|
@ -180,9 +189,12 @@ class StateStorageController:
|
|||
Dict of state group to state map.
|
||||
"""
|
||||
|
||||
return self.stores.state._get_state_groups_from_groups(groups, state_filter)
|
||||
return await self.stores.state._get_state_groups_from_groups(
|
||||
groups, state_filter
|
||||
)
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_state_for_events(
|
||||
self, event_ids: Collection[str], state_filter: Optional[StateFilter] = None
|
||||
) -> Dict[str, StateMap[EventBase]]:
|
||||
|
@ -280,6 +292,8 @@ class StateStorageController:
|
|||
|
||||
return {event: event_to_state[event] for event in event_ids}
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_state_for_event(
|
||||
self, event_id: str, state_filter: Optional[StateFilter] = None
|
||||
) -> StateMap[EventBase]:
|
||||
|
@ -303,6 +317,7 @@ class StateStorageController:
|
|||
return state_map[event_id]
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_state_ids_for_event(
|
||||
self,
|
||||
event_id: str,
|
||||
|
@ -333,9 +348,11 @@ class StateStorageController:
|
|||
)
|
||||
return state_map[event_id]
|
||||
|
||||
def get_state_for_groups(
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_state_for_groups(
|
||||
self, groups: Iterable[int], state_filter: Optional[StateFilter] = None
|
||||
) -> Awaitable[Dict[int, MutableStateMap[str]]]:
|
||||
) -> Dict[int, MutableStateMap[str]]:
|
||||
"""Gets the state at each of a list of state groups, optionally
|
||||
filtering by type/state_key
|
||||
|
||||
|
@ -347,7 +364,7 @@ class StateStorageController:
|
|||
Returns:
|
||||
Dict of state group to state map.
|
||||
"""
|
||||
return self.stores.state._get_state_for_groups(
|
||||
return await self.stores.state._get_state_for_groups(
|
||||
groups, state_filter or StateFilter.all()
|
||||
)
|
||||
|
||||
|
@ -402,6 +419,8 @@ class StateStorageController:
|
|||
event_id, room_id, prev_group, delta_ids, current_state_ids
|
||||
)
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
@cancellable
|
||||
async def get_current_state_ids(
|
||||
self,
|
||||
|
@ -442,6 +461,8 @@ class StateStorageController:
|
|||
room_id, on_invalidate=on_invalidate
|
||||
)
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_canonical_alias_for_room(self, room_id: str) -> Optional[str]:
|
||||
"""Get canonical alias for room, if any
|
||||
|
||||
|
@ -466,6 +487,8 @@ class StateStorageController:
|
|||
|
||||
return event.content.get("canonical_alias")
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_current_state_deltas(
|
||||
self, prev_stream_id: int, max_stream_id: int
|
||||
) -> Tuple[int, List[Dict[str, Any]]]:
|
||||
|
@ -500,6 +523,7 @@ class StateStorageController:
|
|||
)
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_current_state(
|
||||
self, room_id: str, state_filter: Optional[StateFilter] = None
|
||||
) -> StateMap[EventBase]:
|
||||
|
@ -516,6 +540,8 @@ class StateStorageController:
|
|||
|
||||
return state_map
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_current_state_event(
|
||||
self, room_id: str, event_type: str, state_key: str
|
||||
) -> Optional[EventBase]:
|
||||
|
@ -527,6 +553,8 @@ class StateStorageController:
|
|||
)
|
||||
return state_map.get(key)
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_current_hosts_in_room(self, room_id: str) -> AbstractSet[str]:
|
||||
"""Get current hosts in room based on current state.
|
||||
|
||||
|
@ -538,6 +566,8 @@ class StateStorageController:
|
|||
|
||||
return await self.stores.main.get_current_hosts_in_room(room_id)
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_current_hosts_in_room_ordered(self, room_id: str) -> List[str]:
|
||||
"""Get current hosts in room based on current state.
|
||||
|
||||
|
@ -553,6 +583,8 @@ class StateStorageController:
|
|||
|
||||
return await self.stores.main.get_current_hosts_in_room_ordered(room_id)
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_current_hosts_in_room_or_partial_state_approximation(
|
||||
self, room_id: str
|
||||
) -> Collection[str]:
|
||||
|
@ -582,6 +614,8 @@ class StateStorageController:
|
|||
|
||||
return hosts
|
||||
|
||||
@trace
|
||||
@tag_args
|
||||
async def get_users_in_room_with_profiles(
|
||||
self, room_id: str
|
||||
) -> Mapping[str, ProfileInfo]:
|
||||
|
|
|
@@ -13,8 +13,7 @@
 # limitations under the License.
 
 import logging
-from collections import Counter
-from typing import TYPE_CHECKING, Collection, List, Tuple
+from typing import TYPE_CHECKING, Collection, Counter, List, Tuple
 
 from synapse.api.errors import SynapseError
 from synapse.storage.database import LoggingTransaction
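The consolidated import leans on the fact that `typing.Counter` covers both uses: it can be subscripted as a generic annotation and, when called, it constructs an ordinary `collections.Counter`. A small self-contained sketch (the helper and its inputs are made up for illustration):

```python
from typing import Counter, List


def count_event_types(event_types: List[str]) -> Counter[str]:
    # typing.Counter doubles as a generic annotation and, when called,
    # builds a plain collections.Counter instance.
    counts: Counter[str] = Counter()
    for event_type in event_types:
        counts[event_type] += 1
    return counts


print(count_event_types(["m.room.message", "m.room.member", "m.room.message"]))
```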
@@ -565,9 +565,8 @@ class DatabasePool:
         # A set of tables that are not safe to use native upserts in.
         self._unsafe_to_upsert_tables = set(UNIQUE_INDEX_BACKGROUND_UPDATES.keys())
 
-        # We add the user_directory_search table to the blacklist on SQLite
-        # because the existing search table does not have an index, making it
-        # unsafe to use native upserts.
+        # The user_directory_search table is unsafe to use native upserts
+        # on SQLite because the existing search table does not have an index.
         if isinstance(self.engine, Sqlite3Engine):
             self._unsafe_to_upsert_tables.add("user_directory_search")
 
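The reworded comment describes a real SQLite restriction: a native upsert needs a unique index (or primary key) on its conflict target, and the SQLite search table has none. A standalone sketch of that failure mode, assuming an SQLite build new enough for upsert syntax (3.24+); the table names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE with_index (user_id TEXT PRIMARY KEY, value TEXT)")
conn.execute("CREATE TABLE without_index (user_id TEXT, value TEXT)")

# A native upsert works when the conflict target has a unique index.
conn.execute(
    "INSERT INTO with_index (user_id, value) VALUES (?, ?) "
    "ON CONFLICT (user_id) DO UPDATE SET value = excluded.value",
    ("@alice:example.org", "first"),
)

try:
    conn.execute(
        "INSERT INTO without_index (user_id, value) VALUES (?, ?) "
        "ON CONFLICT (user_id) DO UPDATE SET value = excluded.value",
        ("@alice:example.org", "first"),
    )
except sqlite3.OperationalError as exc:
    # SQLite rejects the ON CONFLICT clause: there is no unique index to target.
    print(f"native upsert refused: {exc}")
```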
@@ -85,13 +85,10 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
                 writers=hs.config.worker.writers.account_data,
             )
         else:
+            # Multiple writers are not supported for SQLite.
+            #
             # We shouldn't be running in worker mode with SQLite, but its useful
             # to support it for unit tests.
-            #
-            # If this process is the writer than we need to use
-            # `StreamIdGenerator`, otherwise we use `SlavedIdTracker` which gets
-            # updated over replication. (Multiple writers are not supported for
-            # SQLite).
             self._account_data_id_gen = StreamIdGenerator(
                 db_conn,
                 hs.get_replication_notifier(),
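The trimmed comment referred to `SlavedIdTracker`, which the surrounding code no longer uses; on this path the process always builds a `StreamIdGenerator`. For context, a toy model of the writer/non-writer split the old comment was describing — not Synapse's actual implementation:

```python
import itertools
import threading


class SingleWriterIdGenerator:
    """Toy single-writer stream ID generator: only the writer hands out IDs."""

    def __init__(self, current: int) -> None:
        self._lock = threading.Lock()
        self._counter = itertools.count(current + 1)
        self._current = current

    def get_next(self) -> int:
        with self._lock:
            self._current = next(self._counter)
            return self._current

    def get_current_token(self) -> int:
        return self._current


class ReplicationIdTracker:
    """Toy read-only tracker: non-writer workers just follow the stream."""

    def __init__(self, current: int) -> None:
        self._current = current

    def advance(self, new_id: int) -> None:
        self._current = max(self._current, new_id)

    def get_current_token(self) -> int:
        return self._current


writer = SingleWriterIdGenerator(current=7)
reader = ReplicationIdTracker(current=7)
reader.advance(writer.get_next())  # replication delivers the new position
print(writer.get_current_token(), reader.get_current_token())  # 8 8
```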
@@ -274,11 +274,11 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
     async def invalidate_cache_and_stream(
         self, cache_name: str, keys: Tuple[Any, ...]
     ) -> None:
-        """Invalidates the cache and adds it to the cache stream so slaves
+        """Invalidates the cache and adds it to the cache stream so other workers
         will know to invalidate their caches.
 
-        This should only be used to invalidate caches where slaves won't
-        otherwise know from other replication streams that the cache should
+        This should only be used to invalidate caches where other workers won't
+        otherwise have known from other replication streams that the cache should
         be invalidated.
         """
         cache_func = getattr(self, cache_name, None)
@@ -297,11 +297,11 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         cache_func: CachedFunction,
         keys: Tuple[Any, ...],
     ) -> None:
-        """Invalidates the cache and adds it to the cache stream so slaves
+        """Invalidates the cache and adds it to the cache stream so other workers
         will know to invalidate their caches.
 
-        This should only be used to invalidate caches where slaves won't
-        otherwise know from other replication streams that the cache should
+        This should only be used to invalidate caches where other workers won't
+        otherwise have known from other replication streams that the cache should
         be invalidated.
         """
         txn.call_after(cache_func.invalidate, keys)
@@ -310,7 +310,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
     def _invalidate_all_cache_and_stream(
         self, txn: LoggingTransaction, cache_func: CachedFunction
     ) -> None:
-        """Invalidates the entire cache and adds it to the cache stream so slaves
+        """Invalidates the entire cache and adds it to the cache stream so other workers
         will know to invalidate their caches.
         """
 
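These docstring changes only swap "slaves" for "other workers"; the mechanism stays the same: invalidate the local cache, then append an entry to the cache stream so every other worker evicts its copy as well. A toy model of that pattern (the cache names and keys are invented, and this is not Synapse's replication code):

```python
from typing import Any, Dict, List, Tuple


class CacheInvalidationStream:
    """Toy cache-invalidation stream: the writer publishes (cache, key) pairs,
    and every other worker replays them to drop stale entries."""

    def __init__(self) -> None:
        self._entries: List[Tuple[str, Tuple[Any, ...]]] = []

    def publish(self, cache_name: str, keys: Tuple[Any, ...]) -> None:
        self._entries.append((cache_name, keys))

    def replay(self, caches: Dict[str, Dict[Tuple[Any, ...], Any]]) -> None:
        # A worker consuming the stream evicts the named keys from its caches.
        for cache_name, keys in self._entries:
            caches.get(cache_name, {}).pop(keys, None)


stream = CacheInvalidationStream()

# Writer side: evict locally, then publish so other workers hear about it.
writer_caches: Dict[str, Dict[Tuple[Any, ...], Any]] = {
    "get_account_data": {("@alice:example.org",): {"m.push_rules": {}}}
}
writer_caches["get_account_data"].pop(("@alice:example.org",), None)
stream.publish("get_account_data", ("@alice:example.org",))

# Reader side: another worker replays the stream and drops its stale entry.
reader_caches = {"get_account_data": {("@alice:example.org",): {"m.push_rules": {}}}}
stream.replay(reader_caches)
print(reader_caches)  # {'get_account_data': {}}
```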
@@ -105,8 +105,6 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
             is_writer=hs.config.worker.worker_app is None,
         )
 
-        # Type-ignore: _device_list_id_gen is mixed in from either DataStore (as a
-        # StreamIdGenerator) or SlavedDataStore (as a SlavedIdTracker).
         device_list_max = self._device_list_id_gen.get_current_token()
         device_list_prefill, min_device_list_id = self.db_pool.get_cache_dict(
             db_conn,
@@ -213,13 +213,10 @@ class EventsWorkerStore(SQLBaseStore):
                 writers=hs.config.worker.writers.events,
             )
         else:
+            # Multiple writers are not supported for SQLite.
+            #
             # We shouldn't be running in worker mode with SQLite, but its useful
             # to support it for unit tests.
-            #
-            # If this process is the writer than we need to use
-            # `StreamIdGenerator`, otherwise we use `SlavedIdTracker` which gets
-            # updated over replication. (Multiple writers are not supported for
-            # SQLite).
             self._stream_id_gen = StreamIdGenerator(
                 db_conn,
                 hs.get_replication_notifier(),
@@ -1976,12 +1973,6 @@
 
         return rows, to_token, True
 
-    async def is_event_after(self, event_id1: str, event_id2: str) -> bool:
-        """Returns True if event_id1 is after event_id2 in the stream"""
-        to_1, so_1 = await self.get_event_ordering(event_id1)
-        to_2, so_2 = await self.get_event_ordering(event_id2)
-        return (to_1, so_1) > (to_2, so_2)
-
     @cached(max_entries=5000)
     async def get_event_ordering(self, event_id: str) -> Tuple[int, int]:
         res = await self.db_pool.simple_select_one(
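The deleted `is_event_after` helper compared events by their `(topological_ordering, stream_ordering)` pairs, so "after" simply means a lexicographically larger tuple. A minimal standalone version of that comparison, with made-up event IDs and orderings:

```python
from typing import Dict, Tuple

# Hypothetical orderings keyed by event ID: (topological_ordering, stream_ordering).
EVENT_ORDERINGS: Dict[str, Tuple[int, int]] = {
    "$event_a": (10, 105),
    "$event_b": (10, 112),
    "$event_c": (11, 98),
}


def is_event_after(event_id1: str, event_id2: str) -> bool:
    """Lexicographic tuple comparison: topological ordering wins, and the
    stream ordering breaks ties, mirroring the removed helper's logic."""
    return EVENT_ORDERINGS[event_id1] > EVENT_ORDERINGS[event_id2]


print(is_event_after("$event_b", "$event_a"))  # True: same topological, later stream
print(is_event_after("$event_c", "$event_b"))  # True: later topological ordering
```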
Some files were not shown because too many files have changed in this diff.