Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes

anoa/log_11772
Erik Johnston 2021-09-28 11:55:53 +01:00
commit ba3a888a05
188 changed files with 1823 additions and 749 deletions

.gitignore (vendored)

@@ -40,6 +40,7 @@ __pycache__/
 /.coverage*
 /.mypy_cache/
 /.tox
+/.tox-pg-container
 /build/
 /coverage.*
 /dist/

changelog.d/10690.bugfix (new file)

@@ -0,0 +1 @@
+Fix a long-standing bug that caused an `AssertionError` when purging history in certain rooms. Contributed by @Kokokokoka.

changelog.d/10782.bugfix (new file)

@@ -0,0 +1 @@
+Fix a long-standing bug which caused deactivated users that were later reactivated to be missing from the user directory.

@@ -0,0 +1 @@
+Improve oEmbed previews by processing the author name, photo, and video information.

changelog.d/10820.misc (new file)

@@ -0,0 +1 @@
+Fix a long-standing bug where an `m.room.message` event containing a null byte would cause an internal server error.

changelog.d/10827.bugfix (new file)

@@ -0,0 +1 @@
+Fix error in deprecated `/initialSync` endpoint when using the undocumented `from` and `to` parameters.

changelog.d/10833.misc (new file)

@@ -0,0 +1 @@
+Extend the ModuleApi to let plug-ins check whether an ID is local and to access IP + User Agent data.

changelog.d/10865.doc (new file)

@@ -0,0 +1 @@
+Add developer documentation about experimental configuration flags.

@@ -0,0 +1 @@
+Speed up responding with large JSON objects to requests.

changelog.d/10873.bugfix (new file)

@@ -0,0 +1 @@
+Fix a bug introduced in Synapse 1.37.0 which caused `knock` events which we sent to remote servers to be incorrectly stored in the local database.

changelog.d/10875.bugfix (new file)

@@ -0,0 +1 @@
+Fix invalidating one-time key count cache after claiming keys. Contributed by Tulir at Beeper.

changelog.d/10880.misc (new file)

@@ -0,0 +1 @@
+Break down Grafana's cache expiry time series based on reason for eviction; see #10829.

changelog.d/10881.bugfix (new file)

@@ -0,0 +1 @@
+Fix application service users being subject to MAU blocking if MAU had been reached, even if configured not to be blocked.

changelog.d/10883.misc (new file)

@@ -0,0 +1 @@
+Clean up some of the federation event authentication code for clarity.

changelog.d/10884.misc (new file)

@@ -0,0 +1 @@
+Clean up some of the federation event authentication code for clarity.

changelog.d/10885.misc (new file)

@@ -0,0 +1 @@
+Use direct references to config flags.

changelog.d/10887.bugfix (new file)

@@ -0,0 +1 @@
+Allow the `.` and `~` characters when creating registration tokens as per the change to [MSC3231](https://github.com/matrix-org/matrix-doc/pull/3231).

changelog.d/10889.misc (new file)

@@ -0,0 +1 @@
+Clean up some unnecessary parentheses in places around the codebase.

changelog.d/10891.misc (new file)

@@ -0,0 +1 @@
+Improve type hinting in the user directory code.

changelog.d/10893.misc (new file)

@@ -0,0 +1 @@
+Use direct references to config flags.

changelog.d/10896.misc (new file)

@@ -0,0 +1 @@
+Clean up some of the federation event authentication code for clarity.

changelog.d/10897.misc (new file)

@@ -0,0 +1 @@
+Use direct references to config flags.

@@ -0,0 +1 @@
+Add a `user_may_create_room_with_invites` spam checker callback to allow modules to allow or deny a room creation request based on the invites and/or 3PID invites it includes.

changelog.d/10901.misc (new file)

@@ -0,0 +1 @@
+Clean up some of the federation event authentication code for clarity.

@@ -0,0 +1 @@
+Speed up responding with large JSON objects to requests.

changelog.d/10906.misc (new file)

@@ -0,0 +1 @@
+Update development testing script `test_postgresql.sh` to use a supported Python version and make re-runs quicker.

changelog.d/10907.bugfix (new file)

@@ -0,0 +1 @@
+Fix a long-standing bug which could cause events pulled over federation to be incorrectly rejected.

changelog.d/10911.bugfix (new file)

@@ -0,0 +1 @@
+Avoid storing URL cache files in storage providers. Server admins may safely delete the `url_cache/` and `url_cache_thumbnails/` directories from any configured storage providers to reclaim space.

changelog.d/10913.bugfix (new file)

@@ -0,0 +1 @@
+Fix race conditions when creating media store and config directories.

changelog.d/10917.misc (new file)

@@ -0,0 +1 @@
+Document and summarize changes in schema versions `61` to `64`.

changelog.d/10925.misc (new file)

@@ -0,0 +1 @@
+Update release script to sign the newly created git tags.

@@ -6785,7 +6785,7 @@
 "expr": "rate(synapse_util_caches_cache:evicted_size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
 "format": "time_series",
 "intervalFactor": 1,
-"legendFormat": "{{name}} {{job}}-{{index}}",
+"legendFormat": "{{name}} ({{reason}}) {{job}}-{{index}}",
 "refId": "A"
 }
 ],
@@ -10888,5 +10888,5 @@
 "timezone": "",
 "title": "Synapse",
 "uid": "000000012",
-"version": 99
+"version": 100
 }

docker/Dockerfile-pgtests

@@ -1,6 +1,6 @@
 # Use the Sytest image that comes with a lot of the build dependencies
 # pre-installed
-FROM matrixdotorg/sytest:latest
+FROM matrixdotorg/sytest:bionic

 # The Sytest image doesn't come with python, so install that
 RUN apt-get update && apt-get -qq install -y python3 python3-dev python3-pip
@@ -8,5 +8,23 @@ RUN apt-get update && apt-get -qq install -y python3 python3-dev python3-pip
 # We need tox to run the tests in run_pg_tests.sh
 RUN python3 -m pip install tox
-ADD run_pg_tests.sh /pg_tests.sh
-ENTRYPOINT /pg_tests.sh
+
+# Initialise the db
+RUN su -c '/usr/lib/postgresql/10/bin/initdb -D /var/lib/postgresql/data -E "UTF-8" --lc-collate="C.UTF-8" --lc-ctype="C.UTF-8" --username=postgres' postgres
+
+# Add a user with our UID and GID so that files get created on the host owned
+# by us, not root.
+ARG UID
+ARG GID
+RUN groupadd --gid $GID user
+RUN useradd --uid $UID --gid $GID --groups sudo --no-create-home user
+
+# Ensure we can start postgres by sudo-ing as the postgres user.
+RUN apt-get update && apt-get -qq install -y sudo
+RUN echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
+
+ADD run_pg_tests.sh /run_pg_tests.sh
+
+# Use the "exec form" of ENTRYPOINT (https://docs.docker.com/engine/reference/builder/#entrypoint)
+# so that we can `docker run` this container and pass arguments to pg_tests.sh
+ENTRYPOINT ["/run_pg_tests.sh"]
+
+USER user

docker/run_pg_tests.sh

@@ -10,11 +10,10 @@ set -e
 # Set PGUSER so Synapse's tests know what user to connect to the database with
 export PGUSER=postgres

-# Initialise & start the database
-su -c '/usr/lib/postgresql/9.6/bin/initdb -D /var/lib/postgresql/data -E "UTF-8" --lc-collate="en_US.UTF-8" --lc-ctype="en_US.UTF-8" --username=postgres' postgres
-su -c '/usr/lib/postgresql/9.6/bin/pg_ctl -w -D /var/lib/postgresql/data start' postgres
+# Start the database
+sudo -u postgres /usr/lib/postgresql/10/bin/pg_ctl -w -D /var/lib/postgresql/data start

 # Run the tests
 cd /src
 export TRIAL_FLAGS="-j 4"
-tox --workdir=/tmp -e py35-postgres
+tox --workdir=./.tox-pg-container -e py36-postgres "$@"

@@ -74,6 +74,7 @@
 - [Testing]()
 - [OpenTracing](opentracing.md)
 - [Database Schemas](development/database_schema.md)
+- [Experimental features](development/experimental_features.md)
 - [Synapse Architecture]()
 - [Log Contexts](log_contexts.md)
 - [Replication](replication.md)

@@ -170,6 +170,53 @@ To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`:
 SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
 ```
+
+### Running tests under PostgreSQL
+
+Invoking `trial` as above will use an in-memory SQLite database. This is great for
+quick development and testing. However, we recommend using a PostgreSQL database
+in production (and indeed, we have some code paths specific to each database).
+This means that we need to run our unit tests against PostgreSQL too. Our CI does
+this automatically for pull requests and release candidates, but it's sometimes
+useful to reproduce this locally.
+
+To do so, [configure Postgres](../postgres.md) and run `trial` with the
+following environment variables matching your configuration:
+
+- `SYNAPSE_POSTGRES` to anything nonempty
+- `SYNAPSE_POSTGRES_HOST`
+- `SYNAPSE_POSTGRES_USER`
+- `SYNAPSE_POSTGRES_PASSWORD`
+
+For example:
+
+```shell
+export SYNAPSE_POSTGRES=1
+export SYNAPSE_POSTGRES_HOST=localhost
+export SYNAPSE_POSTGRES_USER=postgres
+export SYNAPSE_POSTGRES_PASSWORD=mydevenvpassword
+trial
+```
+
+#### Prebuilt container
+
+Since configuring PostgreSQL can be fiddly, we can make use of a pre-made
+Docker container to set up PostgreSQL and run our tests for us. To do so, run
+
+```shell
+scripts-dev/test_postgresql.sh
+```
+
+Any extra arguments to the script will be passed to `tox` and then to `trial`,
+so we can run a specific test in this container with e.g.
+
+```shell
+scripts-dev/test_postgresql.sh tests.replication.test_sharded_event_persister.EventPersisterShardTestCase
+```
+
+The container creates a folder in your Synapse checkout called
+`.tox-pg-container` and uses this as a tox environment. The output of any
+`trial` runs goes into `_trial_temp` in your synapse source directory — the same
+as running `trial` directly on your host machine.
+
 ## Run the integration tests ([Sytest](https://github.com/matrix-org/sytest)).

docs/development/experimental_features.md (new file)

@@ -0,0 +1,37 @@
+# Implementing experimental features in Synapse
+
+It can be desirable to implement "experimental" features which are disabled by
+default and must be explicitly enabled via the Synapse configuration. This is
+applicable for features which:
+
+* Are unstable in the Matrix spec (e.g. those defined by an MSC that has not yet been merged).
+* Developers are not confident in their use by general Synapse administrators/users
+  (e.g. a feature is incomplete, buggy, performs poorly, or needs further testing).
+
+Note that this only really applies to features which are expected to be desirable
+to a broad audience. The [module infrastructure](../modules/index.md) should
+instead be investigated for non-standard features.
+
+Guarding experimental features behind configuration flags should help with some
+of the following scenarios:
+
+* Ensure that clients do not assume that unstable features exist (failing
+  gracefully if they do not).
+* Unstable features do not become de-facto standards and can be removed
+  aggressively (since only those who have opted-in will be affected).
+* Ease finding the implementation of unstable features in Synapse (for future
+  removal or stabilization).
+* Ease testing a feature (or removal of feature) due to enabling/disabling without
+  code changes. It also becomes possible to ask for wider testing, if desired.
+
+Experimental configuration flags should be disabled by default (requiring Synapse
+administrators to explicitly opt-in), although there are situations where it makes
+sense (from a product point-of-view) to enable features by default. This is
+expected and not an issue.
+
+It is not a requirement for experimental features to be behind a configuration flag,
+but one should be used if unsure.
+
+New experimental configuration flags should be added under the `experimental`
+configuration key (see the `synapse.config.experimental` file) and either explain
+(briefly) what is being enabled, or include the MSC number.
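For concreteness, here is a minimal sketch of what such a flag can look like, following the convention just described; the class shape mirrors `synapse.config.experimental`, but the flag name `msc9999_enabled` and the MSC it refers to are hypothetical.

```python
# A minimal sketch of an experimental configuration flag (hypothetical flag
# name), read from the `experimental` configuration key and off by default.
from synapse.config._base import Config


class ExperimentalConfig(Config):
    """Config section for enabling experimental features."""

    section = "experimental"

    def read_config(self, config, **kwargs):
        experimental = config.get("experimental_features") or {}

        # MSC9999 (hypothetical): a brief note on what this enables.
        self.msc9999_enabled: bool = experimental.get("msc9999_enabled", False)
```

An administrator would then opt in by setting `experimental_features: { msc9999_enabled: true }` in their homeserver configuration.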

@@ -38,6 +38,35 @@ async def user_may_create_room(user: str) -> bool
 Called when processing a room creation request. The module must return a `bool` indicating
 whether the given user (represented by their Matrix user ID) is allowed to create a room.
+
+### `user_may_create_room_with_invites`
+
+```python
+async def user_may_create_room_with_invites(
+    user: str,
+    invites: List[str],
+    threepid_invites: List[Dict[str, str]],
+) -> bool
+```
+
+Called when processing a room creation request (right after `user_may_create_room`).
+The module is given the Matrix user ID of the user trying to create a room, as well as a
+list of Matrix users to invite and a list of third-party identifiers (3PID, e.g. email
+addresses) to invite.
+
+A Matrix user to invite is represented by their Matrix user ID, and a 3PID to invite is
+represented by a dict that includes the 3PID medium (e.g. "email") through its `medium`
+key and its address (e.g. "alice@example.com") through its `address` key.
+
+See [the Matrix specification](https://matrix.org/docs/spec/appendices#pid-types) for more
+information regarding third-party identifiers.
+
+If no invites and/or 3PID invites were specified in the room creation request, the
+corresponding list(s) will be empty.
+
+**Note**: This callback is not called when a room is cloned (e.g. during a room upgrade)
+since no invites are sent when cloning a room. To cover this case, modules also need to
+implement `user_may_create_room`.
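For illustration, a minimal sketch of a module using this callback follows; the class name and its policy (capping up-front invites at ten) are hypothetical, with only the callback signature above and the `register_spam_checker_callbacks` registration method assumed from the module API.

```python
from typing import Dict, List


class ExampleRoomCreationChecker:
    """Hypothetical module limiting how many invites a new room may include."""

    def __init__(self, config, api):
        self._api = api
        # Register only the callback this module implements.
        api.register_spam_checker_callbacks(
            user_may_create_room_with_invites=self.user_may_create_room_with_invites,
        )

    async def user_may_create_room_with_invites(
        self,
        user: str,
        invites: List[str],
        threepid_invites: List[Dict[str, str]],
    ) -> bool:
        # Reject creation requests that invite more than ten users up front.
        if len(invites) + len(threepid_invites) > 10:
            return False

        # Only allow email 3PIDs to be invited at creation time.
        return all(invite.get("medium") == "email" for invite in threepid_invites)
```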
 ### `user_may_create_room_alias`

 ```python

@@ -85,6 +85,13 @@ process, for example:
 dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
 ```
+
+# Upgrading to v1.44.0
+
+## The URL preview cache is no longer mirrored to storage providers
+
+The `url_cache/` and `url_cache_thumbnails/` directories in the media store are
+no longer mirrored to storage providers. These two directories can be safely
+deleted from any configured storage providers to reclaim space.
+
 # Upgrading to v1.43.0

 ## The spaces summary APIs can now be handled by workers
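As a concrete illustration of the note above, the sketch below deletes the two directories from a storage provider's backing directory; the provider path is a hypothetical example, not something the upgrade notes prescribe.

```python
# Remove the no-longer-mirrored URL cache directories from a storage
# provider's backing directory. The path below is a hypothetical example.
import shutil
from pathlib import Path

provider_root = Path("/mnt/media-backup")  # hypothetical storage provider directory

for subdir in ("url_cache", "url_cache_thumbnails"):
    target = provider_root / subdir
    if target.is_dir():
        shutil.rmtree(target)
```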

@@ -85,9 +85,11 @@ files =
 tests/handlers/test_room_summary.py,
 tests/handlers/test_send_email.py,
 tests/handlers/test_sync.py,
+tests/handlers/test_user_directory.py,
 tests/rest/client/test_login.py,
 tests/rest/client/test_auth.py,
 tests/storage/test_state.py,
+tests/storage/test_user_directory.py,
 tests/util/test_itertools.py,
 tests/util/test_stream_change_cache.py

@@ -276,7 +276,7 @@ def tag(gh_token: Optional[str]):
     if click.confirm("Edit text?", default=False):
         changes = click.edit(changes, require_save=False)

-    repo.create_tag(tag_name, message=changes)
+    repo.create_tag(tag_name, message=changes, sign=True)

     if not click.confirm("Push tag to GitHub?", default=True):
         print("")

scripts-dev/test_postgresql.sh (new executable file)

@@ -0,0 +1,19 @@
+#!/usr/bin/env bash
+
+# This script builds the Docker image to run the PostgreSQL tests, and then runs
+# the tests. It uses a dedicated tox environment so that we don't have to
+# rebuild it each time.
+
+# Command line arguments to this script are forwarded to "tox" and then to "trial".
+
+set -e
+
+# Build, and tag
+docker build docker/ \
+  --build-arg "UID=$(id -u)" \
+  --build-arg "GID=$(id -g)" \
+  -f docker/Dockerfile-pgtests \
+  -t synapsepgtests
+
+# Run, mounting the current directory into /src
+docker run --rm -it -v "$(pwd):/src" -v synapse-pg-test-tox:/tox synapsepgtests "$@"

@@ -81,7 +81,7 @@ class AuthBlocking:
             # We never block the server from doing actions on behalf of
             # users.
             return
-        elif requester.app_service and not self._track_appservice_user_ips:
+        if requester.app_service and not self._track_appservice_user_ips:
             # If we're authenticated as an appservice then we only block
             # auth if `track_appservice_user_ips` is set, as that option
             # implicitly means that application services are part of MAU

@@ -39,12 +39,12 @@ class ConsentURIBuilder:
         Args:
             hs_config (synapse.config.homeserver.HomeServerConfig):
         """
-        if hs_config.form_secret is None:
+        if hs_config.key.form_secret is None:
             raise ConfigError("form_secret not set in config")
         if hs_config.server.public_baseurl is None:
             raise ConfigError("public_baseurl not set in config")

-        self._hmac_secret = hs_config.form_secret.encode("utf-8")
+        self._hmac_secret = hs_config.key.form_secret.encode("utf-8")
         self._public_baseurl = hs_config.server.public_baseurl

     def build_user_consent_uri(self, user_id):

@@ -88,8 +88,8 @@ def start_worker_reactor(appname, config, run_command=reactor.run):
         appname,
         soft_file_limit=config.soft_file_limit,
         gc_thresholds=config.gc_thresholds,
-        pid_file=config.worker_pid_file,
-        daemonize=config.worker_daemonize,
+        pid_file=config.worker.worker_pid_file,
+        daemonize=config.worker.worker_daemonize,
         print_pidfile=config.print_pidfile,
         logger=logger,
         run_command=run_command,
@@ -424,12 +424,14 @@ def setup_sentry(hs):
         hs (synapse.server.HomeServer)
     """
-    if not hs.config.sentry_enabled:
+    if not hs.config.metrics.sentry_enabled:
         return

     import sentry_sdk

-    sentry_sdk.init(dsn=hs.config.sentry_dsn, release=get_version_string(synapse))
+    sentry_sdk.init(
+        dsn=hs.config.metrics.sentry_dsn, release=get_version_string(synapse)
+    )

     # We set some default tags that give some context to this instance
     with sentry_sdk.configure_scope() as scope:

@@ -186,13 +186,13 @@ def start(config_options):
     config.worker.worker_app = "synapse.app.admin_cmd"

     if (
-        not config.worker_daemonize
-        and not config.worker_log_file
-        and not config.worker_log_config
+        not config.worker.worker_daemonize
+        and not config.worker.worker_log_file
+        and not config.worker.worker_log_config
     ):
         # Since we're meant to be run as a "command" let's not redirect stdio
         # unless we've actually set log config.
-        config.no_redirect_stdio = True
+        config.logging.no_redirect_stdio = True

     # Explicitly disable background processes
     config.update_user_directory = False

@@ -140,7 +140,7 @@ class KeyUploadServlet(RestServlet):
         self.auth = hs.get_auth()
         self.store = hs.get_datastore()
         self.http_client = hs.get_simple_http_client()
-        self.main_uri = hs.config.worker_main_http_uri
+        self.main_uri = hs.config.worker.worker_main_http_uri

     async def on_POST(self, request: Request, device_id: Optional[str]):
         requester = await self.auth.get_user_by_req(request, allow_guest=True)
@@ -321,7 +321,7 @@ class GenericWorkerServer(HomeServer):
             elif name == "federation":
                 resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
             elif name == "media":
-                if self.config.can_load_media_repo:
+                if self.config.media.can_load_media_repo:
                     media_repo = self.get_media_repository_resource()

                     # We need to serve the admin servlets for media on the
@@ -384,7 +384,7 @@ class GenericWorkerServer(HomeServer):
         logger.info("Synapse worker now listening on port %d", port)

     def start_listening(self):
-        for listener in self.config.worker_listeners:
+        for listener in self.config.worker.worker_listeners:
             if listener.type == "http":
                 self._listen_http(listener)
             elif listener.type == "manhole":
@@ -395,7 +395,7 @@ class GenericWorkerServer(HomeServer):
                     manhole_globals={"hs": self},
                 )
             elif listener.type == "metrics":
-                if not self.config.enable_metrics:
+                if not self.config.metrics.enable_metrics:
                     logger.warning(
                         "Metrics listener configured, but "
                         "enable_metrics is not True!"
@@ -488,7 +488,7 @@ def start(config_options):
     register_start(_base.start, hs)

     # redirect stdio to the logs, if configured.
-    if not hs.config.no_redirect_stdio:
+    if not hs.config.logging.no_redirect_stdio:
         redirect_stdio_to_logs()

     _base.start_worker_reactor("synapse-generic-worker", config)

@@ -195,7 +195,7 @@ class SynapseHomeServer(HomeServer):
                 }
             )

-            if self.config.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
+            if self.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
                 from synapse.rest.synapse.client.password_reset import (
                     PasswordResetSubmitTokenResource,
                 )
@@ -234,7 +234,7 @@ class SynapseHomeServer(HomeServer):
             )

         if name in ["media", "federation", "client"]:
-            if self.config.enable_media_repo:
+            if self.config.media.enable_media_repo:
                 media_repo = self.get_media_repository_resource()
                 resources.update(
                     {MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo}
@@ -269,7 +269,7 @@ class SynapseHomeServer(HomeServer):
                 # https://twistedmatrix.com/trac/ticket/7678
                 resources[WEB_CLIENT_PREFIX] = File(webclient_loc)

-        if name == "metrics" and self.config.enable_metrics:
+        if name == "metrics" and self.config.metrics.enable_metrics:
             resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)

         if name == "replication":
@@ -278,7 +278,7 @@ class SynapseHomeServer(HomeServer):
         return resources

     def start_listening(self):
-        if self.config.redis_enabled:
+        if self.config.redis.redis_enabled:
             # If redis is enabled we connect via the replication command handler
             # in the same way as the workers (since we're effectively a client
             # rather than a server).
@@ -305,7 +305,7 @@ class SynapseHomeServer(HomeServer):
                 for s in services:
                     reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)
             elif listener.type == "metrics":
-                if not self.config.enable_metrics:
+                if not self.config.metrics.enable_metrics:
                     logger.warning(
                         "Metrics listener configured, but "
                         "enable_metrics is not True!"
@@ -366,7 +366,7 @@ def setup(config_options):

     async def start():
         # Load the OIDC provider metadatas, if OIDC is enabled.
-        if hs.config.oidc_enabled:
+        if hs.config.oidc.oidc_enabled:
             oidc = hs.get_oidc_handler()
             # Loading the provider metadata also ensures the provider config is valid.
             await oidc.load_metadata()
@@ -455,7 +455,7 @@ def main():
     hs = setup(sys.argv[1:])

     # redirect stdio to the logs, if configured.
-    if not hs.config.no_redirect_stdio:
+    if not hs.config.logging.no_redirect_stdio:
         redirect_stdio_to_logs()

     run(hs)

@@ -131,10 +131,12 @@ async def phone_stats_home(hs, stats, stats_process=_stats_process):
     log_level = synapse_logger.getEffectiveLevel()
     stats["log_level"] = logging.getLevelName(log_level)

-    logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
+    logger.info(
+        "Reporting stats to %s: %s" % (hs.config.metrics.report_stats_endpoint, stats)
+    )
     try:
         await hs.get_proxied_http_client().put_json(
-            hs.config.report_stats_endpoint, stats
+            hs.config.metrics.report_stats_endpoint, stats
         )
     except Exception as e:
         logger.warning("Error reporting stats: %s", e)
@@ -188,7 +190,7 @@ def start_phone_stats_home(hs):
         clock.looping_call(generate_monthly_active_users, 5 * 60 * 1000)
     # End of monthly active user settings

-    if hs.config.report_stats:
+    if hs.config.metrics.report_stats:
         logger.info("Scheduling stats reporting for 3 hour intervals")
         clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000, hs, stats)

@@ -200,11 +200,7 @@ class Config:
     @classmethod
     def ensure_directory(cls, dir_path):
         dir_path = cls.abspath(dir_path)
-        try:
-            os.makedirs(dir_path)
-        except OSError as e:
-            if e.errno != errno.EEXIST:
-                raise
+        os.makedirs(dir_path, exist_ok=True)
         if not os.path.isdir(dir_path):
             raise ConfigError("%s is not a directory" % (dir_path,))
         return dir_path
@@ -693,8 +689,7 @@ class RootConfig:
             open_private_ports=config_args.open_private_ports,
         )

-        if not path_exists(config_dir_path):
-            os.makedirs(config_dir_path)
+        os.makedirs(config_dir_path, exist_ok=True)
         with open(config_path, "w") as config_file:
             config_file.write(config_str)
             config_file.write("\n\n# vim:ft=yaml")

@@ -13,6 +13,7 @@
 # limitations under the License.

 from os import path
+from typing import Optional

 from synapse.config import ConfigError
@@ -78,8 +79,8 @@ class ConsentConfig(Config):
     def __init__(self, *args):
         super().__init__(*args)

-        self.user_consent_version = None
-        self.user_consent_template_dir = None
+        self.user_consent_version: Optional[str] = None
+        self.user_consent_template_dir: Optional[str] = None
         self.user_consent_server_notice_content = None
         self.user_consent_server_notice_to_guests = False
         self.block_events_without_consent_error = None
@@ -94,7 +95,9 @@ class ConsentConfig(Config):
             return
         self.user_consent_version = str(consent_config["version"])
         self.user_consent_template_dir = self.abspath(consent_config["template_dir"])
-        if not path.isdir(self.user_consent_template_dir):
+        if not isinstance(self.user_consent_template_dir, str) or not path.isdir(
+            self.user_consent_template_dir
+        ):
             raise ConfigError(
                 "Could not find template directory '%s'"
                 % (self.user_consent_template_dir,)

@@ -322,7 +322,9 @@ def setup_logging(
     """
     log_config_path = (
-        config.worker_log_config if use_worker_options else config.log_config
+        config.worker.worker_log_config
+        if use_worker_options
+        else config.logging.log_config
    )

     # Perform one-time logging configuration.
@@ -1447,7 +1447,7 @@ def read_gc_thresholds(thresholds):
         return None
     try:
         assert len(thresholds) == 3
-        return (int(thresholds[0]), int(thresholds[1]), int(thresholds[2]))
+        return int(thresholds[0]), int(thresholds[1]), int(thresholds[2])
     except Exception:
         raise ConfigError(
             "Value of `gc_threshold` must be a list of three integers if set"

@@ -74,8 +74,8 @@ class ServerContextFactory(ContextFactory):
         context.set_options(
             SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3 | SSL.OP_NO_TLSv1 | SSL.OP_NO_TLSv1_1
         )
-        context.use_certificate_chain_file(config.tls_certificate_file)
-        context.use_privatekey(config.tls_private_key)
+        context.use_certificate_chain_file(config.tls.tls_certificate_file)
+        context.use_privatekey(config.tls.tls_private_key)

         # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
         context.set_cipher_list(

@@ -80,9 +80,7 @@ class EventContext:
         (type, state_key) -> event_id

-        FIXME: what is this for an outlier? it seems ill-defined. It seems like
-        it could be either {}, or the state we were given by the remote
-        server, depending on $THINGS
+        For an outlier, this is {}

         Note that this is a private attribute: it should be accessed via
         ``get_current_state_ids``. _AsyncEventContext impl calculates this
@@ -96,7 +94,7 @@ class EventContext:
         (type, state_key) -> event_id

-        FIXME: again, what is this for an outlier?
+        For an outlier, this is {}

         As with _current_state_ids, this is a private attribute. It should be
         accessed via get_prev_state_ids.
@@ -130,6 +128,14 @@ class EventContext:
             delta_ids=delta_ids,
         )

+    @staticmethod
+    def for_outlier():
+        """Return an EventContext instance suitable for persisting an outlier event"""
+        return EventContext(
+            current_state_ids={},
+            prev_state_ids={},
+        )
+
     async def serialize(self, event: EventBase, store: "DataStore") -> dict:
         """Converts self to a type that can be serialized as JSON, and then
         deserialized by `deserialize`

@@ -46,6 +46,9 @@ CHECK_EVENT_FOR_SPAM_CALLBACK = Callable[
 ]
 USER_MAY_INVITE_CALLBACK = Callable[[str, str, str], Awaitable[bool]]
 USER_MAY_CREATE_ROOM_CALLBACK = Callable[[str], Awaitable[bool]]
+USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK = Callable[
+    [str, List[str], List[Dict[str, str]]], Awaitable[bool]
+]
 USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[[str, RoomAlias], Awaitable[bool]]
 USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[[str, str], Awaitable[bool]]
 CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[Dict[str, str]], Awaitable[bool]]
@@ -78,7 +81,7 @@ def load_legacy_spam_checkers(hs: "synapse.server.HomeServer"):
     """
     spam_checkers: List[Any] = []
     api = hs.get_module_api()
-    for module, config in hs.config.spam_checkers:
+    for module, config in hs.config.spamchecker.spam_checkers:
         # Older spam checkers don't accept the `api` argument, so we
         # try and detect support.
         spam_args = inspect.getfullargspec(module)
@@ -164,6 +167,9 @@ class SpamChecker:
         self._check_event_for_spam_callbacks: List[CHECK_EVENT_FOR_SPAM_CALLBACK] = []
         self._user_may_invite_callbacks: List[USER_MAY_INVITE_CALLBACK] = []
         self._user_may_create_room_callbacks: List[USER_MAY_CREATE_ROOM_CALLBACK] = []
+        self._user_may_create_room_with_invites_callbacks: List[
+            USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
+        ] = []
         self._user_may_create_room_alias_callbacks: List[
             USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
         ] = []
@@ -183,6 +189,9 @@ class SpamChecker:
         check_event_for_spam: Optional[CHECK_EVENT_FOR_SPAM_CALLBACK] = None,
         user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
         user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
+        user_may_create_room_with_invites: Optional[
+            USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
+        ] = None,
         user_may_create_room_alias: Optional[
             USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
         ] = None,
@@ -203,6 +212,11 @@ class SpamChecker:
         if user_may_create_room is not None:
             self._user_may_create_room_callbacks.append(user_may_create_room)

+        if user_may_create_room_with_invites is not None:
+            self._user_may_create_room_with_invites_callbacks.append(
+                user_may_create_room_with_invites,
+            )
+
         if user_may_create_room_alias is not None:
             self._user_may_create_room_alias_callbacks.append(
                 user_may_create_room_alias,
@@ -283,6 +297,34 @@ class SpamChecker:
         return True

+    async def user_may_create_room_with_invites(
+        self,
+        userid: str,
+        invites: List[str],
+        threepid_invites: List[Dict[str, str]],
+    ) -> bool:
+        """Checks if a given user may create a room with invites
+
+        If this method returns false, the creation request will be rejected.
+
+        Args:
+            userid: The ID of the user attempting to create a room
+            invites: The IDs of the Matrix users to be invited if the room creation is
+                allowed.
+            threepid_invites: The threepids to be invited if the room creation is allowed,
+                as a dict including a "medium" key indicating the threepid's medium (e.g.
+                "email") and an "address" key indicating the threepid's address (e.g.
+                "alice@example.com")
+
+        Returns:
+            True if the user may create the room, otherwise False
+        """
+        for callback in self._user_may_create_room_with_invites_callbacks:
+            if await callback(userid, invites, threepid_invites) is False:
+                return False
+
+        return True
+
     async def user_may_create_room_alias(
         self, userid: str, room_alias: RoomAlias
     ) -> bool:

@@ -42,10 +42,10 @@ def load_legacy_third_party_event_rules(hs: "HomeServer"):
     """Wrapper that loads a third party event rules module configured using the old
     configuration, and registers the hooks they implement.
     """
-    if hs.config.third_party_event_rules is None:
+    if hs.config.thirdpartyrules.third_party_event_rules is None:
         return

-    module, config = hs.config.third_party_event_rules
+    module, config = hs.config.thirdpartyrules.third_party_event_rules
     api = hs.get_module_api()
     third_party_rules = module(config=config, module_api=api)

@@ -501,8 +501,6 @@ class FederationClient(FederationBase):
             destination, auth_chain, outlier=True, room_version=room_version
         )

-        signed_auth.sort(key=lambda e: e.depth)
-
         return signed_auth

     def _is_unknown_endpoint(

@@ -560,7 +560,7 @@ class PerDestinationQueue:

         assert len(edus) <= limit, "get_device_updates_by_remote returned too many EDUs"

-        return (edus, now_stream_id)
+        return edus, now_stream_id

     async def _get_to_device_message_edus(self, limit: int) -> Tuple[List[Edu], int]:
         last_device_stream_id = self._last_device_stream_id
@@ -593,7 +593,7 @@ class PerDestinationQueue:
             stream_id,
         )

-        return (edus, stream_id)
+        return edus, stream_id

     def _start_catching_up(self) -> None:
         """

@@ -49,7 +49,9 @@ class Authenticator:
         self.keyring = hs.get_keyring()
         self.server_name = hs.hostname
         self.store = hs.get_datastore()
-        self.federation_domain_whitelist = hs.config.federation_domain_whitelist
+        self.federation_domain_whitelist = (
+            hs.config.federation.federation_domain_whitelist
+        )
         self.notifier = hs.get_notifier()

         self.replication_client = None

@@ -847,16 +847,16 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
             UserID.from_string(requester_user_id)
         )
         if not is_admin:
-            if not self.hs.config.enable_group_creation:
+            if not self.hs.config.groups.enable_group_creation:
                 raise SynapseError(
                     403, "Only a server admin can create groups on this server"
                 )
             localpart = group_id_obj.localpart
-            if not localpart.startswith(self.hs.config.group_creation_prefix):
+            if not localpart.startswith(self.hs.config.groups.group_creation_prefix):
                 raise SynapseError(
                     400,
                     "Can only create groups with prefix %r on this server"
-                    % (self.hs.config.group_creation_prefix,),
+                    % (self.hs.config.groups.group_creation_prefix,),
                 )

         profile = content.get("profile", {})

@@ -47,7 +47,7 @@ class AccountValidityHandler:
         self.send_email_handler = self.hs.get_send_email_handler()
         self.clock = self.hs.get_clock()

-        self._app_name = self.hs.config.email_app_name
+        self._app_name = self.hs.config.email.email_app_name

         self._account_validity_enabled = (
             hs.config.account_validity.account_validity_enabled

@@ -52,7 +52,7 @@ class ApplicationServicesHandler:
         self.scheduler = hs.get_application_service_scheduler()
         self.started_scheduler = False
         self.clock = hs.get_clock()
-        self.notify_appservices = hs.config.notify_appservices
+        self.notify_appservices = hs.config.appservice.notify_appservices
         self.event_sources = hs.get_event_sources()

         self.current_max = 0

@@ -210,15 +210,15 @@ class AuthHandler(BaseHandler):
         self.password_providers = [
             PasswordProvider.load(module, config, account_handler)
-            for module, config in hs.config.password_providers
+            for module, config in hs.config.authproviders.password_providers
         ]

         logger.info("Extra password_providers: %s", self.password_providers)

         self.hs = hs  # FIXME better possibility to access registrationHandler later?
         self.macaroon_gen = hs.get_macaroon_generator()
-        self._password_enabled = hs.config.password_enabled
-        self._password_localdb_enabled = hs.config.password_localdb_enabled
+        self._password_enabled = hs.config.auth.password_enabled
+        self._password_localdb_enabled = hs.config.auth.password_localdb_enabled

         # start out by assuming PASSWORD is enabled; we will remove it later if not.
         login_types = set()
@@ -250,7 +250,7 @@ class AuthHandler(BaseHandler):
         )

         # The number of seconds to keep a UI auth session active.
-        self._ui_auth_session_timeout = hs.config.ui_auth_session_timeout
+        self._ui_auth_session_timeout = hs.config.auth.ui_auth_session_timeout

         # Ratelimitier for failed /login attempts
         self._failed_login_attempts_ratelimiter = Ratelimiter(
@@ -277,23 +277,25 @@ class AuthHandler(BaseHandler):
         # after the SSO completes and before redirecting them back to their client.
         # It notifies the user they are about to give access to their matrix account
         # to the client.
-        self._sso_redirect_confirm_template = hs.config.sso_redirect_confirm_template
+        self._sso_redirect_confirm_template = (
+            hs.config.sso.sso_redirect_confirm_template
+        )

         # The following template is shown during user interactive authentication
         # in the fallback auth scenario. It notifies the user that they are
         # authenticating for an operation to occur on their account.
-        self._sso_auth_confirm_template = hs.config.sso_auth_confirm_template
+        self._sso_auth_confirm_template = hs.config.sso.sso_auth_confirm_template

         # The following template is shown during the SSO authentication process if
         # the account is deactivated.
         self._sso_account_deactivated_template = (
-            hs.config.sso_account_deactivated_template
+            hs.config.sso.sso_account_deactivated_template
         )

         self._server_name = hs.config.server.server_name

         # cast to tuple for use with str.startswith
-        self._whitelisted_sso_clients = tuple(hs.config.sso_client_whitelist)
+        self._whitelisted_sso_clients = tuple(hs.config.sso.sso_client_whitelist)

         # A mapping of user ID to extra attributes to include in the login
         # response.
@@ -739,19 +741,19 @@ class AuthHandler(BaseHandler):
         return canonical_id

     def _get_params_recaptcha(self) -> dict:
-        return {"public_key": self.hs.config.recaptcha_public_key}
+        return {"public_key": self.hs.config.captcha.recaptcha_public_key}

     def _get_params_terms(self) -> dict:
         return {
             "policies": {
                 "privacy_policy": {
-                    "version": self.hs.config.user_consent_version,
+                    "version": self.hs.config.consent.user_consent_version,
                     "en": {
-                        "name": self.hs.config.user_consent_policy_name,
+                        "name": self.hs.config.consent.user_consent_policy_name,
                         "url": "%s_matrix/consent?v=%s"
                         % (
                             self.hs.config.server.public_baseurl,
-                            self.hs.config.user_consent_version,
+                            self.hs.config.consent.user_consent_version,
                         ),
                     },
                 }
@@ -1016,7 +1018,7 @@ class AuthHandler(BaseHandler):
     def can_change_password(self) -> bool:
         """Get whether users on this server are allowed to change or set a password.

-        Both `config.password_enabled` and `config.password_localdb_enabled` must be true.
+        Both `config.auth.password_enabled` and `config.auth.password_localdb_enabled` must be true.

         Note that any account (even SSO accounts) are allowed to add passwords if the above
         is true.
@@ -1486,7 +1488,7 @@ class AuthHandler(BaseHandler):
         pw = unicodedata.normalize("NFKC", password)

         return bcrypt.hashpw(
-            pw.encode("utf8") + self.hs.config.password_pepper.encode("utf8"),
+            pw.encode("utf8") + self.hs.config.auth.password_pepper.encode("utf8"),
             bcrypt.gensalt(self.bcrypt_rounds),
         ).decode("ascii")
@@ -1510,7 +1512,7 @@ class AuthHandler(BaseHandler):
         pw = unicodedata.normalize("NFKC", password)

         return bcrypt.checkpw(
-            pw.encode("utf8") + self.hs.config.password_pepper.encode("utf8"),
+            pw.encode("utf8") + self.hs.config.auth.password_pepper.encode("utf8"),
             checked_hash,
         )
@@ -1802,7 +1804,7 @@ class MacaroonGenerator:
         macaroon = pymacaroons.Macaroon(
             location=self.hs.config.server.server_name,
             identifier="key",
-            key=self.hs.config.macaroon_secret_key,
+            key=self.hs.config.key.macaroon_secret_key,
         )
         macaroon.add_first_party_caveat("gen = 1")
         macaroon.add_first_party_caveat("user_id = %s" % (user_id,))

@@ -65,10 +65,10 @@ class CasHandler:
         self._auth_handler = hs.get_auth_handler()
         self._registration_handler = hs.get_registration_handler()

-        self._cas_server_url = hs.config.cas_server_url
-        self._cas_service_url = hs.config.cas_service_url
-        self._cas_displayname_attribute = hs.config.cas_displayname_attribute
-        self._cas_required_attributes = hs.config.cas_required_attributes
+        self._cas_server_url = hs.config.cas.cas_server_url
+        self._cas_service_url = hs.config.cas.cas_service_url
+        self._cas_displayname_attribute = hs.config.cas.cas_displayname_attribute
+        self._cas_required_attributes = hs.config.cas.cas_required_attributes

         self._http_client = hs.get_proxied_http_client()

@@ -255,13 +255,16 @@ class DeactivateAccountHandler(BaseHandler):
         Args:
             user_id: ID of user to be re-activated
         """
-        # Add the user to the directory, if necessary.
         user = UserID.from_string(user_id)
-        profile = await self.store.get_profileinfo(user.localpart)
-        await self.user_directory_handler.handle_local_profile_change(user_id, profile)

         # Ensure the user is not marked as erased.
         await self.store.mark_user_not_erased(user_id)

         # Mark the user as active.
         await self.store.set_user_deactivated_status(user_id, False)
+
+        # Add the user to the directory, if necessary. Note that
+        # this must be done after the user is re-activated, because
+        # deactivated users are excluded from the user directory.
+        profile = await self.store.get_profileinfo(user.localpart)
+        await self.user_directory_handler.handle_local_profile_change(user_id, profile)

@@ -48,7 +48,7 @@ class DirectoryHandler(BaseHandler):
         self.event_creation_handler = hs.get_event_creation_handler()
         self.store = hs.get_datastore()
         self.config = hs.config
-        self.enable_room_list_search = hs.config.enable_room_list_search
+        self.enable_room_list_search = hs.config.roomdirectory.enable_room_list_search
         self.require_membership = hs.config.require_membership_for_aliases
         self.third_party_event_rules = hs.get_third_party_event_rules()
@@ -143,7 +143,7 @@ class DirectoryHandler(BaseHandler):
         ):
             raise AuthError(403, "This user is not permitted to create this alias")

-        if not self.config.is_alias_creation_allowed(
+        if not self.config.roomdirectory.is_alias_creation_allowed(
             user_id, room_id, room_alias_str
         ):
             # Lets just return a generic message, as there may be all sorts of
@@ -459,7 +459,7 @@ class DirectoryHandler(BaseHandler):
         if canonical_alias:
             room_aliases.append(canonical_alias)

-        if not self.config.is_publishing_room_allowed(
+        if not self.config.roomdirectory.is_publishing_room_allowed(
             user_id, room_id, room_aliases
         ):
             # Lets just return a generic message, as there may be all sorts of

@@ -91,7 +91,7 @@ class FederationHandler(BaseHandler):
         self.spam_checker = hs.get_spam_checker()
         self.event_creation_handler = hs.get_event_creation_handler()
         self._event_auth_handler = hs.get_event_auth_handler()
-        self._server_notices_mxid = hs.config.server_notices_mxid
+        self._server_notices_mxid = hs.config.servernotices.server_notices_mxid
         self.config = hs.config
         self.http_client = hs.get_proxied_blacklisted_http_client()
         self._replication = hs.get_replication_data_handler()
@@ -593,6 +593,13 @@ class FederationHandler(BaseHandler):
             target_hosts, room_id, knockee, Membership.KNOCK, content, params=params
         )

+        # Mark the knock as an outlier as we don't yet have the state at this point in
+        # the DAG.
+        event.internal_metadata.outlier = True
+
+        # ... but tell /sync to send it to clients anyway.
+        event.internal_metadata.out_of_band_membership = True
+
         # Record the room ID and its version so that we have a record of the room
         await self._maybe_store_room_on_outlier_membership(
             room_id=event.room_id, room_version=event_format_version
@@ -617,7 +624,7 @@ class FederationHandler(BaseHandler):
         # in the invitee's sync stream. It is stripped out for all other local users.
         event.unsigned["knock_room_state"] = stripped_room_state["knock_state_events"]

-        context = await self.state_handler.compute_event_context(event)
+        context = EventContext.for_outlier()
         stream_id = await self._federation_event_handler.persist_events_and_notify(
             event.room_id, [(event, context)]
         )
@@ -807,7 +814,7 @@ class FederationHandler(BaseHandler):
             )
         )

-        context = await self.state_handler.compute_event_context(event)
+        context = EventContext.for_outlier()
         await self._federation_event_handler.persist_events_and_notify(
             event.room_id, [(event, context)]
         )
@@ -836,7 +843,7 @@ class FederationHandler(BaseHandler):

         await self.federation_client.send_leave(host_list, event)

-        context = await self.state_handler.compute_event_context(event)
+        context = EventContext.for_outlier()
         stream_id = await self._federation_event_handler.persist_events_and_notify(
             event.room_id, [(event, context)]
         )
@@ -1108,8 +1115,7 @@ class FederationHandler(BaseHandler):
         events_to_context = {}
         for e in itertools.chain(auth_events, state):
             e.internal_metadata.outlier = True
-            ctx = await self.state_handler.compute_event_context(e)
-            events_to_context[e.event_id] = ctx
+            events_to_context[e.event_id] = EventContext.for_outlier()

         event_map = {
             e.event_id: e for e in itertools.chain(auth_events, state, [event])
@@ -1363,7 +1369,7 @@ class FederationHandler(BaseHandler):
             builder=builder
         )
         EventValidator().validate_new(event, self.config)
-        return (event, context)
+        return event, context

     async def _check_signature(self, event: EventBase, context: EventContext) -> None:
         """

View File

@@ -27,11 +27,8 @@ from typing import (
     Tuple,
 )
-import attr
 from prometheus_client import Counter
-from twisted.internet import defer
 from synapse import event_auth
 from synapse.api.constants import (
     EventContentFields,
@@ -54,11 +51,7 @@ from synapse.event_auth import auth_types_for_event
 from synapse.events import EventBase
 from synapse.events.snapshot import EventContext
 from synapse.federation.federation_client import InvalidResponseError
-from synapse.logging.context import (
-    make_deferred_yieldable,
-    nested_logging_context,
-    run_in_background,
-)
+from synapse.logging.context import nested_logging_context, run_in_background
 from synapse.logging.utils import log_function
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
@@ -75,7 +68,11 @@ from synapse.types import (
     UserID,
     get_domain_from_id,
 )
-from synapse.util.async_helpers import Linearizer, concurrently_execute
+from synapse.util.async_helpers import (
+    Linearizer,
+    concurrently_execute,
+    yieldable_gather_results,
+)
 from synapse.util.iterutils import batch_iter
 from synapse.util.retryutils import NotRetryingDestination
 from synapse.util.stringutils import shortstr
@@ -92,30 +89,6 @@ soft_failed_event_counter = Counter(
 )
-@attr.s(slots=True, frozen=True, auto_attribs=True)
-class _NewEventInfo:
-    """Holds information about a received event, ready for passing to _auth_and_persist_events
-    Attributes:
-        event: the received event
-        claimed_auth_event_map: a map of (type, state_key) => event for the event's
-            claimed auth_events.
-            This can include events which have not yet been persisted, in the case that
-            we are backfilling a batch of events.
-            Note: May be incomplete: if we were unable to find all of the claimed auth
-            events. Also, treat the contents with caution: the events might also have
-            been rejected, might not yet have been authorized themselves, or they might
-            be in the wrong room.
-    """
-    event: EventBase
-    claimed_auth_event_map: StateMap[EventBase]
 class FederationEventHandler:
     """Handles events that originated from federation.
@@ -1107,7 +1080,7 @@ class FederationEventHandler:
         room_version = await self._store.get_room_version(room_id)
-        event_map: Dict[str, EventBase] = {}
+        events: List[EventBase] = []
         async def get_event(event_id: str) -> None:
             with nested_logging_context(event_id):
@@ -1125,8 +1098,7 @@ class FederationEventHandler:
                         event_id,
                     )
                     return
-                event_map[event.event_id] = event
+                events.append(event)
             except Exception as e:
                 logger.warning(
@@ -1137,11 +1109,29 @@ class FederationEventHandler:
         )
         await concurrently_execute(get_event, event_ids, 5)
-        logger.info("Fetched %i events of %i requested", len(event_map), len(event_ids))
+        logger.info("Fetched %i events of %i requested", len(events), len(event_ids))
+        await self._auth_and_persist_fetched_events(destination, room_id, events)
+    async def _auth_and_persist_fetched_events(
+        self, origin: str, room_id: str, events: Iterable[EventBase]
+    ) -> None:
+        """Persist the events fetched by _get_events_and_persist or _get_remote_auth_chain_for_event
+        The events to be persisted must be outliers.
+        We first sort the events to make sure that we process each event's auth_events
+        before the event itself, and then auth and persist them.
+        Notifies about the events where appropriate.
+        Params:
+            origin: where the events came from
+            room_id: the room that the events are meant to be in (though this has
+                not yet been checked)
+            events: the events that have been fetched
+        """
+        event_map = {event.event_id: event for event in events}
+        # we now need to auth the events in an order which ensures that each event's
+        # auth_events are authed before the event itself.
+        #
         # XXX: it might be possible to kick this process off in parallel with fetching
         # the events.
         while event_map:
@@ -1168,22 +1158,18 @@ class FederationEventHandler:
                 "Persisting %i of %i remaining events", len(roots), len(event_map)
             )
-            await self._auth_and_persist_fetched_events(destination, room_id, roots)
+            await self._auth_and_persist_fetched_events_inner(origin, room_id, roots)
             for ev in roots:
                 del event_map[ev.event_id]
-    async def _auth_and_persist_fetched_events(
+    async def _auth_and_persist_fetched_events_inner(
         self, origin: str, room_id: str, fetched_events: Collection[EventBase]
     ) -> None:
-        """Persist the events fetched by _get_events_and_persist.
-        The events should not depend on one another, e.g. this should be used to persist
-        a bunch of outliers, but not a chunk of individual events that depend
-        on each other for state calculations.
-        We also assume that all of the auth events for all of the events have already
-        been persisted.
+        """Helper for _auth_and_persist_fetched_events
+        Persists a batch of events where we have (theoretically) already persisted all
+        of their auth events.
         Notifies about the events where appropriate.
@@ -1191,7 +1177,7 @@ class FederationEventHandler:
             origin: where the events came from
             room_id: the room that the events are meant to be in (though this has
                 not yet been checked)
-            event_id: map from event_id -> event for the fetched events
+            fetched_events: the events to persist
         """
         # get all the auth events for all the events in this batch. By now, they should
         # have been persisted.
@@ -1203,47 +1189,36 @@ class FederationEventHandler:
             allow_rejected=True,
         )
-        event_infos = []
-        for event in fetched_events:
-            auth = {}
-            for auth_event_id in event.auth_event_ids():
-                ae = persisted_events.get(auth_event_id)
-                if ae:
-                    auth[(ae.type, ae.state_key)] = ae
-                else:
-                    logger.info("Missing auth event %s", auth_event_id)
-            event_infos.append(_NewEventInfo(event, auth))
-        if not event_infos:
-            return
-        async def prep(ev_info: _NewEventInfo) -> EventContext:
-            event = ev_info.event
+        async def prep(event: EventBase) -> Optional[Tuple[EventBase, EventContext]]:
             with nested_logging_context(suffix=event.event_id):
-                res = await self._state_handler.compute_event_context(event)
-                res = await self._check_event_auth(
+                auth = {}
+                for auth_event_id in event.auth_event_ids():
+                    ae = persisted_events.get(auth_event_id)
+                    if not ae:
+                        logger.warning(
+                            "Event %s relies on auth_event %s, which could not be found.",
+                            event,
+                            auth_event_id,
+                        )
+                        # the fact we can't find the auth event doesn't mean it doesn't
+                        # exist, which means it is premature to reject `event`. Instead we
+                        # just ignore it for now.
+                        return None
+                    auth[(ae.type, ae.state_key)] = ae
+                context = EventContext.for_outlier()
+                context = await self._check_event_auth(
                     origin,
                     event,
-                    res,
-                    claimed_auth_event_map=ev_info.claimed_auth_event_map,
+                    context,
+                    claimed_auth_event_map=auth,
                 )
-            return res
+            return event, context
-        contexts = await make_deferred_yieldable(
-            defer.gatherResults(
-                [run_in_background(prep, ev_info) for ev_info in event_infos],
-                consumeErrors=True,
-            )
-        )
-        await self.persist_events_and_notify(
-            room_id,
-            [
-                (ev_info.event, context)
-                for ev_info, context in zip(event_infos, contexts)
-            ],
+        events_to_persist = (
+            x for x in await yieldable_gather_results(prep, fetched_events) if x
         )
+        await self.persist_events_and_notify(room_id, tuple(events_to_persist))
     async def _check_event_auth(
         self,
@@ -1269,8 +1244,7 @@ class FederationEventHandler:
             claimed_auth_event_map:
                 A map of (type, state_key) => event for the event's claimed auth_events.
-                Possibly incomplete, and possibly including events that are not yet
-                persisted, or authed, or in the right room.
+                Possibly including events that were rejected, or are in the wrong room.
                 Only populated when populating outliers.
@@ -1505,64 +1479,22 @@ class FederationEventHandler:
             # If we don't have all the auth events, we need to get them.
             logger.info("auth_events contains unknown events: %s", missing_auth)
             try:
-                try:
-                    remote_auth_chain = await self._federation_client.get_event_auth(
-                        origin, event.room_id, event.event_id
-                    )
-                except RequestSendFailed as e1:
-                    # The other side isn't around or doesn't implement the
-                    # endpoint, so lets just bail out.
-                    logger.info("Failed to get event auth from remote: %s", e1)
-                    return context, auth_events
-                seen_remotes = await self._store.have_seen_events(
-                    event.room_id, [e.event_id for e in remote_auth_chain]
+                await self._get_remote_auth_chain_for_event(
+                    origin, event.room_id, event.event_id
                 )
-                for auth_event in remote_auth_chain:
-                    if auth_event.event_id in seen_remotes:
-                        continue
-                    if auth_event.event_id == event.event_id:
-                        continue
-                    try:
-                        auth_ids = auth_event.auth_event_ids()
-                        auth = {
-                            (e.type, e.state_key): e
-                            for e in remote_auth_chain
-                            if e.event_id in auth_ids or e.type == EventTypes.Create
-                        }
-                        auth_event.internal_metadata.outlier = True
-                        logger.debug(
-                            "_check_event_auth %s missing_auth: %s",
-                            event.event_id,
-                            auth_event.event_id,
-                        )
-                        missing_auth_event_context = (
-                            await self._state_handler.compute_event_context(auth_event)
-                        )
-                        missing_auth_event_context = await self._check_event_auth(
-                            origin,
-                            auth_event,
-                            missing_auth_event_context,
-                            claimed_auth_event_map=auth,
-                        )
-                        await self.persist_events_and_notify(
-                            event.room_id, [(auth_event, missing_auth_event_context)]
-                        )
-                        if auth_event.event_id in event_auth_events:
-                            auth_events[
-                                (auth_event.type, auth_event.state_key)
-                            ] = auth_event
-                    except AuthError:
-                        pass
             except Exception:
                 logger.exception("Failed to get auth chain")
-            else:
-                # load any auth events we might have persisted from the database. This
-                # has the side-effect of correctly setting the rejected_reason on them.
-                auth_events.update(
-                    {
-                        (ae.type, ae.state_key): ae
-                        for ae in await self._store.get_events_as_list(
-                            missing_auth, allow_rejected=True
-                        )
-                    }
-                )
         if event.internal_metadata.is_outlier():
             # XXX: given that, for an outlier, we'll be working with the
@@ -1636,6 +1568,45 @@ class FederationEventHandler:
         return context, auth_events
+    async def _get_remote_auth_chain_for_event(
+        self, destination: str, room_id: str, event_id: str
+    ) -> None:
+        """If we are missing some of an event's auth events, attempt to request them
+        Args:
+            destination: where to fetch the auth tree from
+            room_id: the room in which we are lacking auth events
+            event_id: the event for which we are lacking auth events
+        """
+        try:
+            remote_event_map = {
+                e.event_id: e
+                for e in await self._federation_client.get_event_auth(
+                    destination, room_id, event_id
+                )
+            }
+        except RequestSendFailed as e1:
+            # The other side isn't around or doesn't implement the
+            # endpoint, so lets just bail out.
+            logger.info("Failed to get event auth from remote: %s", e1)
+            return
+        logger.info("/event_auth returned %i events", len(remote_event_map))
+        # `event` may be returned, but we should not yet process it.
+        remote_event_map.pop(event_id, None)
+        # nor should we reprocess any events we have already seen.
+        seen_remotes = await self._store.have_seen_events(
+            room_id, remote_event_map.keys()
+        )
+        for s in seen_remotes:
+            remote_event_map.pop(s, None)
+        await self._auth_and_persist_fetched_events(
+            destination, room_id, remote_event_map.values()
+        )
     async def _update_context_for_auth_events(
         self, event: EventBase, context: EventContext, auth_events: StateMap[EventBase]
     ) -> EventContext:
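
The ordering requirement in `_auth_and_persist_fetched_events` (auth an event only after its auth_events) boils down to repeatedly peeling off the "roots" of the auth-dependency graph: events none of whose auth_events are still waiting in the batch. A self-contained sketch of that loop, assuming only that each event exposes `event_id` and `auth_event_ids()`:

from typing import Dict, Iterable, Iterator, List, Protocol

class _Event(Protocol):
    event_id: str
    def auth_event_ids(self) -> List[str]: ...

def batches_in_auth_order(events: Iterable[_Event]) -> Iterator[List[_Event]]:
    """Yield batches such that an event's auth events (within the input set)
    always appear in an earlier batch."""
    event_map: Dict[str, _Event] = {e.event_id: e for e in events}
    while event_map:
        # "Roots": events with no auth dependency still pending in the map.
        roots = [
            ev
            for ev in event_map.values()
            if not any(aid in event_map for aid in ev.auth_event_ids())
        ]
        if not roots:
            # Auth dependency cycle: bail out rather than loop forever.
            raise ValueError("cyclic auth_events in batch")
        yield roots
        for ev in roots:
            del event_map[ev.event_id]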


@@ -62,7 +62,7 @@ class IdentityHandler(BaseHandler):
         self.federation_http_client = hs.get_federation_http_client()
         self.hs = hs
-        self._web_client_location = hs.config.invite_client_location
+        self._web_client_location = hs.config.email.invite_client_location
         # Ratelimiters for `/requestToken` endpoints.
         self._3pid_validation_ratelimiter_ip = Ratelimiter(
@@ -419,7 +419,7 @@ class IdentityHandler(BaseHandler):
         token_expires = (
             self.hs.get_clock().time_msec()
-            + self.hs.config.email_validation_token_lifetime
+            + self.hs.config.email.email_validation_token_lifetime
         )
         await self.store.start_or_continue_validation_session(
@@ -465,7 +465,7 @@ class IdentityHandler(BaseHandler):
         if next_link:
             params["next_link"] = next_link
-        if self.hs.config.using_identity_server_from_trusted_list:
+        if self.hs.config.email.using_identity_server_from_trusted_list:
             # Warn that a deprecated config option is in use
             logger.warning(
                 'The config option "trust_identity_server_for_password_resets" '
@@ -518,7 +518,7 @@ class IdentityHandler(BaseHandler):
         if next_link:
             params["next_link"] = next_link
-        if self.hs.config.using_identity_server_from_trusted_list:
+        if self.hs.config.email.using_identity_server_from_trusted_list:
             # Warn that a deprecated config option is in use
             logger.warning(
                 'The config option "trust_identity_server_for_password_resets" '
@@ -572,12 +572,12 @@ class IdentityHandler(BaseHandler):
         validation_session = None
         # Try to validate as email
-        if self.hs.config.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
+        if self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
             # Ask our delegated email identity server
             validation_session = await self.threepid_from_creds(
                 self.hs.config.account_threepid_delegate_email, threepid_creds
             )
-        elif self.hs.config.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
+        elif self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
             # Get a validated session matching these details
             validation_session = await self.store.get_threepid_validation_session(
                 "email", client_secret, sid=sid, validated=True


@@ -443,7 +443,7 @@ class EventCreationHandler:
         )
         self._block_events_without_consent_error = (
-            self.config.block_events_without_consent_error
+            self.config.consent.block_events_without_consent_error
         )
         # we need to construct a ConsentURIBuilder here, as it checks that the necessary
@@ -666,7 +666,7 @@ class EventCreationHandler:
         self.validator.validate_new(event, self.config)
-        return (event, context)
+        return event, context
     async def _is_exempt_from_privacy_policy(
         self, builder: EventBuilder, requester: Requester
@@ -692,10 +692,10 @@ class EventCreationHandler:
         return False
     async def _is_server_notices_room(self, room_id: str) -> bool:
-        if self.config.server_notices_mxid is None:
+        if self.config.servernotices.server_notices_mxid is None:
             return False
         user_ids = await self.store.get_users_in_room(room_id)
-        return self.config.server_notices_mxid in user_ids
+        return self.config.servernotices.server_notices_mxid in user_ids
     async def assert_accepted_privacy_policy(self, requester: Requester) -> None:
         """Check if a user has accepted the privacy policy
@@ -731,8 +731,8 @@ class EventCreationHandler:
         # exempt the system notices user
         if (
-            self.config.server_notices_mxid is not None
-            and user_id == self.config.server_notices_mxid
+            self.config.servernotices.server_notices_mxid is not None
+            and user_id == self.config.servernotices.server_notices_mxid
         ):
             return
@@ -744,7 +744,7 @@ class EventCreationHandler:
         if u["appservice_id"] is not None:
             # users registered by an appservice are exempt
             return
-        if u["consent_version"] == self.config.user_consent_version:
+        if u["consent_version"] == self.config.consent.user_consent_version:
             return
         consent_uri = self._consent_uri_builder.build_user_consent_uri(user.localpart)
@@ -1004,7 +1004,7 @@ class EventCreationHandler:
         logger.debug("Created event %s", event.event_id)
-        return (event, context)
+        return event, context
     @measure_func("handle_new_client_event")
     async def handle_new_client_event(


@@ -277,7 +277,7 @@ class OidcProvider:
         self._token_generator = token_generator
         self._config = provider
-        self._callback_url: str = hs.config.oidc_callback_url
+        self._callback_url: str = hs.config.oidc.oidc_callback_url
         # Calculate the prefix for OIDC callback paths based on the public_baseurl.
         # We'll insert this into the Path= parameter of any session cookies we set.


@@ -27,8 +27,8 @@ logger = logging.getLogger(__name__)
 class PasswordPolicyHandler:
     def __init__(self, hs: "HomeServer"):
-        self.policy = hs.config.password_policy
-        self.enabled = hs.config.password_policy_enabled
+        self.policy = hs.config.auth.password_policy
+        self.enabled = hs.config.auth.password_policy_enabled
         # Regexps for the spec'd policy parameters.
         self.regexp_digit = re.compile("[0-9]")


@@ -309,7 +309,7 @@ class ProfileHandler(BaseHandler):
     async def on_profile_query(self, args: JsonDict) -> JsonDict:
         """Handles federation profile query requests."""
-        if not self.hs.config.allow_profile_lookup_over_federation:
+        if not self.hs.config.federation.allow_profile_lookup_over_federation:
             raise SynapseError(
                 403,
                 "Profile lookup over federation is disabled on this homeserver",


@@ -238,7 +238,7 @@ class ReceiptEventSource(EventSource[int, JsonDict]):
         if self.config.experimental.msc2285_enabled:
             events = ReceiptEventSource.filter_out_hidden(events, user.to_string())
-        return (events, to_key)
+        return events, to_key
     async def get_new_events_as(
         self, from_key: int, service: ApplicationService
@@ -270,7 +270,7 @@ class ReceiptEventSource(EventSource[int, JsonDict]):
             events.append(event)
-        return (events, to_key)
+        return events, to_key
     def get_current_key(self, direction: str = "f") -> int:
         return self.store.get_max_receipt_stream_id()


@@ -97,7 +97,8 @@ class RegistrationHandler(BaseHandler):
         self.ratelimiter = hs.get_registration_ratelimiter()
         self.macaroon_gen = hs.get_macaroon_generator()
         self._account_validity_handler = hs.get_account_validity_handler()
-        self._server_notices_mxid = hs.config.server_notices_mxid
+        self._user_consent_version = self.hs.config.consent.user_consent_version
+        self._server_notices_mxid = hs.config.servernotices.server_notices_mxid
         self._server_name = hs.hostname
         self.spam_checker = hs.get_spam_checker()
@@ -339,7 +340,7 @@ class RegistrationHandler(BaseHandler):
             auth_provider=(auth_provider_id or ""),
         ).inc()
-        if not self.hs.config.user_consent_at_registration:
+        if not self.hs.config.consent.user_consent_at_registration:
             if not self.hs.config.auto_join_rooms_for_guests and make_guest:
                 logger.info(
                     "Skipping auto-join for %s because auto-join for guests is disabled",
@@ -864,7 +865,9 @@ class RegistrationHandler(BaseHandler):
             await self._register_msisdn_threepid(user_id, threepid)
         if auth_result and LoginType.TERMS in auth_result:
-            await self._on_user_consented(user_id, self.hs.config.user_consent_version)
+            # The terms type should only exist if consent is enabled.
+            assert self._user_consent_version is not None
+            await self._on_user_consented(user_id, self._user_consent_version)
     async def _on_user_consented(self, user_id: str, consent_version: str) -> None:
         """A user consented to the terms on registration
@@ -910,8 +913,8 @@ class RegistrationHandler(BaseHandler):
         # getting mail spam where they weren't before if email
         # notifs are set up on a homeserver)
         if (
-            self.hs.config.email_enable_notifs
-            and self.hs.config.email_notif_for_new_users
+            self.hs.config.email.email_enable_notifs
+            and self.hs.config.email.email_notif_for_new_users
             and token
         ):
             # Pull the ID of the access token back out of the db


@@ -126,7 +126,7 @@ class RoomCreationHandler(BaseHandler):
         for preset_name, preset_config in self._presets_dict.items():
             encrypted = (
                 preset_name
-                in self.config.encryption_enabled_by_default_for_room_presets
+                in self.config.room.encryption_enabled_by_default_for_room_presets
             )
             preset_config["encrypted"] = encrypted
@@ -141,7 +141,7 @@ class RoomCreationHandler(BaseHandler):
         self._upgrade_response_cache: ResponseCache[Tuple[str, str]] = ResponseCache(
             hs.get_clock(), "room_upgrade", timeout_ms=FIVE_MINUTES_IN_MS
         )
-        self._server_notices_mxid = hs.config.server_notices_mxid
+        self._server_notices_mxid = hs.config.servernotices.server_notices_mxid
         self.third_party_event_rules = hs.get_third_party_event_rules()
@@ -649,8 +649,16 @@ class RoomCreationHandler(BaseHandler):
             requester, config, is_requester_admin=is_requester_admin
         )
-        if not is_requester_admin and not await self.spam_checker.user_may_create_room(
-            user_id
+        invite_3pid_list = config.get("invite_3pid", [])
+        invite_list = config.get("invite", [])
+        if not is_requester_admin and not (
+            await self.spam_checker.user_may_create_room(user_id)
+            and await self.spam_checker.user_may_create_room_with_invites(
+                user_id,
+                invite_list,
+                invite_3pid_list,
+            )
         ):
             raise SynapseError(403, "You are not permitted to create rooms")
@@ -684,8 +692,6 @@ class RoomCreationHandler(BaseHandler):
         if mapping:
             raise SynapseError(400, "Room alias already taken", Codes.ROOM_IN_USE)
-        invite_3pid_list = config.get("invite_3pid", [])
-        invite_list = config.get("invite", [])
         for i in invite_list:
             try:
                 uid = UserID.from_string(i)
@@ -757,7 +763,9 @@ class RoomCreationHandler(BaseHandler):
         )
         if is_public:
-            if not self.config.is_publishing_room_allowed(user_id, room_id, room_alias):
+            if not self.config.roomdirectory.is_publishing_room_allowed(
+                user_id, room_id, room_alias
+            ):
                 # Lets just return a generic message, as there may be all sorts of
                 # reasons why we said no. TODO: Allow configurable error messages
                 # per alias creation rule?
@@ -1235,7 +1243,7 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
         else:
             end_key = to_key
-        return (events, end_key)
+        return events, end_key
     def get_current_key(self) -> RoomStreamToken:
         return self.store.get_room_max_token()
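
The room-creation hunk threads the initial invite lists through to a new `user_may_create_room_with_invites` spam-checker callback. A hedged sketch of what a module-side implementation could look like; the signature is inferred from the call site above, and the threshold is invented:

from typing import Any, Dict, List

async def user_may_create_room_with_invites(
    user_id: str,
    invites: List[str],
    threepid_invites: List[Dict[str, Any]],
) -> bool:
    # Example policy: cap how many invites may accompany a room creation.
    # MAX_INVITES_AT_CREATION is an invented threshold, illustrative only.
    MAX_INVITES_AT_CREATION = 20
    return len(invites) + len(threepid_invites) <= MAX_INVITES_AT_CREATION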


@@ -52,7 +52,7 @@ EMPTY_THIRD_PARTY_ID = ThirdPartyInstanceID(None, None)
 class RoomListHandler(BaseHandler):
     def __init__(self, hs: "HomeServer"):
         super().__init__(hs)
-        self.enable_room_list_search = hs.config.enable_room_list_search
+        self.enable_room_list_search = hs.config.roomdirectory.enable_room_list_search
         self.response_cache: ResponseCache[
             Tuple[Optional[int], Optional[str], Optional[ThirdPartyInstanceID]]
         ] = ResponseCache(hs.get_clock(), "room_list")


@@ -89,7 +89,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         self.clock = hs.get_clock()
         self.spam_checker = hs.get_spam_checker()
         self.third_party_event_rules = hs.get_third_party_event_rules()
-        self._server_notices_mxid = self.config.server_notices_mxid
+        self._server_notices_mxid = self.config.servernotices.server_notices_mxid
         self._enable_lookup = hs.config.enable_3pid_lookup
         self.allow_per_room_profiles = self.config.allow_per_room_profiles


@@ -1179,4 +1179,4 @@ def _child_events_comparison_key(
         order = None
     # Items without an order come last.
-    return (order is None, order, child.origin_server_ts, child.room_id)
+    return order is None, order, child.origin_server_ts, child.room_id


@@ -54,19 +54,18 @@ class Saml2SessionData:
 class SamlHandler(BaseHandler):
     def __init__(self, hs: "HomeServer"):
         super().__init__(hs)
-        self._saml_client = Saml2Client(hs.config.saml2_sp_config)
-        self._saml_idp_entityid = hs.config.saml2_idp_entityid
-        self._saml2_session_lifetime = hs.config.saml2_session_lifetime
+        self._saml_client = Saml2Client(hs.config.saml2.saml2_sp_config)
+        self._saml_idp_entityid = hs.config.saml2.saml2_idp_entityid
+        self._saml2_session_lifetime = hs.config.saml2.saml2_session_lifetime
         self._grandfathered_mxid_source_attribute = (
-            hs.config.saml2_grandfathered_mxid_source_attribute
+            hs.config.saml2.saml2_grandfathered_mxid_source_attribute
         )
         self._saml2_attribute_requirements = hs.config.saml2.attribute_requirements
-        self._error_template = hs.config.sso_error_template
         # plugin to do custom mapping from saml response to mxid
-        self._user_mapping_provider = hs.config.saml2_user_mapping_provider_class(
-            hs.config.saml2_user_mapping_provider_config,
+        self._user_mapping_provider = hs.config.saml2.saml2_user_mapping_provider_class(
+            hs.config.saml2.saml2_user_mapping_provider_config,
             ModuleApi(hs, hs.get_auth_handler()),
         )
@@ -411,7 +410,7 @@ class DefaultSamlMappingProvider:
         self._mxid_mapper = parsed_config.mxid_mapper
         self._grandfathered_mxid_source_attribute = (
-            module_api._hs.config.saml2_grandfathered_mxid_source_attribute
+            module_api._hs.config.saml2.saml2_grandfathered_mxid_source_attribute
         )
     def get_remote_user_id(


@@ -184,15 +184,17 @@ class SsoHandler:
         self._server_name = hs.hostname
         self._registration_handler = hs.get_registration_handler()
         self._auth_handler = hs.get_auth_handler()
-        self._error_template = hs.config.sso_error_template
-        self._bad_user_template = hs.config.sso_auth_bad_user_template
+        self._error_template = hs.config.sso.sso_error_template
+        self._bad_user_template = hs.config.sso.sso_auth_bad_user_template
         self._profile_handler = hs.get_profile_handler()
         # The following template is shown after a successful user interactive
         # authentication session. It tells the user they can close the window.
-        self._sso_auth_success_template = hs.config.sso_auth_success_template
-        self._sso_update_profile_information = hs.config.sso_update_profile_information
+        self._sso_auth_success_template = hs.config.sso.sso_auth_success_template
+        self._sso_update_profile_information = (
+            hs.config.sso.sso_update_profile_information
+        )
         # a lock on the mappings
         self._mapping_lock = Linearizer(name="sso_user_mapping", clock=hs.get_clock())


@@ -46,7 +46,7 @@ class StatsHandler:
         self.notifier = hs.get_notifier()
         self.is_mine_id = hs.is_mine_id
-        self.stats_enabled = hs.config.stats_enabled
+        self.stats_enabled = hs.config.stats.stats_enabled
         # The current position in the current_state_delta stream
         self.pos: Optional[int] = None


@@ -483,7 +483,7 @@ class TypingNotificationEventSource(EventSource[int, JsonDict]):
             events.append(self._make_event_for(room_id))
-        return (events, handler._latest_room_serial)
+        return events, handler._latest_room_serial
     async def get_new_events(
         self,
@@ -507,7 +507,7 @@ class TypingNotificationEventSource(EventSource[int, JsonDict]):
             events.append(self._make_event_for(room_id))
-        return (events, handler._latest_room_serial)
+        return events, handler._latest_room_serial
     def get_current_key(self) -> int:
         return self.get_typing_handler()._latest_room_serial


@@ -82,10 +82,10 @@ class RecaptchaAuthChecker(UserInteractiveAuthChecker):
     def __init__(self, hs: "HomeServer"):
         super().__init__(hs)
-        self._enabled = bool(hs.config.recaptcha_private_key)
+        self._enabled = bool(hs.config.captcha.recaptcha_private_key)
         self._http_client = hs.get_proxied_http_client()
-        self._url = hs.config.recaptcha_siteverify_api
-        self._secret = hs.config.recaptcha_private_key
+        self._url = hs.config.captcha.recaptcha_siteverify_api
+        self._secret = hs.config.captcha.recaptcha_private_key
     def is_enabled(self) -> bool:
         return self._enabled
@@ -161,12 +161,17 @@ class _BaseThreepidAuthChecker:
                 self.hs.config.account_threepid_delegate_msisdn, threepid_creds
             )
         elif medium == "email":
-            if self.hs.config.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
+            if (
+                self.hs.config.email.threepid_behaviour_email
+                == ThreepidBehaviour.REMOTE
+            ):
                 assert self.hs.config.account_threepid_delegate_email
                 threepid = await identity_handler.threepid_from_creds(
                     self.hs.config.account_threepid_delegate_email, threepid_creds
                 )
-            elif self.hs.config.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
+            elif (
+                self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL
+            ):
                 threepid = None
                 row = await self.store.get_threepid_validation_session(
                     medium,
@@ -218,7 +223,7 @@ class EmailIdentityAuthChecker(UserInteractiveAuthChecker, _BaseThreepidAuthChec
         _BaseThreepidAuthChecker.__init__(self, hs)
     def is_enabled(self) -> bool:
-        return self.hs.config.threepid_behaviour_email in (
+        return self.hs.config.email.threepid_behaviour_email in (
             ThreepidBehaviour.REMOTE,
             ThreepidBehaviour.LOCAL,
         )


@@ -61,7 +61,7 @@ class UserDirectoryHandler(StateDeltasHandler):
         self.notifier = hs.get_notifier()
         self.is_mine_id = hs.is_mine_id
         self.update_user_directory = hs.config.update_user_directory
-        self.search_all_users = hs.config.user_directory_search_all_users
+        self.search_all_users = hs.config.userdirectory.user_directory_search_all_users
         self.spam_checker = hs.get_spam_checker()
         # The current position in the current_state_delta stream
         self.pos: Optional[int] = None


@@ -465,8 +465,9 @@ class MatrixFederationHttpClient:
             _sec_timeout = self.default_timeout
         if (
-            self.hs.config.federation_domain_whitelist is not None
-            and request.destination not in self.hs.config.federation_domain_whitelist
+            self.hs.config.federation.federation_domain_whitelist is not None
+            and request.destination
+            not in self.hs.config.federation.federation_domain_whitelist
         ):
             raise FederationDeniedError(request.destination)
@@ -1186,7 +1187,7 @@ class MatrixFederationHttpClient:
             request.method,
             request.uri.decode("ascii"),
         )
-        return (length, headers)
+        return length, headers
 def _flatten_response_never_received(e):


@@ -21,7 +21,6 @@ import types
 import urllib
 from http import HTTPStatus
 from inspect import isawaitable
-from io import BytesIO
 from typing import (
     Any,
     Awaitable,
@@ -37,7 +36,7 @@ from typing import (
 )
 import jinja2
-from canonicaljson import iterencode_canonical_json
+from canonicaljson import encode_canonical_json
 from typing_extensions import Protocol
 from zope.interface import implementer
@@ -45,7 +44,7 @@ from twisted.internet import defer, interfaces
 from twisted.python import failure
 from twisted.web import resource
 from twisted.web.server import NOT_DONE_YET, Request
-from twisted.web.static import File, NoRangeStaticProducer
+from twisted.web.static import File
 from twisted.web.util import redirectTo
 from synapse.api.errors import (
@@ -56,10 +55,11 @@ from synapse.api.errors import (
     UnrecognizedRequestError,
 )
 from synapse.http.site import SynapseRequest
-from synapse.logging.context import preserve_fn
+from synapse.logging.context import defer_to_thread, preserve_fn, run_in_background
 from synapse.logging.opentracing import trace_servlet
 from synapse.util import json_encoder
 from synapse.util.caches import intern_dict
+from synapse.util.iterutils import chunk_seq
 logger = logging.getLogger(__name__)
@@ -320,7 +320,7 @@ class DirectServeJsonResource(_AsyncResource):
     def _send_response(
         self,
-        request: Request,
+        request: SynapseRequest,
         code: int,
         response_object: Any,
     ):
@@ -620,16 +620,15 @@ class _ByteProducer:
         self._request = None
-def _encode_json_bytes(json_object: Any) -> Iterator[bytes]:
+def _encode_json_bytes(json_object: Any) -> bytes:
     """
     Encode an object into JSON. Returns an iterator of bytes.
     """
-    for chunk in json_encoder.iterencode(json_object):
-        yield chunk.encode("utf-8")
+    return json_encoder.encode(json_object).encode("utf-8")
 def respond_with_json(
-    request: Request,
+    request: SynapseRequest,
     code: int,
     json_object: Any,
     send_cors: bool = False,
@@ -659,7 +658,7 @@ def respond_with_json(
         return None
     if canonical_json:
-        encoder = iterencode_canonical_json
+        encoder = encode_canonical_json
     else:
         encoder = _encode_json_bytes
@@ -670,7 +669,9 @@ def respond_with_json(
     if send_cors:
         set_cors_headers(request)
-    _ByteProducer(request, encoder(json_object))
+    run_in_background(
+        _async_write_json_to_request_in_thread, request, encoder, json_object
+    )
     return NOT_DONE_YET
@@ -706,15 +707,56 @@ def respond_with_json_bytes(
     if send_cors:
         set_cors_headers(request)
-    # note that this is zero-copy (the bytesio shares a copy-on-write buffer with
-    # the original `bytes`).
-    bytes_io = BytesIO(json_bytes)
-    producer = NoRangeStaticProducer(request, bytes_io)
-    producer.start()
+    _write_bytes_to_request(request, json_bytes)
     return NOT_DONE_YET
+async def _async_write_json_to_request_in_thread(
+    request: SynapseRequest,
+    json_encoder: Callable[[Any], bytes],
+    json_object: Any,
+):
+    """Encodes the given JSON object on a thread and then writes it to the
+    request.
+    This is done so that encoding large JSON objects doesn't block the reactor
+    thread.
+    Note: We don't use JsonEncoder.iterencode here as that falls back to the
+    Python implementation (rather than the C backend), which is *much* more
+    expensive.
+    """
+    json_str = await defer_to_thread(request.reactor, json_encoder, json_object)
+    _write_bytes_to_request(request, json_str)
+def _write_bytes_to_request(request: Request, bytes_to_write: bytes) -> None:
+    """Writes the bytes to the request using an appropriate producer.
+    Note: This should be used instead of `Request.write` to correctly handle
+    large response bodies.
+    """
+    # The problem with dumping all of the response into the `Request` object at
+    # once (via `Request.write`) is that doing so starts the timeout for the
+    # next request to be received: so if it takes longer than 60s to stream back
+    # the response to the client, the client never gets it.
+    #
+    # The correct solution is to use a Producer; then the timeout is only
+    # started once all of the content is sent over the TCP connection.
+    # To make sure we don't write all of the bytes at once we split it up into
+    # chunks.
+    chunk_size = 4096
+    bytes_generator = chunk_seq(bytes_to_write, chunk_size)
+    # We use a `_ByteProducer` here rather than `NoRangeStaticProducer` as the
+    # unit tests can't cope with being given a pull producer.
+    _ByteProducer(request, bytes_generator)
 def set_cors_headers(request: Request):
     """Set the CORS headers so that javascript running in a web browsers can
     use this API
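
Two things are going on in the hunk above: JSON encoding moves onto a thread (via `defer_to_thread`) so that large responses don't stall the reactor, and the encoded bytes are then handed to a producer in fixed-size chunks. The chunking half is simple enough to sketch; this mirrors what `synapse.util.iterutils.chunk_seq` is used for here, but the function below is a stand-in:

from typing import Iterator

def chunk_bytes(data: bytes, maxlen: int) -> Iterator[bytes]:
    # Split a byte string into slices of at most maxlen bytes.
    return (data[i : i + maxlen] for i in range(0, len(data), maxlen))

payload = b'{"example": true}' * 10_000
for chunk in chunk_bytes(payload, 4096):
    # In the real code each chunk is fed to a _ByteProducer, which writes it
    # out as the transport asks for more data, keeping the reactor responsive.
    pass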


@@ -14,13 +14,14 @@
 import contextlib
 import logging
 import time
-from typing import Optional, Tuple, Union
+from typing import Generator, Optional, Tuple, Union
 import attr
 from zope.interface import implementer
 from twisted.internet.interfaces import IAddress, IReactorTime
 from twisted.python.failure import Failure
+from twisted.web.http import HTTPChannel
 from twisted.web.resource import IResource, Resource
 from twisted.web.server import Request, Site
@@ -61,10 +62,18 @@ class SynapseRequest(Request):
         logcontext: the log context for this request
     """
-    def __init__(self, channel, *args, max_request_body_size: int = 1024, **kw):
-        Request.__init__(self, channel, *args, **kw)
+    def __init__(
+        self,
+        channel: HTTPChannel,
+        site: "SynapseSite",
+        *args,
+        max_request_body_size: int = 1024,
+        **kw,
+    ):
+        super().__init__(channel, *args, **kw)
         self._max_request_body_size = max_request_body_size
-        self.site: SynapseSite = channel.site
+        self.synapse_site = site
+        self.reactor = site.reactor
         self._channel = channel  # this is used by the tests
         self.start_time = 0.0
@@ -97,7 +106,7 @@ class SynapseRequest(Request):
             self.get_method(),
             self.get_redacted_uri(),
             self.clientproto.decode("ascii", errors="replace"),
-            self.site.site_tag,
+            self.synapse_site.site_tag,
         )
     def handleContentChunk(self, data: bytes) -> None:
@@ -216,7 +225,7 @@ class SynapseRequest(Request):
             request=ContextRequest(
                 request_id=request_id,
                 ip_address=self.getClientIP(),
-                site_tag=self.site.site_tag,
+                site_tag=self.synapse_site.site_tag,
                 # The requester is going to be unknown at this point.
                 requester=None,
                 authenticated_entity=None,
@@ -228,7 +237,7 @@ class SynapseRequest(Request):
         )
         # override the Server header which is set by twisted
-        self.setHeader("Server", self.site.server_version_string)
+        self.setHeader("Server", self.synapse_site.server_version_string)
         with PreserveLoggingContext(self.logcontext):
             # we start the request metrics timer here with an initial stab
@@ -247,7 +256,7 @@ class SynapseRequest(Request):
         requests_counter.labels(self.get_method(), self.request_metrics.name).inc()
     @contextlib.contextmanager
-    def processing(self):
+    def processing(self) -> Generator[None, None, None]:
         """Record the fact that we are processing this request.
         Returns a context manager; the correct way to use this is:
@@ -346,10 +355,10 @@ class SynapseRequest(Request):
             self.start_time, name=servlet_name, method=self.get_method()
         )
-        self.site.access_logger.debug(
+        self.synapse_site.access_logger.debug(
             "%s - %s - Received request: %s %s",
             self.getClientIP(),
-            self.site.site_tag,
+            self.synapse_site.site_tag,
             self.get_method(),
             self.get_redacted_uri(),
         )
@@ -388,13 +397,13 @@ class SynapseRequest(Request):
         if authenticated_entity:
             requester = f"{authenticated_entity}|{requester}"
-        self.site.access_logger.log(
+        self.synapse_site.access_logger.log(
             log_level,
             "%s - %s - {%s}"
             " Processed request: %.3fsec/%.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d)"
             ' %sB %s "%s %s %s" "%s" [%d dbevts]',
             self.getClientIP(),
-            self.site.site_tag,
+            self.synapse_site.site_tag,
             requester,
             processing_time,
             response_send_time,
@@ -522,7 +531,7 @@ class SynapseSite(Site):
         site_tag: str,
         config: ListenerConfig,
         resource: IResource,
-        server_version_string,
+        server_version_string: str,
         max_request_body_size: int,
         reactor: IReactorTime,
     ):
@@ -542,6 +551,7 @@ class SynapseSite(Site):
         Site.__init__(self, resource, reactor=reactor)
         self.site_tag = site_tag
+        self.reactor = reactor
         assert config.http_options is not None
         proxied = config.http_options.x_forwarded
@@ -550,6 +560,7 @@ class SynapseSite(Site):
         def request_factory(channel, queued: bool) -> Request:
             return request_class(
                 channel,
+                self,
                 max_request_body_size=max_request_body_size,
                 queued=queued,
             )


@@ -363,7 +363,7 @@ def noop_context_manager(*args, **kwargs):
 def init_tracer(hs: "HomeServer"):
     """Set the whitelists and initialise the JaegerClient tracer"""
     global opentracing
-    if not hs.config.opentracer_enabled:
+    if not hs.config.tracing.opentracer_enabled:
         # We don't have a tracer
         opentracing = None
         return
@@ -377,12 +377,12 @@ def init_tracer(hs: "HomeServer"):
     # Pull out the jaeger config if it was given. Otherwise set it to something sensible.
     # See https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/config.py
-    set_homeserver_whitelist(hs.config.opentracer_whitelist)
+    set_homeserver_whitelist(hs.config.tracing.opentracer_whitelist)
     from jaeger_client.metrics.prometheus import PrometheusMetricsFactory
     config = JaegerConfig(
-        config=hs.config.jaeger_config,
+        config=hs.config.tracing.jaeger_config,
         service_name=f"{hs.config.server.server_name} {hs.get_instance_name()}",
         scope_manager=LogContextScopeManager(hs.config),
         metrics_factory=PrometheusMetricsFactory(),


@@ -24,8 +24,10 @@ from typing import (
     List,
     Optional,
     Tuple,
+    Union,
 )
+import attr
 import jinja2
 from twisted.internet import defer
@@ -46,7 +48,14 @@ from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.database import DatabasePool, LoggingTransaction
 from synapse.storage.databases.main.roommember import ProfileInfo
 from synapse.storage.state import StateFilter
-from synapse.types import JsonDict, Requester, UserID, UserInfo, create_requester
+from synapse.types import (
+    DomainSpecificString,
+    JsonDict,
+    Requester,
+    UserID,
+    UserInfo,
+    create_requester,
+)
 from synapse.util import Clock
 from synapse.util.caches.descriptors import cached
@@ -79,6 +88,18 @@ __all__ = [
 logger = logging.getLogger(__name__)
+@attr.s(auto_attribs=True)
+class UserIpAndAgent:
+    """
+    An IP address and user agent used by a user to connect to this homeserver.
+    """
+    ip: str
+    user_agent: str
+    # The time at which this user agent/ip was last seen.
+    last_seen: int
 class ModuleApi:
     """A proxy object that gets passed to various plugin modules so they
     can register new users etc if necessary.
@@ -98,14 +119,16 @@ class ModuleApi:
         self.custom_template_dir = hs.config.server.custom_template_directory
         try:
-            app_name = self._hs.config.email_app_name
-            self._from_string = self._hs.config.email_notif_from % {"app": app_name}
+            app_name = self._hs.config.email.email_app_name
+            self._from_string = self._hs.config.email.email_notif_from % {
+                "app": app_name
+            }
         except (KeyError, TypeError):
             # If substitution failed (which can happen if the string contains
             # placeholders other than just "app", or if the type of the placeholder is
             # not a string), fall back to the bare strings.
-            self._from_string = self._hs.config.email_notif_from
+            self._from_string = self._hs.config.email.email_notif_from
         self._raw_from = email.utils.parseaddr(self._from_string)[1]
@@ -700,6 +723,65 @@ class ModuleApi:
             (td for td in (self.custom_template_dir, custom_template_directory) if td),
         )
+    def is_mine(self, id: Union[str, DomainSpecificString]) -> bool:
+        """
+        Checks whether an ID (user id, room, ...) comes from this homeserver.
+        Args:
+            id: any Matrix id (e.g. user id, room id, ...), either as a raw id,
+                e.g. string "@user:example.com" or as a parsed UserID, RoomID, ...
+        Returns:
+            True if id comes from this homeserver, False otherwise.
+        Added in Synapse v1.44.0.
+        """
+        if isinstance(id, DomainSpecificString):
+            return self._hs.is_mine(id)
+        else:
+            return self._hs.is_mine_id(id)
+    async def get_user_ip_and_agents(
+        self, user_id: str, since_ts: int = 0
+    ) -> List[UserIpAndAgent]:
+        """
+        Return the list of user IPs and agents for a user.
+        Args:
+            user_id: the id of a user, local or remote
+            since_ts: a timestamp in seconds since the epoch,
+                or the epoch itself if not specified.
+        Returns:
+            The list of all UserIpAndAgent that the user has
+            used to connect to this homeserver since `since_ts`.
+            If the user is remote, this list is empty.
+        Added in Synapse v1.44.0.
+        """
+        # Don't hit the db if this is not a local user.
+        is_mine = False
+        try:
+            # Let's be defensive against ill-formed strings.
+            if self.is_mine(user_id):
+                is_mine = True
+        except Exception:
+            pass
+        if is_mine:
+            raw_data = await self._store.get_user_ip_and_agents(
+                UserID.from_string(user_id), since_ts
+            )
+            # Sanitize some of the data. We don't want to return tokens.
+            return [
+                UserIpAndAgent(
+                    ip=str(data["ip"]),
+                    user_agent=str(data["user_agent"]),
+                    last_seen=int(data["last_seen"]),
+                )
+                for data in raw_data
+            ]
+        else:
+            return []
 class PublicRoomListManager:
     """Contains methods for adding to, removing from and querying whether a room


@@ -184,7 +184,7 @@ class EmailPusher(Pusher):
         should_notify_at = max(notif_ready_at, room_ready_at)
-        if should_notify_at < self.clock.time_msec():
+        if should_notify_at <= self.clock.time_msec():
             # one of our notifications is ready for sending, so we send
             # *one* email updating the user on their notifications,
             # we then consider all previously outstanding notifications


@@ -73,7 +73,9 @@ class HttpPusher(Pusher):
         self.failing_since = pusher_config.failing_since
         self.timed_call: Optional[IDelayedCall] = None
         self._is_processing = False
-        self._group_unread_count_by_room = hs.config.push_group_unread_count_by_room
+        self._group_unread_count_by_room = (
+            hs.config.push.push_group_unread_count_by_room
+        )
         self._pusherpool = hs.get_pusherpool()
         self.data = pusher_config.data


@@ -77,4 +77,4 @@ class PusherFactory:
         if isinstance(brand, str):
             return brand
-        return self.config.email_app_name
+        return self.config.email.email_app_name


@@ -168,8 +168,8 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
             client = hs.get_simple_http_client()
             local_instance_name = hs.get_instance_name()
-            master_host = hs.config.worker_replication_host
-            master_port = hs.config.worker_replication_http_port
+            master_host = hs.config.worker.worker_replication_host
+            master_port = hs.config.worker.worker_replication_http_port
             instance_map = hs.config.worker.instance_map


@@ -322,8 +322,8 @@ class ReplicationCommandHandler:
         else:
             client_name = hs.get_instance_name()
             self._factory = DirectTcpReplicationClientFactory(hs, client_name, self)
-            host = hs.config.worker_replication_host
-            port = hs.config.worker_replication_port
+            host = hs.config.worker.worker_replication_host
+            port = hs.config.worker.worker_replication_port
             hs.get_reactor().connectTCP(host.encode(), port, self._factory)
     def get_streams(self) -> Dict[str, Stream]:


@@ -267,7 +267,7 @@ def register_servlets_for_client_rest_resource(
     # Load the media repo ones if we're using them. Otherwise load the servlets which
     # don't need a media repo (typically readonly admin APIs).
-    if hs.config.can_load_media_repo:
+    if hs.config.media.can_load_media_repo:
         register_servlets_for_media_repo(hs, http_server)
     else:
         ListMediaInRoom(hs).register(http_server)


@@ -113,7 +113,7 @@ class NewRegistrationTokenRestServlet(RestServlet):
         self.store = hs.get_datastore()
         self.clock = hs.get_clock()
         # A string of all the characters allowed to be in a registration_token
-        self.allowed_chars = string.ascii_letters + string.digits + "-_"
+        self.allowed_chars = string.ascii_letters + string.digits + "._~-"
         self.allowed_chars_set = set(self.allowed_chars)
     async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
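
With the widened alphabet (the unreserved characters from MSC3231), validating a client-supplied token is a simple set-membership check, mirroring `allowed_chars_set` above:

import string

ALLOWED_CHARS = set(string.ascii_letters + string.digits + "._~-")

def is_valid_registration_token(token: str) -> bool:
    # Every character must come from the unreserved set; reject empty tokens.
    return bool(token) and all(c in ALLOWED_CHARS for c in token)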


@@ -213,7 +213,7 @@ class RoomRestServlet(RestServlet):
         members = await self.store.get_users_in_room(room_id)
         ret["joined_local_devices"] = await self.store.count_devices_by_users(members)
-        return (200, ret)
+        return 200, ret
     async def on_DELETE(
         self, request: SynapseRequest, room_id: str
@@ -668,4 +668,4 @@ async def _delete_room(
     if purge:
         await pagination_handler.purge_room(room_id, force=force_purge)
-    return (200, ret)
+    return 200, ret


@@ -368,8 +368,8 @@ class UserRestServletV2(RestServlet):
                     user_id, medium, address, current_time
                 )
                 if (
-                    self.hs.config.email_enable_notifs
-                    and self.hs.config.email_notif_for_new_users
+                    self.hs.config.email.email_enable_notifs
+                    and self.hs.config.email.email_notif_for_new_users
                 ):
                     await self.pusher_pool.add_pusher(
                         user_id=user_id,

Some files were not shown because too many files have changed in this diff.