Merge branch 'release-v1.51' into matrix-org-hotfixes

dmr/review-hotfixes
Olivier Wilkinson (reivilibre) 2022-01-21 10:49:43 +00:00
commit 7977b7f6a8
70 changed files with 537 additions and 173 deletions

.gitignore vendored

@ -52,5 +52,5 @@ __pycache__/
 book/
 # complement
-/complement-master
+/complement-*
 /master.tar.gz


@ -1,3 +1,79 @@
Synapse 1.51.0rc1 (2022-01-21)
==============================
Features
--------
- Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts. ([\#11561](https://github.com/matrix-org/synapse/issues/11561), [\#11749](https://github.com/matrix-org/synapse/issues/11749), [\#11757](https://github.com/matrix-org/synapse/issues/11757))
- Remove the `"password_hash"` field from the response dictionaries of the [Users Admin API](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html). ([\#11576](https://github.com/matrix-org/synapse/issues/11576))
- Include whether the requesting user has participated in a thread when generating a summary for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440). ([\#11577](https://github.com/matrix-org/synapse/issues/11577))
- Return an `M_FORBIDDEN` error code instead of `M_UNKNOWN` when a spam checker module prevents a user from creating a room. ([\#11672](https://github.com/matrix-org/synapse/issues/11672))
- Add a flag to the `synapse_review_recent_signups` script to ignore and filter appservice users. ([\#11675](https://github.com/matrix-org/synapse/issues/11675), [\#11770](https://github.com/matrix-org/synapse/issues/11770))
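The `track_puppeted_user_ips` flag above is set in the homeserver configuration; a minimal sketch (the surrounding file layout is assumed, and the flag is off unless enabled):

```yaml
# homeserver.yaml (illustrative sketch)
# When true, client IP addresses are recorded against puppeted users
# (e.g. users an application service acts on behalf of), and those
# users are included in monthly active user counts.
track_puppeted_user_ips: true
```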
Bugfixes
--------
- Fix a long-standing issue which could cause Synapse to incorrectly accept data in the unsigned field of events
received over federation. ([\#11530](https://github.com/matrix-org/synapse/issues/11530))
- Fix a long-standing bug where Synapse wouldn't cache a response indicating that a remote user has no devices. ([\#11587](https://github.com/matrix-org/synapse/issues/11587))
- Fix an error that occurred while trying to get the federation status of a destination server when no error had occurred. This admin API was newly introduced in Synapse 1.49.0. ([\#11593](https://github.com/matrix-org/synapse/issues/11593))
- Include the bundled aggregations in the `/sync` response, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675). ([\#11612](https://github.com/matrix-org/synapse/issues/11612), [\#11659](https://github.com/matrix-org/synapse/issues/11659), [\#11791](https://github.com/matrix-org/synapse/issues/11791))
- Fix the `/_matrix/client/v1/room/{roomId}/hierarchy` endpoint returning incorrect fields, a bug which has been present since Synapse 1.49.0. ([\#11667](https://github.com/matrix-org/synapse/issues/11667))
- Fix preview of some GIF URLs (like tenor.com). Contributed by Philippe Daouadi. ([\#11669](https://github.com/matrix-org/synapse/issues/11669))
- Fix a bug where only the first 50 rooms from a space were returned from the `/hierarchy` API. This bug has existed since the introduction of the API in Synapse v1.41.0. ([\#11695](https://github.com/matrix-org/synapse/issues/11695))
- Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan. ([\#11710](https://github.com/matrix-org/synapse/issues/11710), [\#11745](https://github.com/matrix-org/synapse/issues/11745))
- Make the list rooms admin API sort stable. Contributed by Daniël Sonck. ([\#11737](https://github.com/matrix-org/synapse/issues/11737))
- Fix a long-standing bug where space hierarchy over federation would only work correctly some of the time. ([\#11775](https://github.com/matrix-org/synapse/issues/11775))
- Fix a bug introduced in Synapse 1.46.0 that prevented `on_logged_out` module callbacks from being correctly awaited by Synapse. ([\#11786](https://github.com/matrix-org/synapse/issues/11786))
Improved Documentation
----------------------
- Warn against using a Let's Encrypt certificate for TLS/DTLS TURN server client connections, and suggest using a ZeroSSL certificate instead. This bypasses client-side connectivity errors caused by WebRTC libraries that reject Let's Encrypt certificates. Contributed by @AndrewFerr. ([\#11686](https://github.com/matrix-org/synapse/issues/11686))
- Document the new `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable in the contributing guide. ([\#11715](https://github.com/matrix-org/synapse/issues/11715))
- Document that the minimum supported PostgreSQL version is now 10. ([\#11725](https://github.com/matrix-org/synapse/issues/11725))
- Fix typo in demo docs: differnt. ([\#11735](https://github.com/matrix-org/synapse/issues/11735))
- Update the room spec URL in config files. ([\#11739](https://github.com/matrix-org/synapse/issues/11739))
- Mention the `python3-venv` and `libpq-dev` dependencies in the contribution guide. ([\#11740](https://github.com/matrix-org/synapse/issues/11740))
- Update documentation for configuring login with Facebook. ([\#11755](https://github.com/matrix-org/synapse/issues/11755))
- Update installation instructions to note that Python 3.6 is no longer supported. ([\#11781](https://github.com/matrix-org/synapse/issues/11781))
Deprecations and Removals
-------------------------
- Remove the unstable `/send_relation` endpoint. ([\#11682](https://github.com/matrix-org/synapse/issues/11682))
- Remove `python_twisted_reactor_pending_calls` prometheus metric. ([\#11724](https://github.com/matrix-org/synapse/issues/11724))
Internal Changes
----------------
- Run `pyupgrade --py37-plus --keep-percent-format` on Synapse. ([\#11685](https://github.com/matrix-org/synapse/issues/11685))
- Use buildkit's cache feature to speed up docker builds. ([\#11691](https://github.com/matrix-org/synapse/issues/11691))
- Use `auto_attribs` and native type hints for attrs classes. ([\#11692](https://github.com/matrix-org/synapse/issues/11692), [\#11768](https://github.com/matrix-org/synapse/issues/11768))
- Remove debug logging for #4422, which has been closed since Synapse 0.99. ([\#11693](https://github.com/matrix-org/synapse/issues/11693))
- Remove fallback code for Python 2. ([\#11699](https://github.com/matrix-org/synapse/issues/11699))
- Add a test for [an edge case](https://github.com/matrix-org/synapse/pull/11532#discussion_r769104461) in the `/sync` logic. ([\#11701](https://github.com/matrix-org/synapse/issues/11701))
- Add the option to write SQLite test databases to disk when running tests. ([\#11702](https://github.com/matrix-org/synapse/issues/11702))
- Improve Complement test output for GitHub Actions. ([\#11707](https://github.com/matrix-org/synapse/issues/11707))
- Fix a typechecker problem related to our (ab)use of `nacl.signing.SigningKey`s. ([\#11714](https://github.com/matrix-org/synapse/issues/11714))
- Fix docstring on `add_account_data_for_user`. ([\#11716](https://github.com/matrix-org/synapse/issues/11716))
- Complement environment variable name change and update `.gitignore`. ([\#11718](https://github.com/matrix-org/synapse/issues/11718))
- Simplify calculation of prometheus metrics for garbage collection. ([\#11723](https://github.com/matrix-org/synapse/issues/11723))
- Improve accuracy of `python_twisted_reactor_tick_time` prometheus metric. ([\#11724](https://github.com/matrix-org/synapse/issues/11724), [\#11771](https://github.com/matrix-org/synapse/issues/11771))
- Minor efficiency improvements when inserting many values into the database. ([\#11742](https://github.com/matrix-org/synapse/issues/11742))
- Invite PR authors to give themselves credit in the changelog. ([\#11744](https://github.com/matrix-org/synapse/issues/11744))
- Add optional debugging to investigate [issue 8631](https://github.com/matrix-org/synapse/issues/8631). ([\#11760](https://github.com/matrix-org/synapse/issues/11760))
- Remove `log_function` utility function and its uses. ([\#11761](https://github.com/matrix-org/synapse/issues/11761))
- Add a unit test that checks both `client` and `webclient` resources will function when simultaneously enabled. ([\#11765](https://github.com/matrix-org/synapse/issues/11765))
- Allow overriding complement commit using `COMPLEMENT_REF`. ([\#11766](https://github.com/matrix-org/synapse/issues/11766))
- Deprecate support for `webclient` listeners and non-HTTP(S) `web_client_location` configuration. ([\#11774](https://github.com/matrix-org/synapse/issues/11774), [\#11783](https://github.com/matrix-org/synapse/issues/11783))
- Add some comments and type annotations for `_update_outliers_txn`. ([\#11776](https://github.com/matrix-org/synapse/issues/11776))
Synapse 1.50.1 (2022-01-18)
===========================

Deleted changelog.d files (one entry per file, duplicates omitted; content folded into CHANGES.md above):

- Fix a long-standing issue which could cause Synapse to incorrectly accept data in the unsigned field of events received over federation.
- Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.
- Remove the `"password_hash"` field from the response dictionaries of the [Users Admin API](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html).
- Include whether the requesting user has participated in a thread when generating a summary for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440).
- Fix a long-standing bug where Synapse wouldn't cache a response indicating that a remote user has no devices.
- Fix an error that occurred while trying to get the federation status of a destination server when no error had occurred. This admin API was newly introduced in Synapse 1.49.0.
- Avoid database access in the JSON serialization process.
- Include the bundled aggregations in the `/sync` response, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675).
- Fix the `/_matrix/client/v1/room/{roomId}/hierarchy` endpoint returning incorrect fields, a bug which has been present since Synapse 1.49.0.
- Fix preview of some GIF URLs (like tenor.com). Contributed by Philippe Daouadi.
- Return an `M_FORBIDDEN` error code instead of `M_UNKNOWN` when a spam checker module prevents a user from creating a room.
- Add a flag to the `synapse_review_recent_signups` script to ignore and filter appservice users.
- Remove the unstable `/send_relation` endpoint.
- Run `pyupgrade --py37-plus --keep-percent-format` on Synapse.
- Warn against using a Let's Encrypt certificate for TLS/DTLS TURN server client connections, and suggest using a ZeroSSL certificate instead. This bypasses client-side connectivity errors caused by WebRTC libraries that reject Let's Encrypt certificates. Contributed by @AndrewFerr.
- Use buildkit's cache feature to speed up docker builds.
- Use `auto_attribs` and native type hints for attrs classes.
- Remove debug logging for #4422, which has been closed since Synapse 0.99.
- Fix a bug where only the first 50 rooms from a space were returned from the `/hierarchy` API. This bug has existed since the introduction of the API in Synapse v1.41.0.
- Remove fallback code for Python 2.
- Add a test for [an edge case](https://github.com/matrix-org/synapse/pull/11532#discussion_r769104461) in the `/sync` logic.
- Add the option to write SQLite test databases to disk when running tests.
- Improve Complement test output for GitHub Actions.
- Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan.
- Fix a typechecker problem related to our (ab)use of `nacl.signing.SigningKey`s.
- Document the new `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable in the contributing guide.
- Fix docstring on `add_account_data_for_user`.
- Complement environment variable name change and update `.gitignore`.
- Simplify calculation of prometheus metrics for garbage collection.
- Improve accuracy of `python_twisted_reactor_tick_time` prometheus metric.
- Remove `python_twisted_reactor_pending_calls` prometheus metric.
- Document that the minimum supported PostgreSQL version is now 10.
- Fix typo in demo docs: differnt.
- Make the list rooms admin API sort stable. Contributed by Daniël Sonck.
- Update the room spec URL in config files.
- Mention the `python3-venv` and `libpq-dev` dependencies in the contribution guide.
- Minor efficiency improvements when inserting many values into the database.
- Invite PR authors to give themselves credit in the changelog.
- Update documentation for configuring login with Facebook.
- Remove `log_function` utility function and its uses.

debian/changelog vendored

@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.51.0~rc1) stable; urgency=medium
+
+  * New synapse release 1.51.0~rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Fri, 21 Jan 2022 10:46:02 +0000
+
 matrix-synapse-py3 (1.50.1) stable; urgency=medium
 
   * New synapse release 1.50.1.

@ -74,13 +74,7 @@ server_name: "SERVERNAME"
 #
 pid_file: DATADIR/homeserver.pid
 
-# The absolute URL to the web client which /_matrix/client will redirect
-# to if 'webclient' is configured under the 'listeners' configuration.
-#
-# This option can be also set to the filesystem path to the web client
-# which will be served at /_matrix/client/ if 'webclient' is configured
-# under the 'listeners' configuration, however this is a security risk:
-# https://github.com/matrix-org/synapse#security-note
+# The absolute URL to the web client which / will redirect to.
 #
 #web_client_location: https://riot.example.com/

@ -310,8 +304,6 @@ presence:
 #   static: static resources under synapse/static (/_matrix/static). (Mostly
 #   useful for 'fallback authentication'.)
 #
-#   webclient: A web client. Requires web_client_location to be set.
-#
 listeners:
   # TLS-enabled listener: for when matrix traffic is sent directly to synapse.
   #

@ -194,7 +194,7 @@ When following this route please make sure that the [Platform-specific prerequis
 System requirements:
 
 - POSIX-compliant system (tested on Linux & OS X)
-- Python 3.6 or later, up to Python 3.9.
+- Python 3.7 or later, up to Python 3.9.
 - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
 
 To install the Synapse homeserver run:

@ -85,6 +85,17 @@ process, for example:
 dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
 ```
# Upgrading to v1.51.0
## Deprecation of `webclient` listeners and non-HTTP(S) `web_client_location`
Listeners of type `webclient` are deprecated and scheduled to be removed in
Synapse v1.53.0.
Similarly, a non-HTTP(S) `web_client_location` configuration is deprecated and
will become a configuration error in Synapse v1.53.0.
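To avoid the deprecation warning, `web_client_location` should point at an HTTP(S) URL so that `/` redirects to a separately hosted web client. The snippet below is an illustrative sketch only; the hostname is a placeholder:

```yaml
# homeserver.yaml (sketch; element.example.com is a placeholder)
# An HTTP(S) URL makes Synapse redirect '/' to the web client.
web_client_location: https://element.example.com/

listeners:
  - port: 8008
    type: http
    tls: false
    resources:
      - names: [client, federation]  # no deprecated 'webclient' resource
```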
# Upgrading to v1.50.0

## Dropping support for old Python and Postgres versions

@ -8,7 +8,8 @@
 # By default the script will fetch the latest Complement master branch and
 # run tests with that. This can be overridden to use a custom Complement
 # checkout by setting the COMPLEMENT_DIR environment variable to the
-# filepath of a local Complement checkout.
+# filepath of a local Complement checkout or by setting the COMPLEMENT_REF
+# environment variable to pull a different branch or commit.
 #
 # By default Synapse is run in monolith mode. This can be overridden by
 # setting the WORKERS environment variable.
@ -31,11 +32,12 @@ cd "$(dirname $0)/.."
 # Check for a user-specified Complement checkout
 if [[ -z "$COMPLEMENT_DIR" ]]; then
-  echo "COMPLEMENT_DIR not set. Fetching the latest Complement checkout..."
-  wget -Nq https://github.com/matrix-org/complement/archive/master.tar.gz
-  tar -xzf master.tar.gz
-  COMPLEMENT_DIR=complement-master
-  echo "Checkout available at 'complement-master'"
+  COMPLEMENT_REF=${COMPLEMENT_REF:-master}
+  echo "COMPLEMENT_DIR not set. Fetching Complement checkout from ${COMPLEMENT_REF}..."
+  wget -Nq https://github.com/matrix-org/complement/archive/${COMPLEMENT_REF}.tar.gz
+  tar -xzf ${COMPLEMENT_REF}.tar.gz
+  COMPLEMENT_DIR=complement-${COMPLEMENT_REF}
+  echo "Checkout available at 'complement-${COMPLEMENT_REF}'"
 fi
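The override relies on standard shell default expansion; a minimal standalone sketch (the variable name matches the script, the directory name it prints is what the script would use):

```shell
#!/bin/sh
# ${COMPLEMENT_REF:-master} expands to $COMPLEMENT_REF if it is set and
# non-empty, and to the literal fallback "master" otherwise.
COMPLEMENT_REF=${COMPLEMENT_REF:-master}
echo "complement-${COMPLEMENT_REF}"
# prints "complement-master" when COMPLEMENT_REF is not set
```

Invoking the script as `COMPLEMENT_REF=some-branch ./scripts-dev/complement.sh` (branch name hypothetical) would therefore download and unpack `complement-some-branch` instead of `complement-master`.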
# Build the base Synapse image from the local checkout # Build the base Synapse image from the local checkout

@ -47,7 +47,7 @@ try:
 except ImportError:
     pass
 
-__version__ = "1.50.1"
+__version__ = "1.51.0rc1"
 
 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when

@ -131,9 +131,18 @@ class SynapseHomeServer(HomeServer):
             resources.update(self._module_web_resources)
             self._module_web_resources_consumed = True
 
-        # try to find something useful to redirect '/' to
-        if WEB_CLIENT_PREFIX in resources:
-            root_resource: Resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
+        # Try to find something useful to serve at '/':
+        #
+        # 1. Redirect to the web client if it is an HTTP(S) URL.
+        # 2. Redirect to the web client served via Synapse.
+        # 3. Redirect to the static "Synapse is running" page.
+        # 4. Do not redirect and use a blank resource.
+        if self.config.server.web_client_location_is_redirect:
+            root_resource: Resource = RootOptionsRedirectResource(
+                self.config.server.web_client_location
+            )
+        elif WEB_CLIENT_PREFIX in resources:
+            root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
         elif STATIC_PREFIX in resources:
             root_resource = RootOptionsRedirectResource(STATIC_PREFIX)
         else:

@ -262,15 +271,15 @@ class SynapseHomeServer(HomeServer):
             resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
 
         if name == "webclient":
+            # webclient listeners are deprecated as of Synapse v1.51.0, remove it
+            # in > v1.53.0.
             webclient_loc = self.config.server.web_client_location
 
             if webclient_loc is None:
                 logger.warning(
                     "Not enabling webclient resource, as web_client_location is unset."
                 )
-            elif webclient_loc.startswith("http://") or webclient_loc.startswith(
-                "https://"
-            ):
+            elif self.config.server.web_client_location_is_redirect:
                 resources[WEB_CLIENT_PREFIX] = RootRedirect(webclient_loc)
             else:
                 logger.warning(

@ -259,7 +259,6 @@ class ServerConfig(Config):
             raise ConfigError(str(e))
 
         self.pid_file = self.abspath(config.get("pid_file"))
-        self.web_client_location = config.get("web_client_location", None)
         self.soft_file_limit = config.get("soft_file_limit", 0)
         self.daemonize = config.get("daemonize")
         self.print_pidfile = config.get("print_pidfile")

@ -506,8 +505,17 @@ class ServerConfig(Config):
                 l2.append(listener)
         self.listeners = l2
 
-        if not self.web_client_location:
-            _warn_if_webclient_configured(self.listeners)
+        self.web_client_location = config.get("web_client_location", None)
+        self.web_client_location_is_redirect = self.web_client_location and (
+            self.web_client_location.startswith("http://")
+            or self.web_client_location.startswith("https://")
+        )
+
+        # A non-HTTP(S) web client location is deprecated.
+        if self.web_client_location and not self.web_client_location_is_redirect:
+            logger.warning(NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING)
+
+        # Warn if webclient is configured for a worker.
+        _warn_if_webclient_configured(self.listeners)
 
         self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
         self.gc_seconds = self.read_gc_intervals(config.get("gc_min_interval", None))

@ -793,13 +801,7 @@ class ServerConfig(Config):
 #
 pid_file: %(pid_file)s
 
-# The absolute URL to the web client which /_matrix/client will redirect
-# to if 'webclient' is configured under the 'listeners' configuration.
-#
-# This option can be also set to the filesystem path to the web client
-# which will be served at /_matrix/client/ if 'webclient' is configured
-# under the 'listeners' configuration, however this is a security risk:
-# https://github.com/matrix-org/synapse#security-note
+# The absolute URL to the web client which / will redirect to.
 #
 #web_client_location: https://riot.example.com/

@ -1011,8 +1013,6 @@ class ServerConfig(Config):
 #   static: static resources under synapse/static (/_matrix/static). (Mostly
 #   useful for 'fallback authentication'.)
 #
-#   webclient: A web client. Requires web_client_location to be set.
-#
 listeners:
   # TLS-enabled listener: for when matrix traffic is sent directly to synapse.
   #

@ -1349,9 +1349,15 @@ def parse_listener_def(listener: Any) -> ListenerConfig:
     return ListenerConfig(port, bind_addresses, listener_type, tls, http_config)
 
+NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING = """
+Synapse no longer supports serving a web client. To remove this warning,
+configure 'web_client_location' with an HTTP(S) URL.
+"""
+
 NO_MORE_WEB_CLIENT_WARNING = """
-Synapse no longer includes a web client. To enable a web client, configure
-web_client_location. To remove this warning, remove 'webclient' from the 'listeners'
+Synapse no longer includes a web client. To redirect the root resource to a web client, configure
+'web_client_location'. To remove this warning, remove 'webclient' from the 'listeners'
 configuration.
 """

@ -402,7 +402,7 @@ class EventClientSerializer:
         if bundle_aggregations:
             event_aggregations = bundle_aggregations.get(event.event_id)
             if event_aggregations:
-                self._injected_bundled_aggregations(
+                self._inject_bundled_aggregations(
                     event,
                     time_now,
                     bundle_aggregations[event.event_id],

@ -411,7 +411,7 @@ class EventClientSerializer:
         return serialized_event
 
-    def _injected_bundled_aggregations(
+    def _inject_bundled_aggregations(
         self,
         event: EventBase,
         time_now: int,

@ -118,7 +118,8 @@ class FederationClient(FederationBase):
         # It is a map of (room ID, suggested-only) -> the response of
         # get_room_hierarchy.
         self._get_room_hierarchy_cache: ExpiringCache[
-            Tuple[str, bool], Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]
+            Tuple[str, bool],
+            Tuple[JsonDict, Sequence[JsonDict], Sequence[JsonDict], Sequence[str]],
         ] = ExpiringCache(
             cache_name="get_room_hierarchy_cache",
             clock=self._clock,

@ -1333,7 +1334,7 @@ class FederationClient(FederationBase):
         destinations: Iterable[str],
         room_id: str,
         suggested_only: bool,
-    ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]:
+    ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[JsonDict], Sequence[str]]:
         """
         Call other servers to get a hierarchy of the given room.

@ -1348,7 +1349,8 @@ class FederationClient(FederationBase):
         Returns:
             A tuple of:
-                The room as a JSON dictionary.
+                The room as a JSON dictionary, without a "children_state" key.
+                A list of `m.space.child` state events.
                 A list of children rooms, as JSON dictionaries.
                 A list of inaccessible children room IDs.

@ -1363,7 +1365,7 @@ class FederationClient(FederationBase):
         async def send_request(
             destination: str,
-        ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]:
+        ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[JsonDict], Sequence[str]]:
             try:
                 res = await self.transport_layer.get_room_hierarchy(
                     destination=destination,

@ -1392,7 +1394,7 @@ class FederationClient(FederationBase):
                 raise InvalidResponseError("'room' must be a dict")
 
             # Validate children_state of the room.
-            children_state = room.get("children_state", [])
+            children_state = room.pop("children_state", [])
             if not isinstance(children_state, Sequence):
                 raise InvalidResponseError("'room.children_state' must be a list")
             if any(not isinstance(e, dict) for e in children_state):

@ -1421,7 +1423,7 @@ class FederationClient(FederationBase):
                     "Invalid room ID in 'inaccessible_children' list"
                 )
 
-            return room, children, inaccessible_children
+            return room, children_state, children, inaccessible_children
 
         try:
             result = await self._try_destination_list(

@ -1469,8 +1471,6 @@ class FederationClient(FederationBase):
             if event.room_id == room_id:
                 children_events.append(event.data)
                 children_room_ids.add(event.state_key)
-        # And add them under the requested room.
-        requested_room["children_state"] = children_events
 
         # Find the children rooms.
         children = []

@ -1480,7 +1480,7 @@ class FederationClient(FederationBase):
         # It isn't clear from the response whether some of the rooms are
         # not accessible.
-        result = (requested_room, children, ())
+        result = (requested_room, children_events, children, ())
 
         # Cache the result to avoid fetching data over federation every time.
         self._get_room_hierarchy_cache[(room_id, suggested_only)] = result

@ -35,6 +35,7 @@ if TYPE_CHECKING:
     import synapse.server
 
 logger = logging.getLogger(__name__)
+issue_8631_logger = logging.getLogger("synapse.8631_debug")
 
 last_pdu_ts_metric = Gauge(
     "synapse_federation_last_sent_pdu_time",

@ -124,6 +125,17 @@ class TransactionManager:
             len(pdus),
             len(edus),
         )
+
+        if issue_8631_logger.isEnabledFor(logging.DEBUG):
+            DEVICE_UPDATE_EDUS = {"m.device_list_update", "m.signing_key_update"}
+            device_list_updates = [
+                edu.content for edu in edus if edu.edu_type in DEVICE_UPDATE_EDUS
+            ]
+            if device_list_updates:
+                issue_8631_logger.debug(
+                    "about to send txn [%s] including device list updates: %s",
+                    transaction.transaction_id,
+                    device_list_updates,
+                )
 
         # Actually send the transaction

@ -36,6 +36,7 @@ from synapse.util.ratelimitutils import FederationRateLimiter
 from synapse.util.versionstring import get_version_string
 
 logger = logging.getLogger(__name__)
+issue_8631_logger = logging.getLogger("synapse.8631_debug")
 
 
 class BaseFederationServerServlet(BaseFederationServlet):

@ -95,6 +96,20 @@ class FederationSendServlet(BaseFederationServerServlet):
                 len(transaction_data.get("edus", [])),
             )
 
+            if issue_8631_logger.isEnabledFor(logging.DEBUG):
+                DEVICE_UPDATE_EDUS = {"m.device_list_update", "m.signing_key_update"}
+                device_list_updates = [
+                    edu.content
+                    for edu in transaction_data.get("edus", [])
+                    if edu.edu_type in DEVICE_UPDATE_EDUS
+                ]
+                if device_list_updates:
+                    issue_8631_logger.debug(
+                        "received transaction [%s] including device list updates: %s",
+                        transaction_id,
+                        device_list_updates,
+                    )
+
         except Exception as e:
             logger.exception(e)
             return 400, {"error": "Invalid transaction"}

View File

@@ -2281,7 +2281,7 @@ class PasswordAuthProvider:
         # call all of the on_logged_out callbacks
         for callback in self.on_logged_out_callbacks:
             try:
-                callback(user_id, device_id, access_token)
+                await callback(user_id, device_id, access_token)
             except Exception as e:
                 logger.warning("Failed to run module API callback %s: %s", callback, e)
                 continue
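The one-word fix above matters because calling an async callback without `await` merely creates a coroutine object; the callback body never runs (and the surrounding `try`/`except` can never observe its failures). A self-contained sketch of the corrected loop, with illustrative names rather than Synapse's module API:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


async def run_logout_callbacks(callbacks, user_id, device_id, access_token):
    """Run each (possibly async) callback, collecting results.

    Without the `await`, each call would just build a coroutine object and
    silently discard it -- the bug fixed in the diff above.
    """
    results = []
    for callback in callbacks:
        try:
            results.append(await callback(user_id, device_id, access_token))
        except Exception as e:
            # One failing module callback should not stop the others.
            logger.warning("Failed to run callback %s: %s", callback, e)
            continue
    return results
```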
@@ -979,18 +979,16 @@ class RegistrationHandler:
         if (
             self.hs.config.email.email_enable_notifs
             and self.hs.config.email.email_notif_for_new_users
+            and token
         ):
             # Pull the ID of the access token back out of the db
             # It would really make more sense for this to be passed
             # up when the access token is saved, but that's quite an
             # invasive change I'd rather do separately.
-            if token:
-                user_tuple = await self.store.get_user_by_access_token(token)
-                # The token better still exist.
-                assert user_tuple
-                token_id = user_tuple.token_id
-            else:
-                token_id = None
+            user_tuple = await self.store.get_user_by_access_token(token)
+            # The token better still exist.
+            assert user_tuple
+            token_id = user_tuple.token_id

             await self.pusher_pool.add_pusher(
                 user_id=user_id,
@@ -780,6 +780,7 @@ class RoomSummaryHandler:
         try:
             (
                 room_response,
+                children_state_events,
                 children,
                 inaccessible_children,
             ) = await self._federation_client.get_room_hierarchy(
@@ -804,7 +805,7 @@
         }

         return (
-            _RoomEntry(room_id, room_response, room_response.pop("children_state", ())),
+            _RoomEntry(room_id, room_response, children_state_events),
             children_by_room_id,
             set(inaccessible_children),
         )
@@ -35,7 +35,7 @@ tick_time = Histogram(
 class EpollWrapper:
     """a wrapper for an epoll object which records the time between polls"""

-    def __init__(self, poller: "select.epoll"):
+    def __init__(self, poller: "select.epoll"):  # type: ignore[name-defined]
         self.last_polled = time.time()
         self._poller = poller
@@ -71,7 +71,7 @@ try:
     # if the reactor has a `_poller` attribute, which is an `epoll` object
     # (ie, it's an EPollReactor), we wrap the `epoll` with a thing that will
     # measure the time between ticks
-    from select import epoll
+    from select import epoll  # type: ignore[attr-defined]

     poller = reactor._poller  # type: ignore[attr-defined]
 except (AttributeError, ImportError):
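The `EpollWrapper` touched above measures reactor health by timing the gap between successive polls: a long gap means the event loop spent a long time processing work before returning to wait for I/O. A simplified, dependency-free sketch of that wrapping technique (here `recorder` is a plain list standing in for the Prometheus histogram):

```python
import time


class TimedPoller:
    """Wrap a poller-like object and record the gap between polls.

    A simplified sketch of the EpollWrapper idea; any object with a
    compatible `poll` method can be wrapped.
    """

    def __init__(self, poller, recorder):
        self.last_polled = time.time()
        self._poller = poller
        self._recorder = recorder

    def poll(self, *args, **kwargs):
        # Time since the *previous* poll returned: how long the loop spent
        # processing work before coming back to wait for events.
        self._recorder.append(time.time() - self.last_polled)
        ret = self._poller.poll(*args, **kwargs)
        self.last_polled = time.time()
        return ret
```

Because the wrapper forwards `poll` transparently, it can be slotted in front of the reactor's real poller without the reactor noticing.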
@@ -744,20 +744,15 @@ class RoomEventContextServlet(RestServlet):
         )

         time_now = self.clock.time_msec()
+        aggregations = results.pop("aggregations", None)
         results["events_before"] = self._event_serializer.serialize_events(
-            results["events_before"],
-            time_now,
-            bundle_aggregations=results["aggregations"],
+            results["events_before"], time_now, bundle_aggregations=aggregations
         )
         results["event"] = self._event_serializer.serialize_event(
-            results["event"],
-            time_now,
-            bundle_aggregations=results["aggregations"],
+            results["event"], time_now, bundle_aggregations=aggregations
         )
         results["events_after"] = self._event_serializer.serialize_events(
-            results["events_after"],
-            time_now,
-            bundle_aggregations=results["aggregations"],
+            results["events_after"], time_now, bundle_aggregations=aggregations
         )
         results["state"] = self._event_serializer.serialize_events(
             results["state"], time_now
@@ -714,18 +714,15 @@ class RoomEventContextServlet(RestServlet):
             raise SynapseError(404, "Event not found.", errcode=Codes.NOT_FOUND)

         time_now = self.clock.time_msec()
+        aggregations = results.pop("aggregations", None)
         results["events_before"] = self._event_serializer.serialize_events(
-            results["events_before"],
-            time_now,
-            bundle_aggregations=results["aggregations"],
+            results["events_before"], time_now, bundle_aggregations=aggregations
         )
         results["event"] = self._event_serializer.serialize_event(
-            results["event"], time_now, bundle_aggregations=results["aggregations"]
+            results["event"], time_now, bundle_aggregations=aggregations
         )
         results["events_after"] = self._event_serializer.serialize_events(
-            results["events_after"],
-            time_now,
-            bundle_aggregations=results["aggregations"],
+            results["events_after"], time_now, bundle_aggregations=aggregations
         )
         results["state"] = self._event_serializer.serialize_events(
             results["state"], time_now
@@ -53,6 +53,7 @@ if TYPE_CHECKING:
     from synapse.server import HomeServer

 logger = logging.getLogger(__name__)
+issue_8631_logger = logging.getLogger("synapse.8631_debug")

 DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES = (
     "drop_device_list_streams_non_unique_indexes"
@@ -229,6 +230,12 @@ class DeviceWorkerStore(SQLBaseStore):
         if not updates:
             return now_stream_id, []

+        if issue_8631_logger.isEnabledFor(logging.DEBUG):
+            data = {(user, device): stream_id for user, device, stream_id, _ in updates}
+            issue_8631_logger.debug(
+                "device updates need to be sent to %s: %s", destination, data
+            )
+
         # get the cross-signing keys of the users in the list, so that we can
         # determine which of the device changes were cross-signing keys
         users = {r[0] for r in updates}
@@ -365,6 +372,17 @@ class DeviceWorkerStore(SQLBaseStore):
             # and remove the length budgeting above.
             results.append(("org.matrix.signing_key_update", result))

+        if issue_8631_logger.isEnabledFor(logging.DEBUG):
+            for (user_id, edu) in results:
+                issue_8631_logger.debug(
+                    "device update to %s for %s from %s to %s: %s",
+                    destination,
+                    user_id,
+                    from_stream_id,
+                    last_processed_stream_id,
+                    edu,
+                )
+
         return last_processed_stream_id, results

     def _get_device_updates_by_remote_txn(
@@ -1254,20 +1254,22 @@ class PersistEventsStore:
         for room_id, depth in depth_updates.items():
             self._update_min_depth_for_room_txn(txn, room_id, depth)

-    def _update_outliers_txn(self, txn, events_and_contexts):
+    def _update_outliers_txn(
+        self,
+        txn: LoggingTransaction,
+        events_and_contexts: List[Tuple[EventBase, EventContext]],
+    ) -> List[Tuple[EventBase, EventContext]]:
         """Update any outliers with new event info.

-        This turns outliers into ex-outliers (unless the new event was
-        rejected).
+        This turns outliers into ex-outliers (unless the new event was rejected), and
+        also removes any other events we have already seen from the list.

         Args:
-            txn (twisted.enterprise.adbapi.Connection): db connection
-            events_and_contexts (list[(EventBase, EventContext)]): events
-                we are persisting
+            txn: db connection
+            events_and_contexts: events we are persisting

         Returns:
-            list[(EventBase, EventContext)] new list, without events which
-            are already in the events table.
+            new list, without events which are already in the events table.
         """
         txn.execute(
             "SELECT event_id, outlier FROM events WHERE event_id in (%s)"
@@ -1275,7 +1277,9 @@
             [event.event_id for event, _ in events_and_contexts],
         )

-        have_persisted = {event_id: outlier for event_id, outlier in txn}
+        have_persisted: Dict[str, bool] = {
+            event_id: outlier for event_id, outlier in txn
+        }

         to_remove = set()
         for event, context in events_and_contexts:
@@ -1285,15 +1289,22 @@
             to_remove.add(event)

             if context.rejected:
-                # If the event is rejected then we don't care if the event
-                # was an outlier or not.
+                # If the incoming event is rejected then we don't care if the event
+                # was an outlier or not - what we have is at least as good.
                 continue

             outlier_persisted = have_persisted[event.event_id]
             if not event.internal_metadata.is_outlier() and outlier_persisted:
                 # We received a copy of an event that we had already stored as
-                # an outlier in the database. We now have some state at that
+                # an outlier in the database. We now have some state at that event
                 # so we need to update the state_groups table with that state.
+                #
+                # Note that we do not update the stream_ordering of the event in this
+                # scenario. XXX: does this cause bugs? It will mean we won't send such
+                # events down /sync. In general they will be historical events, so that
+                # doesn't matter too much, but that is not always the case.
+
+                logger.info("Updating state for ex-outlier event %s", event.event_id)

                 # insert into event_to_state_groups.
                 try:
@@ -22,7 +22,7 @@ from twisted.internet import defer
 import synapse
 from synapse.handlers.auth import load_legacy_password_auth_providers
 from synapse.module_api import ModuleApi
-from synapse.rest.client import devices, login
+from synapse.rest.client import devices, login, logout
 from synapse.types import JsonDict

 from tests import unittest
@@ -155,6 +155,7 @@ class PasswordAuthProviderTests(unittest.HomeserverTestCase):
         synapse.rest.admin.register_servlets,
         login.register_servlets,
         devices.register_servlets,
+        logout.register_servlets,
     ]

     def setUp(self):
@@ -719,6 +720,31 @@ class PasswordAuthProviderTests(unittest.HomeserverTestCase):
         channel = self._send_password_login("localuser", "localpass")
         self.assertEqual(channel.code, 400, channel.result)

+    def test_on_logged_out(self):
+        """Tests that the on_logged_out callback is called when the user logs out."""
+        self.register_user("rin", "password")
+        tok = self.login("rin", "password")
+
+        self.called = False
+
+        async def on_logged_out(user_id, device_id, access_token):
+            self.called = True
+
+        on_logged_out = Mock(side_effect=on_logged_out)
+        self.hs.get_password_auth_provider().on_logged_out_callbacks.append(
+            on_logged_out
+        )
+
+        channel = self.make_request(
+            "POST",
+            "/_matrix/client/v3/logout",
+            {},
+            access_token=tok,
+        )
+        self.assertEqual(channel.code, 200)
+        on_logged_out.assert_called_once()
+        self.assertTrue(self.called)
+
     def _get_login_flows(self) -> JsonDict:
         channel = self.make_request("GET", "/_matrix/client/r0/login")
         self.assertEqual(channel.code, 200, channel.result)
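The `test_on_logged_out` test above uses a handy trick: wrapping a coroutine function in `Mock(side_effect=...)` keeps the callback awaitable (the mock returns the coroutine produced by the side effect) while also recording how it was called. A self-contained sketch with illustrative user/device/token values:

```python
import asyncio
from unittest.mock import Mock


# A coroutine function standing in for a module's on_logged_out callback.
async def on_logged_out(user_id, device_id, access_token):
    return f"logged out {user_id}"


# Mock(side_effect=coro_fn) returns the coroutine when called, so the
# wrapped callback can still be awaited -- and call counts are tracked.
mock_cb = Mock(side_effect=on_logged_out)


async def fire(cb):
    return await cb("@rin:example.org", "DEVICE", "token")
```

Note that plain `Mock()` without the side effect would return a non-awaitable `Mock` object, so `await cb(...)` would fail.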
@@ -28,6 +28,7 @@ from synapse.api.constants import (
 from synapse.api.errors import AuthError, NotFoundError, SynapseError
 from synapse.api.room_versions import RoomVersions
 from synapse.events import make_event_from_dict
+from synapse.federation.transport.client import TransportLayerClient
 from synapse.handlers.room_summary import _child_events_comparison_key, _RoomEntry
 from synapse.rest import admin
 from synapse.rest.client import login, room
@@ -134,10 +135,18 @@ class SpaceSummaryTestCase(unittest.HomeserverTestCase):
         self._add_child(self.space, self.room, self.token)

     def _add_child(
-        self, space_id: str, room_id: str, token: str, order: Optional[str] = None
+        self,
+        space_id: str,
+        room_id: str,
+        token: str,
+        order: Optional[str] = None,
+        via: Optional[List[str]] = None,
     ) -> None:
         """Add a child room to a space."""
-        content: JsonDict = {"via": [self.hs.hostname]}
+        if via is None:
+            via = [self.hs.hostname]
+
+        content: JsonDict = {"via": via}
         if order is not None:
             content["order"] = order
         self.helper.send_state(
@@ -1036,6 +1045,85 @@ class SpaceSummaryTestCase(unittest.HomeserverTestCase):
         )
         self._assert_hierarchy(result, expected)

+    def test_fed_caching(self):
+        """
+        Federation `/hierarchy` responses should be cached.
+        """
+        fed_hostname = self.hs.hostname + "2"
+        fed_subspace = "#space:" + fed_hostname
+        fed_room = "#room:" + fed_hostname
+
+        # Add a room to the space which is on another server.
+        self._add_child(self.space, fed_subspace, self.token, via=[fed_hostname])
+
+        federation_requests = 0
+
+        async def get_room_hierarchy(
+            _self: TransportLayerClient,
+            destination: str,
+            room_id: str,
+            suggested_only: bool,
+        ) -> JsonDict:
+            nonlocal federation_requests
+            federation_requests += 1
+
+            return {
+                "room": {
+                    "room_id": fed_subspace,
+                    "world_readable": True,
+                    "room_type": RoomTypes.SPACE,
+                    "children_state": [
+                        {
+                            "type": EventTypes.SpaceChild,
+                            "room_id": fed_subspace,
+                            "state_key": fed_room,
+                            "content": {"via": [fed_hostname]},
+                        },
+                    ],
+                },
+                "children": [
+                    {
+                        "room_id": fed_room,
+                        "world_readable": True,
+                    },
+                ],
+                "inaccessible_children": [],
+            }
+
+        expected = [
+            (self.space, [self.room, fed_subspace]),
+            (self.room, ()),
+            (fed_subspace, [fed_room]),
+            (fed_room, ()),
+        ]
+
+        with mock.patch(
+            "synapse.federation.transport.client.TransportLayerClient.get_room_hierarchy",
+            new=get_room_hierarchy,
+        ):
+            result = self.get_success(
+                self.handler.get_room_hierarchy(create_requester(self.user), self.space)
+            )
+            self.assertEqual(federation_requests, 1)
+            self._assert_hierarchy(result, expected)
+
+            # The previous federation response should be reused.
+            result = self.get_success(
+                self.handler.get_room_hierarchy(create_requester(self.user), self.space)
+            )
+            self.assertEqual(federation_requests, 1)
+            self._assert_hierarchy(result, expected)
+
+            # Expire the response cache
+            self.reactor.advance(5 * 60 + 1)
+
+            # A new federation request should be made.
+            result = self.get_success(
+                self.handler.get_room_hierarchy(create_requester(self.user), self.space)
+            )
+            self.assertEqual(federation_requests, 2)
+            self._assert_hierarchy(result, expected)
+
 class RoomSummaryTestCase(unittest.HomeserverTestCase):
     servlets = [
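`test_fed_caching` above verifies TTL-cache behaviour by counting upstream fetches and advancing a fake reactor clock past the cache window. The underlying idea can be sketched with a minimal TTL cache and an injectable clock (this is an illustration, not Synapse's real `ResponseCache`; the 5-minute TTL mirrors the expiry the test exercises):

```python
import time


class TTLResponseCache:
    """A minimal time-to-live cache sketch with an injectable clock,
    which makes the expiry behaviour trivially testable."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock
        self._entries = {}  # key -> (inserted_at, value)

    def get_or_fetch(self, key, fetch):
        now = self._clock()
        hit = self._entries.get(key)
        if hit is not None and now - hit[0] < self._ttl:
            return hit[1]  # fresh cached response; no upstream request
        value = fetch()
        self._entries[key] = (now, value)
        return value
```

Injecting the clock is what lets a test "advance time" instantly, just as `self.reactor.advance(5 * 60 + 1)` does against Synapse's fake reactor.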
@@ -0,0 +1,108 @@
+# Copyright 2022 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from http import HTTPStatus
+from typing import Dict
+
+from twisted.web.resource import Resource
+
+from synapse.app.homeserver import SynapseHomeServer
+from synapse.config.server import HttpListenerConfig, HttpResourceConfig, ListenerConfig
+from synapse.http.site import SynapseSite
+
+from tests.server import make_request
+from tests.unittest import HomeserverTestCase, create_resource_tree, override_config
+
+
+class WebClientTests(HomeserverTestCase):
+    @override_config(
+        {
+            "web_client_location": "https://example.org",
+        }
+    )
+    def test_webclient_resolves_with_client_resource(self):
+        """
+        Tests that both client and webclient resources can be accessed simultaneously.
+
+        This is a regression test created in response to https://github.com/matrix-org/synapse/issues/11763.
+        """
+        for resource_name_order_list in [
+            ["webclient", "client"],
+            ["client", "webclient"],
+        ]:
+            # Create a dictionary from path regex -> resource
+            resource_dict: Dict[str, Resource] = {}
+
+            for resource_name in resource_name_order_list:
+                resource_dict.update(
+                    SynapseHomeServer._configure_named_resource(self.hs, resource_name)
+                )
+
+            # Create a root resource which ties the above resources together into one
+            root_resource = Resource()
+            create_resource_tree(resource_dict, root_resource)
+
+            # Create a site configured with this resource to make HTTP requests against
+            listener_config = ListenerConfig(
+                port=8008,
+                bind_addresses=["127.0.0.1"],
+                type="http",
+                http_options=HttpListenerConfig(
+                    resources=[HttpResourceConfig(names=resource_name_order_list)]
+                ),
+            )
+            test_site = SynapseSite(
+                logger_name="synapse.access.http.fake",
+                site_tag=self.hs.config.server.server_name,
+                config=listener_config,
+                resource=root_resource,
+                server_version_string="1",
+                max_request_body_size=1234,
+                reactor=self.reactor,
+            )
+
+            # Attempt to make requests to endpoints on both the webclient and client resources
+            # on test_site.
+            self._request_client_and_webclient_resources(test_site)
+
+    def _request_client_and_webclient_resources(self, test_site: SynapseSite) -> None:
+        """Make a request to an endpoint on both the webclient and client-server resources
+        of the given SynapseSite.
+
+        Args:
+            test_site: The SynapseSite object to make requests against.
+        """
+
+        # Ensure that the *webclient* resource is behaving as expected (we get redirected to
+        # the configured web_client_location)
+        channel = make_request(
+            self.reactor,
+            site=test_site,
+            method="GET",
+            path="/_matrix/client",
+        )
+        # Check that we are being redirected to the webclient location URI.
+        self.assertEqual(channel.code, HTTPStatus.FOUND)
+        self.assertEqual(
+            channel.headers.getRawHeaders("Location"), ["https://example.org"]
+        )
+
+        # Ensure that a request to the *client* resource works.
+        channel = make_request(
+            self.reactor,
+            site=test_site,
+            method="GET",
+            path="/_matrix/client/v3/login",
+        )
+        self.assertEqual(channel.code, HTTPStatus.OK)
+        self.assertIn("flows", channel.json_body)
@@ -21,6 +21,7 @@ from unittest.mock import patch
 from synapse.api.constants import EventTypes, RelationTypes
 from synapse.rest import admin
 from synapse.rest.client import login, register, relations, room, sync
+from synapse.types import JsonDict

 from tests import unittest
 from tests.server import FakeChannel
@@ -454,7 +455,14 @@ class RelationsTestCase(unittest.HomeserverTestCase):
     @unittest.override_config({"experimental_features": {"msc3440_enabled": True}})
     def test_bundled_aggregations(self):
-        """Test that annotations, references, and threads get correctly bundled."""
+        """
+        Test that annotations, references, and threads get correctly bundled.
+
+        Note that this doesn't test against /relations since only thread relations
+        get bundled via that API. See test_aggregation_get_event_for_thread.
+
+        See test_edit for a similar test for edits.
+        """
         # Setup by sending a variety of relations.
         channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", "a")
         self.assertEquals(200, channel.code, channel.json_body)
@@ -482,12 +490,13 @@
         self.assertEquals(200, channel.code, channel.json_body)
         thread_2 = channel.json_body["event_id"]

-        def assert_bundle(actual):
+        def assert_bundle(event_json: JsonDict) -> None:
             """Assert the expected values of the bundled aggregations."""
+            relations_dict = event_json["unsigned"].get("m.relations")

             # Ensure the fields are as expected.
             self.assertCountEqual(
-                actual.keys(),
+                relations_dict.keys(),
                 (
                     RelationTypes.ANNOTATION,
                     RelationTypes.REFERENCE,
@@ -503,20 +512,20 @@
                         {"type": "m.reaction", "key": "b", "count": 1},
                     ]
                 },
-                actual[RelationTypes.ANNOTATION],
+                relations_dict[RelationTypes.ANNOTATION],
             )

             self.assertEquals(
                 {"chunk": [{"event_id": reply_1}, {"event_id": reply_2}]},
-                actual[RelationTypes.REFERENCE],
+                relations_dict[RelationTypes.REFERENCE],
             )

             self.assertEquals(
                 2,
-                actual[RelationTypes.THREAD].get("count"),
+                relations_dict[RelationTypes.THREAD].get("count"),
             )
             self.assertTrue(
-                actual[RelationTypes.THREAD].get("current_user_participated")
+                relations_dict[RelationTypes.THREAD].get("current_user_participated")
             )

             # The latest thread event has some fields that don't matter.
             self.assert_dict(
@@ -533,20 +542,9 @@
                     "type": "m.room.test",
                     "user_id": self.user_id,
                 },
-                actual[RelationTypes.THREAD].get("latest_event"),
+                relations_dict[RelationTypes.THREAD].get("latest_event"),
             )

-        def _find_and_assert_event(events):
-            """
-            Find the parent event in a chunk of events and assert that it has the proper bundled aggregations.
-            """
-            for event in events:
-                if event["event_id"] == self.parent_id:
-                    break
-            else:
-                raise AssertionError(f"Event {self.parent_id} not found in chunk")
-            assert_bundle(event["unsigned"].get("m.relations"))
-
         # Request the event directly.
         channel = self.make_request(
             "GET",
@@ -554,7 +552,7 @@
             access_token=self.user_token,
         )
         self.assertEquals(200, channel.code, channel.json_body)
-        assert_bundle(channel.json_body["unsigned"].get("m.relations"))
+        assert_bundle(channel.json_body)

         # Request the room messages.
         channel = self.make_request(
@@ -563,7 +561,7 @@
             access_token=self.user_token,
         )
         self.assertEquals(200, channel.code, channel.json_body)
-        _find_and_assert_event(channel.json_body["chunk"])
+        assert_bundle(self._find_event_in_chunk(channel.json_body["chunk"]))

         # Request the room context.
         channel = self.make_request(
@@ -572,17 +570,14 @@
             access_token=self.user_token,
         )
         self.assertEquals(200, channel.code, channel.json_body)
-        assert_bundle(channel.json_body["event"]["unsigned"].get("m.relations"))
+        assert_bundle(channel.json_body["event"])

         # Request sync.
         channel = self.make_request("GET", "/sync", access_token=self.user_token)
         self.assertEquals(200, channel.code, channel.json_body)
         room_timeline = channel.json_body["rooms"]["join"][self.room]["timeline"]
         self.assertTrue(room_timeline["limited"])
-        _find_and_assert_event(room_timeline["events"])
-
-        # Note that /relations is tested separately in test_aggregation_get_event_for_thread
-        # since it needs different data configured.
+        self._find_event_in_chunk(room_timeline["events"])

     def test_aggregation_get_event_for_annotation(self):
         """Test that annotations do not get bundled aggregations included
@@ -777,25 +772,58 @@
         edit_event_id = channel.json_body["event_id"]

+        def assert_bundle(event_json: JsonDict) -> None:
+            """Assert the expected values of the bundled aggregations."""
+            relations_dict = event_json["unsigned"].get("m.relations")
+            self.assertIn(RelationTypes.REPLACE, relations_dict)
+
+            m_replace_dict = relations_dict[RelationTypes.REPLACE]
+            for key in ["event_id", "sender", "origin_server_ts"]:
+                self.assertIn(key, m_replace_dict)
+
+            self.assert_dict(
+                {"event_id": edit_event_id, "sender": self.user_id}, m_replace_dict
+            )
+
         channel = self.make_request(
             "GET",
-            "/rooms/%s/event/%s" % (self.room, self.parent_id),
+            f"/rooms/{self.room}/event/{self.parent_id}",
             access_token=self.user_token,
         )
         self.assertEquals(200, channel.code, channel.json_body)
         self.assertEquals(channel.json_body["content"], new_body)
+        assert_bundle(channel.json_body)

-        relations_dict = channel.json_body["unsigned"].get("m.relations")
-        self.assertIn(RelationTypes.REPLACE, relations_dict)
-
-        m_replace_dict = relations_dict[RelationTypes.REPLACE]
-        for key in ["event_id", "sender", "origin_server_ts"]:
-            self.assertIn(key, m_replace_dict)
-
-        self.assert_dict(
-            {"event_id": edit_event_id, "sender": self.user_id}, m_replace_dict
-        )
+        # Request the room messages.
+        channel = self.make_request(
+            "GET",
+            f"/rooms/{self.room}/messages?dir=b",
+            access_token=self.user_token,
+        )
+        self.assertEquals(200, channel.code, channel.json_body)
+        assert_bundle(self._find_event_in_chunk(channel.json_body["chunk"]))
+
+        # Request the room context.
+        channel = self.make_request(
+            "GET",
+            f"/rooms/{self.room}/context/{self.parent_id}",
+            access_token=self.user_token,
+        )
+        self.assertEquals(200, channel.code, channel.json_body)
+        assert_bundle(channel.json_body["event"])
+
+        # Request sync, but limit the timeline so it becomes limited (and includes
+        # bundled aggregations).
+        filter = urllib.parse.quote_plus(
+            '{"room": {"timeline": {"limit": 2}}}'.encode()
+        )
+        channel = self.make_request(
+            "GET", f"/sync?filter={filter}", access_token=self.user_token
+        )
+        self.assertEquals(200, channel.code, channel.json_body)
+        room_timeline = channel.json_body["rooms"]["join"][self.room]["timeline"]
+        self.assertTrue(room_timeline["limited"])
+        assert_bundle(self._find_event_in_chunk(room_timeline["events"]))

     def test_multi_edit(self):
         """Test that multiple edits, including attempts by people who
@@ -1102,6 +1130,16 @@
         self.assertEquals(200, channel.code, channel.json_body)
         self.assertEquals(channel.json_body["chunk"], [])

+    def _find_event_in_chunk(self, events: List[JsonDict]) -> JsonDict:
+        """
+        Find the parent event in a chunk of events and assert that it has the proper bundled aggregations.
+        """
+        for event in events:
+            if event["event_id"] == self.parent_id:
+                return event
+
+        raise AssertionError(f"Event {self.parent_id} not found in chunk")
+
     def _send_relation(
         self,
         relation_type: str,