Compare commits
14 Commits
8388384a64...ed5172852a
Author | SHA1 | Date
---|---|---
Richard van der Hoff | ed5172852a |
Richard van der Hoff | f347f0cd58 |
Richard van der Hoff | 935732768c |
Richard van der Hoff | 0bac276890 |
Richard van der Hoff | 76469898ee |
Richard van der Hoff | 7ea85302f3 |
Patrick Cloke | 30fba62108 |
Erik Johnston | c5b6abd53d |
Richard van der Hoff | 693516e756 |
Johanna Dorothea Reichmann | 0fed46ebe5 |
David Florness | c4675e1b24 |
Patrick Cloke | e41720d85f |
Patrick Cloke | c67af840aa |
Patrick Cloke | 53b12688dd |
CHANGES.md (+72)
@@ -1,3 +1,75 @@
+Synapse 1.24.0rc1 (2020-12-02)
+==============================
+
+Features
+--------
+
+- Add admin API for logging in as a user. ([\#8617](https://github.com/matrix-org/synapse/issues/8617))
+- Allow specification of the SAML IdP if the metadata returns multiple IdPs. ([\#8630](https://github.com/matrix-org/synapse/issues/8630))
+- Add support for re-trying generation of a localpart for OpenID Connect mapping providers. ([\#8801](https://github.com/matrix-org/synapse/issues/8801), [\#8855](https://github.com/matrix-org/synapse/issues/8855))
+- Allow the `Date` header through CORS. Contributed by Nicolas Chamo. ([\#8804](https://github.com/matrix-org/synapse/issues/8804))
+- Add a config option, `push.group_by_unread_count`, which controls whether unread message counts in push notifications are defined as "the number of rooms with unread messages" or "total unread messages". ([\#8820](https://github.com/matrix-org/synapse/issues/8820))
+- Add `force_purge` option to the delete-room admin API. ([\#8843](https://github.com/matrix-org/synapse/issues/8843))
+
+
+Bugfixes
+--------
+
+- Fix a bug where appservices may be sent an excessive amount of read receipts and presence. Broke in v1.22.0. ([\#8744](https://github.com/matrix-org/synapse/issues/8744))
+- Fix a bug in some federation APIs which could lead to unexpected behaviour if different parameters were set in the URI and the request body. ([\#8776](https://github.com/matrix-org/synapse/issues/8776))
+- Fix a bug where synctl could spawn duplicate copies of a worker. Contributed by Waylon Cude. ([\#8798](https://github.com/matrix-org/synapse/issues/8798))
+- Allow per-room profiles to be used for the server notice user. ([\#8799](https://github.com/matrix-org/synapse/issues/8799))
+- Fix a bug where logging could break after a call to SIGHUP. ([\#8817](https://github.com/matrix-org/synapse/issues/8817))
+- Fix `register_new_matrix_user` failing with "Bad Request" when trailing slash is included in server URL. Contributed by @angdraug. ([\#8823](https://github.com/matrix-org/synapse/issues/8823))
+- Fix a minor long-standing bug in login, where we would offer the `password` login type if a custom auth provider supported it, even if password login was disabled. ([\#8835](https://github.com/matrix-org/synapse/issues/8835))
+- Fix a long-standing bug which caused Synapse to require unspecified parameters during user-interactive authentication. ([\#8848](https://github.com/matrix-org/synapse/issues/8848))
+- Fix a bug introduced in v1.20.0 where the user-agent and IP address reported during user registration for CAS, OpenID Connect, and SAML were of the wrong form. ([\#8784](https://github.com/matrix-org/synapse/issues/8784))
+
+
+Improved Documentation
+----------------------
+
+- Clarify the use case for an msisdn delegate. Contributed by Adrian Wannenmacher. ([\#8734](https://github.com/matrix-org/synapse/issues/8734))
+- Remove extraneous comma from JSON example in User Admin API docs. ([\#8771](https://github.com/matrix-org/synapse/issues/8771))
+- Update `turn-howto.md` with troubleshooting notes. ([\#8779](https://github.com/matrix-org/synapse/issues/8779))
+- Fix the example on how to set the `Content-Type` header in nginx for the Client Well-Known URI. ([\#8793](https://github.com/matrix-org/synapse/issues/8793))
+- Improve the documentation for the admin API to list all media in a room with respect to encrypted events. ([\#8795](https://github.com/matrix-org/synapse/issues/8795))
+- Update the formatting of the `push` section of the homeserver config file to better align with the [code style guidelines](https://github.com/matrix-org/synapse/blob/develop/docs/code_style.md#configuration-file-format). ([\#8818](https://github.com/matrix-org/synapse/issues/8818))
+- Improve documentation on how to configure Prometheus for workers. ([\#8822](https://github.com/matrix-org/synapse/issues/8822))
+- Update the example Prometheus console. ([\#8824](https://github.com/matrix-org/synapse/issues/8824))
+
+
+Deprecations and Removals
+-------------------------
+
+- Remove old `/_matrix/client/*/admin` endpoints which were deprecated since Synapse 1.20.0. ([\#8785](https://github.com/matrix-org/synapse/issues/8785))
+- Disable pretty printing JSON responses for curl. Users who want pretty-printed output should use [jq](https://stedolan.github.io/jq/) in combination with curl. Contributed by @tulir. ([\#8833](https://github.com/matrix-org/synapse/issues/8833))
+
+
+Internal Changes
+----------------
+
+- Simplify the way the `HomeServer` object caches its internal attributes. ([\#8565](https://github.com/matrix-org/synapse/issues/8565), [\#8851](https://github.com/matrix-org/synapse/issues/8851))
+- Add an example and documentation for clock skew to the SAML2 sample configuration to allow for clock/time difference between the homeserver and IdP. Contributed by @localguru. ([\#8731](https://github.com/matrix-org/synapse/issues/8731))
+- Generalise `RoomMemberHandler._locally_reject_invite` to apply to more flows than just invite. ([\#8751](https://github.com/matrix-org/synapse/issues/8751))
+- Generalise `RoomStore.maybe_store_room_on_invite` to handle other, non-invite membership events. ([\#8754](https://github.com/matrix-org/synapse/issues/8754))
+- Refactor test utilities for injecting HTTP requests. ([\#8757](https://github.com/matrix-org/synapse/issues/8757), [\#8758](https://github.com/matrix-org/synapse/issues/8758), [\#8759](https://github.com/matrix-org/synapse/issues/8759), [\#8760](https://github.com/matrix-org/synapse/issues/8760), [\#8761](https://github.com/matrix-org/synapse/issues/8761), [\#8777](https://github.com/matrix-org/synapse/issues/8777))
+- Consolidate logic between the OpenID Connect and SAML code. ([\#8765](https://github.com/matrix-org/synapse/issues/8765))
+- Use `TYPE_CHECKING` instead of magic `MYPY` variable. ([\#8770](https://github.com/matrix-org/synapse/issues/8770))
+- Add a command-line script to sign arbitrary JSON objects. ([\#8772](https://github.com/matrix-org/synapse/issues/8772))
+- Minor log line improvements for the SSO mapping code used to generate Matrix IDs from SSO IDs. ([\#8773](https://github.com/matrix-org/synapse/issues/8773))
+- Add additional error checking for OpenID Connect and SAML mapping providers. ([\#8774](https://github.com/matrix-org/synapse/issues/8774), [\#8800](https://github.com/matrix-org/synapse/issues/8800))
+- Add type hints to HTTP abstractions. ([\#8806](https://github.com/matrix-org/synapse/issues/8806), [\#8812](https://github.com/matrix-org/synapse/issues/8812))
+- Remove unnecessary function arguments and add typing to several membership replication classes. ([\#8809](https://github.com/matrix-org/synapse/issues/8809))
+- Optimise the lookup for an invite from another homeserver when trying to reject it. ([\#8815](https://github.com/matrix-org/synapse/issues/8815))
+- Add tests for `password_auth_provider`s. ([\#8819](https://github.com/matrix-org/synapse/issues/8819))
+- Drop redundant database index on `event_json`. ([\#8845](https://github.com/matrix-org/synapse/issues/8845))
+- Simplify `uk.half-shot.msc2778.login.application_service` login handler. ([\#8847](https://github.com/matrix-org/synapse/issues/8847))
+- Refactor `password_auth_provider` support code. ([\#8849](https://github.com/matrix-org/synapse/issues/8849))
+- Add missing `ordering` to background database updates. ([\#8850](https://github.com/matrix-org/synapse/issues/8850))
+- Allow for specifying a room version when creating a room in unit tests via `RestHelper.create_room_as`. ([\#8854](https://github.com/matrix-org/synapse/issues/8854))
+
+
 Synapse 1.23.0 (2020-11-18)
 ===========================
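One of the features above, `push.group_by_unread_count`, switches between two ways of counting unread messages for push badges. A minimal standalone sketch of the two counting modes follows; the data is made up for illustration and this is not Synapse's implementation:

```python
# Two ways to summarise unread messages for a push badge, mirroring the
# semantics described for push.group_by_unread_count (sample data only).
unread_per_room = {"!a:example.org": 3, "!b:example.org": 0, "!c:example.org": 7}

# Mode 1: "the number of rooms with unread messages"
rooms_with_unread = sum(1 for n in unread_per_room.values() if n > 0)

# Mode 2: "total unread messages" across all rooms
total_unread = sum(unread_per_room.values())

print(rooms_with_unread, total_unread)  # 2 10
```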
@@ -1 +0,0 @@
-Simplify the way the `HomeServer` object caches its internal attributes.

@@ -1 +0,0 @@
-Add admin API for logging in as a user.

@@ -1 +0,0 @@
-Allow specification of the SAML IdP if the metadata returns multiple IdPs.

@@ -1 +0,0 @@
-Add an example and documentation for clock skew to the SAML2 sample configuration to allow for clock/time difference between the homeserver and IdP. Contributed by @localguru.

@@ -1 +0,0 @@
-Clarify the use case for an msisdn delegate. Contributed by Adrian Wannenmacher.

@@ -1 +0,0 @@
-Fix a bug where appservices may be sent an excessive amount of read receipts and presence. Broke in v1.22.0.

@@ -1 +0,0 @@
-Generalise `RoomMemberHandler._locally_reject_invite` to apply to more flows than just invite.

@@ -1 +0,0 @@
-Generalise `RoomStore.maybe_store_room_on_invite` to handle other, non-invite membership events.

@@ -1 +0,0 @@
-Refactor test utilities for injecting HTTP requests.

@@ -1 +0,0 @@
-Refactor test utilities for injecting HTTP requests.

@@ -1 +0,0 @@
-Refactor test utilities for injecting HTTP requests.

@@ -1 +0,0 @@
-Refactor test utilities for injecting HTTP requests.

@@ -1 +0,0 @@
-Refactor test utilities for injecting HTTP requests.

@@ -1 +0,0 @@
-Consolidate logic between the OpenID Connect and SAML code.

@@ -1 +0,0 @@
-Use `TYPE_CHECKING` instead of magic `MYPY` variable.

@@ -1 +0,0 @@
-Remove extraneous comma from JSON example in User Admin API docs.

@@ -1 +0,0 @@
-Add a command-line script to sign arbitrary JSON objects.

@@ -1 +0,0 @@
-Minor log line improvements for the SSO mapping code used to generate Matrix IDs from SSO IDs.

@@ -1 +0,0 @@
-Add additional error checking for OpenID Connect and SAML mapping providers.

@@ -1 +0,0 @@
-Fix a bug in some federation APIs which could lead to unexpected behaviour if different parameters were set in the URI and the request body.

@@ -1 +0,0 @@
-Refactor test utilities for injecting HTTP requests.

@@ -1 +0,0 @@
-Update `turn-howto.md` with troubleshooting notes.

@@ -1 +0,0 @@
-Fix a bug introduced in v1.20.0 where the user-agent and IP address reported during user registration for CAS, OpenID Connect, and SAML were of the wrong form.

@@ -1 +0,0 @@
-Remove old `/_matrix/client/*/admin` endpoints which were deprecated since Synapse 1.20.0.

@@ -1 +0,0 @@
-Fix the example on how to set the `Content-Type` header in nginx for the Client Well-Known URI.

@@ -1 +0,0 @@
-Improve the documentation for the admin API to list all media in a room with respect to encrypted events.

@@ -1 +0,0 @@
-Fix a bug where synctl could spawn duplicate copies of a worker. Contributed by Waylon Cude.

@@ -1 +0,0 @@
-Allow per-room profiles to be used for the server notice user.

@@ -1 +0,0 @@
-Add additional error checking for OpenID Connect and SAML mapping providers.

@@ -1 +0,0 @@
-Add support for re-trying generation of a localpart for OpenID Connect mapping providers.

@@ -0,0 +1 @@
+Fix the "Event persist rate" section of the included grafana dashboard by adding missing prometheus rules.

@@ -1 +0,0 @@
-Allow the `Date` header through CORS. Contributed by Nicolas Chamo.

@@ -1 +0,0 @@
-Add type hints to HTTP abstractions.

@@ -1 +0,0 @@
-Remove unnecessary function arguments and add typing to several membership replication classes.

@@ -1 +0,0 @@
-Add type hints to HTTP abstractions.

@@ -1 +0,0 @@
-Optimise the lookup for an invite from another homeserver when trying to reject it.

@@ -1 +0,0 @@
-Fix a bug where logging could break after a call to SIGHUP.

@@ -1 +0,0 @@
-Update the formatting of the `push` section of the homeserver config file to better align with the [code style guidelines](https://github.com/matrix-org/synapse/blob/develop/docs/code_style.md#configuration-file-format).

@@ -1 +0,0 @@
-Add tests for `password_auth_provider`s.

@@ -1 +0,0 @@
-Add a config option, `push.group_by_unread_count`, which controls whether unread message counts in push notifications are defined as "the number of rooms with unread messages" or "total unread messages".

@@ -0,0 +1 @@
+Apply the `federation_ip_range_blacklist` to push and key revocation requests.

@@ -1 +0,0 @@
-Improve documentation on how to configure Prometheus for workers.

@@ -1 +0,0 @@
-Fix `register_new_matrix_user` failing with "Bad Request" when trailing slash is included in server URL. Contributed by @angdraug.

@@ -1 +0,0 @@
-Update the example Prometheus console.

@@ -0,0 +1 @@
+Fix a bug where we might not correctly calculate the current state for rooms with multiple extremities.

@@ -1 +0,0 @@
-Disable pretty printing JSON responses for curl. Users who want pretty-printed output should use [jq](https://stedolan.github.io/jq/) in combination with curl. Contributed by @tulir.

@@ -1 +0,0 @@
-Fix a minor long-standing bug in login, where we would offer the `password` login type if a custom auth provider supported it, even if password login was disabled.

@@ -0,0 +1 @@
+Fix a long-standing bug in the register admin endpoint (`/_synapse/admin/v1/register`) when the `mac` field was not provided. The endpoint now properly returns a 400 error. Contributed by @edwargix.

@@ -1 +0,0 @@
-Add `force_purge` option to the delete-room admin API.

@@ -1 +0,0 @@
-Drop redundant database index on `event_json`.

@@ -1 +0,0 @@
-Simplify `uk.half-shot.msc2778.login.application_service` login handler.

@@ -1 +0,0 @@
-Fix a long-standing bug which caused Synapse to require unspecified parameters during user-interactive authentication.

@@ -1 +0,0 @@
-Refactor `password_auth_provider` support code.

@@ -1 +0,0 @@
-Add missing `ordering` to background database updates.

@@ -1 +0,0 @@
-Simplify the way the `HomeServer` object caches its internal attributes.

@@ -1 +0,0 @@
-Allow for specifying a room version when creating a room in unit tests via `RestHelper.create_room_as`.

@@ -1 +0,0 @@
-Add support for re-trying generation of a localpart for OpenID Connect mapping providers.

@@ -0,0 +1 @@
+Fix a long-standing bug on Synapse instances supporting Single-Sign-On, where users would be prompted to enter their password to confirm certain actions, even though they have not set a password.

@@ -0,0 +1 @@
+Remove unused `FakeResponse` class from unit tests.
@@ -58,3 +58,21 @@ groups:
       labels:
         type: "PDU"
     expr: 'synapse_federation_transaction_queue_pending_pdus + 0'
+
+  - record: synapse_storage_events_persisted_by_source_type
+    expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_type="remote"})
+    labels:
+      type: remote
+  - record: synapse_storage_events_persisted_by_source_type
+    expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity="*client*",origin_type="local"})
+    labels:
+      type: local
+  - record: synapse_storage_events_persisted_by_source_type
+    expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity!="*client*",origin_type="local"})
+    labels:
+      type: bridges
+  - record: synapse_storage_events_persisted_by_event_type
+    expr: sum without(origin_entity, origin_type) (synapse_storage_events_persisted_events_sep)
+  - record: synapse_storage_events_persisted_by_origin
+    expr: sum without(type) (synapse_storage_events_persisted_events_sep)
@@ -642,17 +642,19 @@ acme:
 # - nyc.example.com
 # - syd.example.com

-# Prevent federation requests from being sent to the following
-# blacklist IP address CIDR ranges. If this option is not specified, or
-# specified with an empty list, no ip range blacklist will be enforced.
+# Prevent outgoing requests from being sent to the following blacklisted IP address
+# CIDR ranges. If this option is not specified, or specified with an empty list,
+# no IP range blacklist will be enforced.
 #
-# As of Synapse v1.4.0 this option also affects any outbound requests to identity
-# servers provided by user input.
+# The blacklist applies to the outbound requests for federation, identity servers,
+# push servers, and for checking key validity for third-party invite events.
 #
 # (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
 # listed here, since they correspond to unroutable addresses.)
 #
-federation_ip_range_blacklist:
+# This option replaces federation_ip_range_blacklist in Synapse v1.24.0.
+#
+ip_range_blacklist:
   - '127.0.0.0/8'
   - '10.0.0.0/8'
   - '172.16.0.0/12'
@@ -48,7 +48,7 @@ try:
 except ImportError:
     pass

-__version__ = "1.23.0"
+__version__ = "1.24.0rc1"

 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when
@@ -266,7 +266,6 @@ class GenericWorkerPresence(BasePresenceHandler):
         super().__init__(hs)
         self.hs = hs
         self.is_mine_id = hs.is_mine_id
-        self.http_client = hs.get_simple_http_client()

         self._presence_enabled = hs.config.use_presence

@@ -36,22 +36,30 @@ class FederationConfig(Config):
         for domain in federation_domain_whitelist:
             self.federation_domain_whitelist[domain] = True

-        self.federation_ip_range_blacklist = config.get(
-            "federation_ip_range_blacklist", []
-        )
+        ip_range_blacklist = config.get("ip_range_blacklist", [])

         # Attempt to create an IPSet from the given ranges
         try:
-            self.federation_ip_range_blacklist = IPSet(
-                self.federation_ip_range_blacklist
-            )
+            self.ip_range_blacklist = IPSet(ip_range_blacklist)
+        except Exception as e:
+            raise ConfigError("Invalid range(s) provided in ip_range_blacklist: %s" % e)
+        # Always blacklist 0.0.0.0, ::
+        self.ip_range_blacklist.update(["0.0.0.0", "::"])
+
+        # The federation_ip_range_blacklist is used for backwards-compatibility
+        # and only applies to federation and identity servers. If it is not given,
+        # default to ip_range_blacklist.
+        federation_ip_range_blacklist = config.get(
+            "federation_ip_range_blacklist", ip_range_blacklist
+        )
+        try:
+            self.federation_ip_range_blacklist = IPSet(federation_ip_range_blacklist)
         except Exception as e:
             raise ConfigError(
                 "Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
             )
-
         # Always blacklist 0.0.0.0, ::
         self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])

         federation_metrics_domains = config.get("federation_metrics_domains") or []
         validate_config(

@@ -76,17 +84,19 @@
 # - nyc.example.com
 # - syd.example.com

-# Prevent federation requests from being sent to the following
-# blacklist IP address CIDR ranges. If this option is not specified, or
-# specified with an empty list, no ip range blacklist will be enforced.
+# Prevent outgoing requests from being sent to the following blacklisted IP address
+# CIDR ranges. If this option is not specified, or specified with an empty list,
+# no IP range blacklist will be enforced.
 #
-# As of Synapse v1.4.0 this option also affects any outbound requests to identity
-# servers provided by user input.
+# The blacklist applies to the outbound requests for federation, identity servers,
+# push servers, and for checking key validity for third-party invite events.
 #
 # (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
 # listed here, since they correspond to unroutable addresses.)
 #
-federation_ip_range_blacklist:
+# This option replaces federation_ip_range_blacklist in Synapse v1.24.0.
+#
+ip_range_blacklist:
   - '127.0.0.0/8'
   - '10.0.0.0/8'
   - '172.16.0.0/12'
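The `FederationConfig` change above parses the configured CIDR ranges into blacklists and always adds `0.0.0.0` and `::`. As a rough standalone illustration of the membership check using only the standard library (Synapse itself uses `netaddr.IPSet`; the function name here is made up):

```python
import ipaddress

def is_blacklisted(addr: str, ranges: list[str]) -> bool:
    """Return True if addr falls inside any of the given CIDR ranges."""
    ip = ipaddress.ip_address(addr)
    # ipaddress membership tests return False for mixed IPv4/IPv6
    # comparisons, so v4 and v6 ranges can live in the same list.
    return any(ip in ipaddress.ip_network(r) for r in ranges)

# The sample config's private ranges, plus the always-blacklisted 0.0.0.0 and ::
ranges = ["127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12", "0.0.0.0/32", "::/128"]
print(is_blacklisted("10.1.2.3", ranges))       # True
print(is_blacklisted("93.184.216.34", ranges))  # False
```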
@@ -578,7 +578,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
     def __init__(self, hs):
         super().__init__(hs)
         self.clock = hs.get_clock()
-        self.client = hs.get_http_client()
+        self.client = hs.get_federation_http_client()
         self.key_servers = self.config.key_servers

     async def get_keys(self, keys_to_fetch):

@@ -748,7 +748,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
     def __init__(self, hs):
         super().__init__(hs)
         self.clock = hs.get_clock()
-        self.client = hs.get_http_client()
+        self.client = hs.get_federation_http_client()

     async def get_keys(self, keys_to_fetch):
         """
@@ -845,7 +845,6 @@ class FederationHandlerRegistry:

     def __init__(self, hs: "HomeServer"):
         self.config = hs.config
-        self.http_client = hs.get_simple_http_client()
         self.clock = hs.get_clock()
         self._instance_name = hs.get_instance_name()

@@ -35,7 +35,7 @@ class TransportLayerClient:

     def __init__(self, hs):
         self.server_name = hs.hostname
-        self.client = hs.get_http_client()
+        self.client = hs.get_federation_http_client()

     @log_function
     def get_room_state_ids(self, destination, room_id, event_id):
@@ -1462,7 +1462,7 @@ def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=N

     Args:
         hs (synapse.server.HomeServer): homeserver
-        resource (TransportLayerServer): resource class to register to
+        resource (JsonResource): resource class to register to
         authenticator (Authenticator): authenticator to use
         ratelimiter (util.ratelimitutils.FederationRateLimiter): ratelimiter to use
         servlet_groups (list[str], optional): List of servlet groups to register.
@@ -193,9 +193,7 @@ class AuthHandler(BaseHandler):
         self.hs = hs  # FIXME better possibility to access registrationHandler later?
         self.macaroon_gen = hs.get_macaroon_generator()
         self._password_enabled = hs.config.password_enabled
-        self._sso_enabled = (
-            hs.config.cas_enabled or hs.config.saml2_enabled or hs.config.oidc_enabled
-        )
+        self._password_localdb_enabled = hs.config.password_localdb_enabled

         # we keep this as a list despite the O(N^2) implication so that we can
         # keep PASSWORD first and avoid confusing clients which pick the first

@@ -205,7 +203,7 @@ class AuthHandler(BaseHandler):

         # start out by assuming PASSWORD is enabled; we will remove it later if not.
         login_types = []
-        if hs.config.password_localdb_enabled:
+        if self._password_localdb_enabled:
             login_types.append(LoginType.PASSWORD)

         for provider in self.password_providers:

@@ -219,14 +217,6 @@ class AuthHandler(BaseHandler):

         self._supported_login_types = login_types

-        # Login types and UI Auth types have a heavy overlap, but are not
-        # necessarily identical. Login types have SSO (and other login types)
-        # added in the rest layer, see synapse.rest.client.v1.login.LoginRestServerlet.on_GET.
-        ui_auth_types = login_types.copy()
-        if self._sso_enabled:
-            ui_auth_types.append(LoginType.SSO)
-        self._supported_ui_auth_types = ui_auth_types
-
         # Ratelimiter for failed auth during UIA. Uses same ratelimit config
         # as per `rc_login.failed_attempts`.
         self._failed_uia_attempts_ratelimiter = Ratelimiter(

@@ -339,7 +329,10 @@ class AuthHandler(BaseHandler):
         self._failed_uia_attempts_ratelimiter.ratelimit(user_id, update=False)

         # build a list of supported flows
-        flows = [[login_type] for login_type in self._supported_ui_auth_types]
+        supported_ui_auth_types = await self._get_available_ui_auth_types(
+            requester.user
+        )
+        flows = [[login_type] for login_type in supported_ui_auth_types]

         try:
             result, params, session_id = await self.check_ui_auth(

@@ -351,7 +344,7 @@ class AuthHandler(BaseHandler):
             raise

         # find the completed login type
-        for login_type in self._supported_ui_auth_types:
+        for login_type in supported_ui_auth_types:
             if login_type not in result:
                 continue

@@ -367,6 +360,41 @@ class AuthHandler(BaseHandler):

         return params, session_id

+    async def _get_available_ui_auth_types(self, user: UserID) -> Iterable[str]:
+        """Get a list of the authentication types this user can use
+        """
+
+        ui_auth_types = set()
+
+        # if the HS supports password auth, and the user has a non-null password, we
+        # support password auth
+        if self._password_localdb_enabled and self._password_enabled:
+            lookupres = await self._find_user_id_and_pwd_hash(user.to_string())
+            if lookupres:
+                _, password_hash = lookupres
+                if password_hash:
+                    ui_auth_types.add(LoginType.PASSWORD)
+
+        # also allow auth from password providers
+        for provider in self.password_providers:
+            for t in provider.get_supported_login_types().keys():
+                if t == LoginType.PASSWORD and not self._password_enabled:
+                    continue
+                ui_auth_types.add(t)
+
+        # if sso is enabled, allow the user to log in via SSO iff they have a mapping
+        # from sso to mxid.
+        if self.hs.config.saml2.saml2_enabled or self.hs.config.oidc.oidc_enabled:
+            if await self.store.get_external_ids_by_user(user.to_string()):
+                ui_auth_types.add(LoginType.SSO)
+
+        # Our CAS impl does not (yet) correctly register users in user_external_ids,
+        # so always offer that if it's available.
+        if self.hs.config.cas.cas_enabled:
+            ui_auth_types.add(LoginType.SSO)
+
+        return ui_auth_types
+
     def get_enabled_auth_types(self):
         """Return the enabled user-interactive authentication types

@@ -1029,7 +1057,7 @@ class AuthHandler(BaseHandler):
         if result:
             return result

-        if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
+        if login_type == LoginType.PASSWORD and self._password_localdb_enabled:
             known_login_type = True

             # we've already checked that there is a (valid) password field
@@ -140,7 +140,7 @@ class FederationHandler(BaseHandler):
         self._message_handler = hs.get_message_handler()
         self._server_notices_mxid = hs.config.server_notices_mxid
         self.config = hs.config
-        self.http_client = hs.get_simple_http_client()
+        self.http_client = hs.get_proxied_blacklisted_http_client()
         self._instance_name = hs.get_instance_name()
         self._replication = hs.get_replication_data_handler()
@@ -46,13 +46,13 @@ class IdentityHandler(BaseHandler):
     def __init__(self, hs):
         super().__init__(hs)
 
+        # An HTTP client for contacting trusted URLs.
         self.http_client = SimpleHttpClient(hs)
-        # We create a blacklisting instance of SimpleHttpClient for contacting identity
-        # servers specified by clients
+        # An HTTP client for contacting identity servers specified by clients.
         self.blacklisting_http_client = SimpleHttpClient(
             hs, ip_blacklist=hs.config.federation_ip_range_blacklist
         )
-        self.federation_http_client = hs.get_http_client()
+        self.federation_http_client = hs.get_federation_http_client()
         self.hs = hs
 
     async def threepid_from_creds(
@@ -125,7 +125,7 @@ def _make_scheduler(reactor):
     return _scheduler
 
 
-class IPBlacklistingResolver:
+class _IPBlacklistingResolver:
     """
     A proxy for reactor.nameResolver which only produces non-blacklisted IP
     addresses, preventing DNS rebinding attacks on URL preview.

@@ -199,6 +199,35 @@ class IPBlacklistingResolver:
             return r
 
 
+@implementer(IReactorPluggableNameResolver)
+class BlacklistingReactorWrapper:
+    """
+    A Reactor wrapper which will prevent DNS resolution to blacklisted IP
+    addresses, to prevent DNS rebinding.
+    """
+
+    def __init__(
+        self,
+        reactor: IReactorPluggableNameResolver,
+        ip_whitelist: Optional[IPSet],
+        ip_blacklist: IPSet,
+    ):
+        self._reactor = reactor
+
+        # We need to use a DNS resolver which filters out blacklisted IP
+        # addresses, to prevent DNS rebinding.
+        self._nameResolver = _IPBlacklistingResolver(
+            self._reactor, ip_whitelist, ip_blacklist
+        )
+
+    def __getattr__(self, attr: str) -> Any:
+        # Passthrough to the real reactor except for the DNS resolver.
+        if attr == "nameResolver":
+            return self._nameResolver
+        else:
+            return getattr(self._reactor, attr)
+
+
 class BlacklistingAgentWrapper(Agent):
     """
     An Agent wrapper which will prevent access to IP addresses being accessed
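The wrapper relies on `__getattr__` to proxy every reactor attribute except `nameResolver`, which is swapped for a filtering resolver. A self-contained sketch of the same pattern with toy stand-ins (the `FilteringReactorWrapper`, `FakeReactor`, and resolver classes below are illustrations, not Synapse or Twisted APIs):

```python
from typing import Any


class _FilteringResolver:
    """Toy stand-in for _IPBlacklistingResolver: refuses blocked hostnames."""

    def __init__(self, resolve, blocked):
        self._resolve = resolve
        self._blocked = blocked

    def resolve(self, name: str) -> str:
        if name in self._blocked:
            raise ValueError("refusing to resolve blocked host: " + name)
        return self._resolve(name)


class FilteringReactorWrapper:
    """Same shape as BlacklistingReactorWrapper: __getattr__ passes every
    attribute through to the wrapped reactor, except the name resolver."""

    def __init__(self, reactor, blocked):
        self._reactor = reactor
        self._nameResolver = _FilteringResolver(reactor.nameResolver.resolve, blocked)

    def __getattr__(self, attr: str) -> Any:
        # only called for attributes not set in __init__, so no recursion
        if attr == "nameResolver":
            return self._nameResolver
        return getattr(self._reactor, attr)


class FakeResolver:
    def resolve(self, name):
        return "10.0.0.1"


class FakeReactor:
    nameResolver = FakeResolver()
    seconds = staticmethod(lambda: 0.0)


# usage: the wrapper is a drop-in reactor, but resolution is filtered
wrapped = FilteringReactorWrapper(FakeReactor(), {"evil.example.com"})
```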
@@ -292,22 +321,11 @@ class SimpleHttpClient:
         self.user_agent = self.user_agent.encode("ascii")
 
         if self._ip_blacklist:
-            real_reactor = hs.get_reactor()
             # If we have an IP blacklist, we need to use a DNS resolver which
             # filters out blacklisted IP addresses, to prevent DNS rebinding.
-            nameResolver = IPBlacklistingResolver(
-                real_reactor, self._ip_whitelist, self._ip_blacklist
+            self.reactor = BlacklistingReactorWrapper(
+                hs.get_reactor(), self._ip_whitelist, self._ip_blacklist
             )
-
-            @implementer(IReactorPluggableNameResolver)
-            class Reactor:
-                def __getattr__(_self, attr):
-                    if attr == "nameResolver":
-                        return nameResolver
-                    else:
-                        return getattr(real_reactor, attr)
-
-            self.reactor = Reactor()
         else:
             self.reactor = hs.get_reactor()
@@ -16,7 +16,7 @@ import logging
 import urllib.parse
 from typing import List, Optional
 
-from netaddr import AddrFormatError, IPAddress
+from netaddr import AddrFormatError, IPAddress, IPSet
 from zope.interface import implementer
 
 from twisted.internet import defer

@@ -31,6 +31,7 @@ from twisted.web.http_headers import Headers
 from twisted.web.iweb import IAgent, IAgentEndpointFactory, IBodyProducer
 
 from synapse.crypto.context_factory import FederationPolicyForHTTPS
+from synapse.http.client import BlacklistingAgentWrapper
 from synapse.http.federation.srv_resolver import Server, SrvResolver
 from synapse.http.federation.well_known_resolver import WellKnownResolver
 from synapse.logging.context import make_deferred_yieldable, run_in_background

@@ -70,6 +71,7 @@ class MatrixFederationAgent:
         reactor: IReactorCore,
         tls_client_options_factory: Optional[FederationPolicyForHTTPS],
         user_agent: bytes,
+        ip_blacklist: IPSet,
         _srv_resolver: Optional[SrvResolver] = None,
         _well_known_resolver: Optional[WellKnownResolver] = None,
     ):

@@ -90,12 +92,18 @@ class MatrixFederationAgent:
         self.user_agent = user_agent
 
         if _well_known_resolver is None:
+            # Note that the name resolver has already been wrapped in a
+            # IPBlacklistingResolver by MatrixFederationHttpClient.
             _well_known_resolver = WellKnownResolver(
                 self._reactor,
-                agent=Agent(
+                agent=BlacklistingAgentWrapper(
+                    Agent(
+                        self._reactor,
+                        pool=self._pool,
+                        contextFactory=tls_client_options_factory,
+                    ),
                     self._reactor,
-                    pool=self._pool,
-                    contextFactory=tls_client_options_factory,
+                    ip_blacklist=ip_blacklist,
                 ),
                 user_agent=self.user_agent,
             )
@@ -26,11 +26,10 @@ import treq
 from canonicaljson import encode_canonical_json
 from prometheus_client import Counter
 from signedjson.sign import sign_json
-from zope.interface import implementer
 
 from twisted.internet import defer
 from twisted.internet.error import DNSLookupError
-from twisted.internet.interfaces import IReactorPluggableNameResolver, IReactorTime
+from twisted.internet.interfaces import IReactorTime
 from twisted.internet.task import _EPSILON, Cooperator
 from twisted.web.http_headers import Headers
 from twisted.web.iweb import IBodyProducer, IResponse

@@ -45,7 +44,7 @@ from synapse.api.errors import (
 from synapse.http import QuieterFileBodyProducer
 from synapse.http.client import (
     BlacklistingAgentWrapper,
-    IPBlacklistingResolver,
+    BlacklistingReactorWrapper,
     encode_query_args,
     readBodyToFile,
 )

@@ -221,31 +220,22 @@ class MatrixFederationHttpClient:
         self.signing_key = hs.signing_key
         self.server_name = hs.hostname
 
-        real_reactor = hs.get_reactor()
-
         # We need to use a DNS resolver which filters out blacklisted IP
         # addresses, to prevent DNS rebinding.
-        nameResolver = IPBlacklistingResolver(
-            real_reactor, None, hs.config.federation_ip_range_blacklist
+        self.reactor = BlacklistingReactorWrapper(
+            hs.get_reactor(), None, hs.config.federation_ip_range_blacklist
         )
-
-        @implementer(IReactorPluggableNameResolver)
-        class Reactor:
-            def __getattr__(_self, attr):
-                if attr == "nameResolver":
-                    return nameResolver
-                else:
-                    return getattr(real_reactor, attr)
-
-        self.reactor = Reactor()
 
         user_agent = hs.version_string
         if hs.config.user_agent_suffix:
             user_agent = "%s %s" % (user_agent, hs.config.user_agent_suffix)
         user_agent = user_agent.encode("ascii")
 
         self.agent = MatrixFederationAgent(
-            self.reactor, tls_client_options_factory, user_agent
+            self.reactor,
+            tls_client_options_factory,
+            user_agent,
+            hs.config.federation_ip_range_blacklist,
         )
 
         # Use a BlacklistingAgentWrapper to prevent circumventing the IP
@@ -100,7 +100,7 @@ class HttpPusher:
         if "url" not in self.data:
            raise PusherConfigException("'url' required in data for HTTP pusher")
         self.url = self.data["url"]
-        self.http_client = hs.get_proxied_http_client()
+        self.http_client = hs.get_proxied_blacklisted_http_client()
         self.data_minus_url = {}
         self.data_minus_url.update(self.data)
         del self.data_minus_url["url"]
@@ -420,6 +420,9 @@ class UserRegisterServlet(RestServlet):
         if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
             raise SynapseError(400, "Invalid user type")
 
+        if "mac" not in body:
+            raise SynapseError(400, "mac must be specified", errcode=Codes.BAD_JSON)
+
         got_mac = body["mac"]
 
         want_mac_builder = hmac.new(
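The new check rejects requests missing the `mac` field before the server recomputes and compares its own MAC. A minimal sketch of the underlying shared-secret MAC pattern; the secret value and the NUL-separated field layout here are illustrative only, not Synapse's exact wire format:

```python
import hashlib
import hmac


def build_mac(secret: bytes, fields) -> str:
    # MAC over a fixed order of request fields, NUL-separated
    mac = hmac.new(key=secret, digestmod=hashlib.sha1)
    mac.update(b"\x00".join(f.encode("utf8") for f in fields))
    return mac.hexdigest()


# Client computes the MAC over the request fields...
got_mac = build_mac(b"registration-shared-secret", ["abc123", "alice"])

# ...and the server recomputes it and compares in constant time.
want_mac = build_mac(b"registration-shared-secret", ["abc123", "alice"])
assert hmac.compare_digest(want_mac, got_mac)
```

`hmac.compare_digest` avoids the timing side channel of an ordinary string comparison.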
@@ -66,7 +66,7 @@ class MediaRepository:
     def __init__(self, hs):
         self.hs = hs
         self.auth = hs.get_auth()
-        self.client = hs.get_http_client()
+        self.client = hs.get_federation_http_client()
         self.clock = hs.get_clock()
         self.server_name = hs.hostname
         self.store = hs.get_datastore()
@@ -350,16 +350,45 @@ class HomeServer(metaclass=abc.ABCMeta):
 
     @cache_in_self
     def get_simple_http_client(self) -> SimpleHttpClient:
+        """
+        An HTTP client with no special configuration.
+        """
         return SimpleHttpClient(self)
 
     @cache_in_self
     def get_proxied_http_client(self) -> SimpleHttpClient:
+        """
+        An HTTP client that uses configured HTTP(S) proxies.
+        """
         return SimpleHttpClient(
             self,
             http_proxy=os.getenvb(b"http_proxy"),
             https_proxy=os.getenvb(b"HTTPS_PROXY"),
         )
 
+    @cache_in_self
+    def get_proxied_blacklisted_http_client(self) -> SimpleHttpClient:
+        """
+        An HTTP client that uses configured HTTP(S) proxies and blacklists IPs
+        based on the IP range blacklist.
+        """
+        return SimpleHttpClient(
+            self,
+            ip_blacklist=self.config.ip_range_blacklist,
+            http_proxy=os.getenvb(b"http_proxy"),
+            https_proxy=os.getenvb(b"HTTPS_PROXY"),
+        )
+
+    @cache_in_self
+    def get_federation_http_client(self) -> MatrixFederationHttpClient:
+        """
+        An HTTP client for federation.
+        """
+        tls_client_options_factory = context_factory.FederationPolicyForHTTPS(
+            self.config
+        )
+        return MatrixFederationHttpClient(self, tls_client_options_factory)
+
     @cache_in_self
     def get_room_creation_handler(self) -> RoomCreationHandler:
         return RoomCreationHandler(self)

@@ -514,13 +543,6 @@ class HomeServer(metaclass=abc.ABCMeta):
     def get_pusherpool(self) -> PusherPool:
         return PusherPool(self)
 
-    @cache_in_self
-    def get_http_client(self) -> MatrixFederationHttpClient:
-        tls_client_options_factory = context_factory.FederationPolicyForHTTPS(
-            self.config
-        )
-        return MatrixFederationHttpClient(self, tls_client_options_factory)
-
     @cache_in_self
     def get_media_repository_resource(self) -> MediaRepositoryResource:
         # build the media repo resource. This indirects through the HomeServer
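All of these getters are decorated with `@cache_in_self`, so each dependency is built once and then reused. A standalone sketch of that memoize-on-the-instance pattern, assuming the "build once, store on `self`" semantics the getters above rely on (this is an illustration, not Synapse's actual decorator):

```python
import functools


def cache_in_self(builder):
    """Build the dependency on first call and cache it on the instance;
    subsequent calls return the same object."""
    depname = builder.__name__.replace("get_", "")

    @functools.wraps(builder)
    def _get(self):
        try:
            return getattr(self, "_cached_" + depname)
        except AttributeError:
            pass
        dep = builder(self)
        setattr(self, "_cached_" + depname, dep)
        return dep

    return _get


class HomeServerSketch:
    builds = 0

    @cache_in_self
    def get_simple_http_client(self):
        HomeServerSketch.builds += 1
        return object()


# usage: repeated calls return the identical cached object
hs = HomeServerSketch()
client = hs.get_simple_http_client()
assert client is hs.get_simple_http_client()
```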
@@ -38,7 +38,7 @@ from synapse.api.constants import EventTypes
 from synapse.api.errors import AuthError
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
 from synapse.events import EventBase
-from synapse.types import MutableStateMap, StateMap
+from synapse.types import Collection, MutableStateMap, StateMap
 from synapse.util import Clock
 
 logger = logging.getLogger(__name__)

@@ -252,9 +252,88 @@ async def _get_auth_chain_difference(
         Set of event IDs
     """
 
-    difference = await state_res_store.get_auth_chain_difference(
-        [set(state_set.values()) for state_set in state_sets]
-    )
+    # The `StateResolutionStore.get_auth_chain_difference` function assumes that
+    # all events passed to it (and their auth chains) have been persisted
+    # previously. This is not the case for any events in the `event_map`, and so
+    # we need to manually handle those events.
+    #
+    # We do this by:
+    #   1. calculating the auth chain difference for the state sets based on the
+    #      events in `event_map` alone
+    #   2. replacing any events in the state_sets that are also in `event_map`
+    #      with their auth events (recursively), and then calling
+    #      `store.get_auth_chain_difference` as normal
+    #   3. adding the results of 1 and 2 together.
+
+    # Map from event ID in `event_map` to their auth event IDs, and their auth
+    # event IDs if they appear in the `event_map`. This is the intersection of
+    # the event's auth chain with the events in the `event_map` *plus* their
+    # auth event IDs.
+    events_to_auth_chain = {}  # type: Dict[str, Set[str]]
+    for event in event_map.values():
+        chain = {event.event_id}
+        events_to_auth_chain[event.event_id] = chain
+
+        to_search = [event]
+        while to_search:
+            for auth_id in to_search.pop().auth_event_ids():
+                chain.add(auth_id)
+                auth_event = event_map.get(auth_id)
+                if auth_event:
+                    to_search.append(auth_event)
+
+    # We now a) calculate the auth chain difference for the unpersisted events
+    # and b) work out the state sets to pass to the store.
+    #
+    # Note: If the `event_map` is empty (which is the common case), we can do a
+    # much simpler calculation.
+    if event_map:
+        # The list of state sets to pass to the store, where each state set is a set
+        # of the event ids making up the state. This is similar to `state_sets`,
+        # except that (a) we only have event ids, not the complete
+        # ((type, state_key)->event_id) mappings; and (b) we have stripped out
+        # unpersisted events and replaced them with the persisted events in
+        # their auth chain.
+        state_sets_ids = []  # type: List[Set[str]]
+
+        # For each state set, the unpersisted event IDs reachable (by their auth
+        # chain) from the events in that set.
+        unpersisted_set_ids = []  # type: List[Set[str]]
+
+        for state_set in state_sets:
+            set_ids = set()  # type: Set[str]
+            state_sets_ids.append(set_ids)
+
+            unpersisted_ids = set()  # type: Set[str]
+            unpersisted_set_ids.append(unpersisted_ids)
+
+            for event_id in state_set.values():
+                event_chain = events_to_auth_chain.get(event_id)
+                if event_chain is not None:
+                    # We have an event in `event_map`. We add all the auth
+                    # events that it references (that aren't also in `event_map`).
+                    set_ids.update(e for e in event_chain if e not in event_map)
+
+                    # We also add the full chain of unpersisted event IDs
+                    # referenced by this state set, so that we can work out the
+                    # auth chain difference of the unpersisted events.
+                    unpersisted_ids.update(e for e in event_chain if e in event_map)
+                else:
+                    set_ids.add(event_id)
+
+        # The auth chain difference of the unpersisted events of the state sets
+        # is calculated by taking the difference between the union and
+        # intersections.
+        union = unpersisted_set_ids[0].union(*unpersisted_set_ids[1:])
+        intersection = unpersisted_set_ids[0].intersection(*unpersisted_set_ids[1:])
+
+        difference_from_event_map = union - intersection  # type: Collection[str]
+    else:
+        difference_from_event_map = ()
+        state_sets_ids = [set(state_set.values()) for state_set in state_sets]
+
+    difference = await state_res_store.get_auth_chain_difference(state_sets_ids)
+    difference.update(difference_from_event_map)
 
     return difference
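The union-minus-intersection step above is plain set arithmetic: once each state set is reduced to a set of event IDs, the "auth chain difference" of the unpersisted events is everything that appears in some set but not in all of them. A toy illustration with made-up event IDs:

```python
# Toy illustration of the difference computed above: two state sets, reduced
# to sets of (hypothetical) event IDs.
set_a = {"$create", "$power", "$join_alice"}
set_b = {"$create", "$power", "$join_bob"}

# Events in at least one set...
union = set_a.union(set_b)
# ...minus events in every set...
intersection = set_a.intersection(set_b)
# ...leaves the events the sets disagree on.
auth_difference = union - intersection
assert auth_difference == {"$join_alice", "$join_bob"}
```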
@@ -463,6 +463,23 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
             desc="get_user_by_external_id",
         )
 
+    async def get_external_ids_by_user(self, mxid: str) -> List[Tuple[str, str]]:
+        """Look up external ids for the given user
+
+        Args:
+            mxid: the MXID to be looked up
+
+        Returns:
+            Tuples of (auth_provider, external_id)
+        """
+        res = await self.db_pool.simple_select_list(
+            table="user_external_ids",
+            keyvalues={"user_id": mxid},
+            retcols=("auth_provider", "external_id"),
+            desc="get_external_ids_by_user",
+        )
+        return [(r["auth_provider"], r["external_id"]) for r in res]
+
     async def count_all_users(self):
         """Counts all users registered on the homeserver."""

@@ -963,6 +980,14 @@ class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
             "users_set_deactivated_flag", self._background_update_set_deactivated_flag
         )
 
+        self.db_pool.updates.register_background_index_update(
+            "user_external_ids_user_id_idx",
+            index_name="user_external_ids_user_id_idx",
+            table="user_external_ids",
+            columns=["user_id"],
+            unique=False,
+        )
+
     async def _background_update_set_deactivated_flag(self, progress, batch_size):
         """Retrieves a list of all deactivated users and sets the 'deactivated' flag to 1
         for each of them.
@@ -0,0 +1,17 @@
+/* Copyright 2020 The Matrix.org Foundation C.I.C
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
+  (5825, 'user_external_ids_user_id_idx', '{}');
@@ -50,7 +50,9 @@ class FilteringTestCase(unittest.TestCase):
         self.mock_http_client.put_json = DeferredMockCallable()
 
         hs = yield setup_test_homeserver(
-            self.addCleanup, http_client=self.mock_http_client, keyring=Mock(),
+            self.addCleanup,
+            federation_http_client=self.mock_http_client,
+            keyring=Mock(),
         )
 
         self.filtering = hs.get_filtering()
@@ -23,7 +23,7 @@ class FrontendProxyTests(HomeserverTestCase):
     def make_homeserver(self, reactor, clock):
 
         hs = self.setup_test_homeserver(
-            http_client=None, homeserver_to_use=GenericWorkerServer
+            federation_http_client=None, homeserver_to_use=GenericWorkerServer
        )
 
         return hs
@@ -27,7 +27,7 @@ from tests.unittest import HomeserverTestCase
 class FederationReaderOpenIDListenerTests(HomeserverTestCase):
     def make_homeserver(self, reactor, clock):
         hs = self.setup_test_homeserver(
-            http_client=None, homeserver_to_use=GenericWorkerServer
+            federation_http_client=None, homeserver_to_use=GenericWorkerServer
         )
         return hs
 

@@ -84,7 +84,7 @@ class FederationReaderOpenIDListenerTests(HomeserverTestCase):
 class SynapseHomeserverOpenIDListenerTests(HomeserverTestCase):
     def make_homeserver(self, reactor, clock):
         hs = self.setup_test_homeserver(
-            http_client=None, homeserver_to_use=SynapseHomeServer
+            federation_http_client=None, homeserver_to_use=SynapseHomeServer
         )
         return hs
@@ -315,7 +315,7 @@ class KeyringTestCase(unittest.HomeserverTestCase):
 class ServerKeyFetcherTestCase(unittest.HomeserverTestCase):
     def make_homeserver(self, reactor, clock):
         self.http_client = Mock()
-        hs = self.setup_test_homeserver(http_client=self.http_client)
+        hs = self.setup_test_homeserver(federation_http_client=self.http_client)
         return hs
 
     def test_get_keys_from_server(self):

@@ -395,7 +395,9 @@ class PerspectivesKeyFetcherTestCase(unittest.HomeserverTestCase):
             }
         ]
 
-        return self.setup_test_homeserver(http_client=self.http_client, config=config)
+        return self.setup_test_homeserver(
+            federation_http_client=self.http_client, config=config
+        )
 
     def build_perspectives_response(
         self, server_name: str, signing_key: SigningKey, valid_until_ts: int,
@@ -27,7 +27,7 @@ user2 = "@theresa:bbb"
 
 class DeviceTestCase(unittest.HomeserverTestCase):
     def make_homeserver(self, reactor, clock):
-        hs = self.setup_test_homeserver("server", http_client=None)
+        hs = self.setup_test_homeserver("server", federation_http_client=None)
         self.handler = hs.get_device_handler()
         self.store = hs.get_datastore()
         return hs

@@ -229,7 +229,7 @@ class DeviceTestCase(unittest.HomeserverTestCase):
 
 class DehydrationTestCase(unittest.HomeserverTestCase):
     def make_homeserver(self, reactor, clock):
-        hs = self.setup_test_homeserver("server", http_client=None)
+        hs = self.setup_test_homeserver("server", federation_http_client=None)
         self.handler = hs.get_device_handler()
         self.registration = hs.get_registration_handler()
         self.auth = hs.get_auth()
@@ -42,7 +42,7 @@ class DirectoryTestCase(unittest.HomeserverTestCase):
         self.mock_registry.register_query_handler = register_query_handler
 
         hs = self.setup_test_homeserver(
-            http_client=None,
+            federation_http_client=None,
             resource_for_federation=Mock(),
             federation_client=self.mock_federation,
             federation_registry=self.mock_registry,
@@ -37,7 +37,7 @@ class FederationTestCase(unittest.HomeserverTestCase):
     ]
 
     def make_homeserver(self, reactor, clock):
-        hs = self.setup_test_homeserver(http_client=None)
+        hs = self.setup_test_homeserver(federation_http_client=None)
         self.handler = hs.get_federation_handler()
         self.store = hs.get_datastore()
         return hs
@@ -17,30 +17,15 @@ from urllib.parse import parse_qs, urlparse

 from mock import Mock, patch

-import attr
 import pymacaroons

-from twisted.python.failure import Failure
-from twisted.web._newclient import ResponseDone
-
 from synapse.handlers.oidc_handler import OidcError, OidcMappingProvider
 from synapse.handlers.sso import MappingException
 from synapse.types import UserID

+from tests.test_utils import FakeResponse
 from tests.unittest import HomeserverTestCase, override_config

-
-@attr.s
-class FakeResponse:
-    code = attr.ib()
-    body = attr.ib()
-    phrase = attr.ib()
-
-    def deliverBody(self, protocol):
-        protocol.dataReceived(self.body)
-        protocol.connectionLost(Failure(ResponseDone()))
-
-
 # These are a few constants that are used as config parameters in the tests.
 ISSUER = "https://issuer/"
 CLIENT_ID = "test-client-id"
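The hunk above moves the `FakeResponse` test double out of the OIDC tests and into `tests.test_utils`. As a minimal sketch of what such a helper does — assuming a simplified, Twisted-free stand-in (`CollectingProtocol` is hypothetical; the real helper is attrs-based and closes the connection with `Failure(ResponseDone())`):

```python
from dataclasses import dataclass


@dataclass
class FakeResponse:
    """Canned HTTP response, mimicking twisted.web's IResponse.deliverBody."""

    code: int
    body: bytes
    phrase: bytes

    def deliverBody(self, protocol):
        # Push the canned body to the protocol, then signal completion.
        protocol.dataReceived(self.body)
        protocol.connectionLost(None)


class CollectingProtocol:
    """Hypothetical body-collecting protocol, standing in for a Twisted one."""

    def __init__(self):
        self.received = b""
        self.closed = False

    def dataReceived(self, data):
        self.received += data

    def connectionLost(self, reason):
        self.closed = True


resp = FakeResponse(code=200, body=b'{"ok": true}', phrase=b"OK")
proto = CollectingProtocol()
resp.deliverBody(proto)
```

Sharing one such double from `tests.test_utils` lets every test suite fake a federation or OIDC HTTP response without spinning up a real server.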
@@ -463,7 +463,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):

     def make_homeserver(self, reactor, clock):
         hs = self.setup_test_homeserver(
-            "server", http_client=None, federation_sender=Mock()
+            "server", federation_http_client=None, federation_sender=Mock()
         )
         return hs
@@ -44,7 +44,7 @@ class ProfileTestCase(unittest.TestCase):

         hs = yield setup_test_homeserver(
             self.addCleanup,
-            http_client=None,
+            federation_http_client=None,
             resource_for_federation=Mock(),
             federation_client=self.mock_federation,
             federation_server=Mock(),
@@ -15,18 +15,20 @@

 import json
+from typing import Dict

 from mock import ANY, Mock, call

 from twisted.internet import defer
+from twisted.web.resource import Resource

 from synapse.api.errors import AuthError
+from synapse.federation.transport.server import TransportLayerServer
 from synapse.types import UserID, create_requester

 from tests import unittest
 from tests.test_utils import make_awaitable
 from tests.unittest import override_config
-from tests.utils import register_federation_servlets

 # Some local users to test with
 U_APPLE = UserID.from_string("@apple:test")
@@ -53,8 +55,6 @@ def _make_edu_transaction_json(edu_type, content):


 class TypingNotificationsTestCase(unittest.HomeserverTestCase):
-    servlets = [register_federation_servlets]
-
     def make_homeserver(self, reactor, clock):
         # we mock out the keyring so as to skip the authentication check on the
         # federation API call.
@@ -70,13 +70,18 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):

         hs = self.setup_test_homeserver(
             notifier=Mock(),
-            http_client=mock_federation_client,
+            federation_http_client=mock_federation_client,
             keyring=mock_keyring,
             replication_streams={},
         )

         return hs

+    def create_resource_dict(self) -> Dict[str, Resource]:
+        d = super().create_resource_dict()
+        d["/_matrix/federation"] = TransportLayerServer(self.hs)
+        return d
+
     def prepare(self, reactor, clock, hs):
         mock_notifier = hs.get_notifier()
         self.on_new_event = mock_notifier.on_new_event
@@ -192,7 +197,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
             )
         )

-        put_json = self.hs.get_http_client().put_json
+        put_json = self.hs.get_federation_http_client().put_json
         put_json.assert_called_once_with(
             "farm",
             path="/_matrix/federation/v1/send/1000000",
@@ -270,7 +275,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):

         self.on_new_event.assert_has_calls([call("typing_key", 1, rooms=[ROOM_ID])])

-        put_json = self.hs.get_http_client().put_json
+        put_json = self.hs.get_federation_http_client().put_json
         put_json.assert_called_once_with(
             "farm",
             path="/_matrix/federation/v1/send/1000000",
@@ -17,6 +17,7 @@ import logging
 from mock import Mock

 import treq
+from netaddr import IPSet
 from service_identity import VerificationError
 from zope.interface import implementer

@@ -103,6 +104,7 @@ class MatrixFederationAgentTests(unittest.TestCase):
             reactor=self.reactor,
             tls_client_options_factory=self.tls_factory,
             user_agent="test-agent",  # Note that this is unused since _well_known_resolver is provided.
+            ip_blacklist=IPSet(),
             _srv_resolver=self.mock_resolver,
             _well_known_resolver=self.well_known_resolver,
         )
@@ -736,6 +738,7 @@ class MatrixFederationAgentTests(unittest.TestCase):
             reactor=self.reactor,
             tls_client_options_factory=tls_factory,
             user_agent=b"test-agent",  # This is unused since _well_known_resolver is passed below.
+            ip_blacklist=IPSet(),
             _srv_resolver=self.mock_resolver,
             _well_known_resolver=WellKnownResolver(
                 self.reactor,
@@ -49,7 +49,9 @@ class HTTPPusherTests(HomeserverTestCase):
         config = self.default_config()
         config["start_pushers"] = True

-        hs = self.setup_test_homeserver(config=config, proxied_http_client=m)
+        hs = self.setup_test_homeserver(
+            config=config, proxied_blacklisted_http_client=m
+        )

         return hs
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
-from typing import Any, Callable, List, Optional, Tuple
+from typing import Any, Callable, Dict, List, Optional, Tuple

 import attr

@@ -21,6 +21,7 @@ from twisted.internet.interfaces import IConsumer, IPullProducer, IReactorTime
 from twisted.internet.protocol import Protocol
 from twisted.internet.task import LoopingCall
 from twisted.web.http import HTTPChannel
+from twisted.web.resource import Resource

 from synapse.app.generic_worker import (
     GenericWorkerReplicationHandler,
@@ -28,7 +29,7 @@ from synapse.app.generic_worker import (
 )
 from synapse.http.server import JsonResource
 from synapse.http.site import SynapseRequest, SynapseSite
-from synapse.replication.http import ReplicationRestResource, streams
+from synapse.replication.http import ReplicationRestResource
 from synapse.replication.tcp.handler import ReplicationCommandHandler
 from synapse.replication.tcp.protocol import ClientReplicationStreamProtocol
 from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
@@ -54,10 +55,6 @@ class BaseStreamTestCase(unittest.HomeserverTestCase):
     if not hiredis:
         skip = "Requires hiredis"

-    servlets = [
-        streams.register_servlets,
-    ]
-
     def prepare(self, reactor, clock, hs):
         # build a replication server
         server_factory = ReplicationStreamProtocolFactory(hs)
@@ -67,7 +64,7 @@ class BaseStreamTestCase(unittest.HomeserverTestCase):
         # Make a new HomeServer object for the worker
         self.reactor.lookups["testserv"] = "1.2.3.4"
         self.worker_hs = self.setup_test_homeserver(
-            http_client=None,
+            federation_http_client=None,
            homeserver_to_use=GenericWorkerServer,
            config=self._get_worker_hs_config(),
            reactor=self.reactor,
@@ -88,6 +85,11 @@ class BaseStreamTestCase(unittest.HomeserverTestCase):
         self._client_transport = None
         self._server_transport = None

+    def create_resource_dict(self) -> Dict[str, Resource]:
+        d = super().create_resource_dict()
+        d["/_synapse/replication"] = ReplicationRestResource(self.hs)
+        return d
+
     def _get_worker_hs_config(self) -> dict:
         config = self.default_config()
         config["worker_app"] = "synapse.app.generic_worker"
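These hunks swap class-level servlet registration for `create_resource_dict` overrides that extend the parent's resource map instead of replacing it. The shape of that override pattern can be sketched like this (class names and the string "resources" standing in for real `Resource` objects are placeholders):

```python
from typing import Dict


class BaseTestCase:
    def create_resource_dict(self) -> Dict[str, str]:
        # Base class mounts the default resources (placeholder value here).
        return {"/_matrix/client": "client-resource"}


class ReplicationTestCase(BaseTestCase):
    def create_resource_dict(self) -> Dict[str, str]:
        # Extend, rather than replace, the parent's resource map -- the same
        # super()-then-update shape as the overrides added in this diff.
        d = super().create_resource_dict()
        d["/_synapse/replication"] = "replication-resource"
        return d


resources = ReplicationTestCase().create_resource_dict()
```

Calling `super()` first means every subclass keeps the default mounts and only adds the paths it specifically needs.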
@@ -264,7 +266,7 @@ class BaseMultiWorkerStreamTestCase(unittest.HomeserverTestCase):
             worker_app: Type of worker, e.g. `synapse.app.federation_sender`.
             extra_config: Any extra config to use for this instances.
             **kwargs: Options that get passed to `self.setup_test_homeserver`,
-                useful to e.g. pass some mocks for things like `http_client`
+                useful to e.g. pass some mocks for things like `federation_http_client`

         Returns:
             The new worker HomeServer instance.
@@ -50,7 +50,7 @@ class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
         self.make_worker_hs(
             "synapse.app.federation_sender",
             {"send_federation": True},
-            http_client=mock_client,
+            federation_http_client=mock_client,
         )

         user = self.register_user("user", "pass")
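The `mock_client` passed as `federation_http_client` in these tests is built as `Mock(spec=["put_json"])`, so only the listed method exists on it. A small sketch of that behaviour (the destination and path here are placeholders echoing the test's style):

```python
from unittest.mock import Mock

# Spec-restricted mock: only the names listed in `spec` exist on the mock,
# so a test fails fast if the code under test touches any other method.
mock_client = Mock(spec=["put_json"])
mock_client.put_json("other.server", path="/_matrix/federation/v1/send/1000000")

# The call is recorded and can be asserted on, as the tests do.
mock_client.put_json.assert_called_once()

# Any attribute outside the spec raises AttributeError:
try:
    mock_client.get_json
    outside_spec_allowed = True
except AttributeError:
    outside_spec_allowed = False
```

`spec_set` (used for the pusher mocks below) is stricter still: it also forbids *setting* attributes that are not in the spec.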
@@ -81,7 +81,7 @@ class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
                 "worker_name": "sender1",
                 "federation_sender_instances": ["sender1", "sender2"],
             },
-            http_client=mock_client1,
+            federation_http_client=mock_client1,
         )

         mock_client2 = Mock(spec=["put_json"])
@@ -93,7 +93,7 @@ class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
                 "worker_name": "sender2",
                 "federation_sender_instances": ["sender1", "sender2"],
             },
-            http_client=mock_client2,
+            federation_http_client=mock_client2,
         )

         user = self.register_user("user2", "pass")
@@ -144,7 +144,7 @@ class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
                 "worker_name": "sender1",
                 "federation_sender_instances": ["sender1", "sender2"],
             },
-            http_client=mock_client1,
+            federation_http_client=mock_client1,
         )

         mock_client2 = Mock(spec=["put_json"])
@@ -156,7 +156,7 @@ class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
                 "worker_name": "sender2",
                 "federation_sender_instances": ["sender1", "sender2"],
             },
-            http_client=mock_client2,
+            federation_http_client=mock_client2,
         )

         user = self.register_user("user3", "pass")
@@ -98,7 +98,7 @@ class PusherShardTestCase(BaseMultiWorkerStreamTestCase):
         self.make_worker_hs(
             "synapse.app.pusher",
             {"start_pushers": True},
-            proxied_http_client=http_client_mock,
+            proxied_blacklisted_http_client=http_client_mock,
         )

         event_id = self._create_pusher_and_send_msg("user")
@@ -133,7 +133,7 @@ class PusherShardTestCase(BaseMultiWorkerStreamTestCase):
                 "worker_name": "pusher1",
                 "pusher_instances": ["pusher1", "pusher2"],
             },
-            proxied_http_client=http_client_mock1,
+            proxied_blacklisted_http_client=http_client_mock1,
         )

         http_client_mock2 = Mock(spec_set=["post_json_get_json"])
@@ -148,7 +148,7 @@ class PusherShardTestCase(BaseMultiWorkerStreamTestCase):
                 "worker_name": "pusher2",
                 "pusher_instances": ["pusher1", "pusher2"],
             },
-            proxied_http_client=http_client_mock2,
+            proxied_blacklisted_http_client=http_client_mock2,
        )

        # We choose a user name that we know should go to pusher1.
@@ -210,7 +210,7 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
         }
         config["media_storage_providers"] = [provider_config]

-        hs = self.setup_test_homeserver(config=config, http_client=client)
+        hs = self.setup_test_homeserver(config=config, federation_http_client=client)

         return hs
@@ -38,7 +38,7 @@ class PresenceTestCase(unittest.HomeserverTestCase):

         hs = self.setup_test_homeserver(
             "red",
-            http_client=None,
+            federation_http_client=None,
             federation_client=Mock(),
             presence_handler=presence_handler,
         )
Some files were not shown because too many files have changed in this diff.