Merge remote-tracking branch 'origin/develop' into clokep/type-devices

commit 914d570aa9

CHANGES.md
@@ -1,3 +1,121 @@
Synapse 1.21.0rc2 (2020-10-02)
==============================

Features
--------

- Convert additional templates from inline HTML to Jinja2 templates. ([\#8444](https://github.com/matrix-org/synapse/issues/8444))

Bugfixes
--------

- Fix a regression in v1.21.0rc1 which broke thumbnails of remote media. ([\#8438](https://github.com/matrix-org/synapse/issues/8438))
- Do not expose the experimental `uk.half-shot.msc2778.login.application_service` flow in the login API, which caused a compatibility problem with Element iOS. ([\#8440](https://github.com/matrix-org/synapse/issues/8440))
- Fix a malformed log line in the new federation "catch up" logic. ([\#8442](https://github.com/matrix-org/synapse/issues/8442))
- Fix a DB query on startup for negative streams which caused long start up times. Introduced in [\#8374](https://github.com/matrix-org/synapse/issues/8374). ([\#8447](https://github.com/matrix-org/synapse/issues/8447))

Synapse 1.21.0rc1 (2020-10-01)
==============================

Features
--------

- Require the user to confirm that their password should be reset after clicking the email confirmation link. ([\#8004](https://github.com/matrix-org/synapse/issues/8004))
- Add an admin API `GET /_synapse/admin/v1/event_reports` to read entries of table `event_reports`. Contributed by @dklimpel. ([\#8217](https://github.com/matrix-org/synapse/issues/8217))
- Consolidate the SSO error template across all configuration. ([\#8248](https://github.com/matrix-org/synapse/issues/8248), [\#8405](https://github.com/matrix-org/synapse/issues/8405))
- Add a configuration option to specify a whitelist of domains that a user can be redirected to after validating their email or phone number. ([\#8275](https://github.com/matrix-org/synapse/issues/8275), [\#8417](https://github.com/matrix-org/synapse/issues/8417))
- Add experimental support for sharding event persister. ([\#8294](https://github.com/matrix-org/synapse/issues/8294), [\#8387](https://github.com/matrix-org/synapse/issues/8387), [\#8396](https://github.com/matrix-org/synapse/issues/8396), [\#8419](https://github.com/matrix-org/synapse/issues/8419))
- Add the room topic and avatar to the room details admin API. ([\#8305](https://github.com/matrix-org/synapse/issues/8305))
- Add an admin API for querying rooms where a user is a member. Contributed by @dklimpel. ([\#8306](https://github.com/matrix-org/synapse/issues/8306))
- Add `uk.half-shot.msc2778.login.application_service` login type to allow appservices to login. ([\#8320](https://github.com/matrix-org/synapse/issues/8320))
- Add a configuration option that allows existing users to log in with OpenID Connect. Contributed by @BBBSnowball and @OmmyZhang. ([\#8345](https://github.com/matrix-org/synapse/issues/8345))
- Add prometheus metrics for replication requests. ([\#8406](https://github.com/matrix-org/synapse/issues/8406))
- Support passing additional single sign-on parameters to the client. ([\#8413](https://github.com/matrix-org/synapse/issues/8413))
- Add experimental reporting of metrics on expensive rooms for state resolution. ([\#8420](https://github.com/matrix-org/synapse/issues/8420))
- Add an experimental prometheus metric to track the number of "large" rooms for state resolution. ([\#8425](https://github.com/matrix-org/synapse/issues/8425))
- Add prometheus metrics to track federation delays. ([\#8430](https://github.com/matrix-org/synapse/issues/8430))

Bugfixes
--------

- Fix a bug in the media repository where remote thumbnails with the same size but different crop methods would overwrite each other. Contributed by @deepbluev7. ([\#7124](https://github.com/matrix-org/synapse/issues/7124))
- Fix inconsistent handling of non-existent push rules, and stop tracking the `enabled` state of removed push rules. ([\#7796](https://github.com/matrix-org/synapse/issues/7796))
- Fix a longstanding bug when storing a media file with an empty `upload_name`. ([\#7905](https://github.com/matrix-org/synapse/issues/7905))
- Fix messages not being sent over federation until an event is sent into the same room. ([\#8230](https://github.com/matrix-org/synapse/issues/8230), [\#8247](https://github.com/matrix-org/synapse/issues/8247), [\#8258](https://github.com/matrix-org/synapse/issues/8258), [\#8272](https://github.com/matrix-org/synapse/issues/8272), [\#8322](https://github.com/matrix-org/synapse/issues/8322))
- Fix a longstanding bug where files that could not be thumbnailed would result in an Internal Server Error. ([\#8236](https://github.com/matrix-org/synapse/issues/8236), [\#8435](https://github.com/matrix-org/synapse/issues/8435))
- Upgrade minimum version of `canonicaljson` to version 1.4.0, to fix a unicode encoding issue. ([\#8262](https://github.com/matrix-org/synapse/issues/8262))
- Fix a longstanding bug which could lead to incomplete database upgrades on SQLite. ([\#8265](https://github.com/matrix-org/synapse/issues/8265))
- Fix a stack overflow when stderr is redirected to the logging system, and the logging system encounters an error. ([\#8268](https://github.com/matrix-org/synapse/issues/8268))
- Fix a bug which caused the logging system to report errors if `DEBUG` was enabled and no `context` filter was applied. ([\#8278](https://github.com/matrix-org/synapse/issues/8278))
- Fix an edge case where push could get delayed for a user until a later event was pushed. ([\#8287](https://github.com/matrix-org/synapse/issues/8287))
- Fix fetching malformed events from remote servers. ([\#8324](https://github.com/matrix-org/synapse/issues/8324))
- Fix an `UnboundLocalError` occurring when appservices send a malformed register request. ([\#8329](https://github.com/matrix-org/synapse/issues/8329))
- Don't send push notifications to expired user accounts. ([\#8353](https://github.com/matrix-org/synapse/issues/8353))
- Fix a regression in v1.19.0 with reactivating users through the admin API. ([\#8362](https://github.com/matrix-org/synapse/issues/8362))
- Fix a bug where during device registration the length of the device name wasn't limited. ([\#8364](https://github.com/matrix-org/synapse/issues/8364))
- Include `guest_access` in the fields that are checked for null bytes when updating `room_stats_state`. Broke in v1.7.2. ([\#8373](https://github.com/matrix-org/synapse/issues/8373))
- Fix a theoretical race condition where events are not sent down `/sync` if the synchrotron worker is restarted without restarting other workers. ([\#8374](https://github.com/matrix-org/synapse/issues/8374))
- Fix a bug which could cause errors in rooms with malformed membership events, on servers using SQLite. ([\#8385](https://github.com/matrix-org/synapse/issues/8385))
- Fix a "Re-starting finished log context" warning when receiving an event we already had over federation. ([\#8398](https://github.com/matrix-org/synapse/issues/8398))
- Fix incorrect handling of timeouts on outgoing HTTP requests. ([\#8400](https://github.com/matrix-org/synapse/issues/8400))
- Fix a regression in v1.20.0 in the `synapse_port_db` script regarding the `ui_auth_sessions_ips` table. ([\#8410](https://github.com/matrix-org/synapse/issues/8410))
- Remove an unnecessary 3PID registration check when resetting password via an email address. Bug introduced in v0.34.0rc2. ([\#8414](https://github.com/matrix-org/synapse/issues/8414))

Improved Documentation
----------------------

- Add `/_synapse/client` to the reverse proxy documentation. ([\#8227](https://github.com/matrix-org/synapse/issues/8227))
- Add a note to the reverse proxy settings documentation about disabling Apache's mod_security2. Contributed by Julian Fietkau (@jfietkau). ([\#8375](https://github.com/matrix-org/synapse/issues/8375))
- Improve the description of the `server_name` config option in `homeserver.yaml`. ([\#8415](https://github.com/matrix-org/synapse/issues/8415))

Deprecations and Removals
-------------------------

- Drop support for `prometheus_client` older than 0.4.0. ([\#8426](https://github.com/matrix-org/synapse/issues/8426))

Internal Changes
----------------

- Fix tests on distros which disable TLSv1.0. Contributed by @danc86. ([\#8208](https://github.com/matrix-org/synapse/issues/8208))
- Simplify the distributor code to avoid unnecessary work. ([\#8216](https://github.com/matrix-org/synapse/issues/8216))
- Remove the `populate_stats_process_rooms_2` background job and restore functionality to `populate_stats_process_rooms`. ([\#8243](https://github.com/matrix-org/synapse/issues/8243))
- Clean up type hints for `PaginationConfig`. ([\#8250](https://github.com/matrix-org/synapse/issues/8250), [\#8282](https://github.com/matrix-org/synapse/issues/8282))
- Track the latest event for every destination and room for catch-up after federation outage. ([\#8256](https://github.com/matrix-org/synapse/issues/8256))
- Fix a non-user-visible bug in the implementation of `MultiWriterIdGenerator.get_current_token_for_writer`. ([\#8257](https://github.com/matrix-org/synapse/issues/8257))
- Switch to the JSON implementation from the standard library. ([\#8259](https://github.com/matrix-org/synapse/issues/8259))
- Add type hints to `synapse.util.async_helpers`. ([\#8260](https://github.com/matrix-org/synapse/issues/8260))
- Simplify tests that mock asynchronous functions. ([\#8261](https://github.com/matrix-org/synapse/issues/8261))
- Add type hints to `StreamToken` and `RoomStreamToken` classes. ([\#8279](https://github.com/matrix-org/synapse/issues/8279))
- Change `StreamToken.room_key` to be a `RoomStreamToken` instance. ([\#8281](https://github.com/matrix-org/synapse/issues/8281))
- Refactor notifier code to correctly use the max event stream position. ([\#8288](https://github.com/matrix-org/synapse/issues/8288))
- Use slotted classes where possible. ([\#8296](https://github.com/matrix-org/synapse/issues/8296))
- Support testing the local Synapse checkout against the [Complement homeserver test suite](https://github.com/matrix-org/complement/). ([\#8317](https://github.com/matrix-org/synapse/issues/8317))
- Update outdated usages of `metaclass` to Python 3 syntax. ([\#8326](https://github.com/matrix-org/synapse/issues/8326))
- Move lint-related dependencies to the package-extra field, and update CONTRIBUTING.md to utilise this. ([\#8330](https://github.com/matrix-org/synapse/issues/8330), [\#8377](https://github.com/matrix-org/synapse/issues/8377))
- Use the `admin_patterns` helper in additional locations. ([\#8331](https://github.com/matrix-org/synapse/issues/8331))
- Fix test logging to allow braces in log output. ([\#8335](https://github.com/matrix-org/synapse/issues/8335))
- Remove `__future__` imports related to Python 2 compatibility. ([\#8337](https://github.com/matrix-org/synapse/issues/8337))
- Simplify `super()` calls to Python 3 syntax. ([\#8344](https://github.com/matrix-org/synapse/issues/8344))
- Fix a bad merge from the `release-v1.20.0` branch to `develop`. ([\#8354](https://github.com/matrix-org/synapse/issues/8354))
- Factor out a `_send_dummy_event_for_room` method. ([\#8370](https://github.com/matrix-org/synapse/issues/8370))
- Improve logging of state resolution. ([\#8371](https://github.com/matrix-org/synapse/issues/8371))
- Add type annotations to `SimpleHttpClient`. ([\#8372](https://github.com/matrix-org/synapse/issues/8372))
- Refactor ID generators to use `async with` syntax. ([\#8383](https://github.com/matrix-org/synapse/issues/8383))
- Add `EventStreamPosition` type. ([\#8388](https://github.com/matrix-org/synapse/issues/8388))
- Create a mechanism for marking tests "logcontext clean". ([\#8399](https://github.com/matrix-org/synapse/issues/8399))
- A pair of tiny cleanups in the federation request code. ([\#8401](https://github.com/matrix-org/synapse/issues/8401))
- Add checks on startup that PostgreSQL sequences are consistent with their associated tables. ([\#8402](https://github.com/matrix-org/synapse/issues/8402))
- Do not include appservice users when calculating the total MAU for a server. ([\#8404](https://github.com/matrix-org/synapse/issues/8404))
- Typing fixes for `synapse.handlers.federation`. ([\#8422](https://github.com/matrix-org/synapse/issues/8422))
- Various refactors to simplify stream token handling. ([\#8423](https://github.com/matrix-org/synapse/issues/8423))
- Make stream token serializing/deserializing async. ([\#8427](https://github.com/matrix-org/synapse/issues/8427))

Synapse 1.20.1 (2020-09-24)
===========================

UPGRADE.rst
@@ -75,6 +75,23 @@ for example:

    wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
    dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb

Upgrading to v1.22.0
====================

ThirdPartyEventRules breaking changes
-------------------------------------

This release introduces a backwards-incompatible change to modules making use of
``ThirdPartyEventRules`` in Synapse. If you make use of a module defined under the
``third_party_event_rules`` config option, please make sure it is updated to handle
the change below.

The ``http_client`` argument is no longer passed to modules as they are initialised.
Instead, modules are now passed a ``module_api`` argument during initialisation, which
is an instance of ``ModuleApi``. ``ModuleApi`` instances have a ``http_client``
property which acts the same as the ``http_client`` argument previously passed to
``ThirdPartyEventRules`` modules.

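A minimal sketch of how a ``third_party_event_rules`` module might be updated for this change. The class name and the rule logic are hypothetical; only the constructor's ``module_api`` argument and its ``http_client`` property are taken from the notes above.

```python
# Hypothetical ThirdPartyEventRules module updated for Synapse v1.22.0.
# Only the constructor signature change is documented behaviour; the class
# name and rule logic here are illustrative placeholders.


class ExampleRules:
    def __init__(self, config, module_api):
        # Pre-v1.22.0 modules received an ``http_client`` argument directly;
        # it is now reached through the ModuleApi instance instead.
        self._config = config
        self._module_api = module_api
        self._http_client = module_api.http_client  # same client as before

    async def check_event_allowed(self, event, state_events):
        # Placeholder rule: allow every event.
        return True
```

Existing modules can typically keep their rule logic unchanged and only swap the constructor's ``http_client`` parameter for ``module_api``.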
Upgrading to v1.21.0
====================

The remaining hunks of this merge delete per-change newsfragment files whose
contents are duplicated in the CHANGES.md entries above, and add the following
new fragments:

- Add a configuration option for always using the "userinfo endpoint" for OpenID Connect. This fixes support for some identity providers, e.g. GitLab. Contributed by Benjamin Koch.
- Allow `ThirdPartyEventRules` modules to query and manipulate whether a room is in the public rooms directory.
- Allow running background tasks in a separate worker process.
- Check for unreachable code with mypy.
- Add unit test for event persister sharding.
- Configure `public_baseurl` when using demo scripts.
- Add SQL logging on queries that happen during startup.
- Speed up unit tests when using PostgreSQL.
- Remove redundant database loads of stream_ordering for events we already have.
- Fix a longstanding bug where invalid ignored users in account data could break clients.
- Fix a bug where backfilling a room with an event that was missing the `redacts` field would break.
- Update the directions for using the manhole with coroutines.

@@ -30,6 +30,8 @@ for port in 8080 8081 8082; do

    if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
        printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config

        echo "public_baseurl: http://localhost:$port/" >> $DIR/etc/$port.config

        echo 'enable_registration: true' >> $DIR/etc/$port.config

        # Warning, this heredoc depends on the interaction of tabs and spaces. Please don't

@@ -35,9 +35,12 @@ This gives a Python REPL in which `hs` gives access to the
`synapse.server.HomeServer` object - which in turn gives access to many other
parts of the process.

Note that any call which returns a coroutine will need to be wrapped in `ensureDeferred`.

As a simple example, retrieving an event from the database:

```pycon
>>> from twisted.internet import defer
>>> defer.ensureDeferred(hs.get_datastore().get_event('$1416420717069yeQaw:matrix.org'))
<Deferred at 0x7ff253fc6998 current result: <FrozenEvent event_id='$1416420717069yeQaw:matrix.org', type='m.room.create', state_key=''>>
```

@@ -245,6 +245,29 @@ oidc_config:
   client_auth_method: "client_secret_post"
   user_mapping_provider:
     config:
       localpart_template: "{{ user.preferred_username }}"
       display_name_template: "{{ user.name }}"
```

### GitLab

1. Create a [new application](https://gitlab.com/profile/applications).
2. Add the `read_user` and `openid` scopes.
3. Add this Callback URL: `[synapse public baseurl]/_synapse/oidc/callback`

Synapse config:

```yaml
oidc_config:
  enabled: true
  issuer: "https://gitlab.com/"
  client_id: "your-client-id" # TO BE FILLED
  client_secret: "your-client-secret" # TO BE FILLED
  client_auth_method: "client_secret_post"
  scopes: ["openid", "read_user"]
  user_profile_method: "userinfo_endpoint"
  user_mapping_provider:
    config:
      localpart_template: '{{ user.nickname }}'
      display_name_template: '{{ user.name }}'
```

@@ -106,6 +106,17 @@ Note that the above may fail with an error about duplicate rows if corruption
has already occurred, and such duplicate rows will need to be manually removed.

## Fixing inconsistent sequences error

Synapse uses Postgres sequences to generate IDs for various tables. A sequence
and its associated table can get out of sync if, for example, Synapse has been
downgraded and then upgraded again.

To fix the issue, shut down Synapse (including any and all workers) and run the
SQL command included in the error message. Once done, Synapse should start
successfully.

## Tuning Postgres

The default settings should be fine for most deployments. For larger
@@ -33,10 +33,23 @@

## Server ##

# The public-facing domain of the server
#
# The server_name will appear at the end of usernames and room addresses
# created on this server. For example, if the server_name was example.com,
# usernames on this server would be in the format @user:example.com
#
# In most cases you should avoid using a matrix-specific subdomain such as
# matrix.example.com or synapse.example.com as the server_name for the same
# reasons you wouldn't use user@email.example.com as your email address.
# See https://github.com/matrix-org/synapse/blob/master/docs/delegate.md
# for information on how to host Synapse on a subdomain while preserving
# a clean server_name.
#
# The server_name cannot be changed later so it is important to
# configure this correctly before you start Synapse. It should be all
# lowercase and may contain an explicit port.
# Examples: matrix.org, localhost:8080
#
server_name: "SERVERNAME"

@@ -616,6 +629,7 @@ acme:
#tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]


## Federation ##

# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit

@@ -649,6 +663,17 @@ federation_ip_range_blacklist:
  - 'fe80::/64'
  - 'fc00::/7'

# Report prometheus metrics on the age of PDUs being sent to and received from
# the following domains. This can be used to give an idea of "delay" on inbound
# and outbound federation, though be aware that any delay can be due to problems
# at either end or with the intermediate network.
#
# By default, no domains are monitored in this way.
#
#federation_metrics_domains:
#  - matrix.org
#  - example.com


## Caching ##

@@ -1689,6 +1714,19 @@ oidc_config:
  #
  #skip_verification: true

  # Whether to fetch the user profile from the userinfo endpoint. Valid
  # values are: "auto" or "userinfo_endpoint".
  #
  # Defaults to "auto", which fetches the userinfo endpoint if "openid" is included
  # in `scopes`. Uncomment the following to always fetch the userinfo endpoint.
  #
  #user_profile_method: "userinfo_endpoint"

  # Uncomment to allow a user logging in via OIDC to match a pre-existing account instead
  # of failing. This could be used if switching from password logins to OIDC. Defaults to false.
  #
  #allow_existing_users: true

  # An external module can be provided here as a custom solution to mapping
  # attributes returned from an OIDC provider onto a matrix user.
  #

@@ -1730,6 +1768,14 @@ oidc_config:
  #
  #display_name_template: "{{ user.given_name }} {{ user.last_name }}"

  # Jinja2 templates for extra attributes to send back to the client during
  # login.
  #
  # Note that these are non-standard and clients will ignore them without modifications.
  #
  #extra_attributes:
  #  birthdate: "{{ user.birthdate }}"


# Enable CAS for registration and login.

@@ -2458,6 +2504,11 @@ opentracing:
#   events: worker1
#   typing: worker1

# The worker that is used to run background tasks (e.g. cleaning up expired
# data). If not provided this defaults to the main process.
#
#run_background_tasks_on: worker1


# Configuration for Redis when using workers. This *must* be enabled when
# using workers (unless using old style direct TCP configuration).

@@ -57,7 +57,7 @@ A custom mapping provider must specify the following methods:
     - This method must return a string, which is the unique identifier for the
       user. Commonly the ``sub`` claim of the response.
 * `map_user_attributes(self, userinfo, token)`
-    - This method should be async.
+    - This method must be async.
     - Arguments:
      - `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
        information from.
@@ -66,6 +66,18 @@ A custom mapping provider must specify the following methods:
     - Returns a dictionary with two keys:
      - localpart: A required string, used to generate the Matrix ID.
      - displayname: An optional string, the display name for the user.
+* `get_extra_attributes(self, userinfo, token)`
+    - This method must be async.
+    - Arguments:
+      - `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
+        information from.
+      - `token` - A dictionary which includes information necessary to make
+        further requests to the OpenID provider.
+    - Returns a dictionary that is suitable to be serialized to JSON. This
+      will be returned as part of the response during a successful login.
+
+      Note that care should be taken to not overwrite any of the parameters
+      usually returned as part of the [login response](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login).
 
 ### Default OpenID Mapping Provider
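To make the `get_extra_attributes` contract above concrete, a minimal provider method might look like the following sketch. The class name and the `birthdate` attribute are illustrative only (the `extra_attributes` example from the sample config), not part of Synapse itself:

```python
class ExampleMappingProvider:
    """Sketch of the `get_extra_attributes` hook described above."""

    async def get_extra_attributes(self, userinfo, token):
        # `userinfo` behaves like a mapping of OIDC claims; anything returned
        # here must be JSON-serializable and is merged into the /login
        # response alongside the standard fields (take care not to shadow them).
        return {"birthdate": userinfo.get("birthdate")}
```
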
@@ -243,6 +243,22 @@ for the room are in flight:
     ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
 
+Additionally, the following endpoints should be included if Synapse is configured
+to use SSO (you only need to include the ones for whichever SSO provider you're
+using):
+
+    # OpenID Connect requests.
+    ^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect$
+    ^/_synapse/oidc/callback$
+
+    # SAML requests.
+    ^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect$
+    ^/_matrix/saml2/authn_response$
+
+    # CAS requests.
+    ^/_matrix/client/(api/v1|r0|unstable)/login/(cas|sso)/redirect$
+    ^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$
+
 Note that a HTTP listener with `client` and `federation` resources must be
 configured in the `worker_listeners` option in the worker config.
@@ -303,6 +319,23 @@ stream_writers:
   events: event_persister1
 ```
 
+#### Background tasks
+
+There is also *experimental* support for moving background tasks to a separate
+worker. Background tasks are run periodically or started via replication. Exactly
+which tasks are configured to run depends on your Synapse configuration (e.g. if
+stats is enabled).
+
+To enable this, the worker must have a `worker_name` and can be configured to run
+background tasks. For example, to move background tasks to a dedicated worker,
+the shared configuration would include:
+
+```yaml
+run_background_tasks_on: background_worker
+```
+
+You might also wish to investigate the `update_user_directory` and
+`media_instance_running_background_jobs` settings.
+
 ### `synapse.app.pusher`
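For completeness, the dedicated worker named by `run_background_tasks_on` would carry the matching name in its own configuration file. A sketch only, assuming the generic worker app and omitting the listener settings that every worker also needs:

```yaml
worker_app: synapse.app.generic_worker
worker_name: background_worker
```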
mypy.ini
@@ -6,6 +6,7 @@ check_untyped_defs = True
 show_error_codes = True
 show_traceback = True
 mypy_path = stubs
+warn_unreachable = True
 files =
  synapse/api,
  synapse/appservice,
@@ -143,3 +144,6 @@ ignore_missing_imports = True
 
 [mypy-nacl.*]
 ignore_missing_imports = True
+
+[mypy-hiredis]
+ignore_missing_imports = True
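As an aside on the newly enabled `warn_unreachable` flag: it makes mypy report statements it can prove can never execute. A small hypothetical illustration (not code from this diff):

```python
def describe(x: int) -> str:
    # Since x is annotated as int, the isinstance check is always true,
    # so mypy --warn-unreachable reports the final return as unreachable.
    if isinstance(x, int):
        return "an int"
    return "never happens"
```
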
@@ -0,0 +1,22 @@
+#! /bin/bash -eu
+# This script is designed for developers who want to test their code
+# against Complement.
+#
+# It makes a Synapse image which represents the current checkout,
+# then downloads Complement and runs it with that image.
+
+cd "$(dirname $0)/.."
+
+# Build the base Synapse image from the local checkout
+docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
+
+# Download Complement
+wget -N https://github.com/matrix-org/complement/archive/master.tar.gz
+tar -xzf master.tar.gz
+cd complement-master
+
+# Build the Synapse image from Complement, based on the above image we just built
+docker build -t complement-synapse -f dockerfiles/Synapse.Dockerfile ./dockerfiles
+
+# Run the tests on the resulting image!
+COMPLEMENT_BASE_IMAGE=complement-synapse go test -v -count=1 ./tests
@@ -145,6 +145,7 @@ IGNORED_TABLES = {
     # the sessions are transient anyway, so ignore them.
     "ui_auth_sessions",
     "ui_auth_sessions_credentials",
+    "ui_auth_sessions_ips",
 }
 
 
@@ -488,7 +489,7 @@ class Porter(object):
 
         hs = MockHomeserver(self.hs_config)
 
-        with make_conn(db_config, engine) as db_conn:
+        with make_conn(db_config, engine, "portdb") as db_conn:
             engine.check_database(
                 db_conn, allow_outdated_version=allow_outdated_version
             )
@@ -16,7 +16,7 @@
 """Contains *incomplete* type hints for txredisapi.
 """
 
-from typing import List, Optional, Union
+from typing import List, Optional, Union, Type
 
 class RedisProtocol:
     def publish(self, channel: str, message: bytes): ...
@@ -42,3 +42,21 @@ def lazyConnection(
 
 class SubscriberFactory:
     def buildProtocol(self, addr): ...
+
+class ConnectionHandler: ...
+
+class RedisFactory:
+    continueTrying: bool
+    handler: RedisProtocol
+    def __init__(
+        self,
+        uuid: str,
+        dbid: Optional[int],
+        poolsize: int,
+        isLazy: bool = False,
+        handler: Type = ConnectionHandler,
+        charset: str = "utf-8",
+        password: Optional[str] = None,
+        replyTimeout: Optional[int] = None,
+        convertNumbers: Optional[int] = True,
+    ): ...
@@ -48,7 +48,7 @@ try:
 except ImportError:
     pass
 
-__version__ = "1.20.1"
+__version__ = "1.21.0rc2"
 
 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when
@@ -155,3 +155,8 @@ class EventContentFields:
 class RoomEncryptionAlgorithms:
     MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2"
     DEFAULT = MEGOLM_V1_AES_SHA2
+
+
+class AccountDataTypes:
+    DIRECT = "m.direct"
+    IGNORED_USER_LIST = "m.ignored_user_list"
@@ -28,6 +28,7 @@ from twisted.protocols.tls import TLSMemoryBIOFactory
 
 import synapse
 from synapse.app import check_bind_error
+from synapse.app.phone_stats_home import start_phone_stats_home
 from synapse.config.server import ListenerConfig
 from synapse.crypto import context_factory
 from synapse.logging.context import PreserveLoggingContext
@@ -271,9 +272,19 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
         hs.get_datastore().db_pool.start_profiling()
         hs.get_pusherpool().start()
 
+        # Log when we start the shut down process.
+        hs.get_reactor().addSystemEventTrigger(
+            "before", "shutdown", logger.info, "Shutting down..."
+        )
+
         setup_sentry(hs)
         setup_sdnotify(hs)
 
+        # If background tasks are running on the main process, start collecting the
+        # phone home stats.
+        if hs.config.run_background_tasks:
+            start_phone_stats_home(hs)
+
         # We now freeze all allocated objects in the hopes that (almost)
         # everything currently allocated are things that will be used for the
         # rest of time. Doing so means less work each GC (hopefully).
@@ -208,6 +208,7 @@ def start(config_options):
 
     # Explicitly disable background processes
     config.update_user_directory = False
+    config.run_background_tasks = False
     config.start_pushers = False
     config.send_federation = False
@@ -128,11 +128,13 @@ from synapse.rest.key.v2 import KeyApiV2Resource
 from synapse.server import HomeServer, cache_in_self
 from synapse.storage.databases.main.censor_events import CensorEventsStore
 from synapse.storage.databases.main.media_repository import MediaRepositoryStore
+from synapse.storage.databases.main.metrics import ServerMetricsStore
 from synapse.storage.databases.main.monthly_active_users import (
     MonthlyActiveUsersWorkerStore,
 )
 from synapse.storage.databases.main.presence import UserPresenceState
 from synapse.storage.databases.main.search import SearchWorkerStore
+from synapse.storage.databases.main.stats import StatsStore
 from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
 from synapse.storage.databases.main.user_directory import UserDirectoryStore
 from synapse.types import ReadReceipt
@@ -454,6 +456,7 @@ class GenericWorkerSlavedStore(
     # FIXME(#3714): We need to add UserDirectoryStore as we write directly
     # rather than going via the correct worker.
     UserDirectoryStore,
+    StatsStore,
     UIAuthWorkerStore,
     SlavedDeviceInboxStore,
     SlavedDeviceStore,
@@ -476,6 +479,7 @@ class GenericWorkerSlavedStore(
     SlavedFilteringStore,
     MonthlyActiveUsersWorkerStore,
     MediaRepositoryStore,
+    ServerMetricsStore,
     SearchWorkerStore,
     BaseSlavedStore,
 ):
@@ -17,14 +17,10 @@
 
 import gc
 import logging
-import math
 import os
-import resource
 import sys
 from typing import Iterable
 
-from prometheus_client import Gauge
-
 from twisted.application import service
 from twisted.internet import defer, reactor
 from twisted.python.failure import Failure
@@ -60,7 +56,6 @@ from synapse.http.server import (
 from synapse.http.site import SynapseSite
 from synapse.logging.context import LoggingContext
 from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
-from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.module_api import ModuleApi
 from synapse.python_dependencies import check_requirements
 from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
@@ -334,20 +329,6 @@ class SynapseHomeServer(HomeServer):
             logger.warning("Unrecognized listener type: %s", listener.type)
 
 
-# Gauges to expose monthly active user control metrics
-current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
-current_mau_by_service_gauge = Gauge(
-    "synapse_admin_mau_current_mau_by_service",
-    "Current MAU by service",
-    ["app_service"],
-)
-max_mau_gauge = Gauge("synapse_admin_mau:max", "MAU Limit")
-registered_reserved_users_mau_gauge = Gauge(
-    "synapse_admin_mau:registered_reserved_users",
-    "Registered users with reserved threepids",
-)
-
-
 def setup(config_options):
     """
     Args:
@@ -389,8 +370,6 @@ def setup(config_options):
     except UpgradeDatabaseException as e:
         quit_with_error("Failed to upgrade database: %s" % (e,))
 
-    hs.setup_master()
-
     async def do_acme() -> bool:
         """
         Reprovision an ACME certificate, if it's required.
@@ -486,92 +465,6 @@ class SynapseService(service.Service):
         return self._port.stopListening()
 
 
-# Contains the list of processes we will be monitoring
-# currently either 0 or 1
-_stats_process = []
-
-
-async def phone_stats_home(hs, stats, stats_process=_stats_process):
-    logger.info("Gathering stats for reporting")
-    now = int(hs.get_clock().time())
-    uptime = int(now - hs.start_time)
-    if uptime < 0:
-        uptime = 0
-
-    #
-    # Performance statistics. Keep this early in the function to maintain reliability of `test_performance_100` test.
-    #
-    old = stats_process[0]
-    new = (now, resource.getrusage(resource.RUSAGE_SELF))
-    stats_process[0] = new
-
-    # Get RSS in bytes
-    stats["memory_rss"] = new[1].ru_maxrss
-
-    # Get CPU time in % of a single core, not % of all cores
-    used_cpu_time = (new[1].ru_utime + new[1].ru_stime) - (
-        old[1].ru_utime + old[1].ru_stime
-    )
-    if used_cpu_time == 0 or new[0] == old[0]:
-        stats["cpu_average"] = 0
-    else:
-        stats["cpu_average"] = math.floor(used_cpu_time / (new[0] - old[0]) * 100)
-
-    #
-    # General statistics
-    #
-
-    stats["homeserver"] = hs.config.server_name
-    stats["server_context"] = hs.config.server_context
-    stats["timestamp"] = now
-    stats["uptime_seconds"] = uptime
-    version = sys.version_info
-    stats["python_version"] = "{}.{}.{}".format(
-        version.major, version.minor, version.micro
-    )
-    stats["total_users"] = await hs.get_datastore().count_all_users()
-
-    total_nonbridged_users = await hs.get_datastore().count_nonbridged_users()
-    stats["total_nonbridged_users"] = total_nonbridged_users
-
-    daily_user_type_results = await hs.get_datastore().count_daily_user_type()
-    for name, count in daily_user_type_results.items():
-        stats["daily_user_type_" + name] = count
-
-    room_count = await hs.get_datastore().get_room_count()
-    stats["total_room_count"] = room_count
-
-    stats["daily_active_users"] = await hs.get_datastore().count_daily_users()
-    stats["monthly_active_users"] = await hs.get_datastore().count_monthly_users()
-    stats["daily_active_rooms"] = await hs.get_datastore().count_daily_active_rooms()
-    stats["daily_messages"] = await hs.get_datastore().count_daily_messages()
-
-    r30_results = await hs.get_datastore().count_r30_users()
-    for name, count in r30_results.items():
-        stats["r30_users_" + name] = count
-
-    daily_sent_messages = await hs.get_datastore().count_daily_sent_messages()
-    stats["daily_sent_messages"] = daily_sent_messages
-    stats["cache_factor"] = hs.config.caches.global_factor
-    stats["event_cache_size"] = hs.config.caches.event_cache_size
-
-    #
-    # Database version
-    #
-
-    # This only reports info about the *main* database.
-    stats["database_engine"] = hs.get_datastore().db_pool.engine.module.__name__
-    stats["database_server_version"] = hs.get_datastore().db_pool.engine.server_version
-
-    logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
-    try:
-        await hs.get_proxied_http_client().put_json(
-            hs.config.report_stats_endpoint, stats
-        )
-    except Exception as e:
-        logger.warning("Error reporting stats: %s", e)
-
-
 def run(hs):
     PROFILE_SYNAPSE = False
     if PROFILE_SYNAPSE:
@@ -597,81 +490,6 @@ def run(hs):
         ThreadPool._worker = profile(ThreadPool._worker)
         reactor.run = profile(reactor.run)
 
-    clock = hs.get_clock()
-
-    stats = {}
-
-    def performance_stats_init():
-        _stats_process.clear()
-        _stats_process.append(
-            (int(hs.get_clock().time()), resource.getrusage(resource.RUSAGE_SELF))
-        )
-
-    def start_phone_stats_home():
-        return run_as_background_process(
-            "phone_stats_home", phone_stats_home, hs, stats
-        )
-
-    def generate_user_daily_visit_stats():
-        return run_as_background_process(
-            "generate_user_daily_visits", hs.get_datastore().generate_user_daily_visits
-        )
-
-    # Rather than update on per session basis, batch up the requests.
-    # If you increase the loop period, the accuracy of user_daily_visits
-    # table will decrease
-    clock.looping_call(generate_user_daily_visit_stats, 5 * 60 * 1000)
-
-    # monthly active user limiting functionality
-    def reap_monthly_active_users():
-        return run_as_background_process(
-            "reap_monthly_active_users", hs.get_datastore().reap_monthly_active_users
-        )
-
-    clock.looping_call(reap_monthly_active_users, 1000 * 60 * 60)
-    reap_monthly_active_users()
-
-    async def generate_monthly_active_users():
-        current_mau_count = 0
-        current_mau_count_by_service = {}
-        reserved_users = ()
-        store = hs.get_datastore()
-        if hs.config.limit_usage_by_mau or hs.config.mau_stats_only:
-            current_mau_count = await store.get_monthly_active_count()
-            current_mau_count_by_service = (
-                await store.get_monthly_active_count_by_service()
-            )
-            reserved_users = await store.get_registered_reserved_users()
-        current_mau_gauge.set(float(current_mau_count))
-
-        for app_service, count in current_mau_count_by_service.items():
-            current_mau_by_service_gauge.labels(app_service).set(float(count))
-
-        registered_reserved_users_mau_gauge.set(float(len(reserved_users)))
-        max_mau_gauge.set(float(hs.config.max_mau_value))
-
-    def start_generate_monthly_active_users():
-        return run_as_background_process(
-            "generate_monthly_active_users", generate_monthly_active_users
-        )
-
-    start_generate_monthly_active_users()
-    if hs.config.limit_usage_by_mau or hs.config.mau_stats_only:
-        clock.looping_call(start_generate_monthly_active_users, 5 * 60 * 1000)
-    # End of monthly active user settings
-
-    if hs.config.report_stats:
-        logger.info("Scheduling stats reporting for 3 hour intervals")
-        clock.looping_call(start_phone_stats_home, 3 * 60 * 60 * 1000)
-
-    # We need to defer this init for the cases that we daemonize
-    # otherwise the process ID we get is that of the non-daemon process
-    clock.call_later(0, performance_stats_init)
-
-    # We wait 5 minutes to send the first set of stats as the server can
-    # be quite busy the first few minutes
-    clock.call_later(5 * 60, start_phone_stats_home)
-
     _base.start_reactor(
         "synapse-homeserver",
         soft_file_limit=hs.config.soft_file_limit,
@@ -0,0 +1,202 @@
+# Copyright 2020 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import math
+import resource
+import sys
+
+from prometheus_client import Gauge
+
+from synapse.metrics.background_process_metrics import run_as_background_process
+
+logger = logging.getLogger("synapse.app.homeserver")
+
+# Contains the list of processes we will be monitoring
+# currently either 0 or 1
+_stats_process = []
+
+# Gauges to expose monthly active user control metrics
+current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
+current_mau_by_service_gauge = Gauge(
+    "synapse_admin_mau_current_mau_by_service",
+    "Current MAU by service",
+    ["app_service"],
+)
+max_mau_gauge = Gauge("synapse_admin_mau:max", "MAU Limit")
+registered_reserved_users_mau_gauge = Gauge(
+    "synapse_admin_mau:registered_reserved_users",
+    "Registered users with reserved threepids",
+)
+
+
+async def phone_stats_home(hs, stats, stats_process=_stats_process):
+    logger.info("Gathering stats for reporting")
+    now = int(hs.get_clock().time())
+    uptime = int(now - hs.start_time)
+    if uptime < 0:
+        uptime = 0
+
+    #
+    # Performance statistics. Keep this early in the function to maintain reliability of `test_performance_100` test.
+    #
+    old = stats_process[0]
+    new = (now, resource.getrusage(resource.RUSAGE_SELF))
+    stats_process[0] = new
+
+    # Get RSS in bytes
+    stats["memory_rss"] = new[1].ru_maxrss
+
+    # Get CPU time in % of a single core, not % of all cores
+    used_cpu_time = (new[1].ru_utime + new[1].ru_stime) - (
+        old[1].ru_utime + old[1].ru_stime
+    )
+    if used_cpu_time == 0 or new[0] == old[0]:
+        stats["cpu_average"] = 0
+    else:
+        stats["cpu_average"] = math.floor(used_cpu_time / (new[0] - old[0]) * 100)
+
+    #
+    # General statistics
+    #
+
+    stats["homeserver"] = hs.config.server_name
+    stats["server_context"] = hs.config.server_context
+    stats["timestamp"] = now
+    stats["uptime_seconds"] = uptime
+    version = sys.version_info
+    stats["python_version"] = "{}.{}.{}".format(
+        version.major, version.minor, version.micro
+    )
+    stats["total_users"] = await hs.get_datastore().count_all_users()
+
+    total_nonbridged_users = await hs.get_datastore().count_nonbridged_users()
+    stats["total_nonbridged_users"] = total_nonbridged_users
+
+    daily_user_type_results = await hs.get_datastore().count_daily_user_type()
+    for name, count in daily_user_type_results.items():
+        stats["daily_user_type_" + name] = count
+
+    room_count = await hs.get_datastore().get_room_count()
+    stats["total_room_count"] = room_count
+
+    stats["daily_active_users"] = await hs.get_datastore().count_daily_users()
+    stats["monthly_active_users"] = await hs.get_datastore().count_monthly_users()
+    stats["daily_active_rooms"] = await hs.get_datastore().count_daily_active_rooms()
+    stats["daily_messages"] = await hs.get_datastore().count_daily_messages()
+
+    r30_results = await hs.get_datastore().count_r30_users()
+    for name, count in r30_results.items():
+        stats["r30_users_" + name] = count
+
+    daily_sent_messages = await hs.get_datastore().count_daily_sent_messages()
+    stats["daily_sent_messages"] = daily_sent_messages
+    stats["cache_factor"] = hs.config.caches.global_factor
+    stats["event_cache_size"] = hs.config.caches.event_cache_size
+
+    #
+    # Database version
+    #
+
+    # This only reports info about the *main* database.
+    stats["database_engine"] = hs.get_datastore().db_pool.engine.module.__name__
+    stats["database_server_version"] = hs.get_datastore().db_pool.engine.server_version
+
+    logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
+    try:
+        await hs.get_proxied_http_client().put_json(
+            hs.config.report_stats_endpoint, stats
+        )
+    except Exception as e:
+        logger.warning("Error reporting stats: %s", e)
+
+
+def start_phone_stats_home(hs):
+    """
+    Start the background tasks which report phone home stats.
+    """
+    clock = hs.get_clock()
+
+    stats = {}
+
+    def performance_stats_init():
+        _stats_process.clear()
+        _stats_process.append(
+            (int(hs.get_clock().time()), resource.getrusage(resource.RUSAGE_SELF))
+        )
+
+    def start_phone_stats_home():
+        return run_as_background_process(
+            "phone_stats_home", phone_stats_home, hs, stats
+        )
+
+    def generate_user_daily_visit_stats():
+        return run_as_background_process(
+            "generate_user_daily_visits", hs.get_datastore().generate_user_daily_visits
+        )
+
+    # Rather than update on per session basis, batch up the requests.
+    # If you increase the loop period, the accuracy of user_daily_visits
+    # table will decrease
+    clock.looping_call(generate_user_daily_visit_stats, 5 * 60 * 1000)
+
+    # monthly active user limiting functionality
+    def reap_monthly_active_users():
+        return run_as_background_process(
+            "reap_monthly_active_users", hs.get_datastore().reap_monthly_active_users
+        )
+
+    clock.looping_call(reap_monthly_active_users, 1000 * 60 * 60)
+    reap_monthly_active_users()
+
+    async def generate_monthly_active_users():
+        current_mau_count = 0
+        current_mau_count_by_service = {}
+        reserved_users = ()
+        store = hs.get_datastore()
+        if hs.config.limit_usage_by_mau or hs.config.mau_stats_only:
+            current_mau_count = await store.get_monthly_active_count()
+            current_mau_count_by_service = (
+                await store.get_monthly_active_count_by_service()
+            )
+            reserved_users = await store.get_registered_reserved_users()
+        current_mau_gauge.set(float(current_mau_count))
+
+        for app_service, count in current_mau_count_by_service.items():
+            current_mau_by_service_gauge.labels(app_service).set(float(count))
+
+        registered_reserved_users_mau_gauge.set(float(len(reserved_users)))
+        max_mau_gauge.set(float(hs.config.max_mau_value))
+
+    def start_generate_monthly_active_users():
+        return run_as_background_process(
+            "generate_monthly_active_users", generate_monthly_active_users
+        )
+
+    if hs.config.limit_usage_by_mau or hs.config.mau_stats_only:
+        start_generate_monthly_active_users()
+        clock.looping_call(start_generate_monthly_active_users, 5 * 60 * 1000)
+    # End of monthly active user settings
+
+    if hs.config.report_stats:
+        logger.info("Scheduling stats reporting for 3 hour intervals")
+        clock.looping_call(start_phone_stats_home, 3 * 60 * 60 * 1000)
+
+    # We need to defer this init for the cases that we daemonize
|
||||||
|
# otherwise the process ID we get is that of the non-daemon process
|
||||||
|
clock.call_later(0, performance_stats_init)
|
||||||
|
|
||||||
|
# We wait 5 minutes to send the first set of stats as the server can
|
||||||
|
# be quite busy the first few minutes
|
||||||
|
clock.call_later(5 * 60, start_phone_stats_home)
|
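The `performance_stats_init` helper above snapshots the wall-clock time together with `resource.getrusage(resource.RUSAGE_SELF)`, so a later stats report can compute how much CPU time the process has consumed since the snapshot. A minimal stdlib sketch of that delta computation (Unix-only, and the names here are illustrative rather than Synapse's own):

```python
import resource
import time

# Single-element list holding (wall-clock seconds, rusage snapshot),
# mirroring the _stats_process structure used above.
_stats_process = []


def performance_stats_init():
    _stats_process.clear()
    _stats_process.append(
        (int(time.time()), resource.getrusage(resource.RUSAGE_SELF))
    )


def cpu_seconds_since_snapshot():
    """CPU time (user + system) consumed since the last snapshot."""
    _snapshot_time, snapshot_rusage = _stats_process[0]
    now = resource.getrusage(resource.RUSAGE_SELF)
    return (now.ru_utime + now.ru_stime) - (
        snapshot_rusage.ru_utime + snapshot_rusage.ru_stime
    )


performance_stats_init()
sum(i * i for i in range(100_000))  # burn a little CPU
print(cpu_seconds_since_snapshot() >= 0.0)
```

Deferring `performance_stats_init` via `clock.call_later(0, ...)`, as the real code does, matters because daemonizing forks the process and an early rusage snapshot would describe the wrong PID.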
@@ -242,11 +242,10 @@ class Config:
         env = jinja2.Environment(loader=loader, autoescape=autoescape)
 
         # Update the environment with our custom filters
-        env.filters.update({"format_ts": _format_ts_filter})
-        if self.public_baseurl:
-            env.filters.update(
-                {"mxc_to_http": _create_mxc_to_http_filter(self.public_baseurl)}
-            )
+        env.filters.update(
+            {
+                "format_ts": _format_ts_filter,
+                "mxc_to_http": _create_mxc_to_http_filter(self.public_baseurl),
+            }
+        )
 
         for filename in filenames:
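The consolidated `env.filters.update({...})` call registers both custom filters in a single step. A small self-contained example of registering and using a custom Jinja2 filter this way; the `_format_ts_filter` body is an assumption (a millisecond timestamp formatter, rendered in UTC here for determinism), not necessarily Synapse's exact implementation:

```python
import time

import jinja2


def _format_ts_filter(value, format):
    # Assumed behaviour: value is a timestamp in milliseconds.
    return time.strftime(format, time.gmtime(value / 1000))


env = jinja2.Environment()
env.filters.update({"format_ts": _format_ts_filter})

rendered = env.from_string("{{ ts | format_ts('%Y-%m-%d') }}").render(ts=1500000000000)
print(rendered)  # 1500000000000 ms is 2017-07-14 in UTC
```

Grouping all filters into one `update` also makes it harder for a later filter to be silently skipped behind a conditional.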
@@ -12,7 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from typing import Any, List
+from typing import Any, Iterable
 
 import jsonschema
 
@@ -20,7 +20,9 @@ from synapse.config._base import ConfigError
 from synapse.types import JsonDict
 
 
-def validate_config(json_schema: JsonDict, config: Any, config_path: List[str]) -> None:
+def validate_config(
+    json_schema: JsonDict, config: Any, config_path: Iterable[str]
+) -> None:
     """Validates a config setting against a JsonSchema definition
 
     This can be used to validate a section of the config file against a schema
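Widening `config_path` from `List[str]` to `Iterable[str]` is safe because the path is only consumed once, to build an error message, so callers can pass a tuple (as `FederationConfig` does below). A simplified stand-in for the pattern (not the real `validate_config`, which delegates the actual check to jsonschema):

```python
from typing import Any, Iterable


class ConfigError(Exception):
    pass


def validate_config_sketch(config: Any, config_path: Iterable[str]) -> None:
    # Stand-in for the jsonschema check: require a list of strings.
    if not (isinstance(config, list) and all(isinstance(x, str) for x in config)):
        # Any iterable of strings works here: it is joined exactly once.
        path = ".".join(config_path)
        raise ConfigError("Unable to parse configuration: error at %s" % (path,))


validate_config_sketch(["matrix.org"], ("federation_metrics_domains",))  # passes
try:
    validate_config_sketch("not-a-list", ("federation_metrics_domains",))
except ConfigError as e:
    print(e)
```

Accepting the broadest type that the function actually needs is the usual guidance for parameter annotations.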
@@ -28,6 +28,9 @@ class CaptchaConfig(Config):
             "recaptcha_siteverify_api",
             "https://www.recaptcha.net/recaptcha/api/siteverify",
         )
+        self.recaptcha_template = self.read_templates(
+            ["recaptcha.html"], autoescape=True
+        )[0]
 
     def generate_config_section(self, **kwargs):
         return """\
@@ -89,6 +89,8 @@ class ConsentConfig(Config):
 
     def read_config(self, config, **kwargs):
         consent_config = config.get("user_consent")
+        self.terms_template = self.read_templates(["terms.html"], autoescape=True)[0]
+
         if consent_config is None:
             return
         self.user_consent_version = str(consent_config["version"])
@@ -17,7 +17,8 @@ from typing import Optional
 
 from netaddr import IPSet
 
-from ._base import Config, ConfigError
+from synapse.config._base import Config, ConfigError
+from synapse.config._util import validate_config
 
 
 class FederationConfig(Config):
@@ -52,8 +53,18 @@ class FederationConfig(Config):
                 "Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
             )
 
+        federation_metrics_domains = config.get("federation_metrics_domains") or []
+        validate_config(
+            _METRICS_FOR_DOMAINS_SCHEMA,
+            federation_metrics_domains,
+            ("federation_metrics_domains",),
+        )
+        self.federation_metrics_domains = set(federation_metrics_domains)
+
     def generate_config_section(self, config_dir_path, server_name, **kwargs):
         return """\
+        ## Federation ##
+
         # Restrict federation to the following whitelist of domains.
         # N.B. we recommend also firewalling your federation listener to limit
         # inbound federation traffic as early as possible, rather than relying
@@ -85,4 +96,18 @@ class FederationConfig(Config):
         - '::1/128'
         - 'fe80::/64'
         - 'fc00::/7'
+
+        # Report prometheus metrics on the age of PDUs being sent to and received from
+        # the following domains. This can be used to give an idea of "delay" on inbound
+        # and outbound federation, though be aware that any delay can be due to problems
+        # at either end or with the intermediate network.
+        #
+        # By default, no domains are monitored in this way.
+        #
+        #federation_metrics_domains:
+        #  - matrix.org
+        #  - example.com
         """
+
+
+_METRICS_FOR_DOMAINS_SCHEMA = {"type": "array", "items": {"type": "string"}}
@@ -92,5 +92,4 @@ class HomeServerConfig(RootConfig):
     TracerConfig,
     WorkerConfig,
     RedisConfig,
-    FederationConfig,
 ]
@@ -56,6 +56,8 @@ class OIDCConfig(Config):
         self.oidc_userinfo_endpoint = oidc_config.get("userinfo_endpoint")
         self.oidc_jwks_uri = oidc_config.get("jwks_uri")
         self.oidc_skip_verification = oidc_config.get("skip_verification", False)
+        self.oidc_user_profile_method = oidc_config.get("user_profile_method", "auto")
+        self.oidc_allow_existing_users = oidc_config.get("allow_existing_users", False)
 
         ump_config = oidc_config.get("user_mapping_provider", {})
         ump_config.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER)
@@ -158,6 +160,19 @@ class OIDCConfig(Config):
         #
         #skip_verification: true
 
+        # Whether to fetch the user profile from the userinfo endpoint. Valid
+        # values are: "auto" or "userinfo_endpoint".
+        #
+        # Defaults to "auto", which fetches the userinfo endpoint if "openid" is included
+        # in `scopes`. Uncomment the following to always fetch the userinfo endpoint.
+        #
+        #user_profile_method: "userinfo_endpoint"
+
+        # Uncomment to allow a user logging in via OIDC to match a pre-existing account instead
+        # of failing. This could be used if switching from password logins to OIDC. Defaults to false.
+        #
+        #allow_existing_users: true
+
         # An external module can be provided here as a custom solution to mapping
         # attributes returned from a OIDC provider onto a matrix user.
         #
@@ -198,6 +213,14 @@ class OIDCConfig(Config):
         # If unset, no displayname will be set.
         #
         #display_name_template: "{{{{ user.given_name }}}} {{{{ user.last_name }}}}"
+
+        # Jinja2 templates for extra attributes to send back to the client during
+        # login.
+        #
+        # Note that these are non-standard and clients will ignore them without modifications.
+        #
+        #extra_attributes:
+        #  birthdate: "{{{{ user.birthdate }}}}"
         """.format(
             mapping_provider=DEFAULT_USER_MAPPING_PROVIDER
         )
@@ -187,6 +187,11 @@ class RegistrationConfig(Config):
         session_lifetime = self.parse_duration(session_lifetime)
         self.session_lifetime = session_lifetime
 
+        # The success template used during fallback auth.
+        self.fallback_success_template = self.read_templates(
+            ["auth_success.html"], autoescape=True
+        )[0]
+
     def generate_config_section(self, generate_secrets=False, **kwargs):
         if generate_secrets:
             registration_shared_secret = 'registration_shared_secret: "%s"' % (
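The hunk above ends as `generate_config_section` interpolates a freshly generated secret into the YAML snippet when `generate_secrets` is set. A hedged sketch of that pattern using the stdlib `secrets` module; Synapse's actual random-string helper and the commented-out fallback line may differ:

```python
import secrets


def generate_registration_secret_line(generate_secrets: bool = False) -> str:
    """Return the registration_shared_secret line for a generated config."""
    if generate_secrets:
        # Assumed stand-in for Synapse's random-string helper.
        registration_shared_secret = 'registration_shared_secret: "%s"' % (
            secrets.token_urlsafe(32),
        )
    else:
        # Leave the option commented out for the admin to fill in.
        registration_shared_secret = "#registration_shared_secret: <PRIVATE STRING>"
    return registration_shared_secret


print(generate_registration_secret_line(generate_secrets=True))
```

`secrets.token_urlsafe` draws from a URL-safe alphabet, so the value never needs escaping inside the double-quoted YAML string.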