Merge branch 'develop' of github.com:matrix-org/synapse into hawkowl/cache-config-without-synctl

* 'develop' of github.com:matrix-org/synapse: (76 commits)
  1.12.4
  Revert "Revert "Merge pull request #7315 from matrix-org/babolivier/request_token""
  Revert "Merge pull request #7315 from matrix-org/babolivier/request_token"
  Stop the master relaying USER_SYNC for other workers (#7318)
  Config option to inhibit 3PID errors on /requestToken
  Fix replication metrics when using redis (#7325)
  formatting for the changelog
  Another go at fixing one-word commands (#7326)
  1.12.4rc1
  1.12.4rc1
  fix changelog name
  Extend StreamChangeCache to support multiple entities per stream ID (#7303)
  Extend room admin api with additional attributes (#7225)
  Add ability to run replication protocol over redis. (#7040)
  Do not treat display names as globs for push rules. (#7271)
  Reduce logging verbosity of URL cache cleanup. (#7295)
  Query missing cross-signing keys on local sig upload (#7289)
  import urllib.parse when using urllib.parse.quote (#7319)
  Reduce federation logging on success (#7321)
  Fix changelog file
  ...
pull/6391/head
Andrew Morgan 2020-04-24 12:58:06 +01:00
commit f300c08d98
179 changed files with 5090 additions and 2537 deletions


@ -5,8 +5,6 @@ Message history can be paginated
Can re-join room if re-invited
/upgrade creates a new room
The only membership state included in an initial sync is for all the senders in the timeline
Local device key changes get to remote servers


@ -1,10 +1,39 @@
Next version
============
-* A new template (`sso_auth_confirm.html`) was added to Synapse. If your Synapse
-  is configured to use SSO and a custom `sso_redirect_confirm_template_dir`
-  configuration then this template will need to be duplicated into that
-  directory.
+* New templates (`sso_auth_confirm.html`, `sso_auth_success.html`, and
+  `sso_account_deactivated.html`) were added to Synapse. If your Synapse is
+  configured to use SSO and a custom `sso_redirect_confirm_template_dir`
+  configuration then these templates will need to be duplicated into that
+  directory.
* Plugins using the `complete_sso_login` method of `synapse.module_api.ModuleApi`
should update to using the async/await version `complete_sso_login_async` which
includes additional checks. The non-async version is considered deprecated.
Synapse 1.12.4 (2020-04-23)
===========================
No significant changes.
Synapse 1.12.4rc1 (2020-04-22)
==============================
Features
--------
- Always send users their own device updates. ([\#7160](https://github.com/matrix-org/synapse/issues/7160))
- Add support for handling GET requests for `account_data` on a worker. ([\#7311](https://github.com/matrix-org/synapse/issues/7311))
Bugfixes
--------
- Fix a bug that prevented cross-signing with users on worker-mode synapses. ([\#7255](https://github.com/matrix-org/synapse/issues/7255))
- Do not treat display names as globs in push rules. ([\#7271](https://github.com/matrix-org/synapse/issues/7271))
- Fix a bug with cross-signing devices belonging to remote users who did not share a room with any user on the local homeserver. ([\#7289](https://github.com/matrix-org/synapse/issues/7289))
Synapse 1.12.3 (2020-04-03)
===========================
@ -15,14 +44,10 @@ correctly fix the issue with building the Debian packages. ([\#7212](https://git
Synapse 1.12.2 (2020-04-02)
===========================
-This release works around [an
-issue](https://github.com/matrix-org/synapse/issues/7208) with building the
-debian packages.
+This release works around [an issue](https://github.com/matrix-org/synapse/issues/7208) with building the debian packages.
No other significant changes since 1.12.1.
Synapse 1.12.1 (2020-04-02)
===========================
@ -42,12 +67,19 @@ Bugfixes
Synapse 1.12.0 (2020-03-23)
===========================
No significant changes since 1.12.0rc1.
Debian packages and Docker images are rebuilt using the latest versions of
dependency libraries, including Twisted 20.3.0. **Please see security advisory
below**.
Potential slow database update during upgrade
---------------------------------------------
Synapse 1.12.0 includes a database update which is run as part of the upgrade,
and which may take some time (several hours in the case of a large
server). Synapse will not respond to HTTP requests while this update is taking
place. For information on seeing if you are affected, and a workaround if you
are, see the [upgrade notes](UPGRADE.rst#upgrading-to-v1120).
Security advisory
-----------------


@ -75,6 +75,71 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.12.0
====================
This version includes a database update which is run as part of the upgrade,
and which may take some time (several hours in the case of a large
server). Synapse will not respond to HTTP requests while this update is taking
place.
This is only likely to be a problem in the case of a server which is
participating in many rooms.
0. As with all upgrades, it is recommended that you have a recent backup of
your database which can be used for recovery in the event of any problems.
1. As an initial check to see if you will be affected, you can try running the
following query from the `psql` or `sqlite3` console. It is safe to run it
while Synapse is still running.
   .. code:: sql

      SELECT MAX(q.v) FROM (
        SELECT (
          SELECT ej.json AS v
          FROM state_events se INNER JOIN event_json ej USING (event_id)
          WHERE se.room_id=rooms.room_id AND se.type='m.room.create' AND se.state_key=''
          LIMIT 1
        ) FROM rooms WHERE rooms.room_version IS NULL
      ) q;
This query will take about the same amount of time as the upgrade process: i.e.,
if it takes 5 minutes, then it is likely that Synapse will be unresponsive for
5 minutes during the upgrade.
If you consider an outage of this duration to be acceptable, no further
action is necessary and you can simply start Synapse 1.12.0.
If you would prefer to reduce the downtime, continue with the steps below.
2. The easiest workaround for this issue is to manually
   create a new index before upgrading. On PostgreSQL, this can be done as follows:
   .. code:: sql

      CREATE INDEX CONCURRENTLY tmp_upgrade_1_12_0_index
      ON state_events(room_id) WHERE type = 'm.room.create';
The above query may take some time, but is also safe to run while Synapse is
running.
We assume that no SQLite users have databases large enough to be
affected. If you *are* affected, you can run a similar query, omitting the
``CONCURRENTLY`` keyword. Note however that this operation may in itself cause
Synapse to stop running for some time. Synapse admins are reminded that
`SQLite is not recommended for use outside a test
environment <https://github.com/matrix-org/synapse/blob/master/README.rst#using-postgresql>`_.
3. Once the index has been created, the ``SELECT`` query in step 1 above should
complete quickly. It is therefore safe to upgrade to Synapse 1.12.0.
4. Once Synapse 1.12.0 has successfully started and is responding to HTTP
requests, the temporary index can be removed:
   .. code:: sql

      DROP INDEX tmp_upgrade_1_12_0_index;
Upgrading to v1.10.0
====================

changelog.d/6899.bugfix (new file)

@ -0,0 +1 @@
Improve error responses when accessing remote public room lists.

changelog.d/7040.feature (new file)

@ -0,0 +1 @@
Add support for running replication over Redis when using workers.


@ -1 +0,0 @@
Always send users their own device updates.

changelog.d/7185.misc (new file)

@ -0,0 +1 @@
Move client command handling out of TCP protocol.

changelog.d/7186.feature (new file)

@ -0,0 +1 @@
Support SSO in the user interactive authentication workflow.

changelog.d/7187.misc (new file)

@ -0,0 +1 @@
Move server command handling out of TCP protocol.

changelog.d/7192.misc (new file)

@ -0,0 +1 @@
Remove sent outbound device list pokes from the database.

changelog.d/7193.misc (new file)

@ -0,0 +1 @@
Add a background database update job to clear out duplicate `device_lists_outbound_pokes`.

changelog.d/7199.bugfix (new file)

@ -0,0 +1 @@
Fix a bug that could cause a user to be invited to a server notices (aka System Alerts) room without any notice being sent.

changelog.d/7207.misc (new file)

@ -0,0 +1 @@
Remove some extraneous debugging log lines.

changelog.d/7213.misc (new file)

@ -0,0 +1 @@
Add explicit Python build tooling as dependencies for the snapcraft build.

changelog.d/7219.misc (new file)

@ -0,0 +1 @@
Add typing information to federation server code.

changelog.d/7225.misc (new file)

@ -0,0 +1 @@
Extend room admin api (`GET /_synapse/admin/v1/rooms`) with additional attributes.

changelog.d/7226.misc (new file)

@ -0,0 +1 @@
Move catchup of replication streams logic to worker.

changelog.d/7228.misc (new file)

@ -0,0 +1 @@
Unblacklist '/upgrade creates a new room' sytest for workers.

changelog.d/7230.feature (new file)

@ -0,0 +1 @@
Require admin privileges to enable room encryption by default. This does not affect existing rooms.

changelog.d/7233.misc (new file)

@ -0,0 +1 @@
Remove redundant checks on `daemonize` from synctl.

changelog.d/7234.doc (new file)

@ -0,0 +1 @@
Update the contributed documentation on managing synapse workers with systemd, and bring it into the core distribution.

changelog.d/7235.feature (new file)

@ -0,0 +1 @@
Improve the support for SSO authentication on the login fallback page.

changelog.d/7236.misc (new file)

@ -0,0 +1 @@
Upgrade jQuery to v3.4.1 on fallback login/registration pages.

changelog.d/7237.misc (new file)

@ -0,0 +1 @@
Change log line that told user to implement onLogin/onRegister fallback js functions to a warning, instead of an info, so it's more visible.

changelog.d/7238.doc (new file)

@ -0,0 +1 @@
Add documentation to the `password_providers` config option. Add known password provider implementations to docs.

changelog.d/7239.misc (new file)

@ -0,0 +1 @@
Move catchup of replication streams logic to worker.

changelog.d/7240.bugfix (new file)

@ -0,0 +1 @@
Do not allow a deactivated user to login via SSO.

changelog.d/7241.misc (new file)

@ -0,0 +1 @@
Convert some of synapse.rest.media to async/await.

changelog.d/7243.misc (new file)

@ -0,0 +1 @@
Correct the parameters of a test fixture. Contributed by Isaiah Singletary.

changelog.d/7248.doc (new file)

@ -0,0 +1 @@
Add documentation to the `password_providers` config option. Add known password provider implementations to docs.

changelog.d/7249.bugfix (new file)

@ -0,0 +1 @@
Fix --help command-line argument.

changelog.d/7251.doc (new file)

@ -0,0 +1 @@
Modify suggested nginx reverse proxy configuration to match Synapse's default file upload size. Contributed by @ProCycleDev.

changelog.d/7259.bugfix (new file)

@ -0,0 +1 @@
Do not allow a deactivated user to login via SSO.

changelog.d/7260.bugfix (new file)

@ -0,0 +1 @@
Fix room publish permissions not being checked on room creation.

changelog.d/7261.misc (new file)

@ -0,0 +1 @@
Convert auth handler to async/await.

changelog.d/7265.feature (new file)

@ -0,0 +1 @@
Add a config option for specifying the value of the Accept-Language HTTP header when generating URL previews.

changelog.d/7268.bugfix (new file)

@ -0,0 +1 @@
Reject unknown session IDs during user interactive authentication instead of silently creating a new session.

changelog.d/7272.doc (new file)

@ -0,0 +1 @@
Documentation of media_storage_providers options updated to avoid misunderstandings. Contributed by Tristan Lins.

changelog.d/7274.bugfix (new file)

@ -0,0 +1 @@
Fix a sql query introduced in Synapse 1.12.0 which could cause large amounts of logging to the postgres slow-query log.

changelog.d/7279.feature (new file)

@ -0,0 +1 @@
Support SSO in the user interactive authentication workflow.

changelog.d/7286.misc (new file)

@ -0,0 +1 @@
Move catchup of replication streams logic to worker.

changelog.d/7290.misc (new file)

@ -0,0 +1 @@
Move catchup of replication streams logic to worker.

changelog.d/7291.misc (new file)

@ -0,0 +1 @@
Improve typing annotations in `synapse.replication.tcp.streams.Stream`.

changelog.d/7295.misc (new file)

@ -0,0 +1 @@
Reduce log verbosity of url cache cleanup tasks.

changelog.d/7300.misc (new file)

@ -0,0 +1 @@
Fix sample SAML Service Provider configuration. Contributed by @frcl.

changelog.d/7303.misc (new file)

@ -0,0 +1 @@
Fix StreamChangeCache to work with multiple entities changing on the same stream id.

changelog.d/7315.feature (new file)

@ -0,0 +1 @@
Allow `/requestToken` endpoints to hide the existence (or lack thereof) of 3PID associations on the homeserver.

changelog.d/7318.misc (new file)

@ -0,0 +1 @@
Move catchup of replication streams logic to worker.

changelog.d/7319.misc (new file)

@ -0,0 +1 @@
Fix an incorrect import in IdentityHandler.

changelog.d/7321.misc (new file)

@ -0,0 +1 @@
Reduce logging verbosity for successful federation requests.

changelog.d/7325.feature (new file)

@ -0,0 +1 @@
Add support for running replication over Redis when using workers.

changelog.d/7326.misc (new file)

@ -0,0 +1 @@
Move catchup of replication streams logic to worker.


@ -1,150 +1,2 @@
-# Setup Synapse with Workers and Systemd
+The documentation for using systemd to manage synapse workers is now part of
+the main synapse distribution. See [docs/systemd-with-workers](../../docs/systemd-with-workers).
This is a setup for managing synapse with systemd including support for
managing workers. It provides a `matrix-synapse`, as well as a
`matrix-synapse-worker@` service for any workers you require. Additionally to
group the required services it sets up a `matrix.target`. You can use this to
automatically start any bot- or bridge-services. More on this in
[Bots and Bridges](#bots-and-bridges).
See the folder [system](system) for any service and target files.
The folder [workers](workers) contains an example configuration for the
`federation_reader` worker. Pay special attention to the name of the
configuration file. In order to work with the `matrix-synapse-worker@.service`
service, it needs to have the exact same name as the worker app.
This setup expects neither the homeserver nor any workers to fork. Forking is
handled by systemd.
## Setup
1. Adjust your matrix configs. Make sure that the worker config files have the
exact same name as the worker app. Compare `matrix-synapse-worker@.service` for
why. You can find an example worker config in the [workers](workers) folder. See
below for relevant settings in the `homeserver.yaml`.
2. Copy the `*.service` and `*.target` files in [system](system) to
`/etc/systemd/system`.
3. `systemctl enable matrix-synapse.service` this adds the homeserver
app to the `matrix.target`
4. *Optional.* `systemctl enable
matrix-synapse-worker@federation_reader.service` this adds the federation_reader
app to the `matrix-synapse.service`
5. *Optional.* Repeat step 4 for any additional workers you require.
6. *Optional.* Add any bots or bridges by enabling them.
7. Start all matrix related services via `systemctl start matrix.target`
8. *Optional.* Enable autostart of all matrix related services on system boot
via `systemctl enable matrix.target`
## Usage
After you have set this up you can use the following commands to manage your synapse
installation:
```
# Start matrix-synapse, all workers and any enabled bots or bridges.
systemctl start matrix.target
# Restart matrix-synapse and all workers (not necessarily restarting bots
# or bridges, see "Bots and Bridges")
systemctl restart matrix-synapse.service
# Stop matrix-synapse and all workers (not necessarily restarting bots
# or bridges, see "Bots and Bridges")
systemctl stop matrix-synapse.service
# Restart a specific worker (e.g. federation_reader); the homeserver is
# unaffected by this.
systemctl restart matrix-synapse-worker@federation_reader.service
# Add a new worker (assuming all configs are setup already)
systemctl enable matrix-synapse-worker@federation_writer.service
systemctl restart matrix-synapse.service
```
## The Configs
Make sure the `worker_app` is set in the `homeserver.yaml` and it does not fork.
```
worker_app: synapse.app.homeserver
daemonize: false
```
None of the workers should fork, as forking is handled by systemd. Hence make
sure this is present in all worker config files.
```
worker_daemonize: false
```
The config files of all workers are expected to be located in
`/etc/matrix-synapse/workers`. If you want to use a different location you have
to edit the provided `*.service` files accordingly.
## Bots and Bridges
Most bots and bridges do not care if the homeserver goes down or is restarted.
Depending on the implementation this may crash them though. So look up the docs
or ask the community of the specific bridge or bot you want to run to make sure
you choose the correct setup.
Whichever configuration you choose, after the setup the following will enable
automatically starting (and potentially restarting) your bot/bridge with the
`matrix.target`.
```
systemctl enable <yourBotOrBridgeName>.service
```
**Note** that from an inactive synapse the bots/bridges will only be started with
synapse if you start the `matrix.target`, not if you start the
`matrix-synapse.service`. This is on purpose. Think of `matrix-synapse.service`
as *just* synapse, but `matrix.target` being anything matrix related, including
synapse and any and all enabled bots and bridges.
### Start with synapse but ignore synapse going down
If the bridge can handle shutdowns of the homeserver you'll want to install the
service in the `matrix.target` and optionally add a
`After=matrix-synapse.service` dependency to have the bot/bridge start after
synapse on starting everything.
In this case the service file should look like this.
```
[Unit]
# ...
# Optional, this will only ensure that if you start everything, synapse will
# be started before the bot/bridge will be started.
After=matrix-synapse.service
[Service]
# ...
[Install]
WantedBy=matrix.target
```
### Stop/restart when synapse stops/restarts
If the bridge can't handle shutdowns of the homeserver you'll still want to
install the service in the `matrix.target` but also have to specify the
`After=matrix-synapse.service` *and* `BindsTo=matrix-synapse.service`
dependencies to have the bot/bridge stop/restart with synapse.
In this case the service file should look like this.
```
[Unit]
# ...
# Mandatory
After=matrix-synapse.service
BindsTo=matrix-synapse.service
[Service]
# ...
[Install]
WantedBy=matrix.target
```


@ -1,19 +0,0 @@
[Unit]
Description=Synapse Matrix Worker
After=matrix-synapse.service
BindsTo=matrix-synapse.service
[Service]
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.%i --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --config-path=/etc/matrix-synapse/workers/%i.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
SyslogIdentifier=matrix-synapse-%i
[Install]
WantedBy=matrix-synapse.service


@ -1,7 +0,0 @@
[Unit]
Description=Contains matrix services like synapse, bridges and bots
After=network.target
AllowIsolate=no
[Install]
WantedBy=multi-user.target

debian/changelog (vendored)

@ -1,3 +1,17 @@
matrix-synapse-py3 (1.12.4) stable; urgency=medium

  * New synapse release 1.12.4.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 23 Apr 2020 10:58:14 -0400

matrix-synapse-py3 (1.12.3ubuntu1) UNRELEASED; urgency=medium

  * Add information about .well-known files to Debian installation scripts.

 -- Patrick Cloke <patrickc@matrix.org>  Mon, 06 Apr 2020 10:10:38 -0400
matrix-synapse-py3 (1.12.3) stable; urgency=medium

  [ Richard van der Hoff ]


@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
-# This file is distributed under the same license as the matrix-synapse package.
+# This file is distributed under the same license as the matrix-synapse-py3 package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
-"Project-Id-Version: matrix-synapse\n"
+"Project-Id-Version: matrix-synapse-py3\n"
-"Report-Msgid-Bugs-To: matrix-synapse@packages.debian.org\n"
+"Report-Msgid-Bugs-To: matrix-synapse-py3@packages.debian.org\n"
-"POT-Creation-Date: 2017-02-21 07:51+0000\n"
+"POT-Creation-Date: 2020-04-06 16:39-0400\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@ -28,7 +28,10 @@ msgstr ""
#: ../templates:1001
msgid ""
"The name that this homeserver will appear as, to clients and other servers "
-"via federation. This name should match the SRV record published in DNS."
+"via federation. This is normally the public hostname of the server running "
+"synapse, but can be different if you set up delegation. Please refer to the "
+"delegation documentation in this case: https://github.com/matrix-org/synapse/"
+"blob/master/docs/delegate.md."
msgstr ""
#. Type: boolean

debian/templates (vendored)

@ -2,8 +2,10 @@ Template: matrix-synapse/server-name
Type: string
_Description: Name of the server:
 The name that this homeserver will appear as, to clients and other
- servers via federation. This name should match the SRV record
- published in DNS.
+ servers via federation. This is normally the public hostname of the
+ server running synapse, but can be different if you set up delegation.
+ Please refer to the delegation documentation in this case:
+ https://github.com/matrix-org/synapse/blob/master/docs/delegate.md.
Template: matrix-synapse/report-stats
Type: boolean


@ -11,8 +11,21 @@ The following query parameters are available:
* `from` - Offset in the returned list. Defaults to `0`.
* `limit` - Maximum amount of rooms to return. Defaults to `100`.
* `order_by` - The method in which to sort the returned list of rooms. Valid values are:
-  - `alphabetical` - Rooms are ordered alphabetically by room name. This is the default.
-  - `size` - Rooms are ordered by the number of members. Largest to smallest.
+  - `alphabetical` - Same as `name`. This is deprecated.
+  - `size` - Same as `joined_members`. This is deprecated.
- `name` - Rooms are ordered alphabetically by room name. This is the default.
- `canonical_alias` - Rooms are ordered alphabetically by main alias address of the room.
- `joined_members` - Rooms are ordered by the number of members. Largest to smallest.
- `joined_local_members` - Rooms are ordered by the number of local members. Largest to smallest.
- `version` - Rooms are ordered by room version. Largest to smallest.
- `creator` - Rooms are ordered alphabetically by creator of the room.
- `encryption` - Rooms are ordered alphabetically by the end-to-end encryption algorithm.
- `federatable` - Rooms are ordered by whether the room is federatable.
- `public` - Rooms are ordered by visibility in room list.
- `join_rules` - Rooms are ordered alphabetically by join rules of the room.
- `guest_access` - Rooms are ordered alphabetically by guest access option of the room.
- `history_visibility` - Rooms are ordered alphabetically by visibility of history of the room.
- `state_events` - Rooms are ordered by number of state events. Largest to smallest.
* `dir` - Direction of room order. Either `f` for forwards or `b` for backwards. Setting
  this value to `b` will reverse the above sort order. Defaults to `f`.
* `search_term` - Filter rooms by their room name. Search term can be contained in any
@ -26,6 +39,16 @@ The following fields are possible in the JSON response body:
- `name` - The name of the room.
- `canonical_alias` - The canonical (main) alias address of the room.
- `joined_members` - How many users are currently in the room.
- `joined_local_members` - How many local users are currently in the room.
- `version` - The version of the room as a string.
- `creator` - The `user_id` of the room creator.
- `encryption` - Algorithm of end-to-end encryption of messages. Is `null` if encryption is not active.
- `federatable` - Whether users on other servers can join this room.
- `public` - Whether the room is visible in room directory.
- `join_rules` - The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].
- `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
- `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
- `state_events` - Total number of state_events of a room. Complexity of the room.
* `offset` - The current pagination offset in rooms. This parameter should be
  used instead of `next_token` for room offset as `next_token` is
  not intended to be parsed.
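For orientation, here is a sketch of querying this API with the new parameters from Python; the homeserver URL and admin access token are placeholders, and `requests` is just one convenient HTTP client:

```python
# A sketch of calling the room admin API with the new order_by/dir
# parameters; URL and token are placeholders.
import requests

resp = requests.get(
    "https://homeserver.example/_synapse/admin/v1/rooms",
    params={"order_by": "joined_local_members", "dir": "b", "limit": 10},
    headers={"Authorization": "Bearer <admin_access_token>"},
)
resp.raise_for_status()
for room in resp.json()["rooms"]:
    print(room["room_id"], room["joined_local_members"])
```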
@ -60,14 +83,34 @@ Response:
"room_id": "!OGEhHVWSdvArJzumhm:matrix.org", "room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
"name": "Matrix HQ", "name": "Matrix HQ",
"canonical_alias": "#matrix:matrix.org", "canonical_alias": "#matrix:matrix.org",
"joined_members": 8326 "joined_members": 8326,
"joined_local_members": 2,
"version": "1",
"creator": "@foo:matrix.org",
"encryption": null,
"federatable": true,
"public": true,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
},
... (8 hidden items) ...
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314,
"joined_local_members": 20,
"version": "4",
"creator": "@foo:matrix.org",
"encryption": "m.megolm.v1.aes-sha2",
"federatable": true,
"public": false,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
}
],
"offset": 0,
@ -92,7 +135,17 @@ Response:
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org", "room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)", "name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org", "canonical_alias": "#twim:matrix.org",
"joined_members": 314 "joined_members": 314,
"joined_local_members": 20,
"version": "4",
"creator": "@foo:matrix.org",
"encryption": "m.megolm.v1.aes-sha2",
"federatable": true,
"public": false,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8
}
],
"offset": 0,
@ -117,14 +170,34 @@ Response:
"room_id": "!OGEhHVWSdvArJzumhm:matrix.org", "room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
"name": "Matrix HQ", "name": "Matrix HQ",
"canonical_alias": "#matrix:matrix.org", "canonical_alias": "#matrix:matrix.org",
"joined_members": 8326 "joined_members": 8326,
"joined_local_members": 2,
"version": "1",
"creator": "@foo:matrix.org",
"encryption": null,
"federatable": true,
"public": true,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
},
... (98 hidden items) ...
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314,
"joined_local_members": 20,
"version": "4",
"creator": "@foo:matrix.org",
"encryption": "m.megolm.v1.aes-sha2",
"federatable": true,
"public": false,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
}
],
"offset": 0,
@ -154,6 +227,16 @@ Response:
"name": "Music Theory", "name": "Music Theory",
"canonical_alias": "#musictheory:matrix.org", "canonical_alias": "#musictheory:matrix.org",
"joined_members": 127 "joined_members": 127
"joined_local_members": 2,
"version": "1",
"creator": "@foo:matrix.org",
"encryption": null,
"federatable": true,
"public": true,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
},
... (48 hidden items) ...
{
@ -161,6 +244,16 @@ Response:
"name": "weechat-matrix", "name": "weechat-matrix",
"canonical_alias": "#weechat-matrix:termina.org.uk", "canonical_alias": "#weechat-matrix:termina.org.uk",
"joined_members": 137 "joined_members": 137
"joined_local_members": 20,
"version": "4",
"creator": "@foo:termina.org.uk",
"encryption": null,
"federatable": true,
"public": true,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
}
],
"offset": 100,


@ -9,7 +9,11 @@ into Synapse, and provides a number of methods by which it can integrate
with the authentication system.
This document serves as a reference for those looking to implement their
own password auth providers. Additionally, here is a list of known
password auth provider module implementations:
* [matrix-synapse-ldap3](https://github.com/matrix-org/matrix-synapse-ldap3/)
* [matrix-synapse-shared-secret-auth](https://github.com/devture/matrix-synapse-shared-secret-auth)
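To make the shape of such a module concrete before the method reference below, here is a minimal illustrative sketch; the class name and `suffix` config key are hypothetical, and a real provider would consult an external backend:

```python
# Minimal sketch of a password auth provider module; illustrative only.
class ExamplePasswordProvider:
    def __init__(self, config, account_handler):
        # account_handler is the module API object synapse passes in
        self.account_handler = account_handler
        self.suffix = config.get("suffix", "!")

    @staticmethod
    def parse_config(config):
        # Validate the provider's config block from homeserver.yaml;
        # whatever is returned here is passed to __init__ as `config`.
        return config or {}

    async def check_password(self, user_id, password):
        # Return True to accept the login. A real implementation would
        # check LDAP, a shared secret, etc.
        return password.endswith(self.suffix)
```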
## Required methods


@ -42,6 +42,9 @@ the reverse proxy and the homeserver.
    location /_matrix {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        # Nginx by default only allows file uploads up to 1M in size
        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
        client_max_body_size 10M;
    }
}


@ -414,6 +414,16 @@ retention:
#    longest_max_lifetime: 1y
#    interval: 1d
# Inhibits the /requestToken endpoints from returning an error that might leak
# information about whether an e-mail address is in use or not on this
# homeserver.
# Note that for some endpoints the error situation is the e-mail already being
# used, and for others the error is the e-mail not being in use.
# If this option is enabled, instead of returning an error, these endpoints will
# act as if no error happened and return a fake session ID ('sid') to clients.
#
#request_token_inhibit_3pid_errors: true
## TLS ##
@ -735,12 +745,11 @@ media_store_path: "DATADIR/media_store"
#
#media_storage_providers:
#  - module: file_system
-#    # Whether to write new local files.
+#    # Whether to store newly uploaded local files
#    store_local: false
-#    # Whether to write new remote media
+#    # Whether to store newly downloaded remote files
#    store_remote: false
-#    # Whether to block upload requests waiting for write to this
-#    # provider to complete
+#    # Whether to wait for successful storage for local uploads
#    store_synchronous: false
#    config:
#       directory: /mnt/some/other/directory
@ -859,6 +868,31 @@ media_store_path: "DATADIR/media_store"
#
#max_spider_size: 10M
# A list of values for the Accept-Language HTTP header used when
# downloading webpages during URL preview generation. This allows
# Synapse to specify the preferred languages that URL previews should
# be in when communicating with remote servers.
#
# Each value is a IETF language tag; a 2-3 letter identifier for a
# language, optionally followed by subtags separated by '-', specifying
# a country or region variant.
#
# Multiple values can be provided, and a weight can be added to each by
# using quality value syntax (;q=). '*' translates to any language.
#
# Defaults to "en".
#
# Example:
#
# url_preview_accept_language:
# - en-UK
# - en-US;q=0.9
# - fr;q=0.8
# - *;q=0.7
#
#url_preview_accept_language:
#   - en
## Captcha ##
# See docs/CAPTCHA_SETUP for full details of configuring this.
@ -1315,32 +1349,32 @@ saml2_config:
# remote:
#   - url: https://our_idp/metadata.xml
#
# # By default, the user has to go to our login page first. If you'd like
# # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# # 'service.sp' section:
# #
# #service:
# #  sp:
# #    allow_unsolicited: true
#
# # The examples below are just used to generate our metadata xml, and you
# # may well not need them, depending on your setup. Alternatively you
# # may need a whole lot more detail - see the pysaml2 docs!
#
# description: ["My awesome SP", "en"]
# name: ["Test SP", "en"]
#
# organization:
#   name: Example com
#   display_name:
#     - ["Example co", "en"]
#   url: "http://example.com"
#
# contact_person:
#   - given_name: Bob
#     sur_name: "the Sysadmin"
#     email_address": ["admin@example.com"]
#     contact_type": technical
# Instead of putting the config inline as above, you can specify a
# separate pysaml2 configuration file:
@ -1657,7 +1691,19 @@ email:
#template_dir: "res/templates"
# Password providers allow homeserver administrators to integrate
# their Synapse installation with existing authentication methods
# ex. LDAP, external tokens, etc.
#
# For more information and known implementations, please see
# https://github.com/matrix-org/synapse/blob/master/docs/password_auth_providers.md
#
# Note: instances wishing to use SAML or CAS authentication should
# instead use the `saml2_config` or `cas_config` options,
# respectively.
#
#password_providers:
#    # Example config for an LDAP auth provider
#    - module: "ldap_auth_provider.LdapAuthProvider"
#      config:
#        enabled: true


@ -0,0 +1,67 @@
# Setting up Synapse with Workers and Systemd
This is a setup for managing synapse with systemd, including support for
managing workers. It provides a `matrix-synapse` service for the master, as
well as a `matrix-synapse-worker@` service template for any workers you
require. Additionally, to group the required services, it sets up a
`matrix-synapse.target`.
See the folder [system](system) for the systemd unit files.
The folder [workers](workers) contains an example configuration for the
`federation_reader` worker.
## Synapse configuration files
See [workers.md](../workers.md) for information on how to set up the
configuration files and reverse-proxy correctly. You can find an example worker
config in the [workers](workers) folder.
Systemd manages daemonization itself, so ensure that none of the configuration
files set either `daemonize` or `worker_daemonize`.
The config files of all workers are expected to be located in
`/etc/matrix-synapse/workers`. If you want to use a different location, edit
the provided `*.service` files accordingly.
There is no need for a separate configuration file for the master process.
## Set up
1. Adjust synapse configuration files as above.
1. Copy the `*.service` and `*.target` files in [system](system) to
`/etc/systemd/system`.
1. Run `systemctl daemon-reload` to tell systemd to load the new unit files.
1. Run `systemctl enable matrix-synapse.service`. This will configure the
synapse master process to be started as part of the `matrix-synapse.target`
target.
1. For each worker process to be enabled, run `systemctl enable
matrix-synapse-worker@<worker_name>.service`. For each `<worker_name>`, there
should be a corresponding configuration file
`/etc/matrix-synapse/workers/<worker_name>.yaml`.
1. Start all the synapse processes with `systemctl start matrix-synapse.target`.
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`.
## Usage
Once the services are correctly set up, you can use the following commands
to manage your synapse installation:
```sh
# Restart Synapse master and all workers
systemctl restart matrix-synapse.target
# Stop Synapse and all workers
systemctl stop matrix-synapse.target
# Restart the master alone
systemctl restart matrix-synapse.service
# Restart a specific worker (eg. federation_reader); the master is
# unaffected by this.
systemctl restart matrix-synapse-worker@federation_reader.service
# Add a new worker (assuming all configs are set up already)
systemctl enable matrix-synapse-worker@federation_writer.service
systemctl restart matrix-synapse.target
```


@ -0,0 +1,20 @@
[Unit]
Description=Synapse %i
# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target
[Service]
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.generic_worker --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --config-path=/etc/matrix-synapse/workers/%i.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
SyslogIdentifier=matrix-synapse-%i
[Install]
WantedBy=matrix-synapse.target


@ -1,5 +1,8 @@
[Unit]
-Description=Synapse Matrix Homeserver
+Description=Synapse master
# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target
[Service]
Type=notify
@ -15,4 +18,4 @@ RestartSec=3
SyslogIdentifier=matrix-synapse

[Install]
-WantedBy=matrix.target
+WantedBy=matrix-synapse.target


@ -0,0 +1,6 @@
[Unit]
Description=Synapse parent target
After=network.target
[Install]
WantedBy=multi-user.target


@ -10,5 +10,4 @@ worker_listeners:
resources:
  - names: [federation]
worker_daemonize: false
worker_log_config: /etc/matrix-synapse/federation-reader-log.yaml


@ -196,7 +196,7 @@ Asks the server for the current position of all streams.
#### USER_SYNC (C)
A user has started or stopped syncing on this process.
#### CLEAR_USER_SYNC (C)
@ -216,10 +216,6 @@ Asks the server for the current position of all streams.
Inform the server a cache should be invalidated
#### SYNC (S, C)
Used exclusively in tests
### REMOTE_SERVER_UP (S, C)
Inform other processes that a remote server may have come back online.
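As a rough illustration of the wire format (newline-delimited `NAME <data>` lines, as described earlier in this file), a command like this can be split as follows; a sketch, not Synapse's actual parser:

```python
# Illustrative parsing of a replication command line.
line = "REMOTE_SERVER_UP example.com"
name, _, data = line.partition(" ")
assert name == "REMOTE_SERVER_UP"
assert data == "example.com"
```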


@ -120,7 +120,7 @@ Your home server configuration file needs the following extra keys:
As an example, here is the relevant section of the config file for matrix.org:
turn_uris: [ "turn:turn.matrix.org:3478?transport=udp", "turn:turn.matrix.org:3478?transport=tcp" ]
-turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons
+turn_shared_secret: "n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons"
turn_user_lifetime: 86400000
turn_allow_guests: True


@ -52,24 +52,20 @@ synapse process.)
You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
-subdirectory, to allow synctl to manipulate them. An additional configuration
-for the master synapse process will need to be created because the process will
-not be started automatically. That configuration should look like this:
-worker_app: synapse.app.homeserver
-daemonize: true
+subdirectory, to allow synctl to manipulate them.
Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
e.g. the HTTP listener that it provides (if any); logging configuration; etc.
You should minimise the number of overrides though to maintain a usable config.
In the config file for each worker, you must specify the type of worker
application (`worker_app`). The currently available worker applications are
listed below. You must also specify the replication endpoints that it's talking
to on the main synapse process. `worker_replication_host` should specify the
host of the main synapse, `worker_replication_port` should point to the TCP
replication listener port and `worker_replication_http_port` should point to
the HTTP replication port.
Currently, the `event_creator` and `federation_reader` workers require specifying
`worker_replication_http_port`.
@ -90,8 +86,6 @@ For instance:
- names:
  - client
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/synchrotron.pid
worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml

...is a full configuration for a synchrotron worker instance, which will expose a
@ -101,7 +95,31 @@ by the main synapse.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (`localhost:8083` in the above example).
-Finally, to actually run your worker-based synapse, you must pass synctl the -a
+Finally, you need to start your worker processes. This can be done with either
+`synctl` or your distribution's preferred service manager such as `systemd`. We
+recommend the use of `systemd` where available: for information on setting up
+`systemd` to start synapse workers, see
+[systemd-with-workers](systemd-with-workers). To use `synctl`, see below.
### Using synctl
If you want to use `synctl` to manage your synapse processes, you will need to
create an additional configuration file for the master synapse process. That
configuration should look like this:
```yaml
worker_app: synapse.app.homeserver
```
Additionally, each worker app must be configured with the name of a "pid file",
to which it will write its process ID when it starts. For example, for a
synchrotron, you might write:
```yaml
worker_pid_file: /home/matrix/synapse/synchrotron.pid
```
Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.:
@ -268,6 +286,8 @@ Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|unstable)/pushrules/.*$
^/_matrix/client/(api/v1|r0|unstable)/groups/.*$
^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/account_data/
^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/rooms/[^/]*/account_data/
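These are ordinary regular expressions over the request path; a quick sketch of checking whether a request could be routed to such a worker (illustrative, not Synapse's actual router):

```python
import re

# The two account_data patterns added above.
patterns = [
    r"^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/account_data/",
    r"^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/rooms/[^/]*/account_data/",
]

path = "/_matrix/client/r0/user/@alice:example.com/account_data/m.direct"
assert any(re.match(p, path) for p in patterns)
```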
Additionally, the following REST endpoints can be handled, but all requests must
be routed to the same instance:


@ -33,6 +33,10 @@ parts:
python-version: python3
python-packages:
  - '.[all]'
- pip
- setuptools
- setuptools-scm
- wheel
build-packages:
  - libffi-dev
  - libturbojpeg0-dev


@ -0,0 +1,13 @@
from .sorteddict import (
SortedDict,
SortedKeysView,
SortedItemsView,
SortedValuesView,
)
__all__ = [
"SortedDict",
"SortedKeysView",
"SortedItemsView",
"SortedValuesView",
]


@ -0,0 +1,124 @@
# stub for SortedDict. This is a lightly edited copy of
# https://github.com/grantjenks/python-sortedcontainers/blob/eea42df1f7bad2792e8da77335ff888f04b9e5ae/sortedcontainers/sorteddict.pyi
# (from https://github.com/grantjenks/python-sortedcontainers/pull/107)
from typing import (
Any,
Callable,
Dict,
Hashable,
Iterator,
Iterable,
ItemsView,
KeysView,
List,
Mapping,
Optional,
Sequence,
Type,
TypeVar,
Tuple,
Union,
ValuesView,
overload,
)
_T = TypeVar("_T")
_S = TypeVar("_S")
_T_h = TypeVar("_T_h", bound=Hashable)
_KT = TypeVar("_KT", bound=Hashable) # Key type.
_VT = TypeVar("_VT") # Value type.
_KT_co = TypeVar("_KT_co", covariant=True, bound=Hashable)
_VT_co = TypeVar("_VT_co", covariant=True)
_SD = TypeVar("_SD", bound=SortedDict)
_Key = Callable[[_T], Any]
class SortedDict(Dict[_KT, _VT]):
@overload
def __init__(self, **kwargs: _VT) -> None: ...
@overload
def __init__(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
@overload
def __init__(
self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT
) -> None: ...
@overload
def __init__(self, __key: _Key[_KT], **kwargs: _VT) -> None: ...
@overload
def __init__(
self, __key: _Key[_KT], __map: Mapping[_KT, _VT], **kwargs: _VT
) -> None: ...
@overload
def __init__(
self, __key: _Key[_KT], __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT
) -> None: ...
@property
def key(self) -> Optional[_Key[_KT]]: ...
@property
def iloc(self) -> SortedKeysView[_KT]: ...
def clear(self) -> None: ...
def __delitem__(self, key: _KT) -> None: ...
def __iter__(self) -> Iterator[_KT]: ...
def __reversed__(self) -> Iterator[_KT]: ...
def __setitem__(self, key: _KT, value: _VT) -> None: ...
def _setitem(self, key: _KT, value: _VT) -> None: ...
def copy(self: _SD) -> _SD: ...
def __copy__(self: _SD) -> _SD: ...
@classmethod
@overload
def fromkeys(cls, seq: Iterable[_T_h]) -> SortedDict[_T_h, None]: ...
@classmethod
@overload
def fromkeys(cls, seq: Iterable[_T_h], value: _S) -> SortedDict[_T_h, _S]: ...
def keys(self) -> SortedKeysView[_KT]: ...
def items(self) -> SortedItemsView[_KT, _VT]: ...
def values(self) -> SortedValuesView[_VT]: ...
@overload
def pop(self, key: _KT) -> _VT: ...
@overload
def pop(self, key: _KT, default: _T = ...) -> Union[_VT, _T]: ...
def popitem(self, index: int = ...) -> Tuple[_KT, _VT]: ...
def peekitem(self, index: int = ...) -> Tuple[_KT, _VT]: ...
def setdefault(self, key: _KT, default: Optional[_VT] = ...) -> _VT: ...
@overload
def update(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
@overload
def update(self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT) -> None: ...
@overload
def update(self, **kwargs: _VT) -> None: ...
def __reduce__(
self,
) -> Tuple[
Type[SortedDict[_KT, _VT]], Tuple[Callable[[_KT], Any], List[Tuple[_KT, _VT]]],
]: ...
def __repr__(self) -> str: ...
def _check(self) -> None: ...
def islice(
self, start: Optional[int] = ..., stop: Optional[int] = ..., reverse=bool,
) -> Iterator[_KT]: ...
def bisect_left(self, value: _KT) -> int: ...
def bisect_right(self, value: _KT) -> int: ...
class SortedKeysView(KeysView[_KT_co], Sequence[_KT_co]):
@overload
def __getitem__(self, index: int) -> _KT_co: ...
@overload
def __getitem__(self, index: slice) -> List[_KT_co]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
class SortedItemsView( # type: ignore
ItemsView[_KT_co, _VT_co], Sequence[Tuple[_KT_co, _VT_co]]
):
def __iter__(self) -> Iterator[Tuple[_KT_co, _VT_co]]: ...
@overload
def __getitem__(self, index: int) -> Tuple[_KT_co, _VT_co]: ...
@overload
def __getitem__(self, index: slice) -> List[Tuple[_KT_co, _VT_co]]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
class SortedValuesView(ValuesView[_VT_co], Sequence[_VT_co]):
@overload
def __getitem__(self, index: int) -> _VT_co: ...
@overload
def __getitem__(self, index: slice) -> List[_VT_co]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
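For context, a brief usage sketch of the annotated type (not part of the stub): keys iterate in sorted order and support the bisection used for range queries over stream IDs.

```python
from sortedcontainers import SortedDict

cache = SortedDict()  # e.g. stream_id -> entity
cache[3] = "c"
cache[1] = "a"
cache[2] = "b"

assert list(cache.keys()) == [1, 2, 3]
# Index of the first key strictly greater than 2:
assert cache.bisect_right(2) == 2
```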

stubs/txredisapi.pyi (new file)

@ -0,0 +1,40 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Contains *incomplete* type hints for txredisapi.
"""
from typing import List, Optional, Union
class RedisProtocol:
def publish(self, channel: str, message: bytes): ...
class SubscriberProtocol:
def subscribe(self, channels: Union[str, List[str]]): ...
def lazyConnection(
host: str = ...,
port: int = ...,
dbid: Optional[int] = ...,
reconnect: bool = ...,
charset: str = ...,
password: Optional[str] = ...,
connectTimeout: Optional[int] = ...,
replyTimeout: Optional[int] = ...,
convertNumbers: bool = ...,
) -> RedisProtocol: ...
class SubscriberFactory:
def buildProtocol(self, addr): ...
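A short usage sketch of the hinted functions (host, port and channel name are placeholders; this must run under a Twisted reactor, and `publish` returns a Deferred):

```python
import txredisapi

# lazyConnection returns a connection proxy that queues commands until
# the connection to Redis is established.
connection = txredisapi.lazyConnection(host="localhost", port=6379)
d = connection.publish("replication", b"PING")
```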


@ -36,7 +36,7 @@ try:
except ImportError:
    pass

-__version__ = "1.12.3"
+__version__ = "1.12.4"

if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
    # We import here so that we don't have to install a bunch of deps when


@ -97,6 +97,8 @@ class EventTypes(object):
Retention = "m.room.retention" Retention = "m.room.retention"
Presence = "m.presence"
class RejectedReason(object): class RejectedReason(object):
AUTH_ERROR = "auth_error" AUTH_ERROR = "auth_error"


@ -43,7 +43,6 @@ from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string
@ -79,17 +78,6 @@ class AdminCmdServer(HomeServer):
    def start_listening(self, listeners):
        pass
def build_tcp_replication(self):
return AdminCmdReplicationHandler(self)
class AdminCmdReplicationHandler(ReplicationClientHandler):
async def on_rdata(self, stream_name, token, rows):
pass
def get_streams_to_replicate(self):
return {}
@defer.inlineCallbacks
def export_data_command(hs, args):


@ -17,6 +17,9 @@
import contextlib
import logging
import sys
from typing import Dict, Iterable
from typing_extensions import ContextManager
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
@ -38,14 +41,14 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.federation.transport.server import TransportLayerServer
-from synapse.handlers.presence import PresenceHandler, get_interested_parties
+from synapse.handlers.presence import BasePresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
-from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
+from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@ -64,7 +67,7 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler from synapse.replication.tcp.client import ReplicationDataHandler
from synapse.replication.tcp.commands import ClearUserSyncsCommand from synapse.replication.tcp.commands import ClearUserSyncsCommand
from synapse.replication.tcp.streams import ( from synapse.replication.tcp.streams import (
AccountDataStream, AccountDataStream,
@ -110,6 +113,10 @@ from synapse.rest.client.v1.voip import VoipRestServlet
from synapse.rest.client.v2_alpha import groups, sync, user_directory from synapse.rest.client.v2_alpha import groups, sync, user_directory
from synapse.rest.client.v2_alpha._base import client_patterns from synapse.rest.client.v2_alpha._base import client_patterns
from synapse.rest.client.v2_alpha.account import ThreepidRestServlet from synapse.rest.client.v2_alpha.account import ThreepidRestServlet
from synapse.rest.client.v2_alpha.account_data import (
AccountDataServlet,
RoomAccountDataServlet,
)
from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet
from synapse.rest.client.v2_alpha.register import RegisterRestServlet from synapse.rest.client.v2_alpha.register import RegisterRestServlet
from synapse.rest.client.versions import VersionsRestServlet from synapse.rest.client.versions import VersionsRestServlet
@@ -221,23 +228,32 @@ class KeyUploadServlet(RestServlet):
         return 200, {"one_time_key_counts": result}


+class _NullContextManager(ContextManager[None]):
+    """A context manager which does nothing."""
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        pass
+
+
 UPDATE_SYNCING_USERS_MS = 10 * 1000


-class GenericWorkerPresence(object):
+class GenericWorkerPresence(BasePresenceHandler):
     def __init__(self, hs):
+        super().__init__(hs)
         self.hs = hs
         self.is_mine_id = hs.is_mine_id
         self.http_client = hs.get_simple_http_client()
-        self.store = hs.get_datastore()
-        self.user_to_num_current_syncs = {}
-        self.clock = hs.get_clock()
+
+        self._presence_enabled = hs.config.use_presence
+
+        # The number of ongoing syncs on this process, by user id.
+        # Empty if _presence_enabled is false.
+        self._user_to_num_current_syncs = {}  # type: Dict[str, int]
+
         self.notifier = hs.get_notifier()
         self.instance_id = hs.get_instance_id()

-        active_presence = self.store.take_presence_startup_info()
-        self.user_to_current_state = {state.user_id: state for state in active_presence}
-
         # user_id -> last_sync_ms. Lists the users that have stopped syncing
         # but we haven't notified the master of that yet
         self.users_going_offline = {}

@@ -255,13 +271,13 @@ class GenericWorkerPresence(object):
         )

     def _on_shutdown(self):
-        if self.hs.config.use_presence:
+        if self._presence_enabled:
             self.hs.get_tcp_replication().send_command(
                 ClearUserSyncsCommand(self.instance_id)
             )

     def send_user_sync(self, user_id, is_syncing, last_sync_ms):
-        if self.hs.config.use_presence:
+        if self._presence_enabled:
             self.hs.get_tcp_replication().send_user_sync(
                 self.instance_id, user_id, is_syncing, last_sync_ms
             )
@@ -303,28 +319,33 @@ class GenericWorkerPresence(object):
         # TODO How's this supposed to work?
         return defer.succeed(None)

-    get_states = __func__(PresenceHandler.get_states)
-    get_state = __func__(PresenceHandler.get_state)
-    current_state_for_users = __func__(PresenceHandler.current_state_for_users)
-
-    def user_syncing(self, user_id, affect_presence):
-        if affect_presence:
-            curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
-            self.user_to_num_current_syncs[user_id] = curr_sync + 1
-
-            # If we went from no in flight sync to some, notify replication
-            if self.user_to_num_current_syncs[user_id] == 1:
-                self.mark_as_coming_online(user_id)
+    async def user_syncing(
+        self, user_id: str, affect_presence: bool
+    ) -> ContextManager[None]:
+        """Record that a user is syncing.
+
+        Called by the sync and events servlets to record that a user has connected to
+        this worker and is waiting for some events.
+        """
+        if not affect_presence or not self._presence_enabled:
+            return _NullContextManager()
+
+        curr_sync = self._user_to_num_current_syncs.get(user_id, 0)
+        self._user_to_num_current_syncs[user_id] = curr_sync + 1
+
+        # If we went from no in flight sync to some, notify replication
+        if self._user_to_num_current_syncs[user_id] == 1:
+            self.mark_as_coming_online(user_id)

         def _end():
             # We check that the user_id is in user_to_num_current_syncs because
             # user_to_num_current_syncs may have been cleared if we are
             # shutting down.
-            if affect_presence and user_id in self.user_to_num_current_syncs:
-                self.user_to_num_current_syncs[user_id] -= 1
+            if user_id in self._user_to_num_current_syncs:
+                self._user_to_num_current_syncs[user_id] -= 1

                 # If we went from one in flight sync to non, notify replication
-                if self.user_to_num_current_syncs[user_id] == 0:
+                if self._user_to_num_current_syncs[user_id] == 0:
                     self.mark_as_going_offline(user_id)

         @contextlib.contextmanager

@@ -334,7 +355,7 @@
             finally:
                 _end()

-        return defer.succeed(_user_syncing())
+        return _user_syncing()

     @defer.inlineCallbacks
     def notify_from_replication(self, states, stream_id):

@@ -369,15 +390,12 @@
             stream_id = token
             yield self.notify_from_replication(states, stream_id)

-    def get_currently_syncing_users(self):
-        if self.hs.config.use_presence:
-            return [
-                user_id
-                for user_id, count in self.user_to_num_current_syncs.items()
-                if count > 0
-            ]
-        else:
-            return set()
+    def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
+        return [
+            user_id
+            for user_id, count in self._user_to_num_current_syncs.items()
+            if count > 0
+        ]
 class GenericWorkerTyping(object):

@@ -501,6 +519,8 @@ class GenericWorkerServer(HomeServer):
                     ProfileDisplaynameRestServlet(self).register(resource)
                     ProfileRestServlet(self).register(resource)
                     KeyUploadServlet(self).register(resource)
+                    AccountDataServlet(self).register(resource)
+                    RoomAccountDataServlet(self).register(resource)
                     sync.register_servlets(self, resource)
                     events.register_servlets(self, resource)

@@ -603,7 +623,7 @@ class GenericWorkerServer(HomeServer):
     def remove_pusher(self, app_id, push_key, user_id):
         self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)

-    def build_tcp_replication(self):
+    def build_replication_data_handler(self):
         return GenericWorkerReplicationHandler(self)

     def build_presence_handler(self):

@@ -613,14 +633,13 @@ class GenericWorkerServer(HomeServer):
         return GenericWorkerTyping(self)


-class GenericWorkerReplicationHandler(ReplicationClientHandler):
+class GenericWorkerReplicationHandler(ReplicationDataHandler):
     def __init__(self, hs):
         super(GenericWorkerReplicationHandler, self).__init__(hs.get_datastore())

         self.store = hs.get_datastore()
         self.typing_handler = hs.get_typing_handler()
-        # NB this is a SynchrotronPresence, not a normal PresenceHandler
-        self.presence_handler = hs.get_presence_handler()
+        self.presence_handler = hs.get_presence_handler()  # type: GenericWorkerPresence
         self.notifier = hs.get_notifier()

         self.notify_pushers = hs.config.start_pushers

@@ -644,9 +663,6 @@ class GenericWorkerReplicationHandler(ReplicationClientHandler):
         args.update(self.send_handler.stream_positions())
         return args

-    def get_currently_syncing_users(self):
-        return self.presence_handler.get_currently_syncing_users()
-
     async def process_and_notify(self, stream_name, token, rows):
         try:
             if self.send_handler:
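To make the reworked presence API above concrete, a caller of user_syncing now looks roughly like this sketch (the function and user names are hypothetical; presence_handler is a GenericWorkerPresence):

async def handle_sync(presence_handler, user_id: str):
    # user_syncing() is a coroutine returning a plain context manager
    # (_NullContextManager when presence is disabled), so we await it
    # and then enter the result synchronously.
    ctx = await presence_handler.user_syncing(user_id, affect_presence=True)
    with ctx:
        ...  # long-poll for events on behalf of user_id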


@@ -272,6 +272,12 @@ class SynapseHomeServer(HomeServer):
     def start_listening(self, listeners):
         config = self.get_config()

+        if config.redis_enabled:
+            # If redis is enabled we connect via the replication command handler
+            # in the same way as the workers (since we're effectively a client
+            # rather than a server).
+            self.get_tcp_replication().start_replication(self)
+
         for listener in listeners:
             if listener["type"] == "http":
                 self._listening_services.extend(self._listener_http(config, listener))

View File

@@ -468,8 +468,8 @@ class RootConfig(object):
             Returns: Config object, or None if --generate-config or --generate-keys was set
         """

-        config_parser = argparse.ArgumentParser(add_help=False)
-        config_parser.add_argument(
+        parser = argparse.ArgumentParser(description=description)
+        parser.add_argument(
             "-c",
             "--config-path",
             action="append",

@@ -478,7 +478,7 @@
             " may specify directories containing *.yaml files.",
         )

-        generate_group = config_parser.add_argument_group("Config generation")
+        generate_group = parser.add_argument_group("Config generation")
         generate_group.add_argument(
             "--generate-config",
             action="store_true",

@@ -526,12 +526,13 @@
             ),
         )

-        config_args, remaining_args = config_parser.parse_known_args(argv)
+        cls.invoke_all_static("add_arguments", parser)
+        config_args = parser.parse_args(argv)

         config_files = find_config_files(search_paths=config_args.config_path)
         if not config_files:
-            config_parser.error(
+            parser.error(
                 "Must supply a config file.\nA config file can be automatically"
                 ' generated using "--generate-config -H SERVER_NAME'
                 ' -c CONFIG-FILE"'

@@ -550,7 +551,7 @@
         if config_args.generate_config:
             if config_args.report_stats is None:
-                config_parser.error(
+                parser.error(
                     "Please specify either --report-stats=yes or --report-stats=no\n\n"
                     + MISSING_REPORT_STATS_SPIEL
                 )

@@ -609,15 +610,6 @@
             )
             generate_missing_configs = True

-        parser = argparse.ArgumentParser(
-            parents=[config_parser],
-            description=description,
-            formatter_class=argparse.RawDescriptionHelpFormatter,
-        )
-
-        obj.invoke_all_static("add_arguments", parser)
-        args = parser.parse_args(remaining_args)
-
         config_dict = read_config_files(config_files)
         if generate_missing_configs:
             obj.generate_missing_files(config_dict, config_dir_path)

@@ -626,7 +618,7 @@
         obj.parse_config_dict(
             config_dict, config_dir_path=config_dir_path, data_dir_path=data_dir_path
         )
-        obj.invoke_all("read_arguments", args)
+        obj.invoke_all("read_arguments", config_args)

         return obj
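The diff above collapses the old two-phase parse (parse_known_args on a bootstrap parser, then a second parser for per-module arguments) into a single parser that every config class decorates before one parse_args call. In miniature, under invented names:

import argparse

class ExampleConfig:
    @staticmethod
    def add_arguments(parser: argparse.ArgumentParser) -> None:
        parser.add_argument("--example-flag", action="store_true")

parser = argparse.ArgumentParser(description="demo")
parser.add_argument("-c", "--config-path", action="append")
# stands in for invoke_all_static("add_arguments", parser) over all config classes
ExampleConfig.add_arguments(parser)
args = parser.parse_args(["-c", "homeserver.yaml", "--example-flag"])
print(args.config_path, args.example_flag)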


@@ -77,13 +77,13 @@ class CacheConfig(Config):
         cache_config = config.get("caches", {})

         self.global_factor = cache_config.get(
-            "global_factor", CACHE_PROPERTIES["default_cache_size_factor"]
+            "global_factor", CACHE_PROPERTIES["default_size_factor"]
         )
         if not isinstance(self.global_factor, (int, float)):
             raise ConfigError("caches.global_factor must be a number.")

         # Set the global one so that it's reflected in new caches
-        CACHE_PROPERTIES["default_cache_size_factor"] = self.global_factor
+        CACHE_PROPERTIES["default_size_factor"] = self.global_factor

         # Load cache factors from the environment, but override them with the
         # ones in the config file if they exist


@@ -32,6 +32,7 @@ from .password import PasswordConfig
 from .password_auth_providers import PasswordAuthProviderConfig
 from .push import PushConfig
 from .ratelimiting import RatelimitConfig
+from .redis import RedisConfig
 from .registration import RegistrationConfig
 from .repository import ContentRepositoryConfig
 from .room_directory import RoomDirectoryConfig

@@ -83,5 +84,6 @@ class HomeServerConfig(RootConfig):
         RoomDirectoryConfig,
         ThirdPartyRulesConfig,
         TracerConfig,
+        RedisConfig,
         CacheConfig,
     ]


@@ -35,7 +35,7 @@ class PasswordAuthProviderConfig(Config):
         if ldap_config.get("enabled", False):
             providers.append({"module": LDAP_PROVIDER, "config": ldap_config})

-        providers.extend(config.get("password_providers", []))
+        providers.extend(config.get("password_providers") or [])
         for provider in providers:
             mod_name = provider["module"]

@@ -52,7 +52,19 @@
     def generate_config_section(self, **kwargs):
         return """\
-        #password_providers:
+        # Password providers allow homeserver administrators to integrate
+        # their Synapse installation with existing authentication methods
+        # ex. LDAP, external tokens, etc.
+        #
+        # For more information and known implementations, please see
+        # https://github.com/matrix-org/synapse/blob/master/docs/password_auth_providers.md
+        #
+        # Note: instances wishing to use SAML or CAS authentication should
+        # instead use the `saml2_config` or `cas_config` options,
+        # respectively.
+        #
+        password_providers:
+        #    # Example config for an LDAP auth provider
         #    - module: "ldap_auth_provider.LdapAuthProvider"
         #      config:
         #        enabled: true
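For context, a module named by such an entry is expected to expose roughly the following shape. This is a skeleton based on the password-auth-provider docs linked above; the class name and behaviour here are hypothetical:

class ExamplePasswordProvider:
    def __init__(self, config, account_handler):
        # account_handler gives access to Synapse's registration/login helpers
        self.account_handler = account_handler

    @staticmethod
    def parse_config(config):
        # validate/normalise the 'config' mapping from the YAML entry
        return config

    async def check_password(self, user_id: str, password: str) -> bool:
        # a real provider would consult LDAP, an external token service, etc.
        return False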

synapse/config/redis.py (new file)

@@ -0,0 +1,35 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.config._base import Config
from synapse.python_dependencies import check_requirements


class RedisConfig(Config):
    section = "redis"

    def read_config(self, config, **kwargs):
        redis_config = config.get("redis", {})
        self.redis_enabled = redis_config.get("enabled", False)

        if not self.redis_enabled:
            return

        check_requirements("redis")

        self.redis_host = redis_config.get("host", "localhost")
        self.redis_port = redis_config.get("port", 6379)
        self.redis_dbid = redis_config.get("dbid")
        self.redis_password = redis_config.get("password")
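Put together, the options read above correspond to a homeserver.yaml section along these lines (all values are placeholders):

redis:
  enabled: true
  host: localhost
  port: 6379
  dbid: 0          # optional; unset by default
  password: secret # optional; unset by default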


@@ -192,6 +192,10 @@ class ContentRepositoryConfig(Config):
         self.url_preview_url_blacklist = config.get("url_preview_url_blacklist", ())

+        self.url_preview_accept_language = config.get(
+            "url_preview_accept_language"
+        ) or ["en"]
+
     def generate_config_section(self, data_dir_path, **kwargs):
         media_store = os.path.join(data_dir_path, "media_store")
         uploads_path = os.path.join(data_dir_path, "uploads")

@@ -220,12 +224,11 @@
         #
         #media_storage_providers:
         #  - module: file_system
-        #    # Whether to write new local files.
+        #    # Whether to store newly uploaded local files
         #    store_local: false
-        #    # Whether to write new remote media
+        #    # Whether to store newly downloaded remote files
         #    store_remote: false
-        #    # Whether to block upload requests waiting for write to this
-        #    # provider to complete
+        #    # Whether to wait for successful storage for local uploads
         #    store_synchronous: false
         #    config:
         #       directory: /mnt/some/other/directory
@@ -329,6 +332,31 @@
         # The largest allowed URL preview spidering size in bytes
         #
         #max_spider_size: 10M

+        # A list of values for the Accept-Language HTTP header used when
+        # downloading webpages during URL preview generation. This allows
+        # Synapse to specify the preferred languages that URL previews should
+        # be in when communicating with remote servers.
+        #
+        # Each value is an IETF language tag; a 2-3 letter identifier for a
+        # language, optionally followed by subtags separated by '-', specifying
+        # a country or region variant.
+        #
+        # Multiple values can be provided, and a weight can be added to each by
+        # using quality value syntax (;q=). '*' translates to any language.
+        #
+        # Defaults to "en".
+        #
+        # Example:
+        #
+        # url_preview_accept_language:
+        #   - en-UK
+        #   - en-US;q=0.9
+        #   - fr;q=0.8
+        #   - *;q=0.7
+        #
+        #url_preview_accept_language:
+        #   - en
         """
         % locals()
         )
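For reference, the configured list is ultimately sent as an Accept-Language request header when fetching pages to preview; roughly like the following (the header assembly shown here is illustrative, not Synapse's actual code):

accept_language = ["en-UK", "en-US;q=0.9", "fr;q=0.8", "*;q=0.7"]
headers = {"Accept-Language": ", ".join(accept_language)}
# -> Accept-Language: en-UK, en-US;q=0.9, fr;q=0.8, *;q=0.7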


@@ -248,32 +248,32 @@ class SAML2Config(Config):
        #   remote:
        #     - url: https://our_idp/metadata.xml
        #
        #   # By default, the user has to go to our login page first. If you'd like
        #   # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
        #   # 'service.sp' section:
        #   #
        #   #service:
        #   #  sp:
        #   #    allow_unsolicited: true
        #
        #   # The examples below are just used to generate our metadata xml, and you
        #   # may well not need them, depending on your setup. Alternatively you
        #   # may need a whole lot more detail - see the pysaml2 docs!
        #
        #   description: ["My awesome SP", "en"]
        #   name: ["Test SP", "en"]
        #
        #   organization:
        #     name: Example com
        #     display_name:
        #       - ["Example co", "en"]
        #     url: "http://example.com"
        #
        #   contact_person:
        #     - given_name: Bob
        #       sur_name: "the Sysadmin"
        #       email_address": ["admin@example.com"]
        #       contact_type": technical

        # Instead of putting the config inline as above, you can specify a
        # separate pysaml2 configuration file:


@@ -507,6 +507,17 @@ class ServerConfig(Config):
         self.enable_ephemeral_messages = config.get("enable_ephemeral_messages", False)

+        # Inhibits the /requestToken endpoints from returning an error that might leak
+        # information about whether an e-mail address is in use or not on this
+        # homeserver. Instead, when this kind of error would occur, the endpoints
+        # return a 200 with a fake sid, without sending anything.
+        # This is a compromise between sending an email, which could be a spam vector,
+        # and letting the client know which email addresses are bound to an account
+        # and which are not.
+        self.request_token_inhibit_3pid_errors = config.get(
+            "request_token_inhibit_3pid_errors", False,
+        )
+
     def has_tls_listener(self) -> bool:
         return any(l["tls"] for l in self.listeners)

@@ -972,6 +983,16 @@
        #    - shortest_max_lifetime: 3d
        #      longest_max_lifetime: 1y
        #      interval: 1d

+        # Inhibits the /requestToken endpoints from returning an error that might leak
+        # information about whether an e-mail address is in use or not on this
+        # homeserver.
+        # Note that for some endpoints the error situation is the e-mail already being
+        # used, and for others it is the e-mail not being in use.
+        # If this option is enabled, instead of returning an error, these endpoints will
+        # act as if no error happened and return a fake session ID ('sid') to clients.
+        #
+        #request_token_inhibit_3pid_errors: true
        """
        % locals()
        )


@@ -12,6 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import os
 from typing import Any, Dict

 import pkg_resources

@@ -36,6 +37,18 @@ class SSOConfig(Config):
             template_dir = pkg_resources.resource_filename("synapse", "res/templates",)

         self.sso_redirect_confirm_template_dir = template_dir
+        self.sso_account_deactivated_template = self.read_file(
+            os.path.join(
+                self.sso_redirect_confirm_template_dir, "sso_account_deactivated.html"
+            ),
+            "sso_account_deactivated_template",
+        )
+        self.sso_auth_success_template = self.read_file(
+            os.path.join(
+                self.sso_redirect_confirm_template_dir, "sso_auth_success.html"
+            ),
+            "sso_auth_success_template",
+        )

         self.sso_client_whitelist = sso_config.get("client_whitelist") or []


@@ -15,7 +15,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
-from typing import Dict
+from typing import Any, Callable, Dict, List, Match, Optional, Tuple, Union

 import six
 from six import iteritems

@@ -38,6 +38,7 @@ from synapse.api.errors import (
     UnsupportedRoomVersionError,
 )
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
+from synapse.events import EventBase
 from synapse.federation.federation_base import FederationBase, event_from_pdu_json
 from synapse.federation.persistence import TransactionActions
 from synapse.federation.units import Edu, Transaction

@@ -94,7 +95,9 @@ class FederationServer(FederationBase):
         # come in waves.
         self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)

-    async def on_backfill_request(self, origin, room_id, versions, limit):
+    async def on_backfill_request(
+        self, origin: str, room_id: str, versions: List[str], limit: int
+    ) -> Tuple[int, Dict[str, Any]]:
         with (await self._server_linearizer.queue((origin, room_id))):
             origin_host, _ = parse_server_name(origin)
             await self.check_server_matches_acl(origin_host, room_id)

@@ -107,23 +110,25 @@ class FederationServer(FederationBase):
         return 200, res

-    async def on_incoming_transaction(self, origin, transaction_data):
+    async def on_incoming_transaction(
+        self, origin: str, transaction_data: JsonDict
+    ) -> Tuple[int, Dict[str, Any]]:
         # keep this as early as possible to make the calculated origin ts as
         # accurate as possible.
         request_time = self._clock.time_msec()

         transaction = Transaction(**transaction_data)

-        if not transaction.transaction_id:
+        if not transaction.transaction_id:  # type: ignore
             raise Exception("Transaction missing transaction_id")

-        logger.debug("[%s] Got transaction", transaction.transaction_id)
+        logger.debug("[%s] Got transaction", transaction.transaction_id)  # type: ignore

         # use a linearizer to ensure that we don't process the same transaction
         # multiple times in parallel.
         with (
             await self._transaction_linearizer.queue(
-                (origin, transaction.transaction_id)
+                (origin, transaction.transaction_id)  # type: ignore
             )
         ):
             result = await self._handle_incoming_transaction(

@@ -132,31 +137,33 @@ class FederationServer(FederationBase):
             return result

-    async def _handle_incoming_transaction(self, origin, transaction, request_time):
+    async def _handle_incoming_transaction(
+        self, origin: str, transaction: Transaction, request_time: int
+    ) -> Tuple[int, Dict[str, Any]]:
         """ Process an incoming transaction and return the HTTP response

         Args:
-            origin (unicode): the server making the request
-            transaction (Transaction): incoming transaction
-            request_time (int): timestamp that the HTTP request arrived at
+            origin: the server making the request
+            transaction: incoming transaction
+            request_time: timestamp that the HTTP request arrived at

         Returns:
-            Deferred[(int, object)]: http response code and body
+            HTTP response code and body
         """
         response = await self.transaction_actions.have_responded(origin, transaction)

         if response:
             logger.debug(
                 "[%s] We've already responded to this request",
-                transaction.transaction_id,
+                transaction.transaction_id,  # type: ignore
             )
             return response

-        logger.debug("[%s] Transaction is new", transaction.transaction_id)
+        logger.debug("[%s] Transaction is new", transaction.transaction_id)  # type: ignore

         # Reject if PDU count > 50 or EDU count > 100
-        if len(transaction.pdus) > 50 or (
-            hasattr(transaction, "edus") and len(transaction.edus) > 100
+        if len(transaction.pdus) > 50 or (  # type: ignore
+            hasattr(transaction, "edus") and len(transaction.edus) > 100  # type: ignore
         ):
             logger.info("Transaction PDU or EDU count too large. Returning 400")
@@ -204,13 +211,13 @@ class FederationServer(FederationBase):
         report back to the sending server.
         """

-        received_pdus_counter.inc(len(transaction.pdus))
+        received_pdus_counter.inc(len(transaction.pdus))  # type: ignore

         origin_host, _ = parse_server_name(origin)

-        pdus_by_room = {}
+        pdus_by_room = {}  # type: Dict[str, List[EventBase]]

-        for p in transaction.pdus:
+        for p in transaction.pdus:  # type: ignore
             if "unsigned" in p:
                 unsigned = p["unsigned"]
                 if "age" in unsigned:

@@ -254,7 +261,7 @@ class FederationServer(FederationBase):
         # require callouts to other servers to fetch missing events), but
         # impose a limit to avoid going too crazy with ram/cpu.

-        async def process_pdus_for_room(room_id):
+        async def process_pdus_for_room(room_id: str):
             logger.debug("Processing PDUs for %s", room_id)
             try:
                 await self.check_server_matches_acl(origin_host, room_id)

@@ -310,7 +317,9 @@ class FederationServer(FederationBase):
             TRANSACTION_CONCURRENCY_LIMIT,
         )

-    async def on_context_state_request(self, origin, room_id, event_id):
+    async def on_context_state_request(
+        self, origin: str, room_id: str, event_id: str
+    ) -> Tuple[int, Dict[str, Any]]:
         origin_host, _ = parse_server_name(origin)
         await self.check_server_matches_acl(origin_host, room_id)

@@ -338,7 +347,9 @@ class FederationServer(FederationBase):
         return 200, resp

-    async def on_state_ids_request(self, origin, room_id, event_id):
+    async def on_state_ids_request(
+        self, origin: str, room_id: str, event_id: str
+    ) -> Tuple[int, Dict[str, Any]]:
         if not event_id:
             raise NotImplementedError("Specify an event")

@@ -354,7 +365,9 @@ class FederationServer(FederationBase):
         return 200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}

-    async def _on_context_state_request_compute(self, room_id, event_id):
+    async def _on_context_state_request_compute(
+        self, room_id: str, event_id: str
+    ) -> Dict[str, list]:
         if event_id:
             pdus = await self.handler.get_state_for_pdu(room_id, event_id)
         else:

@@ -367,7 +380,9 @@ class FederationServer(FederationBase):
             "auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
         }

-    async def on_pdu_request(self, origin, event_id):
+    async def on_pdu_request(
+        self, origin: str, event_id: str
+    ) -> Tuple[int, Union[JsonDict, str]]:
         pdu = await self.handler.get_persisted_pdu(origin, event_id)

         if pdu:

@@ -375,12 +390,16 @@
         else:
             return 404, ""

-    async def on_query_request(self, query_type, args):
+    async def on_query_request(
+        self, query_type: str, args: Dict[str, str]
+    ) -> Tuple[int, Dict[str, Any]]:
         received_queries_counter.labels(query_type).inc()
         resp = await self.registry.on_query(query_type, args)
         return 200, resp

-    async def on_make_join_request(self, origin, room_id, user_id, supported_versions):
+    async def on_make_join_request(
+        self, origin: str, room_id: str, user_id: str, supported_versions: List[str]
+    ) -> Dict[str, Any]:
         origin_host, _ = parse_server_name(origin)
         await self.check_server_matches_acl(origin_host, room_id)

@@ -397,7 +416,7 @@ class FederationServer(FederationBase):
     async def on_invite_request(
         self, origin: str, content: JsonDict, room_version_id: str
-    ):
+    ) -> Dict[str, Any]:
         room_version = KNOWN_ROOM_VERSIONS.get(room_version_id)
         if not room_version:
             raise SynapseError(

@@ -414,7 +433,9 @@ class FederationServer(FederationBase):
         time_now = self._clock.time_msec()
         return {"event": ret_pdu.get_pdu_json(time_now)}

-    async def on_send_join_request(self, origin, content, room_id):
+    async def on_send_join_request(
+        self, origin: str, content: JsonDict, room_id: str
+    ) -> Dict[str, Any]:
         logger.debug("on_send_join_request: content: %s", content)

         room_version = await self.store.get_room_version(room_id)

@@ -434,7 +455,9 @@ class FederationServer(FederationBase):
             "auth_chain": [p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]],
         }

-    async def on_make_leave_request(self, origin, room_id, user_id):
+    async def on_make_leave_request(
+        self, origin: str, room_id: str, user_id: str
+    ) -> Dict[str, Any]:
         origin_host, _ = parse_server_name(origin)
         await self.check_server_matches_acl(origin_host, room_id)
         pdu = await self.handler.on_make_leave_request(origin, room_id, user_id)

@@ -444,7 +467,9 @@ class FederationServer(FederationBase):
         time_now = self._clock.time_msec()
         return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}

-    async def on_send_leave_request(self, origin, content, room_id):
+    async def on_send_leave_request(
+        self, origin: str, content: JsonDict, room_id: str
+    ) -> dict:
         logger.debug("on_send_leave_request: content: %s", content)

         room_version = await self.store.get_room_version(room_id)

@@ -460,7 +485,9 @@ class FederationServer(FederationBase):
         await self.handler.on_send_leave_request(origin, pdu)
         return {}

-    async def on_event_auth(self, origin, room_id, event_id):
+    async def on_event_auth(
+        self, origin: str, room_id: str, event_id: str
+    ) -> Tuple[int, Dict[str, Any]]:
         with (await self._server_linearizer.queue((origin, room_id))):
             origin_host, _ = parse_server_name(origin)
             await self.check_server_matches_acl(origin_host, room_id)
@@ -471,15 +498,21 @@ class FederationServer(FederationBase):
         return 200, res

     @log_function
-    def on_query_client_keys(self, origin, content):
-        return self.on_query_request("client_keys", content)
+    async def on_query_client_keys(
+        self, origin: str, content: Dict[str, str]
+    ) -> Tuple[int, Dict[str, Any]]:
+        return await self.on_query_request("client_keys", content)

-    async def on_query_user_devices(self, origin: str, user_id: str):
+    async def on_query_user_devices(
+        self, origin: str, user_id: str
+    ) -> Tuple[int, Dict[str, Any]]:
         keys = await self.device_handler.on_federation_query_user_devices(user_id)
         return 200, keys

     @trace
-    async def on_claim_client_keys(self, origin, content):
+    async def on_claim_client_keys(
+        self, origin: str, content: JsonDict
+    ) -> Dict[str, Any]:
         query = []
         for user_id, device_keys in content.get("one_time_keys", {}).items():
             for device_id, algorithm in device_keys.items():

@@ -488,7 +521,7 @@
         log_kv({"message": "Claiming one time keys.", "user, device pairs": query})
         results = await self.store.claim_e2e_one_time_keys(query)

-        json_result = {}
+        json_result = {}  # type: Dict[str, Dict[str, dict]]
         for user_id, device_keys in results.items():
             for device_id, keys in device_keys.items():
                 for key_id, json_bytes in keys.items():

@@ -511,8 +544,13 @@ class FederationServer(FederationBase):
         return {"one_time_keys": json_result}

     async def on_get_missing_events(
-        self, origin, room_id, earliest_events, latest_events, limit
-    ):
+        self,
+        origin: str,
+        room_id: str,
+        earliest_events: List[str],
+        latest_events: List[str],
+        limit: int,
+    ) -> Dict[str, list]:
         with (await self._server_linearizer.queue((origin, room_id))):
             origin_host, _ = parse_server_name(origin)
             await self.check_server_matches_acl(origin_host, room_id)

@@ -541,11 +579,11 @@ class FederationServer(FederationBase):
         return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}

     @log_function
-    def on_openid_userinfo(self, token):
+    async def on_openid_userinfo(self, token: str) -> Optional[str]:
         ts_now_ms = self._clock.time_msec()
-        return self.store.get_user_id_for_open_id_token(token, ts_now_ms)
+        return await self.store.get_user_id_for_open_id_token(token, ts_now_ms)

-    def _transaction_from_pdus(self, pdu_list):
+    def _transaction_from_pdus(self, pdu_list: List[EventBase]) -> Transaction:
         """Returns a new Transaction containing the given PDUs suitable for
         transmission.
         """

@@ -558,7 +596,7 @@ class FederationServer(FederationBase):
             destination=None,
         )

-    async def _handle_received_pdu(self, origin, pdu):
+    async def _handle_received_pdu(self, origin: str, pdu: EventBase) -> None:
         """ Process a PDU received in a federation /send/ transaction.

         If the event is invalid, then this method throws a FederationError.

@@ -579,10 +617,8 @@
         until we try to backfill across the discontinuity.

         Args:
-            origin (str): server which sent the pdu
-            pdu (FrozenEvent): received pdu
-
-        Returns (Deferred): completes with None
+            origin: server which sent the pdu
+            pdu: received pdu

         Raises: FederationError if the signatures / hash do not match, or
             if the event was unacceptable for any other reason (eg, too large,

@@ -625,25 +661,27 @@ class FederationServer(FederationBase):
         return "<ReplicationLayer(%s)>" % self.server_name

     async def exchange_third_party_invite(
-        self, sender_user_id, target_user_id, room_id, signed
+        self, sender_user_id: str, target_user_id: str, room_id: str, signed: Dict
     ):
         ret = await self.handler.exchange_third_party_invite(
             sender_user_id, target_user_id, room_id, signed
         )
         return ret

-    async def on_exchange_third_party_invite_request(self, room_id, event_dict):
+    async def on_exchange_third_party_invite_request(
+        self, room_id: str, event_dict: Dict
+    ):
         ret = await self.handler.on_exchange_third_party_invite_request(
             room_id, event_dict
         )
         return ret

-    async def check_server_matches_acl(self, server_name, room_id):
+    async def check_server_matches_acl(self, server_name: str, room_id: str):
         """Check if the given server is allowed by the server ACLs in the room

         Args:
-            server_name (str): name of server, *without any port part*
-            room_id (str): ID of the room to check
+            server_name: name of server, *without any port part*
+            room_id: ID of the room to check

         Raises:
             AuthError if the server does not match the ACL
@@ -661,15 +699,15 @@ class FederationServer(FederationBase):
             raise AuthError(code=403, msg="Server is banned from room")


-def server_matches_acl_event(server_name, acl_event):
+def server_matches_acl_event(server_name: str, acl_event: EventBase) -> bool:
     """Check if the given server is allowed by the ACL event

     Args:
-        server_name (str): name of server, without any port part
-        acl_event (EventBase): m.room.server_acl event
+        server_name: name of server, without any port part
+        acl_event: m.room.server_acl event

     Returns:
-        bool: True if this server is allowed by the ACLs
+        True if this server is allowed by the ACLs
     """
     logger.debug("Checking %s against acl %s", server_name, acl_event.content)

@@ -713,7 +751,7 @@ def server_matches_acl_event(server_name, acl_event):
     return False


-def _acl_entry_matches(server_name, acl_entry):
+def _acl_entry_matches(server_name: str, acl_entry: str) -> Match:
     if not isinstance(acl_entry, six.string_types):
         logger.warning(
             "Ignoring non-str ACL entry '%s' (is %s)", acl_entry, type(acl_entry)
@@ -732,13 +770,13 @@ class FederationHandlerRegistry(object):
         self.edu_handlers = {}
         self.query_handlers = {}

-    def register_edu_handler(self, edu_type, handler):
+    def register_edu_handler(self, edu_type: str, handler: Callable[[str, dict], None]):
         """Sets the handler callable that will be used to handle an incoming
         federation EDU of the given type.

         Args:
-            edu_type (str): The type of the incoming EDU to register handler for
-            handler (Callable[[str, dict]]): A callable invoked on incoming EDU
+            edu_type: The type of the incoming EDU to register handler for
+            handler: A callable invoked on incoming EDU
                 of the given type. The arguments are the origin server name and
                 the EDU contents.
         """

@@ -749,14 +787,16 @@ class FederationHandlerRegistry(object):
         self.edu_handlers[edu_type] = handler

-    def register_query_handler(self, query_type, handler):
+    def register_query_handler(
+        self, query_type: str, handler: Callable[[dict], defer.Deferred]
+    ):
         """Sets the handler callable that will be used to handle an incoming
         federation query of the given type.

         Args:
-            query_type (str): Category name of the query, which should match
+            query_type: Category name of the query, which should match
                 the string used by make_query.
-            handler (Callable[[dict], Deferred[dict]]): Invoked to handle
+            handler: Invoked to handle
                 incoming queries of this type. The return will be yielded
                 on and the result used as the response to the query request.
         """

@@ -767,10 +807,11 @@ class FederationHandlerRegistry(object):
         self.query_handlers[query_type] = handler

-    async def on_edu(self, edu_type, origin, content):
+    async def on_edu(self, edu_type: str, origin: str, content: dict):
         handler = self.edu_handlers.get(edu_type)
         if not handler:
             logger.warning("No handler registered for EDU type %s", edu_type)
+            return

         with start_active_span_from_edu(content, "handle_edu"):
             try:

@@ -780,7 +821,7 @@
             except Exception:
                 logger.exception("Failed to handle edu %r", edu_type)

-    def on_query(self, query_type, args):
+    def on_query(self, query_type: str, args: dict) -> defer.Deferred:
         handler = self.query_handlers.get(query_type)
         if not handler:
             logger.warning("No handler registered for query type %s", query_type)

@@ -807,7 +848,7 @@ class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
         super(ReplicationFederationHandlerRegistry, self).__init__()

-    async def on_edu(self, edu_type, origin, content):
+    async def on_edu(self, edu_type: str, origin: str, content: dict):
         """Overrides FederationHandlerRegistry
         """
         if not self.config.use_presence and edu_type == "m.presence":

@@ -821,7 +862,7 @@
         return await self._send_edu(edu_type=edu_type, origin=origin, content=content)

-    async def on_query(self, query_type, args):
+    async def on_query(self, query_type: str, args: dict):
         """Overrides FederationHandlerRegistry
         """
         handler = self.query_handlers.get(query_type)
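To illustrate the registry API typed above, wiring up an EDU handler looks roughly like the following sketch; the EDU type and handler body are illustrative, and the no-argument constructor is an assumption based on the __init__ shown in this diff:

from synapse.federation.federation_server import FederationHandlerRegistry

registry = FederationHandlerRegistry()

async def on_receipt_edu(origin: str, content: dict) -> None:
    # 'origin' is the sending server's name; 'content' is the EDU payload
    ...

registry.register_edu_handler("m.receipt", on_receipt_edu)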


@@ -399,20 +399,30 @@ class TransportLayerClient(object):
             {
                 "device_keys": {
                     "<user_id>": ["<device_id>"]
-            } }
+                }
+            }

         Response:
             {
                 "device_keys": {
                     "<user_id>": {
                         "<device_id>": {...}
-            } } }
+                    }
+                },
+                "master_key": {
+                    "<user_id>": {...}
+                },
+                "self_signing_key": {
+                    "<user_id>": {...}
+                }
+            }

         Args:
             destination(str): The server to query.
             query_content(dict): The user ids to query.
         Returns:
-            A dict containg the device keys.
+            A dict containing device and cross-signing keys.

         path = _create_v1_path("/user/keys/query")

@@ -429,14 +439,30 @@
         Response:
             {
                 "stream_id": "...",
-                "devices": [ { ... } ]
+                "devices": [ { ... } ],
+                "master_key": {
+                    "user_id": "<user_id>",
+                    "usage": [...],
+                    "keys": {...},
+                    "signatures": {
+                        "<user_id>": {...}
+                    }
+                },
+                "self_signing_key": {
+                    "user_id": "<user_id>",
+                    "usage": [...],
+                    "keys": {...},
+                    "signatures": {
+                        "<user_id>": {...}
+                    }
+                }
             }

         Args:
             destination(str): The server to query.
             query_content(dict): The user ids to query.
         Returns:
-            A dict containg the device keys.
+            A dict containing device and cross-signing keys.

         path = _create_v1_path("/user/devices/%s", user_id)

@@ -454,8 +480,10 @@
             {
                 "one_time_keys": {
                     "<user_id>": {
                         "<device_id>": "<algorithm>"
-            } } }
+                    }
+                }
+            }

         Response:
             {

@@ -463,13 +491,16 @@
                 "<user_id>": {
                     "<device_id>": {
                         "<algorithm>:<key_id>": "<key_base64>"
-            } } } }
+                    }
+                }
+            }
+        }

         Args:
             destination(str): The server to query.
             query_content(dict): The user ids to query.
         Returns:
-            A dict containg the one-time keys.
+            A dict containing the one-time keys.

         path = _create_v1_path("/user/keys/claim")


@ -18,14 +18,12 @@ import logging
import time import time
import unicodedata import unicodedata
import urllib.parse import urllib.parse
from typing import Any, Dict, Iterable, List, Optional from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union
import attr import attr
import bcrypt # type: ignore[import] import bcrypt # type: ignore[import]
import pymacaroons import pymacaroons
from twisted.internet import defer
import synapse.util.stringutils as stringutils import synapse.util.stringutils as stringutils
from synapse.api.constants import LoginType from synapse.api.constants import LoginType
from synapse.api.errors import ( from synapse.api.errors import (
@ -53,31 +51,6 @@ from ._base import BaseHandler
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
SUCCESS_TEMPLATE = """
<html>
<head>
<title>Success!</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
<script>
if (window.onAuthDone) {
window.onAuthDone();
} else if (window.opener && window.opener.postMessage) {
window.opener.postMessage("authDone", "*");
}
</script>
</head>
<body>
<div>
<p>Thank you</p>
<p>You may now close this window and return to the application</p>
</div>
</body>
</html>
"""
class AuthHandler(BaseHandler): class AuthHandler(BaseHandler):
SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000 SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000
@ -116,7 +89,7 @@ class AuthHandler(BaseHandler):
self.hs = hs # FIXME better possibility to access registrationHandler later? self.hs = hs # FIXME better possibility to access registrationHandler later?
self.macaroon_gen = hs.get_macaroon_generator() self.macaroon_gen = hs.get_macaroon_generator()
self._password_enabled = hs.config.password_enabled self._password_enabled = hs.config.password_enabled
self._saml2_enabled = hs.config.saml2_enabled self._sso_enabled = hs.config.saml2_enabled or hs.config.cas_enabled
# we keep this as a list despite the O(N^2) implication so that we can # we keep this as a list despite the O(N^2) implication so that we can
# keep PASSWORD first and avoid confusing clients which pick the first # keep PASSWORD first and avoid confusing clients which pick the first
@ -136,7 +109,7 @@ class AuthHandler(BaseHandler):
# necessarily identical. Login types have SSO (and other login types) # necessarily identical. Login types have SSO (and other login types)
# added in the rest layer, see synapse.rest.client.v1.login.LoginRestServerlet.on_GET. # added in the rest layer, see synapse.rest.client.v1.login.LoginRestServerlet.on_GET.
ui_auth_types = login_types.copy() ui_auth_types = login_types.copy()
if self._saml2_enabled: if self._sso_enabled:
ui_auth_types.append(LoginType.SSO) ui_auth_types.append(LoginType.SSO)
self._supported_ui_auth_types = ui_auth_types self._supported_ui_auth_types = ui_auth_types
@ -161,21 +134,28 @@ class AuthHandler(BaseHandler):
self._sso_auth_confirm_template = load_jinja2_templates( self._sso_auth_confirm_template = load_jinja2_templates(
hs.config.sso_redirect_confirm_template_dir, ["sso_auth_confirm.html"], hs.config.sso_redirect_confirm_template_dir, ["sso_auth_confirm.html"],
)[0] )[0]
# The following template is shown after a successful user interactive
# authentication session. It tells the user they can close the window.
self._sso_auth_success_template = hs.config.sso_auth_success_template
# The following template is shown during the SSO authentication process if
# the account is deactivated.
self._sso_account_deactivated_template = (
hs.config.sso_account_deactivated_template
)
self._server_name = hs.config.server_name self._server_name = hs.config.server_name
# cast to tuple for use with str.startswith # cast to tuple for use with str.startswith
self._whitelisted_sso_clients = tuple(hs.config.sso_client_whitelist) self._whitelisted_sso_clients = tuple(hs.config.sso_client_whitelist)
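(The tuple cast above matters because str.startswith accepts a tuple of
prefixes but not a list; a minimal illustration, with a hypothetical
whitelist entry:)

# str.startswith takes a tuple of prefixes; a list would raise TypeError.
url = "https://client.example.com/?loginToken=abc"
whitelist = ("https://client.example.com",)  # hypothetical entry
assert url.startswith(whitelist)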
@defer.inlineCallbacks async def validate_user_via_ui_auth(
def validate_user_via_ui_auth(
self, self,
requester: Requester, requester: Requester,
request: SynapseRequest, request: SynapseRequest,
request_body: Dict[str, Any], request_body: Dict[str, Any],
clientip: str, clientip: str,
description: str, description: str,
): ) -> dict:
""" """
Checks that the user is who they claim to be, via a UI auth. Checks that the user is who they claim to be, via a UI auth.
@ -196,7 +176,7 @@ class AuthHandler(BaseHandler):
describes the operation happening on their account. describes the operation happening on their account.
Returns: Returns:
defer.Deferred[dict]: the parameters for this request (which may The parameters for this request (which may
have been given only in a previous call). have been given only in a previous call).
Raises: Raises:
@ -226,7 +206,7 @@ class AuthHandler(BaseHandler):
flows = [[login_type] for login_type in self._supported_ui_auth_types] flows = [[login_type] for login_type in self._supported_ui_auth_types]
try: try:
result, params, _ = yield self.check_auth( result, params, _ = await self.check_auth(
flows, request, request_body, clientip, description flows, request, request_body, clientip, description
) )
except LoginError: except LoginError:
@ -265,23 +245,18 @@ class AuthHandler(BaseHandler):
""" """
return self.checkers.keys() return self.checkers.keys()
@defer.inlineCallbacks async def check_auth(
def check_auth(
self, self,
flows: List[List[str]], flows: List[List[str]],
request: SynapseRequest, request: SynapseRequest,
clientdict: Dict[str, Any], clientdict: Dict[str, Any],
clientip: str, clientip: str,
description: str, description: str,
): ) -> Tuple[dict, dict, str]:
""" """
Takes a dictionary sent by the client in the login / registration Takes a dictionary sent by the client in the login / registration
protocol and handles the User-Interactive Auth flow. protocol and handles the User-Interactive Auth flow.
As a side effect, this function fills in the 'creds' key on the user's
session with a map, which maps each auth-type (str) to the relevant
identity authenticated by that auth-type (mostly str, but for captcha, bool).
If no auth flows have been completed successfully, raises an If no auth flows have been completed successfully, raises an
InteractiveAuthIncompleteError. To handle this, you can use InteractiveAuthIncompleteError. To handle this, you can use
synapse.rest.client.v2_alpha._base.interactive_auth_handler as a synapse.rest.client.v2_alpha._base.interactive_auth_handler as a
@ -303,8 +278,7 @@ class AuthHandler(BaseHandler):
describes the operation happening on their account. describes the operation happening on their account.
Returns: Returns:
defer.Deferred[dict, dict, str]: a deferred tuple of A tuple of (creds, params, session_id).
(creds, params, session_id).
'creds' contains the authenticated credentials of each stage. 'creds' contains the authenticated credentials of each stage.
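A minimal caller sketch, assuming an AuthHandler instance reachable via
hs.get_auth_handler(); the servlet name and the password-only flow are
illustrative, not part of this change:

# Hedged sketch: driving check_auth from a request handler.
class ExampleServlet:
    def __init__(self, hs):
        self.auth_handler = hs.get_auth_handler()

    async def on_POST(self, request, body):
        creds, params, session_id = await self.auth_handler.check_auth(
            [["m.login.password"]],     # acceptable flows
            request,
            body,                       # the client dict, possibly with "auth"
            request.getClientIP(),
            "deactivate your account",  # description shown during UI auth
        )
        # Reaching here means every stage completed; otherwise check_auth
        # raised InteractiveAuthIncompleteError, which the
        # interactive_auth_handler decorator renders as a 401 challenge.
        return 200, {}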
@ -326,50 +300,47 @@ class AuthHandler(BaseHandler):
del clientdict["auth"] del clientdict["auth"]
if "session" in authdict: if "session" in authdict:
sid = authdict["session"] sid = authdict["session"]
session = self._get_session_info(sid)
if len(clientdict) > 0: # If there's no session ID, create a new session.
# This was designed to allow the client to omit the parameters if not sid:
# and just supply the session in subsequent calls so it split session = self._create_session(
# auth between devices by just sharing the session, (eg. so you clientdict, (request.uri, request.method, clientdict), description
# could continue registration from your phone having clicked the
# email auth link on there). It's probably too open to abuse
# because it lets unauthenticated clients store arbitrary objects
# on a homeserver.
# Revisit: Assuming the REST APIs do sensible validation, the data
# isn't arbintrary.
session["clientdict"] = clientdict
self._save_session(session)
elif "clientdict" in session:
clientdict = session["clientdict"]
# Ensure that the queried operation does not vary between stages of
# the UI authentication session. This is done by generating a stable
# comparator based on the URI, method, and body (minus the auth dict)
# and storing it during the initial query. Subsequent queries ensure
# that this comparator has not changed.
comparator = (request.uri, request.method, clientdict)
if "ui_auth" not in session:
session["ui_auth"] = comparator
self._save_session(session)
elif session["ui_auth"] != comparator:
raise SynapseError(
403,
"Requested operation has changed during the UI authentication session.",
) )
session_id = session["id"]
# Add a human readable description to the session. else:
if "description" not in session: session = self._get_session_info(sid)
session["description"] = description session_id = sid
self._save_session(session)
if not clientdict:
# This was designed to allow the client to omit the parameters
# and just supply the session in subsequent calls so it split
# auth between devices by just sharing the session, (eg. so you
# could continue registration from your phone having clicked the
# email auth link on there). It's probably too open to abuse
# because it lets unauthenticated clients store arbitrary objects
# on a homeserver.
# Revisit: Assuming the REST APIs do sensible validation, the data
# isn't arbitrary.
clientdict = session["clientdict"]
# Ensure that the queried operation does not vary between stages of
# the UI authentication session. This is done by generating a stable
# comparator based on the URI, method, and body (minus the auth dict)
# and storing it during the initial query. Subsequent queries ensure
# that this comparator has not changed.
comparator = (request.uri, request.method, clientdict)
if session["ui_auth"] != comparator:
raise SynapseError(
403,
"Requested operation has changed during the UI authentication session.",
)
if not authdict: if not authdict:
raise InteractiveAuthIncompleteError( raise InteractiveAuthIncompleteError(
self._auth_dict_for_flows(flows, session) self._auth_dict_for_flows(flows, session_id)
) )
if "creds" not in session:
session["creds"] = {}
creds = session["creds"] creds = session["creds"]
# check auth type currently being presented # check auth type currently being presented
@ -377,7 +348,7 @@ class AuthHandler(BaseHandler):
if "type" in authdict: if "type" in authdict:
login_type = authdict["type"] # type: str login_type = authdict["type"] # type: str
try: try:
result = yield self._check_auth_dict(authdict, clientip) result = await self._check_auth_dict(authdict, clientip)
if result: if result:
creds[login_type] = result creds[login_type] = result
self._save_session(session) self._save_session(session)
@ -409,15 +380,16 @@ class AuthHandler(BaseHandler):
list(clientdict), list(clientdict),
) )
return creds, clientdict, session["id"] return creds, clientdict, session_id
ret = self._auth_dict_for_flows(flows, session) ret = self._auth_dict_for_flows(flows, session_id)
ret["completed"] = list(creds) ret["completed"] = list(creds)
ret.update(errordict) ret.update(errordict)
raise InteractiveAuthIncompleteError(ret) raise InteractiveAuthIncompleteError(ret)
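The dict carried by that InteractiveAuthIncompleteError becomes the 401
response body seen by clients; a hedged example of its shape for a
recaptcha-only flow (all values invented):

# {
#     "session": "xxxxxxxxxxxxxxxxxxxxxxxx",          # 24-char session ID
#     "flows": [{"stages": ["m.login.recaptcha"]}],
#     "params": {"m.login.recaptcha": {"public_key": "<site key>"}},
#     "completed": [],                                 # stages passed so far
# }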
@defer.inlineCallbacks async def add_oob_auth(
def add_oob_auth(self, stagetype: str, authdict: Dict[str, Any], clientip: str): self, stagetype: str, authdict: Dict[str, Any], clientip: str
) -> bool:
""" """
Adds the result of out-of-band authentication into an existing auth Adds the result of out-of-band authentication into an existing auth
session. Currently used for adding the result of fallback auth. session. Currently used for adding the result of fallback auth.
@ -428,11 +400,9 @@ class AuthHandler(BaseHandler):
raise LoginError(400, "", Codes.MISSING_PARAM) raise LoginError(400, "", Codes.MISSING_PARAM)
sess = self._get_session_info(authdict["session"]) sess = self._get_session_info(authdict["session"])
if "creds" not in sess:
sess["creds"] = {}
creds = sess["creds"] creds = sess["creds"]
result = yield self.checkers[stagetype].check_auth(authdict, clientip) result = await self.checkers[stagetype].check_auth(authdict, clientip)
if result: if result:
creds[stagetype] = result creds[stagetype] = result
self._save_session(sess) self._save_session(sess)
@ -469,7 +439,7 @@ class AuthHandler(BaseHandler):
value: The data to store value: The data to store
""" """
sess = self._get_session_info(session_id) sess = self._get_session_info(session_id)
sess.setdefault("serverdict", {})[key] = value sess["serverdict"][key] = value
self._save_session(sess) self._save_session(sess)
def get_session_data( def get_session_data(
@ -484,10 +454,11 @@ class AuthHandler(BaseHandler):
default: Value to return if the key has not been set default: Value to return if the key has not been set
""" """
sess = self._get_session_info(session_id) sess = self._get_session_info(session_id)
return sess.setdefault("serverdict", {}).get(key, default) return sess["serverdict"].get(key, default)
@defer.inlineCallbacks async def _check_auth_dict(
def _check_auth_dict(self, authdict: Dict[str, Any], clientip: str): self, authdict: Dict[str, Any], clientip: str
) -> Union[Dict[str, Any], str]:
"""Attempt to validate the auth dict provided by a client """Attempt to validate the auth dict provided by a client
Args: Args:
@ -495,7 +466,7 @@ class AuthHandler(BaseHandler):
clientip: IP address of the client clientip: IP address of the client
Returns: Returns:
Deferred: result of the stage verification. Result of the stage verification.
Raises: Raises:
StoreError if there was a problem accessing the database StoreError if there was a problem accessing the database
@ -505,7 +476,7 @@ class AuthHandler(BaseHandler):
login_type = authdict["type"] login_type = authdict["type"]
checker = self.checkers.get(login_type) checker = self.checkers.get(login_type)
if checker is not None: if checker is not None:
res = yield checker.check_auth(authdict, clientip=clientip) res = await checker.check_auth(authdict, clientip=clientip)
return res return res
# build a v1-login-style dict out of the authdict and fall back to the # build a v1-login-style dict out of the authdict and fall back to the
@ -515,7 +486,7 @@ class AuthHandler(BaseHandler):
if user_id is None: if user_id is None:
raise SynapseError(400, "", Codes.MISSING_PARAM) raise SynapseError(400, "", Codes.MISSING_PARAM)
(canonical_id, callback) = yield self.validate_login(user_id, authdict) (canonical_id, callback) = await self.validate_login(user_id, authdict)
return canonical_id return canonical_id
def _get_params_recaptcha(self) -> dict: def _get_params_recaptcha(self) -> dict:
@ -539,7 +510,7 @@ class AuthHandler(BaseHandler):
} }
def _auth_dict_for_flows( def _auth_dict_for_flows(
self, flows: List[List[str]], session: Dict[str, Any] self, flows: List[List[str]], session_id: str,
) -> Dict[str, Any]: ) -> Dict[str, Any]:
public_flows = [] public_flows = []
for f in flows: for f in flows:
@ -558,31 +529,73 @@ class AuthHandler(BaseHandler):
params[stage] = get_params[stage]() params[stage] = get_params[stage]()
return { return {
"session": session["id"], "session": session_id,
"flows": [{"stages": f} for f in public_flows], "flows": [{"stages": f} for f in public_flows],
"params": params, "params": params,
} }
def _get_session_info(self, session_id: Optional[str]) -> dict: def _create_session(
self,
clientdict: Dict[str, Any],
ui_auth: Tuple[bytes, bytes, Dict[str, Any]],
description: str,
) -> dict:
""" """
Gets or creates a session given a session ID. Creates a new user interactive authentication session.
The session can be used to track data across multiple requests, e.g. for
interactive authentication.
Each session has the following keys:
id:
A unique identifier for this session. Passed back to the client
and returned for each stage.
clientdict:
The dictionary from the client root level, not the 'auth' key.
ui_auth:
A tuple which is checked at each stage of the authentication to
ensure that the asked for operation has not changed.
creds:
A map, which maps each auth-type (str) to the relevant identity
authenticated by that auth-type (mostly str, but for captcha, bool).
serverdict:
A map of data that is stored server-side and cannot be modified
by the client.
description:
A string description of the operation that the current
authentication is authorising.
Returns:
The newly created session.
"""
session_id = None
while session_id is None or session_id in self.sessions:
session_id = stringutils.random_string(24)
self.sessions[session_id] = {
"id": session_id,
"clientdict": clientdict,
"ui_auth": ui_auth,
"creds": {},
"serverdict": {},
"description": description,
}
return self.sessions[session_id]
def _get_session_info(self, session_id: str) -> dict:
"""
Gets a session given a session ID.
The session can be used to track data across multiple requests, e.g. for The session can be used to track data across multiple requests, e.g. for
interactive authentication. interactive authentication.
""" """
if session_id not in self.sessions: try:
session_id = None return self.sessions[session_id]
except KeyError:
raise SynapseError(400, "Unknown session ID: %s" % (session_id,))
if not session_id: async def get_access_token_for_user_id(
# create a new session
while session_id is None or session_id in self.sessions:
session_id = stringutils.random_string(24)
self.sessions[session_id] = {"id": session_id}
return self.sessions[session_id]
@defer.inlineCallbacks
def get_access_token_for_user_id(
self, user_id: str, device_id: Optional[str], valid_until_ms: Optional[int] self, user_id: str, device_id: Optional[str], valid_until_ms: Optional[int]
): ):
""" """
@ -612,10 +625,10 @@ class AuthHandler(BaseHandler):
) )
logger.info("Logging in user %s on device %s%s", user_id, device_id, fmt_expiry) logger.info("Logging in user %s on device %s%s", user_id, device_id, fmt_expiry)
yield self.auth.check_auth_blocking(user_id) await self.auth.check_auth_blocking(user_id)
access_token = self.macaroon_gen.generate_access_token(user_id) access_token = self.macaroon_gen.generate_access_token(user_id)
yield self.store.add_access_token_to_user( await self.store.add_access_token_to_user(
user_id, access_token, device_id, valid_until_ms user_id, access_token, device_id, valid_until_ms
) )
@ -625,15 +638,14 @@ class AuthHandler(BaseHandler):
# device, so we double-check it here. # device, so we double-check it here.
if device_id is not None: if device_id is not None:
try: try:
yield self.store.get_device(user_id, device_id) await self.store.get_device(user_id, device_id)
except StoreError: except StoreError:
yield self.store.delete_access_token(access_token) await self.store.delete_access_token(access_token)
raise StoreError(400, "Login raced against device deletion") raise StoreError(400, "Login raced against device deletion")
return access_token return access_token
@defer.inlineCallbacks async def check_user_exists(self, user_id: str) -> Optional[str]:
def check_user_exists(self, user_id: str):
""" """
Checks to see if a user with the given id exists. Will check case Checks to see if a user with the given id exists. Will check case
insensitively, but return None if there are multiple inexact matches. insensitively, but return None if there are multiple inexact matches.
@ -642,28 +654,25 @@ class AuthHandler(BaseHandler):
user_id: complete @user:id user_id: complete @user:id
Returns: Returns:
defer.Deferred: (unicode) canonical_user_id, or None if zero or The canonical_user_id, or None if zero or multiple matches
multiple matches
Raises:
UserDeactivatedError if a user is found but is deactivated.
""" """
res = yield self._find_user_id_and_pwd_hash(user_id) res = await self._find_user_id_and_pwd_hash(user_id)
if res is not None: if res is not None:
return res[0] return res[0]
return None return None
@defer.inlineCallbacks async def _find_user_id_and_pwd_hash(
def _find_user_id_and_pwd_hash(self, user_id: str): self, user_id: str
) -> Optional[Tuple[str, str]]:
"""Checks to see if a user with the given id exists. Will check case """Checks to see if a user with the given id exists. Will check case
insensitively, but will return None if there are multiple inexact insensitively, but will return None if there are multiple inexact
matches. matches.
Returns: Returns:
tuple: A 2-tuple of `(canonical_user_id, password_hash)` A 2-tuple of `(canonical_user_id, password_hash)` or `None`
None: if there is not exactly one match if there is not exactly one match
""" """
user_infos = yield self.store.get_users_by_id_case_insensitive(user_id) user_infos = await self.store.get_users_by_id_case_insensitive(user_id)
result = None result = None
if not user_infos: if not user_infos:
@ -696,8 +705,9 @@ class AuthHandler(BaseHandler):
""" """
return self._supported_login_types return self._supported_login_types
@defer.inlineCallbacks async def validate_login(
def validate_login(self, username: str, login_submission: Dict[str, Any]): self, username: str, login_submission: Dict[str, Any]
) -> Tuple[str, Optional[Callable[[Dict[str, str]], None]]]:
"""Authenticates the user for the /login API """Authenticates the user for the /login API
Also used by the user-interactive auth flow to validate Also used by the user-interactive auth flow to validate
@ -708,7 +718,7 @@ class AuthHandler(BaseHandler):
login_submission: the whole of the login submission login_submission: the whole of the login submission
(including 'type' and other relevant fields) (including 'type' and other relevant fields)
Returns: Returns:
Deferred[str, func]: canonical user id, and optional callback A tuple of the canonical user id, and optional callback
to be called once the access token and device id are issued to be called once the access token and device id are issued
Raises: Raises:
StoreError if there was a problem accessing the database StoreError if there was a problem accessing the database
@ -737,7 +747,7 @@ class AuthHandler(BaseHandler):
for provider in self.password_providers: for provider in self.password_providers:
if hasattr(provider, "check_password") and login_type == LoginType.PASSWORD: if hasattr(provider, "check_password") and login_type == LoginType.PASSWORD:
known_login_type = True known_login_type = True
is_valid = yield provider.check_password(qualified_user_id, password) is_valid = await provider.check_password(qualified_user_id, password)
if is_valid: if is_valid:
return qualified_user_id, None return qualified_user_id, None
@ -769,7 +779,7 @@ class AuthHandler(BaseHandler):
% (login_type, missing_fields), % (login_type, missing_fields),
) )
result = yield provider.check_auth(username, login_type, login_dict) result = await provider.check_auth(username, login_type, login_dict)
if result: if result:
if isinstance(result, str): if isinstance(result, str):
result = (result, None) result = (result, None)
@ -778,8 +788,8 @@ class AuthHandler(BaseHandler):
if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled: if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
known_login_type = True known_login_type = True
canonical_user_id = yield self._check_local_password( canonical_user_id = await self._check_local_password(
qualified_user_id, password qualified_user_id, password # type: ignore
) )
if canonical_user_id: if canonical_user_id:
@ -792,8 +802,9 @@ class AuthHandler(BaseHandler):
# login, it turns all LoginErrors into a 401 anyway. # login, it turns all LoginErrors into a 401 anyway.
raise LoginError(403, "Invalid password", errcode=Codes.FORBIDDEN) raise LoginError(403, "Invalid password", errcode=Codes.FORBIDDEN)
@defer.inlineCallbacks async def check_password_provider_3pid(
def check_password_provider_3pid(self, medium: str, address: str, password: str): self, medium: str, address: str, password: str
) -> Tuple[Optional[str], Optional[Callable[[Dict[str, str]], None]]]:
"""Check if a password provider is able to validate a thirdparty login """Check if a password provider is able to validate a thirdparty login
Args: Args:
@ -802,9 +813,8 @@ class AuthHandler(BaseHandler):
password: The password of the user. password: The password of the user.
Returns: Returns:
Deferred[(str|None, func|None)]: A tuple of `(user_id, A tuple of `(user_id, callback)`. If authentication is successful,
callback)`. If authentication is successful, `user_id` is a `str` `user_id` is the authenticated, canonical user ID. `callback` is
containing the authenticated, canonical user ID. `callback` is
then either a function to be later run after the server has then either a function to be later run after the server has
completed login/registration, or `None`. If authentication was completed login/registration, or `None`. If authentication was
unsuccessful, `user_id` and `callback` are both `None`. unsuccessful, `user_id` and `callback` are both `None`.
@ -816,7 +826,7 @@ class AuthHandler(BaseHandler):
# success, to a str (which is the user_id) or a tuple of # success, to a str (which is the user_id) or a tuple of
# (user_id, callback_func), where callback_func should be run # (user_id, callback_func), where callback_func should be run
# after we've finished everything else # after we've finished everything else
result = yield provider.check_3pid_auth(medium, address, password) result = await provider.check_3pid_auth(medium, address, password)
if result: if result:
# Check if the return value is a str or a tuple # Check if the return value is a str or a tuple
if isinstance(result, str): if isinstance(result, str):
@ -826,8 +836,7 @@ class AuthHandler(BaseHandler):
return None, None return None, None
@defer.inlineCallbacks async def _check_local_password(self, user_id: str, password: str) -> Optional[str]:
def _check_local_password(self, user_id: str, password: str):
"""Authenticate a user against the local password database. """Authenticate a user against the local password database.
user_id is checked case insensitively, but will return None if there are user_id is checked case insensitively, but will return None if there are
@ -837,28 +846,26 @@ class AuthHandler(BaseHandler):
user_id: complete @user:id user_id: complete @user:id
password: the provided password password: the provided password
Returns: Returns:
Deferred[unicode] the canonical_user_id, or Deferred[None] if The canonical_user_id, or None if unknown user/bad password
unknown user/bad password
""" """
lookupres = yield self._find_user_id_and_pwd_hash(user_id) lookupres = await self._find_user_id_and_pwd_hash(user_id)
if not lookupres: if not lookupres:
return None return None
(user_id, password_hash) = lookupres (user_id, password_hash) = lookupres
# If the password hash is None, the account has likely been deactivated # If the password hash is None, the account has likely been deactivated
if not password_hash: if not password_hash:
deactivated = yield self.store.get_user_deactivated_status(user_id) deactivated = await self.store.get_user_deactivated_status(user_id)
if deactivated: if deactivated:
raise UserDeactivatedError("This account has been deactivated") raise UserDeactivatedError("This account has been deactivated")
result = yield self.validate_hash(password, password_hash) result = await self.validate_hash(password, password_hash)
if not result: if not result:
logger.warning("Failed password login for user %s", user_id) logger.warning("Failed password login for user %s", user_id)
return None return None
return user_id return user_id
@defer.inlineCallbacks async def validate_short_term_login_token_and_get_user_id(self, login_token: str):
def validate_short_term_login_token_and_get_user_id(self, login_token: str):
auth_api = self.hs.get_auth() auth_api = self.hs.get_auth()
user_id = None user_id = None
try: try:
@ -868,26 +875,23 @@ class AuthHandler(BaseHandler):
except Exception: except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN) raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
yield self.auth.check_auth_blocking(user_id) await self.auth.check_auth_blocking(user_id)
return user_id return user_id
@defer.inlineCallbacks async def delete_access_token(self, access_token: str):
def delete_access_token(self, access_token: str):
"""Invalidate a single access token """Invalidate a single access token
Args: Args:
access_token: access token to be deleted access_token: access token to be deleted
Returns:
Deferred
""" """
user_info = yield self.auth.get_user_by_access_token(access_token) user_info = await self.auth.get_user_by_access_token(access_token)
yield self.store.delete_access_token(access_token) await self.store.delete_access_token(access_token)
# see if any of our auth providers want to know about this # see if any of our auth providers want to know about this
for provider in self.password_providers: for provider in self.password_providers:
if hasattr(provider, "on_logged_out"): if hasattr(provider, "on_logged_out"):
yield provider.on_logged_out( await provider.on_logged_out(
user_id=str(user_info["user"]), user_id=str(user_info["user"]),
device_id=user_info["device_id"], device_id=user_info["device_id"],
access_token=access_token, access_token=access_token,
@ -895,12 +899,11 @@ class AuthHandler(BaseHandler):
# delete pushers associated with this access token # delete pushers associated with this access token
if user_info["token_id"] is not None: if user_info["token_id"] is not None:
yield self.hs.get_pusherpool().remove_pushers_by_access_token( await self.hs.get_pusherpool().remove_pushers_by_access_token(
str(user_info["user"]), (user_info["token_id"],) str(user_info["user"]), (user_info["token_id"],)
) )
@defer.inlineCallbacks async def delete_access_tokens_for_user(
def delete_access_tokens_for_user(
self, self,
user_id: str, user_id: str,
except_token_id: Optional[str] = None, except_token_id: Optional[str] = None,
@ -914,10 +917,8 @@ class AuthHandler(BaseHandler):
device_id: ID of device the tokens are associated with. device_id: ID of device the tokens are associated with.
If None, tokens associated with any device (or no device) will If None, tokens associated with any device (or no device) will
be deleted be deleted
Returns:
Deferred
""" """
tokens_and_devices = yield self.store.user_delete_access_tokens( tokens_and_devices = await self.store.user_delete_access_tokens(
user_id, except_token_id=except_token_id, device_id=device_id user_id, except_token_id=except_token_id, device_id=device_id
) )
@ -925,17 +926,18 @@ class AuthHandler(BaseHandler):
for provider in self.password_providers: for provider in self.password_providers:
if hasattr(provider, "on_logged_out"): if hasattr(provider, "on_logged_out"):
for token, token_id, device_id in tokens_and_devices: for token, token_id, device_id in tokens_and_devices:
yield provider.on_logged_out( await provider.on_logged_out(
user_id=user_id, device_id=device_id, access_token=token user_id=user_id, device_id=device_id, access_token=token
) )
# delete pushers associated with the access tokens # delete pushers associated with the access tokens
yield self.hs.get_pusherpool().remove_pushers_by_access_token( await self.hs.get_pusherpool().remove_pushers_by_access_token(
user_id, (token_id for _, token_id, _ in tokens_and_devices) user_id, (token_id for _, token_id, _ in tokens_and_devices)
) )
@defer.inlineCallbacks async def add_threepid(
def add_threepid(self, user_id: str, medium: str, address: str, validated_at: int): self, user_id: str, medium: str, address: str, validated_at: int
):
# check if medium has a valid value # check if medium has a valid value
if medium not in ["email", "msisdn"]: if medium not in ["email", "msisdn"]:
raise SynapseError( raise SynapseError(
@ -956,14 +958,13 @@ class AuthHandler(BaseHandler):
if medium == "email": if medium == "email":
address = address.lower() address = address.lower()
yield self.store.user_add_threepid( await self.store.user_add_threepid(
user_id, medium, address, validated_at, self.hs.get_clock().time_msec() user_id, medium, address, validated_at, self.hs.get_clock().time_msec()
) )
@defer.inlineCallbacks async def delete_threepid(
def delete_threepid(
self, user_id: str, medium: str, address: str, id_server: Optional[str] = None self, user_id: str, medium: str, address: str, id_server: Optional[str] = None
): ) -> bool:
"""Attempts to unbind the 3pid on the identity servers and deletes it """Attempts to unbind the 3pid on the identity servers and deletes it
from the local database. from the local database.
@ -976,7 +977,7 @@ class AuthHandler(BaseHandler):
identity server specified when binding (if known). identity server specified when binding (if known).
Returns: Returns:
Deferred[bool]: Returns True if successfully unbound the 3pid on Returns True if successfully unbound the 3pid on
the identity server, False if identity server doesn't support the the identity server, False if identity server doesn't support the
unbind API. unbind API.
""" """
@ -986,11 +987,11 @@ class AuthHandler(BaseHandler):
address = address.lower() address = address.lower()
identity_handler = self.hs.get_handlers().identity_handler identity_handler = self.hs.get_handlers().identity_handler
result = yield identity_handler.try_unbind_threepid( result = await identity_handler.try_unbind_threepid(
user_id, {"medium": medium, "address": address, "id_server": id_server} user_id, {"medium": medium, "address": address, "id_server": id_server}
) )
yield self.store.user_delete_threepid(user_id, medium, address) await self.store.user_delete_threepid(user_id, medium, address)
return result return result
def _save_session(self, session: Dict[str, Any]) -> None: def _save_session(self, session: Dict[str, Any]) -> None:
@ -1000,14 +1001,14 @@ class AuthHandler(BaseHandler):
session["last_used"] = self.hs.get_clock().time_msec() session["last_used"] = self.hs.get_clock().time_msec()
self.sessions[session["id"]] = session self.sessions[session["id"]] = session
def hash(self, password: str): async def hash(self, password: str) -> str:
"""Computes a secure hash of password. """Computes a secure hash of password.
Args: Args:
password: Password to hash. password: Password to hash.
Returns: Returns:
Deferred(unicode): Hashed password. Hashed password.
""" """
def _do_hash(): def _do_hash():
@ -1019,9 +1020,11 @@ class AuthHandler(BaseHandler):
bcrypt.gensalt(self.bcrypt_rounds), bcrypt.gensalt(self.bcrypt_rounds),
).decode("ascii") ).decode("ascii")
return defer_to_thread(self.hs.get_reactor(), _do_hash) return await defer_to_thread(self.hs.get_reactor(), _do_hash)
def validate_hash(self, password: str, stored_hash: bytes): async def validate_hash(
self, password: str, stored_hash: Union[bytes, str]
) -> bool:
"""Validates that self.hash(password) == stored_hash. """Validates that self.hash(password) == stored_hash.
Args: Args:
@ -1029,7 +1032,7 @@ class AuthHandler(BaseHandler):
stored_hash: Expected hash value. stored_hash: Expected hash value.
Returns: Returns:
Deferred(bool): Whether self.hash(password) == stored_hash. Whether self.hash(password) == stored_hash.
""" """
def _do_validate_hash(): def _do_validate_hash():
@ -1045,9 +1048,9 @@ class AuthHandler(BaseHandler):
if not isinstance(stored_hash, bytes): if not isinstance(stored_hash, bytes):
stored_hash = stored_hash.encode("ascii") stored_hash = stored_hash.encode("ascii")
return defer_to_thread(self.hs.get_reactor(), _do_validate_hash) return await defer_to_thread(self.hs.get_reactor(), _do_validate_hash)
else: else:
return defer.succeed(False) return False
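Since both helpers are now coroutines rather than Deferred-returning
functions, callers simply await them; a quick usage sketch (the password
literal is illustrative):

# Hedged sketch of the now-async bcrypt helpers.
stored = await self.hash("correct horse battery staple")
assert await self.validate_hash("correct horse battery staple", stored)
assert not await self.validate_hash("not the password", stored)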
def start_sso_ui_auth(self, redirect_url: str, session_id: str) -> str: def start_sso_ui_auth(self, redirect_url: str, session_id: str) -> str:
""" """
@ -1061,11 +1064,8 @@ class AuthHandler(BaseHandler):
The HTML to render. The HTML to render.
""" """
session = self._get_session_info(session_id) session = self._get_session_info(session_id)
# Get the human readable operation of what is occurring, falling back to
# a generic message if it isn't available for some reason.
description = session.get("description", "modify your account")
return self._sso_auth_confirm_template.render( return self._sso_auth_confirm_template.render(
description=description, redirect_url=redirect_url, description=session["description"], redirect_url=redirect_url,
) )
def complete_sso_ui_auth( def complete_sso_ui_auth(
@ -1081,8 +1081,6 @@ class AuthHandler(BaseHandler):
""" """
# Mark the stage of the authentication as successful. # Mark the stage of the authentication as successful.
sess = self._get_session_info(session_id) sess = self._get_session_info(session_id)
if "creds" not in sess:
sess["creds"] = {}
creds = sess["creds"] creds = sess["creds"]
# Save the user who authenticated with SSO, this will be used to ensure # Save the user who authenticated with SSO, this will be used to ensure
@ -1091,7 +1089,7 @@ class AuthHandler(BaseHandler):
self._save_session(sess) self._save_session(sess)
# Render the HTML and return. # Render the HTML and return.
html_bytes = SUCCESS_TEMPLATE.encode("utf8") html_bytes = self._sso_auth_success_template.encode("utf-8")
request.setResponseCode(200) request.setResponseCode(200)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8") request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),)) request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
@ -1099,7 +1097,7 @@ class AuthHandler(BaseHandler):
request.write(html_bytes) request.write(html_bytes)
finish_request(request) finish_request(request)
def complete_sso_login( async def complete_sso_login(
self, self,
registered_user_id: str, registered_user_id: str,
request: SynapseRequest, request: SynapseRequest,
@ -1113,6 +1111,32 @@ class AuthHandler(BaseHandler):
client_redirect_url: The URL to which to redirect the user at the end of the client_redirect_url: The URL to which to redirect the user at the end of the
process. process.
""" """
# If the account has been deactivated, do not proceed with the login
# flow.
deactivated = await self.store.get_user_deactivated_status(registered_user_id)
if deactivated:
html_bytes = self._sso_account_deactivated_template.encode("utf-8")
request.setResponseCode(403)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
request.write(html_bytes)
finish_request(request)
return
self._complete_sso_login(registered_user_id, request, client_redirect_url)
def _complete_sso_login(
self,
registered_user_id: str,
request: SynapseRequest,
client_redirect_url: str,
):
"""
The synchronous portion of complete_sso_login.
This exists purely for backwards compatibility of synapse.module_api.ModuleApi.
"""
# Create a login token # Create a login token
login_token = self.macaroon_gen.generate_short_term_login_token( login_token = self.macaroon_gen.generate_short_term_login_token(
registered_user_id registered_user_id
@ -1138,7 +1162,7 @@ class AuthHandler(BaseHandler):
# URL we redirect users to. # URL we redirect users to.
redirect_url_no_params = client_redirect_url.split("?")[0] redirect_url_no_params = client_redirect_url.split("?")[0]
html = self._sso_redirect_confirm_template.render( html_bytes = self._sso_redirect_confirm_template.render(
display_url=redirect_url_no_params, display_url=redirect_url_no_params,
redirect_url=redirect_url, redirect_url=redirect_url,
server_name=self._server_name, server_name=self._server_name,
@ -1146,8 +1170,8 @@ class AuthHandler(BaseHandler):
request.setResponseCode(200) request.setResponseCode(200)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8") request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%d" % (len(html),)) request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
request.write(html) request.write(html_bytes)
finish_request(request) finish_request(request)
@staticmethod @staticmethod

View File

@ -15,7 +15,7 @@
import logging import logging
import xml.etree.ElementTree as ET import xml.etree.ElementTree as ET
from typing import AnyStr, Dict, Optional, Tuple from typing import Dict, Optional, Tuple
from six.moves import urllib from six.moves import urllib
@ -48,26 +48,47 @@ class CasHandler:
self._http_client = hs.get_proxied_http_client() self._http_client = hs.get_proxied_http_client()
def _build_service_param(self, client_redirect_url: AnyStr) -> str: def _build_service_param(self, args: Dict[str, str]) -> str:
"""
Generates a value to use as the "service" parameter when redirecting or
querying the CAS service.
Args:
args: Additional arguments to include in the final redirect URL.
Returns:
The URL to use as a "service" parameter.
"""
return "%s%s?%s" % ( return "%s%s?%s" % (
self._cas_service_url, self._cas_service_url,
"/_matrix/client/r0/login/cas/ticket", "/_matrix/client/r0/login/cas/ticket",
urllib.parse.urlencode({"redirectUrl": client_redirect_url}), urllib.parse.urlencode(args),
) )
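For illustration, with a hypothetical cas_service_url of
https://synapse.example.com, a single-argument call produces (hostnames
invented):

# handler._build_service_param({"redirectUrl": "https://app.example.com"})
# -> "https://synapse.example.com/_matrix/client/r0/login/cas/ticket?redirectUrl=https%3A%2F%2Fapp.example.com"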
async def _handle_cas_response( async def _validate_ticket(
self, request: SynapseRequest, cas_response_body: str, client_redirect_url: str self, ticket: str, service_args: Dict[str, str]
) -> None: ) -> Tuple[str, Optional[str]]:
""" """
Retrieves the user and display name from the CAS response and continues with the authentication. Validate a CAS ticket with the server, parse the response, and return the user and display name.
Args: Args:
request: The original client request. ticket: The CAS ticket from the client.
cas_response_body: The response from the CAS server. service_args: Additional arguments to include in the service URL.
client_redirect_url: The URl to redirect the client to when Should be the same as those passed to `get_redirect_url`.
everything is done.
""" """
user, attributes = self._parse_cas_response(cas_response_body) uri = self._cas_server_url + "/proxyValidate"
args = {
"ticket": ticket,
"service": self._build_service_param(service_args),
}
try:
body = await self._http_client.get_raw(uri, args)
except PartialDownloadError as pde:
# Twisted raises this error if the connection is closed,
# even if that's being used old-http style to signal end-of-data
body = pde.response
user, attributes = self._parse_cas_response(body)
displayname = attributes.pop(self._cas_displayname_attribute, None) displayname = attributes.pop(self._cas_displayname_attribute, None)
for required_attribute, required_value in self._cas_required_attributes.items(): for required_attribute, required_value in self._cas_required_attributes.items():
@ -82,7 +103,7 @@ class CasHandler:
if required_value != actual_value: if required_value != actual_value:
raise LoginError(401, "Unauthorized", errcode=Codes.UNAUTHORIZED) raise LoginError(401, "Unauthorized", errcode=Codes.UNAUTHORIZED)
await self._on_successful_auth(user, request, client_redirect_url, displayname) return user, displayname
def _parse_cas_response( def _parse_cas_response(
self, cas_response_body: str self, cas_response_body: str
@ -127,78 +148,74 @@ class CasHandler:
) )
return user, attributes return user, attributes
async def _on_successful_auth( def get_redirect_url(self, service_args: Dict[str, str]) -> str:
self, """
username: str, Generates a URL for the CAS server where the client should be redirected.
request: SynapseRequest,
client_redirect_url: str,
user_display_name: Optional[str] = None,
) -> None:
"""Called once the user has successfully authenticated with the SSO.
Registers the user if necessary, and then returns a redirect (with
a login token) to the client.
Args: Args:
username: the remote user id. We'll map this onto service_args: Additional arguments to include in the final redirect URL.
something sane for a MXID localpath.
request: the incoming request from the browser. We'll Returns:
respond to it with a redirect. The URL to redirect the client to.
client_redirect_url: the redirect_url the client gave us when
it first started the process.
user_display_name: if set, and we have to register a new user,
we will set their displayname to this.
""" """
args = urllib.parse.urlencode(
{"service": self._build_service_param(service_args)}
)
return "%s/login?%s" % (self._cas_server_url, args)
async def handle_ticket(
self,
request: SynapseRequest,
ticket: str,
client_redirect_url: Optional[str],
session: Optional[str],
) -> None:
"""
Called once the user has successfully authenticated with the SSO.
Validates a CAS ticket sent by the client and completes the auth process.
If the user interactive authentication session is provided, marks the
UI Auth session as complete, then returns an HTML page notifying the
user they are done.
Otherwise, this registers the user if necessary, and then returns a
redirect (with a login token) to the client.
Args:
request: the incoming request from the browser. We'll
respond to it with a redirect or an HTML page.
ticket: The CAS ticket provided by the client.
client_redirect_url: the redirectUrl parameter from the `/cas/ticket` HTTP request, if given.
This should be the same as the redirectUrl from the original `/login/sso/redirect` request.
session: The session parameter from the `/cas/ticket` HTTP request, if given.
This should be the UI Auth session id.
"""
args = {}
if client_redirect_url:
args["redirectUrl"] = client_redirect_url
if session:
args["session"] = session
username, user_display_name = await self._validate_ticket(ticket, args)
localpart = map_username_to_mxid_localpart(username) localpart = map_username_to_mxid_localpart(username)
user_id = UserID(localpart, self._hostname).to_string() user_id = UserID(localpart, self._hostname).to_string()
registered_user_id = await self._auth_handler.check_user_exists(user_id) registered_user_id = await self._auth_handler.check_user_exists(user_id)
if not registered_user_id:
registered_user_id = await self._registration_handler.register_user( if session:
localpart=localpart, default_display_name=user_display_name self._auth_handler.complete_sso_ui_auth(
registered_user_id, session, request,
) )
self._auth_handler.complete_sso_login( else:
registered_user_id, request, client_redirect_url if not registered_user_id:
) registered_user_id = await self._registration_handler.register_user(
localpart=localpart, default_display_name=user_display_name
)
def handle_redirect_request(self, client_redirect_url: bytes) -> bytes: await self._auth_handler.complete_sso_login(
""" registered_user_id, request, client_redirect_url
Generates a URL to the CAS server where the client should be redirected. )
Args:
client_redirect_url: The final URL the client should go to after the
user has negotiated SSO.
Returns:
The URL to redirect to.
"""
args = urllib.parse.urlencode(
{"service": self._build_service_param(client_redirect_url)}
)
return ("%s/login?%s" % (self._cas_server_url, args)).encode("ascii")
async def handle_ticket_request(
self, request: SynapseRequest, client_redirect_url: str, ticket: str
) -> None:
"""
Validates a CAS ticket sent by the client for login/registration.
On a successful request, writes a redirect to the request.
"""
uri = self._cas_server_url + "/proxyValidate"
args = {
"ticket": ticket,
"service": self._build_service_param(client_redirect_url),
}
try:
body = await self._http_client.get_raw(uri, args)
except PartialDownloadError as pde:
# Twisted raises this error if the connection is closed,
# even if that's being used old-http style to signal end-of-data
body = pde.response
await self._handle_cas_response(request, body, client_redirect_url)

View File

@ -338,8 +338,10 @@ class DeviceHandler(DeviceWorkerHandler):
else: else:
raise raise
yield self._auth_handler.delete_access_tokens_for_user( yield defer.ensureDeferred(
user_id, device_id=device_id self._auth_handler.delete_access_tokens_for_user(
user_id, device_id=device_id
)
) )
yield self.store.delete_e2e_keys_by_device(user_id=user_id, device_id=device_id) yield self.store.delete_e2e_keys_by_device(user_id=user_id, device_id=device_id)
@ -391,8 +393,10 @@ class DeviceHandler(DeviceWorkerHandler):
# Delete access tokens and e2e keys for each device. Not optimised as it is not # Delete access tokens and e2e keys for each device. Not optimised as it is not
# considered as part of a critical path. # considered as part of a critical path.
for device_id in device_ids: for device_id in device_ids:
yield self._auth_handler.delete_access_tokens_for_user( yield defer.ensureDeferred(
user_id, device_id=device_id self._auth_handler.delete_access_tokens_for_user(
user_id, device_id=device_id
)
) )
yield self.store.delete_e2e_keys_by_device( yield self.store.delete_e2e_keys_by_device(
user_id=user_id, device_id=device_id user_id=user_id, device_id=device_id

View File

@ -54,19 +54,23 @@ class E2eKeysHandler(object):
self._edu_updater = SigningKeyEduUpdater(hs, self) self._edu_updater = SigningKeyEduUpdater(hs, self)
federation_registry = hs.get_federation_registry()
self._is_master = hs.config.worker_app is None self._is_master = hs.config.worker_app is None
if not self._is_master: if not self._is_master:
self._user_device_resync_client = ReplicationUserDevicesResyncRestServlet.make_client( self._user_device_resync_client = ReplicationUserDevicesResyncRestServlet.make_client(
hs hs
) )
else:
# Only register this edu handler on master as it requires writing
# device updates to the db
#
# FIXME: switch to m.signing_key_update when MSC1756 is merged into the spec
federation_registry.register_edu_handler(
"org.matrix.signing_key_update",
self._edu_updater.incoming_signing_key_update,
)
federation_registry = hs.get_federation_registry()
# FIXME: switch to m.signing_key_update when MSC1756 is merged into the spec
federation_registry.register_edu_handler(
"org.matrix.signing_key_update",
self._edu_updater.incoming_signing_key_update,
)
# doesn't really work as part of the generic query API, because the # doesn't really work as part of the generic query API, because the
# query request requires an object POST, but we abuse the # query request requires an object POST, but we abuse the
# "query handler" interface. # "query handler" interface.
@ -170,8 +174,8 @@ class E2eKeysHandler(object):
"""This is called when we are querying the device list of a user on """This is called when we are querying the device list of a user on
a remote homeserver and their device list is not in the device list a remote homeserver and their device list is not in the device list
cache. If we share a room with this user and we're not querying for a cache. If we share a room with this user and we're not querying for a
specific user we will update the cache specific user we will update the cache with their device list.
with their device list.""" """
destination_query = remote_queries_not_in_cache[destination] destination_query = remote_queries_not_in_cache[destination]
@ -957,13 +961,19 @@ class E2eKeysHandler(object):
return signature_list, failures return signature_list, failures
@defer.inlineCallbacks @defer.inlineCallbacks
def _get_e2e_cross_signing_verify_key(self, user_id, key_type, from_user_id=None): def _get_e2e_cross_signing_verify_key(
"""Fetch the cross-signing public key from storage and interpret it. self, user_id: str, key_type: str, from_user_id: str = None
):
"""Fetch locally or remotely query for a cross-signing public key.
First, attempt to fetch the cross-signing public key from storage.
If that fails, query the keys from the homeserver they belong to
and update our local copy.
Args: Args:
user_id (str): the user whose key should be fetched user_id: the user whose key should be fetched
key_type (str): the type of key to fetch key_type: the type of key to fetch
from_user_id (str): the user that we are fetching the keys for. from_user_id: the user that we are fetching the keys for.
This affects what signatures are fetched. This affects what signatures are fetched.
Returns: Returns:
@ -972,16 +982,140 @@ class E2eKeysHandler(object):
Raises: Raises:
NotFoundError: if the key is not found NotFoundError: if the key is not found
SynapseError: if `user_id` is invalid
""" """
user = UserID.from_string(user_id)
key = yield self.store.get_e2e_cross_signing_key( key = yield self.store.get_e2e_cross_signing_key(
user_id, key_type, from_user_id user_id, key_type, from_user_id
) )
if key is None:
logger.debug("no %s key found for %s", key_type, user_id) if key:
# We found a copy of this key in our database. Decode and return it
key_id, verify_key = get_verify_key_from_cross_signing_key(key)
return key, key_id, verify_key
# If we couldn't find the key locally, and we're looking for keys of
# another user then attempt to fetch the missing key from the remote
# user's server.
#
# We may run into this in possible edge cases where a user tries to
# cross-sign a remote user, but does not share any rooms with them yet.
# Thus, we would not have their key list yet. We instead fetch the key,
# store it and notify clients of new, associated device IDs.
if self.is_mine(user) or key_type not in ["master", "self_signing"]:
# Note that master and self_signing keys are the only cross-signing keys we
# can request over federation
raise NotFoundError("No %s key found for %s" % (key_type, user_id)) raise NotFoundError("No %s key found for %s" % (key_type, user_id))
key_id, verify_key = get_verify_key_from_cross_signing_key(key)
(
key,
key_id,
verify_key,
) = yield self._retrieve_cross_signing_keys_for_remote_user(user, key_type)
if key is None:
raise NotFoundError("No %s key found for %s" % (key_type, user_id))
return key, key_id, verify_key return key, key_id, verify_key
@defer.inlineCallbacks
def _retrieve_cross_signing_keys_for_remote_user(
self, user: UserID, desired_key_type: str,
):
"""Queries cross-signing keys for a remote user and saves them to the database
Only the key specified by `key_type` will be returned, while all retrieved keys
will be saved regardless
Args:
user: The user to query remote keys for
desired_key_type: The type of key to receive. One of "master", "self_signing"
Returns:
Deferred[Tuple[Optional[Dict], Optional[str], Optional[VerifyKey]]]: A tuple
of the retrieved key content, the key's ID and the matching VerifyKey.
If the key cannot be retrieved, all values in the tuple will instead be None.
"""
try:
remote_result = yield self.federation.query_user_devices(
user.domain, user.to_string()
)
except Exception as e:
logger.warning(
"Unable to query %s for cross-signing keys of user %s: %s %s",
user.domain,
user.to_string(),
type(e),
e,
)
return None, None, None
# Process each of the retrieved cross-signing keys
desired_key = None
desired_key_id = None
desired_verify_key = None
retrieved_device_ids = []
for key_type in ["master", "self_signing"]:
key_content = remote_result.get(key_type + "_key")
if not key_content:
continue
# Ensure these keys belong to the correct user
if "user_id" not in key_content:
logger.warning(
"Invalid %s key retrieved, missing user_id field: %s",
key_type,
key_content,
)
continue
if user.to_string() != key_content["user_id"]:
logger.warning(
"Found %s key of user %s when querying for keys of user %s",
key_type,
key_content["user_id"],
user.to_string(),
)
continue
# Validate the key contents
try:
# verify_key is a VerifyKey from signedjson, which uses
# .version to denote the portion of the key ID after the
# algorithm and colon, which is the device ID
key_id, verify_key = get_verify_key_from_cross_signing_key(key_content)
except ValueError as e:
logger.warning(
"Invalid %s key retrieved: %s - %s %s",
key_type,
key_content,
type(e),
e,
)
continue
# Note down the device ID attached to this key
retrieved_device_ids.append(verify_key.version)
# If this is the desired key type, save it and its ID/VerifyKey
if key_type == desired_key_type:
desired_key = key_content
desired_verify_key = verify_key
desired_key_id = key_id
# At the same time, store this key in the db for subsequent queries
yield self.store.set_e2e_cross_signing_key(
user.to_string(), key_type, key_content
)
# Notify clients that new devices for this user have been discovered
if retrieved_device_ids:
# XXX is this necessary?
yield self.device_handler.notify_device_update(
user.to_string(), retrieved_device_ids
)
return desired_key, desired_key_id, desired_verify_key
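As a hedged illustration of the key-ID/device-ID relationship relied on
above (key material invented):

# key_content = {
#     "user_id": "@alice:example.com",
#     "usage": ["master"],
#     "keys": {"ed25519:base64masterkey": "base64masterkey"},
# }
# key_id, verify_key = get_verify_key_from_cross_signing_key(key_content)
# key_id             == "ed25519:base64masterkey"
# verify_key.version == "base64masterkey"  # noted down as a "device ID" above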
def _check_cross_signing_key(key, user_id, key_type, signing_key=None): def _check_cross_signing_key(key, user_id, key_type, signing_key=None):
"""Check a cross-signing key uploaded by a user. Performs some basic sanity """Check a cross-signing key uploaded by a user. Performs some basic sanity

View File

@ -19,6 +19,7 @@ import random
from synapse.api.constants import EventTypes, Membership from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, SynapseError from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase from synapse.events import EventBase
from synapse.handlers.presence import format_user_presence_state
from synapse.logging.utils import log_function from synapse.logging.utils import log_function
from synapse.types import UserID from synapse.types import UserID
from synapse.visibility import filter_events_for_client from synapse.visibility import filter_events_for_client
@ -97,6 +98,8 @@ class EventStreamHandler(BaseHandler):
explicit_room_id=room_id, explicit_room_id=room_id,
) )
time_now = self.clock.time_msec()
# When the user joins a new room, or another user joins a currently # When the user joins a new room, or another user joins a currently
# joined room, we need to send down presence for those users. # joined room, we need to send down presence for those users.
to_add = [] to_add = []
@ -112,19 +115,20 @@ class EventStreamHandler(BaseHandler):
users = await self.state.get_current_users_in_room( users = await self.state.get_current_users_in_room(
event.room_id event.room_id
) )
states = await presence_handler.get_states(users, as_event=True)
to_add.extend(states)
else: else:
users = [event.state_key]
ev = await presence_handler.get_state( states = await presence_handler.get_states(users)
UserID.from_string(event.state_key), as_event=True to_add.extend(
) {
to_add.append(ev) "type": EventTypes.Presence,
"content": format_user_presence_state(state, time_now),
}
for state in states
)
events.extend(to_add) events.extend(to_add)
time_now = self.clock.time_msec()
chunks = await self._event_serializer.serialize_events( chunks = await self._event_serializer.serialize_events(
events, events,
time_now, time_now,

View File

@ -18,7 +18,7 @@
"""Utilities for interacting with Identity Servers""" """Utilities for interacting with Identity Servers"""
import logging import logging
import urllib import urllib.parse
from canonicaljson import json from canonicaljson import json
from signedjson.key import decode_verify_key_bytes from signedjson.key import decode_verify_key_bytes

View File

@ -381,10 +381,16 @@ class InitialSyncHandler(BaseHandler):
return [] return []
states = await presence_handler.get_states( states = await presence_handler.get_states(
[m.user_id for m in room_members], as_event=True [m.user_id for m in room_members]
) )
return states return [
{
"type": EventTypes.Presence,
"content": format_user_presence_state(s, time_now),
}
for s in states
]
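Each synthesized presence event then has the shape below (values
invented):

# {
#     "type": "m.presence",
#     "content": {
#         "user_id": "@alice:example.com",
#         "presence": "online",
#         "last_active_ago": 42000,
#         "currently_active": True,
#     },
# }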
async def get_receipts(): async def get_receipts():
receipts = await self.store.get_linearized_receipts_for_room( receipts = await self.store.get_linearized_receipts_for_room(

View File

@ -1,5 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd # Copyright 2014-2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. # you may not use this file except in compliance with the License.
@ -21,10 +22,10 @@ The methods that define policy are:
- PresenceHandler._handle_timeouts - PresenceHandler._handle_timeouts
- should_notify - should_notify
""" """
import abc
import logging import logging
from contextlib import contextmanager from contextlib import contextmanager
from typing import Dict, List, Set from typing import Dict, Iterable, List, Set
from six import iteritems, itervalues from six import iteritems, itervalues
@@ -41,7 +42,7 @@ from synapse.logging.utils import log_function
 from synapse.metrics import LaterGauge
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.presence import UserPresenceState
-from synapse.types import UserID, get_domain_from_id
+from synapse.types import JsonDict, UserID, get_domain_from_id
 from synapse.util.async_helpers import Linearizer
 from synapse.util.caches.descriptors import cached
 from synapse.util.metrics import Measure
@@ -99,13 +100,106 @@ EXTERNAL_PROCESS_EXPIRY = 5 * 60 * 1000
 assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER


-class PresenceHandler(object):
+class BasePresenceHandler(abc.ABC):
+    """Parts of the PresenceHandler that are shared between workers and master"""
+
     def __init__(self, hs: "synapse.server.HomeServer"):
+        self.clock = hs.get_clock()
+        self.store = hs.get_datastore()
+
+        active_presence = self.store.take_presence_startup_info()
+        self.user_to_current_state = {state.user_id: state for state in active_presence}
+
+    @abc.abstractmethod
+    async def user_syncing(
+        self, user_id: str, affect_presence: bool
+    ) -> ContextManager[None]:
+        """Returns a context manager that should surround any stream requests
+        from the user.
+
+        This allows us to keep track of who is currently streaming and who isn't
+        without having to have timers outside of this module to avoid flickering
+        when users disconnect/reconnect.
+
+        Args:
+            user_id: the user that is starting a sync
+            affect_presence: If false this function will be a no-op.
+                Useful for streams that are not associated with an actual
+                client that is being used by a user.
+        """

+    @abc.abstractmethod
+    def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
+        """Get an iterable of syncing users on this worker, to send to the presence handler
+
+        This is called when a replication connection is established. It should return
+        a list of user ids, which are then sent as USER_SYNC commands to inform the
+        process handling presence about those users.
+
+        Returns:
+            An iterable of user_id strings.
+        """
+
+    async def get_state(self, target_user: UserID) -> UserPresenceState:
+        results = await self.get_states([target_user.to_string()])
+        return results[0]
+
+    async def get_states(
+        self, target_user_ids: Iterable[str]
+    ) -> List[UserPresenceState]:
+        """Get the presence state for users."""
+
+        updates_d = await self.current_state_for_users(target_user_ids)
+        updates = list(updates_d.values())
+
+        for user_id in set(target_user_ids) - {u.user_id for u in updates}:
+            updates.append(UserPresenceState.default(user_id))
+
+        return updates
+
+    async def current_state_for_users(
+        self, user_ids: Iterable[str]
+    ) -> Dict[str, UserPresenceState]:
+        """Get the current presence state for multiple users.
+
+        Returns:
+            dict: `user_id` -> `UserPresenceState`
+        """
+        states = {
+            user_id: self.user_to_current_state.get(user_id, None)
+            for user_id in user_ids
+        }
+
+        missing = [user_id for user_id, state in iteritems(states) if not state]
+        if missing:
+            # There are things not in our in memory cache. Lets pull them out of
+            # the database.
+            res = await self.store.get_presence_for_users(missing)
+            states.update(res)
+
+            missing = [user_id for user_id, state in iteritems(states) if not state]
+            if missing:
+                new = {
+                    user_id: UserPresenceState.default(user_id) for user_id in missing
+                }
+                states.update(new)
+                self.user_to_current_state.update(new)
+
+        return states
+
+    @abc.abstractmethod
+    async def set_state(
+        self, target_user: UserID, state: JsonDict, ignore_status_msg: bool = False
+    ) -> None:
+        """Set the presence state of the user."""
+
+
+class PresenceHandler(BasePresenceHandler):
+    def __init__(self, hs: "synapse.server.HomeServer"):
+        super().__init__(hs)
         self.hs = hs
         self.is_mine_id = hs.is_mine_id
         self.server_name = hs.hostname
-        self.clock = hs.get_clock()
-        self.store = hs.get_datastore()
         self.wheel_timer = WheelTimer()
         self.notifier = hs.get_notifier()
         self.federation = hs.get_federation_sender()
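Note: the new `BasePresenceHandler` carries the read-side presence logic shared by all processes, while `PresenceHandler` keeps the write side on the process that owns presence. The worker-side counterpart lives elsewhere in this diff; the sketch below is a hypothetical, simplified subclass showing what the abstract contract requires, not Synapse's actual worker implementation:

```python
from contextlib import contextmanager


class SketchWorkerPresenceHandler(BasePresenceHandler):
    """Illustrative only; the class and attribute names are made up."""

    def __init__(self, hs):
        super().__init__(hs)
        # Number of active /sync streams each local user has open right now.
        self._user_to_num_current_syncs = {}

    async def user_syncing(self, user_id, affect_presence):
        if affect_presence:
            curr = self._user_to_num_current_syncs.get(user_id, 0)
            self._user_to_num_current_syncs[user_id] = curr + 1
            # A real worker would also tell the process that owns presence
            # that this user has started syncing (e.g. over replication).

        @contextmanager
        def _user_syncing():
            try:
                yield
            finally:
                if affect_presence:
                    self._user_to_num_current_syncs[user_id] -= 1

        return _user_syncing()

    def get_currently_syncing_users_for_replication(self):
        # Re-sent as USER_SYNC commands when a replication connection is
        # (re-)established, per the docstring above.
        return [
            user_id
            for user_id, count in self._user_to_num_current_syncs.items()
            if count
        ]

    async def set_state(self, target_user, state, ignore_status_msg=False):
        # Writes are owned by the presence process; a worker would forward
        # this to it. Elided in this sketch.
        raise NotImplementedError()
```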
@@ -115,13 +209,6 @@ class PresenceHandler(object):

         federation_registry.register_edu_handler("m.presence", self.incoming_presence)

-        active_presence = self.store.take_presence_startup_info()
-
-        # A dictionary of the current state of users. This is prefilled with
-        # non-offline presence from the DB. We should fetch from the DB if
-        # we can't find a users presence in here.
-        self.user_to_current_state = {state.user_id: state for state in active_presence}
-
         LaterGauge(
             "synapse_handlers_presence_user_to_current_state_size",
             "",
@@ -130,7 +217,7 @@ class PresenceHandler(object):
         )

         now = self.clock.time_msec()
-        for state in active_presence:
+        for state in self.user_to_current_state.values():
             self.wheel_timer.insert(
                 now=now, obj=state.user_id, then=state.last_active_ts + IDLE_TIMER
             )
@@ -361,10 +448,18 @@ class PresenceHandler(object):

             timers_fired_counter.inc(len(states))

+            syncing_user_ids = {
+                user_id
+                for user_id, count in self.user_to_num_current_syncs.items()
+                if count
+            }
+            for user_ids in self.external_process_to_current_syncs.values():
+                syncing_user_ids.update(user_ids)
+
             changes = handle_timeouts(
                 states,
                 is_mine_fn=self.is_mine_id,
-                syncing_user_ids=self.get_currently_syncing_users(),
+                syncing_user_ids=syncing_user_ids,
                 now=now,
             )
@@ -462,22 +557,9 @@ class PresenceHandler(object):

         return _user_syncing()

-    def get_currently_syncing_users(self):
-        """Get the set of user ids that are currently syncing on this HS.
-
-        Returns:
-            set(str): A set of user_id strings.
-        """
-        if self.hs.config.use_presence:
-            syncing_user_ids = {
-                user_id
-                for user_id, count in self.user_to_num_current_syncs.items()
-                if count
-            }
-            for user_ids in self.external_process_to_current_syncs.values():
-                syncing_user_ids.update(user_ids)
-            return syncing_user_ids
-        else:
-            return set()
+    def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
+        # since we are the process handling presence, there is nothing to do here.
+        return []

     async def update_external_syncs_row(
         self, process_id, user_id, is_syncing, sync_time_msec
@@ -554,34 +636,6 @@ class PresenceHandler(object):
         res = await self.current_state_for_users([user_id])
         return res[user_id]

-    async def current_state_for_users(self, user_ids):
-        """Get the current presence state for multiple users.
-
-        Returns:
-            dict: `user_id` -> `UserPresenceState`
-        """
-        states = {
-            user_id: self.user_to_current_state.get(user_id, None)
-            for user_id in user_ids
-        }
-
-        missing = [user_id for user_id, state in iteritems(states) if not state]
-        if missing:
-            # There are things not in our in memory cache. Lets pull them out of
-            # the database.
-            res = await self.store.get_presence_for_users(missing)
-            states.update(res)
-
-            missing = [user_id for user_id, state in iteritems(states) if not state]
-            if missing:
-                new = {
-                    user_id: UserPresenceState.default(user_id) for user_id in missing
-                }
-                states.update(new)
-                self.user_to_current_state.update(new)
-
-        return states
-
     async def _persist_and_notify(self, states):
         """Persist states in the database, poke the notifier and send to
         interested remote servers
@@ -669,40 +723,6 @@ class PresenceHandler(object):
         federation_presence_counter.inc(len(updates))
         await self._update_states(updates)

-    async def get_state(self, target_user, as_event=False):
-        results = await self.get_states([target_user.to_string()], as_event=as_event)
-        return results[0]
-
-    async def get_states(self, target_user_ids, as_event=False):
-        """Get the presence state for users.
-
-        Args:
-            target_user_ids (list)
-            as_event (bool): Whether to format it as a client event or not.
-
-        Returns:
-            list
-        """
-        updates = await self.current_state_for_users(target_user_ids)
-        updates = list(updates.values())
-
-        for user_id in set(target_user_ids) - {u.user_id for u in updates}:
-            updates.append(UserPresenceState.default(user_id))
-
-        now = self.clock.time_msec()
-        if as_event:
-            return [
-                {
-                    "type": "m.presence",
-                    "content": format_user_presence_state(state, now),
-                }
-                for state in updates
-            ]
-        else:
-            return updates
-
     async def set_state(self, target_user, state, ignore_status_msg=False):
         """Set the presence state of the user.
         """
@@ -889,7 +909,7 @@ class PresenceHandler(object):
         user_ids = await self.state.get_current_users_in_room(room_id)
         user_ids = list(filter(self.is_mine_id, user_ids))

-        states = await self.current_state_for_users(user_ids)
+        states_d = await self.current_state_for_users(user_ids)

         # Filter out old presence, i.e. offline presence states where
         # the user hasn't been active for a week. We can change this
@@ -899,7 +919,7 @@ class PresenceHandler(object):
         now = self.clock.time_msec()
         states = [
             state
-            for state in states.values()
+            for state in states_d.values()
             if state.state != PresenceState.OFFLINE
             or now - state.last_active_ts < 7 * 24 * 60 * 60 * 1000
             or state.status_msg is not None

synapse/handlers/registration.py

@@ -166,7 +166,9 @@ class RegistrationHandler(BaseHandler):
         yield self.auth.check_auth_blocking(threepid=threepid)

         password_hash = None
         if password:
-            password_hash = yield self._auth_handler.hash(password)
+            password_hash = yield defer.ensureDeferred(
+                self._auth_handler.hash(password)
+            )

         if localpart is not None:
             yield self.check_username(localpart, guest_access_token=guest_access_token)
@@ -540,8 +542,10 @@ class RegistrationHandler(BaseHandler):
                 user_id, ["guest = true"]
             )
         else:
-            access_token = yield self._auth_handler.get_access_token_for_user_id(
-                user_id, device_id=device_id, valid_until_ms=valid_until_ms
+            access_token = yield defer.ensureDeferred(
+                self._auth_handler.get_access_token_for_user_id(
+                    user_id, device_id=device_id, valid_until_ms=valid_until_ms
+                )
             )

         return (device_id, access_token)
@@ -617,8 +621,13 @@ class RegistrationHandler(BaseHandler):
             logger.info("Can't add incomplete 3pid")
             return

-        yield self._auth_handler.add_threepid(
-            user_id, threepid["medium"], threepid["address"], threepid["validated_at"]
+        yield defer.ensureDeferred(
+            self._auth_handler.add_threepid(
+                user_id,
+                threepid["medium"],
+                threepid["address"],
+                threepid["validated_at"],
+            )
         )

         # And we add an email pusher for them by default, but only
@@ -670,6 +679,11 @@ class RegistrationHandler(BaseHandler):
                 return None
             raise

-        yield self._auth_handler.add_threepid(
-            user_id, threepid["medium"], threepid["address"], threepid["validated_at"]
+        yield defer.ensureDeferred(
+            self._auth_handler.add_threepid(
+                user_id,
+                threepid["medium"],
+                threepid["address"],
+                threepid["validated_at"],
+            )
         )
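Note: the four registration hunks above all apply one pattern. `RegistrationHandler` still runs under `@defer.inlineCallbacks`, but the `AuthHandler` methods it calls are now native `async def` coroutines; yielding a coroutine object inside `inlineCallbacks` hands back the un-awaited coroutine instead of its result, so each call is wrapped in `defer.ensureDeferred`, which turns the coroutine into a Deferred the generator can wait on. A standalone sketch of the pattern (the function names are stand-ins, not Synapse APIs):

```python
from twisted.internet import defer, task


async def hash_password(password):
    # Stand-in for an async method such as AuthHandler.hash().
    return "hashed:" + password


@defer.inlineCallbacks
def register(password):
    # Wrong: `yield hash_password(password)` would yield the raw coroutine
    # object without ever running it. ensureDeferred bridges the two worlds:
    password_hash = yield defer.ensureDeferred(hash_password(password))
    return password_hash


def main(reactor):
    d = register("s3cret")
    d.addCallback(print)  # prints "hashed:s3cret"
    return d


task.react(main)
```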

synapse/handlers/room.py

@@ -645,6 +645,13 @@ class RoomCreationHandler(BaseHandler):
             check_membership=False,
         )

+        if is_public:
+            if not self.config.is_publishing_room_allowed(user_id, room_id, room_alias):
+                # Lets just return a generic message, as there may be all sorts of
+                # reasons why we said no. TODO: Allow configurable error messages
+                # per alias creation rule?
+                raise SynapseError(403, "Not allowed to publish room")
+
         preset_config = config.get(
             "preset",
             RoomCreationPreset.PRIVATE_CHAT
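Note: the new check above gates publishing a newly created room to the public room directory, driven by the server's room-list publication rules (`room_list_publication_rules` in the homeserver config). A rough sketch of how first-match-wins glob rules of that kind can be evaluated; the rule schema here is illustrative, not Synapse's exact format:

```python
import fnmatch

# Illustrative rules: the first matching rule wins. The real config can also
# match on the room alias and room_id, not just the creator's user_id.
RULES = [
    {"user_id": "@admin:example.com", "action": "allow"},
    {"user_id": "*", "action": "deny"},
]


def is_publishing_room_allowed(user_id):
    for rule in RULES:
        if fnmatch.fnmatchcase(user_id, rule["user_id"]):
            return rule["action"] == "allow"
    return True  # no rule matched; this sketch defaults to allow


assert is_publishing_room_allowed("@admin:example.com")
assert not is_publishing_room_allowed("@mallory:example.com")
```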
@@ -806,6 +813,7 @@ class RoomCreationHandler(BaseHandler):
                 EventTypes.RoomAvatar: 50,
                 EventTypes.Tombstone: 100,
                 EventTypes.ServerACL: 100,
+                EventTypes.RoomEncryption: 100,
             },
             "events_default": 0,
             "state_default": 50,

Some files were not shown because too many files have changed in this diff.