Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes

pull/8675/head
Erik Johnston 2020-05-21 15:19:00 +01:00
commit cf92310da2
239 changed files with 7385 additions and 2965 deletions

.github/ISSUE_TEMPLATE.md (new file):
**If you are looking for support** please ask in **#synapse:matrix.org**
(using a matrix.org account if necessary). We do not use GitHub issues for
support.
**If you want to report a security issue** please see https://matrix.org/security-disclosure-policy/

@@ -4,11 +4,13 @@ about: Create a report to help us improve
---
**THIS IS NOT A SUPPORT CHANNEL!**
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**,
please ask in **#synapse:matrix.org** (using a matrix.org account if necessary)

<!--
-**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
-You will likely get better support more quickly if you ask in ** #synapse:matrix.org ** ;)
If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/

This is a bug report template. By following the instructions below and
filling out the sections with your information, you will help us to get all

@@ -1,20 +1,5 @@
-Synapse 1.13.0rc2 (2020-05-14)
-==============================
-
-Bugfixes
---------
-
-- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
-- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
-
-Internal Changes
-----------------
-
-- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
-
-Synapse 1.13.0rc1 (2020-05-11)
-==============================
Synapse 1.13.0 (2020-05-19)
===========================

This release brings some potential changes necessary for certain
configurations of Synapse:
@@ -34,6 +19,53 @@ configurations of Synapse:
Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes
and for general upgrade guidance.
Notice of change to the default `git` branch for Synapse
--------------------------------------------------------
With the release of Synapse 1.13.0, the default `git` branch for Synapse has
changed to `develop`, which is the development tip. This is more consistent with
common practice and modern `git` usage.
The `master` branch, which tracks the latest release, is still available.
Developers and distributors with scripts which run builds using the default
branch of Synapse should therefore consider pinning their scripts to
`master`.
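For instance, a build script can pin to the release branch by cloning it
explicitly (an illustrative one-liner; any equivalent pinning mechanism works):

```
git clone --branch master https://github.com/matrix-org/synapse.git
```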
Internal Changes
----------------
- Update the version of dh-virtualenv we use to build debs, and add focal to the list of target distributions. ([\#7526](https://github.com/matrix-org/synapse/issues/7526))
Synapse 1.13.0rc3 (2020-05-18)
==============================
Bugfixes
--------
- Hash passwords as early as possible during registration. ([\#7523](https://github.com/matrix-org/synapse/issues/7523))
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
Synapse 1.13.0rc1 (2020-05-11)
==============================
Features
--------

@@ -1,62 +1,48 @@
# Contributing code to Matrix
# Contributing code to Synapse

-Everyone is welcome to contribute code to Matrix
-(https://github.com/matrix-org), provided that they are willing to license
-their contributions under the same license as the project itself. We follow a
-simple 'inbound=outbound' model for contributions: the act of submitting an
-'inbound' contribution means that the contributor agrees to license the code
-under the same terms as the project's overall 'outbound' license - in our
-case, this is almost always Apache Software License v2 (see [LICENSE](LICENSE)).
Everyone is welcome to contribute code to [matrix.org
projects](https://github.com/matrix-org), provided that they are willing to
license their contributions under the same license as the project itself. We
follow a simple 'inbound=outbound' model for contributions: the act of
submitting an 'inbound' contribution means that the contributor agrees to
license the code under the same terms as the project's overall 'outbound'
license - in our case, this is almost always Apache Software License v2 (see
[LICENSE](LICENSE)).

## How to contribute

-The preferred and easiest way to contribute changes to Matrix is to fork the
-relevant project on github, and then [create a pull request](
-https://help.github.com/articles/using-pull-requests/) to ask us to pull
-your changes into our repo.
The preferred and easiest way to contribute changes is to fork the relevant
project on github, and then [create a pull request](
https://help.github.com/articles/using-pull-requests/) to ask us to pull your
changes into our repo.

-**The single biggest thing you need to know is: please base your changes on
-the develop branch - *not* master.**
-
-We use the master branch to track the most recent release, so that folks who
-blindly clone the repo and automatically check out master get something that
-works. Develop is the unstable branch where all the development actually
-happens: the workflow is that contributors should fork the develop branch to
-make a 'feature' branch for a particular contribution, and then make a pull
-request to merge this back into the matrix.org 'official' develop branch. We
-use github's pull request workflow to review the contribution, and either ask
-you to make any refinements needed or merge it and make them ourselves. The
-changes will then land on master when we next do a release.
-
-We use [Buildkite](https://buildkite.com/matrix-dot-org/synapse) for continuous
-integration. If your change breaks the build, this will be shown in GitHub, so
-please keep an eye on the pull request for feedback.
-
-To run unit tests in a local development environment, you can use:
-
-- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
-  for SQLite-backed Synapse on Python 3.5.
-- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
-- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
-  (requires a running local PostgreSQL with access to create databases).
-- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
-  (requires Docker). Entirely self-contained, recommended if you don't want to
-  set up PostgreSQL yourself.
-
-Docker images are available for running the integration tests (SyTest) locally,
-see the [documentation in the SyTest repo](
-https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
-information.
Some other points to follow:

* Please base your changes on the `develop` branch.

* Please follow the [code style requirements](#code-style).

* Please include a [changelog entry](#changelog) with each PR.

* Please [sign off](#sign-off) your contribution.

* Please keep an eye on the pull request for feedback from the [continuous
  integration system](#continuous-integration-and-testing) and try to fix any
  errors that come up.

* If you need to [update your PR](#updating-your-pull-request), just add new
  commits to your branch rather than rebasing.

## Code style

-All Matrix projects have a well-defined code-style - and sometimes we've even
-got as far as documenting it... For instance, synapse's code style doc lives
-[here](docs/code_style.md).
Synapse's code style is documented [here](docs/code_style.md). Please follow
it, including the conventions for the [sample configuration
file](docs/code_style.md#configuration-file-format).

-To facilitate meeting these criteria you can run `scripts-dev/lint.sh`
-locally. Since this runs the tools listed in the above document, you'll need
-python 3.6 and to install each tool:
Many of the conventions are enforced by scripts which are run as part of the
[continuous integration system](#continuous-integration-and-testing). To help
check if you have followed the code style, you can run `scripts-dev/lint.sh`
locally. You'll need python 3.6 or later, and to install a number of tools:

```
# Install the dependencies
@@ -67,9 +53,11 @@ pip install -U black flake8 flake8-comprehensions isort
```

**Note that the script does not just test/check, but also reformats code, so you
-may wish to ensure any new code is committed first**. By default this script
-checks all files and can take some time; if you alter only certain files, you
-might wish to specify paths as arguments to reduce the run-time:
may wish to ensure any new code is committed first**.

By default, this script checks all files and can take some time; if you alter
only certain files, you might wish to specify paths as arguments to reduce the
run-time:

```
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
@@ -82,7 +70,6 @@ Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.

## Changelog

All changes, even minor ones, need a corresponding changelog / newsfragment
@@ -98,24 +85,55 @@ in the format of `PRnumber.type`. The type can be one of the following:
* `removal` (also used for deprecations)
* `misc` (for internal-only changes)

-The content of the file is your changelog entry, which should be a short
-description of your change in the same style as the rest of our [changelog](
-https://github.com/matrix-org/synapse/blob/master/CHANGES.md). The file can
-contain Markdown formatting, and should end with a full stop (.) or an
-exclamation mark (!) for consistency.
This file will become part of our [changelog](
https://github.com/matrix-org/synapse/blob/master/CHANGES.md) at the next
release, so the content of the file should be a short description of your
change in the same style as the rest of the changelog. The file can contain
Markdown formatting, and should end with a full stop (.) or an exclamation
mark (!) for consistency.

Adding credits to the changelog is encouraged, we value your
contributions and would like to have you shouted out in the release notes!

For example, a fix in PR #1234 would have its changelog entry in
-`changelog.d/1234.bugfix`, and contain content like "The security levels of
-Florbs are now validated when received over federation. Contributed by Jane
-Matrix.".
`changelog.d/1234.bugfix`, and contain content like:

> The security levels of Florbs are now validated when received
> via the `/federation/florb` endpoint. Contributed by Jane Matrix.

If there are multiple pull requests involved in a single bugfix/feature/etc,
then the content for each `changelog.d` file should be the same. Towncrier will
merge the matching files together into a single changelog entry when we come to
release.
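As a concrete sketch, reusing the hypothetical PR #1234 from the example
above, adding the entry is just a matter of writing the file:

```
echo 'The security levels of Florbs are now validated when received via the `/federation/florb` endpoint. Contributed by Jane Matrix.' > changelog.d/1234.bugfix
```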
### How do I know what to call the changelog file before I create the PR?
Obviously, you don't know if you should call your newsfile
`1234.bugfix` or `5678.bugfix` until you create the PR, which leads to a
chicken-and-egg problem.
There are two options for solving this:
1. Open the PR without a changelog file, see what number you got, and *then*
add the changelog file to your branch (see [Updating your pull
request](#updating-your-pull-request)), or:
1. Look at the [list of all
issues/PRs](https://github.com/matrix-org/synapse/issues?q=), add one to the
highest number you see, and quickly open the PR before somebody else claims
your number.
[This
script](https://github.com/richvdh/scripts/blob/master/next_github_number.sh)
might be helpful if you find yourself doing this a lot.
Sorry, we know it's a bit fiddly, but it's *really* helpful for us when we come
to put together a release!
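If you find yourself scripting option 2, here is a rough sketch of the idea
(an assumption about how such a helper might work, not the linked script
itself; it relies on `curl` and `jq`, and on GitHub issue numbers being shared
with PRs):

```
# Fetch the most recently created issue/PR and add one to its number.
curl -s 'https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1' \
    | jq '.[0].number + 1'
```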
### Debian changelog
Changes which affect the debian packaging files (in `debian`) are an
-exception.
exception to the rule that all changes require a `changelog.d` file.

In this case, you will need to add an entry to the debian changelog for the
next release. For this, run the following command:
@@ -200,19 +218,45 @@ Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
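For instance (a minimal sketch; substitute your own details and commit
message):

```
git config user.name "Jane Matrix"
git config user.email "jane@example.com"
git commit -s -m "Validate the security levels of Florbs"
```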
-## Merge Strategy
## Continuous integration and testing

-We use the commit history of develop/master extensively to identify
-when regressions were introduced and what changes have been made.
-
-We aim to have a clean merge history, which means we normally squash-merge
-changes into develop. For small changes this means there is no need to rebase
-to clean up your PR before merging. Larger changes with an organised set of
-commits may be merged as-is, if the history is judged to be useful.
-
-This use of squash-merging will mean PRs built on each other will be hard to
-merge. We suggest avoiding these where possible, and if required, ensuring
-each PR has a tidy set of commits to ease merging.
[Buildkite](https://buildkite.com/matrix-dot-org/synapse) will automatically
run a series of checks and tests against any PR which is opened against the
project; if your change breaks the build, this will be shown in GitHub, with
links to the build results. If your build fails, please try to fix the errors
and update your branch.

To run unit tests in a local development environment, you can use:

- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
  for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
  (requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
  (requires Docker). Entirely self-contained, recommended if you don't want to
  set up PostgreSQL yourself.

Docker images are available for running the integration tests (SyTest) locally,
see the [documentation in the SyTest repo](
https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
information.

## Updating your pull request

If you decide to make changes to your pull request - perhaps to address issues
raised in a review, or to fix problems highlighted by [continuous
integration](#continuous-integration-and-testing) - just add new commits to your
branch, and push to GitHub. The pull request will automatically be updated.

Please **avoid** rebasing your branch, especially once the PR has been
reviewed: doing so makes it very difficult for a reviewer to see what has
changed since a previous review.

## Notes for maintainers on merging PRs etc

There are some notes for those with commit access to the project on how we
manage git [here](docs/dev/git.md).

## Conclusion

@@ -1,3 +1,11 @@
================
Synapse |shield|
================
.. |shield| image:: https://img.shields.io/matrix/synapse:matrix.org?label=support&logo=matrix
   :alt: (get support on #synapse:matrix.org)
   :target: https://matrix.to/#/#synapse:matrix.org
.. contents::

Introduction
@@ -77,6 +85,17 @@ Thanks for using Matrix!
[1] End-to-end encryption is currently in beta: `blog post <https://matrix.org/blog/2016/11/21/matrixs-olm-end-to-end-encryption-security-assessment-released-and-implemented-cross-platform-on-riot-at-last>`_.
Support
=======
For support installing or managing Synapse, please join |room|_ (from a matrix.org
account if necessary) and ask questions there. We do not use GitHub issues for
support requests, only for bug reports and feature requests.
.. |room| replace:: ``#synapse:matrix.org``
.. _room: https://matrix.to/#/#synapse:matrix.org
Synapse Installation
====================

changelog.d/6391.feature (new file):
Synapse's cache factor can now be configured in `homeserver.yaml` by the `caches.global_factor` setting. Additionally, `caches.per_cache_factors` controls the cache factors for individual caches.

changelog.d/6590.misc (new file):
`synctl` now warns if it was unable to stop Synapse and will not attempt to start Synapse if nothing was stopped. Contributed by Romain Bouyé.

changelog.d/7256.feature (new file):
Add OpenID Connect login/registration support. Contributed by Quentin Gliech, on behalf of [les Connecteurs](https://connecteu.rs).

changelog.d/7281.misc (new file):
Add MultiWriterIdGenerator to support multiple concurrent writers of streams.

changelog.d/7317.feature (new file):
Add room details admin endpoint. Contributed by Awesome Technologies Innovationslabor GmbH.

changelog.d/7374.misc (new file):
Move catchup of replication streams logic to worker.

changelog.d/7381.bugfix (new file):
Add an experimental room version which strictly adheres to the canonical JSON specification.

changelog.d/7382.misc (new file):
Add typing annotations in `synapse.federation`.

changelog.d/7384.bugfix (new file):
Fix a bug where event updates might not be sent over replication to worker processes after the stream falls behind.

changelog.d/7396.misc (new file):
Convert the room handler to async/await.

changelog.d/7398.docker (new file):
Update docker runtime image to Alpine v3.11. Contributed by @Starbix.

changelog.d/7435.feature (new file):
Allow for using more than one spam checker module at once.

changelog.d/7436.misc (new file):
Support any process writing to cache invalidation stream.

changelog.d/7440.misc (new file):
Refactor event persistence database functions in preparation for allowing them to be run on non-master processes.

changelog.d/7443.bugfix (new file):
Allow expired user accounts to log out their device sessions.

changelog.d/7445.misc (new file):
Add type hints to the SAML handler.

changelog.d/7448.misc (new file):
Remove storage method `get_hosts_in_room` that is no longer called anywhere.

changelog.d/7449.misc (new file):
Fix some typos in the notice_expiry templates.

changelog.d/7457.feature (new file):
Add OpenID Connect login/registration support. Contributed by Quentin Gliech, on behalf of [les Connecteurs](https://connecteu.rs).

changelog.d/7458.doc (new file):
Update information about mapping providers for SAML and OpenID.

changelog.d/7459.misc (new file):
Convert the federation handler to async/await.

changelog.d/7460.misc (new file):
Convert the search handler to async/await.

changelog.d/7463.doc (new file):
Add additional reverse proxy example for Caddy v2. Contributed by Jeff Peeler.

changelog.d/7465.bugfix (new file):
Prevent rooms with 0 members or with invalid version strings from breaking group queries.

changelog.d/7470.misc (new file):
Fix linting errors in new version of Flake8.

changelog.d/7473.bugfix (new file):
Workaround for an upstream Twisted bug that caused Synapse to become unresponsive after startup.

changelog.d/7475.misc (new file):
Have all instances correctly respond to the REPLICATE command.

changelog.d/7477.doc (new file):
Fix copy-paste error in `ServerNoticesConfig` docstring. Contributed by @ptman.

changelog.d/7482.bugfix (new file):
Fix Redis reconnection logic that can result in missed updates over replication if master reconnects to Redis without restarting.

changelog.d/7490.misc (new file):
Clean up replication unit tests.

changelog.d/7491.misc (new file):
Move event stream handling out of slave store.

changelog.d/7492.misc (new file):
Allow censoring of events to happen on workers.

changelog.d/7493.misc (new file):
Move EventStream handling into default ReplicationDataHandler.

changelog.d/7495.feature (new file):
Add `instance_map` config and route replication calls.

changelog.d/7497.bugfix (new file):
When sending `m.room.member` events, omit `displayname` and `avatar_url` if they aren't set instead of setting them to `null`. Contributed by Aaron Raimist.

changelog.d/7502.feature (new file):
Add additional authentication checks for m.room.power_levels event per [MSC2209](https://github.com/matrix-org/matrix-doc/pull/2209).

changelog.d/7503.bugfix (new file):
Fix incorrect `method` label on `synapse_http_matrixfederationclient_{requests,responses}` prometheus metrics.

changelog.d/7505.misc (new file):
Add type hints to `synapse.event_auth`.

changelog.d/7506.feature (new file):
Implement room version 6 per [MSC2240](https://github.com/matrix-org/matrix-doc/pull/2240).

changelog.d/7507.misc (new file):
Convert the room member handler to async/await.

changelog.d/7508.bugfix (new file):
Ignore incoming presence events from other homeservers if presence is disabled locally.

changelog.d/7511.bugfix (new file):
Fix a long-standing bug that broke the update remote profile background process.

changelog.d/7513.misc (new file):
Add type hints to room member handler.

changelog.d/7514.doc (new file):
Improve the formatting of `reverse_proxy.md`.

changelog.d/7515.misc (new file):
Allow `ReplicationRestResource` to be added to workers.

changelog.d/7516.misc (new file):
Add a worker store for search insertion, required for moving event persistence off master.

changelog.d/7518.misc (new file):
Fix typing annotations in `tests.replication`.

changelog.d/7519.misc (new file):
Remove some redundant Python 2 support code.

(deleted file)
@@ -1 +0,0 @@
-Hash passwords as early as possible during registration.

changelog.d/7528.doc (new file):
Change the systemd worker service to check that the worker config file exists instead of silently failing. Contributed by David Vo.

changelog.d/7533.doc (new file):
Minor clarifications to the TURN docs.

changelog.d/7538.bugfix (new file):
Hash passwords as early as possible during password reset.

changelog.d/7539.misc (new file):
Remove Ubuntu Cosmic and Disco from the list of distributions which we provide `.deb`s for, due to end-of-life.

changelog.d/7545.misc (new file):
Make worker processes return a stubbed-out response to `GET /presence` requests.

changelog.d/7548.bugfix (new file):
Fix bug where a local user leaving a room could fail under rare circumstances.

@@ -36,7 +36,6 @@ esac
dh_virtualenv \
    --install-suffix "matrix-synapse" \
    --builtin-venv \
-    --setuptools \
    --python "$SNAKE" \
    --upgrade-pip \
    --preinstall="lxml" \

debian/changelog:
@@ -1,16 +1,18 @@
-<<<<<<< HEAD
-matrix-synapse-py3 (1.12.3ubuntu1) UNRELEASED; urgency=medium
matrix-synapse-py3 (1.13.0) stable; urgency=medium

  [ Patrick Cloke ]
  * Add information about .well-known files to Debian installation scripts.

- -- Patrick Cloke <patrickc@matrix.org>  Mon, 06 Apr 2020 10:10:38 -0400
-=======
  [ Synapse Packaging team ]
  * New synapse release 1.13.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 19 May 2020 09:16:56 -0400

matrix-synapse-py3 (1.12.4) stable; urgency=medium

  * New synapse release 1.12.4.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 23 Apr 2020 10:58:14 -0400
->>>>>>> master

matrix-synapse-py3 (1.12.3) stable; urgency=medium

@@ -55,7 +55,7 @@ RUN pip install --prefix="/install" --no-warn-script-location \
### Stage 1: runtime
###

-FROM docker.io/python:${PYTHON_VERSION}-alpine3.10
FROM docker.io/python:${PYTHON_VERSION}-alpine3.11

# xmlsec is required for saml support
RUN apk add --no-cache --virtual .runtime_deps \

@@ -27,15 +27,16 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
        wget

# fetch and unpack the package
-RUN wget -q -O /dh-virtuenv-1.1.tar.gz https://github.com/spotify/dh-virtualenv/archive/1.1.tar.gz
-RUN tar xvf /dh-virtuenv-1.1.tar.gz
RUN mkdir /dh-virtualenv
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/matrix-org/dh-virtualenv/archive/matrixorg-20200519.tar.gz
RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz

# install its build deps
-RUN cd dh-virtualenv-1.1/ \
-    && env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -yqq --no-install-recommends"
RUN cd /dh-virtualenv \
    && env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -y --no-install-recommends"

# build it
-RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
RUN cd /dh-virtualenv && dpkg-buildpackage -us -uc -b

###
### Stage 1
@@ -68,12 +69,12 @@ RUN apt-get update -qq -o Acquire::Languages=none \
        sqlite3 \
        libpq-dev

-COPY --from=builder /dh-virtualenv_1.1-1_all.deb /
COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /

# install dhvirtualenv. Update the apt cache again first, in case we got a
# cached cache from docker the first time.
RUN apt-get update -qq -o Acquire::Languages=none \
-    && apt-get install -yq /dh-virtualenv_1.1-1_all.deb
    && apt-get install -yq /dh-virtualenv_1.2~dev-1_all.deb

WORKDIR /synapse/source

ENTRYPOINT ["bash","/synapse/source/docker/build_debian.sh"]

@@ -264,3 +264,57 @@ Response:
Once the `next_token` parameter is no longer present, we know we've reached the
end of the list.
# DRAFT: Room Details API
The Room Details admin API allows server admins to get all details of a room.
This API is still a draft and details might change!
The following fields are possible in the JSON response body:
* `room_id` - The ID of the room.
* `name` - The name of the room.
* `canonical_alias` - The canonical (main) alias address of the room.
* `joined_members` - How many users are currently in the room.
* `joined_local_members` - How many local users are currently in the room.
* `version` - The version of the room as a string.
* `creator` - The `user_id` of the room creator.
* `encryption` - Algorithm of end-to-end encryption of messages. Is `null` if encryption is not active.
* `federatable` - Whether users on other servers can join this room.
* `public` - Whether the room is visible in the room directory.
* `join_rules` - The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].
* `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
* `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
* `state_events` - Total number of state_events of a room. Complexity of the room.
## Usage
A standard request:
```
GET /_synapse/admin/v1/rooms/<room_id>
{}
```
Response:
```
{
"room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
"name": "Music Theory",
"canonical_alias": "#musictheory:matrix.org",
"joined_members": 127
"joined_local_members": 2,
"version": "1",
"creator": "@foo:matrix.org",
"encryption": null,
"federatable": true,
"public": true,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
}
```
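The same request as a `curl` sketch (the room ID is the example one from the
response above; this is an illustrative invocation assuming the admin API is
authenticated with a server admin's access token passed as a bearer token):

```
curl --header "Authorization: Bearer <admin_access_token>" \
    'https://example.com/_synapse/admin/v1/rooms/!mscvqgqpHYjBGDxNym:matrix.org'
```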

@@ -33,21 +33,22 @@ with a body of:

including an ``access_token`` of a server admin.

-The parameter ``displayname`` is optional and defaults to the value of
-``user_id``.
-
-The parameter ``threepids`` is optional and allows setting the third-party IDs
-(email, msisdn) belonging to a user.
-
-The parameter ``avatar_url`` is optional. Must be a [MXC
-URI](https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris).
-
-The parameter ``admin`` is optional and defaults to ``false``.
-
-The parameter ``deactivated`` is optional and defaults to ``false``.
-
-The parameter ``password`` is optional. If provided, the user's password is
-updated and all devices are logged out.
Parameters:

- ``password``, optional. If provided, the user's password is updated and all
  devices are logged out.

- ``displayname``, optional, defaults to the value of ``user_id``.

- ``threepids``, optional, allows setting the third-party IDs (email, msisdn)
  belonging to a user.

- ``avatar_url``, optional, must be a
  `MXC URI <https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris>`_.

- ``admin``, optional, defaults to ``false``.

- ``deactivated``, optional, defaults to ``false``.

If the user already exists then optional parameters default to the current value.
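A sketch of such a request with ``curl`` (the ``PUT /_synapse/admin/v2/users/<user_id>``
endpoint and the values below are illustrative assumptions based on the
surrounding doc; the JSON body may include any of the parameters above)::

    curl --request PUT \
        --header "Authorization: Bearer <admin_access_token>" \
        --data '{"displayname": "Jane", "admin": false}' \
        'https://example.com/_synapse/admin/v2/users/@jane:example.com'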

docs/dev/git.md (new file):
Some notes on how we use git
============================
On keeping the commit history clean
-----------------------------------
In an ideal world, our git commit history would be a linear progression of
commits each of which contains a single change building on what came
before. Here, by way of an arbitrary example, is the top of `git log --graph
b2dba0607`:
<img src="git/clean.png" alt="clean git graph" width="500px">
Note how the commit comment explains clearly what is changing and why. Also
note the *absence* of merge commits, as well as the absence of commits called
things like (to pick a few culprits):
[“pep8”](https://github.com/matrix-org/synapse/commit/84691da6c), [“fix broken
test”](https://github.com/matrix-org/synapse/commit/474810d9d),
[“oops”](https://github.com/matrix-org/synapse/commit/c9d72e457),
[“typo”](https://github.com/matrix-org/synapse/commit/836358823), or [“Who's
the president?”](https://github.com/matrix-org/synapse/commit/707374d5d).
There are a number of reasons why keeping a clean commit history is a good
thing:
* From time to time, after a change lands, it turns out to be necessary to
revert it, or to backport it to a release branch. Those operations are
*much* easier when the change is contained in a single commit.
* Similarly, it's much easier to answer questions like “is the fix for
`/publicRooms` on the release branch?” if that change consists of a single
commit.
* Likewise: “what has changed on this branch in the last week?” is much
clearer without merges and “pep8” commits everywhere.
* Sometimes we need to figure out where a bug got introduced, or some
behaviour changed. One way of doing that is with `git bisect`: pick an
arbitrary commit between the known good point and the known bad point, and
see how the code behaves. However, that strategy fails if the commit you
chose is the middle of someone's epic branch in which they broke the world
before putting it back together again.
One counterargument is that it is sometimes useful to see how a PR evolved as
it went through review cycles. This is true, but that information is always
available via the GitHub UI (or via the little-known [refs/pull
namespace](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/checking-out-pull-requests-locally)).
Of course, in reality, things are more complicated than that. We have release
branches as well as `develop` and `master`, and we deliberately merge changes
between them. Bugs often slip through and have to be fixed later. That's all
fine: this is not a cast-iron rule which must be obeyed, but an ideal to aim
towards.
Merges, squashes, rebases: wtf?
-------------------------------
Ok, so that's what we'd like to achieve. How do we achieve it?
The TL;DR is: when you come to merge a pull request, you *probably* want to
“squash and merge”:
![squash and merge](git/squash.png).
(This applies whether you are merging your own PR, or that of another
contributor.)
“Squash and merge”<sup id="a1">[1](#f1)</sup> takes all of the changes in the
PR, and bundles them into a single commit. GitHub gives you the opportunity to
edit the commit message before you confirm, and normally you should do so,
because the default will be useless (again: `* woops typo` is not a useful
thing to keep in the historical record).
The main problem with this approach comes when you have a series of pull
requests which build on top of one another: as soon as you squash-merge the
first PR, you'll end up with a stack of conflicts to resolve in all of the
others. In general, it's best to avoid this situation in the first place by
trying not to have multiple related PRs in flight at the same time. Still,
sometimes that's not possible and doing a regular merge is the lesser evil.
Another occasion in which a regular merge makes more sense is a PR where you've
deliberately created a series of commits each of which makes sense in its own
right. For example: [a PR which gradually propagates a refactoring operation
through the codebase](https://github.com/matrix-org/synapse/pull/6837), or [a
PR which is the culmination of several other
PRs](https://github.com/matrix-org/synapse/pull/5987). In this case the ability
to figure out when a particular change/bug was introduced could be very useful.
Ultimately: **this is not a hard-and-fast rule**. If in doubt, ask yourself “does
each of the commits I am about to merge make sense in its own right?”, but
remember that we're just doing our best to balance “keeping the commit history
clean” with other factors.
Git branching model
-------------------
A [lot](https://nvie.com/posts/a-successful-git-branching-model/)
[of](http://scottchacon.com/2011/08/31/github-flow.html)
[words](https://www.endoflineblog.com/gitflow-considered-harmful) have been
written in the past about git branching models (no really, [a
lot](https://martinfowler.com/articles/branching-patterns.html)). I tend to
think the whole thing is overblown. Fundamentally, it's not that
complicated. Here's how we do it.
Let's start with a picture:
![branching model](git/branches.jpg)
It looks complicated, but it's really not. There's one basic rule: *anyone* is
free to merge from *any* more-stable branch to *any* less-stable branch at
*any* time<sup id="a2">[2](#f2)</sup>. (The principle behind this is that if a
change is good enough for the more-stable branch, then it's also good enough to
put in a less-stable branch.)
Meanwhile, merging (or squashing, as per the above) from a less-stable to a
more-stable branch is a deliberate action in which you want to publish a change
or a set of changes to (some subset of) the world: for example, this happens
when a PR is landed, or as part of our release process.
So, what counts as a more- or less-stable branch? A little reflection will show
that our active branches are ordered thus, from more-stable to less-stable:
* `master` (tracks our last release).
* `release-vX.Y.Z` (the branch where we prepare the next release)<sup
id="a3">[3](#f3)</sup>.
* PR branches which are targeting the release.
* `develop` (our "mainline" branch containing our bleeding-edge).
* regular PR branches.
The corollary is: if you have a bugfix that needs to land in both
`release-vX.Y.Z` *and* `develop`, then you should base your PR on
`release-vX.Y.Z`, get it merged there, and then merge from `release-vX.Y.Z` to
`develop`. (If a fix lands in `develop` and we later need it in a
release-branch, we can of course cherry-pick it, but landing it in the release
branch first helps reduce the chance of annoying conflicts.)
---
<b id="f1">[1]</b>: “Squash and merge” is GitHub's term for this
operation. Given that there is no merge involved, I'm not convinced it's the
most intuitive name. [^](#a1)
<b id="f2">[2]</b>: Well, anyone with commit access.[^](#a2)
<b id="f3">[3]</b>: Very, very occasionally (I think this has happened once in
the history of Synapse), we've had two releases in flight at once. Obviously,
`release-v1.2.3` is more-stable than `release-v1.3.0`. [^](#a3)

docs/dev/git/branches.jpg (new binary file, 70 KiB; not shown)

docs/dev/git/clean.png (new binary file, 108 KiB; not shown)

docs/dev/git/squash.png (new binary file, 29 KiB; not shown)

docs/dev/oidc.md (new file):
# How to test OpenID Connect
Any OpenID Connect Provider (OP) should work with Synapse, as long as it supports the authorization code flow.
There are a few options for that:
- start a local OP. Synapse has been tested with [Hydra][hydra] and [Dex][dex-idp].
Note that for an OP to work, it should be served under a secure (HTTPS) origin.
A certificate signed with a self-signed, locally trusted CA should work. In that case, start Synapse with an `SSL_CERT_FILE` environment variable set to the path of the CA.
- use a publicly available OP. Synapse has been tested with [Google][google-idp].
- set up a SaaS OP, like [Auth0][auth0] or [Okta][okta]. Auth0 has a free tier which has been tested with Synapse.
[google-idp]: https://developers.google.com/identity/protocols/OpenIDConnect#authenticatingtheuser
[auth0]: https://auth0.com/
[okta]: https://www.okta.com/
[dex-idp]: https://github.com/dexidp/dex
[hydra]: https://www.ory.sh/docs/hydra/
## Sample configs
Here are a few configs for providers that should work with Synapse.
### [Dex][dex-idp]
[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
Although it is designed to help build a full-blown provider backed by an external database, it can be configured with static passwords in a config file.
Follow the [Getting Started guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md) to install Dex.
Edit `examples/config-dev.yaml` config file from the Dex repo to add a client:
```yaml
staticClients:
- id: synapse
secret: secret
redirectURIs:
- '[synapse base url]/_synapse/oidc/callback'
name: 'Synapse'
```
Run with `dex serve examples/config-dev.yaml`.
Synapse config:
```yaml
oidc_config:
enabled: true
skip_verification: true # This is needed as Dex is served on an insecure endpoint
issuer: "http://127.0.0.1:5556/dex"
discover: true
client_id: "synapse"
client_secret: "secret"
scopes:
- openid
- profile
user_mapping_provider:
config:
localpart_template: '{{ user.name }}'
display_name_template: '{{ user.name|capitalize }}'
```
### [Auth0][auth0]
1. Create a regular web application for Synapse
2. Set the Allowed Callback URLs to `[synapse base url]/_synapse/oidc/callback`
3. Add a rule to add the `preferred_username` claim.
<details>
<summary>Code sample</summary>
```js
function addPersistenceAttribute(user, context, callback) {
user.user_metadata = user.user_metadata || {};
user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
context.idToken.preferred_username = user.user_metadata.preferred_username;
auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
.then(function(){
callback(null, user, context);
})
.catch(function(err){
callback(err);
});
}
```
</details>
```yaml
oidc_config:
enabled: true
issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
discover: true
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes:
- openid
- profile
user_mapping_provider:
config:
localpart_template: '{{ user.preferred_username }}'
display_name_template: '{{ user.name }}'
```
### GitHub
GitHub is a bit special as it is not an OpenID Connect compliant provider, but just a regular OAuth2 provider.
The `/user` API endpoint can be used to retrieve information about the user.
Since the OIDC login mechanism needs an attribute to uniquely identify users, and that endpoint does not return a `sub` property, an alternative `subject_claim` has to be set.
1. Create a new OAuth application: https://github.com/settings/applications/new
2. Set the callback URL to `[synapse base url]/_synapse/oidc/callback`
```yaml
oidc_config:
enabled: true
issuer: "https://github.com/"
discover: false
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
authorization_endpoint: "https://github.com/login/oauth/authorize"
token_endpoint: "https://github.com/login/oauth/access_token"
userinfo_endpoint: "https://api.github.com/user"
scopes:
- read:user
user_mapping_provider:
config:
subject_claim: 'id'
localpart_template: '{{ user.login }}'
display_name_template: '{{ user.name }}'
```
### Google
1. Set up a project in the Google API Console
2. Obtain the OAuth 2.0 credentials (see <https://developers.google.com/identity/protocols/oauth2/openid-connect>)
3. Add this Authorized redirect URI: `[synapse base url]/_synapse/oidc/callback`
```yaml
oidc_config:
enabled: true
issuer: "https://accounts.google.com/"
discover: true
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes:
- openid
- profile
user_mapping_provider:
config:
localpart_template: '{{ user.given_name|lower }}'
display_name_template: '{{ user.name }}'
```
### Twitch
1. Set up a developer account on [Twitch](https://dev.twitch.tv/)
2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/)
3. Add this OAuth Redirect URL: `[synapse base url]/_synapse/oidc/callback`
```yaml
oidc_config:
enabled: true
issuer: "https://id.twitch.tv/oauth2/"
discover: true
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
client_auth_method: "client_secret_post"
scopes:
- openid
user_mapping_provider:
config:
localpart_template: '{{ user.preferred_username }}'
display_name_template: '{{ user.name }}'
```

@@ -9,7 +9,7 @@ of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root
privileges.

-> **NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
**NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
the requested URI in any way (for example, by decoding `%xx` escapes).
Beware that Apache *will* canonicalise URIs unless you specify
`nocanon`.
@@ -18,7 +18,7 @@ When setting up a reverse proxy, remember that Matrix clients and other
Matrix servers do not necessarily need to connect to your server via the
same server name or port. Indeed, clients will use port 443 by default,
whereas servers default to port 8448. Where these are different, we
-refer to the 'client port' and the \'federation port\'. See [the Matrix
refer to the 'client port' and the 'federation port'. See [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
for more details of the algorithm used for federation connections, and
[delegate.md](<delegate.md>) for instructions on setting up delegation.
@@ -28,93 +28,113 @@ Let's assume that we expect clients to connect to our server at
`https://example.com:8448`. The following sections detail the configuration of
the reverse proxy and the homeserver.

-## Webserver configuration examples
## Reverse-proxy configuration examples

**NOTE**: You only need one of these.

### nginx

```
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name matrix.example.com;

    location /_matrix {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        # Nginx by default only allows file uploads up to 1M in size
        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
        client_max_body_size 10M;
    }
}

server {
    listen 8448 ssl default_server;
    listen [::]:8448 ssl default_server;
    server_name example.com;

    location / {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

-> **NOTE**: Do not add a `/` after the port in `proxy_pass`, otherwise nginx will
**NOTE**: Do not add a path after the port in `proxy_pass`, otherwise nginx will
canonicalise/normalise the URI.

-### Caddy
### Caddy 1

```
matrix.example.com {
  proxy /_matrix http://localhost:8008 {
    transparent
  }
}

example.com:8448 {
  proxy / http://localhost:8008 {
    transparent
  }
}
```

### Caddy 2

```
matrix.example.com {
  reverse_proxy /_matrix/* http://localhost:8008
}

example.com:8448 {
  reverse_proxy http://localhost:8008
}
```

### Apache

```
<VirtualHost *:443>
    SSLEngine on
    ServerName matrix.example.com;

    AllowEncodedSlashes NoDecode
    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>

<VirtualHost *:8448>
    SSLEngine on
    ServerName example.com;

    AllowEncodedSlashes NoDecode
    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
```

-> **NOTE**: ensure the `nocanon` options are included.
**NOTE**: ensure the `nocanon` options are included.

### HAProxy

```
frontend https
  bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1

  # Matrix client traffic
  acl matrix-host hdr(host) -i matrix.example.com
  acl matrix-path path_beg /_matrix

  use_backend matrix if matrix-host matrix-path

frontend matrix-federation
  bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
  default_backend matrix

backend matrix
  server matrix 127.0.0.1:8008
```

## Homeserver Configuration

(deleted file)
@@ -1,77 +0,0 @@
# SAML Mapping Providers
A SAML mapping provider is a Python class (loaded via a Python module) that
works out how to map attributes of a SAML response object to Matrix-specific
user attributes. Details such as user ID localpart, displayname, and even avatar
URLs are all things that can be mapped from talking to a SSO service.
As an example, a SSO service may return the email address
"john.smith@example.com" for a user, whereas Synapse will need to figure out how
to turn that into a displayname when creating a Matrix user for this individual.
It may choose `John Smith`, or `Smith, John [Example.com]` or any number of
variations. As each Synapse configuration may want something different, this is
where SAML mapping providers come into play.
## Enabling Providers
External mapping providers are provided to Synapse in the form of an external
Python module. Retrieve this module from [PyPi](https://pypi.org) or elsewhere,
then tell Synapse where to look for the handler class by editing the
`saml2_config.user_mapping_provider.module` config option.
`saml2_config.user_mapping_provider.config` allows you to provide custom
configuration options to the module. Check with the module's documentation for
what options it provides (if any). The options listed by default are for the
user mapping provider built in to Synapse. If using a custom module, you should
comment these options out and use those specified by the module instead.
## Building a Custom Mapping Provider
A custom mapping provider must specify the following methods:
* `__init__(self, parsed_config)`
- Arguments:
- `parsed_config` - A configuration object that is the return value of the
`parse_config` method. You should set any configuration options needed by
the module here.
* `saml_response_to_user_attributes(self, saml_response, failures)`
- Arguments:
- `saml_response` - A `saml2.response.AuthnResponse` object to extract user
information from.
- `failures` - An `int` that represents the amount of times the returned
mxid localpart mapping has failed. This should be used
to create a deduplicated mxid localpart which should be
returned instead. For example, if this method returns
`john.doe` as the value of `mxid_localpart` in the returned
dict, and that is already taken on the homeserver, this
method will be called again with the same parameters but
with failures=1. The method should then return a different
`mxid_localpart` value, such as `john.doe1`.
- This method must return a dictionary, which will then be used by Synapse
to build a new user. The following keys are allowed:
* `mxid_localpart` - Required. The mxid localpart of the new user.
* `displayname` - The displayname of the new user. If not provided, will default to
the value of `mxid_localpart`.
* `parse_config(config)`
- This method should have the `@staticmethod` decoration.
- Arguments:
- `config` - A `dict` representing the parsed content of the
`saml2_config.user_mapping_provider.config` homeserver config option.
Runs on homeserver startup. Providers should extract any option values
they need here.
- Whatever is returned will be passed back to the user mapping provider module's
`__init__` method during construction.
* `get_saml_attributes(config)`
- This method should have the `@staticmethod` decoration.
- Arguments:
- `config` - An object resulting from a call to `parse_config`.
- Returns a tuple of two sets. The first set equates to the saml auth
response attributes that are required for the module to function, whereas
the second set consists of those attributes which can be used if available,
but are not necessary.
## Synapse's Default Provider
Synapse has a built-in SAML mapping provider if a custom provider isn't
specified in the config. It is located at
[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py).

@@ -603,6 +603,45 @@ acme:
## Caching ##
# Caching can be configured through the following options.
#
# A cache 'factor' is a multiplier that can be applied to each of
# Synapse's caches in order to increase or decrease the maximum
# number of entries that can be stored.
# The number of events to cache in memory. Not affected by
# caches.global_factor.
#
#event_cache_size: 10K
caches:
# Controls the global cache factor, which is the default cache factor
# for all caches if a specific factor for that cache is not otherwise
# set.
#
# This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
# variable. Setting by environment variable takes priority over
# setting through the config file.
#
# Defaults to 0.5, which will halve the size of all caches.
#
#global_factor: 1.0
# A dictionary of cache name to cache factor for that individual
# cache. Overrides the global cache factor for a given cache.
#
# These can also be set through environment variables comprised
# of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
# letters and underscores. Setting by environment variable
# takes priority over setting through the config file.
# Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
#
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
## Database ##

# The 'database' setting defines the database that synapse uses to store all of
@@ -646,10 +685,6 @@ database:
  args:
    database: DATADIR/homeserver.db

-# Number of events to cache in memory.
-#
-#event_cache_size: 10K

## Logging ##
@@ -1470,6 +1505,94 @@ saml2_config:
  #template_dir: "res/templates"
# Enable OpenID Connect for registration and login. Uses authlib.
#
oidc_config:
# enable OpenID Connect. Defaults to false.
#
#enabled: true
# use the OIDC discovery mechanism to discover endpoints. Defaults to true.
#
#discover: true
# the OIDC issuer. Used to validate tokens and discover the provider's endpoints. Required.
#
#issuer: "https://accounts.example.com/"
# oauth2 client id to use. Required.
#
#client_id: "provided-by-your-issuer"
# oauth2 client secret to use. Required.
#
#client_secret: "provided-by-your-issuer"
# auth method to use when exchanging the token.
# Valid values are "client_secret_basic" (default), "client_secret_post" and "none".
#
#client_auth_method: "client_secret_basic"
# list of scopes to request. This should include the "openid" scope. Defaults to ["openid"].
#
#scopes: ["openid"]
# the oauth2 authorization endpoint. Required if provider discovery is disabled.
#
#authorization_endpoint: "https://accounts.example.com/oauth2/auth"
# the oauth2 token endpoint. Required if provider discovery is disabled.
#
#token_endpoint: "https://accounts.example.com/oauth2/token"
# the OIDC userinfo endpoint. Required if discovery is disabled and the "openid" scope is not requested.
#
#userinfo_endpoint: "https://accounts.example.com/userinfo"
# URI where to fetch the JWKS. Required if discovery is disabled and the "openid" scope is used.
#
#jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
# skip metadata verification. Defaults to false.
# Use this if you are connecting to a provider that is not OpenID Connect compliant.
# Avoid this in production.
#
#skip_verification: false
# An external module can be provided here as a custom solution to mapping
# attributes returned from an OIDC provider onto a Matrix user.
#
user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
# Default is 'synapse.handlers.oidc_handler.JinjaOidcMappingProvider'.
#
#module: mapping_provider.OidcMappingProvider
# Custom configuration values for the module. The options below are intended
# for the built-in provider; change them if using a custom
# module. This section will be passed as a Python dictionary to the
# module's `parse_config` method.
#
# Below is the config of the default mapping provider, based on Jinja2
# templates. Those templates are used to render user attributes, where the
# userinfo object is available through the `user` variable.
#
config:
# name of the claim containing a unique identifier for the user.
# Defaults to `sub`, which OpenID Connect compliant providers should provide.
#
#subject_claim: "sub"
# Jinja2 template for the localpart of the MXID
#
localpart_template: "{{ user.preferred_username }}"
# Jinja2 template for the display name to set on first login. Optional.
#
#display_name_template: "{{ user.given_name }} {{ user.last_name }}"
# Enable CAS for registration and login.
#
@ -1554,6 +1677,13 @@ sso:
#
# This template has no additional variables.
#
# * HTML page to display to users if something goes wrong during the
# OpenID Connect authentication process: 'sso_error.html'.
#
# When rendering, this template is given two variables:
# * error: the technical name of the error
# * error_description: a human-readable message for the error
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
@ -1772,10 +1902,17 @@ password_providers:
# include_content: true
# Spam checkers are third-party modules that can block specific actions
# of local users, such as creating rooms and registering undesirable
# usernames, as well as remote users by redacting incoming events.
#
spam_checker:
#- module: "my_custom_project.SuperSpamChecker"
# config:
# example_option: 'things'
#- module: "some_other_project.BadEventStopper"
# config:
# example_stop_events_from: ['@bad:example.com']
# Uncomment to allow non-server-admin users to create groups on this server # Uncomment to allow non-server-admin users to create groups on this server

View File

@ -64,10 +64,12 @@ class ExampleSpamChecker:
Modify the `spam_checker` section of your `homeserver.yaml` in the following
manner:

Create a list entry with the keys `module` and `config`.

* `module` should point to the fully qualified Python class that implements your
  custom logic, e.g. `my_module.ExampleSpamChecker`.
* `config` is a dictionary that gets passed to the spam checker class.

### Example
@ -75,12 +77,15 @@ This section might look like:
```yaml
spam_checker:
 - module: my_module.ExampleSpamChecker
   config:
     # Enable or disable a specific option in ExampleSpamChecker.
     my_custom_option: true
```
More spam checkers can be added in tandem by appending more items to the list. An
action is blocked when at least one of the configured spam checkers flags it.
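For reference, a minimal sketch of what such a module might look like is below.
The callback set mirrors the `ExampleSpamChecker` shape referenced above; treat
the exact method list as illustrative rather than exhaustive.

```python
class SuperSpamChecker:
    def __init__(self, config):
        # `config` is the dictionary from this module's list entry.
        self._example_option = config.get("example_option")

    def check_event_for_spam(self, event):
        # Return True to block the event; False to allow it.
        return False

    def user_may_create_room(self, userid):
        # Return False to deny room creation to this user.
        return True
```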
## Examples

The [Mjolnir](https://github.com/matrix-org/mjolnir) project is a full-fledged

View File

@ -0,0 +1,146 @@
# SSO Mapping Providers
A mapping provider is a Python class (loaded via a Python module) that
works out how to map attributes of an SSO response to Matrix-specific
user attributes. Details such as user ID localpart, displayname, and even avatar
URLs are all things that can be mapped from talking to an SSO service.

As an example, an SSO service may return the email address
"john.smith@example.com" for a user, whereas Synapse will need to figure out how
to turn that into a displayname when creating a Matrix user for this individual.
It may choose `John Smith`, or `Smith, John [Example.com]`, or any number of
variations. As each Synapse configuration may want something different, this is
where SSO mapping providers come into play.
SSO mapping providers are currently supported for OpenID and SAML SSO
configurations. Please see the details below for how to implement your own.
External mapping providers are provided to Synapse in the form of an external
Python module. You can retrieve this module from [PyPI](https://pypi.org) or elsewhere,
but it must be importable via Synapse (e.g. it must be in the same virtualenv
as Synapse). The Synapse config is then modified to point to the mapping provider
(and optionally provide additional configuration for it).
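In the SAML case, for example, the pointer might look like the following (the
module path and the `mxid_source_attribute` option are illustrative, not
prescribed by Synapse):

```yaml
saml2_config:
  user_mapping_provider:
    module: my_module.MySamlMappingProvider
    config:
      mxid_source_attribute: uid
```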
## OpenID Mapping Providers
The OpenID mapping provider can be customized by editing the
`oidc_config.user_mapping_provider.module` config option.
`oidc_config.user_mapping_provider.config` allows you to provide custom
configuration options to the module. Check with the module's documentation for
what options it provides (if any). The options listed by default are for the
user mapping provider built in to Synapse. If using a custom module, you should
comment these options out and use those specified by the module instead.
### Building a Custom OpenID Mapping Provider
A custom mapping provider must specify the following methods (a minimal sketch follows the list):
* `__init__(self, parsed_config)`
- Arguments:
- `parsed_config` - A configuration object that is the return value of the
`parse_config` method. You should set any configuration options needed by
the module here.
* `parse_config(config)`
- This method should have the `@staticmethod` decorator.
- Arguments:
- `config` - A `dict` representing the parsed content of the
`oidc_config.user_mapping_provider.config` homeserver config option.
Runs on homeserver startup. Providers should extract and validate
any option values they need here.
- Whatever is returned will be passed back to the user mapping provider module's
`__init__` method during construction.
* `get_remote_user_id(self, userinfo)`
- Arguments:
- `userinfo` - An `authlib.oidc.core.claims.UserInfo` object to extract user
  information from.
- This method must return a string, which is the unique identifier for the
  user. Commonly the `sub` claim of the response.
* `map_user_attributes(self, userinfo, token)`
- This method should be async.
- Arguments:
- `userinfo` - An `authlib.oidc.core.claims.UserInfo` object to extract user
information from.
- `token` - A dictionary which includes information necessary to make
further requests to the OpenID provider.
- Returns a dictionary with two keys:
- `localpart`: A required string, used to generate the Matrix ID.
- `displayname`: An optional string, the display name for the user.
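Putting those methods together, a minimal provider might look like the sketch
below. The `preferred_username` and `name` claims are assumptions about what a
given OpenID provider returns, not requirements.

```python
class MyOidcMappingProvider:
    def __init__(self, parsed_config):
        # `parsed_config` is whatever `parse_config` returned.
        self._subject_claim = parsed_config["subject_claim"]

    @staticmethod
    def parse_config(config):
        # Extract and validate options at homeserver startup.
        return {"subject_claim": config.get("subject_claim", "sub")}

    def get_remote_user_id(self, userinfo):
        # A stable, unique identifier for the user at the provider.
        return userinfo[self._subject_claim]

    async def map_user_attributes(self, userinfo, token):
        # Derive the Matrix localpart and display name from the userinfo.
        return {
            "localpart": userinfo["preferred_username"],
            "displayname": userinfo.get("name"),
        }
```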
### Default OpenID Mapping Provider
Synapse has a built-in OpenID mapping provider if a custom provider isn't
specified in the config. It is located at
[`synapse.handlers.oidc_handler.JinjaOidcMappingProvider`](../synapse/handlers/oidc_handler.py).
## SAML Mapping Providers
The SAML mapping provider can be customized by editing the
`saml2_config.user_mapping_provider.module` config option.
`saml2_config.user_mapping_provider.config` allows you to provide custom
configuration options to the module. Check with the module's documentation for
what options it provides (if any). The options listed by default are for the
user mapping provider built in to Synapse. If using a custom module, you should
comment these options out and use those specified by the module instead.
### Building a Custom SAML Mapping Provider
A custom mapping provider must specify the following methods (a minimal sketch follows the list):
* `__init__(self, parsed_config)`
- Arguments:
- `parsed_config` - A configuration object that is the return value of the
`parse_config` method. You should set any configuration options needed by
the module here.
* `parse_config(config)`
- This method should have the `@staticmethod` decorator.
- Arguments:
- `config` - A `dict` representing the parsed content of the
`saml2_config.user_mapping_provider.config` homeserver config option.
Runs on homeserver startup. Providers should extract and validate
any option values they need here.
- Whatever is returned will be passed back to the user mapping provider module's
`__init__` method during construction.
* `get_saml_attributes(config)`
- This method should have the `@staticmethod` decorator.
- Arguments:
- `config` - An object resulting from a call to `parse_config`.
- Returns a tuple of two sets. The first set equates to the SAML auth
response attributes that are required for the module to function, whereas
the second set consists of those attributes which can be used if available,
but are not necessary.
* `get_remote_user_id(self, saml_response, client_redirect_url)`
- Arguments:
- `saml_response` - A `saml2.response.AuthnResponse` object to extract user
information from.
- `client_redirect_url` - A string, the URL that the client will be
redirected to.
- This method must return a string, which is the unique identifier for the
  user. Commonly the `uid` claim of the response.
* `saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url)`
- Arguments:
- `saml_response` - A `saml2.response.AuthnResponse` object to extract user
information from.
- `failures` - An `int` that represents the number of times the returned
mxid localpart mapping has failed. This should be used
to create a deduplicated mxid localpart which should be
returned instead. For example, if this method returns
`john.doe` as the value of `mxid_localpart` in the returned
dict, and that is already taken on the homeserver, this
method will be called again with the same parameters but
with failures=1. The method should then return a different
`mxid_localpart` value, such as `john.doe1`.
- `client_redirect_url` - A string, the URL that the client will be
redirected to.
- This method must return a dictionary, which will then be used by Synapse
to build a new user. The following keys are allowed:
* `mxid_localpart` - Required. The mxid localpart of the new user.
* `displayname` - The displayname of the new user. If not provided, will default to
the value of `mxid_localpart`.
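As with the OpenID case, a minimal sketch tying these methods together might
look like the following. The `uid` and `displayName` SAML attributes are
assumptions about what the identity provider sends; `saml_response.ava` is
pysaml2's attribute/value mapping.

```python
class MySamlMappingProvider:
    def __init__(self, parsed_config):
        self._mxid_attribute = parsed_config["mxid_attribute"]

    @staticmethod
    def parse_config(config):
        # Extract and validate options at homeserver startup.
        return {"mxid_attribute": config.get("mxid_source_attribute", "uid")}

    @staticmethod
    def get_saml_attributes(config):
        # (required attributes, optional attributes)
        return {config["mxid_attribute"]}, {"displayName"}

    def get_remote_user_id(self, saml_response, client_redirect_url):
        return saml_response.ava[self._mxid_attribute][0]

    def saml_response_to_user_attributes(
        self, saml_response, failures, client_redirect_url
    ):
        base = saml_response.ava[self._mxid_attribute][0].lower()
        # Append the failure count to deduplicate, e.g. "john.doe" -> "john.doe1".
        localpart = base + (str(failures) if failures else "")
        displayname = saml_response.ava.get("displayName", [None])[0]
        return {"mxid_localpart": localpart, "displayname": displayname}
```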
### Default SAML Mapping Provider
Synapse has a built-in SAML mapping provider if a custom provider isn't
specified in the config. It is located at
[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py).

View File

@ -1,6 +1,6 @@
[Unit]
Description=Synapse %i
AssertPathExists=/etc/matrix-synapse/workers/%i.yaml

# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target

View File

@ -219,10 +219,6 @@ Asks the server for the current position of all streams.
Inform the server a pusher should be removed

### REMOTE_SERVER_UP (S, C)

Inform other processes that a remote server may have come back online.

View File

@ -18,7 +18,7 @@ For TURN relaying with `coturn` to work, it must be hosted on a server/endpoint
Hosting TURN behind a NAT (even with appropriate port forwarding) is known to cause issues
and often does not work.

## `coturn` setup

### Initial installation

@ -26,7 +26,13 @@ The TURN daemon `coturn` is available from a variety of sources such as native p

#### Debian installation

Just install the Debian package:
```sh
apt install coturn
```
This will install and start a systemd service called `coturn`.
#### Source installation

@ -63,38 +69,52 @@ The TURN daemon `coturn` is available from a variety of sources such as native p

1. Consider your security settings. TURN lets users request a relay which will
   connect to arbitrary IP addresses and ports. The following configuration is
   suggested as a minimum starting point:

        # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
        no-tcp-relay

        # don't let the relay ever try to connect to private IP address ranges within your network (if any)
        # given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
        denied-peer-ip=10.0.0.0-10.255.255.255
        denied-peer-ip=192.168.0.0-192.168.255.255
        denied-peer-ip=172.16.0.0-172.31.255.255

        # special case the turn server itself so that client->TURN->TURN->client flows work
        allowed-peer-ip=10.0.0.1

        # consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
        user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
        total-quota=1200
1. Also consider supporting TLS/DTLS. To do this, add the following settings
   to `turnserver.conf`:

        # TLS certificates, including intermediate certs.
        # For Let's Encrypt certificates, use `fullchain.pem` here.
        cert=/path/to/fullchain.pem

        # TLS private key file
        pkey=/path/to/privkey.pem

1. Ensure your firewall allows traffic into the TURN server on the ports
   you've configured it to listen on (by default, 3478 and 5349 for TURN
   traffic, allowing both TCP and UDP, and ports 49152-65535 for the UDP
   relay).

1. (Re)start the turn server:

   * If you used the Debian package (or have set up a systemd unit yourself):

     ```sh
     systemctl restart coturn
     ```

   * If you installed from source:

     ```sh
     bin/turnserver -o
     ```

## Synapse setup
Your home server configuration file needs the following extra keys:
@ -126,7 +146,14 @@ As an example, here is the relevant section of the config file for matrix.org:
After updating the homeserver configuration, you must restart synapse:

* If you use synctl:

  ```sh
  cd /where/you/run/synapse
  ./synctl restart
  ```

* If you use systemd:

  ```sh
  systemctl restart synapse.service
  ```

...and your Home Server now supports VoIP relaying!

View File

@ -75,3 +75,6 @@ ignore_missing_imports = True
[mypy-jwt.*]
ignore_missing_imports = True
[mypy-authlib.*]
ignore_missing_imports = True

View File

@ -24,9 +24,8 @@ DISTS = (
"debian:sid", "debian:sid",
"ubuntu:xenial", "ubuntu:xenial",
"ubuntu:bionic", "ubuntu:bionic",
"ubuntu:cosmic",
"ubuntu:disco",
"ubuntu:eoan", "ubuntu:eoan",
"ubuntu:focal",
) )
DESC = '''\ DESC = '''\

View File

@ -3,8 +3,6 @@ import json
import sys
import time

import psycopg2
import yaml
from canonicaljson import encode_canonical_json
@ -12,10 +10,7 @@ from signedjson.key import read_signing_keys
from signedjson.sign import sign_json
from unpaddedbase64 import encode_base64

db_binary_type = memoryview


def select_v1_keys(connection):
@ -72,7 +67,7 @@ def rows_v2(server, json):
    valid_until = json["valid_until_ts"]
    key_json = encode_canonical_json(json)
    for key_id in json["verify_keys"]:
        yield (server, key_id, "-", valid_until, valid_until, db_binary_type(key_json))


def main():

View File

@ -122,7 +122,7 @@ APPEND_ONLY_TABLES = [
"presence_stream", "presence_stream",
"push_rules_stream", "push_rules_stream",
"ex_outlier_stream", "ex_outlier_stream",
"cache_invalidation_stream", "cache_invalidation_stream_by_instance",
"public_room_list_stream", "public_room_list_stream",
"state_group_edges", "state_group_edges",
"stream_ordering_to_exterm", "stream_ordering_to_exterm",
@ -188,7 +188,7 @@ class MockHomeserver:
self.clock = Clock(reactor) self.clock = Clock(reactor)
self.config = config self.config = config
self.hostname = config.server_name self.hostname = config.server_name
self.version_string = "Synapse/"+get_version_string(synapse) self.version_string = "Synapse/" + get_version_string(synapse)
def get_clock(self): def get_clock(self):
return self.clock return self.clock

View File

@ -36,7 +36,7 @@ try:
except ImportError:
    pass

__version__ = "1.13.0"

if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when # We import here so that we don't have to install a bunch of deps when

View File

@ -22,6 +22,7 @@ import pymacaroons
from netaddr import IPAddress

from twisted.internet import defer
from twisted.web.server import Request

import synapse.logging.opentracing as opentracing
import synapse.types
@ -37,7 +38,7 @@ from synapse.api.errors import (
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase
from synapse.types import StateMap, UserID
from synapse.util.caches import register_cache
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
@ -73,7 +74,7 @@ class Auth(object):
        self.store = hs.get_datastore()
        self.state = hs.get_state_handler()

        self.token_cache = LruCache(10000)
        register_cache("cache", "token_cache", self.token_cache)

        self._auth_blocking = AuthBlocking(self.hs)
@ -162,19 +163,25 @@ class Auth(object):
    @defer.inlineCallbacks
    def get_user_by_req(
        self,
        request: Request,
        allow_guest: bool = False,
        rights: str = "access",
        allow_expired: bool = False,
    ):
        """ Get a registered user's ID.

        Args:
            request: An HTTP request with an access_token query parameter.
            allow_guest: If False, will raise an AuthError if the user making the
                request is a guest.
            rights: The operation being performed; the access token must allow this
            allow_expired: If True, allow the request through even if the account
                is expired, or session token lifetime has ended. Note that
                /login will deliver access tokens regardless of expiration.

        Returns:
            defer.Deferred: resolves to a `synapse.types.Requester` object
        Raises:
            InvalidClientCredentialsError if no user by that token exists or the token
            is invalid.
@ -205,7 +212,9 @@ class Auth(object):
            return synapse.types.create_requester(user_id, app_service=app_service)

        user_info = yield self.get_user_by_access_token(
            access_token, rights, allow_expired=allow_expired
        )
        user = user_info["user"]
        token_id = user_info["token_id"]
        is_guest = user_info["is_guest"]
@ -280,13 +289,17 @@ class Auth(object):
        return user_id, app_service

    @defer.inlineCallbacks
    def get_user_by_access_token(
        self, token: str, rights: str = "access", allow_expired: bool = False,
    ):
        """ Validate access token and get user_id from it

        Args:
            token: The access token to get the user by
            rights: The operation being performed; the access token must
                allow this
            allow_expired: If False, raises an InvalidClientTokenError
                if the token is expired
        Returns:
            Deferred[dict]: dict that includes:
                `user` (UserID)
@ -294,8 +307,10 @@ class Auth(object):
                `token_id` (int|None): access token id. May be None if guest
                `device_id` (str|None): device corresponding to access token
        Raises:
            InvalidClientTokenError if a user by that token exists, but the token is
                expired
            InvalidClientCredentialsError if no user by that token exists or the token
                is invalid
        """

        if rights == "access":
@ -304,7 +319,8 @@ class Auth(object):
            if r:
                valid_until_ms = r["valid_until_ms"]
                if (
                    not allow_expired
                    and valid_until_ms is not None
                    and valid_until_ms < self.clock.time_msec()
                ):
                    # there was a valid access token, but it has expired.
@ -575,7 +591,7 @@ class Auth(object):
        return user_level >= send_level

    @staticmethod
    def has_access_token(request: Request):
        """Checks if the request has an access_token.

        Returns:
@ -586,7 +602,7 @@ class Auth(object):
        return bool(query_params) or bool(auth_headers)

    @staticmethod
    def get_access_token_from_request(request: Request):
        """Extracts the access_token from the request.

        Args:

View File

@ -58,7 +58,15 @@ class RoomVersion(object):
    enforce_key_validity = attr.ib()  # bool

    # bool: before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
    special_case_aliases_auth = attr.ib(type=bool)
    # Strictly enforce canonicaljson, do not allow:
    # * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
    # * Floats
    # * NaN, Infinity, -Infinity
    strict_canonicaljson = attr.ib(type=bool)
    # bool: MSC2209: Check 'notifications' key while verifying
    # m.room.power_levels auth rules.
    limit_notifications_power_levels = attr.ib(type=bool)


class RoomVersions(object):
@ -69,6 +77,8 @@ class RoomVersions(object):
        StateResolutionVersions.V1,
        enforce_key_validity=False,
        special_case_aliases_auth=True,
        strict_canonicaljson=False,
        limit_notifications_power_levels=False,
    )
    V2 = RoomVersion(
        "2",
@ -77,6 +87,8 @@ class RoomVersions(object):
        StateResolutionVersions.V2,
        enforce_key_validity=False,
        special_case_aliases_auth=True,
        strict_canonicaljson=False,
        limit_notifications_power_levels=False,
    )
    V3 = RoomVersion(
        "3",
@ -85,6 +97,8 @@ class RoomVersions(object):
        StateResolutionVersions.V2,
        enforce_key_validity=False,
        special_case_aliases_auth=True,
        strict_canonicaljson=False,
        limit_notifications_power_levels=False,
    )
    V4 = RoomVersion(
        "4",
@ -93,6 +107,8 @@ class RoomVersions(object):
        StateResolutionVersions.V2,
        enforce_key_validity=False,
        special_case_aliases_auth=True,
        strict_canonicaljson=False,
        limit_notifications_power_levels=False,
    )
    V5 = RoomVersion(
        "5",
@ -101,14 +117,18 @@ class RoomVersions(object):
        StateResolutionVersions.V2,
        enforce_key_validity=True,
        special_case_aliases_auth=True,
        strict_canonicaljson=False,
        limit_notifications_power_levels=False,
    )
    V6 = RoomVersion(
        "6",
        RoomDisposition.STABLE,
        EventFormatVersions.V3,
        StateResolutionVersions.V2,
        enforce_key_validity=True,
        special_case_aliases_auth=False,
        strict_canonicaljson=True,
        limit_notifications_power_levels=True,
    )
@ -120,6 +140,6 @@ KNOWN_ROOM_VERSIONS = {
        RoomVersions.V3,
        RoomVersions.V4,
        RoomVersions.V5,
        RoomVersions.V6,
    )
}  # type: Dict[str, RoomVersion]
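As a quick illustration of how this map is consumed, a caller can resolve a
version string received over federation into its capabilities (a sketch,
assuming the names defined in this file):

```python
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS

room_version = KNOWN_ROOM_VERSIONS.get("6")
if room_version is not None:
    # v6 rooms enforce strict canonical JSON and the MSC2209 power-levels check.
    assert room_version.strict_canonicaljson
    assert room_version.limit_notifications_power_levels
```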

View File

@ -26,7 +26,6 @@ from twisted.web.resource import NoResource
import synapse
import synapse.events
from synapse.api.errors import SynapseError
from synapse.api.urls import (
    CLIENT_API_PREFIX,
@ -48,6 +47,7 @@ from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@ -81,11 +81,6 @@ from synapse.replication.tcp.streams import (
    ToDeviceStream,
    TypingStream,
)
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
@ -122,11 +117,13 @@ from synapse.rest.client.v2_alpha.register import RegisterRestServlet
from synapse.rest.client.versions import VersionsRestServlet
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.server import HomeServer
from synapse.storage.data_stores.main.censor_events import CensorEventsStore
from synapse.storage.data_stores.main.media_repository import MediaRepositoryStore
from synapse.storage.data_stores.main.monthly_active_users import (
    MonthlyActiveUsersWorkerStore,
)
from synapse.storage.data_stores.main.presence import UserPresenceState
from synapse.storage.data_stores.main.search import SearchWorkerStore
from synapse.storage.data_stores.main.ui_auth import UIAuthWorkerStore
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.types import ReadReceipt
@ -429,6 +426,7 @@ class GenericWorkerSlavedStore(
    SlavedGroupServerStore,
    SlavedAccountDataStore,
    SlavedPusherStore,
    CensorEventsStore,
    SlavedEventStore,
    SlavedKeyStore,
    RoomStore,
@ -442,6 +440,7 @@ class GenericWorkerSlavedStore(
    SlavedFilteringStore,
    MonthlyActiveUsersWorkerStore,
    MediaRepositoryStore,
    SearchWorkerStore,
    BaseSlavedStore,
):
    def __init__(self, database, db_conn, hs):
@ -559,6 +558,9 @@ class GenericWorkerServer(HomeServer):
        if name in ["keys", "federation"]:
            resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)

        if name == "replication":
            resources[REPLICATION_PREFIX] = ReplicationRestResource(self)

        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
@ -618,7 +620,7 @@ class GenericWorkerServer(HomeServer):

class GenericWorkerReplicationHandler(ReplicationDataHandler):
    def __init__(self, hs):
        super(GenericWorkerReplicationHandler, self).__init__(hs)

        self.store = hs.get_datastore()
        self.typing_handler = hs.get_typing_handler()
@ -644,30 +646,7 @@ class GenericWorkerReplicationHandler(ReplicationDataHandler):
            stream_name, token, rows
        )

        if stream_name == PushRulesStream.NAME:
            self.notifier.on_new_event(
                "push_rules_key", token, users=[row.user_id for row in rows]
            )

View File

@ -69,7 +69,6 @@ from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import IncorrectDatabaseSetup
from synapse.storage.prepare_database import UpgradeDatabaseException
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
@ -192,6 +191,11 @@ class SynapseHomeServer(HomeServer):
            }
        )

        if self.get_config().oidc_enabled:
            from synapse.rest.oidc import OIDCResource

            resources["/_synapse/oidc"] = OIDCResource(self)

        if self.get_config().saml2_enabled:
            from synapse.rest.saml2 import SAML2Resource
@ -422,6 +426,13 @@ def setup(config_options):
            # Check if it needs to be reprovisioned every day.
            hs.get_clock().looping_call(reprovision_acme, 24 * 60 * 60 * 1000)

        # Load the OIDC provider metadatas, if OIDC is enabled.
        if hs.config.oidc_enabled:
            oidc = hs.get_oidc_handler()
            # Loading the provider metadata also ensures the provider config is valid.
            yield defer.ensureDeferred(oidc.load_metadata())
            yield defer.ensureDeferred(oidc.load_jwks())

        _base.start(hs, config.listeners)
        hs.get_datastore().db.updates.start_doing_background_updates()
@ -504,8 +515,8 @@ def phone_stats_home(hs, stats, stats_process=_stats_process):
    daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
    stats["daily_sent_messages"] = daily_sent_messages
    stats["cache_factor"] = hs.config.caches.global_factor
    stats["event_cache_size"] = hs.config.caches.event_cache_size

    #
    # Performance statistics

View File

@ -270,7 +270,7 @@ class ApplicationService(object):
    def is_exclusive_room(self, room_id):
        return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)

    def get_exclusive_user_regexes(self):
        """Get the list of regexes used to determine if a user is exclusively
        registered by the AS
        """

View File

@ -13,6 +13,7 @@ from synapse.config import (
    key,
    logger,
    metrics,
    oidc_config,
    password,
    password_auth_providers,
    push,
@ -59,6 +60,7 @@ class RootConfig:
    saml2: saml2_config.SAML2Config
    cas: cas.CasConfig
    sso: sso.SSOConfig
    oidc: oidc_config.OIDCConfig
    jwt: jwt_config.JWTConfig
    password: password.PasswordConfig
    email: emailconfig.EmailConfig

164
synapse/config/cache.py Normal file
View File

@ -0,0 +1,164 @@
# -*- coding: utf-8 -*-
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from typing import Callable, Dict
from ._base import Config, ConfigError
# The prefix for all cache factor-related environment variables
_CACHES = {}
_CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"
_DEFAULT_FACTOR_SIZE = 0.5
_DEFAULT_EVENT_CACHE_SIZE = "10K"
class CacheProperties(object):
def __init__(self):
# The default factor size for all caches
self.default_factor_size = float(
os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
)
self.resize_all_caches_func = None
properties = CacheProperties()
def add_resizable_cache(cache_name: str, cache_resize_callback: Callable):
    """Register a cache whose size can dynamically change

    Args:
        cache_name: A reference to the cache
        cache_resize_callback: A callback function that will be run whenever
            the cache needs to be resized
    """
_CACHES[cache_name.lower()] = cache_resize_callback
# Ensure all loaded caches are sized appropriately
#
# This method should only run once the config has been read,
# as it uses values read from it
if properties.resize_all_caches_func:
properties.resize_all_caches_func()
class CacheConfig(Config):
section = "caches"
_environ = os.environ
@staticmethod
def reset():
"""Resets the caches to their defaults. Used for tests."""
properties.default_factor_size = float(
os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
)
properties.resize_all_caches_func = None
_CACHES.clear()
def generate_config_section(self, **kwargs):
return """\
## Caching ##
# Caching can be configured through the following options.
#
# A cache 'factor' is a multiplier that can be applied to each of
# Synapse's caches in order to increase or decrease the maximum
# number of entries that can be stored.
# The number of events to cache in memory. Not affected by
# caches.global_factor.
#
#event_cache_size: 10K
caches:
# Controls the global cache factor, which is the default cache factor
# for all caches if a specific factor for that cache is not otherwise
# set.
#
# This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
# variable. Setting by environment variable takes priority over
# setting through the config file.
#
        # Defaults to 0.5, which will halve the size of all caches.
#
#global_factor: 1.0
# A dictionary of cache name to cache factor for that individual
# cache. Overrides the global cache factor for a given cache.
#
# These can also be set through environment variables comprised
# of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
# letters and underscores. Setting by environment variable
# takes priority over setting through the config file.
# Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
#
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
"""
def read_config(self, config, **kwargs):
self.event_cache_size = self.parse_size(
config.get("event_cache_size", _DEFAULT_EVENT_CACHE_SIZE)
)
self.cache_factors = {} # type: Dict[str, float]
cache_config = config.get("caches") or {}
self.global_factor = cache_config.get(
"global_factor", properties.default_factor_size
)
if not isinstance(self.global_factor, (int, float)):
raise ConfigError("caches.global_factor must be a number.")
# Set the global one so that it's reflected in new caches
properties.default_factor_size = self.global_factor
# Load cache factors from the config
individual_factors = cache_config.get("per_cache_factors") or {}
if not isinstance(individual_factors, dict):
raise ConfigError("caches.per_cache_factors must be a dictionary")
# Override factors from environment if necessary
individual_factors.update(
{
key[len(_CACHE_PREFIX) + 1 :].lower(): float(val)
for key, val in self._environ.items()
if key.startswith(_CACHE_PREFIX + "_")
}
)
for cache, factor in individual_factors.items():
if not isinstance(factor, (int, float)):
raise ConfigError(
"caches.per_cache_factors.%s must be a number" % (cache.lower(),)
)
self.cache_factors[cache.lower()] = factor
# Resize all caches (if necessary) with the new factors we've loaded
self.resize_all_caches()
# Store this function so that it can be called from other classes without
# needing an instance of Config
properties.resize_all_caches_func = self.resize_all_caches
def resize_all_caches(self):
"""Ensure all cache sizes are up to date
For each cache, run the mapped callback function with either
a specific cache factor or the default, global one.
"""
for cache_name, callback in _CACHES.items():
new_factor = self.cache_factors.get(cache_name, self.global_factor)
callback(new_factor)
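To illustrate the contract above: a cache registers a resize callback once, and
the config machinery later drives that callback with either the cache's own
per-cache factor or the global one. A sketch (the cache name and base size are
made up for illustration):

```python
from synapse.config.cache import add_resizable_cache

_max_entries = 1000

def _resize_my_cache(new_factor):
    # Scale this cache's capacity by the configured factor.
    global _max_entries
    _max_entries = int(1000 * new_factor)

# Registration also reapplies the current factors if the config has
# already been read.
add_resizable_cache("my_cache", _resize_my_cache)
```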

View File

@ -68,10 +68,6 @@ database:
  name: sqlite3
  args:
    database: %(database_path)s
"""
@ -116,8 +112,6 @@ class DatabaseConfig(Config):
        self.databases = []

    def read_config(self, config, **kwargs):
        # We *experimentally* support specifying multiple databases via the
        # `databases` key. This is a map from a label to database config in the
        # same format as the `database` config option, plus an extra

View File

@ -17,6 +17,7 @@
from ._base import RootConfig
from .api import ApiConfig
from .appservice import AppServiceConfig
from .cache import CacheConfig
from .captcha import CaptchaConfig
from .cas import CasConfig
from .consent_config import ConsentConfig
@ -27,6 +28,7 @@ from .jwt_config import JWTConfig
from .key import KeyConfig
from .logger import LoggingConfig
from .metrics import MetricsConfig
from .oidc_config import OIDCConfig
from .password import PasswordConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .push import PushConfig
@ -54,6 +56,7 @@ class HomeServerConfig(RootConfig):
    config_classes = [
        ServerConfig,
        TlsConfig,
        CacheConfig,
        DatabaseConfig,
        LoggingConfig,
        RatelimitConfig,
@ -66,6 +69,7 @@ class HomeServerConfig(RootConfig):
        AppServiceConfig,
        KeyConfig,
        SAML2Config,
        OIDCConfig,
        CasConfig,
        SSOConfig,
        JWTConfig,

View File

@ -0,0 +1,177 @@
# -*- coding: utf-8 -*-
# Copyright 2020 Quentin Gliech
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module
from ._base import Config, ConfigError
DEFAULT_USER_MAPPING_PROVIDER = "synapse.handlers.oidc_handler.JinjaOidcMappingProvider"
class OIDCConfig(Config):
section = "oidc"
def read_config(self, config, **kwargs):
self.oidc_enabled = False
oidc_config = config.get("oidc_config")
if not oidc_config or not oidc_config.get("enabled", False):
return
try:
check_requirements("oidc")
except DependencyException as e:
raise ConfigError(e.message)
public_baseurl = self.public_baseurl
if public_baseurl is None:
raise ConfigError("oidc_config requires a public_baseurl to be set")
self.oidc_callback_url = public_baseurl + "_synapse/oidc/callback"
self.oidc_enabled = True
self.oidc_discover = oidc_config.get("discover", True)
self.oidc_issuer = oidc_config["issuer"]
self.oidc_client_id = oidc_config["client_id"]
self.oidc_client_secret = oidc_config["client_secret"]
self.oidc_client_auth_method = oidc_config.get(
"client_auth_method", "client_secret_basic"
)
self.oidc_scopes = oidc_config.get("scopes", ["openid"])
self.oidc_authorization_endpoint = oidc_config.get("authorization_endpoint")
self.oidc_token_endpoint = oidc_config.get("token_endpoint")
self.oidc_userinfo_endpoint = oidc_config.get("userinfo_endpoint")
self.oidc_jwks_uri = oidc_config.get("jwks_uri")
self.oidc_subject_claim = oidc_config.get("subject_claim", "sub")
self.oidc_skip_verification = oidc_config.get("skip_verification", False)
ump_config = oidc_config.get("user_mapping_provider", {})
ump_config.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER)
ump_config.setdefault("config", {})
(
self.oidc_user_mapping_provider_class,
self.oidc_user_mapping_provider_config,
) = load_module(ump_config)
# Ensure loaded user mapping module has defined all necessary methods
required_methods = [
"get_remote_user_id",
"map_user_attributes",
]
missing_methods = [
method
for method in required_methods
if not hasattr(self.oidc_user_mapping_provider_class, method)
]
if missing_methods:
raise ConfigError(
"Class specified by oidc_config."
"user_mapping_provider.module is missing required "
"methods: %s" % (", ".join(missing_methods),)
)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
# Enable OpenID Connect for registration and login. Uses authlib.
#
oidc_config:
# enable OpenID Connect. Defaults to false.
#
#enabled: true
# use the OIDC discovery mechanism to discover endpoints. Defaults to true.
#
#discover: true
        # the OIDC issuer. Used to validate tokens and discover the provider's endpoints. Required.
#
#issuer: "https://accounts.example.com/"
# oauth2 client id to use. Required.
#
#client_id: "provided-by-your-issuer"
# oauth2 client secret to use. Required.
#
#client_secret: "provided-by-your-issuer"
# auth method to use when exchanging the token.
# Valid values are "client_secret_basic" (default), "client_secret_post" and "none".
#
        #client_auth_method: "client_secret_basic"
        # list of scopes to request. This should include the "openid" scope. Defaults to ["openid"].
#
#scopes: ["openid"]
# the oauth2 authorization endpoint. Required if provider discovery is disabled.
#
#authorization_endpoint: "https://accounts.example.com/oauth2/auth"
# the oauth2 token endpoint. Required if provider discovery is disabled.
#
#token_endpoint: "https://accounts.example.com/oauth2/token"
        # the OIDC userinfo endpoint. Required if discovery is disabled and the "openid" scope is not requested.
#
#userinfo_endpoint: "https://accounts.example.com/userinfo"
# URI where to fetch the JWKS. Required if discovery is disabled and the "openid" scope is used.
#
#jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
# skip metadata verification. Defaults to false.
# Use this if you are connecting to a provider that is not OpenID Connect compliant.
# Avoid this in production.
#
#skip_verification: false
# An external module can be provided here as a custom solution to mapping
        # attributes returned from an OIDC provider onto a Matrix user.
#
user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
# Default is {mapping_provider!r}.
#
#module: mapping_provider.OidcMappingProvider
            # Custom configuration values for the module. The options below are intended
            # for the built-in provider; change them if using a custom
# module. This section will be passed as a Python dictionary to the
# module's `parse_config` method.
#
# Below is the config of the default mapping provider, based on Jinja2
# templates. Those templates are used to render user attributes, where the
# userinfo object is available through the `user` variable.
#
config:
# name of the claim containing a unique identifier for the user.
# Defaults to `sub`, which OpenID Connect compliant providers should provide.
#
#subject_claim: "sub"
# Jinja2 template for the localpart of the MXID
#
localpart_template: "{{{{ user.preferred_username }}}}"
# Jinja2 template for the display name to set on first login. Optional.
#
#display_name_template: "{{{{ user.given_name }}}} {{{{ user.last_name }}}}"
""".format(
mapping_provider=DEFAULT_USER_MAPPING_PROVIDER
)

View File

@ -51,7 +51,7 @@ class ServerNoticesConfig(Config):
            None if server notices are not enabled.
        server_notices_mxid_avatar_url (str|None):
            The MXC URL for the avatar of the server notices user.
            None if server notices are not enabled.
        server_notices_room_name (str|None):

View File

@ -13,6 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Any, Dict, List, Tuple

from synapse.config import ConfigError
from synapse.util.module_loader import load_module

from ._base import Config
@ -22,16 +25,35 @@ class SpamCheckerConfig(Config):
    section = "spamchecker"

    def read_config(self, config, **kwargs):
        self.spam_checkers = []  # type: List[Tuple[Any, Dict]]

        spam_checkers = config.get("spam_checker") or []
        if isinstance(spam_checkers, dict):
            # The spam_checker config option used to only support one
            # spam checker, and thus was simply a dictionary with module
            # and config keys. Support this old behaviour by checking
            # to see if the option resolves to a dictionary
            self.spam_checkers.append(load_module(spam_checkers))
        elif isinstance(spam_checkers, list):
            for spam_checker in spam_checkers:
                if not isinstance(spam_checker, dict):
                    raise ConfigError("spam_checker syntax is incorrect")

                self.spam_checkers.append(load_module(spam_checker))
        else:
            raise ConfigError("spam_checker syntax is incorrect")

    def generate_config_section(self, **kwargs):
        return """\
        # Spam checkers are third-party modules that can block specific actions
        # of local users, such as creating rooms and registering undesirable
        # usernames, as well as remote users by redacting incoming events.
        #
        spam_checker:
           #- module: "my_custom_project.SuperSpamChecker"
           #  config:
           #    example_option: 'things'
           #- module: "some_other_project.BadEventStopper"
           #  config:
           #    example_stop_events_from: ['@bad:example.com']
        """

View File

@@ -36,17 +36,13 @@ class SSOConfig(Config):
         if not template_dir:
             template_dir = pkg_resources.resource_filename("synapse", "res/templates",)

-        self.sso_redirect_confirm_template_dir = template_dir
+        self.sso_template_dir = template_dir

         self.sso_account_deactivated_template = self.read_file(
-            os.path.join(
-                self.sso_redirect_confirm_template_dir, "sso_account_deactivated.html"
-            ),
+            os.path.join(self.sso_template_dir, "sso_account_deactivated.html"),
             "sso_account_deactivated_template",
         )
         self.sso_auth_success_template = self.read_file(
-            os.path.join(
-                self.sso_redirect_confirm_template_dir, "sso_auth_success.html"
-            ),
+            os.path.join(self.sso_template_dir, "sso_auth_success.html"),
             "sso_auth_success_template",
         )

@@ -137,6 +133,13 @@ class SSOConfig(Config):
         #
         # This template has no additional variables.
         #
+        # * HTML page to display to users if something goes wrong during the
+        #   OpenID Connect authentication process: 'sso_error.html'.
+        #
+        #   When rendering, this template is given two variables:
+        #     * error: the technical name of the error
+        #     * error_description: a human-readable message for the error
+        #
         # You can see the default templates at:
         # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
         #
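As a rough illustration of how the new `sso_error.html` template and its two variables could be exercised: the sketch below assumes a Jinja2 setup (this diff does not show how Synapse itself renders the template), and both error values are invented.

# Hypothetical rendering of the sso_error.html template described above.
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("synapse/res/templates"))
template = env.get_template("sso_error.html")
html = template.render(
    error="access_denied",  # technical name of the error
    error_description="The login attempt was cancelled.",  # human-readable message
)
print(html)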

synapse/config/workers.py
View File

@@ -13,9 +13,20 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import attr
+
 from ._base import Config


+@attr.s
+class InstanceLocationConfig:
+    """The host and port to talk to an instance via HTTP replication.
+    """
+
+    host = attr.ib(type=str)
+    port = attr.ib(type=int)
+
+
 class WorkerConfig(Config):
     """The workers are processes run separately to the main synapse process.
     They have their own pid_file and listener configuration. They use the

@@ -71,6 +82,12 @@ class WorkerConfig(Config):
         elif not bind_addresses:
             bind_addresses.append("")

+        # A map from instance name to host/port of their HTTP replication endpoint.
+        instance_map = config.get("instance_map", {}) or {}
+        self.instance_map = {
+            name: InstanceLocationConfig(**c) for name, c in instance_map.items()
+        }
+
     def read_arguments(self, args):
         # We support a bunch of command line arguments that override options in
         # the config. A lot of these options have a worker_* prefix when running
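A short standalone sketch of how the new `instance_map` option is consumed: each named entry is unpacked straight into an `InstanceLocationConfig(host=..., port=...)`. The worker name and port below are hypothetical.

import attr

@attr.s
class InstanceLocationConfig:
    """The host and port to talk to an instance via HTTP replication."""

    host = attr.ib(type=str)
    port = attr.ib(type=int)

# e.g. a homeserver.yaml fragment (invented values):
#   instance_map:
#     event_persister1:
#       host: localhost
#       port: 8034
instance_map_yaml = {"event_persister1": {"host": "localhost", "port": 8034}}
instance_map = {
    name: InstanceLocationConfig(**c) for name, c in instance_map_yaml.items()
}
assert instance_map["event_persister1"].port == 8034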

synapse/event_auth.py
View File

@@ -15,7 +15,7 @@
 # limitations under the License.

 import logging
-from typing import Set, Tuple
+from typing import List, Optional, Set, Tuple

 from canonicaljson import encode_canonical_json
 from signedjson.key import decode_verify_key_bytes

@@ -29,18 +29,19 @@ from synapse.api.room_versions import (
     EventFormatVersions,
     RoomVersion,
 )
-from synapse.types import UserID, get_domain_from_id
+from synapse.events import EventBase
+from synapse.types import StateMap, UserID, get_domain_from_id

 logger = logging.getLogger(__name__)


 def check(
     room_version_obj: RoomVersion,
-    event,
-    auth_events,
-    do_sig_check=True,
-    do_size_check=True,
-):
+    event: EventBase,
+    auth_events: StateMap[EventBase],
+    do_sig_check: bool = True,
+    do_size_check: bool = True,
+) -> None:
     """ Checks if this event is correctly authed.

     Args:

@@ -181,7 +182,7 @@ def check(
         _can_send_event(event, auth_events)

     if event.type == EventTypes.PowerLevels:
-        _check_power_levels(event, auth_events)
+        _check_power_levels(room_version_obj, event, auth_events)

     if event.type == EventTypes.Redaction:
         check_redaction(room_version_obj, event, auth_events)

@@ -189,7 +190,7 @@ def check(
     logger.debug("Allowing! %s", event)


-def _check_size_limits(event):
+def _check_size_limits(event: EventBase) -> None:
     def too_big(field):
         raise EventSizeError("%s too large" % (field,))

@@ -207,13 +208,18 @@ def _check_size_limits(event):
         too_big("event")


-def _can_federate(event, auth_events):
+def _can_federate(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
     creation_event = auth_events.get((EventTypes.Create, ""))
+    # There should always be a creation event, but if not don't federate.
+    if not creation_event:
+        return False

     return creation_event.content.get("m.federate", True) is True


-def _is_membership_change_allowed(event, auth_events):
+def _is_membership_change_allowed(
+    event: EventBase, auth_events: StateMap[EventBase]
+) -> None:
     membership = event.content["membership"]

     # Check if this is the room creator joining:

@@ -339,21 +345,25 @@ def _is_membership_change_allowed(event, auth_events):
         raise AuthError(500, "Unknown membership %s" % membership)


-def _check_event_sender_in_room(event, auth_events):
+def _check_event_sender_in_room(
+    event: EventBase, auth_events: StateMap[EventBase]
+) -> None:
     key = (EventTypes.Member, event.user_id)
     member_event = auth_events.get(key)

-    return _check_joined_room(member_event, event.user_id, event.room_id)
+    _check_joined_room(member_event, event.user_id, event.room_id)


-def _check_joined_room(member, user_id, room_id):
+def _check_joined_room(member: Optional[EventBase], user_id: str, room_id: str) -> None:
     if not member or member.membership != Membership.JOIN:
         raise AuthError(
             403, "User %s not in room %s (%s)" % (user_id, room_id, repr(member))
         )


-def get_send_level(etype, state_key, power_levels_event):
+def get_send_level(
+    etype: str, state_key: Optional[str], power_levels_event: Optional[EventBase]
+) -> int:
     """Get the power level required to send an event of a given type

     The federation spec [1] refers to this as "Required Power Level".

@@ -361,13 +371,13 @@ def get_send_level(etype, state_key, power_levels_event):
     https://matrix.org/docs/spec/server_server/unstable.html#definitions

     Args:
-        etype (str): type of event
-        state_key (str|None): state_key of state event, or None if it is not
+        etype: type of event
+        state_key: state_key of state event, or None if it is not
             a state event.
-        power_levels_event (synapse.events.EventBase|None): power levels event
+        power_levels_event: power levels event
             in force at this point in the room

     Returns:
-        int: power level required to send this event.
+        power level required to send this event.
     """

     if power_levels_event:

@@ -388,7 +398,7 @@ def get_send_level(etype, state_key, power_levels_event):
     return int(send_level)


-def _can_send_event(event, auth_events):
+def _can_send_event(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
     power_levels_event = _get_power_level_event(auth_events)

     send_level = get_send_level(event.type, event.get("state_key"), power_levels_event)

@@ -410,7 +420,9 @@ def _can_send_event(event, auth_events):
     return True


-def check_redaction(room_version_obj: RoomVersion, event, auth_events):
+def check_redaction(
+    room_version_obj: RoomVersion, event: EventBase, auth_events: StateMap[EventBase],
+) -> bool:
     """Check whether the event sender is allowed to redact the target event.

     Returns:

@@ -442,7 +454,9 @@ def check_redaction(room_version_obj: RoomVersion, event, auth_events):
     raise AuthError(403, "You don't have permission to redact events")


-def _check_power_levels(event, auth_events):
+def _check_power_levels(
+    room_version_obj: RoomVersion, event: EventBase, auth_events: StateMap[EventBase],
+) -> None:
     user_list = event.content.get("users", {})
     # Validate users
     for k, v in user_list.items():

@@ -473,7 +487,7 @@ def _check_power_levels(event, auth_events):
         ("redact", None),
         ("kick", None),
         ("invite", None),
-    ]
+    ]  # type: List[Tuple[str, Optional[str]]]

     old_list = current_state.content.get("users", {})
     for user in set(list(old_list) + list(user_list)):

@@ -484,6 +498,14 @@ def _check_power_levels(event, auth_events):
     for ev_id in set(list(old_list) + list(new_list)):
         levels_to_check.append((ev_id, "events"))

+    # MSC2209 specifies these checks should also be done for the "notifications"
+    # key.
+    if room_version_obj.limit_notifications_power_levels:
+        old_list = current_state.content.get("notifications", {})
+        new_list = event.content.get("notifications", {})
+        for ev_id in set(list(old_list) + list(new_list)):
+            levels_to_check.append((ev_id, "notifications"))
+
     old_state = current_state.content
     new_state = event.content

@@ -495,12 +517,12 @@ def _check_power_levels(event, auth_events):
             new_loc = new_loc.get(dir, {})

         if level_to_check in old_loc:
-            old_level = int(old_loc[level_to_check])
+            old_level = int(old_loc[level_to_check])  # type: Optional[int]
         else:
             old_level = None

         if level_to_check in new_loc:
-            new_level = int(new_loc[level_to_check])
+            new_level = int(new_loc[level_to_check])  # type: Optional[int]
         else:
             new_level = None

@@ -526,21 +548,21 @@ def _check_power_levels(event, auth_events):
     )


-def _get_power_level_event(auth_events):
+def _get_power_level_event(auth_events: StateMap[EventBase]) -> Optional[EventBase]:
     return auth_events.get((EventTypes.PowerLevels, ""))


-def get_user_power_level(user_id, auth_events):
+def get_user_power_level(user_id: str, auth_events: StateMap[EventBase]) -> int:
     """Get a user's power level

     Args:
-        user_id (str): user's id to look up in power_levels
-        auth_events (dict[(str, str), synapse.events.EventBase]):
+        user_id: user's id to look up in power_levels
+        auth_events:
             state in force at this point in the room (or rather, a subset of
             it including at least the create event and power levels event.

     Returns:
-        int: the user's power level in this room.
+        the user's power level in this room.
     """

     power_level_event = _get_power_level_event(auth_events)
     if power_level_event:

@@ -566,7 +588,7 @@ def get_user_power_level(user_id, auth_events):
     return 0


-def _get_named_level(auth_events, name, default):
+def _get_named_level(auth_events: StateMap[EventBase], name: str, default: int) -> int:
     power_level_event = _get_power_level_event(auth_events)

     if not power_level_event:

@@ -579,7 +601,7 @@ def _get_named_level(auth_events, name, default):
     return default


-def _verify_third_party_invite(event, auth_events):
+def _verify_third_party_invite(event: EventBase, auth_events: StateMap[EventBase]):
     """
     Validates that the invite event is authorized by a previous third-party invite.

@@ -654,7 +676,7 @@ def get_public_keys(invite_event):
     return public_keys


-def auth_types_for_event(event) -> Set[Tuple[str, str]]:
+def auth_types_for_event(event: EventBase) -> Set[Tuple[str, str]]:
     """Given an event, return a list of (EventType, StateKey) that may be
     needed to auth the event. The returned list may be a superset of what
     would actually be required depending on the full state of the room.
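For readers unfamiliar with the `StateMap` alias that these new signatures use: it is a mapping from `(event type, state key)` pairs to values, which is why lookups such as `auth_events.get((EventTypes.Create, ""))` work. A minimal standalone sketch, with plain dicts standing in for `EventBase` (so not Synapse's actual event type):

from typing import Dict, Optional, Tuple, TypeVar

T = TypeVar("T")
StateMap = Dict[Tuple[str, str], T]  # (event_type, state_key) -> event

def get_power_level_event(auth_events: StateMap[dict]) -> Optional[dict]:
    # Power levels live under the ("m.room.power_levels", "") state key.
    return auth_events.get(("m.room.power_levels", ""))

auth_events = {
    ("m.room.create", ""): {"content": {"m.federate": True}},
    ("m.room.power_levels", ""): {"content": {"events_default": 0}},
}
print(get_power_level_event(auth_events))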

synapse/events/spamcheck.py
View File

@@ -15,7 +15,7 @@
 # limitations under the License.

 import inspect
-from typing import Dict
+from typing import Any, Dict, List

 from synapse.spam_checker_api import SpamCheckerApi

@@ -26,24 +26,17 @@ if MYPY:

 class SpamChecker(object):
     def __init__(self, hs: "synapse.server.HomeServer"):
-        self.spam_checker = None
+        self.spam_checkers = []  # type: List[Any]

-        module = None
-        config = None
-        try:
-            module, config = hs.config.spam_checker
-        except Exception:
-            pass
-
-        if module is not None:
+        for module, config in hs.config.spam_checkers:
             # Older spam checkers don't accept the `api` argument, so we
             # try and detect support.
             spam_args = inspect.getfullargspec(module)
             if "api" in spam_args.args:
                 api = SpamCheckerApi(hs)
-                self.spam_checker = module(config=config, api=api)
+                self.spam_checkers.append(module(config=config, api=api))
             else:
-                self.spam_checker = module(config=config)
+                self.spam_checkers.append(module(config=config))

     def check_event_for_spam(self, event: "synapse.events.EventBase") -> bool:
         """Checks if a given event is considered "spammy" by this server.

@@ -58,10 +51,11 @@ class SpamChecker(object):
         Returns:
             True if the event is spammy.
         """
-        if self.spam_checker is None:
-            return False
+        for spam_checker in self.spam_checkers:
+            if spam_checker.check_event_for_spam(event):
+                return True

-        return self.spam_checker.check_event_for_spam(event)
+        return False

     def user_may_invite(
         self, inviter_userid: str, invitee_userid: str, room_id: str

@@ -78,12 +72,14 @@ class SpamChecker(object):
         Returns:
             True if the user may send an invite, otherwise False
         """
-        if self.spam_checker is None:
-            return True
+        for spam_checker in self.spam_checkers:
+            if (
+                spam_checker.user_may_invite(inviter_userid, invitee_userid, room_id)
+                is False
+            ):
+                return False

-        return self.spam_checker.user_may_invite(
-            inviter_userid, invitee_userid, room_id
-        )
+        return True

     def user_may_create_room(self, userid: str) -> bool:
         """Checks if a given user may create a room

@@ -96,10 +92,11 @@ class SpamChecker(object):
         Returns:
             True if the user may create a room, otherwise False
         """
-        if self.spam_checker is None:
-            return True
+        for spam_checker in self.spam_checkers:
+            if spam_checker.user_may_create_room(userid) is False:
+                return False

-        return self.spam_checker.user_may_create_room(userid)
+        return True

     def user_may_create_room_alias(self, userid: str, room_alias: str) -> bool:
         """Checks if a given user may create a room alias

@@ -113,10 +110,11 @@ class SpamChecker(object):
         Returns:
             True if the user may create a room alias, otherwise False
         """
-        if self.spam_checker is None:
-            return True
+        for spam_checker in self.spam_checkers:
+            if spam_checker.user_may_create_room_alias(userid, room_alias) is False:
+                return False

-        return self.spam_checker.user_may_create_room_alias(userid, room_alias)
+        return True

     def user_may_publish_room(self, userid: str, room_id: str) -> bool:
         """Checks if a given user may publish a room to the directory

@@ -130,10 +128,11 @@ class SpamChecker(object):
         Returns:
             True if the user may publish the room, otherwise False
         """
-        if self.spam_checker is None:
-            return True
+        for spam_checker in self.spam_checkers:
+            if spam_checker.user_may_publish_room(userid, room_id) is False:
+                return False

-        return self.spam_checker.user_may_publish_room(userid, room_id)
+        return True

     def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool:
         """Checks if a user ID or display name are considered "spammy" by this server.

@@ -150,13 +149,14 @@ class SpamChecker(object):
         Returns:
             True if the user is spammy.
         """
-        if self.spam_checker is None:
-            return False
-
-        # For backwards compatibility, if the method does not exist on the spam checker, fallback to not interfering.
-        checker = getattr(self.spam_checker, "check_username_for_spam", None)
-        if not checker:
-            return False
-
-        # Make a copy of the user profile object to ensure the spam checker
-        # cannot modify it.
-        return checker(user_profile.copy())
+        for spam_checker in self.spam_checkers:
+            # For backwards compatibility, only run if the method exists on the
+            # spam checker
+            checker = getattr(spam_checker, "check_username_for_spam", None)
+            if checker:
+                # Make a copy of the user profile object to ensure the spam checker
+                # cannot modify it.
+                if checker(user_profile.copy()):
+                    return True
+
+        return False
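The aggregation pattern above is worth calling out: "is this spammy?" checks OR the checkers together (any one can flag), while "may the user ...?" checks AND them (any one can veto, and only by returning exactly `False`, not merely a falsy value). A toy demonstration with stub checkers whose names are invented for illustration:

class LenientChecker:
    def check_event_for_spam(self, event):
        return False  # flags nothing

    def user_may_create_room(self, userid):
        return True  # allows everything

class StrictChecker:
    def check_event_for_spam(self, event):
        return True  # flags everything

    def user_may_create_room(self, userid):
        return False  # vetoes everything

checkers = [LenientChecker(), StrictChecker()]
# Any checker returning True makes the event spammy:
print(any(c.check_event_for_spam({}) for c in checkers))  # True
# Any checker returning exactly False vetoes room creation:
print(all(c.user_may_create_room("@user:example.com") is not False for c in checkers))  # False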

Some files were not shown because too many files have changed in this diff.