Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes
commit 404a2d70be
@@ -4,9 +4,9 @@ about: Create a report to help us improve

 ---

 <!--

 **IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
 You will likely get better support more quickly if you ask in ** #matrix:matrix.org ** ;)

@@ -17,7 +17,7 @@ the necessary data to fix your issue.
 You can also preview your report before submitting it. You may remove sections
 that aren't relevant to your particular case.

 Text between <!-- and --> marks will be invisible in the report.

 -->

@@ -31,7 +31,7 @@ Text between <!-- and --> marks will be invisible in the report.
 - that reproduce the bug
 - using hyphens as bullet points

 <!--
 Describe how what happens differs from what you expected.

 If you can identify any relevant log snippets from _homeserver.log_, please include
@@ -48,8 +48,8 @@ those (please be careful to remove any personal or private data). Please surround

 If not matrix.org:

 <!--
 What version of Synapse is running?
 You can find the Synapse version by inspecting the server headers (replace matrix.org with
 your own homeserver domain):
 $ curl -v https://matrix.org/_matrix/client/versions 2>&1 | grep "Server:"
INSTALL.md
@@ -71,7 +71,8 @@ set this to the hostname of your server. For a more production-ready setup, you
 will probably want to specify your domain (`example.com`) rather than a
 matrix-specific hostname here (in the same way that your email address is
 probably `user@example.com` rather than `user@email.example.com`) - but
-doing so may require more advanced setup. - see [Setting up Federation](README.rst#setting-up-federation). Beware that the server name cannot be changed later.
+doing so may require more advanced setup: see [Setting up Federation](docs/federate.md).
+Beware that the server name cannot be changed later.

 This command will generate you a config file that you can then customise, but it will
 also generate a set of keys for you. These keys will allow your Home Server to
@@ -374,9 +375,16 @@ To configure Synapse to expose an HTTPS port, you will need to edit
 * You will also need to uncomment the `tls_certificate_path` and
   `tls_private_key_path` lines under the `TLS` section. You can either
   point these settings at an existing certificate and key, or you can
   enable Synapse's built-in ACME (Let's Encrypt) support. Instructions
   for having Synapse automatically provision and renew federation
-  certificates through ACME can be found at [ACME.md](docs/ACME.md).
+  certificates through ACME can be found at [ACME.md](docs/ACME.md). If you
+  are using your own certificate, be sure to use a `.pem` file that includes
+  the full certificate chain including any intermediate certificates (for
+  instance, if using certbot, use `fullchain.pem` as your certificate, not
+  `cert.pem`).
+
+For those of you upgrading your TLS certificate in readiness for Synapse 1.0,
+please take a look at `our guide <docs/MSC1711_certificates_FAQ.md#configuring-certificates-for-compatibility-with-synapse-100>`_.

 ## Registering a user
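To sanity-check that the file you point Synapse at really contains the full chain, you can count the CERTIFICATE blocks in the `.pem`. A rough sketch (the certbot path is an assumption; adjust to your setup):

```python
def count_certs(pem_path):
    with open(pem_path) as f:
        return f.read().count("-----BEGIN CERTIFICATE-----")

# Hypothetical certbot path; adjust to wherever your certificate lives.
n = count_certs("/etc/letsencrypt/live/example.com/fullchain.pem")
print("certificates in chain: %d" % n)
if n < 2:
    print("warning: looks like a leaf-only cert (cert.pem rather than fullchain.pem)")
```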
@@ -402,8 +410,8 @@ This process uses a setting `registration_shared_secret` in
 `homeserver.yaml`, which is shared between Synapse itself and the
 `register_new_matrix_user` script. It doesn't matter what it is (a random
 value is generated by `--generate-config`), but it should be kept secret, as
-anyone with knowledge of it can register users on your server even if
-`enable_registration` is `false`.
+anyone with knowledge of it can register users, including admin accounts,
+on your server even if `enable_registration` is `false`.

 ## Setting up a TURN server
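Since any value works as long as it stays secret, one way to mint a fresh secret by hand is with Python's `secrets` module - a sketch:

```python
import secrets

# Print a line you can paste into homeserver.yaml.
print('registration_shared_secret: "%s"' % secrets.token_urlsafe(48))
```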
README.rst
@@ -80,7 +80,10 @@ Thanks for using Matrix!
 Synapse Installation
 ====================

-For details on how to install synapse, see `<INSTALL.md>`_.
+.. _federation:
+
+* For details on how to install synapse, see `<INSTALL.md>`_.
+* For specific details on how to configure Synapse for federation see `docs/federate.md <docs/federate.md>`_


 Connecting to Synapse from a client
@@ -93,13 +96,13 @@ Unless you are running a test instance of Synapse on your local machine, in
 general, you will need to enable TLS support before you can successfully
 connect from a client: see `<INSTALL.md#tls-certificates>`_.

 An easy way to get started is to login or register via Riot at
 https://riot.im/app/#/login or https://riot.im/app/#/register respectively.
 You will need to change the server you are logging into from ``matrix.org``
 and instead specify a Homeserver URL of ``https://<server_name>:8448``
 (or just ``https://<server_name>`` if you are using a reverse proxy).
 (Leave the identity server as the default - see `Identity servers`_.)
 If you prefer to use another client, refer to our
 `client breakdown <https://matrix.org/docs/projects/clients-matrix>`_.

 If all goes well you should at least be able to log in, create a room, and
@@ -151,56 +154,6 @@ server on the same domain.
 See https://github.com/vector-im/riot-web/issues/1977 and
 https://developer.github.com/changes/2014-04-25-user-content-security for more details.

-Troubleshooting
-===============
-
-Running out of File Handles
----------------------------
-
-If synapse runs out of filehandles, it typically fails badly - live-locking
-at 100% CPU, and/or failing to accept new TCP connections (blocking the
-connecting client). Matrix currently can legitimately use a lot of file handles,
-thanks to busy rooms like #matrix:matrix.org containing hundreds of participating
-servers. The first time a server talks in a room it will try to connect
-simultaneously to all participating servers, which could exhaust the available
-file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow
-to respond. (We need to improve the routing algorithm used to be better than
-full mesh, but as of June 2017 this hasn't happened yet).
-
-If you hit this failure mode, we recommend increasing the maximum number of
-open file handles to be at least 4096 (assuming a default of 1024 or 256).
-This is typically done by editing ``/etc/security/limits.conf``
-
-Separately, Synapse may leak file handles if inbound HTTP requests get stuck
-during processing - e.g. blocked behind a lock or talking to a remote server etc.
-This is best diagnosed by matching up the 'Received request' and 'Processed request'
-log lines and looking for any 'Processed request' lines which take more than
-a few seconds to execute. Please let us know at #synapse:matrix.org if
-you see this failure mode so we can help debug it, however.
-
-Help!! Synapse eats all my RAM!
--------------------------------
-
-Synapse's architecture is quite RAM hungry currently - we deliberately
-cache a lot of recent room data and metadata in RAM in order to speed up
-common requests. We'll improve this in future, but for now the easiest
-way to either reduce the RAM usage (at the risk of slowing things down)
-is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
-variable. The default is 0.5, which can be decreased to reduce RAM usage
-in memory constrained enviroments, or increased if performance starts to
-degrade.
-
-Using `libjemalloc <http://jemalloc.net/>`_ can also yield a significant
-improvement in overall amount, and especially in terms of giving back RAM
-to the OS. To use it, the library must simply be put in the LD_PRELOAD
-environment variable when launching Synapse. On Debian, this can be done
-by installing the ``libjemalloc1`` package and adding this line to
-``/etc/default/matrix-synapse``::
-
-    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
-
-This can make a significant difference on Python 2.7 - it's unclear how
-much of an improvement it provides on Python 3.x.
-
 Upgrading an existing Synapse
 =============================
@@ -211,100 +164,19 @@ versions of synapse.

 .. _UPGRADE.rst: UPGRADE.rst

-.. _federation:
-
-Setting up Federation
-=====================
-
-Federation is the process by which users on different servers can participate
-in the same room. For this to work, those other servers must be able to contact
-yours to send messages.
-
-The ``server_name`` in your ``homeserver.yaml`` file determines the way that
-other servers will reach yours. By default, they will treat it as a hostname
-and try to connect to port 8448. This is easy to set up and will work with the
-default configuration, provided you set the ``server_name`` to match your
-machine's public DNS hostname, and give Synapse a TLS certificate which is
-valid for your ``server_name``.
-
-For a more flexible configuration, you can set up a DNS SRV record. This allows
-you to run your server on a machine that might not have the same name as your
-domain name. For example, you might want to run your server at
-``synapse.example.com``, but have your Matrix user-ids look like
-``@user:example.com``. (A SRV record also allows you to change the port from
-the default 8448).
-
-To use a SRV record, first create your SRV record and publish it in DNS. This
-should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
-<synapse.server.name>``. The DNS record should then look something like::
-
-    $ dig -t srv _matrix._tcp.example.com
-    _matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com.
-
-Note that the server hostname cannot be an alias (CNAME record): it has to point
-directly to the server hosting the synapse instance.
-
-You can then configure your homeserver to use ``<yourdomain.com>`` as the domain in
-its user-ids, by setting ``server_name``::
-
-    python -m synapse.app.homeserver \
-        --server-name <yourdomain.com> \
-        --config-path homeserver.yaml \
-        --generate-config
-    python -m synapse.app.homeserver --config-path homeserver.yaml
-
-If you've already generated the config file, you need to edit the ``server_name``
-in your ``homeserver.yaml`` file. If you've already started Synapse and a
-database has been created, you will have to recreate the database.
-
-If all goes well, you should be able to `connect to your server with a client`__,
-and then join a room via federation. (Try ``#matrix-dev:matrix.org`` as a first
-step. "Matrix HQ"'s sheer size and activity level tends to make even the
-largest boxes pause for thought.)
-
-.. __: `Connecting to Synapse from a client`_
-
-Troubleshooting
----------------
-
-You can use the `federation tester <https://matrix.org/federationtester>`_ to
-check if your homeserver is all set.
-
-The typical failure mode with federation is that when you try to join a room,
-it is rejected with "401: Unauthorized". Generally this means that other
-servers in the room couldn't access yours. (Joining a room over federation is a
-complicated dance which requires connections in both directions).
-
-So, things to check are:
-
-* If you are not using a SRV record, check that your ``server_name`` (the part
-  of your user-id after the ``:``) matches your hostname, and that port 8448 on
-  that hostname is reachable from outside your network.
-* If you *are* using a SRV record, check that it matches your ``server_name``
-  (it should be ``_matrix._tcp.<server_name>``), and that the port and hostname
-  it specifies are reachable from outside your network.
-
-Another common problem is that people on other servers can't join rooms that
-you invite them to. This can be caused by an incorrectly-configured reverse
-proxy: see `<docs/reverse_proxy.rst>`_ for instructions on how to correctly
-configure a reverse proxy.
-
-Running a Demo Federation of Synapses
--------------------------------------
-
-If you want to get up and running quickly with a trio of homeservers in a
-private federation, there is a script in the ``demo`` directory. This is mainly
-useful just for development purposes. See `<demo/README>`_.
-
-
 Using PostgreSQL
 ================

-As of Synapse 0.9, `PostgreSQL <https://www.postgresql.org>`_ is supported as an
-alternative to the `SQLite <https://sqlite.org/>`_ database that Synapse has
-traditionally used for convenience and simplicity.
+Synapse offers two database engines:
+ * `SQLite <https://sqlite.org/>`_
+ * `PostgreSQL <https://www.postgresql.org>`_

-The advantages of Postgres include:
+By default Synapse uses SQLite and in doing so trades performance for convenience.
+SQLite is only recommended in Synapse for testing purposes or for servers with
+light workloads.
+
+Almost all installations should opt to use PostgreSQL. Advantages include:

 * significant performance improvements due to the superior threading and
   caching model, smarter query optimiser
@@ -440,3 +312,54 @@ sphinxcontrib-napoleon::
 Building internal API documentation::

     python setup.py build_sphinx

+Troubleshooting
+===============
+
+Running out of File Handles
+---------------------------
+
+If synapse runs out of file handles, it typically fails badly - live-locking
+at 100% CPU, and/or failing to accept new TCP connections (blocking the
+connecting client). Matrix currently can legitimately use a lot of file handles,
+thanks to busy rooms like #matrix:matrix.org containing hundreds of participating
+servers. The first time a server talks in a room it will try to connect
+simultaneously to all participating servers, which could exhaust the available
+file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow
+to respond. (We need to improve the routing algorithm used to be better than
+full mesh, but as of March 2019 this hasn't happened yet).
+
+If you hit this failure mode, we recommend increasing the maximum number of
+open file handles to be at least 4096 (assuming a default of 1024 or 256).
+This is typically done by editing ``/etc/security/limits.conf``
+
+Separately, Synapse may leak file handles if inbound HTTP requests get stuck
+during processing - e.g. blocked behind a lock or talking to a remote server etc.
+This is best diagnosed by matching up the 'Received request' and 'Processed request'
+log lines and looking for any 'Processed request' lines which take more than
+a few seconds to execute. Please let us know at #synapse:matrix.org if
+you see this failure mode so we can help debug it, however.
+
+Help!! Synapse eats all my RAM!
+-------------------------------
+
+Synapse's architecture is quite RAM hungry currently - we deliberately
+cache a lot of recent room data and metadata in RAM in order to speed up
+common requests. We'll improve this in the future, but for now the easiest
+way to either reduce the RAM usage (at the risk of slowing things down)
+is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
+variable. The default is 0.5, which can be decreased to reduce RAM usage
+in memory-constrained environments, or increased if performance starts to
+degrade.
+
+Using `libjemalloc <http://jemalloc.net/>`_ can also yield a significant
+improvement in overall amount, and especially in terms of giving back RAM
+to the OS. To use it, the library must simply be put in the LD_PRELOAD
+environment variable when launching Synapse. On Debian, this can be done
+by installing the ``libjemalloc1`` package and adding this line to
+``/etc/default/matrix-synapse``::
+
+    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
+
+This can make a significant difference on Python 2.7 - it's unclear how
+much of an improvement it provides on Python 3.x.
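On the file-handles point above, the current limits can also be inspected and raised from Python on a POSIX system - a minimal sketch:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open file limits: soft=%d hard=%d" % (soft, hard))
if soft < 4096:
    # Raise the soft limit as far as the hard limit allows (per the
    # /etc/security/limits.conf advice above).
    resource.setrlimit(resource.RLIMIT_NOFILE, (min(4096, hard), hard))
```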
@@ -0,0 +1 @@
+Fix a bug where media with spaces in the name would get a corrupted name.

@@ -0,0 +1 @@
+Synapse is now permissive about trailing slashes on some of its federation endpoints, allowing zero or more to be present.

@@ -0,0 +1 @@
+Add checks to incoming events over federation for events evading auth (aka "soft fail").

@@ -0,0 +1 @@
+Improve federation documentation, specifically .well-known support. Many thanks to @vaab.

@@ -0,0 +1 @@
+Fix bug where synapse expected an un-specced `prev_state` field on state events.

@@ -0,0 +1 @@
+Transfer a user's notification settings (push rules) on room upgrade.

@@ -0,0 +1 @@
+Disable captcha registration by default in unit tests.

@@ -0,0 +1 @@
+Clarify what registration_shared_secret allows for.

@@ -0,0 +1 @@
+The user directory has been rewritten to make it faster, with less chance of falling behind on a large server.

@@ -0,0 +1 @@
+Correctly log expected errors when fetching server keys.

@@ -0,0 +1 @@
+Update install docs to explicitly state a full-chain (not just the top-level) TLS certificate must be provided to Synapse. This caused some people's Synapse ports to appear correct in a browser but still (rightfully so) upset the federation tester.
@@ -0,0 +1,125 @@
+Setting up Federation
+=====================
+
+Federation is the process by which users on different servers can participate
+in the same room. For this to work, those other servers must be able to contact
+yours to send messages.
+
+The ``server_name`` configured in the Synapse configuration file (often
+``homeserver.yaml``) defines how resources (users, rooms, etc.) will be
+identified (eg: ``@user:example.com``, ``#room:example.com``). By
+default, it is also the domain that other servers will use to
+try to reach your server (via port 8448). This is easy to set
+up and will work provided you set the ``server_name`` to match your
+machine's public DNS hostname, and provide Synapse with a TLS certificate
+which is valid for your ``server_name``.
+
+Once you have completed the steps necessary to federate, you should be able to
+join a room via federation. (A good place to start is ``#synapse:matrix.org``
+- a room for Synapse admins.)
+
+## Delegation
+
+For a more flexible configuration, you can have ``server_name``
+resources (eg: ``@user:example.com``) served by a different host and
+port (eg: ``synapse.example.com:443``). There are two ways to do this:
+
+- adding a ``/.well-known/matrix/server`` URL served on ``https://example.com``.
+- adding a DNS ``SRV`` record in the DNS zone of domain
+  ``example.com``.
+
+Without configuring delegation, the matrix federation will
+expect to find your server via ``example.com:8448``. The following methods
+allow you to retain a `server_name` of `example.com` so that your user IDs, room
+aliases, etc continue to look like `*:example.com`, whilst having your
+federation traffic routed to a different server.
+
+### .well-known delegation
+
+To use this method, you need to be able to alter the
+``server_name``'s https server to serve the ``/.well-known/matrix/server``
+URL. Having an active server (with a valid TLS certificate) serving your
+``server_name`` domain is out of the scope of this documentation.
+
+The URL ``https://<server_name>/.well-known/matrix/server`` should
+return a JSON structure containing the key ``m.server`` like so:
+
+    {
+        "m.server": "<synapse.server.name>[:<yourport>]"
+    }
+
+In our example, this would mean that URL ``https://example.com/.well-known/matrix/server``
+should return:
+
+    {
+        "m.server": "synapse.example.com:443"
+    }
+
+Note, specifying a port is optional. If a port is not specified an SRV lookup
+is performed, as described below. If the target of the
+delegation does not have an SRV record, then the port defaults to 8448.
+
+Most installations will not need to configure .well-known. However, it can be
+useful in cases where the admin is hosting on behalf of someone else and
+therefore cannot gain access to the necessary certificate. With .well-known,
+federation servers will check for a valid TLS certificate for the delegated
+hostname (in our example: ``synapse.example.com``).
+
+.well-known support first appeared in Synapse v0.99.0. To federate with older
+servers you may need to additionally configure SRV delegation. Alternatively,
+encourage the server admin in question to upgrade :).
+
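To illustrate the lookup that federating servers perform against the URL above, here is a minimal client-side sketch using only the Python standard library (example.com is a placeholder):

```python
import json
import urllib.request

def wellknown_delegated_server(server_name):
    url = "https://%s/.well-known/matrix/server" % server_name
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp).get("m.server")
    except Exception:
        # No (or invalid) .well-known: callers fall back to an SRV lookup,
        # and finally to <server_name>:8448.
        return None

print(wellknown_delegated_server("example.com"))  # e.g. "synapse.example.com:443"
```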
+### DNS SRV delegation
+
+To use this delegation method, you need to have write access to your
+``server_name``'s domain zone DNS records (in our example it would be
+``example.com`` DNS zone).
+
+This method requires the target server to provide a
+valid TLS certificate for the original ``server_name``.
+
+You need to add a SRV record in your ``server_name``'s DNS zone with
+this format:
+
+    _matrix._tcp.<yourdomain.com> <ttl> IN SRV <priority> <weight> <port> <synapse.server.name>
+
+In our example, we would need to add this SRV record in the
+``example.com`` DNS zone:
+
+    _matrix._tcp.example.com. 3600 IN SRV 10 5 443 synapse.example.com.
+
+Once done and set up, you can check the DNS record with ``dig -t srv
+_matrix._tcp.<server_name>``. In our example, we would expect this:
+
+    $ dig -t srv _matrix._tcp.example.com
+    _matrix._tcp.example.com. 3600 IN SRV 10 0 443 synapse.example.com.
+
+Note that the target of a SRV record cannot be an alias (CNAME record): it has to point
+directly to the server hosting the synapse instance.
+
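Besides dig, the same record can be checked programmatically. A sketch assuming the third-party dnspython package is installed:

```python
import dns.resolver  # third-party: pip install dnspython

answers = dns.resolver.query("_matrix._tcp.example.com", "SRV")
for rr in answers:
    # priority, weight, port, and the host the record points at
    print(rr.priority, rr.weight, rr.port, rr.target)
```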
+## Troubleshooting
+
+You can use the [federation tester](https://matrix.org/federationtester) to
+check if your homeserver is configured correctly. Alternatively try the
+[JSON API used by the federation tester](https://matrix.org/federationtester/api/report?server_name=DOMAIN).
+Note that you'll have to modify this URL to replace ``DOMAIN`` with your
+``server_name``. Hitting the API directly provides extra detail.
+
+The typical failure mode for federation is that when the server tries to join
+a room, it is rejected with "401: Unauthorized". Generally this means that other
+servers in the room could not access yours. (Joining a room over federation is
+a complicated dance which requires connections in both directions).
+
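The tester's JSON API mentioned above can be queried directly; a sketch (the report's exact fields are not documented here, so inspect the output):

```python
import json
import urllib.request

server_name = "example.com"  # your server_name
url = ("https://matrix.org/federationtester/api/report?server_name=%s"
       % server_name)
with urllib.request.urlopen(url) as resp:
    report = json.load(resp)

# Pretty-print the full report; it contains per-check detail such as
# connectivity and certificate errors.
print(json.dumps(report, indent=2))
```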
+Another common problem is that people on other servers can't join rooms that
+you invite them to. This can be caused by an incorrectly-configured reverse
+proxy: see [reverse_proxy.rst](<reverse_proxy.rst>) for instructions on how to correctly
+configure a reverse proxy.
+
+
+## Running a Demo Federation of Synapses
+
+If you want to get up and running quickly with a trio of homeservers in a
+private federation, there is a script in the ``demo`` directory. This is mainly
+useful just for development purposes. See [demo/README](<../demo/README>).
@@ -246,6 +246,11 @@ listeners:
 # See 'ACME support' below to enable auto-provisioning this certificate via
 # Let's Encrypt.
 #
+# If supplying your own, be sure to use a `.pem` file that includes the
+# full certificate chain including any intermediate certificates (for
+# instance, if using certbot, use `fullchain.pem` as your certificate,
+# not `cert.pem`).
+#
 #tls_certificate_path: "CONFDIR/SERVERNAME.tls.crt"

 # PEM-encoded private key for TLS
@@ -624,8 +629,8 @@ enable_registration: False
 #  - medium: msisdn
 #    pattern: '\+44'

-# If set, allows registration by anyone who also has the shared
-# secret, even if registration is otherwise disabled.
+# If set, allows registration of standard or admin accounts by anyone who
+# has the shared secret, even if registration is otherwise disabled.
 #
 # registration_shared_secret: <PRIVATE STRING>
@@ -376,6 +376,7 @@ def setup(config_options):
         logger.info("Database prepared in %s.", config.database_config['name'])

     hs.setup()
+    hs.setup_master()

     @defer.inlineCallbacks
     def do_acme():
@@ -92,8 +92,8 @@ class RegistrationConfig(Config):
        #  - medium: msisdn
        #    pattern: '\\+44'

-       # If set, allows registration by anyone who also has the shared
-       # secret, even if registration is otherwise disabled.
+       # If set, allows registration of standard or admin accounts by anyone who
+       # has the shared secret, even if registration is otherwise disabled.
        #
        %(registration_shared_secret)s
@@ -181,6 +181,11 @@ class TlsConfig(Config):
        # See 'ACME support' below to enable auto-provisioning this certificate via
        # Let's Encrypt.
        #
+       # If supplying your own, be sure to use a `.pem` file that includes the
+       # full certificate chain including any intermediate certificates (for
+       # instance, if using certbot, use `fullchain.pem` as your certificate,
+       # not `cert.pem`).
+       #
        #tls_certificate_path: "%(tls_certificate_path)s"

        # PEM-encoded private key for TLS
@@ -686,9 +686,9 @@ def _handle_key_deferred(verify_request):
     try:
         with PreserveLoggingContext():
             _, key_id, verify_key = yield verify_request.deferred
-    except (IOError, RequestSendFailed) as e:
+    except KeyLookupError as e:
         logger.warn(
-            "Got IOError when downloading keys for %s: %s %s",
+            "Failed to download keys for %s: %s %s",
             server_name, type(e).__name__, str(e),
         )
         raise SynapseError(
@@ -77,6 +77,20 @@ class _EventInternalMetadata(object):
         """
         return getattr(self, "recheck_redaction", False)

+    def is_soft_failed(self):
+        """Whether the event has been soft failed.
+
+        Soft failed events should be handled as usual, except:
+            1. They should not go down sync or event streams, or generally
+               be sent to clients.
+            2. They should not be added to the forward extremities (and
+               therefore not to current state).
+
+        Returns:
+            bool
+        """
+        return getattr(self, "soft_failed", False)
+

 def _event_dict_property(key):
     # We want to be able to use hasattr with the event dict properties.
@@ -127,7 +141,6 @@ class EventBase(object):
     origin = _event_dict_property("origin")
     origin_server_ts = _event_dict_property("origin_server_ts")
     prev_events = _event_dict_property("prev_events")
-    prev_state = _event_dict_property("prev_state")
     redacts = _event_dict_property("redacts")
     room_id = _event_dict_property("room_id")
     sender = _event_dict_property("sender")
@@ -167,7 +167,7 @@ class TransportLayerClient(object):
         # generated by the json_data_callback.
         json_data = transaction.get_dict()

-        path = _create_v1_path("/send/%s/", transaction.transaction_id)
+        path = _create_v1_path("/send/%s", transaction.transaction_id)

         response = yield self.client.put_json(
             transaction.destination,
@@ -312,7 +312,7 @@ class BaseFederationServlet(object):


 class FederationSendServlet(BaseFederationServlet):
-    PATH = "/send/(?P<transaction_id>[^/]*)/"
+    PATH = "/send/(?P<transaction_id>[^/]*)/?"

     def __init__(self, handler, server_name, **kwargs):
         super(FederationSendServlet, self).__init__(
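For illustration (this snippet is not part of the Synapse tree), the effect of appending `/?` to these PATH patterns is that the route now matches with zero or one trailing slash:

```python
import re

OLD = re.compile("^/send/(?P<transaction_id>[^/]*)/$")
NEW = re.compile("^/send/(?P<transaction_id>[^/]*)/?$")

for path in ("/send/1550000000000", "/send/1550000000000/"):
    print(path, bool(OLD.match(path)), bool(NEW.match(path)))
# The old pattern requires the trailing slash; the new one accepts both.
```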
@@ -378,7 +378,7 @@ class FederationSendServlet(BaseFederationServlet):


 class FederationEventServlet(BaseFederationServlet):
-    PATH = "/event/(?P<event_id>[^/]*)/"
+    PATH = "/event/(?P<event_id>[^/]*)/?"

     # This is when someone asks for a data item for a given server data_id pair.
     def on_GET(self, origin, content, query, event_id):
@@ -386,7 +386,7 @@ class FederationEventServlet(BaseFederationServlet):


 class FederationStateServlet(BaseFederationServlet):
-    PATH = "/state/(?P<context>[^/]*)/"
+    PATH = "/state/(?P<context>[^/]*)/?"

     # This is when someone asks for all data for a given context.
     def on_GET(self, origin, content, query, context):
@@ -398,7 +398,7 @@ class FederationStateServlet(BaseFederationServlet):


 class FederationStateIdsServlet(BaseFederationServlet):
-    PATH = "/state_ids/(?P<room_id>[^/]*)/"
+    PATH = "/state_ids/(?P<room_id>[^/]*)/?"

     def on_GET(self, origin, content, query, room_id):
         return self.handler.on_state_ids_request(
@@ -409,7 +409,7 @@ class FederationStateIdsServlet(BaseFederationServlet):


 class FederationBackfillServlet(BaseFederationServlet):
-    PATH = "/backfill/(?P<context>[^/]*)/"
+    PATH = "/backfill/(?P<context>[^/]*)/?"

     def on_GET(self, origin, content, query, context):
         versions = [x.decode('ascii') for x in query[b"v"]]
@@ -1080,7 +1080,7 @@ class FederationGroupsCategoriesServlet(BaseFederationServlet):
     """Get all categories for a group
     """
     PATH = (
-        "/groups/(?P<group_id>[^/]*)/categories/"
+        "/groups/(?P<group_id>[^/]*)/categories/?"
     )

     @defer.inlineCallbacks
@@ -1150,7 +1150,7 @@ class FederationGroupsRolesServlet(BaseFederationServlet):
     """Get roles in a group
     """
     PATH = (
-        "/groups/(?P<group_id>[^/]*)/roles/"
+        "/groups/(?P<group_id>[^/]*)/roles/?"
     )

     @defer.inlineCallbacks
@@ -45,6 +45,7 @@ from synapse.api.errors import (
     SynapseError,
 )
 from synapse.crypto.event_signing import compute_event_signature
+from synapse.event_auth import auth_types_for_event
 from synapse.events.validator import EventValidator
 from synapse.replication.http.federation import (
     ReplicationCleanRoomRestServlet,
@@ -1628,6 +1629,7 @@ class FederationHandler(BaseHandler):
                 origin, event,
                 state=state,
                 auth_events=auth_events,
+                backfilled=backfilled,
             )

         # reraise does not allow inlineCallbacks to preserve the stacktrace, so we
@@ -1672,6 +1674,7 @@ class FederationHandler(BaseHandler):
                 event,
                 state=ev_info.get("state"),
                 auth_events=ev_info.get("auth_events"),
+                backfilled=backfilled,
             )
         defer.returnValue(res)

@@ -1794,7 +1797,7 @@ class FederationHandler(BaseHandler):
         )

     @defer.inlineCallbacks
-    def _prep_event(self, origin, event, state=None, auth_events=None):
+    def _prep_event(self, origin, event, state, auth_events, backfilled):
         """

         Args:
@@ -1802,6 +1805,7 @@ class FederationHandler(BaseHandler):
             event:
             state:
             auth_events:
+            backfilled (bool)

         Returns:
             Deferred, which resolves to synapse.events.snapshot.EventContext
@@ -1843,11 +1847,99 @@ class FederationHandler(BaseHandler):

                 context.rejected = RejectedReason.AUTH_ERROR

+        if not context.rejected:
+            yield self._check_for_soft_fail(event, state, backfilled)
+
         if event.type == EventTypes.GuestAccess and not context.rejected:
             yield self.maybe_kick_guest_users(event)

         defer.returnValue(context)

+    @defer.inlineCallbacks
+    def _check_for_soft_fail(self, event, state, backfilled):
+        """Checks if we should soft fail the event, if so marks the event as
+        such.
+
+        Args:
+            event (FrozenEvent)
+            state (dict|None): The state at the event if we don't have all the
+                event's prev events
+            backfilled (bool): Whether the event is from backfill
+
+        Returns:
+            Deferred
+        """
+        # For new (non-backfilled and non-outlier) events we check if the event
+        # passes auth based on the current state. If it doesn't then we
+        # "soft-fail" the event.
+        do_soft_fail_check = not backfilled and not event.internal_metadata.is_outlier()
+        if do_soft_fail_check:
+            extrem_ids = yield self.store.get_latest_event_ids_in_room(
+                event.room_id,
+            )
+
+            extrem_ids = set(extrem_ids)
+            prev_event_ids = set(event.prev_event_ids())
+
+            if extrem_ids == prev_event_ids:
+                # If they're the same then the current state is the same as the
+                # state at the event, so no point rechecking auth for soft fail.
+                do_soft_fail_check = False
+
+        if do_soft_fail_check:
+            room_version = yield self.store.get_room_version(event.room_id)
+
+            # Calculate the "current state".
+            if state is not None:
+                # If we're explicitly given the state then we won't have all the
+                # prev events, and so we have a gap in the graph. In this case
+                # we want to be a little careful as we might have been down for
+                # a while and have an incorrect view of the current state,
+                # however we still want to do checks as gaps are easy to
+                # maliciously manufacture.
+                #
+                # So we use a "current state" that is actually a state
+                # resolution across the current forward extremities and the
+                # given state at the event. This should correctly handle cases
+                # like bans, especially with state res v2.
+
+                state_sets = yield self.store.get_state_groups(
+                    event.room_id, extrem_ids,
+                )
+                state_sets = list(state_sets.values())
+                state_sets.append(state)
+                current_state_ids = yield self.state_handler.resolve_events(
+                    room_version, state_sets, event,
+                )
+                current_state_ids = {
+                    k: e.event_id for k, e in iteritems(current_state_ids)
+                }
+            else:
+                current_state_ids = yield self.state_handler.get_current_state_ids(
+                    event.room_id, latest_event_ids=extrem_ids,
+                )
+
+            # Now check if the event passes auth against said current state
+            auth_types = auth_types_for_event(event)
+            current_state_ids = [
+                e for k, e in iteritems(current_state_ids)
+                if k in auth_types
+            ]
+
+            current_auth_events = yield self.store.get_events(current_state_ids)
+            current_auth_events = {
+                (e.type, e.state_key): e for e in current_auth_events.values()
+            }
+
+            try:
+                self.auth.check(room_version, event, auth_events=current_auth_events)
+            except AuthError as e:
+                logger.warn(
+                    "Failed current state auth resolution for %r because %s",
+                    event, e,
+                )
+                event.internal_metadata.soft_failed = True
+
     @defer.inlineCallbacks
     def on_query_auth(self, origin, event_id, room_id, remote_auth_chain, rejects,
                       missing):
@@ -233,6 +233,10 @@ class RoomMemberHandler(object):
                     self.copy_room_tags_and_direct_to_room(
                         predecessor["room_id"], room_id, user_id,
                     )
+                    # Move over old push rules
+                    self.store.move_push_rules_from_room_to_room_for_user(
+                        predecessor["room_id"], room_id, user_id,
+                    )
         elif event.membership == Membership.LEAVE:
             if prev_member_event_id:
                 prev_member_event = yield self.store.get_event(prev_member_event_id)
@@ -60,6 +60,12 @@ class UserDirectoryHandler(object):
         self.update_user_directory = hs.config.update_user_directory
         self.search_all_users = hs.config.user_directory_search_all_users

+        # If we're a worker, don't sleep when doing the initial room work, as it
+        # won't monopolise the master's CPU.
+        if hs.config.worker_app:
+            self.INITIAL_ROOM_SLEEP_MS = 0
+            self.INITIAL_USER_SLEEP_MS = 0
+
         # When start up for the first time we need to populate the user_directory.
         # This is a set of user_id's we've inserted already
         self.initially_handled_users = set()
@@ -231,7 +237,7 @@ class UserDirectoryHandler(object):
         unhandled_users = user_ids - self.initially_handled_users

         yield self.store.add_profiles_to_user_dir(
-            {user_id: users_with_profile[user_id] for user_id in unhandled_users},
+            {user_id: users_with_profile[user_id] for user_id in unhandled_users}
         )

         self.initially_handled_users |= unhandled_users
@@ -241,38 +247,58 @@ class UserDirectoryHandler(object):
         # We also batch up inserts/updates, but try to avoid too many at once.
         to_insert = set()
         count = 0
-        for user_id in user_ids:
-            if count % self.INITIAL_ROOM_SLEEP_COUNT == 0:
-                yield self.clock.sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.0)
-
-            if not self.is_mine_id(user_id):
-                count += 1
-                continue
-
-            if self.store.get_if_app_services_interested_in_user(user_id):
-                count += 1
-                continue
-
-            for other_user_id in user_ids:
-                if user_id == other_user_id:
-                    continue
-
-                if count % self.INITIAL_ROOM_SLEEP_COUNT == 0:
-                    yield self.clock.sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.0)
-                count += 1
-
-                user_set = (user_id, other_user_id)
-                to_insert.add(user_set)
-
-                if len(to_insert) > self.INITIAL_ROOM_BATCH_SIZE:
-                    yield self.store.add_users_who_share_room(
-                        room_id, not is_public, to_insert
-                    )
-                    to_insert.clear()
-
-        if to_insert:
-            yield self.store.add_users_who_share_room(room_id, not is_public, to_insert)
-            to_insert.clear()
+
+        if is_public:
+            for user_id in user_ids:
+                if count % self.INITIAL_ROOM_SLEEP_COUNT == 0:
+                    yield self.clock.sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.0)
+
+                if self.store.get_if_app_services_interested_in_user(user_id):
+                    count += 1
+                    continue
+
+                to_insert.add(user_id)
+                if len(to_insert) > self.INITIAL_ROOM_BATCH_SIZE:
+                    yield self.store.add_users_in_public_rooms(room_id, to_insert)
+                    to_insert.clear()
+
+            if to_insert:
+                yield self.store.add_users_in_public_rooms(room_id, to_insert)
+                to_insert.clear()
+        else:
+            for user_id in user_ids:
+                if count % self.INITIAL_ROOM_SLEEP_COUNT == 0:
+                    yield self.clock.sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.0)
+
+                if not self.is_mine_id(user_id):
+                    count += 1
+                    continue
+
+                if self.store.get_if_app_services_interested_in_user(user_id):
+                    count += 1
+                    continue
+
+                for other_user_id in user_ids:
+                    if user_id == other_user_id:
+                        continue
+
+                    if count % self.INITIAL_ROOM_SLEEP_COUNT == 0:
+                        yield self.clock.sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.0)
+                    count += 1
+
+                    user_set = (user_id, other_user_id)
+                    to_insert.add(user_set)
+
+                    if len(to_insert) > self.INITIAL_ROOM_BATCH_SIZE:
+                        yield self.store.add_users_who_share_private_room(
+                            room_id, not is_public, to_insert
+                        )
+                        to_insert.clear()
+
+            if to_insert:
+                yield self.store.add_users_who_share_private_room(room_id, to_insert)
+                to_insert.clear()

     @defer.inlineCallbacks
     def _handle_deltas(self, deltas):
@@ -445,34 +471,37 @@ class UserDirectoryHandler(object):
         # Now we update users who share rooms with users.
         users_with_profile = yield self.state.get_current_user_in_room(room_id)

-        to_insert = set()
-
-        # First, if they're our user then we need to update for every user
-        if self.is_mine_id(user_id):
-            is_appservice = self.store.get_if_app_services_interested_in_user(user_id)
-
-            # We don't care about appservice users.
-            if not is_appservice:
-                for other_user_id in users_with_profile:
-                    if user_id == other_user_id:
-                        continue
-
-                    to_insert.add((user_id, other_user_id))
-
-        # Next we need to update for every local user in the room
-        for other_user_id in users_with_profile:
-            if user_id == other_user_id:
-                continue
-
-            is_appservice = self.store.get_if_app_services_interested_in_user(
-                other_user_id
-            )
-            if self.is_mine_id(other_user_id) and not is_appservice:
-                to_insert.add((other_user_id, user_id))
-
-        if to_insert:
-            yield self.store.add_users_who_share_room(room_id, not is_public, to_insert)
+        if is_public:
+            yield self.store.add_users_in_public_rooms(room_id, (user_id,))
+        else:
+            to_insert = set()
+
+            # First, if they're our user then we need to update for every user
+            if self.is_mine_id(user_id):
+                is_appservice = self.store.get_if_app_services_interested_in_user(user_id)
+
+                # We don't care about appservice users.
+                if not is_appservice:
+                    for other_user_id in users_with_profile:
+                        if user_id == other_user_id:
+                            continue
+
+                        to_insert.add((user_id, other_user_id))
+
+            # Next we need to update for every local user in the room
+            for other_user_id in users_with_profile:
+                if user_id == other_user_id:
+                    continue
+
+                is_appservice = self.store.get_if_app_services_interested_in_user(
+                    other_user_id
+                )
+                if self.is_mine_id(other_user_id) and not is_appservice:
+                    to_insert.add((other_user_id, user_id))
+
+            if to_insert:
+                yield self.store.add_users_who_share_private_room(room_id, to_insert)

     @defer.inlineCallbacks
     def _handle_remove_user(self, room_id, user_id):
@@ -487,10 +516,10 @@ class UserDirectoryHandler(object):
         # Remove user from sharing tables
         yield self.store.remove_user_who_share_room(user_id, room_id)

-        # Are they still in a room with members? If not, remove them entirely.
-        users_in_room_with = yield self.store.get_users_who_share_room_from_dir(user_id)
+        # Are they still in any rooms? If not, remove them entirely.
+        rooms_user_is_in = yield self.store.get_user_dir_rooms_user_is_in(user_id)

-        if len(users_in_room_with) == 0:
+        if len(rooms_user_is_in) == 0:
             yield self.store.remove_from_user_dir(user_id)

     @defer.inlineCallbacks
@@ -100,10 +100,29 @@ def add_file_headers(request, media_type, file_size, upload_name):

     request.setHeader(b"Content-Type", media_type.encode("UTF-8"))
     if upload_name:
-        if is_ascii(upload_name):
-            disposition = "inline; filename=%s" % (_quote(upload_name),)
+        # RFC6266 section 4.1 [1] defines both `filename` and `filename*`.
+        #
+        # `filename` is defined to be a `value`, which is defined by RFC2616
+        # section 3.6 [2] to be a `token` or a `quoted-string`, where a `token`
+        # is (essentially) a single US-ASCII word, and a `quoted-string` is a
+        # US-ASCII string surrounded by double-quotes, using backslash as an
+        # escape character. Note that %-encoding is *not* permitted.
+        #
+        # `filename*` is defined to be an `ext-value`, which is defined in
+        # RFC5987 section 3.2.1 [3] to be `charset "'" [ language ] "'" value-chars`,
+        # where `value-chars` is essentially a %-encoded string in the given charset.
+        #
+        # [1]: https://tools.ietf.org/html/rfc6266#section-4.1
+        # [2]: https://tools.ietf.org/html/rfc2616#section-3.6
+        # [3]: https://tools.ietf.org/html/rfc5987#section-3.2.1
+
+        # We avoid the quoted-string version of `filename`, because (a) synapse didn't
+        # correctly interpret those as of 0.99.2 and (b) they are a bit of a pain and we
+        # may as well just do the filename* version.
+        if _can_encode_filename_as_token(upload_name):
+            disposition = 'inline; filename=%s' % (upload_name, )
         else:
-            disposition = "inline; filename*=utf-8''%s" % (_quote(upload_name),)
+            disposition = "inline; filename*=utf-8''%s" % (_quote(upload_name), )

     request.setHeader(b"Content-Disposition", disposition.encode('ascii'))
@ -116,6 +135,35 @@ def add_file_headers(request, media_type, file_size, upload_name):
|
||||||
request.setHeader(b"Content-Length", b"%d" % (file_size,))
|
request.setHeader(b"Content-Length", b"%d" % (file_size,))
|
||||||
|
|
||||||
|
|
||||||
|
# separators as defined in RFC2616. SP and HT are handled separately.
|
||||||
|
# see _can_encode_filename_as_token.
|
||||||
|
_FILENAME_SEPARATOR_CHARS = set((
|
||||||
|
"(", ")", "<", ">", "@", ",", ";", ":", "\\", '"',
|
||||||
|
"/", "[", "]", "?", "=", "{", "}",
|
||||||
|
))
|
||||||
|
|
||||||
|
|
||||||
|
def _can_encode_filename_as_token(x):
|
||||||
|
for c in x:
|
||||||
|
# from RFC2616:
|
||||||
|
#
|
||||||
|
# token = 1*<any CHAR except CTLs or separators>
|
||||||
|
#
|
||||||
|
# separators = "(" | ")" | "<" | ">" | "@"
|
||||||
|
# | "," | ";" | ":" | "\" | <">
|
||||||
|
# | "/" | "[" | "]" | "?" | "="
|
||||||
|
# | "{" | "}" | SP | HT
|
||||||
|
#
|
||||||
|
# CHAR = <any US-ASCII character (octets 0 - 127)>
|
||||||
|
#
|
||||||
|
# CTL = <any US-ASCII control character
|
||||||
|
# (octets 0 - 31) and DEL (127)>
|
||||||
|
#
|
||||||
|
if ord(c) >= 127 or ord(c) <= 32 or c in _FILENAME_SEPARATOR_CHARS:
|
||||||
|
return False
|
||||||
|
return True
|
||||||
|
|
||||||
|
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
def respond_with_responder(request, responder, media_type, file_size, upload_name=None):
|
def respond_with_responder(request, responder, media_type, file_size, upload_name=None):
|
||||||
"""Responds to the request with given responder. If responder is None then
|
"""Responds to the request with given responder. If responder is None then
|
||||||
|
|
|
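As a rough, standalone illustration of the two encodings the new code chooses between (this sketch is not Synapse's implementation; it approximates `_can_encode_filename_as_token` inline and uses the stdlib `quote` in place of `_quote`):

    from urllib.parse import quote

    def content_disposition(upload_name):
        # Token-safe ASCII names go out verbatim; everything else is
        # %-encoded into the RFC5987 `filename*` form.
        if all(32 < ord(c) < 127 and c not in '()<>@,;:\\"/[]?={}'
               for c in upload_name):
            return "inline; filename=%s" % (upload_name,)
        return "inline; filename*=utf-8''%s" % (quote(upload_name.encode("utf-8")),)

    print(content_disposition("report.pdf"))
    # inline; filename=report.pdf
    print(content_disposition("ma vidéo.mp4"))
    # inline; filename*=utf-8''ma%20vid%C3%A9o.mp4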
@@ -185,6 +185,10 @@ class HomeServer(object):
        'registration_handler',
    ]
 
+    REQUIRED_ON_MASTER_STARTUP = [
+        "user_directory_handler",
+    ]
+
    # This is overridden in derived application classes
    # (such as synapse.app.homeserver.SynapseHomeServer) and gives the class to be
    # instantiated during setup() for future return by get_datastore()

@@ -221,6 +225,15 @@
            conn.commit()
        logger.info("Finished setting up.")
 
+    def setup_master(self):
+        """
+        Some handlers have side effects on instantiation (like registering
+        background updates). This function causes them to be fetched, and
+        therefore instantiated, to run those side effects.
+        """
+        for i in self.REQUIRED_ON_MASTER_STARTUP:
+            getattr(self, "get_" + i)()
+
    def get_reactor(self):
        """
        Fetch the Twisted reactor in use by this HomeServer.
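The new `setup_master` hook leans on the existing convention that every entry in the dependency list has a matching `get_<name>` accessor whose first call constructs the object. A minimal standalone sketch of the pattern (class body and print are invented for illustration):

    class Server:
        REQUIRED_ON_MASTER_STARTUP = ["user_directory_handler"]

        def get_user_directory_handler(self):
            # In Synapse, building the handler registers its background
            # updates as a side effect of the constructor.
            print("user directory handler instantiated")

        def setup_master(self):
            # Fetch each required handler so its side effects run at startup.
            for name in self.REQUIRED_ON_MASTER_STARTUP:
                getattr(self, "get_" + name)()

    Server().setup_master()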
@@ -767,18 +767,25 @@ class SQLBaseStore(object):
        """
        allvalues = {}
        allvalues.update(keyvalues)
-        allvalues.update(values)
        allvalues.update(insertion_values)
 
+        if not values:
+            latter = "NOTHING"
+        else:
+            allvalues.update(values)
+            latter = (
+                "UPDATE SET " + ", ".join(k + "=EXCLUDED." + k for k in values)
+            )
+
        sql = (
            "INSERT INTO %s (%s) VALUES (%s) "
-            "ON CONFLICT (%s) DO UPDATE SET %s"
+            "ON CONFLICT (%s) DO %s"
        ) % (
            table,
            ", ".join(k for k in allvalues),
            ", ".join("?" for _ in allvalues),
            ", ".join(k for k in keyvalues),
-            ", ".join(k + "=EXCLUDED." + k for k in values),
+            latter
        )
        txn.execute(sql, list(allvalues.values()))
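The effect of this hunk is that a native upsert with no non-key values degrades to `ON CONFLICT ... DO NOTHING` instead of emitting an empty `UPDATE SET`. A sketch of the string being built (illustration only, mirroring the logic above):

    def build_upsert_sql(table, keyvalues, values, insertion_values=None):
        allvalues = dict(keyvalues)
        allvalues.update(insertion_values or {})
        if not values:
            latter = "NOTHING"
        else:
            allvalues.update(values)
            latter = "UPDATE SET " + ", ".join(k + "=EXCLUDED." + k for k in values)
        return "INSERT INTO %s (%s) VALUES (%s) ON CONFLICT (%s) DO %s" % (
            table,
            ", ".join(allvalues),
            ", ".join("?" for _ in allvalues),
            ", ".join(keyvalues),
            latter,
        )

    print(build_upsert_sql("users_in_public_rooms",
                           {"user_id": "@a:x", "room_id": "!r:x"}, {}))
    # INSERT INTO users_in_public_rooms (user_id, room_id) VALUES (?, ?)
    #   ON CONFLICT (user_id, room_id) DO NOTHING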
@@ -537,6 +537,7 @@ class EventsStore(StateGroupWorkerStore, EventFederationStore, EventsWorkerStore
                new_events = [
                    event for event, ctx in event_contexts
                    if not event.internal_metadata.is_outlier() and not ctx.rejected
+                    and not event.internal_metadata.is_soft_failed()
                ]
 
                # start with the existing forward extremities

@@ -1406,21 +1407,6 @@ class EventsStore(StateGroupWorkerStore, EventFederationStore, EventsWorkerStore
            values=state_values,
        )
 
-        self._simple_insert_many_txn(
-            txn,
-            table="event_edges",
-            values=[
-                {
-                    "event_id": event.event_id,
-                    "prev_event_id": prev_id,
-                    "room_id": event.room_id,
-                    "is_state": True,
-                }
-                for event, _ in state_events_and_contexts
-                for prev_id, _ in event.prev_state
-            ],
-        )
-
        # Prefill the event cache
        self._add_to_cache(txn, events_and_contexts)
@@ -185,6 +185,63 @@ class PushRulesWorkerStore(ApplicationServiceWorkerStore,
 
        defer.returnValue(results)
 
+    @defer.inlineCallbacks
+    def move_push_rule_from_room_to_room(
+        self, new_room_id, user_id, rule,
+    ):
+        """Move a single push rule from one room to another for a specific user.
+
+        Args:
+            new_room_id (str): ID of the new room.
+            user_id (str): ID of user the push rule belongs to.
+            rule (Dict): A push rule.
+        """
+        # Create new rule id
+        rule_id_scope = '/'.join(rule["rule_id"].split('/')[:-1])
+        new_rule_id = rule_id_scope + "/" + new_room_id
+
+        # Change room id in each condition
+        for condition in rule.get("conditions", []):
+            if condition.get("key") == "room_id":
+                condition["pattern"] = new_room_id
+
+        # Add the rule for the new room
+        yield self.add_push_rule(
+            user_id=user_id,
+            rule_id=new_rule_id,
+            priority_class=rule["priority_class"],
+            conditions=rule["conditions"],
+            actions=rule["actions"],
+        )
+
+        # Delete push rule for the old room
+        yield self.delete_push_rule(user_id, rule["rule_id"])
+
+    @defer.inlineCallbacks
+    def move_push_rules_from_room_to_room_for_user(
+        self, old_room_id, new_room_id, user_id,
+    ):
+        """Move all of the push rules from one room to another for a specific
+        user.
+
+        Args:
+            old_room_id (str): ID of the old room.
+            new_room_id (str): ID of the new room.
+            user_id (str): ID of user to copy push rules for.
+        """
+        # Retrieve push rules for this user
+        user_push_rules = yield self.get_push_rules_for_user(user_id)
+
+        # Get rules relating to the old room, move them to the new room, then
+        # delete them from the old room
+        for rule in user_push_rules:
+            conditions = rule.get("conditions", [])
+            if any((c.get("key") == "room_id" and
+                    c.get("pattern") == old_room_id) for c in conditions):
+                self.move_push_rule_from_room_to_room(
+                    new_room_id, user_id, rule,
+                )
+
    @defer.inlineCallbacks
    def bulk_get_push_rules_for_room(self, event, context):
        state_group = context.state_group
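A push rule is room-scoped in two places: the trailing segment of its `rule_id` and any `room_id` condition. The following toy data walks through the rewrite the new method performs (rule contents are made up):

    rule = {
        "rule_id": "global/override/!old:server",
        "priority_class": 5,
        "conditions": [
            {"kind": "event_match", "key": "room_id", "pattern": "!old:server"},
        ],
        "actions": ["dont_notify"],
    }
    new_room_id = "!new:server"

    # Recompute the rule id under the new room...
    scope = "/".join(rule["rule_id"].split("/")[:-1])
    rule["rule_id"] = scope + "/" + new_room_id
    # ...and repoint any room_id condition at it.
    for condition in rule.get("conditions", []):
        if condition.get("key") == "room_id":
            condition["pattern"] = new_room_id

    print(rule["rule_id"])                   # global/override/!new:server
    print(rule["conditions"][0]["pattern"])  # !new:server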
@@ -16,9 +16,6 @@
 -- Old disused version of the tables below.
 DROP TABLE IF EXISTS users_who_share_rooms;
 
--- This is no longer used because it's duplicated by the users_who_share_public_rooms
-DROP TABLE IF EXISTS users_in_public_rooms;
-
 -- Tables keeping track of what users share rooms. This is a map of local users
 -- to local or remote users, per room. Remote users cannot be in the user_id
 -- column, only the other_user_id column. There are two tables, one for public

@@ -0,0 +1,28 @@
+/* Copyright 2019 New Vector Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+-- We don't need the old version of this table.
+DROP TABLE IF EXISTS users_in_public_rooms;
+
+-- Old version of users_in_public_rooms
+DROP TABLE IF EXISTS users_who_share_public_rooms;
+
+-- Track what users are in public rooms.
+CREATE TABLE IF NOT EXISTS users_in_public_rooms (
+    user_id TEXT NOT NULL,
+    room_id TEXT NOT NULL
+);
+
+CREATE UNIQUE INDEX users_in_public_rooms_u_idx ON users_in_public_rooms(user_id, room_id);
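The new layout records public-room membership as one row per (user, room) rather than one row per ordered pair of sharing users, shrinking the table from O(n²) to O(n) rows per room. A quick in-memory sqlite check (throwaway data):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users_in_public_rooms (
            user_id TEXT NOT NULL,
            room_id TEXT NOT NULL
        );
        CREATE UNIQUE INDEX users_in_public_rooms_u_idx
            ON users_in_public_rooms(user_id, room_id);
    """)
    # Two users in one public room -> two rows, not one per ordered pair.
    conn.executemany(
        "INSERT INTO users_in_public_rooms VALUES (?, ?)",
        [("@alice:x", "!room:x"), ("@bob:x", "!room:x")],
    )
    print(conn.execute("SELECT COUNT(*) FROM users_in_public_rooms").fetchone())
    # (2,)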
@@ -37,6 +37,8 @@ CREATE TABLE IF NOT EXISTS event_edges(
     event_id TEXT NOT NULL,
     prev_event_id TEXT NOT NULL,
     room_id TEXT NOT NULL,
+    -- We no longer insert prev_state into this table, so all new rows will have
+    -- is_state as false.
     is_state BOOL NOT NULL,
     UNIQUE (event_id, prev_event_id, room_id, is_state)
 );
@@ -21,12 +21,11 @@ from six import iteritems
 from twisted.internet import defer
 
 from synapse.api.constants import EventTypes, JoinRules
+from synapse.storage._base import SQLBaseStore
 from synapse.storage.engines import PostgresEngine, Sqlite3Engine
 from synapse.storage.state import StateFilter
 from synapse.types import get_domain_from_id, get_localpart_from_id
-from synapse.util.caches.descriptors import cached, cachedInlineCallbacks
-
-from ._base import SQLBaseStore
+from synapse.util.caches.descriptors import cached
 
 logger = logging.getLogger(__name__)
 

@@ -242,14 +241,7 @@ class UserDirectoryStore(SQLBaseStore):
                txn, table="user_directory_search", keyvalues={"user_id": user_id}
            )
            self._simple_delete_txn(
-                txn,
-                table="users_who_share_public_rooms",
-                keyvalues={"user_id": user_id},
-            )
-            self._simple_delete_txn(
-                txn,
-                table="users_who_share_public_rooms",
-                keyvalues={"other_user_id": user_id},
+                txn, table="users_in_public_rooms", keyvalues={"user_id": user_id}
            )
            self._simple_delete_txn(
                txn,

@@ -271,9 +263,9 @@ class UserDirectoryStore(SQLBaseStore):
            in the given room_id
        """
        user_ids_share_pub = yield self._simple_select_onecol(
-            table="users_who_share_public_rooms",
+            table="users_in_public_rooms",
            keyvalues={"room_id": room_id},
-            retcol="other_user_id",
+            retcol="user_id",
            desc="get_users_in_dir_due_to_room",
        )
 

@@ -311,26 +303,19 @@ class UserDirectoryStore(SQLBaseStore):
        rows = yield self._execute("get_all_local_users", None, sql)
        defer.returnValue([name for name, in rows])
 
-    def add_users_who_share_room(self, room_id, share_private, user_id_tuples):
-        """Insert entries into the users_who_share_*_rooms table. The first
+    def add_users_who_share_private_room(self, room_id, user_id_tuples):
+        """Insert entries into the users_who_share_private_rooms table. The first
        user should be a local user.
 
        Args:
            room_id (str)
-            share_private (bool): Is the room private
            user_id_tuples([(str, str)]): iterable of 2-tuple of user IDs.
        """
 
        def _add_users_who_share_room_txn(txn):
 
-            if share_private:
-                tbl = "users_who_share_private_rooms"
-            else:
-                tbl = "users_who_share_public_rooms"
-
            self._simple_upsert_many_txn(
                txn,
-                table=tbl,
+                table="users_who_share_private_rooms",
                key_names=["user_id", "other_user_id", "room_id"],
                key_values=[
                    (user_id, other_user_id, room_id)

@@ -339,15 +324,35 @@ class UserDirectoryStore(SQLBaseStore):
                value_names=(),
                value_values=None,
            )
-            for user_id, other_user_id in user_id_tuples:
-                txn.call_after(
-                    self.get_users_who_share_room_from_dir.invalidate, (user_id,)
-                )
 
        return self.runInteraction(
            "add_users_who_share_room", _add_users_who_share_room_txn
        )
 
+    def add_users_in_public_rooms(self, room_id, user_ids):
+        """Insert entries into the users_in_public_rooms table. The
+        users should be local users.
+
+        Args:
+            room_id (str)
+            user_ids (list[str])
+        """
+
+        def _add_users_in_public_rooms_txn(txn):
+
+            self._simple_upsert_many_txn(
+                txn,
+                table="users_in_public_rooms",
+                key_names=["user_id", "room_id"],
+                key_values=[(user_id, room_id) for user_id in user_ids],
+                value_names=(),
+                value_values=None,
+            )
+
+        return self.runInteraction(
+            "add_users_in_public_rooms", _add_users_in_public_rooms_txn
+        )
+
    def remove_user_who_share_room(self, user_id, room_id):
        """
        Deletes entries in the users_who_share_*_rooms table. The first

@@ -371,25 +376,18 @@ class UserDirectoryStore(SQLBaseStore):
            )
            self._simple_delete_txn(
                txn,
-                table="users_who_share_public_rooms",
+                table="users_in_public_rooms",
                keyvalues={"user_id": user_id, "room_id": room_id},
            )
-            self._simple_delete_txn(
-                txn,
-                table="users_who_share_public_rooms",
-                keyvalues={"other_user_id": user_id, "room_id": room_id},
-            )
-            txn.call_after(
-                self.get_users_who_share_room_from_dir.invalidate, (user_id,)
-            )
 
        return self.runInteraction(
            "remove_user_who_share_room", _remove_user_who_share_room_txn
        )
 
-    @cachedInlineCallbacks(max_entries=500000, iterable=True)
-    def get_users_who_share_room_from_dir(self, user_id):
-        """Returns the set of users who share a room with `user_id`
+    @defer.inlineCallbacks
+    def get_user_dir_rooms_user_is_in(self, user_id):
+        """
+        Returns the rooms that a user is in.
 
        Args:
            user_id(str): Must be a local user

@@ -400,23 +398,19 @@ class UserDirectoryStore(SQLBaseStore):
        rows = yield self._simple_select_onecol(
            table="users_who_share_private_rooms",
            keyvalues={"user_id": user_id},
-            retcol="other_user_id",
-            desc="get_users_who_share_room_with_user",
+            retcol="room_id",
+            desc="get_rooms_user_is_in",
        )
 
        pub_rows = yield self._simple_select_onecol(
-            table="users_who_share_public_rooms",
+            table="users_in_public_rooms",
            keyvalues={"user_id": user_id},
-            retcol="other_user_id",
+            retcol="room_id",
-            desc="get_users_who_share_room_with_user",
+            desc="get_rooms_user_is_in",
        )
 
        users = set(pub_rows)
        users.update(rows)
 
-        # Remove the user themselves from this list.
-        users.discard(user_id)
-
        defer.returnValue(list(users))
 
    @defer.inlineCallbacks

@@ -452,10 +446,9 @@ class UserDirectoryStore(SQLBaseStore):
        def _delete_all_from_user_dir_txn(txn):
            txn.execute("DELETE FROM user_directory")
            txn.execute("DELETE FROM user_directory_search")
-            txn.execute("DELETE FROM users_who_share_public_rooms")
+            txn.execute("DELETE FROM users_in_public_rooms")
            txn.execute("DELETE FROM users_who_share_private_rooms")
            txn.call_after(self.get_user_in_directory.invalidate_all)
-            txn.call_after(self.get_users_who_share_room_from_dir.invalidate_all)
 
        return self.runInteraction(
            "delete_all_from_user_dir", _delete_all_from_user_dir_txn

@@ -560,23 +553,19 @@ class UserDirectoryStore(SQLBaseStore):
        """
 
        if self.hs.config.user_directory_search_all_users:
-            # make s.user_id null to keep the ordering algorithm happy
-            join_clause = """
-                CROSS JOIN (SELECT NULL as user_id) AS s
-            """
            join_args = ()
            where_clause = "1=1"
        else:
-            join_clause = """
-                LEFT JOIN (
-                    SELECT other_user_id AS user_id FROM users_who_share_public_rooms
-                    UNION
-                    SELECT other_user_id AS user_id FROM users_who_share_private_rooms
-                    WHERE user_id = ?
-                ) AS p USING (user_id)
-            """
            join_args = (user_id,)
-            where_clause = "p.user_id IS NOT NULL"
+            where_clause = """
+                (
+                    EXISTS (select 1 from users_in_public_rooms WHERE user_id = t.user_id)
+                    OR EXISTS (
+                        SELECT 1 FROM users_who_share_private_rooms
+                        WHERE user_id = ? AND other_user_id = t.user_id
+                    )
+                )
+            """
 
        if isinstance(self.database_engine, PostgresEngine):
            full_query, exact_query, prefix_query = _parse_query_postgres(search_term)

@@ -588,9 +577,8 @@ class UserDirectoryStore(SQLBaseStore):
            # search: (domain, _, display name, localpart)
            sql = """
                SELECT d.user_id AS user_id, display_name, avatar_url
-                FROM user_directory_search
+                FROM user_directory_search as t
                INNER JOIN user_directory AS d USING (user_id)
-                %s
                WHERE
                    %s
                    AND vector @@ to_tsquery('english', ?)

@@ -617,7 +605,6 @@ class UserDirectoryStore(SQLBaseStore):
                    avatar_url IS NULL
                LIMIT ?
            """ % (
-                join_clause,
                where_clause,
            )
            args = join_args + (full_query, exact_query, prefix_query, limit + 1)

@@ -626,9 +613,8 @@ class UserDirectoryStore(SQLBaseStore):
 
            sql = """
                SELECT d.user_id AS user_id, display_name, avatar_url
-                FROM user_directory_search
+                FROM user_directory_search as t
                INNER JOIN user_directory AS d USING (user_id)
-                %s
                WHERE
                    %s
                    AND value MATCH ?

@@ -638,7 +624,6 @@ class UserDirectoryStore(SQLBaseStore):
                    avatar_url IS NULL
                LIMIT ?
            """ % (
-                join_clause,
                where_clause,
            )
            args = join_args + (search_query, limit + 1)
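With the join gone, the searching user's visibility is enforced entirely in the `WHERE` clause; after substitution the Postgres query takes roughly this shape (reassembled here for readability, not copied verbatim from the source):

    sql = """
        SELECT d.user_id AS user_id, display_name, avatar_url
        FROM user_directory_search as t
        INNER JOIN user_directory AS d USING (user_id)
        WHERE
            (
                EXISTS (select 1 from users_in_public_rooms WHERE user_id = t.user_id)
                OR EXISTS (
                    SELECT 1 FROM users_who_share_private_rooms
                    WHERE user_id = ? AND other_user_id = t.user_id
                )
            )
            AND vector @@ to_tsquery('english', ?)
    """
    # bound parameters: (searching_user_id, full_query, ...)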
@@ -67,6 +67,10 @@ def filter_events_for_client(store, user_id, events, is_peeking=False,
    Returns:
        Deferred[list[synapse.events.EventBase]]
    """
+    # Filter out events that have been soft failed so that we don't relay them
+    # to clients.
+    events = list(e for e in events if not e.internal_metadata.is_soft_failed())
+
    types = (
        (EventTypes.RoomHistoryVisibility, ""),
        (EventTypes.Member, user_id),
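The new filter is a plain comprehension over each event's internal metadata; a self-contained sketch with stub classes (only the `internal_metadata.is_soft_failed()` shape is taken from the diff, the rest is invented):

    class _Meta:
        def __init__(self, soft_failed):
            self._soft_failed = soft_failed

        def is_soft_failed(self):
            return self._soft_failed

    class _Event:
        def __init__(self, event_id, soft_failed=False):
            self.event_id = event_id
            self.internal_metadata = _Meta(soft_failed)

    events = [_Event("$ok"), _Event("$bad", soft_failed=True)]
    # Same shape as the line added above: soft-failed events never reach clients.
    events = list(e for e in events if not e.internal_metadata.is_soft_failed())
    print([e.event_id for e in events])  # ['$ok']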
@@ -180,7 +180,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
        put_json = self.hs.get_http_client().put_json
        put_json.assert_called_once_with(
            "farm",
-            path="/_matrix/federation/v1/send/1000000/",
+            path="/_matrix/federation/v1/send/1000000",
            data=_expect_edu_transaction(
                "m.typing",
                content={

@@ -201,7 +201,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
 
        (request, channel) = self.make_request(
            "PUT",
-            "/_matrix/federation/v1/send/1000000/",
+            "/_matrix/federation/v1/send/1000000",
            _make_edu_transaction_json(
                "m.typing",
                content={

@@ -257,7 +257,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
        put_json = self.hs.get_http_client().put_json
        put_json.assert_called_once_with(
            "farm",
-            path="/_matrix/federation/v1/send/1000000/",
+            path="/_matrix/federation/v1/send/1000000",
            data=_expect_edu_transaction(
                "m.typing",
                content={
@@ -114,13 +114,13 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
        self.helper.join(room, user=u2, tok=u2_token)
 
        # Check we have populated the database correctly.
-        shares_public = self.get_users_who_share_public_rooms()
        shares_private = self.get_users_who_share_private_rooms()
+        public_users = self.get_users_in_public_rooms()
 
-        self.assertEqual(shares_public, [])
        self.assertEqual(
            self._compress_shared(shares_private), set([(u1, u2, room), (u2, u1, room)])
        )
+        self.assertEqual(public_users, [])
 
        # We get one search result when searching for user2 by user1.
        s = self.get_success(self.handler.search_users(u1, "user2", 10))

@@ -138,11 +138,11 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
        self.helper.leave(room, user=u2, tok=u2_token)
 
        # Check we have removed the values.
-        shares_public = self.get_users_who_share_public_rooms()
        shares_private = self.get_users_who_share_private_rooms()
+        public_users = self.get_users_in_public_rooms()
 
-        self.assertEqual(shares_public, [])
        self.assertEqual(self._compress_shared(shares_private), set())
+        self.assertEqual(public_users, [])
 
        # User1 now gets no search results for any of the other users.
        s = self.get_success(self.handler.search_users(u1, "user2", 10))

@@ -160,14 +160,18 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
            r.add((i["user_id"], i["other_user_id"], i["room_id"]))
        return r
 
-    def get_users_who_share_public_rooms(self):
-        return self.get_success(
+    def get_users_in_public_rooms(self):
+        r = self.get_success(
            self.store._simple_select_list(
-                "users_who_share_public_rooms",
+                "users_in_public_rooms",
                None,
-                ["user_id", "other_user_id", "room_id"],
+                ("user_id", "room_id"),
            )
        )
+        retval = []
+        for i in r:
+            retval.append((i["user_id"], i["room_id"]))
+        return retval
 
    def get_users_who_share_private_rooms(self):
        return self.get_success(

@@ -200,11 +204,12 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
        self.get_success(self.store.update_user_directory_stream_pos(None))
        self.get_success(self.store.delete_all_from_user_dir())
 
-        shares_public = self.get_users_who_share_public_rooms()
        shares_private = self.get_users_who_share_private_rooms()
+        public_users = self.get_users_in_public_rooms()
 
+        # Nothing updated yet
        self.assertEqual(shares_private, [])
-        self.assertEqual(shares_public, [])
+        self.assertEqual(public_users, [])
 
        # Reset the handled users caches
        self.handler.initially_handled_users = set()

@@ -219,12 +224,12 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
 
        self.get_success(d)
 
-        shares_public = self.get_users_who_share_public_rooms()
        shares_private = self.get_users_who_share_private_rooms()
+        public_users = self.get_users_in_public_rooms()
 
-        # User 1 and User 2 share public rooms
+        # User 1 and User 2 are in the same public room
        self.assertEqual(
-            self._compress_shared(shares_public), set([(u1, u2, room), (u2, u1, room)])
+            set(public_users), set([(u1, room), (u2, room)])
        )
 
        # User 1 and User 3 share private rooms
@@ -41,8 +41,8 @@ class UserDirectoryStoreTestCase(unittest.TestCase):
                BOBBY: ProfileInfo(None, "bobby"),
            },
        )
-        yield self.store.add_users_who_share_room(
-            "!room:id", False, ((ALICE, BOB), (BOB, ALICE))
+        yield self.store.add_users_in_public_rooms(
+            "!room:id", (ALICE, BOB)
        )
 
    @defer.inlineCallbacks
@@ -115,6 +115,7 @@ def default_config(name):
    config.signing_key = [MockKey()]
    config.event_cache_size = 1
    config.enable_registration = True
+    config.enable_registration_captcha = False
    config.macaroon_secret_key = "not even a little secret"
    config.expire_access_token = False
    config.server_name = name

@@ -330,6 +331,8 @@ def setup_test_homeserver(
            cleanup_func(cleanup)
 
        hs.setup()
+        if homeserverToUse.__name__ == "TestHomeServer":
+            hs.setup_master()
    else:
        hs = homeserverToUse(
            name,