Merge branch 'release-v1.38' into matrix-org-hotfixes

anoa/log_11772
Brendan Abolivier 2021-07-06 14:11:37 +01:00
commit fc8a586ab9
66 changed files with 1329 additions and 495 deletions

View File

@ -1,3 +1,54 @@
Synapse 1.38.0rc1 (2021-07-06)
==============================
This release includes a database schema update which could result in elevated disk usage. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade.md#upgrading-to-v1380) for more information.
Features
--------
- Implement refresh tokens as specified by [MSC2918](https://github.com/matrix-org/matrix-doc/pull/2918). ([\#9450](https://github.com/matrix-org/synapse/issues/9450))
- Add support for evicting cache entries based on last access time. ([\#10205](https://github.com/matrix-org/synapse/issues/10205))
- Omit empty fields from the `/sync` response. Contributed by @deepbluev7. ([\#10214](https://github.com/matrix-org/synapse/issues/10214))
- Improve validation on federation `send_{join,leave,knock}` endpoints. ([\#10225](https://github.com/matrix-org/synapse/issues/10225), [\#10243](https://github.com/matrix-org/synapse/issues/10243))
- Add SSO `external_ids` to the Query User Account admin API. ([\#10261](https://github.com/matrix-org/synapse/issues/10261))
- Mark events received over federation which fail a spam check as "soft-failed". ([\#10263](https://github.com/matrix-org/synapse/issues/10263))
- Add metrics for new inbound federation staging area. ([\#10284](https://github.com/matrix-org/synapse/issues/10284))
- Add script to print information about recently registered users. ([\#10290](https://github.com/matrix-org/synapse/issues/10290))
Bugfixes
--------
- Fix a long-standing bug which meant that invite rejections and knocks were not sent out over federation in a timely manner. ([\#10223](https://github.com/matrix-org/synapse/issues/10223))
- Fix a bug introduced in v1.26.0 where only users who have set profile information could be deactivated with erasure enabled. ([\#10252](https://github.com/matrix-org/synapse/issues/10252))
- Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server. ([\#10264](https://github.com/matrix-org/synapse/issues/10264), [\#10267](https://github.com/matrix-org/synapse/issues/10267), [\#10282](https://github.com/matrix-org/synapse/issues/10282), [\#10286](https://github.com/matrix-org/synapse/issues/10286), [\#10291](https://github.com/matrix-org/synapse/issues/10291), [\#10314](https://github.com/matrix-org/synapse/issues/10314))
- Fix the prometheus `synapse_federation_server_pdu_process_time` metric, which was broken in v1.37.1. ([\#10279](https://github.com/matrix-org/synapse/issues/10279))
- Ensure that inbound events from federation that were being processed when Synapse was restarted get promptly processed on startup. ([\#10303](https://github.com/matrix-org/synapse/issues/10303))
Improved Documentation
----------------------
- Move the upgrade notes to [docs/upgrade.md](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md) and convert them to markdown. ([\#10166](https://github.com/matrix-org/synapse/issues/10166))
- Choose Welcome & Overview as the default page for the Synapse documentation website. ([\#10242](https://github.com/matrix-org/synapse/issues/10242))
- Adjust the URL in the README.rst file to point to irc.libera.chat. ([\#10258](https://github.com/matrix-org/synapse/issues/10258))
- Fix homeserver config option name in presence router documentation. ([\#10288](https://github.com/matrix-org/synapse/issues/10288))
- Fix link pointing at the wrong section in the modules documentation page. ([\#10302](https://github.com/matrix-org/synapse/issues/10302))
Internal Changes
----------------
- Drop `Origin` and `Accept` from the value of the `Access-Control-Allow-Headers` response header. ([\#10114](https://github.com/matrix-org/synapse/issues/10114))
- Add type hints to the federation servlets. ([\#10213](https://github.com/matrix-org/synapse/issues/10213))
- Improve the reliability of auto-joining remote rooms. ([\#10237](https://github.com/matrix-org/synapse/issues/10237))
- Update the release script to use the semver terminology and determine the release branch based on the next version. ([\#10239](https://github.com/matrix-org/synapse/issues/10239))
- Fix type hints for computing auth events. ([\#10253](https://github.com/matrix-org/synapse/issues/10253))
- Improve the performance of the spaces summary endpoint by only recursing into spaces (and not rooms in general). ([\#10256](https://github.com/matrix-org/synapse/issues/10256))
- Move event authentication methods from `Auth` to `EventAuthHandler`. ([\#10268](https://github.com/matrix-org/synapse/issues/10268))
- Re-enable a SyTest after it has been fixed. ([\#10292](https://github.com/matrix-org/synapse/issues/10292))
Synapse 1.37.1 (2021-06-30)
===========================

View File

@ -1 +0,0 @@
Drop Origin and Accept from the value of the Access-Control-Allow-Headers response header.

View File

@ -1 +0,0 @@
Move the upgrade notes to [docs/upgrade.md](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md) and convert them to markdown.

View File

@ -1 +0,0 @@
Add type hints to the federation servlets.

View File

@ -1 +0,0 @@
Omit empty fields from the `/sync` response. Contributed by @deepbluev7.

View File

@ -1 +0,0 @@
Fix a long-standing bug which meant that invite rejections and knocks were not sent out over federation in a timely manner.

View File

@ -1 +0,0 @@
Improve validation on federation `send_{join,leave,knock}` endpoints.

View File

@ -1 +0,0 @@
Improve the reliability of auto-joining remote rooms.

View File

@ -1 +0,0 @@
Update the release script to use the semver terminology and determine the release branch based on the next version.

View File

@ -1 +0,0 @@
Choose Welcome & Overview as the default page for the Synapse documentation website.

View File

@ -1 +0,0 @@
Improve validation on federation `send_{join,leave,knock}` endpoints.

View File

@ -1 +0,0 @@
Fix type hints for computing auth events.

View File

@ -1 +0,0 @@
Improve the performance of the spaces summary endpoint by only recursing into spaces (and not rooms in general).

View File

@ -1 +0,0 @@
Adjust the URL in the README.rst file to point to irc.libera.chat.

View File

@ -1 +0,0 @@
Mark events received over federation which fail a spam check as "soft-failed".

View File

@ -1 +0,0 @@
Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

View File

@ -1 +0,0 @@
Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

View File

@ -1 +0,0 @@
Fix the prometheus `synapse_federation_server_pdu_process_time` metric, which was broken in v1.37.1.

View File

@ -1 +0,0 @@
Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

View File

@ -1 +0,0 @@
Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

View File

@ -1 +0,0 @@
Fix homeserver config option name in presence router documentation.

View File

@ -1 +0,0 @@
Implement refresh tokens as specified by [MSC2918](https://github.com/matrix-org/matrix-doc/pull/2918).

6
debian/changelog vendored
View File

@ -1,3 +1,9 @@
matrix-synapse-py3 (1.37.1ubuntu1) UNRELEASED; urgency=medium
* Add synapse_review_recent_signups script
-- Erik Johnston <erikj@matrix.org> Thu, 01 Jul 2021 15:55:03 +0100
matrix-synapse-py3 (1.37.1) stable; urgency=medium
* New synapse release 1.37.1.

View File

@ -1,90 +1,58 @@
.\" generated with Ronn/v0.7.3
.\" http://github.com/rtomayko/ronn/tree/0.7.3
.
.TH "HASH_PASSWORD" "1" "February 2017" "" ""
.
.\" generated with Ronn-NG/v0.8.0
.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
.TH "HASH_PASSWORD" "1" "July 2021" "" ""
.SH "NAME"
\fBhash_password\fR \- Calculate the hash of a new password, so that passwords can be reset
.
.SH "SYNOPSIS"
\fBhash_password\fR [\fB\-p\fR|\fB\-\-password\fR [password]] [\fB\-c\fR|\fB\-\-config\fR \fIfile\fR]
.
.SH "DESCRIPTION"
\fBhash_password\fR calculates the hash of a supplied password using bcrypt\.
.
.P
\fBhash_password\fR takes a password as a parameter, either on the command line or from \fBSTDIN\fR if not supplied\.
.
.P
It accepts a YAML file which can be used to specify parameters such as the number of bcrypt rounds, and a password_config section containing the pepper value used for the hashing\. By default \fBbcrypt_rounds\fR is set to \fB10\fR\.
.
.P
The hashed password is written to \fBSTDOUT\fR\.
.
.SH "FILES"
A sample YAML file accepted by \fBhash_password\fR is described below:
.
.P
.IP "" 4
.nf
bcrypt_rounds: 17
password_config:
   pepper: "random hashing pepper"
.fi
.IP "" 0
.
.SH "OPTIONS"
.
.TP
\fB\-p\fR, \fB\-\-password\fR
Read the password from the command line if [password] is supplied\. If not, prompt the user and read the password from \fBSTDIN\fR\. It is not recommended to type the password on the command line directly\. Use \fBSTDIN\fR instead\.
.
.TP
\fB\-c\fR, \fB\-\-config\fR
Read the supplied YAML \fIfile\fR containing the options \fBbcrypt_rounds\fR and the \fBpassword_config\fR section containing the \fBpepper\fR value\.
.
.SH "EXAMPLES"
Hash from the command line:
.
.IP "" 4
.
.nf
$ hash_password \-p "p@ssw0rd"
$2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8\.X8fWFpum7SxZ9MFe
.
.fi
.
.IP "" 0
.
.P
Hash from the STDIN:
.
.IP "" 4
.
.nf
$ hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX\.rcuAbM8ErLoUhybG
.
.fi
.
.IP "" 0
.
.P
Using a config file:
.
.IP "" 4
.
.nf
$ hash_password \-c config\.yml
Password:
Confirm password:
$2b$12$CwI\.wBNr\.w3kmiUlV3T5s\.GT2wH7uebDCovDrCOh18dFedlANK99O
.
.fi
.
.IP "" 0
.
.SH "COPYRIGHT"
This man page was written by Rahul De <\fIrahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
.
This man page was written by Rahul De <\fI\%mailto:rahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
.SH "SEE ALSO"
synctl(1), synapse_port_db(1), register_new_matrix_user(1)
synctl(1), synapse_port_db(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

View File

@ -66,4 +66,4 @@ for Debian GNU/Linux distribution.
## SEE ALSO
synctl(1), synapse_port_db(1), register_new_matrix_user(1)
synctl(1), synapse_port_db(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

1
debian/manpages vendored
View File

@ -1,4 +1,5 @@
debian/hash_password.1
debian/register_new_matrix_user.1
debian/synapse_port_db.1
debian/synapse_review_recent_signups.1
debian/synctl.1

View File

@ -1,4 +1,5 @@
opt/venvs/matrix-synapse/bin/hash_password usr/bin/hash_password
opt/venvs/matrix-synapse/bin/register_new_matrix_user usr/bin/register_new_matrix_user
opt/venvs/matrix-synapse/bin/synapse_port_db usr/bin/synapse_port_db
opt/venvs/matrix-synapse/bin/synapse_review_recent_signups usr/bin/synapse_review_recent_signups
opt/venvs/matrix-synapse/bin/synctl usr/bin/synctl

View File

@ -1,72 +1,47 @@
.\" generated with Ronn/v0.7.3
.\" http://github.com/rtomayko/ronn/tree/0.7.3
.
.TH "REGISTER_NEW_MATRIX_USER" "1" "February 2017" "" ""
.
.\" generated with Ronn-NG/v0.8.0
.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
.TH "REGISTER_NEW_MATRIX_USER" "1" "July 2021" "" ""
.SH "NAME"
\fBregister_new_matrix_user\fR \- Used to register new users with a given home server when registration has been disabled
.
.SH "SYNOPSIS"
\fBregister_new_matrix_user\fR options\.\.\.
.
\fBregister_new_matrix_user\fR options\|\.\|\.\|\.
.SH "DESCRIPTION"
\fBregister_new_matrix_user\fR registers new users with a given home server when registration has been disabled\. For this to work, the home server must be configured with the \'registration_shared_secret\' option set\.
.
.P
This accepts user credentials such as the username, the password, and whether the user is an admin or not, and registers the user in the homeserver database\. A YAML file containing the shared secret can also be provided\. If not, the shared secret can be supplied via the command line\.
.
.P
By default it assumes the home server URL to be \fBhttps://localhost:8448\fR\. This can be changed via the \fBserver_url\fR command line option\.
.
.SH "FILES"
A sample YAML file accepted by \fBregister_new_matrix_user\fR is described below:
.
.IP "" 4
.
.nf
registration_shared_secret: "s3cr3t"
.
.fi
.
.IP "" 0
.
.SH "OPTIONS"
.
.TP
\fB\-u\fR, \fB\-\-user\fR
Local part of the new user\. Will prompt if omitted\.
.
.TP
\fB\-p\fR, \fB\-\-password\fR
New password for user\. Will prompt if omitted\. Supplying the password on the command line is not recommended\. Use \fBSTDIN\fR instead\.
.
.TP
\fB\-a\fR, \fB\-\-admin\fR
Register new user as an admin\. Will prompt if omitted\.
.
.TP
\fB\-c\fR, \fB\-\-config\fR
Path to server config file containing the shared secret\.
.
.TP
\fB\-k\fR, \fB\-\-shared\-secret\fR
Shared secret as defined in server config file\. This is an optional parameter as it can also be supplied via the YAML file\.
.
.TP
\fBserver_url\fR
URL of the home server\. Defaults to \'https://localhost:8448\'\.
.
.SH "EXAMPLES"
.
.nf
$ register_new_matrix_user \-u user1 \-p p@ssword \-a \-c config\.yaml
.
.fi
.
.SH "COPYRIGHT"
This man page was written by Rahul De <\fIrahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
.
This man page was written by Rahul De <\fI\%mailto:rahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
.SH "SEE ALSO"
synctl(1), synapse_port_db(1), hash_password(1)
synctl(1), synapse_port_db(1), hash_password(1), synapse_review_recent_signups(1)

View File

@ -58,4 +58,4 @@ for Debian GNU/Linux distribution.
## SEE ALSO
synctl(1), synapse_port_db(1), hash_password(1)
synctl(1), synapse_port_db(1), hash_password(1), synapse_review_recent_signups(1)

View File

@ -1,83 +1,56 @@
.\" generated with Ronn/v0.7.3
.\" http://github.com/rtomayko/ronn/tree/0.7.3
.
.TH "SYNAPSE_PORT_DB" "1" "February 2017" "" ""
.
.\" generated with Ronn-NG/v0.8.0
.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
.TH "SYNAPSE_PORT_DB" "1" "July 2021" "" ""
.SH "NAME"
\fBsynapse_port_db\fR \- A script to port an existing synapse SQLite database to a new PostgreSQL database\.
.
.SH "SYNOPSIS"
\fBsynapse_port_db\fR [\-v] \-\-sqlite\-database=\fIdbfile\fR \-\-postgres\-config=\fIyamlconfig\fR [\-\-curses] [\-\-batch\-size=\fIbatch\-size\fR]
.
.SH "DESCRIPTION"
\fBsynapse_port_db\fR ports an existing synapse SQLite database to a new PostgreSQL database\.
.
.P
The SQLite database is specified with the \fB\-\-sqlite\-database\fR option, and the configuration required to connect to the PostgreSQL database is provided using the \fB\-\-postgres\-config\fR option\. The configuration is specified in YAML format\.
.
.SH "OPTIONS"
.
.TP
\fB\-v\fR
Print log messages in \fBdebug\fR level instead of \fBinfo\fR level\.
.
.TP
\fB\-\-sqlite\-database\fR
The snapshot of the SQLite database file\. This must not be currently used by a running synapse server\.
.
.TP
\fB\-\-postgres\-config\fR
The database config file for the PostgreSQL database\.
.
.TP
\fB\-\-curses\fR
Display a curses based progress UI\.
.
.SH "CONFIG FILE"
The postgres configuration file must be a valid YAML file with the following options\.
.
.IP "\(bu" 4
.IP "\[ci]" 4
\fBdatabase\fR: Database configuration section\. This section header can be ignored and the options below may be specified as top level keys\.
.
.IP "\(bu" 4
.IP "\[ci]" 4
\fBname\fR: Connector to use when connecting to the database\. This value must be \fBpsycopg2\fR\.
.
.IP "\(bu" 4
.IP "\[ci]" 4
\fBargs\fR: DB API 2\.0 compatible arguments to send to the \fBpsycopg2\fR module\.
.
.IP "\(bu" 4
.IP "\[ci]" 4
\fBdbname\fR \- the database name
.
.IP "\(bu" 4
.IP "\[ci]" 4
\fBuser\fR \- user name used to authenticate
.
.IP "\(bu" 4
.IP "\[ci]" 4
\fBpassword\fR \- password used to authenticate
.
.IP "\(bu" 4
.IP "\[ci]" 4
\fBhost\fR \- database host address (defaults to UNIX socket if not provided)
.
.IP "\(bu" 4
.IP "\[ci]" 4
\fBport\fR \- connection port number (defaults to 5432 if not provided)
.
.IP "" 0
.
.IP "\(bu" 4
.IP "\[ci]" 4
\fBsynchronous_commit\fR: Optional\. Default is True\. If the value is \fBFalse\fR, enable asynchronous commit and don\'t wait for the server to call fsync before ending the transaction\. See: https://www\.postgresql\.org/docs/current/static/wal\-async\-commit\.html
.
.IP "" 0
.
.IP "" 0
.
.P
The following example illustrates the configuration file format\.
.
.IP "" 4
.
.nf
database:
name: psycopg2
args:
@ -86,13 +59,9 @@ database:
password: ORohmi9Eet=ohphi
host: localhost
synchronous_commit: false
.
.fi
.
.IP "" 0
.
.SH "COPYRIGHT"
This man page was written by Sunil Mohan Adapa <\fIsunil@medhas\.org\fR> for Debian GNU/Linux distribution\.
.
This man page was written by Sunil Mohan Adapa <\fI\%mailto:sunil@medhas\.org\fR> for Debian GNU/Linux distribution\.
.SH "SEE ALSO"
synctl(1), hash_password(1), register_new_matrix_user(1)
synctl(1), hash_password(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

View File

@ -47,7 +47,7 @@ following options.
* `args`:
DB API 2.0 compatible arguments to send to the `psycopg2` module.
* `dbname` - the database name
* `dbname` - the database name
* `user` - user name used to authenticate
@ -58,7 +58,7 @@ following options.
* `port` - connection port number (defaults to 5432 if not
provided)
* `synchronous_commit`:
Optional. Default is True. If the value is `False`, enable
@ -76,7 +76,7 @@ Following example illustrates the configuration file format.
password: ORohmi9Eet=ohphi
host: localhost
synchronous_commit: false
## COPYRIGHT
This man page was written by Sunil Mohan Adapa <<sunil@medhas.org>> for
@ -84,4 +84,4 @@ Debian GNU/Linux distribution.
## SEE ALSO
synctl(1), hash_password(1), register_new_matrix_user(1)
synctl(1), hash_password(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

26
debian/synapse_review_recent_signups.1 vendored Normal file
View File

@ -0,0 +1,26 @@
.\" generated with Ronn-NG/v0.8.0
.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
.TH "SYNAPSE_REVIEW_RECENT_SIGNUPS" "1" "July 2021" "" ""
.SH "NAME"
\fBsynapse_review_recent_signups\fR \- Print users that have recently registered on Synapse
.SH "SYNOPSIS"
\fBsynapse_review_recent_signups\fR \fB\-c\fR|\fB\-\-config\fR \fIfile\fR [\fB\-s\fR|\fB\-\-since\fR \fIperiod\fR] [\fB\-e\fR|\fB\-\-exclude\-emails\fR] [\fB\-u\fR|\fB\-\-only\-users\fR]
.SH "DESCRIPTION"
\fBsynapse_review_recent_signups\fR prints out recently registered users on a Synapse server, as well as some basic information about the user\.
.P
\fBsynapse_review_recent_signups\fR must be supplied with the config of the Synapse server, so that it can fetch the database config and connect to the database\.
.SH "OPTIONS"
.TP
\fB\-c\fR, \fB\-\-config\fR
The config file(s) used by the Synapse server\.
.TP
\fB\-s\fR, \fB\-\-since\fR
How far back to search for newly registered users\. Defaults to 7d, i\.e\. up to seven days in the past\. Valid units are \'s\', \'m\', \'h\', \'d\', \'w\', or \'y\'\.
.TP
\fB\-e\fR, \fB\-\-exclude\-emails\fR
Do not print out users that have validated emails associated with their account\.
.TP
\fB\-u\fR, \fB\-\-only\-users\fR
Only print out the user IDs of recently registered users, without any additional information\.
.SH "SEE ALSO"
synctl(1), synapse_port_db(1), register_new_matrix_user(1), hash_password(1)

View File

@ -0,0 +1,37 @@
synapse_review_recent_signups(1) -- Print users that have recently registered on Synapse
========================================================================================
## SYNOPSIS
`synapse_review_recent_signups` `-c`|`--config` <file> [`-s`|`--since` <period>] [`-e`|`--exclude-emails`] [`-u`|`--only-users`]
## DESCRIPTION
**synapse_review_recent_signups** prints out recently registered users on a
Synapse server, as well as some basic information about the user.
`synapse_review_recent_signups` must be supplied with the config of the Synapse
server, so that it can fetch the database config and connect to the database.
## OPTIONS
* `-c`, `--config`:
The config file(s) used by the Synapse server.
* `-s`, `--since`:
How far back to search for newly registered users. Defaults to 7d, i.e. up
to seven days in the past. Valid units are 's', 'm', 'h', 'd', 'w', or 'y'.
* `-e`, `--exclude-emails`:
Do not print out users that have validated emails associated with their
account.
* `-u`, `--only-users`:
Only print out the user IDs of recently registered users, without any
additional information.
## SEE ALSO
synctl(1), synapse_port_db(1), register_new_matrix_user(1), hash_password(1)

42
debian/synctl.1 vendored
View File

@ -1,63 +1,41 @@
.\" generated with Ronn/v0.7.3
.\" http://github.com/rtomayko/ronn/tree/0.7.3
.
.TH "SYNCTL" "1" "February 2017" "" ""
.
.\" generated with Ronn-NG/v0.8.0
.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
.TH "SYNCTL" "1" "July 2021" "" ""
.SH "NAME"
\fBsynctl\fR \- Synapse server control interface
.
.SH "SYNOPSIS"
Start, stop or restart synapse server\.
.
.P
\fBsynctl\fR {start|stop|restart} [configfile] [\-w|\-\-worker=\fIWORKERCONFIG\fR] [\-a|\-\-all\-processes=\fIWORKERCONFIGDIR\fR]
.
.SH "DESCRIPTION"
\fBsynctl\fR can be used to start, stop or restart Synapse server\. The control operation can be done on all processes or a single worker process\.
.
.SH "OPTIONS"
.
.TP
\fBaction\fR
The value of action should be one of \fBstart\fR, \fBstop\fR or \fBrestart\fR\.
.
.TP
\fBconfigfile\fR
Optional path of the configuration file to use\. Default value is \fBhomeserver\.yaml\fR\. The configuration file must exist for the operation to succeed\.
.
.TP
\fB\-w\fR, \fB\-\-worker\fR:
.
.IP
Perform start, stop or restart operations on a single worker\. Incompatible with \fB\-a\fR|\fB\-\-all\-processes\fR\. Value passed must be a valid worker\'s configuration file\.
.
.TP
\fB\-a\fR, \fB\-\-all\-processes\fR:
.
.IP
Perform start, stop or restart operations on all the workers in the given directory and the main synapse process\. Incompatible with \fB\-w\fR|\fB\-\-worker\fR\. Value passed must be a directory containing valid worker configuration files\. All files ending with \fB\.yaml\fR extension shall be considered as configuration files and all other files in the directory are ignored\.
.
.SH "CONFIGURATION FILE"
Configuration file may be generated as follows:
.
.IP "" 4
.
.nf
$ python \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name>
.
.fi
.
.IP "" 0
.
.SH "ENVIRONMENT"
.
.TP
\fBSYNAPSE_CACHE_FACTOR\fR
Synapse\'s architecture is quite RAM hungry currently \- a lot of recent room data and metadata is deliberately cached in RAM in order to speed up common requests\. This will be improved in future, but for now the easiest way to either reduce the RAM usage (at the risk of slowing things down) is to set the SYNAPSE_CACHE_FACTOR environment variable\. Roughly speaking, a SYNAPSE_CACHE_FACTOR of 1\.0 will max out at around 3\-4GB of resident memory \- this is what we currently run the matrix\.org on\. The default setting is currently 0\.1, which is probably around a ~700MB footprint\. You can dial it down further to 0\.02 if desired, which targets roughly ~512MB\. Conversely you can dial it up if you need performance for lots of users and have a box with a lot of RAM\.
.
Synapse\'s architecture is quite RAM hungry currently \- we deliberately cache a lot of recent room data and metadata in RAM in order to speed up common requests\. We\'ll improve this in the future, but for now the easiest way to reduce the RAM usage (at the risk of slowing things down) is to set the almost\-undocumented \fBSYNAPSE_CACHE_FACTOR\fR environment variable\. The default is 0\.5, which can be decreased to reduce RAM usage in memory constrained environments, or increased if performance starts to degrade\.
.IP
However, degraded performance due to a low cache factor, common on machines with slow disks, often leads to explosions in memory use due to backlogged requests\. In this case, reducing the cache factor will make things worse\. Instead, try increasing it drastically\. 2\.0 is a good starting value\.
.SH "COPYRIGHT"
This man page was written by Sunil Mohan Adapa <\fIsunil@medhas\.org\fR> for Debian GNU/Linux distribution\.
.
This man page was written by Sunil Mohan Adapa <\fI\%mailto:sunil@medhas\.org\fR> for Debian GNU/Linux distribution\.
.SH "SEE ALSO"
synapse_port_db(1), hash_password(1), register_new_matrix_user(1)
synapse_port_db(1), hash_password(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

2
debian/synctl.ronn vendored
View File

@ -68,4 +68,4 @@ Debian GNU/Linux distribution.
## SEE ALSO
synapse_port_db(1), hash_password(1), register_new_matrix_user(1)
synapse_port_db(1), hash_password(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

View File

@ -36,7 +36,17 @@ It returns a JSON body like the following:
"creation_ts": 1560432506,
"appservice_id": null,
"consent_server_notice_sent": null,
"consent_version": null
"consent_version": null,
"external_ids": [
{
"auth_provider": "<provider1>",
"external_id": "<user_id_provider_1>"
},
{
"auth_provider": "<provider2>",
"external_id": "<user_id_provider_2>"
}
]
}
```
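For illustration, the new field can be read straight off that response. A minimal sketch, assuming the standard Query User Account endpoint (`GET /_synapse/admin/v2/users/<user_id>`) and hypothetical connection details:

```python
# A hedged sketch using `requests`; the homeserver URL, user ID, and token
# below are placeholders, not values from this changeset.
import requests

HOMESERVER = "https://matrix.example.com"  # hypothetical
USER_ID = "@alice:example.com"             # hypothetical
ADMIN_TOKEN = "<admin access token>"       # hypothetical

resp = requests.get(
    f"{HOMESERVER}/_synapse/admin/v2/users/{USER_ID}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()

# `external_ids` lists the SSO identities linked to the account.
for entry in resp.json().get("external_ids", []):
    print(entry["auth_provider"], "->", entry["external_id"])
```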

View File

@ -194,7 +194,7 @@ In order to port a module that uses Synapse's old module interface, its author n
* ensure the module's callbacks are all asynchronous.
* register their callbacks using one or more of the `register_[...]_callbacks` methods
from the `ModuleApi` class in the module's `__init__` method (see [this section](#registering-a-web-resource)
from the `ModuleApi` class in the module's `__init__` method (see [this section](#registering-a-callback)
for more info).
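As a rough sketch of what that registration looks like (the `register_spam_checker_callbacks` method and callback shown here are one example; check the module API documentation for the full set of `register_[...]_callbacks` methods):

```python
from synapse.module_api import ModuleApi


class ExampleModule:
    def __init__(self, config: dict, api: ModuleApi):
        self._api = api
        # Callbacks are registered from the module's __init__, as described above.
        api.register_spam_checker_callbacks(
            check_event_for_spam=self.check_event_for_spam,
        )

    async def check_event_for_spam(self, event) -> bool:
        # Callbacks must be asynchronous under the new interface.
        return False  # this sketch never marks anything as spam
```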
Additionally, if the module is packaged with an additional web resource, the module

View File

@ -673,35 +673,41 @@ retention:
#event_cache_size: 10K
caches:
# Controls the global cache factor, which is the default cache factor
# for all caches if a specific factor for that cache is not otherwise
# set.
#
# This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
# variable. Setting by environment variable takes priority over
# setting through the config file.
#
# Defaults to 0.5, which will halve the size of all caches.
#
#global_factor: 1.0
# Controls the global cache factor, which is the default cache factor
# for all caches if a specific factor for that cache is not otherwise
# set.
#
# This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
# variable. Setting by environment variable takes priority over
# setting through the config file.
#
# Defaults to 0.5, which will halve the size of all caches.
#
#global_factor: 1.0
# A dictionary of cache name to cache factor for that individual
# cache. Overrides the global cache factor for a given cache.
#
# These can also be set through environment variables comprised
# of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
# letters and underscores. Setting by environment variable
# takes priority over setting through the config file.
# Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
#
# Some caches have '*' and other characters that are not
# alphanumeric or underscores. These caches can be named with or
# without the special characters stripped. For example, to specify
# the cache factor for `*stateGroupCache*` via an environment
# variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
#
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
# A dictionary of cache name to cache factor for that individual
# cache. Overrides the global cache factor for a given cache.
#
# These can also be set through environment variables comprised
# of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
# letters and underscores. Setting by environment variable
# takes priority over setting through the config file.
# Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
#
# Some caches have '*' and other characters that are not
# alphanumeric or underscores. These caches can be named with or
# without the special characters stripped. For example, to specify
# the cache factor for `*stateGroupCache*` via an environment
# variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
#
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
# Controls how long an entry can be in a cache without having been
# accessed before being evicted. Defaults to None, which means
# entries are never evicted based on time.
#
#expiry_time: 30m
## Database ##

View File

@ -84,7 +84,45 @@ process, for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.38.0
## Re-indexing of `events` table on Postgres databases
This release includes a database schema update which requires re-indexing one of
the larger tables in the database, `events`. This could result in increased
disk I/O for several hours or days after upgrading while the migration
completes. Furthermore, because we have to keep the old indexes until the new
indexes are ready, it could result in a significant, temporary, increase in
disk space.
To get a rough idea of the disk space required, check the current size of one
of the indexes. For example, from a `psql` shell, run the following sql:
```sql
SELECT pg_size_pretty(pg_relation_size('events_order_room'));
```
We need to rebuild **four** indexes, so you will need to multiply this result
by four to give an estimate of the disk space required. For example, on one
particular server:
```
synapse=# select pg_size_pretty(pg_relation_size('events_order_room'));
pg_size_pretty
----------------
288 MB
(1 row)
```
On this server, it would be wise to ensure that at least 1152MB are free.
The additional disk space will be freed once the migration completes.
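The multiplication can also be done directly in the database. A minimal sketch using psycopg2 (already required for Postgres-backed Synapse); the connection parameters are placeholders for your own:

```python
# Rough disk-space estimate: four indexes are rebuilt, so scale one
# index's size by four. Connection details below are hypothetical.
import psycopg2

conn = psycopg2.connect(dbname="synapse", user="synapse_user", host="localhost")
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT pg_size_pretty(4 * pg_relation_size('events_order_room'))"
        )
        print("Rough extra disk space needed:", cur.fetchone()[0])
finally:
    conn.close()
```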
SQLite databases are unaffected by this change.
# Upgrading to v1.37.0
## Deprecation of the current spam checker interface

View File

@ -75,6 +75,7 @@ files =
synapse/util/daemonize.py,
synapse/util/hash.py,
synapse/util/iterutils.py,
synapse/util/linked_list.py,
synapse/util/metrics.py,
synapse/util/macaroons.py,
synapse/util/module_loader.py,

View File

@ -0,0 +1,19 @@
#!/usr/bin/env python
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse._scripts.review_recent_signups import main

if __name__ == "__main__":
    main()

View File

@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.37.1"
__version__ = "1.38.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@ -0,0 +1,175 @@
#!/usr/bin/env python
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import sys
import time
from datetime import datetime
from typing import List

import attr

from synapse.config._base import RootConfig, find_config_files, read_config_files
from synapse.config.database import DatabaseConfig
from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
from synapse.storage.engines import create_engine


class ReviewConfig(RootConfig):
    "A config class that just pulls out the database config"
    config_classes = [DatabaseConfig]


@attr.s(auto_attribs=True)
class UserInfo:
    user_id: str
    creation_ts: int
    emails: List[str] = attr.Factory(list)
    private_rooms: List[str] = attr.Factory(list)
    public_rooms: List[str] = attr.Factory(list)
    ips: List[str] = attr.Factory(list)


def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
    """Fetches recently registered users and some info on them."""

    sql = """
        SELECT name, creation_ts FROM users
        WHERE
            ? <= creation_ts
            AND deactivated = 0
    """

    txn.execute(sql, (since_ms / 1000,))

    user_infos = [UserInfo(user_id, creation_ts) for user_id, creation_ts in txn]

    for user_info in user_infos:
        user_info.emails = DatabasePool.simple_select_onecol_txn(
            txn,
            table="user_threepids",
            keyvalues={"user_id": user_info.user_id, "medium": "email"},
            retcol="address",
        )

        sql = """
            SELECT room_id, canonical_alias, name, join_rules
            FROM local_current_membership
            INNER JOIN room_stats_state USING (room_id)
            WHERE user_id = ? AND membership = 'join'
        """

        txn.execute(sql, (user_info.user_id,))
        for room_id, canonical_alias, name, join_rules in txn:
            if join_rules == "public":
                user_info.public_rooms.append(canonical_alias or name or room_id)
            else:
                user_info.private_rooms.append(canonical_alias or name or room_id)

        user_info.ips = DatabasePool.simple_select_onecol_txn(
            txn,
            table="user_ips",
            keyvalues={"user_id": user_info.user_id},
            retcol="ip",
        )

    return user_infos


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-c",
        "--config-path",
        action="append",
        metavar="CONFIG_FILE",
        help="The config files for Synapse.",
        required=True,
    )
    parser.add_argument(
        "-s",
        "--since",
        metavar="duration",
        help="Specify how far back to review user registrations for, defaults to 7d (i.e. 7 days).",
        default="7d",
    )
    parser.add_argument(
        "-e",
        "--exclude-emails",
        action="store_true",
        help="Exclude users that have validated email addresses",
    )
    parser.add_argument(
        "-u",
        "--only-users",
        action="store_true",
        help="Only print user IDs that match.",
    )

    config = ReviewConfig()

    config_args = parser.parse_args(sys.argv[1:])
    config_files = find_config_files(search_paths=config_args.config_path)
    config_dict = read_config_files(config_files)
    config.parse_config_dict(
        config_dict,
    )

    since_ms = time.time() * 1000 - config.parse_duration(config_args.since)
    exclude_users_with_email = config_args.exclude_emails
    include_context = not config_args.only_users

    for database_config in config.database.databases:
        if "main" in database_config.databases:
            break

    engine = create_engine(database_config.config)

    with make_conn(database_config, engine, "review_recent_signups") as db_conn:
        user_infos = get_recent_users(db_conn.cursor(), since_ms)

    for user_info in user_infos:
        if exclude_users_with_email and user_info.emails:
            continue

        if include_context:
            print_public_rooms = ""
            if user_info.public_rooms:
                print_public_rooms = "(" + ", ".join(user_info.public_rooms[:3])

                if len(user_info.public_rooms) > 3:
                    print_public_rooms += ", ..."

                print_public_rooms += ")"

            print("# Created:", datetime.fromtimestamp(user_info.creation_ts))
            print("# Email:", ", ".join(user_info.emails) or "None")
            print("# IPs:", ", ".join(user_info.ips))
            print(
                "# Number joined public rooms:",
                len(user_info.public_rooms),
                print_public_rooms,
            )
            print("# Number joined private rooms:", len(user_info.private_rooms))
            print("#")

        print(user_info.user_id)

        if include_context:
            print()


if __name__ == "__main__":
    main()

View File

@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from typing import TYPE_CHECKING, Optional, Tuple
import pymacaroons
from netaddr import IPAddress
@ -28,10 +28,8 @@ from synapse.api.errors import (
InvalidClientTokenError,
MissingClientTokenError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.appservice import ApplicationService
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging import opentracing as opentracing
@ -39,7 +37,6 @@ from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import Requester, StateMap, UserID, create_requester
from synapse.util.caches.lrucache import LruCache
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -47,15 +44,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
AuthEventTypes = (
EventTypes.Create,
EventTypes.Member,
EventTypes.PowerLevels,
EventTypes.JoinRules,
EventTypes.RoomHistoryVisibility,
EventTypes.ThirdPartyInvite,
)
# guests always get this device id.
GUEST_DEVICE_ID = "guest_device"
@ -66,9 +54,7 @@ class _InvalidMacaroonException(Exception):
class Auth:
"""
FIXME: This class contains a mix of functions for authenticating users
of our client-server API and authenticating events added to room graphs.
The latter should be moved to synapse.handlers.event_auth.EventAuthHandler.
This class contains functions for authenticating users of our client-server API.
"""
def __init__(self, hs: "HomeServer"):
@ -90,18 +76,6 @@ class Auth:
self._macaroon_secret_key = hs.config.macaroon_secret_key
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
async def check_from_context(
self, room_version: str, event, context, do_sig_check=True
) -> None:
auth_event_ids = event.auth_event_ids()
auth_events_by_id = await self.store.get_events(auth_event_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_by_id.values()}
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
event_auth.check(
room_version_obj, event, auth_events=auth_events, do_sig_check=do_sig_check
)
async def check_user_in_room(
self,
room_id: str,
@ -152,13 +126,6 @@ class Auth:
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
async def check_host_in_room(self, room_id: str, host: str) -> bool:
with Measure(self.clock, "check_host_in_room"):
return await self.store.is_host_joined(room_id, host)
def get_public_keys(self, invite_event: EventBase) -> List[Dict[str, Any]]:
return event_auth.get_public_keys(invite_event)
async def get_user_by_req(
self,
request: SynapseRequest,
@ -489,44 +456,6 @@ class Auth:
"""
return await self.store.is_server_admin(user)
def compute_auth_events(
self,
event: Union[EventBase, EventBuilder],
current_state_ids: StateMap[str],
for_verification: bool = False,
) -> List[str]:
"""Given an event and current state return the list of event IDs used
to auth an event.
If `for_verification` is False then only return auth events that
should be added to the event's `auth_events`.
Returns:
List of event IDs.
"""
if event.type == EventTypes.Create:
return []
# Currently we ignore the `for_verification` flag even though there are
# some situations where we can drop particular auth events when adding
# to the event's `auth_events` (e.g. joins pointing to previous joins
# when room is publicly joinable). Dropping event IDs has the
# advantage that the auth chain for the room grows slower, but we use
# the auth chain in state resolution v2 to order events, which means
# care must be taken if dropping events to ensure that it doesn't
# introduce undesirable "state reset" behaviour.
#
# All of which sounds a bit tricky so we don't bother for now.
auth_ids = []
for etype, state_key in event_auth.auth_types_for_event(event):
auth_ev_id = current_state_ids.get((etype, state_key))
if auth_ev_id:
auth_ids.append(auth_ev_id)
return auth_ids
async def check_can_change_room_list(self, room_id: str, user: UserID) -> bool:
"""Determine whether the user is allowed to edit the room's entry in the
published room list.

View File

@ -21,7 +21,7 @@ import socket
import sys
import traceback
import warnings
from typing import Awaitable, Callable, Iterable
from typing import TYPE_CHECKING, Awaitable, Callable, Iterable
from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import NoReturn
@ -41,10 +41,14 @@ from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.logging.context import PreserveLoggingContext
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
from synapse.util.daemonize import daemonize_process
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
# list of tuples of function, args list, kwargs dict
@ -312,7 +316,7 @@ def refresh_certificate(hs):
logger.info("Context factories updated.")
async def start(hs: "synapse.server.HomeServer"):
async def start(hs: "HomeServer"):
"""
Start a Synapse server or worker.
@ -365,6 +369,9 @@ async def start(hs: "synapse.server.HomeServer"):
load_legacy_spam_checkers(hs)
# If we've configured an expiry time for caches, start the background job now.
setup_expire_lru_cache_entries(hs)
# It is now safe to start your Synapse.
hs.start_listening()
hs.get_datastore().db_pool.start_profiling()

View File

@ -5,6 +5,7 @@ from synapse.config import (
api,
appservice,
auth,
cache,
captcha,
cas,
consent,
@ -88,6 +89,7 @@ class RootConfig:
tracer: tracer.TracerConfig
redis: redis.RedisConfig
modules: modules.ModulesConfig
caches: cache.CacheConfig
federation: federation.FederationConfig
config_classes: List = ...

View File

@ -116,35 +116,41 @@ class CacheConfig(Config):
#event_cache_size: 10K
caches:
# Controls the global cache factor, which is the default cache factor
# for all caches if a specific factor for that cache is not otherwise
# set.
#
# This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
# variable. Setting by environment variable takes priority over
# setting through the config file.
#
# Defaults to 0.5, which will halve the size of all caches.
#
#global_factor: 1.0
# Controls the global cache factor, which is the default cache factor
# for all caches if a specific factor for that cache is not otherwise
# set.
#
# This can also be set by the "SYNAPSE_CACHE_FACTOR" environment
# variable. Setting by environment variable takes priority over
# setting through the config file.
#
# Defaults to 0.5, which will halve the size of all caches.
#
#global_factor: 1.0
# A dictionary of cache name to cache factor for that individual
# cache. Overrides the global cache factor for a given cache.
#
# These can also be set through environment variables comprised
# of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
# letters and underscores. Setting by environment variable
# takes priority over setting through the config file.
# Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
#
# Some caches have '*' and other characters that are not
# alphanumeric or underscores. These caches can be named with or
# without the special characters stripped. For example, to specify
# the cache factor for `*stateGroupCache*` via an environment
# variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
#
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
# A dictionary of cache name to cache factor for that individual
# cache. Overrides the global cache factor for a given cache.
#
# These can also be set through environment variables comprised
# of "SYNAPSE_CACHE_FACTOR_" + the name of the cache in capital
# letters and underscores. Setting by environment variable
# takes priority over setting through the config file.
# Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
#
# Some caches have '*' and other characters that are not
# alphanumeric or underscores. These caches can be named with or
# without the special characters stripped. For example, to specify
# the cache factor for `*stateGroupCache*` via an environment
# variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
#
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
# Controls how long an entry can be in a cache without having been
# accessed before being evicted. Defaults to None, which means
# entries are never evicted based on time.
#
#expiry_time: 30m
"""
def read_config(self, config, **kwargs):
@ -200,6 +206,12 @@ class CacheConfig(Config):
e.message # noqa: B306, DependencyException.message is a property
)
expiry_time = cache_config.get("expiry_time")
if expiry_time:
self.expiry_time_msec = self.parse_duration(expiry_time)
else:
self.expiry_time_msec = None
# Resize all caches (if necessary) with the new factors we've loaded
self.resize_all_caches()
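# For intuition only: `expiry_time` accepts the same duration strings as the
# rest of the config ('s', 'm', 'h', 'd', 'w', 'y' suffixes). The toy sketch
# below mirrors that parsing; Synapse's real `Config.parse_duration` is the
# authoritative implementation, and this stand-alone version is an assumption.
UNITS_MS = {
    "s": 1000,
    "m": 60 * 1000,
    "h": 60 * 60 * 1000,
    "d": 24 * 60 * 60 * 1000,
    "w": 7 * 24 * 60 * 60 * 1000,
    "y": 365 * 24 * 60 * 60 * 1000,
}

def parse_duration_ms(value: str) -> int:
    """Convert a duration like '30m' (or a bare millisecond count) to ms."""
    if value[-1].isdigit():
        return int(value)
    return int(value[:-1]) * UNITS_MS[value[-1]]

assert parse_duration_ms("30m") == 30 * 60 * 1000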

View File

@ -34,7 +34,7 @@ from synapse.util import Clock
from synapse.util.stringutils import random_string
if TYPE_CHECKING:
from synapse.api.auth import Auth
from synapse.handlers.event_auth import EventAuthHandler
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@ -66,7 +66,7 @@ class EventBuilder:
"""
_state: StateHandler
_auth: "Auth"
_event_auth_handler: "EventAuthHandler"
_store: DataStore
_clock: Clock
_hostname: str
@ -125,7 +125,9 @@ class EventBuilder:
state_ids = await self._state.get_current_state_ids(
self.room_id, prev_event_ids
)
auth_event_ids = self._auth.compute_auth_events(self, state_ids)
auth_event_ids = self._event_auth_handler.compute_auth_events(
self, state_ids
)
format_version = self.room_version.event_format
if format_version == EventFormatVersions.V1:
@ -193,7 +195,7 @@ class EventBuilderFactory:
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.auth = hs.get_auth()
self._event_auth_handler = hs.get_event_auth_handler()
def new(self, room_version: str, key_values: dict) -> EventBuilder:
"""Generate an event builder appropriate for the given room version
@ -229,7 +231,7 @@ class EventBuilderFactory:
return EventBuilder(
store=self.store,
state=self.state,
auth=self.auth,
event_auth_handler=self._event_auth_handler,
clock=self.clock,
hostname=self.hostname,
signing_key=self.signing_key,

View File

@ -108,9 +108,9 @@ class FederationServer(FederationBase):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.auth = hs.get_auth()
self.handler = hs.get_federation_handler()
self.state = hs.get_state_handler()
self._event_auth_handler = hs.get_event_auth_handler()
self.device_handler = hs.get_device_handler()
@ -148,6 +148,41 @@ class FederationServer(FederationBase):
self._room_prejoin_state_types = hs.config.api.room_prejoin_state
# Whether we have started handling old events in the staging area.
self._started_handling_of_staged_events = False
@wrap_as_background_process("_handle_old_staged_events")
async def _handle_old_staged_events(self) -> None:
"""Handle old staged events by fetching all rooms that have staged
events and starting the processing of each of those rooms.
"""
# Get all the rooms IDs with staged events.
room_ids = await self.store.get_all_rooms_with_staged_incoming_events()
# We then shuffle them so that if there are multiple instances doing
# this work they're less likely to collide.
random.shuffle(room_ids)
for room_id in room_ids:
room_version = await self.store.get_room_version(room_id)
# Try and acquire the processing lock for the room, if we get it start a
# background process for handling the events in the room.
lock = await self.store.try_acquire_lock(
_INBOUND_EVENT_HANDLING_LOCK_NAME, room_id
)
if lock:
logger.info("Handling old staged inbound events in %s", room_id)
self._process_incoming_pdus_in_room_inner(
room_id,
room_version,
lock,
)
# We pause a bit so that we don't start handling all rooms at once.
await self._clock.sleep(random.uniform(0, 0.1))
async def on_backfill_request(
self, origin: str, room_id: str, versions: List[str], limit: int
) -> Tuple[int, Dict[str, Any]]:
@ -166,6 +201,12 @@ class FederationServer(FederationBase):
async def on_incoming_transaction(
self, origin: str, transaction_data: JsonDict
) -> Tuple[int, Dict[str, Any]]:
# If we receive a transaction we should make sure that we kick off handling
# any old events in the staging area.
if not self._started_handling_of_staged_events:
self._started_handling_of_staged_events = True
self._handle_old_staged_events()
# keep this as early as possible to make the calculated origin ts as
# accurate as possible.
request_time = self._clock.time_msec()
@ -420,7 +461,7 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
in_room = await self.auth.check_host_in_room(room_id, origin)
in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@ -453,7 +494,7 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
in_room = await self.auth.check_host_in_room(room_id, origin)
in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@ -882,25 +923,28 @@ class FederationServer(FederationBase):
room_id: str,
room_version: RoomVersion,
lock: Lock,
latest_origin: str,
latest_event: EventBase,
latest_origin: Optional[str] = None,
latest_event: Optional[EventBase] = None,
) -> None:
"""Process events in the staging area for the given room.
The latest_origin and latest_event args are the latest origin and event
received.
received (or None to simply pull the next event from the database).
"""
# The common path is for the event we just received to be the only event in
# the room, so instead of pulling the event out of the DB and parsing
# the event we just pull out the next event ID and check if that matches.
next_origin, next_event_id = await self.store.get_next_staged_event_id_for_room(
room_id
)
if next_origin == latest_origin and next_event_id == latest_event.event_id:
origin = latest_origin
event = latest_event
else:
if latest_event is not None and latest_origin is not None:
(
next_origin,
next_event_id,
) = await self.store.get_next_staged_event_id_for_room(room_id)
if next_origin != latest_origin or next_event_id != latest_event.event_id:
latest_origin = None
latest_event = None
if latest_origin is None or latest_event is None:
next = await self.store.get_next_staged_event_for_room(
room_id, room_version
)
@ -908,6 +952,9 @@ class FederationServer(FederationBase):
return
origin, event = next
else:
origin = latest_origin
event = latest_event
# We loop round until there are no more events in the room in the
# staging area, or we fail to get the lock (which means another process

View File

@ -62,9 +62,16 @@ class AdminHandler(BaseHandler):
if ret:
profile = await self.store.get_profileinfo(user.localpart)
threepids = await self.store.user_get_threepids(user.to_string())
external_ids = [
({"auth_provider": auth_provider, "external_id": external_id})
for auth_provider, external_id in await self.store.get_external_ids_by_user(
user.to_string()
)
]
ret["displayname"] = profile.display_name
ret["avatar_url"] = profile.avatar_url
ret["threepids"] = threepids
ret["external_ids"] = external_ids
return ret
async def export_user_data(self, user_id: str, writer: "ExfiltrationWriter") -> Any:

View File

@ -11,8 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, Collection, Optional
from typing import TYPE_CHECKING, Collection, List, Optional, Union
from synapse import event_auth
from synapse.api.constants import (
EventTypes,
JoinRules,
@ -20,9 +21,11 @@ from synapse.api.constants import (
RestrictedJoinRuleTypes,
)
from synapse.api.errors import AuthError
from synapse.api.room_versions import RoomVersion
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.types import StateMap
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -34,8 +37,63 @@ class EventAuthHandler:
"""
def __init__(self, hs: "HomeServer"):
self._clock = hs.get_clock()
self._store = hs.get_datastore()
async def check_from_context(
self, room_version: str, event, context, do_sig_check=True
) -> None:
auth_event_ids = event.auth_event_ids()
auth_events_by_id = await self._store.get_events(auth_event_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_by_id.values()}
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
event_auth.check(
room_version_obj, event, auth_events=auth_events, do_sig_check=do_sig_check
)
def compute_auth_events(
self,
event: Union[EventBase, EventBuilder],
current_state_ids: StateMap[str],
for_verification: bool = False,
) -> List[str]:
"""Given an event and current state return the list of event IDs used
to auth an event.
If `for_verification` is False then only return auth events that
should be added to the event's `auth_events`.
Returns:
List of event IDs.
"""
if event.type == EventTypes.Create:
return []
# Currently we ignore the `for_verification` flag even though there are
# some situations where we can drop particular auth events when adding
# to the event's `auth_events` (e.g. joins pointing to previous joins
# when room is publicly joinable). Dropping event IDs has the
# advantage that the auth chain for the room grows slower, but we use
# the auth chain in state resolution v2 to order events, which means
# care must be taken if dropping events to ensure that it doesn't
# introduce undesirable "state reset" behaviour.
#
# All of which sounds a bit tricky so we don't bother for now.
auth_ids = []
for etype, state_key in event_auth.auth_types_for_event(event):
auth_ev_id = current_state_ids.get((etype, state_key))
if auth_ev_id:
auth_ids.append(auth_ev_id)
return auth_ids
async def check_host_in_room(self, room_id: str, host: str) -> bool:
with Measure(self._clock, "check_host_in_room"):
return await self._store.is_host_joined(room_id, host)
async def check_restricted_join_rules(
self,
state_ids: StateMap[str],

View File

@ -250,7 +250,9 @@ class FederationHandler(BaseHandler):
#
# Note that if we were never in the room then we would have already
# dropped the event, since we wouldn't know the room version.
is_in_room = await self.auth.check_host_in_room(room_id, self.server_name)
is_in_room = await self._event_auth_handler.check_host_in_room(
room_id, self.server_name
)
if not is_in_room:
logger.info(
"Ignoring PDU from %s as we're not in the room",
@ -1674,7 +1676,9 @@ class FederationHandler(BaseHandler):
room_version = await self.store.get_room_version_id(room_id)
# now check that we are *still* in the room
is_in_room = await self.auth.check_host_in_room(room_id, self.server_name)
is_in_room = await self._event_auth_handler.check_host_in_room(
room_id, self.server_name
)
if not is_in_room:
logger.info(
"Got /make_join request for room %s we are no longer in",
@ -1705,7 +1709,7 @@ class FederationHandler(BaseHandler):
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_join_request`
await self.auth.check_from_context(
await self._event_auth_handler.check_from_context(
room_version, event, context, do_sig_check=False
)
@ -1877,7 +1881,7 @@ class FederationHandler(BaseHandler):
try:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_leave_request`
await self.auth.check_from_context(
await self._event_auth_handler.check_from_context(
room_version, event, context, do_sig_check=False
)
except AuthError as e:
@ -1939,7 +1943,7 @@ class FederationHandler(BaseHandler):
try:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_knock_request`
await self.auth.check_from_context(
await self._event_auth_handler.check_from_context(
room_version, event, context, do_sig_check=False
)
except AuthError as e:
@ -2111,7 +2115,7 @@ class FederationHandler(BaseHandler):
async def on_backfill_request(
self, origin: str, room_id: str, pdu_list: List[str], limit: int
) -> List[EventBase]:
in_room = await self.auth.check_host_in_room(room_id, origin)
in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@ -2146,7 +2150,9 @@ class FederationHandler(BaseHandler):
)
if event:
in_room = await self.auth.check_host_in_room(event.room_id, origin)
in_room = await self._event_auth_handler.check_host_in_room(
event.room_id, origin
)
if not in_room:
raise AuthError(403, "Host not in room.")
@ -2499,7 +2505,7 @@ class FederationHandler(BaseHandler):
latest_events: List[str],
limit: int,
) -> List[EventBase]:
in_room = await self.auth.check_host_in_room(room_id, origin)
in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@ -2562,7 +2568,7 @@ class FederationHandler(BaseHandler):
if not auth_events:
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self.auth.compute_auth_events(
auth_events_ids = self._event_auth_handler.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events_x = await self.store.get_events(auth_events_ids)
@ -2991,7 +2997,7 @@ class FederationHandler(BaseHandler):
"state_key": target_user_id,
}
if await self.auth.check_host_in_room(room_id, self.hs.hostname):
if await self._event_auth_handler.check_host_in_room(room_id, self.hs.hostname):
room_version = await self.store.get_room_version_id(room_id)
builder = self.event_builder_factory.new(room_version, event_dict)
@ -3011,7 +3017,9 @@ class FederationHandler(BaseHandler):
event.internal_metadata.send_on_behalf_of = self.hs.hostname
try:
await self.auth.check_from_context(room_version, event, context)
await self._event_auth_handler.check_from_context(
room_version, event, context
)
except AuthError as e:
logger.warning("Denying new third party invite %r because %s", event, e)
raise e
@ -3054,7 +3062,9 @@ class FederationHandler(BaseHandler):
)
try:
await self.auth.check_from_context(room_version, event, context)
await self._event_auth_handler.check_from_context(
room_version, event, context
)
except AuthError as e:
logger.warning("Denying third party invite %r because %s", event, e)
raise e
@ -3142,7 +3152,7 @@ class FederationHandler(BaseHandler):
last_exception = None # type: Optional[Exception]
# for each public key in the 3pid invite event
for public_key_object in self.hs.get_auth().get_public_keys(invite_event):
for public_key_object in event_auth.get_public_keys(invite_event):
try:
# for each sig on the third_party_invite block of the actual invite
for server, signature_block in signed["signatures"].items():


@ -385,6 +385,7 @@ class EventCreationHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self._event_auth_handler = hs.get_event_auth_handler()
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state = hs.get_state_handler()
@ -597,7 +598,7 @@ class EventCreationHandler:
(e.type, e.state_key): e.event_id for e in auth_events
}
# Actually strip down and use the necessary auth events
auth_event_ids = self.auth.compute_auth_events(
auth_event_ids = self._event_auth_handler.compute_auth_events(
event=temp_event,
current_state_ids=auth_event_state_map,
for_verification=False,
@ -1056,7 +1057,9 @@ class EventCreationHandler:
assert event.content["membership"] == Membership.LEAVE
else:
try:
await self.auth.check_from_context(room_version, event, context)
await self._event_auth_handler.check_from_context(
room_version, event, context
)
except AuthError as err:
logger.warning("Denying new event %r because %s", event, err)
raise err
@ -1381,7 +1384,7 @@ class EventCreationHandler:
raise AuthError(403, "Redacting server ACL events is not permitted")
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self.auth.compute_auth_events(
auth_events_ids = self._event_auth_handler.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events_map = await self.store.get_events(auth_events_ids)


@ -83,6 +83,7 @@ class RoomCreationHandler(BaseHandler):
self.spam_checker = hs.get_spam_checker()
self.event_creation_handler = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()
self._event_auth_handler = hs.get_event_auth_handler()
self.config = hs.config
# Room state based off defined presets
@ -226,7 +227,7 @@ class RoomCreationHandler(BaseHandler):
},
)
old_room_version = await self.store.get_room_version_id(old_room_id)
await self.auth.check_from_context(
await self._event_auth_handler.check_from_context(
old_room_version, tombstone_event, tombstone_context
)


@ -472,7 +472,7 @@ class SpaceSummaryHandler:
# If this is a request over federation, check if the host is in the room or
# is in one of the spaces specified via the join rules.
elif origin:
if await self._auth.check_host_in_room(room_id, origin):
if await self._event_auth_handler.check_host_in_room(room_id, origin):
return True
# Alternately, if the host has a user in any of the spaces specified
@ -485,7 +485,9 @@ class SpaceSummaryHandler:
await self._event_auth_handler.get_rooms_that_allow_join(state_ids)
)
for space_id in allowed_rooms:
if await self._auth.check_host_in_room(space_id, origin):
if await self._event_auth_handler.check_host_in_room(
space_id, origin
):
return True
# otherwise, check if the room is peekable


@ -104,7 +104,7 @@ class BulkPushRuleEvaluator:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastore()
self.auth = hs.get_auth()
self._event_auth_handler = hs.get_event_auth_handler()
# Used by `RulesForRoom` to ensure only one thing mutates the cache at a
# time. Keyed off room_id.
@ -172,7 +172,7 @@ class BulkPushRuleEvaluator:
# not having a power level event is an extreme edge case
auth_events = {POWER_KEY: await self.store.get_event(pl_event_id)}
else:
auth_events_ids = self.auth.compute_auth_events(
auth_events_ids = self._event_auth_handler.compute_auth_events(
event, prev_state_ids, for_verification=False
)
auth_events_dict = await self.store.get_events(auth_events_ids)


@ -111,7 +111,7 @@ def make_conn(
db_config: DatabaseConnectionConfig,
engine: BaseDatabaseEngine,
default_txn_name: str,
) -> Connection:
) -> "LoggingDatabaseConnection":
"""Make a new connection to the database and return it.
Returns:


@ -16,6 +16,8 @@ import logging
from queue import Empty, PriorityQueue
from typing import Collection, Dict, Iterable, List, Optional, Set, Tuple
from prometheus_client import Gauge
from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import StoreError
from synapse.api.room_versions import RoomVersion
@ -32,6 +34,16 @@ from synapse.util.caches.descriptors import cached
from synapse.util.caches.lrucache import LruCache
from synapse.util.iterutils import batch_iter
oldest_pdu_in_federation_staging = Gauge(
"synapse_federation_server_oldest_inbound_pdu_in_staging",
"The age in seconds since we received the oldest pdu in the federation staging area",
)
number_pdus_in_federation_queue = Gauge(
"synapse_federation_server_number_inbound_pdu_in_staging",
"The total number of events in the inbound federation staging",
)
logger = logging.getLogger(__name__)
@ -54,6 +66,8 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
500000, "_event_auth_cache", size_callback=len
) # type: LruCache[str, List[Tuple[str, int]]]
self._clock.looping_call(self._get_stats_for_federation_staging, 30 * 1000)
async def get_auth_chain(
self, room_id: str, event_ids: Collection[str], include_given: bool = False
) -> List[EventBase]:
@ -1193,6 +1207,40 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
return origin, event
async def get_all_rooms_with_staged_incoming_events(self) -> List[str]:
"""Get the room IDs of all events currently staged."""
return await self.db_pool.simple_select_onecol(
table="federation_inbound_events_staging",
keyvalues={},
retcol="DISTINCT room_id",
desc="get_all_rooms_with_staged_incoming_events",
)
@wrap_as_background_process("_get_stats_for_federation_staging")
async def _get_stats_for_federation_staging(self):
"""Update the prometheus metrics for the inbound federation staging area."""
def _get_stats_for_federation_staging_txn(txn):
txn.execute(
"SELECT coalesce(count(*), 0) FROM federation_inbound_events_staging"
)
(count,) = txn.fetchone()
txn.execute(
"SELECT coalesce(min(received_ts), 0) FROM federation_inbound_events_staging"
)
(received_ts,) = txn.fetchone()
# `received_ts` is a unix timestamp in milliseconds, while the gauge
# advertises an age in seconds, so convert before reporting (and report
# 0 when the staging area is empty).
age = (self._clock.time_msec() - received_ts) / 1000 if received_ts else 0
return count, age
count, age = await self.db_pool.runInteraction(
"_get_stats_for_federation_staging", _get_stats_for_federation_staging_txn
)
number_pdus_in_federation_queue.set(count)
oldest_pdu_in_federation_staging.set(age)
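A standalone sketch (run in a fresh process, with `prometheus_client` installed; the `demo_` name is illustrative) of how a gauge set this way renders in the text exposition format served by the metrics endpoint:

from prometheus_client import Gauge, generate_latest

demo_staged = Gauge(
    "demo_inbound_pdu_in_staging",
    "Illustrative counterpart of the staging-area gauge above",
)
demo_staged.set(3)

# generate_latest() renders the default registry; the output includes:
#   demo_inbound_pdu_in_staging 3.0
print(generate_latest().decode("utf-8"))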
class EventFederationStore(EventFederationWorkerStore):
"""Responsible for storing and serving up the various graphs associated


@ -73,20 +73,20 @@ class ProfileWorkerStore(SQLBaseStore):
async def set_profile_displayname(
self, user_localpart: str, new_displayname: Optional[str]
) -> None:
await self.db_pool.simple_update_one(
await self.db_pool.simple_upsert(
table="profiles",
keyvalues={"user_id": user_localpart},
updatevalues={"displayname": new_displayname},
values={"displayname": new_displayname},
desc="set_profile_displayname",
)
async def set_profile_avatar_url(
self, user_localpart: str, new_avatar_url: Optional[str]
) -> None:
await self.db_pool.simple_update_one(
await self.db_pool.simple_upsert(
table="profiles",
keyvalues={"user_id": user_localpart},
updatevalues={"avatar_url": new_avatar_url},
values={"avatar_url": new_avatar_url},
desc="set_profile_avatar_url",
)
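The switch from `simple_update_one` to `simple_upsert` matters for users created without a `profiles` row (see the new deactivation test below): a plain UPDATE matches nothing there, and `simple_update_one` treats that as an error, while an upsert inserts the row. A standalone sketch of the SQL-level difference, assuming SQLite 3.24+ for the upsert syntax:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (user_id TEXT PRIMARY KEY, displayname TEXT)")

# A plain UPDATE matches no rows when the profile row is missing.
cur = conn.execute("UPDATE profiles SET displayname = NULL WHERE user_id = 'user'")
assert cur.rowcount == 0

# The upsert creates the row instead of failing.
conn.execute(
    "INSERT INTO profiles (user_id, displayname) VALUES ('user', NULL) "
    "ON CONFLICT (user_id) DO UPDATE SET displayname = excluded.displayname"
)
assert conn.execute("SELECT count(*) FROM profiles").fetchone()[0] == 1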


@ -42,4 +42,4 @@ INSERT INTO background_updates (ordering, update_name, progress_json, depends_on
-- ... and another to do the switcheroo
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
(6003, 'replace_stream_ordering_column', '{}', 'index_stream_ordering2_ts');
(6001, 'replace_stream_ordering_column', '{}', 'index_stream_ordering2_ts');


@ -12,9 +12,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import threading
import weakref
from functools import wraps
from typing import (
TYPE_CHECKING,
Any,
Callable,
Collection,
@ -31,10 +34,19 @@ from typing import (
from typing_extensions import Literal
from twisted.internet import reactor
from synapse.config import cache as cache_config
from synapse.util import caches
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.util import Clock, caches
from synapse.util.caches import CacheMetric, register_cache
from synapse.util.caches.treecache import TreeCache, iterate_tree_cache_entry
from synapse.util.linked_list import ListNode
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
try:
from pympler.asizeof import Asizer
@ -82,19 +94,126 @@ def enumerate_leaves(node, depth):
yield m
P = TypeVar("P")
class _TimedListNode(ListNode[P]):
"""A `ListNode` that tracks last access time."""
__slots__ = ["last_access_ts_secs"]
def update_last_access(self, clock: Clock):
self.last_access_ts_secs = int(clock.time())
# Whether to insert new cache entries into the global list. We only add
# to it if time-based eviction is enabled.
USE_GLOBAL_LIST = False
# A linked list of all cache entries, allowing efficient time based eviction.
GLOBAL_ROOT = ListNode["_Node"].create_root_node()
@wrap_as_background_process("LruCache._expire_old_entries")
async def _expire_old_entries(clock: Clock, expiry_seconds: int):
"""Walks the global cache list to find cache entries that haven't been
accessed in the given number of seconds.
"""
now = int(clock.time())
node = GLOBAL_ROOT.prev_node
assert node is not None
i = 0
logger.debug("Searching for stale caches")
while node is not GLOBAL_ROOT:
# Only the root node isn't a `_TimedListNode`.
assert isinstance(node, _TimedListNode)
if node.last_access_ts_secs > now - expiry_seconds:
break
cache_entry = node.get_cache_entry()
next_node = node.prev_node
# The node should always have a reference to a cache entry and a valid
# `prev_node`, as we only drop them when we remove the node from the
# list.
assert next_node is not None
assert cache_entry is not None
cache_entry.drop_from_cache()
# If we do lots of work at once we yield to allow other stuff to happen.
if (i + 1) % 10000 == 0:
logger.debug("Waiting during drop")
await clock.sleep(0)
logger.debug("Waking during drop")
node = next_node
# If we've yielded then our current node may have been evicted, so we
# need to check that it's still valid.
if node.prev_node is None:
break
i += 1
logger.info("Dropped %d items from caches", i)
def setup_expire_lru_cache_entries(hs: "HomeServer"):
"""Start a background job that expires all cache entries if they have not
been accessed for the given number of seconds.
"""
if not hs.config.caches.expiry_time_msec:
return
logger.info(
"Expiring LRU caches after %d seconds", hs.config.caches.expiry_time_msec / 1000
)
global USE_GLOBAL_LIST
USE_GLOBAL_LIST = True
clock = hs.get_clock()
clock.looping_call(
_expire_old_entries, 30 * 1000, clock, hs.config.caches.expiry_time_msec / 1000
)
class _Node:
__slots__ = ["prev_node", "next_node", "key", "value", "callbacks", "memory"]
__slots__ = [
"_list_node",
"_global_list_node",
"_cache",
"key",
"value",
"callbacks",
"memory",
]
def __init__(
self,
prev_node,
next_node,
root: "ListNode[_Node]",
key,
value,
cache: "weakref.ReferenceType[LruCache]",
clock: Clock,
callbacks: Collection[Callable[[], None]] = (),
):
self.prev_node = prev_node
self.next_node = next_node
self._list_node = ListNode.insert_after(self, root)
self._global_list_node = None
if USE_GLOBAL_LIST:
self._global_list_node = _TimedListNode.insert_after(self, GLOBAL_ROOT)
self._global_list_node.update_last_access(clock)
# We store a weak reference to the cache object so that this _Node can
# remove itself from the cache. If the cache is dropped we ensure we
# remove our entries in the lists.
self._cache = cache
self.key = key
self.value = value
@ -116,11 +235,16 @@ class _Node:
self.memory = (
_get_size_of(key)
+ _get_size_of(value)
+ _get_size_of(self._list_node, recurse=False)
+ _get_size_of(self.callbacks, recurse=False)
+ _get_size_of(self, recurse=False)
)
self.memory += _get_size_of(self.memory, recurse=False)
if self._global_list_node:
self.memory += _get_size_of(self._global_list_node, recurse=False)
self.memory += _get_size_of(self._global_list_node.last_access_ts_secs)
def add_callbacks(self, callbacks: Collection[Callable[[], None]]) -> None:
"""Add to stored list of callbacks, removing duplicates."""
@ -147,6 +271,32 @@ class _Node:
self.callbacks = None
def drop_from_cache(self) -> None:
"""Drop this node from the cache.
Ensures that the entry gets removed from the cache and that we get
removed from all lists.
"""
cache = self._cache()
if not cache or not cache.pop(self.key, None):
# `cache.pop` should call `drop_from_lists()`, unless this Node had
# already been removed from the cache.
self.drop_from_lists()
def drop_from_lists(self) -> None:
"""Remove this node from the cache lists."""
self._list_node.remove_from_list()
if self._global_list_node:
self._global_list_node.remove_from_list()
def move_to_front(self, clock: Clock, cache_list_root: ListNode) -> None:
"""Moves this node to the front of all the lists its in."""
self._list_node.move_after(cache_list_root)
if self._global_list_node:
self._global_list_node.move_after(GLOBAL_ROOT)
self._global_list_node.update_last_access(clock)
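The `_cache` weak reference above is what lets a dropped `LruCache` be collected even while nodes are still reachable; a minimal sketch of that pattern, relying on CPython's refcounting collecting the cache as soon as the last strong reference goes:

import weakref

class Cache(dict):
    """dict subclass, because plain dicts don't support weak references."""

cache = Cache(key="value")
cache_ref = weakref.ref(cache)
assert cache_ref() is cache  # dereferencing while alive returns the cache

del cache
assert cache_ref() is None  # after collection, dereferencing returns None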
class LruCache(Generic[KT, VT]):
"""
@ -163,6 +313,7 @@ class LruCache(Generic[KT, VT]):
size_callback: Optional[Callable] = None,
metrics_collection_callback: Optional[Callable[[], None]] = None,
apply_cache_factor_from_config: bool = True,
clock: Optional[Clock] = None,
):
"""
Args:
@ -188,6 +339,13 @@ class LruCache(Generic[KT, VT]):
apply_cache_factor_from_config (bool): If true, `max_size` will be
multiplied by a cache factor derived from the homeserver config
"""
# Default `clock` to something sensible. Note that we rename it to
# `real_clock` so that mypy doesn't think it's still `Optional`.
if clock is None:
real_clock = Clock(reactor)
else:
real_clock = clock
cache = cache_type()
self.cache = cache # Used for introspection.
self.apply_cache_factor_from_config = apply_cache_factor_from_config
@ -219,17 +377,31 @@ class LruCache(Generic[KT, VT]):
# this is exposed for access from outside this class
self.metrics = metrics
list_root = _Node(None, None, None, None)
list_root.next_node = list_root
list_root.prev_node = list_root
# We create a single weakref to self here so that we don't need to keep
# creating more each time we create a `_Node`.
weak_ref_to_self = weakref.ref(self)
list_root = ListNode[_Node].create_root_node()
lock = threading.Lock()
def evict():
while cache_len() > self.max_size:
# Get the last node in the list (i.e. the oldest node).
todelete = list_root.prev_node
evicted_len = delete_node(todelete)
cache.pop(todelete.key, None)
# The list root should always have a valid `prev_node` if the
# cache is not empty.
assert todelete is not None
# The node should always have a reference to a cache entry, as
# we only drop the cache entry when we remove the node from the
# list.
node = todelete.get_cache_entry()
assert node is not None
evicted_len = delete_node(node)
cache.pop(node.key, None)
if metrics:
metrics.inc_evictions(evicted_len)
@ -255,11 +427,7 @@ class LruCache(Generic[KT, VT]):
self.len = synchronized(cache_len)
def add_node(key, value, callbacks: Collection[Callable[[], None]] = ()):
prev_node = list_root
next_node = prev_node.next_node
node = _Node(prev_node, next_node, key, value, callbacks)
prev_node.next_node = node
next_node.prev_node = node
node = _Node(list_root, key, value, weak_ref_to_self, real_clock, callbacks)
cache[key] = node
if size_callback:
@ -268,23 +436,11 @@ class LruCache(Generic[KT, VT]):
if caches.TRACK_MEMORY_USAGE and metrics:
metrics.inc_memory_usage(node.memory)
def move_node_to_front(node):
prev_node = node.prev_node
next_node = node.next_node
prev_node.next_node = next_node
next_node.prev_node = prev_node
prev_node = list_root
next_node = prev_node.next_node
node.prev_node = prev_node
node.next_node = next_node
prev_node.next_node = node
next_node.prev_node = node
def move_node_to_front(node: _Node):
node.move_to_front(real_clock, list_root)
def delete_node(node):
prev_node = node.prev_node
next_node = node.next_node
prev_node.next_node = next_node
next_node.prev_node = prev_node
def delete_node(node: _Node) -> int:
node.drop_from_lists()
deleted_len = 1
if size_callback:
@ -411,10 +567,13 @@ class LruCache(Generic[KT, VT]):
@synchronized
def cache_clear() -> None:
list_root.next_node = list_root
list_root.prev_node = list_root
for node in cache.values():
node.run_and_clear_callbacks()
node.drop_from_lists()
assert list_root.next_node == list_root
assert list_root.prev_node == list_root
cache.clear()
if size_callback:
cached_cache_len[0] = 0
@ -484,3 +643,11 @@ class LruCache(Generic[KT, VT]):
self._on_resize()
return True
return False
def __del__(self) -> None:
# We're about to be deleted, so we make sure to clear up all the nodes
# and run callbacks, etc.
#
# This happens e.g. in the sync code where we have an expiring cache of
# lru caches.
self.clear()

synapse/util/linked_list.py (new file)

@ -0,0 +1,150 @@
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A circular doubly linked list implementation.
"""
import threading
from typing import Generic, Optional, Type, TypeVar
P = TypeVar("P")
LN = TypeVar("LN", bound="ListNode")
class ListNode(Generic[P]):
"""A node in a circular doubly linked list, with an (optional) reference to
a cache entry.
The reference should only be `None` for the root node or if the node has
been removed from the list.
"""
# A lock to protect mutating the list prev/next pointers.
_LOCK = threading.Lock()
# We don't use attrs here as in py3.6 you can't have `attr.s(slots=True)`
# and inherit from `Generic` for some reason
__slots__ = [
"cache_entry",
"prev_node",
"next_node",
]
def __init__(self, cache_entry: Optional[P] = None) -> None:
self.cache_entry = cache_entry
self.prev_node: Optional[ListNode[P]] = None
self.next_node: Optional[ListNode[P]] = None
@classmethod
def create_root_node(cls: Type["ListNode[P]"]) -> "ListNode[P]":
"""Create a new linked list by creating a "root" node, which is a node
that has prev_node/next_node pointing to itself and no associated cache
entry.
"""
root = cls()
root.prev_node = root
root.next_node = root
return root
@classmethod
def insert_after(
cls: Type[LN],
cache_entry: P,
node: "ListNode[P]",
) -> LN:
"""Create a new list node that is placed after the given node.
Args:
cache_entry: The associated cache entry.
node: The existing node in the list to insert the new entry after.
"""
new_node = cls(cache_entry)
with cls._LOCK:
new_node._refs_insert_after(node)
return new_node
def remove_from_list(self):
"""Remove this node from the list."""
with self._LOCK:
self._refs_remove_node_from_list()
# We drop the reference to the cache entry to break the reference cycle
# between the list node and cache entry, allowing the two to be dropped
# immediately rather than at the next GC.
self.cache_entry = None
def move_after(self, node: "ListNode"):
"""Move this node from its current location in the list to after the
given node.
"""
with self._LOCK:
# We assert that both this node and the target node are still "alive".
assert self.prev_node
assert self.next_node
assert node.prev_node
assert node.next_node
assert self is not node
# Remove self from the list
self._refs_remove_node_from_list()
# Insert self back into the list, after target node
self._refs_insert_after(node)
def _refs_remove_node_from_list(self):
"""Internal method to *just* remove the node from the list, without
e.g. clearing out the cache entry.
"""
if self.prev_node is None or self.next_node is None:
# We've already been removed from the list.
return
prev_node = self.prev_node
next_node = self.next_node
prev_node.next_node = next_node
next_node.prev_node = prev_node
# We set these to None so that we don't get circular references,
# allowing us to be dropped without having to go via the GC.
self.prev_node = None
self.next_node = None
def _refs_insert_after(self, node: "ListNode"):
"""Internal method to insert the node after the given node."""
# This method should only be called when we're not already in the list.
assert self.prev_node is None
assert self.next_node is None
# We expect the given node to be in the list and thus have valid
# prev/next refs.
assert node.next_node
assert node.prev_node
prev_node = node
next_node = node.next_node
self.prev_node = prev_node
self.next_node = next_node
prev_node.next_node = self
next_node.prev_node = self
def get_cache_entry(self) -> Optional[P]:
"""Get the cache entry, returns None if this is the root node (i.e.
cache_entry is None) or if the entry has been dropped.
"""
return self.cache_entry
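A minimal usage sketch of the API above (assumes synapse is importable; the node names are illustrative):

from synapse.util.linked_list import ListNode

root = ListNode.create_root_node()

# insert_after places nodes directly after the given node, so the most
# recently inserted entry sits at root.next_node and the oldest at
# root.prev_node.
node_a = ListNode.insert_after("a", root)
node_b = ListNode.insert_after("b", root)
assert root.next_node is node_b and root.prev_node is node_a

# Touching an entry moves it to the front, as LruCache does on access.
node_a.move_after(root)
assert root.next_node is node_a

# Removal unlinks the node and clears its entry to break the ref cycle.
node_b.remove_from_list()
assert node_b.get_cache_entry() is None
assert root.next_node is node_a and node_a.next_node is root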


@ -45,5 +45,4 @@ Peeked rooms only turn up in the sync for the device who peeked them
# Blacklisted due to changes made in #10272
Outbound federation will ignore a missing event with bad JSON for room version 6
Backfilled events whose prev_events are in a different room do not allow cross-room back-pagination
Federation rejects inbound events where the prev_events cannot be found


@ -734,7 +734,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.auth = hs.get_auth()
self._event_auth_handler = hs.get_event_auth_handler()
# We don't actually check signatures in tests, so lets just create a
# random key to use.
@ -846,7 +846,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
builder = EventBuilder(
state=self.state,
auth=self.auth,
event_auth_handler=self._event_auth_handler,
store=self.store,
clock=self.clock,
hostname=hostname,


@ -939,7 +939,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
"""
channel = self.make_request("POST", self.url, b"{}")
self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(401, channel.code, msg=channel.json_body)
self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])
def test_requester_is_not_admin(self):
@ -950,7 +950,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
channel = self.make_request("POST", url, access_token=self.other_user_token)
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual("You are not a server admin", channel.json_body["error"])
channel = self.make_request(
@ -960,7 +960,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
content=b"{}",
)
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual("You are not a server admin", channel.json_body["error"])
def test_user_does_not_exist(self):
@ -990,7 +990,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.BAD_JSON, channel.json_body["errcode"])
def test_user_is_not_local(self):
@ -1006,7 +1006,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
def test_deactivate_user_erase_true(self):
"""
Test deactivating an user and set `erase` to `true`
Test deactivating a user and set `erase` to `true`
"""
# Get user
@ -1016,24 +1016,22 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(False, channel.json_body["deactivated"])
self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"])
self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"])
self.assertEqual("User1", channel.json_body["displayname"])
# Deactivate user
body = json.dumps({"erase": True})
# Deactivate and erase user
channel = self.make_request(
"POST",
self.url,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content={"erase": True},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
# Get user
channel = self.make_request(
@ -1042,7 +1040,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(True, channel.json_body["deactivated"])
self.assertEqual(0, len(channel.json_body["threepids"]))
@ -1053,7 +1051,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
def test_deactivate_user_erase_false(self):
"""
Test deactivating an user and set `erase` to `false`
Test deactivating a user and set `erase` to `false`
"""
# Get user
@ -1063,7 +1061,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(False, channel.json_body["deactivated"])
self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"])
@ -1071,13 +1069,11 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
self.assertEqual("User1", channel.json_body["displayname"])
# Deactivate user
body = json.dumps({"erase": False})
channel = self.make_request(
"POST",
self.url,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content={"erase": False},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
@ -1089,7 +1085,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(True, channel.json_body["deactivated"])
self.assertEqual(0, len(channel.json_body["threepids"]))
@ -1098,6 +1094,60 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
self._is_erased("@user:test", False)
def test_deactivate_user_erase_true_no_profile(self):
"""
Test deactivating a user and set `erase` to `true`
if user has no profile information (stored in the database table `profiles`).
"""
# Users normally have an entry in `profiles`, but occasionally they are created without one.
# To test deactivation for users without a profile, we delete the profile information for our user.
self.get_success(
self.store.db_pool.simple_delete_one(
table="profiles", keyvalues={"user_id": "user"}
)
)
# Get user
channel = self.make_request(
"GET",
self.url_other_user,
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(False, channel.json_body["deactivated"])
self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"])
self.assertIsNone(channel.json_body["avatar_url"])
self.assertIsNone(channel.json_body["displayname"])
# Deactivate and erase user
channel = self.make_request(
"POST",
self.url,
access_token=self.admin_user_tok,
content={"erase": True},
)
self.assertEqual(200, channel.code, msg=channel.json_body)
# Get user
channel = self.make_request(
"GET",
self.url_other_user,
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(True, channel.json_body["deactivated"])
self.assertEqual(0, len(channel.json_body["threepids"]))
self.assertIsNone(channel.json_body["avatar_url"])
self.assertIsNone(channel.json_body["displayname"])
self._is_erased("@user:test", True)
def _is_erased(self, user_id: str, expect: bool) -> None:
"""Assert that the user is erased or not"""
d = self.store.is_user_erased(user_id)
@ -1150,7 +1200,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.other_user_token,
)
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual("You are not a server admin", channel.json_body["error"])
channel = self.make_request(
@ -1160,7 +1210,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
content=b"{}",
)
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual("You are not a server admin", channel.json_body["error"])
def test_user_does_not_exist(self):
@ -1177,6 +1227,58 @@ class UserRestTestCase(unittest.HomeserverTestCase):
self.assertEqual(404, channel.code, msg=channel.json_body)
self.assertEqual("M_NOT_FOUND", channel.json_body["errcode"])
def test_get_user(self):
"""
Test a simple get of a user.
"""
channel = self.make_request(
"GET",
self.url_other_user,
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual("User", channel.json_body["displayname"])
self._check_fields(channel.json_body)
def test_get_user_with_sso(self):
"""
Test get a user with SSO details.
"""
self.get_success(
self.store.record_user_external_id(
"auth_provider1", "external_id1", self.other_user
)
)
self.get_success(
self.store.record_user_external_id(
"auth_provider2", "external_id2", self.other_user
)
)
channel = self.make_request(
"GET",
self.url_other_user,
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(
"external_id1", channel.json_body["external_ids"][0]["external_id"]
)
self.assertEqual(
"auth_provider1", channel.json_body["external_ids"][0]["auth_provider"]
)
self.assertEqual(
"external_id2", channel.json_body["external_ids"][1]["external_id"]
)
self.assertEqual(
"auth_provider2", channel.json_body["external_ids"][1]["auth_provider"]
)
self._check_fields(channel.json_body)
def test_create_server_admin(self):
"""
Check that a new admin user is created successfully.
@ -1184,30 +1286,29 @@ class UserRestTestCase(unittest.HomeserverTestCase):
url = "/_synapse/admin/v2/users/@bob:test"
# Create user (server admin)
body = json.dumps(
{
"password": "abc123",
"admin": True,
"displayname": "Bob's name",
"threepids": [{"medium": "email", "address": "bob@bob.bob"}],
"avatar_url": "mxc://fibble/wibble",
}
)
body = {
"password": "abc123",
"admin": True,
"displayname": "Bob's name",
"threepids": [{"medium": "email", "address": "bob@bob.bob"}],
"avatar_url": "mxc://fibble/wibble",
}
channel = self.make_request(
"PUT",
url,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content=body,
)
self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(201, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertEqual("Bob's name", channel.json_body["displayname"])
self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
self.assertTrue(channel.json_body["admin"])
self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
self._check_fields(channel.json_body)
# Get user
channel = self.make_request(
@ -1216,7 +1317,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertEqual("Bob's name", channel.json_body["displayname"])
self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
@ -1225,6 +1326,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
self.assertFalse(channel.json_body["is_guest"])
self.assertFalse(channel.json_body["deactivated"])
self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
self._check_fields(channel.json_body)
def test_create_user(self):
"""
@ -1233,30 +1335,29 @@ class UserRestTestCase(unittest.HomeserverTestCase):
url = "/_synapse/admin/v2/users/@bob:test"
# Create user
body = json.dumps(
{
"password": "abc123",
"admin": False,
"displayname": "Bob's name",
"threepids": [{"medium": "email", "address": "bob@bob.bob"}],
"avatar_url": "mxc://fibble/wibble",
}
)
body = {
"password": "abc123",
"admin": False,
"displayname": "Bob's name",
"threepids": [{"medium": "email", "address": "bob@bob.bob"}],
"avatar_url": "mxc://fibble/wibble",
}
channel = self.make_request(
"PUT",
url,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content=body,
)
self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(201, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertEqual("Bob's name", channel.json_body["displayname"])
self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
self.assertFalse(channel.json_body["admin"])
self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
self._check_fields(channel.json_body)
# Get user
channel = self.make_request(
@ -1265,7 +1366,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertEqual("Bob's name", channel.json_body["displayname"])
self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
@ -1275,6 +1376,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
self.assertFalse(channel.json_body["deactivated"])
self.assertFalse(channel.json_body["shadow_banned"])
self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
self._check_fields(channel.json_body)
@override_config(
{"limit_usage_by_mau": True, "max_mau_value": 2, "mau_trial_days": 0}
@ -1311,16 +1413,14 @@ class UserRestTestCase(unittest.HomeserverTestCase):
url = "/_synapse/admin/v2/users/@bob:test"
# Create user
body = json.dumps({"password": "abc123", "admin": False})
channel = self.make_request(
"PUT",
url,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content={"password": "abc123", "admin": False},
)
self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(201, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertFalse(channel.json_body["admin"])
@ -1350,17 +1450,15 @@ class UserRestTestCase(unittest.HomeserverTestCase):
url = "/_synapse/admin/v2/users/@bob:test"
# Create user
body = json.dumps({"password": "abc123", "admin": False})
channel = self.make_request(
"PUT",
url,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content={"password": "abc123", "admin": False},
)
# Admin user is not blocked by mau anymore
self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(201, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertFalse(channel.json_body["admin"])
@ -1382,21 +1480,19 @@ class UserRestTestCase(unittest.HomeserverTestCase):
url = "/_synapse/admin/v2/users/@bob:test"
# Create user
body = json.dumps(
{
"password": "abc123",
"threepids": [{"medium": "email", "address": "bob@bob.bob"}],
}
)
body = {
"password": "abc123",
"threepids": [{"medium": "email", "address": "bob@bob.bob"}],
}
channel = self.make_request(
"PUT",
url,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content=body,
)
self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(201, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
@ -1426,21 +1522,19 @@ class UserRestTestCase(unittest.HomeserverTestCase):
url = "/_synapse/admin/v2/users/@bob:test"
# Create user
body = json.dumps(
{
"password": "abc123",
"threepids": [{"medium": "email", "address": "bob@bob.bob"}],
}
)
body = {
"password": "abc123",
"threepids": [{"medium": "email", "address": "bob@bob.bob"}],
}
channel = self.make_request(
"PUT",
url,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content=body,
)
self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(201, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
@ -1457,16 +1551,15 @@ class UserRestTestCase(unittest.HomeserverTestCase):
"""
# Change password
body = json.dumps({"password": "hahaha"})
channel = self.make_request(
"PUT",
self.url_other_user,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content={"password": "hahaha"},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self._check_fields(channel.json_body)
def test_set_displayname(self):
"""
@ -1474,16 +1567,14 @@ class UserRestTestCase(unittest.HomeserverTestCase):
"""
# Modify user
body = json.dumps({"displayname": "foobar"})
channel = self.make_request(
"PUT",
self.url_other_user,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content={"displayname": "foobar"},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual("foobar", channel.json_body["displayname"])
@ -1494,7 +1585,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual("foobar", channel.json_body["displayname"])
@ -1504,18 +1595,14 @@ class UserRestTestCase(unittest.HomeserverTestCase):
"""
# Delete old and add new threepid to user
body = json.dumps(
{"threepids": [{"medium": "email", "address": "bob3@bob.bob"}]}
)
channel = self.make_request(
"PUT",
self.url_other_user,
access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"),
content={"threepids": [{"medium": "email", "address": "bob3@bob.bob"}]},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
self.assertEqual("bob3@bob.bob", channel.json_body["threepids"][0]["address"])
@ -1527,7 +1614,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
self.assertEqual("bob3@bob.bob", channel.json_body["threepids"][0]["address"])
@ -1552,7 +1639,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertFalse(channel.json_body["deactivated"])
self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"])
@ -1567,7 +1654,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
content={"deactivated": True},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertTrue(channel.json_body["deactivated"])
self.assertIsNone(channel.json_body["password_hash"])
@ -1583,7 +1670,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertTrue(channel.json_body["deactivated"])
self.assertIsNone(channel.json_body["password_hash"])
@ -1610,7 +1697,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
content={"deactivated": True},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertTrue(channel.json_body["deactivated"])
@ -1626,7 +1713,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
content={"displayname": "Foobar"},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertTrue(channel.json_body["deactivated"])
self.assertEqual("Foobar", channel.json_body["displayname"])
@ -1650,7 +1737,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
content={"deactivated": False},
)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(400, channel.code, msg=channel.json_body)
# Reactivate the user.
channel = self.make_request(
@ -1659,7 +1746,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
content={"deactivated": False, "password": "foo"},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertFalse(channel.json_body["deactivated"])
self.assertIsNotNone(channel.json_body["password_hash"])
@ -1681,7 +1768,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
content={"deactivated": False, "password": "foo"},
)
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
# Reactivate the user without a password.
@ -1691,7 +1778,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
content={"deactivated": False},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertFalse(channel.json_body["deactivated"])
self.assertIsNone(channel.json_body["password_hash"])
@ -1713,7 +1800,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
content={"deactivated": False, "password": "foo"},
)
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
# Reactivate the user without a password.
@ -1723,7 +1810,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
content={"deactivated": False},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertFalse(channel.json_body["deactivated"])
self.assertIsNone(channel.json_body["password_hash"])
@ -1742,7 +1829,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
content={"admin": True},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertTrue(channel.json_body["admin"])
@ -1753,7 +1840,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"])
self.assertTrue(channel.json_body["admin"])
@ -1772,7 +1859,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
content={"password": "abc123"},
)
self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(201, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertEqual("bob", channel.json_body["displayname"])
@ -1783,7 +1870,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertEqual("bob", channel.json_body["displayname"])
self.assertEqual(0, channel.json_body["deactivated"])
@ -1796,7 +1883,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
content={"password": "abc123", "deactivated": "false"},
)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(400, channel.code, msg=channel.json_body)
# Check user is not deactivated
channel = self.make_request(
@ -1805,7 +1892,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@bob:test", channel.json_body["name"])
self.assertEqual("bob", channel.json_body["displayname"])
@ -1830,7 +1917,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok,
content={"deactivated": True},
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertTrue(channel.json_body["deactivated"])
self.assertIsNone(channel.json_body["password_hash"])
self._is_erased(user_id, False)
@ -1838,6 +1925,25 @@ class UserRestTestCase(unittest.HomeserverTestCase):
self.assertIsNone(self.get_success(d))
self._is_erased(user_id, True)
def _check_fields(self, content: JsonDict):
"""Checks that the expected user attributes are present in content
Args:
content: Content dictionary to check
"""
self.assertIn("displayname", content)
self.assertIn("threepids", content)
self.assertIn("avatar_url", content)
self.assertIn("admin", content)
self.assertIn("deactivated", content)
self.assertIn("shadow_banned", content)
self.assertIn("password_hash", content)
self.assertIn("creation_ts", content)
self.assertIn("appservice_id", content)
self.assertIn("consent_server_notice_sent", content)
self.assertIn("consent_version", content)
self.assertIn("external_ids", content)
class UserMembershipRestTestCase(unittest.HomeserverTestCase):


@ -15,7 +15,7 @@
from unittest.mock import Mock
from synapse.util.caches.lrucache import LruCache
from synapse.util.caches.lrucache import LruCache, setup_expire_lru_cache_entries
from synapse.util.caches.treecache import TreeCache
from tests import unittest
@ -260,3 +260,47 @@ class LruCacheSizedTestCase(unittest.HomeserverTestCase):
self.assertEquals(cache["key3"], [3])
self.assertEquals(cache["key4"], [4])
self.assertEquals(cache["key5"], [5, 6])
class TimeEvictionTestCase(unittest.HomeserverTestCase):
"""Test that time based eviction works correctly."""
def default_config(self):
config = super().default_config()
config.setdefault("caches", {})["expiry_time"] = "30m"
return config
def test_evict(self):
setup_expire_lru_cache_entries(self.hs)
cache = LruCache(5, clock=self.hs.get_clock())
# Check that we evict entries we haven't accessed for 30 minutes.
cache["key1"] = 1
cache["key2"] = 2
self.reactor.advance(20 * 60)
self.assertEqual(cache.get("key1"), 1)
self.reactor.advance(20 * 60)
# We have only touched `key1` in the last 30m, so we expect that to
# still be in the cache while `key2` should have been evicted.
self.assertEqual(cache.get("key1"), 1)
self.assertEqual(cache.get("key2"), None)
# Check that re-adding an expired key works correctly.
cache["key2"] = 3
self.assertEqual(cache.get("key2"), 3)
self.reactor.advance(20 * 60)
self.assertEqual(cache.get("key2"), 3)
self.reactor.advance(20 * 60)
self.assertEqual(cache.get("key1"), None)
self.assertEqual(cache.get("key2"), 3)