Merge branch 'develop' into madlittlemods/2716-backfill-historical-events-for-federation

Conflicts:
	synapse/rest/client/v1/room.py
pull/10419/head
Eric Eastwood 2021-07-08 22:31:02 -05:00
commit 281588f120
71 changed files with 203 additions and 739 deletions

@@ -7,6 +7,8 @@ on:
 - develop
 # For documentation specific to a release
 - 'release-v*'
+# stable docs
+- master
 workflow_dispatch:
@@ -30,40 +32,35 @@ jobs:
 mdbook build
 cp book/welcome_and_overview.html book/index.html
-# Deploy to the latest documentation directories
-- name: Deploy latest documentation
+# Figure out the target directory.
+#
+# The target directory depends on the name of the branch
+#
+- name: Get the target directory name
+  id: vars
+  run: |
+    # first strip the 'refs/heads/' prefix with some shell foo
+    branch="${GITHUB_REF#refs/heads/}"
+    case $branch in
+        release-*)
+            # strip 'release-' from the name for release branches.
+            branch="${branch#release-}"
+            ;;
+        master)
+            # deploy to "latest" for the master branch.
+            branch="latest"
+            ;;
+    esac
+    # finally, set the 'branch-version' var.
+    echo "::set-output name=branch-version::$branch"
+# Deploy to the target directory.
+- name: Deploy to gh pages
 uses: peaceiris/actions-gh-pages@068dc23d9710f1ba62e86896f84735d869951305 # v3.8.0
 with:
 github_token: ${{ secrets.GITHUB_TOKEN }}
 keep_files: true
 publish_dir: ./book
-destination_dir: ./develop
+destination_dir: ./${{ steps.vars.outputs.branch-version }}
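The branch-to-directory mapping introduced by this step can be sketched as a standalone shell function; the function name and the sample refs below are invented for illustration:

```shell
# Sketch of the workflow's branch-to-directory logic. Only GITHUB_REF-style
# strings are real inputs; everything else here is illustrative.
target_dir_for_ref() {
    branch="${1#refs/heads/}"                      # strip the 'refs/heads/' prefix
    case "$branch" in
        release-*) branch="${branch#release-}" ;;  # 'release-v1.38' -> 'v1.38'
        master)    branch="latest" ;;              # stable docs live under 'latest'
    esac
    printf '%s\n' "$branch"
}

target_dir_for_ref "refs/heads/develop"        # prints: develop
target_dir_for_ref "refs/heads/release-v1.38"  # prints: v1.38
target_dir_for_ref "refs/heads/master"         # prints: latest
```

Each branch thus lands in its own `gh-pages` subdirectory, which is what `destination_dir: ./${{ steps.vars.outputs.branch-version }}` consumes.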
-- name: Get the current Synapse version
-  id: vars
-  # The $GITHUB_REF value for a branch looks like `refs/heads/release-v1.2`. We do some
-  # shell magic to remove the "refs/heads/release-v" bit from this, to end up with "1.2",
-  # our major/minor version number, and set this to a var called `branch-version`.
-  #
-  # We then use some python to get Synapse's full version string, which may look
-  # like "1.2.3rc4". We set this to a var called `synapse-version`. We use this
-  # to determine if this release is still an RC, and if so block deployment.
-  run: |
-    echo ::set-output name=branch-version::${GITHUB_REF#refs/heads/release-v}
-    echo ::set-output name=synapse-version::`python3 -c 'import synapse; print(synapse.__version__)'`
-# Deploy to the version-specific directory
-- name: Deploy release-specific documentation
-  # We only carry out this step if we're running on a release branch,
-  # and the current Synapse version does not have "rc" in the name.
-  #
-  # The result is that only full releases are deployed, but can be
-  # updated if the release branch gets retroactive fixes.
-  if: ${{ startsWith( github.ref, 'refs/heads/release-v' ) && !contains( steps.vars.outputs.synapse-version, 'rc') }}
-  uses: peaceiris/actions-gh-pages@v3
-  with:
-    github_token: ${{ secrets.GITHUB_TOKEN }}
-    keep_files: true
-    publish_dir: ./book
-    # The resulting documentation will end up in a directory named `vX.Y`.
-    destination_dir: ./v${{ steps.vars.outputs.branch-version }}
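The release-gating logic in the removed steps can be sketched in plain shell; the `GITHUB_REF` and version strings below are illustrative stand-ins (the real workflow reads the version via `python3 -c 'import synapse; ...'`):

```shell
# Sketch of the removed release-deploy gate: derive the major/minor version
# from GITHUB_REF and skip deployment while the version is still an RC.
# Both input values here are invented examples.
GITHUB_REF="refs/heads/release-v1.2"
branch_version="${GITHUB_REF#refs/heads/release-v}"   # -> 1.2

synapse_version="1.2.3rc4"   # stand-in for Synapse's full version string
case "$synapse_version" in
    *rc*) deploy=no ;;    # still a release candidate: block deployment
    *)    deploy=yes ;;
esac

echo "v${branch_version}: deploy=${deploy}"   # prints: v1.2: deploy=no
```

Once the `rc` suffix is dropped for the final release, the same check yields `deploy=yes`, so the `vX.Y` directory only ever receives full releases.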

@@ -1,6 +1,53 @@
-Synapse 1.38.0 (**UNRELEASED**)
-===============================
+Synapse 1.38.0rc1 (2021-07-06)
+==============================
 
-This release includes a database schema update which could result in elevated disk usage. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade.md#upgrading-to-v1380) for more information.
+This release includes a database schema update which could result in elevated disk usage. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#upgrading-to-v1380) for more information.
+
+Features
+--------
+
+- Implement refresh tokens as specified by [MSC2918](https://github.com/matrix-org/matrix-doc/pull/2918). ([\#9450](https://github.com/matrix-org/synapse/issues/9450))
+- Add support for evicting cache entries based on last access time. ([\#10205](https://github.com/matrix-org/synapse/issues/10205))
+- Omit empty fields from the `/sync` response. Contributed by @deepbluev7. ([\#10214](https://github.com/matrix-org/synapse/issues/10214))
+- Improve validation on federation `send_{join,leave,knock}` endpoints. ([\#10225](https://github.com/matrix-org/synapse/issues/10225), [\#10243](https://github.com/matrix-org/synapse/issues/10243))
+- Add SSO `external_ids` to the Query User Account admin API. ([\#10261](https://github.com/matrix-org/synapse/issues/10261))
+- Mark events received over federation which fail a spam check as "soft-failed". ([\#10263](https://github.com/matrix-org/synapse/issues/10263))
+- Add metrics for new inbound federation staging area. ([\#10284](https://github.com/matrix-org/synapse/issues/10284))
+- Add script to print information about recently registered users. ([\#10290](https://github.com/matrix-org/synapse/issues/10290))
+
+Bugfixes
+--------
+
+- Fix a long-standing bug which meant that invite rejections and knocks were not sent out over federation in a timely manner. ([\#10223](https://github.com/matrix-org/synapse/issues/10223))
+- Fix a bug introduced in v1.26.0 where only users who have set profile information could be deactivated with erasure enabled. ([\#10252](https://github.com/matrix-org/synapse/issues/10252))
+- Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server. ([\#10264](https://github.com/matrix-org/synapse/issues/10264), [\#10267](https://github.com/matrix-org/synapse/issues/10267), [\#10282](https://github.com/matrix-org/synapse/issues/10282), [\#10286](https://github.com/matrix-org/synapse/issues/10286), [\#10291](https://github.com/matrix-org/synapse/issues/10291), [\#10314](https://github.com/matrix-org/synapse/issues/10314), [\#10326](https://github.com/matrix-org/synapse/issues/10326))
+- Fix the prometheus `synapse_federation_server_pdu_process_time` metric. Broke in v1.37.1. ([\#10279](https://github.com/matrix-org/synapse/issues/10279))
+- Ensure that inbound events from federation that were being processed when Synapse was restarted get promptly processed on start up. ([\#10303](https://github.com/matrix-org/synapse/issues/10303))
+
+Improved Documentation
+----------------------
+
+- Move the upgrade notes to [docs/upgrade.md](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md) and convert them to markdown. ([\#10166](https://github.com/matrix-org/synapse/issues/10166))
+- Choose Welcome & Overview as the default page for synapse documentation website. ([\#10242](https://github.com/matrix-org/synapse/issues/10242))
+- Adjust the URL in the README.rst file to point to irc.libera.chat. ([\#10258](https://github.com/matrix-org/synapse/issues/10258))
+- Fix homeserver config option name in presence router documentation. ([\#10288](https://github.com/matrix-org/synapse/issues/10288))
+- Fix link pointing at the wrong section in the modules documentation page. ([\#10302](https://github.com/matrix-org/synapse/issues/10302))
+
+Internal Changes
+----------------
+
+- Drop `Origin` and `Accept` from the value of the `Access-Control-Allow-Headers` response header. ([\#10114](https://github.com/matrix-org/synapse/issues/10114))
+- Add type hints to the federation servlets. ([\#10213](https://github.com/matrix-org/synapse/issues/10213))
+- Improve the reliability of auto-joining remote rooms. ([\#10237](https://github.com/matrix-org/synapse/issues/10237))
+- Update the release script to use the semver terminology and determine the release branch based on the next version. ([\#10239](https://github.com/matrix-org/synapse/issues/10239))
+- Fix type hints for computing auth events. ([\#10253](https://github.com/matrix-org/synapse/issues/10253))
+- Improve the performance of the spaces summary endpoint by only recursing into spaces (and not rooms in general). ([\#10256](https://github.com/matrix-org/synapse/issues/10256))
+- Move event authentication methods from `Auth` to `EventAuthHandler`. ([\#10268](https://github.com/matrix-org/synapse/issues/10268))
+- Re-enable a SyTest after it has been fixed. ([\#10292](https://github.com/matrix-org/synapse/issues/10292))
+
 Synapse 1.37.1 (2021-06-30)
 ===========================

@@ -1 +0,0 @@
-Drop Origin and Accept from the value of the Access-Control-Allow-Headers response header.

@@ -1 +0,0 @@
-Move the upgrade notes to [docs/upgrade.md](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md) and convert them to markdown.

@@ -1 +0,0 @@
-Add support for evicting cache entries based on last access time.

@@ -1 +0,0 @@
-Add type hints to the federation servlets.

@@ -1 +0,0 @@
-Omit empty fields from the `/sync` response. Contributed by @deepbluev7.

@@ -1 +0,0 @@
-Fix a long-standing bug which meant that invite rejections and knocks were not sent out over federation in a timely manner.

@@ -1 +0,0 @@
-Improve validation on federation `send_{join,leave,knock}` endpoints.

@@ -1 +0,0 @@
-Improve the reliability of auto-joining remote rooms.

@@ -1 +0,0 @@
-Update the release script to use the semver terminology and determine the release branch based on the next version.

@@ -1 +0,0 @@
-Choose Welcome & Overview as the default page for synapse documentation website.

@@ -1 +0,0 @@
-Improve validation on federation `send_{join,leave,knock}` endpoints.

changelog.d/10250.bugfix (new file)

@@ -0,0 +1 @@
+Add base starting insertion event when no chunk ID is specified in the historical batch send API.

@@ -1 +0,0 @@
-Fix a bug introduced in v1.26.0 where only users who have set profile information could be deactivated with erasure enabled.

@@ -1 +0,0 @@
-Fix type hints for computing auth events.

@@ -1 +0,0 @@
-Improve the performance of the spaces summary endpoint by only recursing into spaces (and not rooms in general).

@@ -1 +0,0 @@
-Adjust the URL in the README.rst file to point to irc.libera.chat.

@@ -1 +0,0 @@
-Add SSO `external_ids` to the Query User Account admin API.

@@ -1 +0,0 @@
-Mark events received over federation which fail a spam check as "soft-failed".

@@ -1 +0,0 @@
-Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

@@ -1 +0,0 @@
-Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

@@ -1 +0,0 @@
-Move event authentication methods from `Auth` to `EventAuthHandler`.

@@ -1 +0,0 @@
-Fix the prometheus `synapse_federation_server_pdu_process_time` metric. Broke in v1.37.1.

@@ -1 +0,0 @@
-Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

@@ -1 +0,0 @@
-Add metrics for new inbound federation staging area.

@@ -1 +0,0 @@
-Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

changelog.d/10287.doc (new file)

@@ -0,0 +1 @@
+Update links to documentation in sample config. Contributed by @dklimpel.

@@ -1 +0,0 @@
-Fix homeserver config option name in presence router documentation.

@@ -1 +0,0 @@
-Add script to print information about recently registered users.

@@ -1 +0,0 @@
-Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

@@ -1 +0,0 @@
-Reenable a SyTest after it has been fixed.

@@ -1 +0,0 @@
-Fix link pointing at the wrong section in the modules documentation page.

@@ -1 +0,0 @@
-Ensure that inbound events from federation that were being processed when Synapse was restarted get promptly processed on start up.

changelog.d/10313.doc (new file)

@@ -0,0 +1 @@
+Simplify structure of room admin API.

@@ -1 +0,0 @@
-Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

changelog.d/10316.misc (new file)

@@ -0,0 +1 @@
+Rebuild event context and auth when processing specific results from `ThirdPartyEventRules` modules.

changelog.d/10322.doc (new file)

@@ -0,0 +1 @@
+Fix a broken link in the admin api docs.

changelog.d/10324.misc (new file)

@@ -0,0 +1 @@
+Minor change to the code that populates `user_daily_visits`.

changelog.d/10337.doc (new file)

@@ -0,0 +1 @@
+Fix formatting in the logcontext documentation.

@@ -1 +0,0 @@
-Implement refresh tokens as specified by [MSC2918](https://github.com/matrix-org/matrix-doc/pull/2918).

changelog.d/9721.removal (new file)

@@ -0,0 +1 @@
+Remove functionality associated with the unused `room_stats_historical` and `user_stats_historical` tables. Contributed by @xmunoz.

@@ -47,7 +47,7 @@ The API returns a JSON body like the following:
 ## List all media uploaded by a user
 
 Listing all media that has been uploaded by a local user can be achieved through
-the use of the [List media of a user](user_admin_api.rst#list-media-of-a-user)
+the use of the [List media of a user](user_admin_api.md#list-media-of-a-user)
 Admin API.
 
 # Quarantine media

@@ -1,13 +1,9 @@
 # Contents
 - [List Room API](#list-room-api)
-  * [Parameters](#parameters)
-  * [Usage](#usage)
 - [Room Details API](#room-details-api)
 - [Room Members API](#room-members-api)
 - [Room State API](#room-state-api)
 - [Delete Room API](#delete-room-api)
-  * [Parameters](#parameters-1)
-  * [Response](#response)
   * [Undoing room shutdowns](#undoing-room-shutdowns)
 - [Make Room Admin API](#make-room-admin-api)
 - [Forward Extremities Admin API](#forward-extremities-admin-api)
@@ -19,7 +15,7 @@ The List Room admin API allows server admins to get a list of rooms on their
 server. There are various parameters available that allow for filtering and
 sorting the returned list. This API supports pagination.
 
-## Parameters
+**Parameters**
 
 The following query parameters are available:
@@ -46,6 +42,8 @@ The following query parameters are available:
 * `search_term` - Filter rooms by their room name. Search term can be contained in any
   part of the room name. Defaults to no filtering.
 
+**Response**
+
 The following fields are possible in the JSON response body:
 
 * `rooms` - An array of objects, each containing information about a room.
@@ -79,17 +77,15 @@ The following fields are possible in the JSON response body:
 Use `prev_batch` for the `from` value in the next request to
 get the "previous page" of results.
 
-## Usage
+The API is:
 
 A standard request with no filtering:
 
 ```
 GET /_synapse/admin/v1/rooms
-{}
 ```
 
-Response:
+A response body like the following is returned:
 
 ```jsonc
 {
@@ -137,11 +133,9 @@ Filtering by room name:
 ```
 GET /_synapse/admin/v1/rooms?search_term=TWIM
-{}
 ```
 
-Response:
+A response body like the following is returned:
 
 ```json
 {
@@ -172,11 +166,9 @@ Paginating through a list of rooms:
 ```
 GET /_synapse/admin/v1/rooms?order_by=size
-{}
 ```
 
-Response:
+A response body like the following is returned:
 
 ```jsonc
 {
@@ -228,11 +220,9 @@ parameter to the value of `next_token`.
 ```
 GET /_synapse/admin/v1/rooms?order_by=size&from=100
-{}
 ```
 
-Response:
+A response body like the following is returned:
 
 ```jsonc
 {
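The `next_token`/`from` pagination loop this hunk documents can be sketched in shell, with a stub function standing in for the real `GET /_synapse/admin/v1/rooms` request; the room IDs and token values below are invented:

```shell
# Sketch of walking List Room API pagination. `get_rooms_page` fakes the
# server: it prints "<room ids>|<next_token or empty>" for a given `from`.
get_rooms_page() {
    case "$1" in
        0)   echo "!a:x !b:x|100" ;;   # first page, next_token=100
        100) echo "!c:x|" ;;           # last page, no next_token
    esac
}

from=0
all_rooms=""
while :; do
    page=$(get_rooms_page "$from")
    all_rooms="$all_rooms ${page%%|*}"   # accumulate this page's rooms
    next="${page##*|}"                   # extract next_token, if any
    [ -n "$next" ] || break              # stop when the server omits it
    from="$next"                         # otherwise request the next page
done
echo "$all_rooms"                        # lists all three room IDs
```

A real client would substitute an HTTP GET with `?from=$from` for the stub and read `next_batch` from the JSON body.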
@@ -304,17 +294,13 @@ The following fields are possible in the JSON response body:
 * `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
 * `state_events` - Total number of state_events of a room. Complexity of the room.
 
-## Usage
-
-A standard request:
+The API is:
 
 ```
 GET /_synapse/admin/v1/rooms/<room_id>
-{}
 ```
 
-Response:
+A response body like the following is returned:
 
 ```json
 {
@@ -347,17 +333,13 @@ The response includes the following fields:
 * `members` - A list of all the members that are present in the room, represented by their ids.
 * `total` - Total number of members in the room.
 
-## Usage
-
-A standard request:
+The API is:
 
 ```
 GET /_synapse/admin/v1/rooms/<room_id>/members
-{}
 ```
 
-Response:
+A response body like the following is returned:
 
 ```json
 {
@@ -378,17 +360,13 @@ The response includes the following fields:
 * `state` - The current state of the room at the time of request.
 
-## Usage
-
-A standard request:
+The API is:
 
 ```
 GET /_synapse/admin/v1/rooms/<room_id>/state
-{}
 ```
 
-Response:
+A response body like the following is returned:
 
 ```json
 {
@@ -432,6 +410,7 @@ DELETE /_synapse/admin/v1/rooms/<room_id>
 ```
 
 with a body of:
+
 ```json
 {
 "new_room_user_id": "@someuser:example.com",
@@ -461,7 +440,7 @@ A response body like the following is returned:
 }
 ```
 
-## Parameters
+**Parameters**
 
 The following parameters should be set in the URL:
@@ -491,7 +470,7 @@ The following JSON body parameters are available:
 The JSON body must not be empty. The body must be at least `{}`.
 
-## Response
+**Response**
 
 The following fields are returned in the JSON response body:
@@ -548,10 +527,10 @@ By default the server admin (the caller) is granted power, but another user can
 optionally be specified, e.g.:
 
 ```
 POST /_synapse/admin/v1/rooms/<room_id_or_alias>/make_room_admin
 {
     "user_id": "@foo:example.com"
 }
 ```
 # Forward Extremities Admin API
@@ -565,7 +544,7 @@ extremities accumulate in a room, performance can become degraded. For details,
 To check the status of forward extremities for a room:
 
 ```
 GET /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities
 ```
 
 A response as follows will be returned:
@@ -594,7 +573,7 @@ If a room has lots of forward extremities, the extra can be
 deleted as follows:
 
 ```
 DELETE /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities
 ```
 
 A response as follows will be returned, indicating the amount of forward extremities

@@ -17,7 +17,7 @@ class).
 Deferreds make the whole thing complicated, so this document describes
 how it all works, and how to write code which follows the rules.
 
-##Logcontexts without Deferreds
+## Logcontexts without Deferreds
 
 In the absence of any Deferred voodoo, things are simple enough. As with
 any code of this nature, the rule is that our function should leave

@@ -1,9 +1,9 @@
 Room and User Statistics
 ========================
 
-Synapse maintains room and user statistics (as well as a cache of room state),
-in various tables. These can be used for administrative purposes but are also
-used when generating the public room directory.
+Synapse maintains room and user statistics in various tables. These can be used
+for administrative purposes but are also used when generating the public room
+directory.
 
 # Synapse Developer Documentation
@@ -15,48 +15,8 @@ used when generating the public room directory.
 * **subject**: Something we are tracking stats about, currently a room or user.
 * **current row**: An entry for a subject in the appropriate current statistics
   table. Each subject can have only one.
-* **historical row**: An entry for a subject in the appropriate historical
-  statistics table. Each subject can have any number of these.
 
 ### Overview
 
-Stats are maintained as time series. There are two kinds of column:
-
-* absolute columns where the value is correct for the time given by `end_ts`
-  in the stats row. (Imagine a line graph for these values)
-  * They can also be thought of as 'gauges' in Prometheus, if you are familiar.
-* per-slice columns where the value corresponds to how many of the occurrences
-  occurred within the time slice given by `(end_ts - bucket_size)…end_ts`
-  or `start_ts…end_ts`. (Imagine a histogram for these values)
-
-Stats are maintained in two tables (for each type): current and historical.
-
-Current stats correspond to the present values. Each subject can only have one
-entry.
-
-Historical stats correspond to values in the past. Subjects may have multiple
-entries.
-
-## Concepts around the management of stats
-
-### Current rows
-
-Current rows contain the most up-to-date statistics for a room.
-They only contain absolute columns.
-
-### Historical rows
-
-Historical rows can always be considered to be valid for the time slice and
-end time specified.
-
-* historical rows will not exist for every time slice; they will be omitted
-  if there were no changes. In this case, the following assumptions can be
-  made to interpolate/recreate missing rows:
-    - absolute fields have the same values as in the preceding row
-    - per-slice fields are zero (`0`)
-* historical rows will not be retained forever; rows older than a configurable
-  time will be purged.
-
-#### Purge
-
-The purging of historical rows is not yet implemented.
+Stats correspond to the present values. Current rows contain the most up-to-date
+statistics for a room. Each subject can only have one entry.

@@ -36,7 +36,7 @@
 # Server admins can expand Synapse's functionality with external modules.
 #
-# See https://matrix-org.github.io/synapse/develop/modules.html for more
+# See https://matrix-org.github.io/synapse/latest/modules.html for more
 # documentation on how to configure or create custom modules for Synapse.
 #
 modules:
@@ -58,7 +58,7 @@ modules:
 # In most cases you should avoid using a matrix specific subdomain such as
 # matrix.example.com or synapse.example.com as the server_name for the same
 # reasons you wouldn't use user@email.example.com as your email address.
-# See https://github.com/matrix-org/synapse/blob/master/docs/delegate.md
+# See https://matrix-org.github.io/synapse/latest/delegate.html
 # for information on how to host Synapse on a subdomain while preserving
 # a clean server_name.
 #
@@ -253,9 +253,9 @@ presence:
 # 'all local interfaces'.
 #
 # type: the type of listener. Normally 'http', but other valid options are:
-#   'manhole' (see docs/manhole.md),
-#   'metrics' (see docs/metrics-howto.md),
-#   'replication' (see docs/workers.md).
+#   'manhole' (see https://matrix-org.github.io/synapse/latest/manhole.html),
+#   'metrics' (see https://matrix-org.github.io/synapse/latest/metrics-howto.html),
+#   'replication' (see https://matrix-org.github.io/synapse/latest/workers.html).
 #
 # tls: set to true to enable TLS for this listener. Will use the TLS
 # key/cert specified in tls_private_key_path / tls_certificate_path.
@@ -280,8 +280,8 @@ presence:
 # client: the client-server API (/_matrix/client), and the synapse admin
 # API (/_synapse/admin). Also implies 'media' and 'static'.
 #
-# consent: user consent forms (/_matrix/consent). See
-#   docs/consent_tracking.md.
+# consent: user consent forms (/_matrix/consent).
+#   See https://matrix-org.github.io/synapse/latest/consent_tracking.html.
 #
 # federation: the server-server API (/_matrix/federation). Also implies
 # 'media', 'keys', 'openid'
@@ -290,12 +290,13 @@ presence:
 #
 # media: the media API (/_matrix/media).
 #
-# metrics: the metrics interface. See docs/metrics-howto.md.
+# metrics: the metrics interface.
+#   See https://matrix-org.github.io/synapse/latest/metrics-howto.html.
 #
 # openid: OpenID authentication.
 #
-# replication: the HTTP replication API (/_synapse/replication). See
-#   docs/workers.md.
+# replication: the HTTP replication API (/_synapse/replication).
+#   See https://matrix-org.github.io/synapse/latest/workers.html.
 #
 # static: static resources under synapse/static (/_matrix/static). (Mostly
 # useful for 'fallback authentication'.)
@@ -319,7 +320,7 @@ listeners:
 # that unwraps TLS.
 #
 # If you plan to use a reverse proxy, please see
-# https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
+# https://matrix-org.github.io/synapse/latest/reverse_proxy.html.
 #
 - port: 8008
   tls: false
@@ -747,7 +748,8 @@ caches:
 # cp_min: 5
 # cp_max: 10
 #
-# For more information on using Synapse with Postgres, see `docs/postgres.md`.
+# For more information on using Synapse with Postgres,
+# see https://matrix-org.github.io/synapse/latest/postgres.html.
 #
 database:
   name: sqlite3
@@ -900,7 +902,7 @@ media_store_path: "DATADIR/media_store"
 #
 # If you are using a reverse proxy you may also need to set this value in
 # your reverse proxy's config. Notably Nginx has a small max body size by default.
-# See https://matrix-org.github.io/synapse/develop/reverse_proxy.html.
+# See https://matrix-org.github.io/synapse/latest/reverse_proxy.html.
 #
 #max_upload_size: 50M
@@ -1840,7 +1842,7 @@ saml2_config:
 #
 # module: The class name of a custom mapping module. Default is
 #   'synapse.handlers.oidc.JinjaOidcMappingProvider'.
-#   See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
+#   See https://matrix-org.github.io/synapse/latest/sso_mapping_providers.html#openid-mapping-providers
 #   for information on implementing a custom mapping provider.
 #
 # config: Configuration for the mapping provider module. This section will
@@ -1891,7 +1893,7 @@ saml2_config:
 #   - attribute: groups
 #     value: "admin"
 #
-# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
+# See https://matrix-org.github.io/synapse/latest/openid.html
 # for information on how to configure these options.
 #
 # For backwards compatibility, it is also possible to configure a single OIDC
@@ -2169,7 +2171,7 @@ sso:
 # Note that this is a non-standard login type and client support is
 # expected to be non-existent.
 #
-# See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
+# See https://matrix-org.github.io/synapse/latest/jwt.html.
 #
 #jwt_config:
 # Uncomment the following to enable authorization using JSON web
@@ -2469,7 +2471,7 @@ email:
 # ex. LDAP, external tokens, etc.
 #
 # For more information and known implementations, please see
-# https://github.com/matrix-org/synapse/blob/master/docs/password_auth_providers.md
+# https://matrix-org.github.io/synapse/latest/password_auth_providers.html
 #
 # Note: instances wishing to use SAML or CAS authentication should
 # instead use the `saml2_config` or `cas_config` options,
@@ -2571,7 +2573,7 @@ user_directory:
 #
 # If you set it true, you'll have to rebuild the user_directory search
 # indexes, see:
-# https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
+# https://matrix-org.github.io/synapse/latest/user_directory.html
 #
 # Uncomment to return search results containing all known users, even if that
 # user does not share a room with the requester.
@@ -2591,7 +2593,7 @@
 # User Consent configuration
 #
 # for detailed instructions, see
-# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
+# https://matrix-org.github.io/synapse/latest/consent_tracking.html
 #
 # Parts of this section are required if enabling the 'consent' resource under
 # 'listeners', in particular 'template_dir' and 'version'.
@ -2641,7 +2643,7 @@ user_directory:
# Settings for local room and user statistics collection. See # Settings for local room and user statistics collection. See
# docs/room_and_user_statistics.md. # https://matrix-org.github.io/synapse/latest/room_and_user_statistics.html.
# #
stats: stats:
# Uncomment the following to disable room and user statistics. Note that doing # Uncomment the following to disable room and user statistics. Note that doing
@ -2650,11 +2652,6 @@ stats:
# #
#enabled: false #enabled: false
# The size of each timeslice in the room_stats_historical and
# user_stats_historical tables, as a time period. Defaults to "1d".
#
#bucket_size: 1h
# Server Notices room configuration # Server Notices room configuration
# #
@ -2768,7 +2765,7 @@ opentracing:
#enabled: true #enabled: true
# The list of homeservers we wish to send and receive span contexts and span baggage. # The list of homeservers we wish to send and receive span contexts and span baggage.
# See docs/opentracing.rst. # See https://matrix-org.github.io/synapse/latest/opentracing.html.
# #
# This is a list of regexes which are matched against the server_name of the # This is a list of regexes which are matched against the server_name of the
# homeserver. # homeserver.

View File

@@ -7,7 +7,7 @@
 # be ingested by ELK stacks. See [2] for details.
 #
 # [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
-# [2]: https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md
+# [2]: https://matrix-org.github.io/synapse/latest/structured_logging.html

 version: 1

View File

@@ -47,7 +47,7 @@ try:
 except ImportError:
     pass

-__version__ = "1.37.1"
+__version__ = "1.38.0rc1"

 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when

View File

@@ -22,7 +22,7 @@ DEFAULT_CONFIG = """\
 # User Consent configuration
 #
 # for detailed instructions, see
-# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
+# https://matrix-org.github.io/synapse/latest/consent_tracking.html
 #
 # Parts of this section are required if enabling the 'consent' resource under
 # 'listeners', in particular 'template_dir' and 'version'.

View File

@@ -62,7 +62,8 @@ DEFAULT_CONFIG = """\
 #  cp_min: 5
 #  cp_max: 10
 #
-# For more information on using Synapse with Postgres, see `docs/postgres.md`.
+# For more information on using Synapse with Postgres,
+# see https://matrix-org.github.io/synapse/latest/postgres.html.
 #
 database:
   name: sqlite3

View File

@@ -64,7 +64,7 @@ class JWTConfig(Config):
 # Note that this is a non-standard login type and client support is
 # expected to be non-existent.
 #
-# See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
+# See https://matrix-org.github.io/synapse/latest/jwt.html.
 #
 #jwt_config:
 # Uncomment the following to enable authorization using JSON web

View File

@@ -49,7 +49,7 @@ DEFAULT_LOG_CONFIG = Template(
 # be ingested by ELK stacks. See [2] for details.
 #
 # [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
-# [2]: https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md
+# [2]: https://matrix-org.github.io/synapse/latest/structured_logging.html

 version: 1

View File

@@ -37,7 +37,7 @@ class ModulesConfig(Config):
 # Server admins can expand Synapse's functionality with external modules.
 #
-# See https://matrix-org.github.io/synapse/develop/modules.html for more
+# See https://matrix-org.github.io/synapse/latest/modules.html for more
 # documentation on how to configure or create custom modules for Synapse.
 #
 modules:

View File

@@ -166,7 +166,7 @@ class OIDCConfig(Config):
 #
 # module: The class name of a custom mapping module. Default is
 #     {mapping_provider!r}.
-#     See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
+#     See https://matrix-org.github.io/synapse/latest/sso_mapping_providers.html#openid-mapping-providers
 #     for information on implementing a custom mapping provider.
 #
 # config: Configuration for the mapping provider module. This section will
@@ -217,7 +217,7 @@ class OIDCConfig(Config):
 #   - attribute: groups
 #     value: "admin"
 #
-# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
+# See https://matrix-org.github.io/synapse/latest/openid.html
 # for information on how to configure these options.
 #
 # For backwards compatibility, it is also possible to configure a single OIDC

View File

@@ -57,7 +57,7 @@ class PasswordAuthProviderConfig(Config):
 # ex. LDAP, external tokens, etc.
 #
 # For more information and known implementations, please see
-# https://github.com/matrix-org/synapse/blob/master/docs/password_auth_providers.md
+# https://matrix-org.github.io/synapse/latest/password_auth_providers.html
 #
 # Note: instances wishing to use SAML or CAS authentication should
 # instead use the `saml2_config` or `cas_config` options,

View File

@@ -250,7 +250,7 @@ class ContentRepositoryConfig(Config):
 #
 # If you are using a reverse proxy you may also need to set this value in
 # your reverse proxy's config. Notably Nginx has a small max body size by default.
-# See https://matrix-org.github.io/synapse/develop/reverse_proxy.html.
+# See https://matrix-org.github.io/synapse/latest/reverse_proxy.html.
 #
 #max_upload_size: 50M

View File

@@ -153,7 +153,7 @@ ROOM_COMPLEXITY_TOO_GREAT = (
 METRICS_PORT_WARNING = """\
 The metrics_port configuration option is deprecated in Synapse 0.31 in favour of
 a listener. Please see
-https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md
+https://matrix-org.github.io/synapse/latest/metrics-howto.html
 on how to configure the new listener.
 --------------------------------------------------------------------------------"""
@@ -811,7 +811,7 @@ class ServerConfig(Config):
 # In most cases you should avoid using a matrix specific subdomain such as
 # matrix.example.com or synapse.example.com as the server_name for the same
 # reasons you wouldn't use user@email.example.com as your email address.
-# See https://github.com/matrix-org/synapse/blob/master/docs/delegate.md
+# See https://matrix-org.github.io/synapse/latest/delegate.html
 # for information on how to host Synapse on a subdomain while preserving
 # a clean server_name.
 #
@@ -988,9 +988,9 @@ class ServerConfig(Config):
 #   'all local interfaces'.
 #
 # type: the type of listener. Normally 'http', but other valid options are:
-#   'manhole' (see docs/manhole.md),
-#   'metrics' (see docs/metrics-howto.md),
-#   'replication' (see docs/workers.md).
+#   'manhole' (see https://matrix-org.github.io/synapse/latest/manhole.html),
+#   'metrics' (see https://matrix-org.github.io/synapse/latest/metrics-howto.html),
+#   'replication' (see https://matrix-org.github.io/synapse/latest/workers.html).
 #
 # tls: set to true to enable TLS for this listener. Will use the TLS
 #   key/cert specified in tls_private_key_path / tls_certificate_path.
@@ -1015,8 +1015,8 @@ class ServerConfig(Config):
 #   client: the client-server API (/_matrix/client), and the synapse admin
 #       API (/_synapse/admin). Also implies 'media' and 'static'.
 #
-#   consent: user consent forms (/_matrix/consent). See
-#       docs/consent_tracking.md.
+#   consent: user consent forms (/_matrix/consent).
+#       See https://matrix-org.github.io/synapse/latest/consent_tracking.html.
 #
 #   federation: the server-server API (/_matrix/federation). Also implies
 #       'media', 'keys', 'openid'
@@ -1025,12 +1025,13 @@ class ServerConfig(Config):
 #
 #   media: the media API (/_matrix/media).
 #
-#   metrics: the metrics interface. See docs/metrics-howto.md.
+#   metrics: the metrics interface.
+#       See https://matrix-org.github.io/synapse/latest/metrics-howto.html.
 #
 #   openid: OpenID authentication.
 #
-#   replication: the HTTP replication API (/_synapse/replication). See
-#       docs/workers.md.
+#   replication: the HTTP replication API (/_synapse/replication).
+#       See https://matrix-org.github.io/synapse/latest/workers.html.
 #
 #   static: static resources under synapse/static (/_matrix/static). (Mostly
 #       useful for 'fallback authentication'.)
@@ -1050,7 +1051,7 @@ class ServerConfig(Config):
 #   that unwraps TLS.
 #
 # If you plan to use a reverse proxy, please see
-# https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
+# https://matrix-org.github.io/synapse/latest/reverse_proxy.html.
 #
 %(unsecure_http_bindings)s

View File

@ -26,7 +26,7 @@ LEGACY_SPAM_CHECKER_WARNING = """
This server is using a spam checker module that is implementing the deprecated spam This server is using a spam checker module that is implementing the deprecated spam
checker interface. Please check with the module's maintainer to see if a new version checker interface. Please check with the module's maintainer to see if a new version
supporting Synapse's generic modules system is available. supporting Synapse's generic modules system is available.
For more information, please see https://matrix-org.github.io/synapse/develop/modules.html For more information, please see https://matrix-org.github.io/synapse/latest/modules.html
---------------------------------------------------------------------------------------""" ---------------------------------------------------------------------------------------"""

View File

@@ -38,20 +38,16 @@ class StatsConfig(Config):
     def read_config(self, config, **kwargs):
         self.stats_enabled = True
-        self.stats_bucket_size = 86400 * 1000
         stats_config = config.get("stats", None)
         if stats_config:
             self.stats_enabled = stats_config.get("enabled", self.stats_enabled)
-            self.stats_bucket_size = self.parse_duration(
-                stats_config.get("bucket_size", "1d")
-            )
         if not self.stats_enabled:
             logger.warning(ROOM_STATS_DISABLED_WARN)

     def generate_config_section(self, config_dir_path, server_name, **kwargs):
         return """
 # Settings for local room and user statistics collection. See
-# docs/room_and_user_statistics.md.
+# https://matrix-org.github.io/synapse/latest/room_and_user_statistics.html.
 #
 stats:
 # Uncomment the following to disable room and user statistics. Note that doing
@@ -59,9 +55,4 @@ class StatsConfig(Config):
 # correctly.
 #
 #enabled: false
-
-# The size of each timeslice in the room_stats_historical and
-# user_stats_historical tables, as a time period. Defaults to "1d".
-#
-#bucket_size: 1h
 """
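The removed `bucket_size` option was fed through the config class's `parse_duration` helper, which accepts `"1d"`/`"1h"`-style strings. A simplified, self-contained sketch of that duration syntax (the helper below is hypothetical and only mirrors the general idea; it is not Synapse's implementation):

```python
import re

# Multipliers from unit suffix to milliseconds.
UNITS_MS = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}

def parse_duration(value) -> int:
    """Parse a '1d' / '1h' style duration into milliseconds.

    A bare integer is taken to already be in milliseconds.
    """
    if isinstance(value, int):
        return value
    match = re.fullmatch(r"(\d+)\s*(ms|s|m|h|d)?", value.strip())
    if not match:
        raise ValueError(f"invalid duration: {value!r}")
    return int(match.group(1)) * UNITS_MS[match.group(2) or "ms"]

print(parse_duration("1d"))  # 86400000
print(parse_duration("1h"))  # 3600000
```

With this in place, the old default of `"1d"` and the commented example `"1h"` both resolve to a bucket size in milliseconds.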

View File

@@ -81,7 +81,7 @@ class TracerConfig(Config):
 #enabled: true

 # The list of homeservers we wish to send and receive span contexts and span baggage.
-# See docs/opentracing.rst.
+# See https://matrix-org.github.io/synapse/latest/opentracing.html.
 #
 # This is a list of regexes which are matched against the server_name of the
 # homeserver.

View File

@@ -50,7 +50,7 @@ class UserDirectoryConfig(Config):
 #
 # If you set it true, you'll have to rebuild the user_directory search
 # indexes, see:
-# https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
+# https://matrix-org.github.io/synapse/latest/user_directory.html
 #
 # Uncomment to return search results containing all known users, even if that
 # user does not share a room with the requester.

View File

@@ -1602,11 +1602,13 @@ class EventCreationHandler:
         for k, v in original_event.internal_metadata.get_dict().items():
             setattr(builder.internal_metadata, k, v)

-        # the event type hasn't changed, so there's no point in re-calculating the
-        # auth events.
+        # modules can send new state events, so we re-calculate the auth events just in
+        # case.
+        prev_event_ids = await self.store.get_prev_events_for_room(builder.room_id)
+
         event = await builder.build(
-            prev_event_ids=original_event.prev_event_ids(),
-            auth_event_ids=original_event.auth_event_ids(),
+            prev_event_ids=prev_event_ids,
+            auth_event_ids=None,
         )

         # we rebuild the event context, to be on the safe side. If nothing else,

View File

@@ -45,7 +45,6 @@ class StatsHandler:
         self.clock = hs.get_clock()
         self.notifier = hs.get_notifier()
         self.is_mine_id = hs.is_mine_id
-        self.stats_bucket_size = hs.config.stats_bucket_size

         self.stats_enabled = hs.config.stats_enabled
@@ -106,20 +105,6 @@ class StatsHandler:
         room_deltas = {}
         user_deltas = {}

-        # Then count deltas for total_events and total_event_bytes.
-        (
-            room_count,
-            user_count,
-        ) = await self.store.get_changes_room_total_events_and_bytes(
-            self.pos, max_pos
-        )
-
-        for room_id, fields in room_count.items():
-            room_deltas.setdefault(room_id, Counter()).update(fields)
-
-        for user_id, fields in user_count.items():
-            user_deltas.setdefault(user_id, Counter()).update(fields)
-
         logger.debug("room_deltas: %s", room_deltas)
         logger.debug("user_deltas: %s", user_deltas)
@@ -181,12 +166,10 @@ class StatsHandler:
         event_content = {}  # type: JsonDict

-        sender = None
         if event_id is not None:
             event = await self.store.get_event(event_id, allow_none=True)
             if event:
                 event_content = event.content or {}
-                sender = event.sender

         # All the values in this dict are deltas (RELATIVE changes)
         room_stats_delta = room_to_stats_deltas.setdefault(room_id, Counter())
@@ -244,12 +227,6 @@ class StatsHandler:
                     room_stats_delta["joined_members"] += 1
                 elif membership == Membership.INVITE:
                     room_stats_delta["invited_members"] += 1
-
-                    if sender and self.is_mine_id(sender):
-                        user_to_stats_deltas.setdefault(sender, Counter())[
-                            "invites_sent"
-                        ] += 1
                 elif membership == Membership.LEAVE:
                     room_stats_delta["left_members"] += 1
                 elif membership == Membership.BAN:
@@ -279,10 +256,6 @@ class StatsHandler:
                     room_state["is_federatable"] = (
                         event_content.get("m.federate", True) is True
                     )
-                    if sender and self.is_mine_id(sender):
-                        user_to_stats_deltas.setdefault(sender, Counter())[
-                            "rooms_created"
-                        ] += 1
                 elif typ == EventTypes.JoinRules:
                     room_state["join_rules"] = event_content.get("join_rule")
                 elif typ == EventTypes.RoomHistoryVisibility:
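The stats handler accumulates per-room deltas in `collections.Counter` objects keyed by room ID, as the surviving `room_stats_delta` lines show. A standalone sketch of that accumulation pattern:

```python
from collections import Counter

# Map of room_id -> Counter of relative changes ("deltas") for that room.
room_deltas: dict = {}

def add_delta(deltas: dict, room_id: str, field: str, amount: int = 1) -> None:
    # setdefault creates an empty Counter the first time a room is seen;
    # missing fields implicitly start at 0, so += just works.
    deltas.setdefault(room_id, Counter())[field] += amount

add_delta(room_deltas, "!room:example.org", "joined_members")
add_delta(room_deltas, "!room:example.org", "joined_members")
add_delta(room_deltas, "!room:example.org", "invited_members")

print(room_deltas["!room:example.org"]["joined_members"])  # 2
```

Because every value is a delta rather than an absolute count, batches of deltas can later be merged with `Counter.update`, which is what the (removed) `room_count` merging loop relied on.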

View File

@@ -1146,6 +1146,16 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
             logger.info("completing stream_ordering migration: %s", sql)
             txn.execute(sql)

+        # ANALYZE the new column to build stats on it, to encourage PostgreSQL to use the
+        # indexes on it.
+        # We need to pass execute a dummy function to handle the txn's result otherwise
+        # it tries to call fetchall() on it and fails because there's no result to fetch.
+        await self.db_pool.execute(
+            "background_analyze_new_stream_ordering_column",
+            lambda txn: None,
+            "ANALYZE events(stream_ordering2)",
+        )
+
         await self.db_pool.runInteraction(
             "_background_replace_stream_ordering_column", process
         )
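The new hunk runs `ANALYZE` after backfilling the `stream_ordering2` column, so the query planner has fresh statistics and will actually prefer the new index. A standalone sketch of the effect, using SQLite (which records planner statistics in `sqlite_stat1`; the migration itself targets PostgreSQL, where the statistics live in `pg_statistic`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (event_id TEXT PRIMARY KEY, stream_ordering2 BIGINT)"
)
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(f"$event{i}", i) for i in range(100)],
)
conn.execute("CREATE INDEX events_stream_ordering2 ON events(stream_ordering2)")

# ANALYZE refreshes the statistics the query planner consults when deciding
# whether scanning an index beats scanning the whole table.
conn.execute("ANALYZE")

rows = conn.execute("SELECT tbl, idx FROM sqlite_stat1").fetchall()
print(rows)
```

An `ANALYZE` statement returns no result rows, which is why the migration has to pass a no-op result handler (`lambda txn: None`) to the database pool's `execute`.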

View File

@@ -320,7 +320,7 @@ class ServerMetricsStore(EventPushActionsWorkerStore, SQLBaseStore):
         """
         Returns millisecond unixtime for start of UTC day.
         """
-        now = time.gmtime()
+        now = time.gmtime(self._clock.time())
         today_start = calendar.timegm((now.tm_year, now.tm_mon, now.tm_mday, 0, 0, 0))
         return today_start * 1000
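The fix above passes the homeserver's clock into `time.gmtime()` instead of letting it read the wall clock, so code driven by a fake clock in tests computes a consistent "start of day". A sketch of the same computation with the current time injected as a plain parameter:

```python
import calendar
import time

def start_of_day_ms(now_seconds: float) -> int:
    """Millisecond unixtime for the start of the UTC day containing now_seconds.

    Taking the time as a parameter (rather than calling time.gmtime() with no
    argument) makes the function a pure one that tests can drive directly.
    """
    now = time.gmtime(now_seconds)
    # Rebuild the timestamp with hours/minutes/seconds zeroed out.
    return calendar.timegm((now.tm_year, now.tm_mon, now.tm_mday, 0, 0, 0)) * 1000

# 2021-07-09 03:31:02 UTC quantises down to midnight of 2021-07-09.
print(start_of_day_ms(1625801462))  # 1625788800000
```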
@@ -352,7 +352,7 @@ class ServerMetricsStore(EventPushActionsWorkerStore, SQLBaseStore):
             ) udv
             ON u.user_id = udv.user_id AND u.device_id=udv.device_id
             INNER JOIN users ON users.name=u.user_id
-            WHERE last_seen > ? AND last_seen <= ?
+            WHERE ? <= last_seen AND last_seen < ?
             AND udv.timestamp IS NULL AND users.is_guest=0
             AND users.appservice_id IS NULL
             GROUP BY u.user_id, u.device_id
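The rewritten `WHERE` clause switches from a `(start, end]` range to a half-open `[start, end)` one. With half-open windows, a `last_seen` value that lands exactly on a window boundary is counted in exactly one window, so consecutive daily windows tile the timeline without gaps or double-counting. A small sketch of the invariant:

```python
def bucket_start(ts: int, bucket_size: int) -> int:
    """Quantise ts down to the start of its half-open bucket [start, start + bucket_size)."""
    return (ts // bucket_size) * bucket_size

DAY_MS = 86_400_000

# A timestamp exactly on a boundary belongs to the bucket it *starts*,
# not the bucket it ends.
assert bucket_start(DAY_MS, DAY_MS) == DAY_MS
assert bucket_start(DAY_MS - 1, DAY_MS) == 0

# Every timestamp lands in exactly one bucket.
ts = 1625801462000
start = bucket_start(ts, DAY_MS)
assert start <= ts < start + DAY_MS
```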

View File

@@ -392,7 +392,6 @@ class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore):
             "room_memberships",
             "room_stats_state",
             "room_stats_current",
-            "room_stats_historical",
             "room_stats_earliest_token",
             "rooms",
             "stream_ordering_to_exterm",

View File

@ -26,7 +26,6 @@ from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import StoreError from synapse.api.errors import StoreError
from synapse.storage.database import DatabasePool from synapse.storage.database import DatabasePool
from synapse.storage.databases.main.state_deltas import StateDeltasStore from synapse.storage.databases.main.state_deltas import StateDeltasStore
from synapse.storage.engines import PostgresEngine
from synapse.types import JsonDict from synapse.types import JsonDict
from synapse.util.caches.descriptors import cached from synapse.util.caches.descriptors import cached
@ -49,14 +48,6 @@ ABSOLUTE_STATS_FIELDS = {
"user": ("joined_rooms",), "user": ("joined_rooms",),
} }
# these fields are per-timeslice and so should be reset to 0 upon a new slice
# You can draw these stats on a histogram.
# Example: number of events sent locally during a time slice
PER_SLICE_FIELDS = {
"room": ("total_events", "total_event_bytes"),
"user": ("invites_sent", "rooms_created", "total_events", "total_event_bytes"),
}
TYPE_TO_TABLE = {"room": ("room_stats", "room_id"), "user": ("user_stats", "user_id")} TYPE_TO_TABLE = {"room": ("room_stats", "room_id"), "user": ("user_stats", "user_id")}
# these are the tables (& ID columns) which contain our actual subjects # these are the tables (& ID columns) which contain our actual subjects
@@ -106,7 +97,6 @@ class StatsStore(StateDeltasStore):
         self.server_name = hs.hostname
         self.clock = self.hs.get_clock()
         self.stats_enabled = hs.config.stats_enabled
-        self.stats_bucket_size = hs.config.stats_bucket_size

         self.stats_delta_processing_lock = DeferredLock()
@@ -122,22 +112,6 @@ class StatsStore(StateDeltasStore):
         self.db_pool.updates.register_noop_background_update("populate_stats_cleanup")
         self.db_pool.updates.register_noop_background_update("populate_stats_prepare")

-    def quantise_stats_time(self, ts):
-        """
-        Quantises a timestamp to be a multiple of the bucket size.
-
-        Args:
-            ts (int): the timestamp to quantise, in milliseconds since the Unix
-                Epoch
-
-        Returns:
-            int: a timestamp which
-                - is divisible by the bucket size;
-                - is no later than `ts`; and
-                - is the largest such timestamp.
-        """
-        return (ts // self.stats_bucket_size) * self.stats_bucket_size
-
     async def _populate_stats_process_users(self, progress, batch_size):
         """
         This is a background update which regenerates statistics for users.
@@ -288,56 +262,6 @@ class StatsStore(StateDeltasStore):
             desc="update_room_state",
         )

-    async def get_statistics_for_subject(
-        self, stats_type: str, stats_id: str, start: str, size: int = 100
-    ) -> List[dict]:
-        """
-        Get statistics for a given subject.
-
-        Args:
-            stats_type: The type of subject
-            stats_id: The ID of the subject (e.g. room_id or user_id)
-            start: Pagination start. Number of entries, not timestamp.
-            size: How many entries to return.
-
-        Returns:
-            A list of dicts, where the dict has the keys of
-            ABSOLUTE_STATS_FIELDS[stats_type], and "bucket_size" and "end_ts".
-        """
-        return await self.db_pool.runInteraction(
-            "get_statistics_for_subject",
-            self._get_statistics_for_subject_txn,
-            stats_type,
-            stats_id,
-            start,
-            size,
-        )
-
-    def _get_statistics_for_subject_txn(
-        self, txn, stats_type, stats_id, start, size=100
-    ):
-        """
-        Transaction-bound version of L{get_statistics_for_subject}.
-        """
-
-        table, id_col = TYPE_TO_TABLE[stats_type]
-        selected_columns = list(
-            ABSOLUTE_STATS_FIELDS[stats_type] + PER_SLICE_FIELDS[stats_type]
-        )
-
-        slice_list = self.db_pool.simple_select_list_paginate_txn(
-            txn,
-            table + "_historical",
-            "end_ts",
-            start,
-            size,
-            retcols=selected_columns + ["bucket_size", "end_ts"],
-            keyvalues={id_col: stats_id},
-            order_direction="DESC",
-        )
-
-        return slice_list
-
     @cached()
     async def get_earliest_token_for_stats(
         self, stats_type: str, id: str
@ -451,14 +375,10 @@ class StatsStore(StateDeltasStore):
table, id_col = TYPE_TO_TABLE[stats_type] table, id_col = TYPE_TO_TABLE[stats_type]
quantised_ts = self.quantise_stats_time(int(ts))
end_ts = quantised_ts + self.stats_bucket_size
# Lets be paranoid and check that all the given field names are known # Lets be paranoid and check that all the given field names are known
abs_field_names = ABSOLUTE_STATS_FIELDS[stats_type] abs_field_names = ABSOLUTE_STATS_FIELDS[stats_type]
slice_field_names = PER_SLICE_FIELDS[stats_type]
for field in chain(fields.keys(), absolute_field_overrides.keys()): for field in chain(fields.keys(), absolute_field_overrides.keys()):
if field not in abs_field_names and field not in slice_field_names: if field not in abs_field_names:
# guard against potential SQL injection dodginess # guard against potential SQL injection dodginess
raise ValueError( raise ValueError(
"%s is not a recognised field" "%s is not a recognised field"
@@ -491,20 +411,6 @@ class StatsStore(StateDeltasStore):
             additive_relatives=deltas_of_absolute_fields,
         )

-        per_slice_additive_relatives = {
-            key: fields.get(key, 0) for key in slice_field_names
-        }
-        self._upsert_copy_from_table_with_additive_relatives_txn(
-            txn=txn,
-            into_table=table + "_historical",
-            keyvalues={id_col: stats_id},
-            extra_dst_insvalues={"bucket_size": self.stats_bucket_size},
-            extra_dst_keyvalues={"end_ts": end_ts},
-            additive_relatives=per_slice_additive_relatives,
-            src_table=table + "_current",
-            copy_columns=abs_field_names,
-        )
-
     def _upsert_with_additive_relatives_txn(
         self, txn, table, keyvalues, absolutes, additive_relatives
     ):
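These helpers implement an "additive upsert": insert a row if it is absent, otherwise add deltas onto the values already stored. On engines with native upsert support, that maps onto `INSERT ... ON CONFLICT ... DO UPDATE`, where `EXCLUDED` names the row the insert attempted to write. A runnable sketch with SQLite (which has supported this syntax since 3.24; table and column names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE room_stats_current ("
    "room_id TEXT PRIMARY KEY, joined_members INTEGER NOT NULL)"
)

def upsert_add(room_id: str, delta: int) -> None:
    # Insert the row if absent; otherwise add the delta onto the stored value.
    conn.execute(
        """
        INSERT INTO room_stats_current (room_id, joined_members) VALUES (?, ?)
        ON CONFLICT (room_id) DO UPDATE SET
            joined_members = room_stats_current.joined_members + EXCLUDED.joined_members
        """,
        (room_id, delta),
    )

upsert_add("!room:example.org", 3)
upsert_add("!room:example.org", 2)
print(conn.execute("SELECT joined_members FROM room_stats_current").fetchone()[0])
```

The non-native branch of the helper achieves the same result manually: lock the table, read the current row, add the deltas in Python, then insert or update accordingly.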
@ -572,201 +478,6 @@ class StatsStore(StateDeltasStore):
current_row.update(absolutes) current_row.update(absolutes)
self.db_pool.simple_update_one_txn(txn, table, keyvalues, current_row) self.db_pool.simple_update_one_txn(txn, table, keyvalues, current_row)
def _upsert_copy_from_table_with_additive_relatives_txn(
self,
txn,
into_table,
keyvalues,
extra_dst_keyvalues,
extra_dst_insvalues,
additive_relatives,
src_table,
copy_columns,
):
"""Updates the historic stats table with latest updates.
This involves copying "absolute" fields from the `_current` table, and
adding relative fields to any existing values.
Args:
txn: Transaction
into_table (str): The destination table to UPSERT the row into
keyvalues (dict[str, any]): Row-identifying key values
extra_dst_keyvalues (dict[str, any]): Additional keyvalues
for `into_table`.
extra_dst_insvalues (dict[str, any]): Additional values to insert
on new row creation for `into_table`.
additive_relatives (dict[str, any]): Fields that will be added onto
if existing row present. (Must be disjoint from copy_columns.)
src_table (str): The source table to copy from
copy_columns (iterable[str]): The list of columns to copy
"""
if self.database_engine.can_native_upsert:
ins_columns = chain(
keyvalues,
copy_columns,
additive_relatives,
extra_dst_keyvalues,
extra_dst_insvalues,
)
sel_exprs = chain(
keyvalues,
copy_columns,
(
"?"
for _ in chain(
additive_relatives, extra_dst_keyvalues, extra_dst_insvalues
)
),
)
keyvalues_where = ("%s = ?" % f for f in keyvalues)
sets_cc = ("%s = EXCLUDED.%s" % (f, f) for f in copy_columns)
sets_ar = (
"%s = EXCLUDED.%s + %s.%s" % (f, f, into_table, f)
for f in additive_relatives
)
sql = """
INSERT INTO %(into_table)s (%(ins_columns)s)
SELECT %(sel_exprs)s
FROM %(src_table)s
WHERE %(keyvalues_where)s
ON CONFLICT (%(keyvalues)s)
DO UPDATE SET %(sets)s
""" % {
"into_table": into_table,
"ins_columns": ", ".join(ins_columns),
"sel_exprs": ", ".join(sel_exprs),
"keyvalues_where": " AND ".join(keyvalues_where),
"src_table": src_table,
"keyvalues": ", ".join(
chain(keyvalues.keys(), extra_dst_keyvalues.keys())
),
"sets": ", ".join(chain(sets_cc, sets_ar)),
}
qargs = list(
chain(
additive_relatives.values(),
extra_dst_keyvalues.values(),
extra_dst_insvalues.values(),
keyvalues.values(),
)
)
txn.execute(sql, qargs)
else:
self.database_engine.lock_table(txn, into_table)
src_row = self.db_pool.simple_select_one_txn(
txn, src_table, keyvalues, copy_columns
)
all_dest_keyvalues = {**keyvalues, **extra_dst_keyvalues}
dest_current_row = self.db_pool.simple_select_one_txn(
txn,
into_table,
keyvalues=all_dest_keyvalues,
retcols=list(chain(additive_relatives.keys(), copy_columns)),
allow_none=True,
)
if dest_current_row is None:
merged_dict = {
**keyvalues,
**extra_dst_keyvalues,
**extra_dst_insvalues,
**src_row,
**additive_relatives,
}
self.db_pool.simple_insert_txn(txn, into_table, merged_dict)
else:
for (key, val) in additive_relatives.items():
src_row[key] = dest_current_row[key] + val
self.db_pool.simple_update_txn(
txn, into_table, all_dest_keyvalues, src_row
)
async def get_changes_room_total_events_and_bytes(
self, min_pos: int, max_pos: int
) -> Tuple[Dict[str, Dict[str, int]], Dict[str, Dict[str, int]]]:
"""Fetches the counts of events in the given range of stream IDs.
Args:
min_pos
max_pos
Returns:
Mapping of room ID to field changes.
"""
return await self.db_pool.runInteraction(
"stats_incremental_total_events_and_bytes",
self.get_changes_room_total_events_and_bytes_txn,
min_pos,
max_pos,
)
def get_changes_room_total_events_and_bytes_txn(
self, txn, low_pos: int, high_pos: int
) -> Tuple[Dict[str, Dict[str, int]], Dict[str, Dict[str, int]]]:
"""Gets the total_events and total_event_bytes counts for rooms and
senders, in a range of stream_orderings (including backfilled events).
Args:
txn
low_pos: Low stream ordering
high_pos: High stream ordering
Returns:
The room and user deltas for total_events/total_event_bytes in the
format of `stats_id` -> fields
"""
if low_pos >= high_pos:
# nothing to do here.
return {}, {}
if isinstance(self.database_engine, PostgresEngine):
new_bytes_expression = "OCTET_LENGTH(json)"
else:
new_bytes_expression = "LENGTH(CAST(json AS BLOB))"
sql = """
SELECT events.room_id, COUNT(*) AS new_events, SUM(%s) AS new_bytes
FROM events INNER JOIN event_json USING (event_id)
WHERE (? < stream_ordering AND stream_ordering <= ?)
OR (? <= stream_ordering AND stream_ordering <= ?)
GROUP BY events.room_id
""" % (
new_bytes_expression,
)
txn.execute(sql, (low_pos, high_pos, -high_pos, -low_pos))
room_deltas = {
room_id: {"total_events": new_events, "total_event_bytes": new_bytes}
for room_id, new_events, new_bytes in txn
}
sql = """
SELECT events.sender, COUNT(*) AS new_events, SUM(%s) AS new_bytes
FROM events INNER JOIN event_json USING (event_id)
WHERE (? < stream_ordering AND stream_ordering <= ?)
OR (? <= stream_ordering AND stream_ordering <= ?)
GROUP BY events.sender
""" % (
new_bytes_expression,
)
txn.execute(sql, (low_pos, high_pos, -high_pos, -low_pos))
user_deltas = {
user_id: {"total_events": new_events, "total_event_bytes": new_bytes}
for user_id, new_events, new_bytes in txn
if self.hs.is_mine_id(user_id)
}
return room_deltas, user_deltas
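The double-range `WHERE` clause works because backfilled events carry negative stream orderings: scanning both `(low, high]` and its mirrored negative range counts live and backfilled events in one query. A toy `sqlite3` illustration of the same clause (illustrative schema and room IDs, not Synapse code):

```python
import sqlite3

# Live events get positive stream orderings, backfilled ones get negative
# orderings, so the window (low, high] is scanned twice -- once per sign.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (room_id TEXT, stream_ordering INTEGER)")
rows = [("!a", 1), ("!a", 2), ("!a", -1), ("!b", 3), ("!b", -2)]
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

low_pos, high_pos = 0, 2
sql = """
    SELECT room_id, COUNT(*) FROM events
    WHERE (? < stream_ordering AND stream_ordering <= ?)
       OR (? <= stream_ordering AND stream_ordering <= ?)
    GROUP BY room_id
"""
counts = dict(conn.execute(sql, (low_pos, high_pos, -high_pos, -low_pos)))
print(counts)
```

Here `!a` contributes two live events (orderings 1 and 2) plus one backfilled event (-1), while `!b`'s only in-window event is the backfilled -2.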
    async def _calculate_and_set_initial_state_for_room(
        self, room_id: str
    ) -> Tuple[dict, dict, int]:


@@ -21,6 +21,10 @@ older versions of Synapse).
 See `README.md <synapse/storage/schema/README.md>`_ for more information on how this
 works.
 
+Changes in SCHEMA_VERSION = 61:
+ - The `user_stats_historical` and `room_stats_historical` tables are not written and
+   are not read (previously, they were written but not read).
+
 """


@@ -88,16 +88,12 @@ class StatsRoomTests(unittest.HomeserverTestCase):
     def _get_current_stats(self, stats_type, stat_id):
         table, id_col = stats.TYPE_TO_TABLE[stats_type]
 
-        cols = list(stats.ABSOLUTE_STATS_FIELDS[stats_type]) + list(
-            stats.PER_SLICE_FIELDS[stats_type]
-        )
-        end_ts = self.store.quantise_stats_time(self.reactor.seconds() * 1000)
+        cols = list(stats.ABSOLUTE_STATS_FIELDS[stats_type])
 
         return self.get_success(
             self.store.db_pool.simple_select_one(
-                table + "_historical",
-                {id_col: stat_id, end_ts: end_ts},
+                table + "_current",
+                {id_col: stat_id},
                 cols,
                 allow_none=True,
             )
@@ -156,115 +152,6 @@ class StatsRoomTests(unittest.HomeserverTestCase):
         self.assertEqual(len(r), 1)
         self.assertEqual(r[0]["topic"], "foo")
 
-    def test_initial_earliest_token(self):
-        """
-        Ingestion via notify_new_event will ignore tokens that the background
-        update have already processed.
-        """
-
-        self.reactor.advance(86401)
-
-        self.hs.config.stats_enabled = False
-        self.handler.stats_enabled = False
-
-        u1 = self.register_user("u1", "pass")
-        u1_token = self.login("u1", "pass")
-
-        u2 = self.register_user("u2", "pass")
-        u2_token = self.login("u2", "pass")
-
-        u3 = self.register_user("u3", "pass")
-        u3_token = self.login("u3", "pass")
-
-        room_1 = self.helper.create_room_as(u1, tok=u1_token)
-        self.helper.send_state(
-            room_1, event_type="m.room.topic", body={"topic": "foo"}, tok=u1_token
-        )
-
-        # Begin the ingestion by creating the temp tables. This will also store
-        # the position that the deltas should begin at, once they take over.
-        self.hs.config.stats_enabled = True
-        self.handler.stats_enabled = True
-        self.store.db_pool.updates._all_done = False
-        self.get_success(
-            self.store.db_pool.simple_update_one(
-                table="stats_incremental_position",
-                keyvalues={},
-                updatevalues={"stream_id": 0},
-            )
-        )
-
-        self.get_success(
-            self.store.db_pool.simple_insert(
-                "background_updates",
-                {"update_name": "populate_stats_prepare", "progress_json": "{}"},
-            )
-        )
-
-        while not self.get_success(
-            self.store.db_pool.updates.has_completed_background_updates()
-        ):
-            self.get_success(
-                self.store.db_pool.updates.do_next_background_update(100), by=0.1
-            )
-
-        # Now, before the table is actually ingested, add some more events.
-        self.helper.invite(room=room_1, src=u1, targ=u2, tok=u1_token)
-        self.helper.join(room=room_1, user=u2, tok=u2_token)
-
-        # orig_delta_processor = self.store.
-
-        # Now do the initial ingestion.
-        self.get_success(
-            self.store.db_pool.simple_insert(
-                "background_updates",
-                {"update_name": "populate_stats_process_rooms", "progress_json": "{}"},
-            )
-        )
-        self.get_success(
-            self.store.db_pool.simple_insert(
-                "background_updates",
-                {
-                    "update_name": "populate_stats_cleanup",
-                    "progress_json": "{}",
-                    "depends_on": "populate_stats_process_rooms",
-                },
-            )
-        )
-
-        self.store.db_pool.updates._all_done = False
-        while not self.get_success(
-            self.store.db_pool.updates.has_completed_background_updates()
-        ):
-            self.get_success(
-                self.store.db_pool.updates.do_next_background_update(100), by=0.1
-            )
-
-        self.reactor.advance(86401)
-
-        # Now add some more events, triggering ingestion. Because of the stream
-        # position being set to before the events sent in the middle, a simpler
-        # implementation would reprocess those events, and say there were four
-        # users, not three.
-        self.helper.invite(room=room_1, src=u1, targ=u3, tok=u1_token)
-        self.helper.join(room=room_1, user=u3, tok=u3_token)
-
-        # self.handler.notify_new_event()
-
-        # We need to let the delta processor advance…
-        self.reactor.advance(10 * 60)
-
-        # Get the slices! There should be two -- day 1, and day 2.
-        r = self.get_success(self.store.get_statistics_for_subject("room", room_1, 0))
-
-        self.assertEqual(len(r), 2)
-
-        # The oldest has 2 joined members
-        self.assertEqual(r[-1]["joined_members"], 2)
-
-        # The newest has 3
-        self.assertEqual(r[0]["joined_members"], 3)
-
     def test_create_user(self):
         """
         When we create a user, it should have statistics already ready.
@@ -296,22 +183,6 @@ class StatsRoomTests(unittest.HomeserverTestCase):
         self.assertIsNotNone(r1stats)
         self.assertIsNotNone(r2stats)
 
-        # contains the default things you'd expect in a fresh room
-        self.assertEqual(
-            r1stats["total_events"],
-            EXPT_NUM_STATE_EVTS_IN_FRESH_PUBLIC_ROOM,
-            "Wrong number of total_events in new room's stats!"
-            " You may need to update this if more state events are added to"
-            " the room creation process.",
-        )
-
-        self.assertEqual(
-            r2stats["total_events"],
-            EXPT_NUM_STATE_EVTS_IN_FRESH_PRIVATE_ROOM,
-            "Wrong number of total_events in new room's stats!"
-            " You may need to update this if more state events are added to"
-            " the room creation process.",
-        )
-
         self.assertEqual(
             r1stats["current_state_events"], EXPT_NUM_STATE_EVTS_IN_FRESH_PUBLIC_ROOM
         )
@@ -327,24 +198,6 @@ class StatsRoomTests(unittest.HomeserverTestCase):
         self.assertEqual(r2stats["invited_members"], 0)
         self.assertEqual(r2stats["banned_members"], 0)
 
-    def test_send_message_increments_total_events(self):
-        """
-        When we send a message, it increments total_events.
-        """
-
-        self._perform_background_initial_update()
-
-        u1 = self.register_user("u1", "pass")
-        u1token = self.login("u1", "pass")
-        r1 = self.helper.create_room_as(u1, tok=u1token)
-
-        r1stats_ante = self._get_current_stats("room", r1)
-
-        self.helper.send(r1, "hiss", tok=u1token)
-
-        r1stats_post = self._get_current_stats("room", r1)
-
-        self.assertEqual(r1stats_post["total_events"] - r1stats_ante["total_events"], 1)
-
     def test_updating_profile_information_does_not_increase_joined_members_count(self):
         """
         Check that the joined_members count does not increase when a user changes their
@@ -378,7 +231,7 @@ class StatsRoomTests(unittest.HomeserverTestCase):
     def test_send_state_event_nonoverwriting(self):
         """
-        When we send a non-overwriting state event, it increments total_events AND current_state_events
+        When we send a non-overwriting state event, it increments current_state_events
         """
 
         self._perform_background_initial_update()
@@ -399,44 +252,14 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
         r1stats_post = self._get_current_stats("room", r1)
 
-        self.assertEqual(r1stats_post["total_events"] - r1stats_ante["total_events"], 1)
         self.assertEqual(
             r1stats_post["current_state_events"] - r1stats_ante["current_state_events"],
             1,
         )
 
-    def test_send_state_event_overwriting(self):
-        """
-        When we send an overwriting state event, it increments total_events ONLY
-        """
-
-        self._perform_background_initial_update()
-
-        u1 = self.register_user("u1", "pass")
-        u1token = self.login("u1", "pass")
-        r1 = self.helper.create_room_as(u1, tok=u1token)
-
-        self.helper.send_state(
-            r1, "cat.hissing", {"value": True}, tok=u1token, state_key="tabby"
-        )
-
-        r1stats_ante = self._get_current_stats("room", r1)
-
-        self.helper.send_state(
-            r1, "cat.hissing", {"value": False}, tok=u1token, state_key="tabby"
-        )
-
-        r1stats_post = self._get_current_stats("room", r1)
-
-        self.assertEqual(r1stats_post["total_events"] - r1stats_ante["total_events"], 1)
-        self.assertEqual(
-            r1stats_post["current_state_events"] - r1stats_ante["current_state_events"],
-            0,
-        )
-
     def test_join_first_time(self):
         """
-        When a user joins a room for the first time, total_events, current_state_events and
+        When a user joins a room for the first time, current_state_events and
         joined_members should increase by exactly 1.
         """
@@ -455,7 +278,6 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
         r1stats_post = self._get_current_stats("room", r1)
 
-        self.assertEqual(r1stats_post["total_events"] - r1stats_ante["total_events"], 1)
         self.assertEqual(
             r1stats_post["current_state_events"] - r1stats_ante["current_state_events"],
             1,
@@ -466,7 +288,7 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
     def test_join_after_leave(self):
         """
-        When a user joins a room after being previously left, total_events and
+        When a user joins a room after being previously left,
         joined_members should increase by exactly 1.
         current_state_events should not increase.
         left_members should decrease by exactly 1.
@@ -490,7 +312,6 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
         r1stats_post = self._get_current_stats("room", r1)
 
-        self.assertEqual(r1stats_post["total_events"] - r1stats_ante["total_events"], 1)
         self.assertEqual(
             r1stats_post["current_state_events"] - r1stats_ante["current_state_events"],
             0,
@@ -504,7 +325,7 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
     def test_invited(self):
         """
-        When a user invites another user, current_state_events, total_events and
+        When a user invites another user, current_state_events and
         invited_members should increase by exactly 1.
         """
@@ -522,7 +343,6 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
         r1stats_post = self._get_current_stats("room", r1)
 
-        self.assertEqual(r1stats_post["total_events"] - r1stats_ante["total_events"], 1)
         self.assertEqual(
             r1stats_post["current_state_events"] - r1stats_ante["current_state_events"],
             1,
@@ -533,7 +353,7 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
     def test_join_after_invite(self):
         """
-        When a user joins a room after being invited, total_events and
+        When a user joins a room after being invited and
         joined_members should increase by exactly 1.
         current_state_events should not increase.
         invited_members should decrease by exactly 1.
@@ -556,7 +376,6 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
         r1stats_post = self._get_current_stats("room", r1)
 
-        self.assertEqual(r1stats_post["total_events"] - r1stats_ante["total_events"], 1)
         self.assertEqual(
             r1stats_post["current_state_events"] - r1stats_ante["current_state_events"],
             0,
@@ -570,7 +389,7 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
     def test_left(self):
         """
-        When a user leaves a room after joining, total_events and
+        When a user leaves a room after joining and
         left_members should increase by exactly 1.
         current_state_events should not increase.
         joined_members should decrease by exactly 1.
@@ -593,7 +412,6 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
         r1stats_post = self._get_current_stats("room", r1)
 
-        self.assertEqual(r1stats_post["total_events"] - r1stats_ante["total_events"], 1)
         self.assertEqual(
             r1stats_post["current_state_events"] - r1stats_ante["current_state_events"],
             0,
@@ -607,7 +425,7 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
     def test_banned(self):
         """
-        When a user is banned from a room after joining, total_events and
+        When a user is banned from a room after joining and
         left_members should increase by exactly 1.
         current_state_events should not increase.
         banned_members should decrease by exactly 1.
@@ -630,7 +448,6 @@ class StatsRoomTests(unittest.HomeserverTestCase):
 
         r1stats_post = self._get_current_stats("room", r1)
 
-        self.assertEqual(r1stats_post["total_events"] - r1stats_ante["total_events"], 1)
         self.assertEqual(
             r1stats_post["current_state_events"] - r1stats_ante["current_state_events"],
             0,


@@ -1753,7 +1753,6 @@ PURGE_TABLES = [
     "room_memberships",
     "room_stats_state",
     "room_stats_current",
-    "room_stats_historical",
     "room_stats_earliest_token",
     "rooms",
     "stream_ordering_to_exterm",