Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes

michaelkaye/remove_warning
Erik Johnston 2020-11-13 12:05:55 +00:00
commit 52984e9e69
55 changed files with 1445 additions and 158 deletions


@ -156,6 +156,24 @@ directory, you will need both a regular newsfragment *and* an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)
## Documentation
There is a growing amount of documentation located in the [docs](docs)
directory. This documentation is intended primarily for sysadmins running their
own Synapse instance, as well as developers interacting externally with
Synapse. [docs/dev](docs/dev) exists primarily to house documentation for
Synapse developers. [docs/admin_api](docs/admin_api) houses documentation
regarding Synapse's Admin API, which is used mostly by sysadmins and external
service developers.
New files added to both folders should be written in [Github-Flavoured
Markdown](https://guides.github.com/features/mastering-markdown/), and attempts
should be made to migrate existing documents to markdown where possible.
Some documentation also exists in [Synapse's Github
Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
contributed to by community authors.
## Sign off
In order to have a concrete record that your contribution is intentional

changelog.d/8286.feature Normal file

@ -0,0 +1 @@
Add a push rule that highlights when a jitsi conference is created in a room.

changelog.d/8636.misc Normal file

@ -0,0 +1 @@
Catch exceptions during initialization of `password_providers`. Contributed by Nicolai Søborg.

changelog.d/8694.misc Normal file

@ -0,0 +1 @@
Improve start time by adding an index to `e2e_cross_signing_keys.stream_id`.

changelog.d/8698.misc Normal file

@ -0,0 +1 @@
Use Python 3.8 in Docker images by default.

changelog.d/8700.feature Normal file

@ -0,0 +1 @@
Add an admin API for local user media statistics. Contributed by @dklimpel.

changelog.d/8701.doc Normal file

@ -0,0 +1 @@
Add notes on SSO logins and the `media_repository` worker.

changelog.d/8702.misc Normal file

@ -0,0 +1 @@
Remove the "draft" status of the Room Details Admin API.

changelog.d/8705.misc Normal file

@ -0,0 +1 @@
Improve the error returned when a non-string displayname or avatar_url is used when updating a user's profile.

changelog.d/8706.doc Normal file

@ -0,0 +1 @@
Document experimental support for running multiple event persisters.

changelog.d/8708.misc Normal file

@ -0,0 +1 @@
Block attempts by clients to send server ACLs, or redactions of server ACLs, that would result in the local server being blocked from the room.

changelog.d/8712.misc Normal file

@ -0,0 +1 @@
Add metrics that allow the local sysadmin to track 3PID `/requestToken` requests.

changelog.d/8713.misc Normal file

@ -0,0 +1 @@
Consolidate duplicated lists of purged tables that are checked in tests.

changelog.d/8714.doc Normal file

@ -0,0 +1 @@
Add information regarding the various sources of, and expected contributions to, Synapse's documentation to `CONTRIBUTING.md`.

changelog.d/8719.misc Normal file

@ -0,0 +1 @@
Improve the error message returned when a remote server incorrectly sets the `Content-Type` header in response to a JSON request.

changelog.d/8722.feature Normal file

@ -0,0 +1 @@
Add `displayname` to Shared-Secret Registration for admins.

changelog.d/8726.bugfix Normal file

@ -0,0 +1 @@
Fix bug where Synapse would not recover after losing connection to the database.

changelog.d/8728.bugfix Normal file

@ -0,0 +1 @@
Fix bug where the `/_synapse/admin/v1/send_server_notice` API could send notices to non-notice rooms.

changelog.d/8729.bugfix Normal file

@ -0,0 +1 @@
Fix the port script failing when the DB has no backfilled events. Broke in v1.21.0.

changelog.d/8730.bugfix Normal file

@ -0,0 +1 @@
Fix port script to correctly handle foreign key constraints. Broke in v1.21.0.

changelog.d/8752.misc Normal file

@ -0,0 +1 @@
Speed up repeated state resolutions on the same room by caching event ID to auth event ID lookups.

changelog.d/8755.bugfix Normal file

@ -0,0 +1 @@
Fix port script so that it can be run again after a failure. Broke in v1.21.0.


@ -11,7 +11,7 @@
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
#
ARG PYTHON_VERSION=3.7
ARG PYTHON_VERSION=3.8
###
### Stage 0: builder


@ -18,7 +18,8 @@ To fetch the nonce, you need to request one from the API::
Once you have the nonce, you can make a ``POST`` to the same URL with a JSON
body containing the nonce, username, password, whether they are an admin
(optional, False by default), and a HMAC digest of the content.
(optional, False by default), and a HMAC digest of the content. You can also
set the displayname (optional, ``username`` by default).
As an example::
@ -26,6 +27,7 @@ As an example::
> {
"nonce": "thisisanonce",
"username": "pepper_roni",
"displayname": "Pepper Roni",
"password": "pizza",
"admin": true,
"mac": "mac_digest_here"


@ -265,12 +265,10 @@ Response:
Once the `next_token` parameter is no longer present, we know we've reached the
end of the list.
# DRAFT: Room Details API
# Room Details API
The Room Details admin API allows server admins to get all details of a room.
This API is still a draft and details might change!
The following fields are possible in the JSON response body:
* `room_id` - The ID of the room.


@ -0,0 +1,83 @@
# Users' media usage statistics
Returns information about all local media usage of users, and allows
filtering by time and user.
The API is:
```
GET /_synapse/admin/v1/statistics/users/media
```
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [README.rst](README.rst).
A response body like the following is returned:
```json
{
"users": [
{
"displayname": "foo_user_0",
"media_count": 2,
"media_length": 134,
"user_id": "@foo_user_0:test"
},
{
"displayname": "foo_user_1",
"media_count": 2,
"media_length": 134,
"user_id": "@foo_user_1:test"
}
],
"next_token": 3,
"total": 10
}
```
To paginate, check for `next_token` and, if present, call the endpoint again
with `from` set to the value of `next_token`. This will return a new page.
If the endpoint does not return a `next_token` then there are no more
users to paginate through.
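A minimal sketch of that pagination loop, assuming the `requests` library and
placeholder homeserver/token values:

```python
import requests

BASE = "https://homeserver.example.com"  # placeholder homeserver URL
TOKEN = "<admin access token>"  # placeholder admin access token

def fetch_all_user_media_stats():
    """Follow `next_token` until the endpoint stops returning one."""
    params = {"limit": 100}
    users = []
    while True:
        resp = requests.get(
            BASE + "/_synapse/admin/v1/statistics/users/media",
            headers={"Authorization": "Bearer " + TOKEN},
            params=params,
        )
        body = resp.json()
        users.extend(body["users"])
        if "next_token" not in body:
            return users  # reached the end of the list
        params["from"] = body["next_token"]
```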
**Parameters**
The following parameters should be set in the URL:
* `limit`: string representing a positive integer - Is optional but is
used for pagination, denoting the maximum number of items to return
in this call. Defaults to `100`.
* `from`: string representing a positive integer - Is optional but used for pagination,
denoting the offset in the returned results. This should be treated as an opaque value
and not explicitly set to anything other than the return value of `next_token` from a
previous call. Defaults to `0`.
* `order_by` - string - The method in which to sort the returned list of users. Valid values are:
- `user_id` - Users are ordered alphabetically by `user_id`. This is the default.
- `displayname` - Users are ordered alphabetically by `displayname`.
- `media_length` - Users are ordered by the total size of uploaded media in bytes.
Smallest to largest.
- `media_count` - Users are ordered by number of uploaded media. Smallest to largest.
* `from_ts` - string representing a positive integer - Considers only
files created at this timestamp or later. Unix timestamp in ms.
* `until_ts` - string representing a positive integer - Considers only
files created at this timestamp or earlier. Unix timestamp in ms.
* `search_term` - string - Filter users by their user ID localpart **or** displayname.
The search term can be found in any part of the string.
Defaults to no filtering.
* `dir` - string - Direction of order. Either `f` for forwards or `b` for backwards.
Setting this value to `b` will reverse the above sort order. Defaults to `f`.
**Response**
The following fields are returned in the JSON response body:
* `users` - An array of objects, each containing information
about the user and their local media. Objects contain the following fields:
- `displayname` - string - Displayname of this user.
- `media_count` - integer - Number of uploaded media by this user.
- `media_length` - integer - Size of uploaded media in bytes by this user.
- `user_id` - string - Fully-qualified user ID (ex. `@user:server.com`).
* `next_token` - integer - Opaque value used for pagination. See above.
* `total` - integer - Total number of users after filtering.


@ -205,7 +205,7 @@ GitHub is a bit special as it is not an OpenID Connect compliant provider, but
just a regular OAuth2 provider.
The [`/user` API endpoint](https://developer.github.com/v3/users/#get-the-authenticated-user)
can be used to retrieve information on the authenticated user. As the Synaspse
can be used to retrieve information on the authenticated user. As the Synapse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.
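A sketch of such a configuration, assuming the `oidc_config` option names used
by Synapse at this point (GitHub's `/user` endpoint returns a stable numeric
`id`, which can serve as the subject):

```yaml
oidc_config:
  enabled: true
  discover: false
  issuer: "https://github.com/"
  client_id: "your-client-id"  # placeholder
  client_secret: "your-client-secret"  # placeholder
  authorization_endpoint: "https://github.com/login/oauth/authorize"
  token_endpoint: "https://github.com/login/oauth/access_token"
  userinfo_endpoint: "https://api.github.com/user"
  scopes: ["read:user"]
  user_mapping_provider:
    config:
      subject_claim: "id"
```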


@ -37,10 +37,10 @@ synapse master process to be started as part of the `matrix-synapse.target`
target.
1. For each worker process to be enabled, run `systemctl enable
matrix-synapse-worker@<worker_name>.service`. For each `<worker_name>`, there
should be a corresponding configuration file
should be a corresponding configuration file.
`/etc/matrix-synapse/workers/<worker_name>.yaml`.
1. Start all the synapse processes with `systemctl start matrix-synapse.target`.
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`/
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`.
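For example, with a hypothetical worker named `generic_worker1`, the whole
sequence is:

```sh
systemctl enable matrix-synapse-worker@generic_worker1.service
systemctl start matrix-synapse.target
systemctl enable matrix-synapse.target
```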
## Usage


@ -116,7 +116,7 @@ public internet; it has no authentication and is unencrypted.
### Worker configuration
In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unqiue name for the worker
application (`worker_app`), and you should specify a unique name for the worker
(`worker_name`). The currently available worker applications are listed below.
You must also specify the HTTP replication endpoint that it should talk to on
the main synapse process. `worker_replication_host` should specify the host of
@ -262,6 +262,9 @@ using):
Note that a HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.
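A minimal sketch of a worker config file using these options (worker name and
ports here are hypothetical):

```yaml
worker_app: synapse.app.generic_worker
worker_name: generic_worker1

# Point at the HTTP replication listener on the main process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client, federation]
```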
Ensure that all SSO logins go to a single process (usually the main process).
For the problems that arise when multiple workers handle the SSO endpoints,
see [#7530](https://github.com/matrix-org/synapse/issues/7530).
#### Load balancing
@ -302,7 +305,7 @@ Additionally, there is *experimental* support for moving writing of specific
streams (such as events) off of the main process to a particular worker. (This
is only supported with Redis-based replication.)
Currently support streams are `events` and `typing`.
Currently supported streams are `events` and `typing`.
To enable this, the worker must have a HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. For example to
@ -319,6 +322,18 @@ stream_writers:
events: event_persister1
```
The `events` stream also experimentally supports having multiple writers, where
work is sharded between them by room ID. Note that you *must* restart all worker
instances when adding or removing event persisters. An example `stream_writers`
configuration with multiple writers:
```yaml
stream_writers:
events:
- event_persister1
- event_persister2
```
#### Background tasks
There is also *experimental* support for moving background tasks to a separate
@ -408,6 +423,8 @@ and you must configure a single instance to run the background tasks, e.g.:
media_instance_running_background_jobs: "media-repository-1"
```
Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for both inbound client and federation requests (if they are handled separately).
### `synapse.app.user_dir`
Handles searches in the user directory. It can handle REST endpoints matching


@ -13,6 +13,7 @@ files =
synapse/config,
synapse/event_auth.py,
synapse/events/builder.py,
synapse/events/validator.py,
synapse/events/spamcheck.py,
synapse/federation,
synapse/handlers/_base.py,


@ -22,7 +22,7 @@ import logging
import sys
import time
import traceback
from typing import Optional
from typing import Dict, Optional, Set
import yaml
@ -40,6 +40,7 @@ from synapse.storage.database import DatabasePool, make_conn
from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpdateStore
from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.databases.main.end_to_end_keys import EndToEndKeyBackgroundStore
from synapse.storage.databases.main.events_bg_updates import (
EventsBackgroundUpdatesStore,
)
@ -174,6 +175,7 @@ class Store(
StateBackgroundUpdateStore,
MainStateBackgroundUpdateStore,
UserDirectoryBackgroundUpdateStore,
EndToEndKeyBackgroundStore,
StatsStore,
):
def execute(self, f, *args, **kwargs):
@ -290,6 +292,34 @@ class Porter(object):
return table, already_ported, total_to_port, forward_chunk, backward_chunk
async def get_table_constraints(self) -> Dict[str, Set[str]]:
"""Returns a map of tables that have foreign key constraints to tables they depend on.
"""
def _get_constraints(txn):
# We can pull the information about foreign key constraints out from
# the postgres schema tables.
sql = """
SELECT DISTINCT
tc.table_name,
ccu.table_name AS foreign_table_name
FROM
information_schema.table_constraints AS tc
INNER JOIN information_schema.constraint_column_usage AS ccu
USING (table_schema, constraint_name)
WHERE tc.constraint_type = 'FOREIGN KEY';
"""
txn.execute(sql)
results = {}
for table, foreign_table in txn:
results.setdefault(table, set()).add(foreign_table)
return results
return await self.postgres_store.db_pool.runInteraction(
"get_table_constraints", _get_constraints
)
async def handle_table(
self, table, postgres_size, table_size, forward_chunk, backward_chunk
):
@ -589,7 +619,18 @@ class Porter(object):
"create_port_table", create_port_table
)
# Step 2. Get tables.
# Step 2. Set up sequences
#
# We do this before porting the tables so that even if we fail half
# way through, the postgres DB always has sequences that are greater
# than their respective tables. If we don't then creating the
# `DataStore` object will fail due to the inconsistency.
self.progress.set_state("Setting up sequence generators")
await self._setup_state_group_id_seq()
await self._setup_user_id_seq()
await self._setup_events_stream_seqs()
# Step 3. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = await self.sqlite_store.db_pool.simple_select_onecol(
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
@ -604,7 +645,7 @@ class Porter(object):
tables = set(sqlite_tables) & set(postgres_tables)
logger.info("Found %d tables", len(tables))
# Step 3. Figure out what still needs copying
# Step 4. Figure out what still needs copying
self.progress.set_state("Checking on port progress")
setup_res = await make_deferred_yieldable(
defer.gatherResults(
@ -617,21 +658,43 @@ class Porter(object):
consumeErrors=True,
)
)
# Map from table name to args passed to `handle_table`, i.e. a tuple
# of: `postgres_size`, `table_size`, `forward_chunk`, `backward_chunk`.
tables_to_port_info_map = {r[0]: r[1:] for r in setup_res}
# Step 4. Do the copying.
# Step 5. Do the copying.
#
# This is slightly convoluted as we need to ensure tables are ported
# in the correct order due to foreign key constraints.
self.progress.set_state("Copying to postgres")
await make_deferred_yieldable(
defer.gatherResults(
[run_in_background(self.handle_table, *res) for res in setup_res],
consumeErrors=True,
)
)
# Step 5. Set up sequences
self.progress.set_state("Setting up sequence generators")
await self._setup_state_group_id_seq()
await self._setup_user_id_seq()
await self._setup_events_stream_seqs()
constraints = await self.get_table_constraints()
tables_ported = set() # type: Set[str]
while tables_to_port_info_map:
# Pulls out all tables that are still to be ported and which
# only depend on tables that are already ported (if any).
tables_to_port = [
table
for table in tables_to_port_info_map
if not constraints.get(table, set()) - tables_ported
]
await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self.handle_table,
table,
*tables_to_port_info_map.pop(table),
)
for table in tables_to_port
],
consumeErrors=True,
)
)
tables_ported.update(tables_to_port)
self.progress.done()
except Exception as e:
@ -790,45 +853,62 @@ class Porter(object):
return done, remaining + done
def _setup_state_group_id_seq(self):
async def _setup_state_group_id_seq(self):
curr_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
table="state_groups", keyvalues={}, retcol="MAX(id)", allow_none=True
)
if not curr_id:
return
def r(txn):
txn.execute("SELECT MAX(id) FROM state_groups")
curr_id = txn.fetchone()[0]
if not curr_id:
return
next_id = curr_id + 1
txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
return self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
await self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
async def _setup_user_id_seq(self):
curr_id = await self.sqlite_store.db_pool.runInteraction(
"setup_user_id_seq", find_max_generated_user_id_localpart
)
def _setup_user_id_seq(self):
def r(txn):
next_id = find_max_generated_user_id_localpart(txn) + 1
next_id = curr_id + 1
txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
def _setup_events_stream_seqs(self):
def r(txn):
txn.execute("SELECT MAX(stream_ordering) FROM events")
curr_id = txn.fetchone()[0]
if curr_id:
next_id = curr_id + 1
async def _setup_events_stream_seqs(self):
"""Set the event stream sequences to the correct values.
"""
# We get called before we've ported the events table, so we need to
# fetch the current positions from the SQLite store.
curr_forward_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
table="events", keyvalues={}, retcol="MAX(stream_ordering)", allow_none=True
)
curr_backward_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
table="events",
keyvalues={},
retcol="MAX(-MIN(stream_ordering), 1)",
allow_none=True,
)
def _setup_events_stream_seqs_set_pos(txn):
if curr_forward_id:
txn.execute(
"ALTER SEQUENCE events_stream_seq RESTART WITH %s", (next_id,)
"ALTER SEQUENCE events_stream_seq RESTART WITH %s",
(curr_forward_id + 1,),
)
txn.execute("SELECT -MIN(stream_ordering) FROM events")
curr_id = txn.fetchone()[0]
if curr_id:
next_id = curr_id + 1
txn.execute(
"ALTER SEQUENCE events_backfill_stream_seq RESTART WITH %s",
(next_id,),
)
txn.execute(
"ALTER SEQUENCE events_backfill_stream_seq RESTART WITH %s",
(curr_backward_id + 1,),
)
return self.postgres_store.db_pool.runInteraction(
"_setup_events_stream_seqs", r
return await self.postgres_store.db_pool.runInteraction(
"_setup_events_stream_seqs", _setup_events_stream_seqs_set_pos,
)


@ -13,20 +13,26 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Union
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import EventFormatVersions
from synapse.config.homeserver import HomeServerConfig
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.events.utils import validate_canonicaljson
from synapse.federation.federation_server import server_matches_acl_event
from synapse.types import EventID, RoomID, UserID
class EventValidator:
def validate_new(self, event, config):
def validate_new(self, event: EventBase, config: HomeServerConfig):
"""Validates the event has roughly the right format
Args:
event (FrozenEvent): The event to validate.
config (Config): The homeserver's configuration.
event: The event to validate.
config: The homeserver's configuration.
"""
self.validate_builder(event)
@ -76,12 +82,18 @@ class EventValidator:
if event.type == EventTypes.Retention:
self._validate_retention(event)
def _validate_retention(self, event):
if event.type == EventTypes.ServerACL:
if not server_matches_acl_event(config.server_name, event):
raise SynapseError(
400, "Can't create an ACL event that denies the local server"
)
def _validate_retention(self, event: EventBase):
"""Checks that an event that defines the retention policy for a room respects the
format enforced by the spec.
Args:
event (FrozenEvent): The event to validate.
event: The event to validate.
"""
if not event.is_state():
raise SynapseError(code=400, msg="must be a state event")
@ -116,13 +128,10 @@ class EventValidator:
errcode=Codes.BAD_JSON,
)
def validate_builder(self, event):
def validate_builder(self, event: Union[EventBase, EventBuilder]):
"""Validates that the builder/event has roughly the right format. Only
checks values that we expect a proto event to have, rather than all the
fields an event would have
Args:
event (EventBuilder|FrozenEvent)
"""
strings = ["room_id", "sender", "type"]


@ -181,10 +181,15 @@ class AuthHandler(BaseHandler):
# better way to break the loop
account_handler = ModuleApi(hs, self)
self.password_providers = [
module(config=config, account_handler=account_handler)
for module, config in hs.config.password_providers
]
self.password_providers = []
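# Instantiate the providers one at a time so that a failing module is
# identified in the logs before startup aborts.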
for module, config in hs.config.password_providers:
try:
self.password_providers.append(
module(config=config, account_handler=account_handler)
)
except Exception as e:
logger.error("Error while initializing %r: %s", module, e)
raise
logger.info("Extra password_providers: %r", self.password_providers)


@ -1138,6 +1138,9 @@ class EventCreationHandler:
if original_event.room_id != event.room_id:
raise SynapseError(400, "Cannot redact event from a different room")
if original_event.type == EventTypes.ServerACL:
raise AuthError(403, "Redacting server ACL events is not permitted")
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self.auth.compute_auth_events(
event, prev_state_ids, for_verification=True


@ -189,7 +189,9 @@ class ProfileHandler(BaseHandler):
)
if not isinstance(new_displayname, str):
raise SynapseError(400, "Invalid displayname")
raise SynapseError(
400, "'displayname' must be a string", errcode=Codes.INVALID_PARAM
)
if len(new_displayname) > MAX_DISPLAYNAME_LEN:
raise SynapseError(
@ -273,7 +275,9 @@ class ProfileHandler(BaseHandler):
)
if not isinstance(new_avatar_url, str):
raise SynapseError(400, "Invalid displayname")
raise SynapseError(
400, "'avatar_url' must be a string", errcode=Codes.INVALID_PARAM
)
if len(new_avatar_url) > MAX_AVATAR_URL_LEN:
raise SynapseError(


@ -1063,13 +1063,19 @@ def check_content_type_is_json(headers):
"""
c_type = headers.getRawHeaders(b"Content-Type")
if c_type is None:
raise RequestSendFailed(RuntimeError("No Content-Type header"), can_retry=False)
raise RequestSendFailed(
RuntimeError("No Content-Type header received from remote server"),
can_retry=False,
)
c_type = c_type[0].decode("ascii") # only the first header
val, options = cgi.parse_header(c_type)
if val != "application/json":
raise RequestSendFailed(
RuntimeError("Content-Type not application/json: was '%s'" % c_type),
RuntimeError(
"Remote server sent Content-Type header of '%s', not 'application/json'"
% c_type,
),
can_retry=False,
)


@ -502,6 +502,16 @@ build_info.labels(
last_ticked = time.time()
# 3PID send info
threepid_send_requests = Histogram(
"synapse_threepid_send_requests_with_tries",
documentation="Number of requests for a 3pid token by try count. Note if"
" there is a request with try count of 4, then there would have been one"
" each for 1, 2 and 3",
buckets=(1, 2, 3, 4, 5, 10),
labelnames=("type", "reason"),
)
class ReactorLastSeenMetric:
def collect(self):


@ -498,6 +498,30 @@ BASE_APPEND_UNDERRIDE_RULES = [
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
{
"rule_id": "global/underride/.im.vector.jitsi",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "im.vector.modular.widgets",
"_id": "_type_modular_widgets",
},
{
"kind": "event_match",
"key": "content.type",
"pattern": "jitsi",
"_id": "_content_type_jitsi",
},
{
"kind": "event_match",
"key": "state_key",
"pattern": "*",
"_id": "_is_state_event",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
]
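# For reference, the rule above matches widget state events shaped like
# the following (a hypothetical sketch; real widget events carry more
# fields):
#
#     {
#         "type": "im.vector.modular.widgets",
#         "state_key": "jitsi_widget_id",
#         "content": {"type": "jitsi", "url": "https://jitsi.example.org/MyConf"}
#     }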


@ -47,6 +47,7 @@ from synapse.rest.admin.rooms import (
ShutdownRoomRestServlet,
)
from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
from synapse.rest.admin.statistics import UserMediaStatisticsRestServlet
from synapse.rest.admin.users import (
AccountValidityRenewServlet,
DeactivateAccountRestServlet,
@ -227,6 +228,7 @@ def register_servlets(hs, http_server):
DeviceRestServlet(hs).register(http_server)
DevicesRestServlet(hs).register(http_server)
DeleteDevicesRestServlet(hs).register(http_server)
UserMediaStatisticsRestServlet(hs).register(http_server)
EventReportDetailRestServlet(hs).register(http_server)
EventReportsRestServlet(hs).register(http_server)
PushersRestServlet(hs).register(http_server)


@ -0,0 +1,122 @@
# -*- coding: utf-8 -*-
# Copyright 2020 Dirk Klimpel
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import Codes, SynapseError
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
from synapse.storage.databases.main.stats import UserSortOrder
from synapse.types import JsonDict
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class UserMediaStatisticsRestServlet(RestServlet):
"""
Get statistics about uploaded media by users.
"""
PATTERNS = admin_patterns("/statistics/users/media$")
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
order_by = parse_string(
request, "order_by", default=UserSortOrder.USER_ID.value
)
if order_by not in (
UserSortOrder.MEDIA_LENGTH.value,
UserSortOrder.MEDIA_COUNT.value,
UserSortOrder.USER_ID.value,
UserSortOrder.DISPLAYNAME.value,
):
raise SynapseError(
400,
"Unknown value for order_by: %s" % (order_by,),
errcode=Codes.INVALID_PARAM,
)
start = parse_integer(request, "from", default=0)
if start < 0:
raise SynapseError(
400,
"Query parameter from must be a string representing a positive integer.",
errcode=Codes.INVALID_PARAM,
)
limit = parse_integer(request, "limit", default=100)
if limit < 0:
raise SynapseError(
400,
"Query parameter limit must be a string representing a positive integer.",
errcode=Codes.INVALID_PARAM,
)
from_ts = parse_integer(request, "from_ts", default=0)
if from_ts < 0:
raise SynapseError(
400,
"Query parameter from_ts must be a string representing a positive integer.",
errcode=Codes.INVALID_PARAM,
)
until_ts = parse_integer(request, "until_ts")
if until_ts is not None:
if until_ts < 0:
raise SynapseError(
400,
"Query parameter until_ts must be a string representing a positive integer.",
errcode=Codes.INVALID_PARAM,
)
if until_ts <= from_ts:
raise SynapseError(
400,
"Query parameter until_ts must be greater than from_ts.",
errcode=Codes.INVALID_PARAM,
)
search_term = parse_string(request, "search_term")
if search_term == "":
raise SynapseError(
400,
"Query parameter search_term cannot be an empty string.",
errcode=Codes.INVALID_PARAM,
)
direction = parse_string(request, "dir", default="f")
if direction not in ("f", "b"):
raise SynapseError(
400, "Unknown direction: %s" % (direction,), errcode=Codes.INVALID_PARAM
)
users_media, total = await self.store.get_users_media_usage_paginate(
start, limit, from_ts, until_ts, order_by, direction, search_term
)
ret = {"users": users_media, "total": total}
if (start + limit) < total:
ret["next_token"] = start + len(users_media)
return 200, ret


@ -412,6 +412,7 @@ class UserRegisterServlet(RestServlet):
admin = body.get("admin", None)
user_type = body.get("user_type", None)
displayname = body.get("displayname", None)
if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
raise SynapseError(400, "Invalid user type")
@ -448,6 +449,7 @@ class UserRegisterServlet(RestServlet):
password_hash=password_hash,
admin=bool(admin),
user_type=user_type,
default_display_name=displayname,
by_admin=True,
)


@ -38,6 +38,7 @@ from synapse.http.servlet import (
parse_json_object_from_request,
parse_string,
)
from synapse.metrics import threepid_send_requests
from synapse.push.mailer import Mailer
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.stringutils import assert_valid_client_secret, random_string
@ -143,6 +144,10 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
# Wrap the session id in a JSON object
ret = {"sid": sid}
threepid_send_requests.labels(type="email", reason="password_reset").observe(
send_attempt
)
return 200, ret
@ -411,6 +416,10 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
# Wrap the session id in a JSON object
ret = {"sid": sid}
threepid_send_requests.labels(type="email", reason="add_threepid").observe(
send_attempt
)
return 200, ret
@ -481,6 +490,10 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
next_link,
)
threepid_send_requests.labels(type="msisdn", reason="add_threepid").observe(
send_attempt
)
return 200, ret


@ -45,6 +45,7 @@ from synapse.http.servlet import (
parse_json_object_from_request,
parse_string,
)
from synapse.metrics import threepid_send_requests
from synapse.push.mailer import Mailer
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.ratelimitutils import FederationRateLimiter
@ -163,6 +164,10 @@ class EmailRegisterRequestTokenRestServlet(RestServlet):
# Wrap the session id in a JSON object
ret = {"sid": sid}
threepid_send_requests.labels(type="email", reason="register").observe(
send_attempt
)
return 200, ret
@ -234,6 +239,10 @@ class MsisdnRegisterRequestTokenRestServlet(RestServlet):
next_link,
)
threepid_send_requests.labels(type="msisdn", reason="register").observe(
send_attempt
)
return 200, ret


@ -119,7 +119,7 @@ class ServerNoticesManager:
# manages to invite the system user to a room, that doesn't make it
# the server notices room.
user_ids = await self._store.get_users_in_room(room.room_id)
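# Only treat a room as the server notices room if it contains no third
# parties: the system user plus, at most, the recipient.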
if self.server_notices_mxid in user_ids:
if len(user_ids) <= 2 and self.server_notices_mxid in user_ids:
# we found a room which our user shares with the system notice
# user
logger.info(


@ -88,13 +88,18 @@ def make_pool(
"""Get the connection pool for the database.
"""
# By default enable `cp_reconnect`. We need to fiddle with db_args in case
# someone has explicitly set `cp_reconnect`.
db_args = dict(db_config.config.get("args", {}))
db_args.setdefault("cp_reconnect", True)
return adbapi.ConnectionPool(
db_config.config["name"],
cp_reactor=reactor,
cp_openfun=lambda conn: engine.on_new_connection(
LoggingDatabaseConnection(conn, engine, "on_new_connection")
),
**db_config.config.get("args", {}),
**db_args,
)


@ -24,7 +24,7 @@ from twisted.enterprise.adbapi import Connection
from synapse.logging.opentracing import log_kv, set_tag, trace
from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage.database import make_in_list_sql_clause
from synapse.storage.database import DatabasePool, make_in_list_sql_clause
from synapse.storage.types import Cursor
from synapse.types import JsonDict
from synapse.util import json_encoder
@ -33,6 +33,7 @@ from synapse.util.iterutils import batch_iter
if TYPE_CHECKING:
from synapse.handlers.e2e_keys import SignatureListItem
from synapse.server import HomeServer
@attr.s(slots=True)
@ -47,7 +48,20 @@ class DeviceKeyLookupResult:
keys = attr.ib(type=Optional[JsonDict])
class EndToEndKeyWorkerStore(SQLBaseStore):
class EndToEndKeyBackgroundStore(SQLBaseStore):
def __init__(self, database: DatabasePool, db_conn: Connection, hs: "HomeServer"):
super().__init__(database, db_conn, hs)
self.db_pool.updates.register_background_index_update(
"e2e_cross_signing_keys_idx",
index_name="e2e_cross_signing_keys_stream_idx",
table="e2e_cross_signing_keys",
columns=["stream_id"],
unique=True,
)
class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore):
async def get_e2e_device_keys_for_federation_query(
self, user_id: str
) -> Tuple[int, List[JsonDict]]:


@ -26,6 +26,7 @@ from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.databases.main.signatures import SignatureWorkerStore
from synapse.types import Collection
from synapse.util.caches.descriptors import cached
from synapse.util.caches.lrucache import LruCache
from synapse.util.iterutils import batch_iter
logger = logging.getLogger(__name__)
@ -40,6 +41,11 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
self._delete_old_forward_extrem_cache, 60 * 60 * 1000
)
# Cache of event ID to list of auth event IDs and their depths.
self._event_auth_cache = LruCache(
500000, "_event_auth_cache", size_callback=len
) # type: LruCache[str, List[Tuple[str, int]]]
async def get_auth_chain(
self, event_ids: Collection[str], include_given: bool = False
) -> List[EventBase]:
@ -84,17 +90,45 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
else:
results = set()
base_sql = "SELECT DISTINCT auth_id FROM event_auth WHERE "
# We pull out the depth simply so that we can populate the
# `_event_auth_cache` cache.
base_sql = """
SELECT a.event_id, auth_id, depth
FROM event_auth AS a
INNER JOIN events AS e ON (e.event_id = a.auth_id)
WHERE
"""
front = set(event_ids)
while front:
new_front = set()
for chunk in batch_iter(front, 100):
clause, args = make_in_list_sql_clause(
txn.database_engine, "event_id", chunk
)
txn.execute(base_sql + clause, args)
new_front.update(r[0] for r in txn)
# Pull the auth events either from the cache or DB.
to_fetch = [] # Event IDs to fetch from DB # type: List[str]
for event_id in chunk:
res = self._event_auth_cache.get(event_id)
if res is None:
to_fetch.append(event_id)
else:
new_front.update(auth_id for auth_id, depth in res)
if to_fetch:
clause, args = make_in_list_sql_clause(
txn.database_engine, "a.event_id", to_fetch
)
txn.execute(base_sql + clause, args)
# Note we need to batch up the results by event ID before
# adding to the cache.
to_cache = {}
for event_id, auth_event_id, auth_event_depth in txn:
to_cache.setdefault(event_id, []).append(
(auth_event_id, auth_event_depth)
)
new_front.add(auth_event_id)
for event_id, auth_events in to_cache.items():
self._event_auth_cache.set(event_id, auth_events)
new_front -= results
@ -213,14 +247,38 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
break
# Fetch the auth events and their depths of the N last events we're
# currently walking
# currently walking, either from cache or DB.
search, chunk = search[:-100], search[-100:]
clause, args = make_in_list_sql_clause(
txn.database_engine, "a.event_id", [e_id for _, e_id in chunk]
)
txn.execute(base_sql + clause, args)
for event_id, auth_event_id, auth_event_depth in txn:
found = [] # Results found # type: List[Tuple[str, str, int]]
to_fetch = [] # Event IDs to fetch from DB # type: List[str]
for _, event_id in chunk:
res = self._event_auth_cache.get(event_id)
if res is None:
to_fetch.append(event_id)
else:
found.extend((event_id, auth_id, depth) for auth_id, depth in res)
if to_fetch:
clause, args = make_in_list_sql_clause(
txn.database_engine, "a.event_id", to_fetch
)
txn.execute(base_sql + clause, args)
# We parse the results and add them to the `found` set and the
# cache (note we need to batch up the results by event ID before
# adding to the cache).
to_cache = {}
for event_id, auth_event_id, auth_event_depth in txn:
to_cache.setdefault(event_id, []).append(
(auth_event_id, auth_event_depth)
)
found.append((event_id, auth_event_id, auth_event_depth))
for event_id, auth_events in to_cache.items():
self._event_auth_cache.set(event_id, auth_events)
for event_id, auth_event_id, auth_event_depth in found:
event_to_auth_events.setdefault(event_id, set()).add(auth_event_id)
sets = event_to_missing_sets.get(auth_event_id)


@ -0,0 +1,17 @@
/* Copyright 2020 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
INSERT INTO background_updates (update_name, progress_json) VALUES
('e2e_cross_signing_keys_idx', '{}');


@ -16,15 +16,18 @@
import logging
from collections import Counter
from enum import Enum
from itertools import chain
from typing import Any, Dict, List, Optional, Tuple
from twisted.internet.defer import DeferredLock
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import StoreError
from synapse.storage.database import DatabasePool
from synapse.storage.databases.main.state_deltas import StateDeltasStore
from synapse.storage.engines import PostgresEngine
from synapse.types import JsonDict
from synapse.util.caches.descriptors import cached
logger = logging.getLogger(__name__)
@ -59,6 +62,23 @@ TYPE_TO_TABLE = {"room": ("room_stats", "room_id"), "user": ("user_stats", "user
TYPE_TO_ORIGIN_TABLE = {"room": ("rooms", "room_id"), "user": ("users", "name")}
class UserSortOrder(Enum):
"""
Enum to define the sorting method used when returning users
with get_users_media_usage_paginate
MEDIA_LENGTH = ordered by size of uploaded media. Smallest to largest.
MEDIA_COUNT = ordered by number of uploaded media. Smallest to largest.
USER_ID = ordered alphabetically by `user_id`.
DISPLAYNAME = ordered alphabetically by `displayname`
"""
MEDIA_LENGTH = "media_length"
MEDIA_COUNT = "media_count"
USER_ID = "user_id"
DISPLAYNAME = "displayname"
class StatsStore(StateDeltasStore):
def __init__(self, database: DatabasePool, db_conn, hs):
super().__init__(database, db_conn, hs)
@ -882,3 +902,110 @@ class StatsStore(StateDeltasStore):
complete_with_stream_id=pos,
absolute_field_overrides={"joined_rooms": joined_rooms},
)
async def get_users_media_usage_paginate(
self,
start: int,
limit: int,
from_ts: Optional[int] = None,
until_ts: Optional[int] = None,
order_by: Optional[UserSortOrder] = UserSortOrder.USER_ID.value,
direction: Optional[str] = "f",
search_term: Optional[str] = None,
) -> Tuple[List[JsonDict], Dict[str, int]]:
"""Function to retrieve a paginated list of users and their uploaded local media
(size and number). This will return a json list of users and the
total number of users matching the filter criteria.
Args:
start: offset to begin the query from
limit: number of rows to retrieve
from_ts: request only media that are created later than this timestamp (ms)
until_ts: request only media that are created earlier than this timestamp (ms)
order_by: the sort order of the returned list
direction: sort ascending or descending
search_term: a string to filter user names by
Returns:
A list of user dicts and an integer representing the total number of
users that exist given this query
"""
def get_users_media_usage_paginate_txn(txn):
filters = []
args = [self.hs.config.server_name]
if search_term:
filters.append("(lmr.user_id LIKE ? OR displayname LIKE ?)")
args.extend(["@%" + search_term + "%:%", "%" + search_term + "%"])
if from_ts:
filters.append("created_ts >= ?")
args.extend([from_ts])
if until_ts:
filters.append("created_ts <= ?")
args.extend([until_ts])
# Set ordering
if UserSortOrder(order_by) == UserSortOrder.MEDIA_LENGTH:
order_by_column = "media_length"
elif UserSortOrder(order_by) == UserSortOrder.MEDIA_COUNT:
order_by_column = "media_count"
elif UserSortOrder(order_by) == UserSortOrder.USER_ID:
order_by_column = "lmr.user_id"
elif UserSortOrder(order_by) == UserSortOrder.DISPLAYNAME:
order_by_column = "displayname"
else:
raise StoreError(
500, "Incorrect value for order_by provided: %s" % order_by
)
if direction == "b":
order = "DESC"
else:
order = "ASC"
where_clause = "WHERE " + " AND ".join(filters) if len(filters) > 0 else ""
sql_base = """
FROM local_media_repository as lmr
LEFT JOIN profiles AS p ON lmr.user_id = '@' || p.user_id || ':' || ?
{}
GROUP BY lmr.user_id, displayname
""".format(
where_clause
)
# SQLite does not support SELECT COUNT(*) OVER()
sql = """
SELECT COUNT(*) FROM (
SELECT lmr.user_id
{sql_base}
) AS count_user_ids
""".format(
sql_base=sql_base,
)
txn.execute(sql, args)
count = txn.fetchone()[0]
sql = """
SELECT
lmr.user_id,
displayname,
COUNT(lmr.user_id) as media_count,
SUM(media_length) as media_length
{sql_base}
ORDER BY {order_by_column} {order}
LIMIT ? OFFSET ?
""".format(
sql_base=sql_base, order_by_column=order_by_column, order=order,
)
args += [limit, start]
txn.execute(sql, args)
users = self.db_pool.cursor_to_dict(txn)
return users, count
return await self.db_pool.runInteraction(
"get_users_media_usage_paginate_txn", get_users_media_usage_paginate_txn
)


@ -154,3 +154,60 @@ class EventCreationTestCase(unittest.HomeserverTestCase):
# Check that we've deduplicated the events.
self.assertEqual(len(events), 2)
self.assertEqual(events[0].event_id, events[1].event_id)
class ServerAclValidationTestCase(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets,
login.register_servlets,
room.register_servlets,
]
def prepare(self, reactor, clock, hs):
self.user_id = self.register_user("tester", "foobar")
self.access_token = self.login("tester", "foobar")
self.room_id = self.helper.create_room_as(self.user_id, tok=self.access_token)
def test_allow_server_acl(self):
"""Test that sending an ACL that blocks everyone but ourselves works.
"""
self.helper.send_state(
self.room_id,
EventTypes.ServerACL,
body={"allow": [self.hs.hostname]},
tok=self.access_token,
expect_code=200,
)
def test_deny_server_acl_block_ourselves(self):
"""Test that sending an ACL that blocks ourselves does not work.
"""
self.helper.send_state(
self.room_id,
EventTypes.ServerACL,
body={},
tok=self.access_token,
expect_code=400,
)
def test_deny_redact_server_acl(self):
"""Test that attempting to redact an ACL is blocked.
"""
body = self.helper.send_state(
self.room_id,
EventTypes.ServerACL,
body={"allow": [self.hs.hostname]},
tok=self.access_token,
expect_code=200,
)
event_id = body["event_id"]
# Redaction of event should fail.
path = "/_matrix/client/r0/rooms/%s/redact/%s" % (self.room_id, event_id)
request, channel = self.make_request(
"POST", path, content={}, access_token=self.access_token
)
self.render(request)
self.assertEqual(int(channel.result["code"]), 403)


@ -531,40 +531,7 @@ class DeleteRoomTestCase(unittest.HomeserverTestCase):
def _is_purged(self, room_id):
"""Test that the following tables have been purged of all rows related to the room.
"""
for table in (
"current_state_events",
"event_backward_extremities",
"event_forward_extremities",
"event_json",
"event_push_actions",
"event_search",
"events",
"group_rooms",
"public_room_list_stream",
"receipts_graph",
"receipts_linearized",
"room_aliases",
"room_depth",
"room_memberships",
"room_stats_state",
"room_stats_current",
"room_stats_historical",
"room_stats_earliest_token",
"rooms",
"stream_ordering_to_exterm",
"users_in_public_rooms",
"users_who_share_private_rooms",
"appservice_room_list",
"e2e_room_keys",
"event_push_summary",
"pusher_throttle",
"group_summary_rooms",
"local_invites",
"room_account_data",
"room_tags",
# "state_groups", # Current impl leaves orphaned state groups around.
"state_groups_state",
):
for table in PURGE_TABLES:
count = self.get_success(
self.store.db_pool.simple_select_one_onecol(
table=table,
@ -633,39 +600,7 @@ class PurgeRoomTestCase(unittest.HomeserverTestCase):
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
# Test that the following tables have been purged of all rows related to the room.
for table in (
"current_state_events",
"event_backward_extremities",
"event_forward_extremities",
"event_json",
"event_push_actions",
"event_search",
"events",
"group_rooms",
"public_room_list_stream",
"receipts_graph",
"receipts_linearized",
"room_aliases",
"room_depth",
"room_memberships",
"room_stats_state",
"room_stats_current",
"room_stats_historical",
"room_stats_earliest_token",
"rooms",
"stream_ordering_to_exterm",
"users_in_public_rooms",
"users_who_share_private_rooms",
"appservice_room_list",
"e2e_room_keys",
"event_push_summary",
"pusher_throttle",
"group_summary_rooms",
"room_account_data",
"room_tags",
# "state_groups", # Current impl leaves orphaned state groups around.
"state_groups_state",
):
for table in PURGE_TABLES:
count = self.get_success(
self.store.db_pool.simple_select_one_onecol(
table=table,
@ -1500,3 +1435,39 @@ class JoinAliasRoomTestCase(unittest.HomeserverTestCase):
self.render(request)
self.assertEquals(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])
PURGE_TABLES = [
"current_state_events",
"event_backward_extremities",
"event_forward_extremities",
"event_json",
"event_push_actions",
"event_search",
"events",
"group_rooms",
"public_room_list_stream",
"receipts_graph",
"receipts_linearized",
"room_aliases",
"room_depth",
"room_memberships",
"room_stats_state",
"room_stats_current",
"room_stats_historical",
"room_stats_earliest_token",
"rooms",
"stream_ordering_to_exterm",
"users_in_public_rooms",
"users_who_share_private_rooms",
"appservice_room_list",
"e2e_room_keys",
"event_push_summary",
"pusher_throttle",
"group_summary_rooms",
"local_invites",
"room_account_data",
"room_tags",
# "state_groups", # Current impl leaves orphaned state groups around.
"state_groups_state",
]


@ -0,0 +1,485 @@
# -*- coding: utf-8 -*-
# Copyright 2020 Dirk Klimpel
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from binascii import unhexlify
from typing import Any, Dict, List, Optional
import synapse.rest.admin
from synapse.api.errors import Codes
from synapse.rest.client.v1 import login
from tests import unittest
class UserMediaStatisticsTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
]
def prepare(self, reactor, clock, hs):
self.store = hs.get_datastore()
self.media_repo = hs.get_media_repository_resource()
self.admin_user = self.register_user("admin", "pass", admin=True)
self.admin_user_tok = self.login("admin", "pass")
self.other_user = self.register_user("user", "pass")
self.other_user_tok = self.login("user", "pass")
self.url = "/_synapse/admin/v1/statistics/users/media"
def test_no_auth(self):
"""
Try to list users without authentication.
"""
request, channel = self.make_request("GET", self.url, b"{}")
self.render(request)
self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])
def test_requester_is_no_admin(self):
"""
If the user is not a server admin, an error 403 is returned.
"""
request, channel = self.make_request(
"GET", self.url, json.dumps({}), access_token=self.other_user_tok,
)
self.render(request)
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
def test_invalid_parameter(self):
"""
If parameters are invalid, an error is returned.
"""
# unknown order_by
request, channel = self.make_request(
"GET", self.url + "?order_by=bar", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# negative from
request, channel = self.make_request(
"GET", self.url + "?from=-5", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# negative limit
request, channel = self.make_request(
"GET", self.url + "?limit=-5", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# negative from_ts
request, channel = self.make_request(
"GET", self.url + "?from_ts=-1234", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# negative until_ts
request, channel = self.make_request(
"GET", self.url + "?until_ts=-1234", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# until_ts smaller than from_ts
request, channel = self.make_request(
"GET",
self.url + "?from_ts=10&until_ts=5",
access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# empty search term
request, channel = self.make_request(
"GET", self.url + "?search_term=", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# invalid search order
request, channel = self.make_request(
"GET", self.url + "?dir=bar", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
def test_limit(self):
"""
Testing list of media with limit
"""
self._create_users_with_media(10, 2)
request, channel = self.make_request(
"GET", self.url + "?limit=5", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 10)
self.assertEqual(len(channel.json_body["users"]), 5)
self.assertEqual(channel.json_body["next_token"], 5)
self._check_fields(channel.json_body["users"])
def test_from(self):
"""
Testing list of media with a defined starting point (from)
"""
self._create_users_with_media(20, 2)
request, channel = self.make_request(
"GET", self.url + "?from=5", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 20)
self.assertEqual(len(channel.json_body["users"]), 15)
self.assertNotIn("next_token", channel.json_body)
self._check_fields(channel.json_body["users"])
def test_limit_and_from(self):
"""
Testing list of media with a defined starting point and limit
"""
self._create_users_with_media(20, 2)
request, channel = self.make_request(
"GET", self.url + "?from=5&limit=10", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 20)
self.assertEqual(channel.json_body["next_token"], 15)
self.assertEqual(len(channel.json_body["users"]), 10)
self._check_fields(channel.json_body["users"])
def test_next_token(self):
"""
Testing that `next_token` appears at the right place
"""
number_users = 20
self._create_users_with_media(number_users, 3)
# `next_token` does not appear
# Number of results is the number of entries
request, channel = self.make_request(
"GET", self.url + "?limit=20", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), number_users)
self.assertNotIn("next_token", channel.json_body)
# `next_token` does not appear
# Number of max results is larger than the number of entries
request, channel = self.make_request(
"GET", self.url + "?limit=21", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), number_users)
self.assertNotIn("next_token", channel.json_body)
# `next_token` does appear
# Number of max results is smaller than the number of entries
request, channel = self.make_request(
"GET", self.url + "?limit=19", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), 19)
self.assertEqual(channel.json_body["next_token"], 19)
# Set `from` to the value of `next_token` to request the remaining entries
# Check `next_token` does not appear
request, channel = self.make_request(
"GET", self.url + "?from=19", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), 1)
self.assertNotIn("next_token", channel.json_body)
def test_no_media(self):
"""
Tests that a normal lookup for statistics is successful
if users have created no media
"""
request, channel = self.make_request(
"GET", self.url, access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(0, channel.json_body["total"])
self.assertEqual(0, len(channel.json_body["users"]))
def test_order_by(self):
"""
Testing order list with parameter `order_by`
"""
# create users
self.register_user("user_a", "pass", displayname="UserZ")
userA_tok = self.login("user_a", "pass")
self._create_media(userA_tok, 1)
self.register_user("user_b", "pass", displayname="UserY")
userB_tok = self.login("user_b", "pass")
self._create_media(userB_tok, 3)
self.register_user("user_c", "pass", displayname="UserX")
userC_tok = self.login("user_c", "pass")
self._create_media(userC_tok, 2)
# order by user_id
self._order_test("user_id", ["@user_a:test", "@user_b:test", "@user_c:test"])
self._order_test(
"user_id", ["@user_a:test", "@user_b:test", "@user_c:test"], "f",
)
self._order_test(
"user_id", ["@user_c:test", "@user_b:test", "@user_a:test"], "b",
)
# order by displayname
self._order_test(
"displayname", ["@user_c:test", "@user_b:test", "@user_a:test"]
)
self._order_test(
"displayname", ["@user_c:test", "@user_b:test", "@user_a:test"], "f",
)
self._order_test(
"displayname", ["@user_a:test", "@user_b:test", "@user_c:test"], "b",
)
# order by media_length
self._order_test(
"media_length", ["@user_a:test", "@user_c:test", "@user_b:test"],
)
self._order_test(
"media_length", ["@user_a:test", "@user_c:test", "@user_b:test"], "f",
)
self._order_test(
"media_length", ["@user_b:test", "@user_c:test", "@user_a:test"], "b",
)
# order by media_count
self._order_test(
"media_count", ["@user_a:test", "@user_c:test", "@user_b:test"],
)
self._order_test(
"media_count", ["@user_a:test", "@user_c:test", "@user_b:test"], "f",
)
self._order_test(
"media_count", ["@user_b:test", "@user_c:test", "@user_a:test"], "b",
)

    def test_from_until_ts(self):
        """
        Test filtering by time with the `from_ts` and `until_ts` parameters
        """
# create media earlier than `ts1` to ensure that `from_ts` is working
self._create_media(self.other_user_tok, 3)
self.pump(1)
ts1 = self.clock.time_msec()
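        # `ts1`/`ts2` are Unix timestamps in milliseconds (`clock.time_msec`).
        # The test builds the timeline: 3 uploads, then `ts1`, then 3 uploads,
        # then `ts2`, then 3 uploads.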
# list all media when filter is not set
request, channel = self.make_request(
"GET", self.url, access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["users"][0]["media_count"], 3)
        # filter for media created at `ts1` or later; all media so far
        # predates `ts1`, so the result is 0
request, channel = self.make_request(
"GET", self.url + "?from_ts=%s" % (ts1,), access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 0)
self._create_media(self.other_user_tok, 3)
self.pump(1)
ts2 = self.clock.time_msec()
# create media after `ts2` to ensure that `until_ts` is working
self._create_media(self.other_user_tok, 3)
# filter media between `ts1` and `ts2`
request, channel = self.make_request(
"GET",
self.url + "?from_ts=%s&until_ts=%s" % (ts1, ts2),
access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["users"][0]["media_count"], 3)
        # filter for media created at `ts2` or earlier
request, channel = self.make_request(
"GET", self.url + "?until_ts=%s" % (ts2,), access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["users"][0]["media_count"], 6)

    def test_search_term(self):
        """
        Test filtering with the `search_term` parameter, which matches on
        both `user_id` and `displayname`
        """
        self._create_users_with_media(20, 1)
        # without a filter, all users are returned
request, channel = self.make_request(
"GET", self.url, access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 20)
        # filter by `user_id`: matches `foo_user_1` plus `foo_user_10` to
        # `foo_user_19`, i.e. 11 users
request, channel = self.make_request(
"GET",
self.url + "?search_term=foo_user_1",
access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 11)
        # filter by `displayname`: matches exactly one user
request, channel = self.make_request(
"GET",
self.url + "?search_term=bar_user_10",
access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["users"][0]["displayname"], "bar_user_10")
self.assertEqual(channel.json_body["total"], 1)
        # a search term matching no users yields an empty result
request, channel = self.make_request(
"GET", self.url + "?search_term=foobar", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 0)

    def _create_users_with_media(self, number_users: int, media_per_user: int):
        """
        Create a number of users, each with a number of media uploads.
        Users are named `foo_user_<i>` with displayname `bar_user_<i>`,
        a naming scheme that `test_search_term` relies on.
        Args:
            number_users: Number of users to be created
            media_per_user: Number of media items to be created for each user
        """
for i in range(number_users):
self.register_user("foo_user_%s" % i, "pass", displayname="bar_user_%s" % i)
user_tok = self.login("foo_user_%s" % i, "pass")
self._create_media(user_tok, media_per_user)

    def _create_media(self, user_token: str, number_media: int):
"""
Create a number of media for a specific user
Args:
user_token: Access token of the user
number_media: Number of media to be created for the user
"""
upload_resource = self.media_repo.children[b"upload"]
for i in range(number_media):
            # a minimal 1x1 RGBA PNG; 67 bytes in total
image_data = unhexlify(
b"89504e470d0a1a0a0000000d4948445200000001000000010806"
b"0000001f15c4890000000a49444154789c63000100000500010d"
b"0a2db40000000049454e44ae426082"
)
            # upload the media via the media repository's upload resource
self.helper.upload_media(
upload_resource, image_data, tok=user_token, expect_code=200
)

    def _check_fields(self, content: List[Dict[str, Any]]):
        """Checks that all expected attributes are present in each entry
        Args:
            content: the list of user entries to check
        """
for c in content:
self.assertIn("user_id", c)
self.assertIn("displayname", c)
self.assertIn("media_count", c)
self.assertIn("media_length", c)

    def _order_test(
        self, order_type: str, expected_user_list: List[str], dir: Optional[str] = None
    ):
        """Request the list of users in a given order and assert that the
        order matches what we expect
        Args:
            order_type: The type of ordering to give the server
            expected_user_list: The list of user_ids in the order we expect to get
                back from the server
            dir: The direction of ordering to give the server ("f" for
                forwards, "b" for backwards)
        """
url = self.url + "?order_by=%s" % (order_type,)
if dir is not None and dir in ("b", "f"):
url += "&dir=%s" % (dir,)
request, channel = self.make_request(
"GET", url.encode("ascii"), access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], len(expected_user_list))
returned_order = [row["user_id"] for row in channel.json_body["users"]]
self.assertListEqual(expected_user_list, returned_order)
self._check_fields(channel.json_body["users"])
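
# For reference only (not part of this change): a sketch of how the endpoint
# exercised above might be queried against a live homeserver. It assumes that
# `self.url` in this test case resolves to
# "/_synapse/admin/v1/statistics/users/media"; the base URL and access token
# are placeholders.
#
#     import requests
#
#     resp = requests.get(
#         "https://example.com/_synapse/admin/v1/statistics/users/media",
#         params={"order_by": "media_count", "dir": "b", "limit": 10},
#         headers={"Authorization": "Bearer <admin access token>"},
#     )
#     resp.raise_for_status()
#     for user in resp.json()["users"]:
#         print(user["user_id"], user["media_count"], user["media_length"])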

View File

@ -24,7 +24,7 @@ from mock import Mock
import synapse.rest.admin
from synapse.api.constants import UserTypes
from synapse.api.errors import Codes, HttpResponseException, ResourceLimitError
from synapse.rest.client.v1 import login, room
from synapse.rest.client.v1 import login, profile, room
from synapse.rest.client.v2_alpha import sync
from tests import unittest
@ -34,7 +34,10 @@ from tests.unittest import override_config
class UserRegisterTestCase(unittest.HomeserverTestCase):
servlets = [synapse.rest.admin.register_servlets_for_client_rest_resource]
servlets = [
synapse.rest.admin.register_servlets_for_client_rest_resource,
profile.register_servlets,
]
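
    # `profile.register_servlets` is included so that the displayname tests
    # below can read back /profile/<user_id>/displayname
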
def make_homeserver(self, reactor, clock):
@ -325,6 +328,120 @@ class UserRegisterTestCase(unittest.HomeserverTestCase):
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("Invalid user type", channel.json_body["error"])

    def test_displayname(self):
        """
        Test that the displayname of a new user is set correctly
        """
        # no displayname provided: it defaults to the localpart
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob1\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()
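        # the MAC is an HMAC-SHA1, keyed with the registration shared secret
        # ("shared" in these tests), over the NUL-separated nonce, username,
        # password and admin flag ("admin" or "notadmin")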
body = json.dumps(
{"nonce": nonce, "username": "bob1", "password": "abc123", "mac": want_mac}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob1:test", channel.json_body["user_id"])
request, channel = self.make_request("GET", "/profile/@bob1:test/displayname")
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("bob1", channel.json_body["displayname"])
        # displayname is None: also defaults to the localpart
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob2\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()
body = json.dumps(
{
"nonce": nonce,
"username": "bob2",
"displayname": None,
"password": "abc123",
"mac": want_mac,
}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob2:test", channel.json_body["user_id"])
request, channel = self.make_request("GET", "/profile/@bob2:test/displayname")
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("bob2", channel.json_body["displayname"])
        # displayname is an empty string: no displayname is set at all
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob3\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()
body = json.dumps(
{
"nonce": nonce,
"username": "bob3",
"displayname": "",
"password": "abc123",
"mac": want_mac,
}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob3:test", channel.json_body["user_id"])
request, channel = self.make_request("GET", "/profile/@bob3:test/displayname")
self.render(request)
self.assertEqual(404, int(channel.result["code"]), msg=channel.result["body"])
        # an explicit displayname is used as-is
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob4\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()
body = json.dumps(
{
"nonce": nonce,
"username": "bob4",
"displayname": "Bob's Name",
"password": "abc123",
"mac": want_mac,
}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob4:test", channel.json_body["user_id"])
request, channel = self.make_request("GET", "/profile/@bob4:test/displayname")
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("Bob's Name", channel.json_body["displayname"])
@override_config(
{"limit_usage_by_mau": True, "max_mau_value": 2, "mau_trial_days": 0}
)

View File

@ -546,18 +546,24 @@ class HomeserverTestCase(TestCase):
return result
def register_user(self, username, password, admin=False):
def register_user(
self,
username: str,
password: str,
admin: Optional[bool] = False,
displayname: Optional[str] = None,
) -> str:
"""
        Register a user. Requires the Admin API to be registered.
Args:
username (bytes/unicode): The user part of the new user.
password (bytes/unicode): The password of the new user.
admin (bool): Whether the user should be created as an admin
or not.
username: The user part of the new user.
password: The password of the new user.
admin: Whether the user should be created as an admin or not.
displayname: The displayname of the new user.
Returns:
The MXID of the new user (unicode).
The MXID of the new user.
"""
self.hs.config.registration_shared_secret = "shared"
@ -581,6 +587,7 @@ class HomeserverTestCase(TestCase):
{
"nonce": nonce,
"username": username,
"displayname": displayname,
"password": password,
"admin": admin,
"mac": want_mac,