Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes

pull/13598/head
Erik Johnston 2022-04-07 13:31:07 +01:00
commit c37cd85d8a
218 changed files with 6439 additions and 2454 deletions

View File

@ -4,8 +4,12 @@
# things to include
!docker
!synapse
!MANIFEST.in
!README.rst
!pyproject.toml
!poetry.lock
# TODO: remove these once we have moved over to using poetry-core in pyproject.toml
!MANIFEST.in
!setup.py
**/__pycache__

View File

@ -22,7 +22,7 @@ jobs:
- name: Setup mdbook
uses: peaceiris/actions-mdbook@4b5ef36b314c2599664ca107bb8c02412548d79d # v1.1.14
with:
mdbook-version: '0.4.9'
mdbook-version: '0.4.17'
- name: Build the documentation
# mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.

View File

@ -27,7 +27,6 @@ jobs:
- "check_codestyle"
- "check_isort"
- "mypy"
- "packaging"
steps:
- uses: actions/checkout@v2
@ -377,7 +376,7 @@ jobs:
# Run Complement
- run: |
set -o pipefail
go test -v -json -tags synapse_blacklist,msc2403,msc2716,msc3030 ./tests/... 2>&1 | gotestfmt
go test -v -json -tags synapse_blacklist,msc2716,msc3030 ./tests/... 2>&1 | gotestfmt
shell: bash
name: Run Complement Tests
env:

7
.gitignore vendored
View File

@ -15,6 +15,10 @@ _trial_temp*/
.DS_Store
__pycache__/
# We do want the poetry lockfile. TODO: is there a good reason for ignoring
# '*.lock' above? If not, let's nuke it.
!poetry.lock
# stuff that is likely to exist when you run a server locally
/*.db
/*.log
@ -30,6 +34,9 @@ __pycache__/
/media_store/
/uploads
# For direnv users
/.envrc
# IDEs
/.idea/
/.ropeproject/

View File

@ -0,0 +1 @@
Send device list changes to application services as specified by [MSC3202](https://github.com/matrix-org/matrix-spec-proposals/pull/3202), using unstable prefixes. The `msc3202_transaction_extensions` experimental homeserver config option must be enabled and `org.matrix.msc3202: true` must be present in the application service registration file for device list changes to be sent. The "left" field is currently always empty.

View File

@ -0,0 +1 @@
Optimise fetching large quantities of missing room state over federation.

1
changelog.d/12165.misc Normal file
View File

@ -0,0 +1 @@
Remove lingering unstable references to MSC2403 (knocking).

1
changelog.d/12191.misc Normal file
View File

@ -0,0 +1 @@
Avoid trying to calculate the state at outlier events.

1
changelog.d/12193.misc Normal file
View File

@ -0,0 +1 @@
Omit sending "offline" presence updates to application services after they are initially configured.

1
changelog.d/12209.misc Normal file
View File

@ -0,0 +1 @@
Switch to using a sequence to generate AS transaction IDs. Contributed by Nick Beeper. If running synapse with a dedicated appservice worker, this MUST be stopped before upgrading the main process and database.

View File

@ -0,0 +1 @@
Offload the `update_client_ip` background job from the main process to the background worker, when using Redis-based replication.

1
changelog.d/12267.misc Normal file
View File

@ -0,0 +1 @@
Add missing type hints for storage.

1
changelog.d/12271.doc Normal file
View File

@ -0,0 +1 @@
Clarify documentation for running SyTest against Synapse, including use of Postgres and worker mode.

View File

@ -0,0 +1 @@
Remove the unused and unstable `/aggregations` endpoint which was removed from [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675).

1
changelog.d/12295.misc Normal file
View File

@ -0,0 +1 @@
Move [MSC2654](https://github.com/matrix-org/matrix-doc/pull/2654) support behind an experimental configuration flag.

View File

@ -0,0 +1 @@
Add a module callback to react to new 3PID (email address, phone number) associations.

View File

@ -0,0 +1 @@
Add a configuration option to remove a specific set of rooms from sync responses.

1
changelog.d/12315.doc Normal file
View File

@ -0,0 +1 @@
Document the behaviour of `LoggingTransaction.call_after` and `LoggingTransaction.call_on_exception` methods when transactions are retried.

1
changelog.d/12316.misc Normal file
View File

@ -0,0 +1 @@
Avoid trying to calculate the state at outlier events.

1
changelog.d/12317.misc Normal file
View File

@ -0,0 +1 @@
Update docstrings to explain how to decipher live and historic pagination tokens.

1
changelog.d/12321.misc Normal file
View File

@ -0,0 +1 @@
Add ground work for speeding up device list updates for users in large numbers of rooms.

1
changelog.d/12326.misc Normal file
View File

@ -0,0 +1 @@
Fix typechecker problems exposed by signedjson 1.1.2.

View File

@ -0,0 +1 @@
Add a module callback to react to account data changes.

1
changelog.d/12329.bugfix Normal file
View File

@ -0,0 +1 @@
Fix non-member state events not resolving for historical events when used in [MSC2716](https://github.com/matrix-org/matrix-spec-proposals/pull/2716) `/batch_send` `state_events_at_start`.

1
changelog.d/12330.misc Normal file
View File

@ -0,0 +1 @@
Avoid trying to calculate the state at outlier events.

1
changelog.d/12331.doc Normal file
View File

@ -0,0 +1 @@
Update dead links in `check-newsfragment.sh` to point to the correct documentation URL.

1
changelog.d/12332.misc Normal file
View File

@ -0,0 +1 @@
Avoid trying to calculate the state at outlier events.

1
changelog.d/12333.bugfix Normal file
View File

@ -0,0 +1 @@
Fix a long-standing bug affecting URL previews that would generate a 500 response instead of a 403 if the previewed URL includes a port that isn't allowed by the relevant blacklist.

1
changelog.d/12334.misc Normal file
View File

@ -0,0 +1 @@
Remove the `tox` packaging job: it will be redundant once #11537 lands.

1
changelog.d/12335.misc Normal file
View File

@ -0,0 +1 @@
Ignore `.envrc` for `direnv` users.

1
changelog.d/12336.misc Normal file
View File

@ -0,0 +1 @@
Remove the (broadly unused, dev-only) dockerfile for pg tests.

1
changelog.d/12338.misc Normal file
View File

@ -0,0 +1 @@
Refactor relations code to remove an unnecessary class.

1
changelog.d/12339.doc Normal file
View File

@ -0,0 +1 @@
Upgrade the version of `mdbook` in CI to 0.4.17.

View File

@ -0,0 +1 @@
Allow setting user admin status using the module API. Contributed by Famedly.

1
changelog.d/12345.doc Normal file
View File

@ -0,0 +1 @@
Updates to the Room DAG concepts development document to clarify that we mark events as outliers because we don't have any state for them.

1
changelog.d/12346.misc Normal file
View File

@ -0,0 +1 @@
Remove redundant `get_success` calls in test code.

1
changelog.d/12347.misc Normal file
View File

@ -0,0 +1 @@
Add type annotations for `tests/unittest.py`.

1
changelog.d/12348.misc Normal file
View File

@ -0,0 +1 @@
Move single-use methods out of `TestCase`.

1
changelog.d/12349.misc Normal file
View File

@ -0,0 +1 @@
Remove broken and unused development scripts.

1
changelog.d/12350.bugfix Normal file
View File

@ -0,0 +1 @@
Default to `private` room visibility rather than `public` when a client does not specify one, according to spec.

1
changelog.d/12351.misc Normal file
View File

@ -0,0 +1 @@
Remove broken and unused development scripts.

1
changelog.d/12353.misc Normal file
View File

@ -0,0 +1 @@
Convert `Linearizer` tests from `inlineCallbacks` to async.

1
changelog.d/12354.misc Normal file
View File

@ -0,0 +1 @@
Update docstrings for `ReadWriteLock` tests.

1
changelog.d/12355.misc Normal file
View File

@ -0,0 +1 @@
Remove broken and unused development scripts.

1
changelog.d/12357.misc Normal file
View File

@ -0,0 +1 @@
Refactor `Linearizer`, convert methods to async and use an async context manager.

1
changelog.d/12358.misc Normal file
View File

@ -0,0 +1 @@
Fix a long-standing bug where `Linearizer`s could get stuck if a cancellation were to happen at the wrong time.

1
changelog.d/12364.bugfix Normal file
View File

@ -0,0 +1 @@
Fix a spec compliance issue where requests to the `/publicRooms` federation API would specify `limit` as a string.

1
changelog.d/12366.misc Normal file
View File

@ -0,0 +1 @@
Make `StreamToken.from_string` and `RoomStreamToken.parse` propagate cancellations instead of replacing them with `SynapseError`s.

1
changelog.d/12369.doc Normal file
View File

@ -0,0 +1 @@
Update the link to Redis pub/sub documentation in the workers documentation.

View File

@ -0,0 +1 @@
Update `/messages` to use historic pagination tokens if no `from` query parameter is given.

1
changelog.d/12371.misc Normal file
View File

@ -0,0 +1 @@
Add type hints to tests files.

View File

@ -0,0 +1 @@
Reduce overhead of restarting synchrotrons.

1
changelog.d/12376.misc Normal file
View File

@ -0,0 +1 @@
Allow specifying the Postgres database's port when running unit tests with Postgres.

1
changelog.d/12379.misc Normal file
View File

@ -0,0 +1 @@
Remove temporary pin of signedjson<=1.1.1 that was added in Synapse 1.56.0.

1
changelog.d/12380.misc Normal file
View File

@ -0,0 +1 @@
Add opentracing spans to calls to external cache.

1
changelog.d/12381.misc Normal file
View File

@ -0,0 +1 @@
Lay groundwork for using `poetry` to manage Synapse's dependencies.

1
changelog.d/12384.misc Normal file
View File

@ -0,0 +1 @@
Make missing `importlib_metadata` dependency explicit.

1
changelog.d/12385.docker Normal file
View File

@ -0,0 +1 @@
Bundle locked versions of dependencies into the Docker image.

1
changelog.d/12389.misc Normal file
View File

@ -0,0 +1 @@
Update type annotations for compatibility with prometheus_client 0.14.

1
changelog.d/12390.bugfix Normal file
View File

@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.49.0 which caused the `synapse_event_persisted_position` metric to have invalid values.

1
changelog.d/12392.doc Normal file
View File

@ -0,0 +1 @@
Remove documentation for converting a legacy structured logging configuration to the new format.

1
changelog.d/12398.misc Normal file
View File

@ -0,0 +1 @@
Remove support for the unstable identifiers specified in [MSC3288](https://github.com/matrix-org/matrix-doc/pull/3288).

1
changelog.d/12400.misc Normal file
View File

@ -0,0 +1 @@
Make missing `importlib_metadata` dependency explicit.

View File

@ -0,0 +1 @@
Support the stable `v1` endpoint for `/relations`, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675).

View File

@ -14,20 +14,61 @@
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.10 .
#
# Irritatingly, there is no blessed guide on how to distribute an application with its
# poetry-managed environment in a docker image. We have opted for
# `poetry export | pip install -r /dev/stdin`, but there are known bugs in
# `poetry export` whose fixes (scheduled for poetry 1.2) have yet to be released.
# In case we get bitten by those bugs in the future, the recommendations here might
# be useful:
# https://github.com/python-poetry/poetry/discussions/1879#discussioncomment-216865
# https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker?answertab=scoredesc
ARG PYTHON_VERSION=3.9
###
### Stage 0: builder
### Stage 0: generate requirements.txt
###
FROM docker.io/python:${PYTHON_VERSION}-slim as requirements
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
# Here we use it to set up a cache for apt (and below for pip), to improve
# rebuild speeds on slow connections.
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y git \
&& rm -rf /var/lib/apt/lists/*
# We install poetry in its own build stage to avoid its dependencies conflicting with
# synapse's dependencies.
# We use a specific commit from poetry's master branch instead of our usual 1.1.12,
# to incorporate fixes to some bugs in `poetry export`. This commit corresponds to
# https://github.com/python-poetry/poetry/pull/5156 and
# https://github.com/python-poetry/poetry/issues/5141 ;
# without it, we generate a requirements.txt with incorrect environment markers,
# which causes necessary packages to be omitted when we `pip install`.
#
# NB: In poetry 1.2 `poetry export` will be moved into a plugin; we'll need to also
# pip install poetry-plugin-export (https://github.com/python-poetry/poetry-plugin-export).
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --user git+https://github.com/python-poetry/poetry.git@fb13b3a676f476177f7937ffa480ee5cff9a90a5
WORKDIR /synapse
# Copy just what we need to run `poetry export`...
COPY pyproject.toml poetry.lock README.rst /synapse/
RUN /root/.local/bin/poetry export --extras all -o /synapse/requirements.txt
###
### Stage 1: builder
###
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
# install the OS build deps
#
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
# Here we use it to set up a cache for apt, to improve rebuild speeds on
# slow connections.
#
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
@ -45,30 +86,27 @@ RUN \
zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
# Copy just what we need to pip install
COPY MANIFEST.in README.rst setup.py /synapse/
COPY synapse/__init__.py /synapse/synapse/__init__.py
COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
# To speed up rebuilds, install all of the dependencies before we copy over
# the whole synapse project so that we this layer in the Docker cache can be
# the whole synapse project, so that this layer in the Docker cache can be
# used while you develop on the source
#
# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
# This is aiming at installing the `[tool.poetry.dependencies]` from pyproject.toml.
COPY --from=requirements /synapse/requirements.txt /synapse/
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --prefix="/install" --no-warn-script-location \
/synapse[all]
pip install --prefix="/install" --no-warn-script-location -r /synapse/requirements.txt
# Copy over the rest of the project
# Copy over the rest of the synapse source code.
COPY synapse /synapse/synapse/
# ... and what we need to `pip install`.
# TODO: once pyproject.toml declares poetry-core as its build system, we'll need to copy
# pyproject.toml here, ditching setup.py and MANIFEST.in.
COPY setup.py MANIFEST.in README.rst /synapse/
# Install the synapse package itself and all of its children packages.
#
# This is aiming at installing only the `packages=find_packages(...)` from `setup.py
# Install the synapse package itself.
RUN pip install --prefix="/install" --no-deps --no-warn-script-location /synapse
###
### Stage 1: runtime
### Stage 2: runtime
###
FROM docker.io/python:${PYTHON_VERSION}-slim

View File

@ -1,30 +0,0 @@
# Use the Sytest image that comes with a lot of the build dependencies
# pre-installed
FROM matrixdotorg/sytest:focal
# The Sytest image doesn't come with python, so install that
RUN apt-get update && apt-get -qq install -y python3 python3-dev python3-pip
# We need tox to run the tests in run_pg_tests.sh
RUN python3 -m pip install tox
# Initialise the db
RUN su -c '/usr/lib/postgresql/10/bin/initdb -D /var/lib/postgresql/data -E "UTF-8" --lc-collate="C.UTF-8" --lc-ctype="C.UTF-8" --username=postgres' postgres
# Add a user with our UID and GID so that files get created on the host owned
# by us, not root.
ARG UID
ARG GID
RUN groupadd --gid $GID user
RUN useradd --uid $UID --gid $GID --groups sudo --no-create-home user
# Ensure we can start postgres by sudo-ing as the postgres user.
RUN apt-get update && apt-get -qq install -y sudo
RUN echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
ADD run_pg_tests.sh /run_pg_tests.sh
# Use the "exec form" of ENTRYPOINT (https://docs.docker.com/engine/reference/builder/#entrypoint)
# so that we can `docker run` this container and pass arguments to pg_tests.sh
ENTRYPOINT ["/run_pg_tests.sh"]
USER user

View File

@ -78,7 +78,7 @@ the root of your Complement checkout and run:
docker build -t matrixdotorg/complement-synapse-workers -f dockerfiles/SynapseWorkers.Dockerfile dockerfiles
```
This will build an image with the tag `complement-synapse`, which can be handed to
This will build an image with the tag `complement-synapse-workers`, which can be handed to
Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
[Complement's documentation](https://github.com/matrix-org/complement/#running) for
how to run the tests, as well as the various available command line flags.

View File

@ -1,19 +0,0 @@
#!/usr/bin/env bash
# This script runs the PostgreSQL tests inside a Docker container. It expects
# the relevant source files to be mounted into /src (done automatically by the
# caller script). It will set up the database, run it, and then use the tox
# configuration to run the tests.
set -e
# Set PGUSER so Synapse's tests know what user to connect to the database with
export PGUSER=postgres
# Start the database
sudo -u postgres /usr/lib/postgresql/10/bin/pg_ctl -w -D /var/lib/postgresql/data start
# Run the tests
cd /src
export TRIAL_FLAGS="-j 4"
tox --workdir=./.tox-pg-container -e py37-postgres "$@"

View File

@ -108,7 +108,7 @@ def generate_config_from_template(config_dir, config_path, environ, ownership):
# Hopefully we already have a signing key, but generate one if not.
args = [
"python",
sys.executable,
"-m",
"synapse.app.homeserver",
"--config-path",
@ -158,7 +158,7 @@ def run_generate_config(environ, ownership):
# generate the main config file, and a signing key.
args = [
"python",
sys.executable,
"-m",
"synapse.app.homeserver",
"--server-name",
@ -175,7 +175,7 @@ def run_generate_config(environ, ownership):
"--open-private-ports",
]
# log("running %s" % (args, ))
os.execv("/usr/local/bin/python", args)
os.execv(sys.executable, args)
def main(args, environ):
@ -254,12 +254,12 @@ running with 'migrate_config'. See the README for more details.
log("Starting synapse with args " + " ".join(args))
args = ["python"] + args
args = [sys.executable] + args
if ownership is not None:
args = ["gosu", ownership] + args
os.execve("/usr/sbin/gosu", args, environ)
else:
os.execve("/usr/local/bin/python", args, environ)
os.execve(sys.executable, args, environ)
if __name__ == "__main__":
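The `docker/start.py` hunk above replaces the hard-coded `"python"` / `/usr/local/bin/python` paths with `sys.executable`, so the entrypoint re-execs whichever interpreter is actually running the script. A minimal sketch of the pattern (a hypothetical standalone example, not the real `docker/start.py`):
```python
import os
import sys

# Build the command using the interpreter that is currently running, rather
# than assuming it lives at /usr/local/bin/python.
args = [
    sys.executable,
    "-m",
    "synapse.app.homeserver",
    "--config-path",
    "/data/homeserver.yaml",
]

# Replace the current process with the homeserver, as the entrypoint does.
os.execv(sys.executable, args)
```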

View File

@ -45,6 +45,7 @@
- [Account validity callbacks](modules/account_validity_callbacks.md)
- [Password auth provider callbacks](modules/password_auth_provider_callbacks.md)
- [Background update controller callbacks](modules/background_update_controller_callbacks.md)
- [Account data callbacks](modules/account_data_callbacks.md)
- [Porting a legacy module to the new interface](modules/porting_legacy_module.md)
- [Workers](workers.md)
- [Using `synctl` with Workers](synctl_workers.md)

View File

@ -206,9 +206,10 @@ To do so, [configure Postgres](../postgres.md) and run `trial` with the
following environment variables matching your configuration:
- `SYNAPSE_POSTGRES` to anything nonempty
- `SYNAPSE_POSTGRES_HOST`
- `SYNAPSE_POSTGRES_USER`
- `SYNAPSE_POSTGRES_PASSWORD`
- `SYNAPSE_POSTGRES_HOST` (optional if it's the default: UNIX socket)
- `SYNAPSE_POSTGRES_PORT` (optional if it's the default: 5432)
- `SYNAPSE_POSTGRES_USER` (optional if using a UNIX socket)
- `SYNAPSE_POSTGRES_PASSWORD` (optional if using a UNIX socket)
For example:
@ -220,26 +221,12 @@ export SYNAPSE_POSTGRES_PASSWORD=mydevenvpassword
trial
```
#### Prebuilt container
You don't need to specify the host, user, port or password if your Postgres
server is set to authenticate you over the UNIX socket (i.e. if the `psql` command
works without further arguments).
Since configuring PostgreSQL can be fiddly, we can make use of a pre-made
Docker container to set up PostgreSQL and run our tests for us. To do so, run
Your Postgres account needs to be able to create databases.
```shell
scripts-dev/test_postgresql.sh
```
Any extra arguments to the script will be passed to `tox` and then to `trial`,
so we can run a specific test in this container with e.g.
```shell
scripts-dev/test_postgresql.sh tests.replication.test_sharded_event_persister.EventPersisterShardTestCase
```
The container creates a folder in your Synapse checkout called
`.tox-pg-container` and uses this as a tox environment. The output of any
`trial` runs goes into `_trial_temp` in your synapse source directory — the same
as running `trial` directly on your host machine.
## Run the integration tests ([Sytest](https://github.com/matrix-org/sytest)).
@ -254,8 +241,14 @@ configuration:
```sh
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:buster
```
(Note that the paths must be full paths! You could also write `$(realpath relative/path)` if needed.)
This configuration should generally cover your needs. For more details about other configurations, see [documentation in the SyTest repo](https://github.com/matrix-org/sytest/blob/develop/docker/README.md).
This configuration should generally cover your needs.
- To run with Postgres, supply the `-e POSTGRES=1 -e MULTI_POSTGRES=1` environment flags.
- To run with Synapse in worker mode, supply the `-e WORKERS=1 -e REDIS=1` environment flags (in addition to the Postgres flags).
For more details about other configurations, see the [Docker-specific documentation in the SyTest repo](https://github.com/matrix-org/sytest/blob/develop/docker/README.md).
## Run the integration tests ([Complement](https://github.com/matrix-org/complement)).

View File

@ -39,7 +39,8 @@ yet correlated to the DAG.
Outliers typically arise when we fetch the auth chain or state for a given
event. When that happens, we just grab the events in the state/auth chain,
without calculating the state at those events, or backfilling their
`prev_events`.
`prev_events`. Since we don't have the state at any events fetched in that
way, we mark them as outliers.
So, typically, we won't have the `prev_events` of an `outlier` in the database,
(though it's entirely possible that we *might* have them for some other

View File

@ -0,0 +1,106 @@
# Account data callbacks
Account data callbacks allow module developers to react to changes of the account data
of local users. Account data callbacks can be registered using the module API's
`register_account_data_callbacks` method.
## Callbacks
The available account data callbacks are:
### `on_account_data_updated`
_First introduced in Synapse v1.57.0_
```python
async def on_account_data_updated(
user_id: str,
room_id: Optional[str],
account_data_type: str,
content: "synapse.module_api.JsonDict",
) -> None:
```
Called after a user's account data has been updated. The module is given the
Matrix ID of the user whose account data is changing, the room ID the data is associated
with, the type associated with the change, as well as the new content. If the account
data is not associated with a specific room, then the room ID is `None`.
This callback is triggered when new account data is added or when the data associated with
a given type (and optionally room) changes. This includes deletion, since in Matrix,
deleting account data consists of replacing the data associated with a given type
(and optionally room) with an empty dictionary (`{}`).
Note that this doesn't trigger when changing the tags associated with a room, as these are
processed separately by Synapse.
If multiple modules implement this callback, Synapse runs them all in order.
## Example
The example below is a module that implements the `on_account_data_updated` callback, and
sends an event to an audit room when a user changes their account data.
```python
import json
import attr
from typing import Any, Dict, Optional
from synapse.module_api import JsonDict, ModuleApi
from synapse.module_api.errors import ConfigError
@attr.s(auto_attribs=True)
class CustomAccountDataConfig:
audit_room: str
sender: str
class CustomAccountDataModule:
def __init__(self, config: CustomAccountDataConfig, api: ModuleApi):
self.api = api
self.config = config
self.api.register_account_data_callbacks(
on_account_data_updated=self.log_new_account_data,
)
@staticmethod
def parse_config(config: Dict[str, Any]) -> CustomAccountDataConfig:
def check_in_config(param: str):
if param not in config:
raise ConfigError(f"'{param}' is required")
check_in_config("audit_room")
check_in_config("sender")
return CustomAccountDataConfig(
audit_room=config["audit_room"],
sender=config["sender"],
)
async def log_new_account_data(
self,
user_id: str,
room_id: Optional[str],
account_data_type: str,
content: JsonDict,
) -> None:
content_raw = json.dumps(content)
msg_content = f"{user_id} has changed their account data for type {account_data_type} to: {content_raw}"
if room_id is not None:
msg_content += f" (in room {room_id})"
await self.api.create_and_send_event_into_room(
{
"room_id": self.config.audit_room,
"sender": self.config.sender,
"type": "m.room.message",
"content": {
"msgtype": "m.text",
"body": msg_content
}
}
)
```

View File

@ -247,6 +247,24 @@ admin API.
If multiple modules implement this callback, Synapse runs them all in order.
### `on_threepid_bind`
_First introduced in Synapse v1.56.0_
```python
async def on_threepid_bind(user_id: str, medium: str, address: str) -> None:
```
Called after creating an association between a local user and a third-party identifier
(email address, phone number). The module is given the Matrix ID of the user the
association is for, as well as the medium (`email` or `msisdn`) and address of the
third-party identifier.
Note that this callback is _not_ called after a successful association on an _identity
server_.
If multiple modules implement this callback, Synapse runs them all in order.
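As a rough sketch (not code from this changeset), a module could register `on_threepid_bind` through the module API's `register_third_party_rules_callbacks` method, like the other callbacks documented in this file, and log each new association; the class and logger names here are hypothetical:

```python
import logging

from synapse.module_api import ModuleApi

logger = logging.getLogger(__name__)


class ThreepidAuditModule:
    def __init__(self, config: dict, api: ModuleApi):
        # Register alongside any other third-party rules callbacks.
        api.register_third_party_rules_callbacks(
            on_threepid_bind=self.on_threepid_bind,
        )

    async def on_threepid_bind(self, user_id: str, medium: str, address: str) -> None:
        # Called after a local user is associated with an email address or
        # phone number; a real module might forward this to an audit room.
        logger.info("3PID %s:%s bound to %s", medium, address, user_id)
```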
## Example
The example below is a module that implements the third-party rules callback

View File

@ -33,7 +33,7 @@ A module can implement the following static method:
```python
@staticmethod
def parse_config(config: dict) -> dict
def parse_config(config: dict) -> Any
```
This method is given a dictionary resulting from parsing the YAML configuration for the
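With the relaxed return type above, `parse_config` can hand back any value (for example an `attrs` class) and Synapse passes it to the module's constructor unchanged. A minimal sketch with hypothetical names, mirroring the account data example earlier in this section:

```python
from typing import Any

import attr

from synapse.module_api import ModuleApi


@attr.s(auto_attribs=True, frozen=True)
class MyModuleConfig:
    audit_room: str
    enabled: bool = True


class MyModule:
    def __init__(self, config: MyModuleConfig, api: ModuleApi):
        self.config = config
        self.api = api

    @staticmethod
    def parse_config(config: dict) -> Any:
        # Returning a structured object instead of the raw dict is what the
        # `-> Any` annotation above permits.
        return MyModuleConfig(
            audit_room=config["audit_room"],
            enabled=config.get("enabled", True),
        )
```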

View File

@ -539,6 +539,15 @@ templates:
#
#custom_template_directory: /path/to/custom/templates/
# List of rooms to exclude from sync responses. This is useful for server
# administrators wishing to group users into a room without these users being able
# to see it from their client.
#
# By default, no room is excluded.
#
#exclude_rooms_from_sync:
# - !foo:example.com
# Message retention policy at the server level.
#

View File

@ -78,82 +78,3 @@ loggers:
The above logging config will set Synapse as 'INFO' logging level by default,
with the SQL layer at 'WARNING', and will log JSON formatted messages to a
remote endpoint at 10.1.2.3:9999.
## Upgrading from legacy structured logging configuration
Versions of Synapse prior to v1.54.0 automatically converted the legacy
structured logging configuration, which was deprecated in v1.23.0, to the standard
library logging configuration.
The following reference can be used to update your configuration. Based on the
drain `type`, we can pick a new handler:
1. For a type of `console`, `console_json`, or `console_json_terse`: a handler
with a class of `logging.StreamHandler` and a `stream` of `ext://sys.stdout`
or `ext://sys.stderr` should be used.
2. For a type of `file` or `file_json`: a handler of `logging.FileHandler` with
a location of the file path should be used.
3. For a type of `network_json_terse`: a handler of `synapse.logging.RemoteHandler`
with the host and port should be used.
Then based on the drain `type` we can pick a new formatter:
1. For a type of `console` or `file` no formatter is necessary.
2. For a type of `console_json` or `file_json`: a formatter of
`synapse.logging.JsonFormatter` should be used.
3. For a type of `console_json_terse` or `network_json_terse`: a formatter of
`synapse.logging.TerseJsonFormatter` should be used.
For each new handler and formatter they should be added to the logging configuration
and then assigned to either a logger or the root logger.
An example legacy configuration:
```yaml
structured: true
loggers:
synapse:
level: INFO
synapse.storage.SQL:
level: WARNING
drains:
console:
type: console
location: stdout
file:
type: file_json
location: homeserver.log
```
Would be converted into a new configuration:
```yaml
version: 1
formatters:
json:
class: synapse.logging.JsonFormatter
handlers:
console:
class: logging.StreamHandler
stream: ext://sys.stdout
file:
class: logging.FileHandler
formatter: json
filename: homeserver.log
loggers:
synapse:
level: INFO
handlers: [console, file]
synapse.storage.SQL:
level: WARNING
```
The new logging configuration is a bit more verbose, but significantly more
flexible. It allows for configurations that were not previously possible, such as
sending plain logs over the network, or using different handlers for different
modules.

View File

@ -85,6 +85,19 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.57.0
## Changes to database schema for application services
Synapse v1.57.0 includes a [change](https://github.com/matrix-org/synapse/pull/12209) to the
way transaction IDs are managed for application services. If your deployment uses a dedicated
worker for application service traffic, **it must be stopped** when the database is upgraded
(which normally happens when the main process is upgraded), to ensure the change is made safely
without any risk of reusing transaction IDs.
Deployments which do not use separate worker processes can be upgraded as normal. Similarly,
deployments where no application services are in use can be upgraded as normal.
# Upgrading to v1.56.0
## Open registration without verification is now disabled by default
@ -147,7 +160,7 @@ please upgrade Mjolnir to version 1.3.2 or later before upgrading Synapse.
This release removes support for the `structured: true` logging configuration
which was deprecated in Synapse v1.23.0. If your logging configuration contains
`structured: true` then it should be modified based on the
[structured logging documentation](structured_logging.md).
[structured logging documentation](https://matrix-org.github.io/synapse/v1.56/structured_logging.html#upgrading-from-legacy-structured-logging-configuration).
# Upgrading to v1.53.0
@ -764,7 +777,7 @@ lock down external access to the Admin API endpoints.
This release deprecates use of the `structured: true` logging
configuration for structured logging. If your logging configuration
contains `structured: true` then it should be modified based on the
[structured logging documentation](structured_logging.md).
[structured logging documentation](https://matrix-org.github.io/synapse/v1.56/structured_logging.html#upgrading-from-legacy-structured-logging-configuration).
The `structured` and `drains` logging options are now deprecated and
should be replaced by standard logging configuration of `handlers` and

View File

@ -27,7 +27,7 @@ feeds streams of newly written data between processes so they can be kept in
sync with the database state.
When configured to do so, Synapse uses a
[Redis pub/sub channel](https://redis.io/topics/pubsub) to send the replication
[Redis pub/sub channel](https://redis.io/docs/manual/pubsub/) to send the replication
stream between all configured Synapse processes. Additionally, processes may
make HTTP requests to each other, primarily for operations which need to wait
for a reply ─ such as sending an event.

View File

@ -24,13 +24,8 @@ files =
exclude = (?x)
^(
|scripts-dev/build_debian_packages.py
|scripts-dev/check_signature.py
|scripts-dev/definitions.py
|scripts-dev/federation_client.py
|scripts-dev/hash_history.py
|scripts-dev/list_url_patterns.py
|scripts-dev/release.py
|scripts-dev/tail-synapse.py
|synapse/_scripts/export_signing_key.py
|synapse/_scripts/move_remote_media_to_new_store.py
@ -43,7 +38,6 @@ exclude = (?x)
|synapse/storage/databases/main/event_federation.py
|synapse/storage/databases/main/push_rule.py
|synapse/storage/databases/main/roommember.py
|synapse/storage/databases/main/state.py
|synapse/storage/schema/
|tests/api/test_auth.py
@ -82,11 +76,9 @@ exclude = (?x)
|tests/storage/test_base.py
|tests/storage/test_roommember.py
|tests/test_metrics.py
|tests/test_phone_home.py
|tests/test_server.py
|tests/test_state.py
|tests/test_terms_auth.py
|tests/unittest.py
|tests/util/caches/test_cached_call.py
|tests/util/caches/test_deferred_cache.py
|tests/util/caches/test_descriptors.py
@ -273,6 +265,9 @@ ignore_missing_imports = True
[mypy-ijson.*]
ignore_missing_imports = True
[mypy-importlib_metadata.*]
ignore_missing_imports = True
[mypy-jaeger_client.*]
ignore_missing_imports = True

2831
poetry.lock generated Normal file

File diff suppressed because it is too large

View File

@ -66,3 +66,232 @@ multi_line_output = 3
include_trailing_comma = true
combine_as_imports = true
[tool.poetry]
name = "matrix-synapse"
version = "1.56.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
readme = "README.rst"
repository = "https://github.com/matrix-org/synapse"
packages = [
{ include = "synapse" },
]
classifiers = [
"Development Status :: 5 - Production/Stable",
"Topic :: Communications :: Chat",
]
include = [
{ path = "AUTHORS.rst", format = "sdist" },
{ path = "book.toml", format = "sdist" },
{ path = "changelog.d", format = "sdist" },
{ path = "CHANGES.md", format = "sdist" },
{ path = "CONTRIBUTING.md", format = "sdist" },
{ path = "demo", format = "sdist" },
{ path = "docs", format = "sdist" },
{ path = "INSTALL.md", format = "sdist" },
{ path = "mypy.ini", format = "sdist" },
{ path = "scripts-dev", format = "sdist" },
{ path = "synmark", format="sdist" },
{ path = "sytest-blacklist", format = "sdist" },
{ path = "tests", format = "sdist" },
{ path = "UPGRADE.rst", format = "sdist" },
]
[tool.poetry.scripts]
synapse_homeserver = "synapse.app.homeserver:main"
synapse_worker = "synapse.app.generic_worker:main"
synctl = "synapse._scripts.synctl:main"
export_signing_key = "synapse._scripts.export_signing_key:main"
generate_config = "synapse._scripts.generate_config:main"
generate_log_config = "synapse._scripts.generate_log_config:main"
generate_signing_key = "synapse._scripts.generate_signing_key:main"
hash_password = "synapse._scripts.hash_password:main"
register_new_matrix_user = "synapse._scripts.register_new_matrix_user:main"
synapse_port_db = "synapse._scripts.synapse_port_db:main"
synapse_review_recent_signups = "synapse._scripts.review_recent_signups:main"
update_synapse_database = "synapse._scripts.update_synapse_database:main"
[tool.poetry.dependencies]
python = "^3.7"
# Mandatory Dependencies
# ----------------------
# we use the TYPE_CHECKER.redefine method added in jsonschema 3.0.0
jsonschema = ">=3.0.0"
# frozendict 2.1.2 is broken on Debian 10: https://github.com/Marco-Sulla/python-frozendict/issues/41
frozendict = ">=1,!=2.1.2"
# We require 2.1.0 or higher for type hints. Previous guard was >= 1.1.0
unpaddedbase64 = ">=2.1.0"
canonicaljson = ">=1.4.0"
# we use the type definitions added in signedjson 1.1.
signedjson = ">=1.1.0"
PyNaCl = ">=1.2.1"
idna = ">=2.5"
# validating SSL certs for IP addresses requires service_identity 18.1.
service-identity = ">=18.1.0"
# Twisted 18.9 introduces some logger improvements that the structured
# logger utilises
Twisted = ">=18.9.0"
treq = ">=15.1"
# Twisted has required pyopenssl 16.0 since about Twisted 16.6.
pyOpenSSL = ">=16.0.0"
PyYAML = ">=3.11"
pyasn1 = ">=0.1.9"
pyasn1-modules = ">=0.0.7"
bcrypt = ">=3.1.0"
Pillow = ">=5.4.0"
sortedcontainers = ">=1.4.4"
pymacaroons = ">=0.13.0"
msgpack = ">=0.5.2"
phonenumbers = ">=8.2.0"
# we use GaugeHistogramMetric, which was added in prom-client 0.4.0.
prometheus-client = ">=0.4.0"
# we use `order`, which arrived in attrs 19.2.0.
# Note: 21.1.0 broke `/sync`, see #9936
attrs = ">=19.2.0,!=21.1.0"
netaddr = ">=0.7.18"
# Jinja 2.x is incompatible with MarkupSafe>=2.1. To ensure that admins do not
# end up with a broken installation, with recent MarkupSafe but old Jinja, we
# add a lower bound to the Jinja2 dependency.
Jinja2 = ">=3.0"
bleach = ">=1.4.3"
# We use `ParamSpec`, which was added in `typing-extensions` 3.10.0.0.
typing-extensions = ">=3.10.0"
# We enforce that we have a `cryptography` version that bundles an `openssl`
# with the latest security patches.
cryptography = ">=3.4.7"
# ijson 3.1.4 fixes a bug with "." in property names
ijson = ">=3.1.4"
matrix-common = "~=1.1.0"
# We need packaging.requirements.Requirement, added in 16.1.
packaging = ">=16.1"
# At the time of writing, we only use functions from the version of `importlib.metadata`
# which shipped in Python 3.8. This corresponds to version 1.4 of the backport.
importlib_metadata = { version = ">=1.4", python = "<3.8" }
# Optional Dependencies
# ---------------------
matrix-synapse-ldap3 = { version = ">=0.1", optional = true }
psycopg2 = { version = ">=2.8", markers = "platform_python_implementation != 'PyPy'", optional = true }
psycopg2cffi = { version = ">=2.8", markers = "platform_python_implementation == 'PyPy'", optional = true }
psycopg2cffi-compat = { version = "==1.1", markers = "platform_python_implementation == 'PyPy'", optional = true }
pysaml2 = { version = ">=4.5.0", optional = true }
authlib = { version = ">=0.14.0", optional = true }
# systemd-python is necessary for logging to the systemd journal via
# `systemd.journal.JournalHandler`, as is documented in
# `contrib/systemd/log_config.yaml`.
# Note: systemd-python 231 appears to have been yanked from pypi
systemd-python = { version = ">=231", optional = true }
lxml = { version = ">=4.2.0", optional = true }
sentry-sdk = { version = ">=0.7.2", optional = true }
opentracing = { version = ">=2.2.0", optional = true }
jaeger-client = { version = ">=4.0.0", optional = true }
pyjwt = { version = ">=1.6.4", optional = true }
txredisapi = { version = ">=1.4.7", optional = true }
hiredis = { version = "*", optional = true }
Pympler = { version = "*", optional = true }
parameterized = { version = ">=0.7.4", optional = true }
[tool.poetry.extras]
# NB: Packages that should be part of `pip install matrix-synapse[all]` need to be specified
# twice: once here, and once in the `all` extra.
matrix-synapse-ldap3 = ["matrix-synapse-ldap3"]
postgres = ["psycopg2", "psycopg2cffi", "psycopg2cffi-compat"]
saml2 = ["pysaml2"]
oidc = ["authlib"]
# systemd-python is necessary for logging to the systemd journal via
# `systemd.journal.JournalHandler`, as is documented in
# `contrib/systemd/log_config.yaml`.
systemd = ["systemd-python"]
url_preview = ["lxml"]
sentry = ["sentry-sdk"]
opentracing = ["jaeger-client", "opentracing"]
jwt = ["pyjwt"]
# hiredis is not a *strict* dependency, but it makes things much faster.
# (if it is not installed, we fall back to slow code.)
redis = ["txredisapi", "hiredis"]
# Required to use experimental `caches.track_memory_usage` config option.
cache_memory = ["pympler"]
test = ["parameterized"]
# The duplication here is awful. I hate hate hate hate hate it. However, for now I want
# to ensure you can still `pip install matrix-synapse[all]` like today. Two motivations:
# 1) for new installations, I want instructions in existing documentation and tutorials
# out there to still work.
# 2) I don't want to hard-code a list of extras into CI if I can help it. The ideal
# solution here would be something like https://github.com/python-poetry/poetry/issues/3413
# Poetry 1.2's dependency groups might make this easier. But I'm not trying that out
# until there's a stable release of 1.2.
#
# NB: the strings in this list must be *package* names, not extra names.
# Some of our extra names _are_ package names, which can lead to great confusion.
all = [
# matrix-synapse-ldap3
"matrix-synapse-ldap3",
# postgres
"psycopg2", "psycopg2cffi", "psycopg2cffi-compat",
# saml2
"pysaml2",
# oidc
"authlib",
# url_preview
"lxml",
# sentry
"sentry-sdk",
# opentracing
"jaeger-client", "opentracing",
# jwt
"pyjwt",
#redis
"txredisapi", "hiredis"
# omitted:
# - cache_memory: this is an experimental option
# - test: it's useful to have this separate from dev deps in the olddeps job
# - systemd: this is a system-based requirement
]
[tool.poetry.dev-dependencies]
## We pin black so that our tests don't start failing on new releases.
isort = "==5.7.0"
black = "==22.3.0"
flake8-comprehensions = "*"
flake8-bugbear = "==21.3.2"
flake8 = "*"
# Typechecking
mypy = "==0.931"
mypy-zope = "==0.3.5"
types-bleach = ">=4.1.0"
types-jsonschema = ">=3.2.0"
types-opentracing = ">=2.4.2"
types-Pillow = ">=8.3.4"
types-psycopg2 = ">=2.9.9"
types-pyOpenSSL = ">=20.0.7"
types-PyYAML = ">=5.4.10"
types-requests = ">=2.26.0"
types-setuptools = ">=57.4.0"
# Dependencies which are exclusively required by unit test code. This is
# NOT a list of all modules that are necessary to run the unit tests.
# Tests assume that all optional dependencies are installed.
# parameterized<0.7.4 can create classes with names that would normally be invalid
# identifiers. trial really does not like this when running with multiple workers.
parameterized = ">=0.7.4"
# The following are used by the release script
click = "==8.1.0"
redbaron = "==0.9.2"
GitPython = "==3.1.14"
commonmark = "==0.9.1"
pygithub = "==1.55"
# The following are executed as commands by the release script.
twine = "*"
# Towncrier min version comes from #3425. Rationale unclear.
towncrier = ">=18.6.0rc1"
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

View File

@ -19,7 +19,7 @@ if ! git diff --quiet FETCH_HEAD... -- debian; then
if git diff --quiet FETCH_HEAD... -- debian/changelog; then
echo "Updates to debian directory, but no update to the changelog." >&2
echo "!! Please see the contributing guide for help writing your changelog entry:" >&2
echo "https://github.com/matrix-org/synapse/blob/develop/CONTRIBUTING.md#debian-changelog" >&2
echo "https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#debian-changelog" >&2
exit 1
fi
fi
@ -32,7 +32,7 @@ fi
# Print a link to the contributing guide if the user makes a mistake
CONTRIBUTING_GUIDE_TEXT="!! Please see the contributing guide for help writing your changelog entry:
https://github.com/matrix-org/synapse/blob/develop/CONTRIBUTING.md#changelog"
https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#changelog"
# If check-newsfragment returns a non-zero exit code, print the contributing guide and exit
python -m towncrier.check --compare-with=origin/develop || (echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2 && exit 1)

View File

@ -1,72 +0,0 @@
import argparse
import json
import logging
import sys
import dns.resolver
import urllib2
from signedjson.key import decode_verify_key_bytes, write_signing_keys
from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64
def get_targets(server_name):
if ":" in server_name:
target, port = server_name.split(":")
yield (target, int(port))
return
try:
answers = dns.resolver.query("_matrix._tcp." + server_name, "SRV")
for srv in answers:
yield (srv.target, srv.port)
except dns.resolver.NXDOMAIN:
yield (server_name, 8448)
def get_server_keys(server_name, target, port):
url = "https://%s:%i/_matrix/key/v1" % (target, port)
keys = json.load(urllib2.urlopen(url))
verify_keys = {}
for key_id, key_base64 in keys["verify_keys"].items():
verify_key = decode_verify_key_bytes(key_id, decode_base64(key_base64))
verify_signed_json(keys, server_name, verify_key)
verify_keys[key_id] = verify_key
return verify_keys
def main():
parser = argparse.ArgumentParser()
parser.add_argument("signature_name")
parser.add_argument(
"input_json", nargs="?", type=argparse.FileType("r"), default=sys.stdin
)
args = parser.parse_args()
logging.basicConfig()
server_name = args.signature_name
keys = {}
for target, port in get_targets(server_name):
try:
keys = get_server_keys(server_name, target, port)
print("Using keys from https://%s:%s/_matrix/key/v1" % (target, port))
write_signing_keys(sys.stdout, keys.values())
break
except Exception:
logging.exception("Error talking to %s:%s", target, port)
json_to_check = json.load(args.input_json)
print("Checking JSON:")
for key_id in json_to_check["signatures"][args.signature_name]:
try:
key = keys[key_id]
verify_signed_json(json_to_check, args.signature_name, key)
print("PASS %s" % (key_id,))
except Exception:
logging.exception("Check for key %s failed" % (key_id,))
print("FAIL %s" % (key_id,))
if __name__ == "__main__":
main()

View File

@ -71,4 +71,4 @@ fi
# Run the tests!
echo "Images built; running complement"
go test -v -tags synapse_blacklist,msc2403,msc2716,msc3030 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
go test -v -tags synapse_blacklist,msc2716,msc3030 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...

View File

@ -1,208 +0,0 @@
#! /usr/bin/python
import argparse
import ast
import os
import re
import sys
import yaml
class DefinitionVisitor(ast.NodeVisitor):
def __init__(self):
super().__init__()
self.functions = {}
self.classes = {}
self.names = {}
self.attrs = set()
self.definitions = {
"def": self.functions,
"class": self.classes,
"names": self.names,
"attrs": self.attrs,
}
def visit_Name(self, node):
self.names.setdefault(type(node.ctx).__name__, set()).add(node.id)
def visit_Attribute(self, node):
self.attrs.add(node.attr)
for child in ast.iter_child_nodes(node):
self.visit(child)
def visit_ClassDef(self, node):
visitor = DefinitionVisitor()
self.classes[node.name] = visitor.definitions
for child in ast.iter_child_nodes(node):
visitor.visit(child)
def visit_FunctionDef(self, node):
visitor = DefinitionVisitor()
self.functions[node.name] = visitor.definitions
for child in ast.iter_child_nodes(node):
visitor.visit(child)
def non_empty(defs):
functions = {name: non_empty(f) for name, f in defs["def"].items()}
classes = {name: non_empty(f) for name, f in defs["class"].items()}
result = {}
if functions:
result["def"] = functions
if classes:
result["class"] = classes
names = defs["names"]
uses = []
for name in names.get("Load", ()):
if name not in names.get("Param", ()) and name not in names.get("Store", ()):
uses.append(name)
uses.extend(defs["attrs"])
if uses:
result["uses"] = uses
result["names"] = names
result["attrs"] = defs["attrs"]
return result
def definitions_in_code(input_code):
input_ast = ast.parse(input_code)
visitor = DefinitionVisitor()
visitor.visit(input_ast)
definitions = non_empty(visitor.definitions)
return definitions
def definitions_in_file(filepath):
with open(filepath) as f:
return definitions_in_code(f.read())
def defined_names(prefix, defs, names):
for name, funcs in defs.get("def", {}).items():
names.setdefault(name, {"defined": []})["defined"].append(prefix + name)
defined_names(prefix + name + ".", funcs, names)
for name, funcs in defs.get("class", {}).items():
names.setdefault(name, {"defined": []})["defined"].append(prefix + name)
defined_names(prefix + name + ".", funcs, names)
def used_names(prefix, item, defs, names):
for name, funcs in defs.get("def", {}).items():
used_names(prefix + name + ".", name, funcs, names)
for name, funcs in defs.get("class", {}).items():
used_names(prefix + name + ".", name, funcs, names)
path = prefix.rstrip(".")
for used in defs.get("uses", ()):
if used in names:
if item:
names[item].setdefault("uses", []).append(used)
names[used].setdefault("used", {}).setdefault(item, []).append(path)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Find definitions.")
parser.add_argument(
"--unused", action="store_true", help="Only list unused definitions"
)
parser.add_argument(
"--ignore", action="append", metavar="REGEXP", help="Ignore a pattern"
)
parser.add_argument(
"--pattern", action="append", metavar="REGEXP", help="Search for a pattern"
)
parser.add_argument(
"directories",
nargs="+",
metavar="DIR",
help="Directories to search for definitions",
)
parser.add_argument(
"--referrers",
default=0,
type=int,
help="Include referrers up to the given depth",
)
parser.add_argument(
"--referred",
default=0,
type=int,
help="Include referred down to the given depth",
)
parser.add_argument(
"--format", default="yaml", help="Output format, one of 'yaml' or 'dot'"
)
args = parser.parse_args()
definitions = {}
for directory in args.directories:
for root, _, files in os.walk(directory):
for filename in files:
if filename.endswith(".py"):
filepath = os.path.join(root, filename)
definitions[filepath] = definitions_in_file(filepath)
names = {}
for filepath, defs in definitions.items():
defined_names(filepath + ":", defs, names)
for filepath, defs in definitions.items():
used_names(filepath + ":", None, defs, names)
patterns = [re.compile(pattern) for pattern in args.pattern or ()]
ignore = [re.compile(pattern) for pattern in args.ignore or ()]
result = {}
for name, definition in names.items():
if patterns and not any(pattern.match(name) for pattern in patterns):
continue
if ignore and any(pattern.match(name) for pattern in ignore):
continue
if args.unused and definition.get("used"):
continue
result[name] = definition
referrer_depth = args.referrers
referrers = set()
while referrer_depth:
referrer_depth -= 1
for entry in result.values():
for used_by in entry.get("used", ()):
referrers.add(used_by)
for name, definition in names.items():
if name not in referrers:
continue
if ignore and any(pattern.match(name) for pattern in ignore):
continue
result[name] = definition
referred_depth = args.referred
referred = set()
while referred_depth:
referred_depth -= 1
for entry in result.values():
for uses in entry.get("uses", ()):
referred.add(uses)
for name, definition in names.items():
if name not in referred:
continue
if ignore and any(pattern.match(name) for pattern in ignore):
continue
result[name] = definition
if args.format == "yaml":
yaml.dump(result, sys.stdout, default_flow_style=False)
elif args.format == "dot":
print("digraph {")
for name, entry in result.items():
print(name)
for used_by in entry.get("used", ()):
if used_by in result:
print(used_by, "->", name)
print("}")
else:
raise ValueError("Unknown format %r" % (args.format))

View File

@ -1,81 +0,0 @@
import sqlite3
import sys
from unpaddedbase64 import decode_base64, encode_base64
from synapse.crypto.event_signing import (
add_event_pdu_content_hash,
compute_pdu_event_reference_hash,
)
from synapse.federation.units import Pdu
from synapse.storage._base import SQLBaseStore
from synapse.storage.pdu import PduStore
from synapse.storage.signatures import SignatureStore
class Store:
_get_pdu_tuples = PduStore.__dict__["_get_pdu_tuples"]
_get_pdu_content_hashes_txn = SignatureStore.__dict__["_get_pdu_content_hashes_txn"]
_get_prev_pdu_hashes_txn = SignatureStore.__dict__["_get_prev_pdu_hashes_txn"]
_get_pdu_origin_signatures_txn = SignatureStore.__dict__[
"_get_pdu_origin_signatures_txn"
]
_store_pdu_content_hash_txn = SignatureStore.__dict__["_store_pdu_content_hash_txn"]
_store_pdu_reference_hash_txn = SignatureStore.__dict__[
"_store_pdu_reference_hash_txn"
]
_store_prev_pdu_hash_txn = SignatureStore.__dict__["_store_prev_pdu_hash_txn"]
simple_insert_txn = SQLBaseStore.__dict__["simple_insert_txn"]
store = Store()
def select_pdus(cursor):
cursor.execute("SELECT pdu_id, origin FROM pdus ORDER BY depth ASC")
ids = cursor.fetchall()
pdu_tuples = store._get_pdu_tuples(cursor, ids)
pdus = [Pdu.from_pdu_tuple(p) for p in pdu_tuples]
reference_hashes = {}
for pdu in pdus:
try:
if pdu.prev_pdus:
print("PROCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus)
for pdu_id, origin, hashes in pdu.prev_pdus:
ref_alg, ref_hsh = reference_hashes[(pdu_id, origin)]
hashes[ref_alg] = encode_base64(ref_hsh)
store._store_prev_pdu_hash_txn(
cursor, pdu.pdu_id, pdu.origin, pdu_id, origin, ref_alg, ref_hsh
)
print("SUCCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus)
pdu = add_event_pdu_content_hash(pdu)
ref_alg, ref_hsh = compute_pdu_event_reference_hash(pdu)
reference_hashes[(pdu.pdu_id, pdu.origin)] = (ref_alg, ref_hsh)
store._store_pdu_reference_hash_txn(
cursor, pdu.pdu_id, pdu.origin, ref_alg, ref_hsh
)
for alg, hsh_base64 in pdu.hashes.items():
print(alg, hsh_base64)
store._store_pdu_content_hash_txn(
cursor, pdu.pdu_id, pdu.origin, alg, decode_base64(hsh_base64)
)
except Exception:
print("FAILED_", pdu.pdu_id, pdu.origin, pdu.prev_pdus)
def main():
conn = sqlite3.connect(sys.argv[1])
cursor = conn.cursor()
select_pdus(cursor)
conn.commit()
if __name__ == "__main__":
main()

View File

@ -1,60 +0,0 @@
#! /usr/bin/python
import argparse
import ast
import os
import sys
import yaml
PATTERNS_V1 = []
PATTERNS_V2 = []
RESULT = {"v1": PATTERNS_V1, "v2": PATTERNS_V2}
class CallVisitor(ast.NodeVisitor):
def visit_Call(self, node):
if isinstance(node.func, ast.Name):
name = node.func.id
else:
return
if name == "client_patterns":
PATTERNS_V2.append(node.args[0].s)
def find_patterns_in_code(input_code):
input_ast = ast.parse(input_code)
visitor = CallVisitor()
visitor.visit(input_ast)
def find_patterns_in_file(filepath):
with open(filepath) as f:
find_patterns_in_code(f.read())
parser = argparse.ArgumentParser(description="Find url patterns.")
parser.add_argument(
"directories",
nargs="+",
metavar="DIR",
help="Directories to search for definitions",
)
args = parser.parse_args()
for directory in args.directories:
for root, _, files in os.walk(directory):
for filename in files:
if filename.endswith(".py"):
filepath = os.path.join(root, filename)
find_patterns_in_file(filepath)
PATTERNS_V1.sort()
PATTERNS_V2.sort()
yaml.dump(RESULT, sys.stdout, default_flow_style=False)

View File

@ -1,67 +0,0 @@
import collections
import json
import sys
import time
import requests
Entry = collections.namedtuple("Entry", "name position rows")
ROW_TYPES = {}
def row_type_for_columns(name, column_names):
column_names = tuple(column_names)
row_type = ROW_TYPES.get((name, column_names))
if row_type is None:
row_type = collections.namedtuple(name, column_names)
ROW_TYPES[(name, column_names)] = row_type
return row_type
def parse_response(content):
streams = json.loads(content)
result = {}
for name, value in streams.items():
row_type = row_type_for_columns(name, value["field_names"])
position = value["position"]
rows = [row_type(*row) for row in value["rows"]]
result[name] = Entry(name, position, rows)
return result
def replicate(server, streams):
return parse_response(
requests.get(
server + "/_synapse/replication", verify=False, params=streams
).content
)
def main():
server = sys.argv[1]
streams = None
while not streams:
try:
streams = {
row.name: row.position
for row in replicate(server, {"streams": "-1"})["streams"].rows
}
except requests.exceptions.ConnectionError:
time.sleep(0.1)
while True:
try:
results = replicate(server, streams)
except Exception:
sys.stdout.write("connection_lost(" + repr(streams) + ")\n")
break
for update in results.values():
for row in update.rows:
sys.stdout.write(repr(row) + "\n")
streams[update.name] = update.position
if __name__ == "__main__":
main()

View File

@ -1,19 +0,0 @@
#!/usr/bin/env bash
# This script builds the Docker image to run the PostgreSQL tests, and then runs
# the tests. It uses a dedicated tox environment so that we don't have to
# rebuild it each time.
# Command line arguments to this script are forwarded to "tox" and then to "trial".
set -e
# Build, and tag
docker build docker/ \
--build-arg "UID=$(id -u)" \
--build-arg "GID=$(id -g)" \
-f docker/Dockerfile-pgtests \
-t synapsepgtests
# Run, mounting the current directory into /src
docker run --rm -it -v "$(pwd):/src" -v synapse-pg-test-tox:/tox synapsepgtests "$@"

View File

@ -97,6 +97,7 @@ BOOLEAN_COLUMNS = {
"users": ["shadow_banned"],
"e2e_fallback_keys_json": ["used"],
"access_tokens": ["used"],
"device_lists_changes_in_room": ["converted_to_destinations"],
}

View File

@ -33,7 +33,6 @@ from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
@ -61,7 +60,6 @@ class AdminCmdSlavedStore(
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedClientIpStore,
BaseSlavedStore,
RoomWorkerStore,
):

View File

@ -53,7 +53,6 @@ from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
@ -247,7 +246,6 @@ class GenericWorkerSlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedProfileStore,
SlavedClientIpStore,
SlavedFilteringStore,
MonthlyActiveUsersWorkerStore,
MediaRepositoryStore,

View File

@ -1,4 +1,5 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -22,7 +23,13 @@ from netaddr import IPSet
from synapse.api.constants import EventTypes
from synapse.events import EventBase
from synapse.types import GroupID, JsonDict, UserID, get_domain_from_id
from synapse.types import (
DeviceListUpdates,
GroupID,
JsonDict,
UserID,
get_domain_from_id,
)
from synapse.util.caches.descriptors import _CacheContext, cached
if TYPE_CHECKING:
@ -400,6 +407,7 @@ class AppServiceTransaction:
to_device_messages: List[JsonDict],
one_time_key_counts: TransactionOneTimeKeyCounts,
unused_fallback_keys: TransactionUnusedFallbackKeys,
device_list_summary: DeviceListUpdates,
):
self.service = service
self.id = id
@ -408,6 +416,7 @@ class AppServiceTransaction:
self.to_device_messages = to_device_messages
self.one_time_key_counts = one_time_key_counts
self.unused_fallback_keys = unused_fallback_keys
self.device_list_summary = device_list_summary
async def send(self, as_api: "ApplicationServiceApi") -> bool:
"""Sends this transaction using the provided AS API interface.
@ -424,6 +433,7 @@ class AppServiceTransaction:
to_device_messages=self.to_device_messages,
one_time_key_counts=self.one_time_key_counts,
unused_fallback_keys=self.unused_fallback_keys,
device_list_summary=self.device_list_summary,
txn_id=self.id,
)

View File

@ -1,4 +1,5 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -27,7 +28,7 @@ from synapse.appservice import (
from synapse.events import EventBase
from synapse.events.utils import SerializeEventConfig, serialize_event
from synapse.http.client import SimpleHttpClient
from synapse.types import JsonDict, ThirdPartyInstanceID
from synapse.types import DeviceListUpdates, JsonDict, ThirdPartyInstanceID
from synapse.util.caches.response_cache import ResponseCache
if TYPE_CHECKING:
@ -225,6 +226,7 @@ class ApplicationServiceApi(SimpleHttpClient):
to_device_messages: List[JsonDict],
one_time_key_counts: TransactionOneTimeKeyCounts,
unused_fallback_keys: TransactionUnusedFallbackKeys,
device_list_summary: DeviceListUpdates,
txn_id: Optional[int] = None,
) -> bool:
"""
@ -268,6 +270,7 @@ class ApplicationServiceApi(SimpleHttpClient):
}
)
# TODO: Update to stable prefixes once MSC3202 completes FCP merge
if service.msc3202_transaction_extensions:
if one_time_key_counts:
body[
@ -277,6 +280,11 @@ class ApplicationServiceApi(SimpleHttpClient):
body[
"org.matrix.msc3202.device_unused_fallback_keys"
] = unused_fallback_keys
if device_list_summary:
body["org.matrix.msc3202.device_lists"] = {
"changed": list(device_list_summary.changed),
"left": list(device_list_summary.left),
}
try:
await self.put_json(

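To make the wire format above concrete, here is a hedged sketch of an appservice transaction body carrying the unstable MSC3202 fields. Only the org.matrix.msc3202.device_unused_fallback_keys and org.matrix.msc3202.device_lists field names (and the latter's changed/left keys) are taken from the hunk above; the events key, the user and device identifiers, and the shape of the fallback-keys value are illustrative assumptions, and the one-time-key counts field is omitted because its exact name is elided in this diff.

# Illustrative transaction body PUT to the application service (values invented).
body = {
    "events": [],  # assumed: the usual list of persistent events
    # The one-time key counts field is also sent when non-empty; its exact
    # unstable name is not visible in this hunk, so it is omitted here.
    "org.matrix.msc3202.device_unused_fallback_keys": {
        "@alice:example.com": {"ADEVICEID": ["signed_curve25519"]},
    },
    "org.matrix.msc3202.device_lists": {
        "changed": ["@alice:example.com"],
        "left": ["@bob:example.com"],
    },
}
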
View File

@ -72,7 +72,7 @@ from synapse.events import EventBase
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.databases.main import DataStore
from synapse.types import JsonDict
from synapse.types import DeviceListUpdates, JsonDict
from synapse.util import Clock
if TYPE_CHECKING:
@ -122,6 +122,7 @@ class ApplicationServiceScheduler:
events: Optional[Collection[EventBase]] = None,
ephemeral: Optional[Collection[JsonDict]] = None,
to_device_messages: Optional[Collection[JsonDict]] = None,
device_list_summary: Optional[DeviceListUpdates] = None,
) -> None:
"""
Enqueue some data to be sent off to an application service.
@ -133,10 +134,18 @@ class ApplicationServiceScheduler:
to_device_messages: The to-device messages to send. These differ from normal
to-device messages sent to clients, as they have 'to_device_id' and
'to_user_id' fields.
device_list_summary: A summary of users whose device lists the application
service either needs to refresh, or no longer needs to track.
"""
# We purposefully allow this method to run with empty collections of events,
# ephemeral data, to-device messages and device list summaries, so that
# callers do not need to check iterable size themselves.
if not events and not ephemeral and not to_device_messages:
if (
not events
and not ephemeral
and not to_device_messages
and not device_list_summary
):
return
if events:
@ -147,6 +156,10 @@ class ApplicationServiceScheduler:
self.queuer.queued_to_device_messages.setdefault(appservice.id, []).extend(
to_device_messages
)
if device_list_summary:
self.queuer.queued_device_list_summaries.setdefault(
appservice.id, []
).append(device_list_summary)
# Kick off a new application service transaction
self.queuer.start_background_request(appservice)
@ -169,6 +182,8 @@ class _ServiceQueuer:
self.queued_ephemeral: Dict[str, List[JsonDict]] = {}
# dict of {service_id: [to_device_message_json]}
self.queued_to_device_messages: Dict[str, List[JsonDict]] = {}
# dict of {service_id: [device_list_summary]}
self.queued_device_list_summaries: Dict[str, List[DeviceListUpdates]] = {}
# the appservices which currently have a transaction in flight
self.requests_in_flight: Set[str] = set()
@ -212,7 +227,35 @@ class _ServiceQueuer:
]
del all_to_device_messages[:MAX_TO_DEVICE_MESSAGES_PER_TRANSACTION]
if not events and not ephemeral and not to_device_messages_to_send:
# Consolidate any pending device list summaries into a single, up-to-date
# summary.
# Note: this code assumes that in a single DeviceListUpdates, a user will
# never be in both "changed" and "left" sets.
device_list_summary = DeviceListUpdates()
for summary in self.queued_device_list_summaries.get(service.id, []):
# For every user in the incoming "changed" set:
# * Remove them from the existing "left" set if necessary
# (as we need to start tracking them again)
# * Add them to the existing "changed" set if necessary.
device_list_summary.left.difference_update(summary.changed)
device_list_summary.changed.update(summary.changed)
# For every user in the incoming "left" set:
# * Remove them from the existing "changed" set if necessary
# (we no longer need to track them)
# * Add them to the existing "left" set if necessary.
device_list_summary.changed.difference_update(summary.left)
device_list_summary.left.update(summary.left)
self.queued_device_list_summaries.clear()
if (
not events
and not ephemeral
and not to_device_messages_to_send
# A DeviceListUpdates is truthy if either its 'changed' or 'left' set has
# at least one entry, and falsy otherwise.
and not device_list_summary
):
return
one_time_key_counts: Optional[TransactionOneTimeKeyCounts] = None
@ -240,6 +283,7 @@ class _ServiceQueuer:
to_device_messages_to_send,
one_time_key_counts,
unused_fallback_keys,
device_list_summary,
)
except Exception:
logger.exception("AS request failed")
@ -322,6 +366,7 @@ class _TransactionController:
to_device_messages: Optional[List[JsonDict]] = None,
one_time_key_counts: Optional[TransactionOneTimeKeyCounts] = None,
unused_fallback_keys: Optional[TransactionUnusedFallbackKeys] = None,
device_list_summary: Optional[DeviceListUpdates] = None,
) -> None:
"""
Create a transaction with the given data and send to the provided
@ -336,6 +381,7 @@ class _TransactionController:
appservice devices in the transaction.
unused_fallback_keys: Lists of unused fallback keys for relevant
appservice devices in the transaction.
device_list_summary: The device list summary to include in the transaction.
"""
try:
txn = await self.store.create_appservice_txn(
@ -345,6 +391,7 @@ class _TransactionController:
to_device_messages=to_device_messages or [],
one_time_key_counts=one_time_key_counts or {},
unused_fallback_keys=unused_fallback_keys or {},
device_list_summary=device_list_summary or DeviceListUpdates(),
)
service_is_up = await self._is_service_up(service)
if service_is_up:

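The consolidation loop above can be hard to follow in diff form, so here is a minimal, self-contained sketch of the same merge order, using a tiny stand-in for synapse.types.DeviceListUpdates (which, per this diff, exposes changed/left sets and is falsy when both are empty). The user IDs are invented.

from dataclasses import dataclass, field
from typing import Iterable, Set

@dataclass
class DeviceListUpdates:
    """Stand-in for synapse.types.DeviceListUpdates, for illustration only."""
    changed: Set[str] = field(default_factory=set)
    left: Set[str] = field(default_factory=set)

    def __bool__(self) -> bool:
        return bool(self.changed or self.left)

def consolidate(summaries: Iterable[DeviceListUpdates]) -> DeviceListUpdates:
    # Mirrors the loop in _ServiceQueuer above: later summaries take precedence.
    merged = DeviceListUpdates()
    for summary in summaries:
        merged.left.difference_update(summary.changed)
        merged.changed.update(summary.changed)
        merged.changed.difference_update(summary.left)
        merged.left.update(summary.left)
    return merged

queued = [
    DeviceListUpdates(changed={"@alice:example.com"}),
    DeviceListUpdates(changed={"@bob:example.com"}, left={"@alice:example.com"}),
]
print(consolidate(queued))
# @alice ends up only in "left", @bob only in "changed".
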
View File

@ -170,6 +170,7 @@ def _load_appservice(
# When enabled, appservice transactions contain the following information:
# - device One-Time Key counts
# - device unused fallback key usage states
# - device list changes
msc3202_transaction_extensions = as_info.get("org.matrix.msc3202", False)
if not isinstance(msc3202_transaction_extensions, bool):
raise ValueError(

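For context, a hedged sketch of the parsed registration dict that the check above operates on. Only the org.matrix.msc3202 key is taken from this diff; the other fields, their values and the error message are illustrative, and a real registration file carries more fields than shown.

# Parsed application service registration (illustrative values).
as_info = {
    "id": "example-bridge",
    "url": "http://localhost:9000",
    "org.matrix.msc3202": True,  # opt in to the MSC3202 transaction extensions
}

msc3202_transaction_extensions = as_info.get("org.matrix.msc3202", False)
if not isinstance(msc3202_transaction_extensions, bool):
    raise ValueError("The `org.matrix.msc3202` option must be a boolean")
print(msc3202_transaction_extensions)
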
View File

@ -59,8 +59,9 @@ class ExperimentalConfig(Config):
"msc3202_device_masquerading", False
)
# Portion of MSC3202 related to transaction extensions:
# sending one-time key counts and fallback key usage to application services.
# The portion of MSC3202 related to transaction extensions:
# sending device list changes, one-time key counts and fallback key
# usage to application services.
self.msc3202_transaction_extensions: bool = experimental.get(
"msc3202_transaction_extensions", False
)
@ -77,3 +78,6 @@ class ExperimentalConfig(Config):
# The deprecated groups feature.
self.groups_enabled: bool = experimental.get("groups_enabled", True)
# MSC2654: Unread counts
self.msc2654_enabled: bool = experimental.get("msc2654_enabled", False)
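
A hedged illustration of the experimental_features mapping these options are read from, expressed as the parsed Python dict rather than homeserver YAML; only keys visible in this hunk are shown, and all values are examples.

# Parsed "experimental_features" section of the homeserver config (example values).
experimental = {
    "msc3202_device_masquerading": False,
    "msc3202_transaction_extensions": True,
    "groups_enabled": True,
    "msc2654_enabled": False,
}

msc3202_transaction_extensions: bool = experimental.get(
    "msc3202_transaction_extensions", False
)
msc2654_enabled: bool = experimental.get("msc2654_enabled", False)
print(msc3202_transaction_extensions, msc2654_enabled)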

View File

@ -16,7 +16,7 @@
import hashlib
import logging
import os
from typing import Any, Dict, Iterator, List, Optional
from typing import TYPE_CHECKING, Any, Dict, Iterator, List, Optional
import attr
import jsonschema
@ -38,6 +38,9 @@ from synapse.util.stringutils import random_string, random_string_with_symbols
from ._base import Config, ConfigError
if TYPE_CHECKING:
from signedjson.key import VerifyKeyWithExpiry
INSECURE_NOTARY_ERROR = """\
Your server is configured to accept key server responses without signature
validation or TLS certificate validation. This is likely to be very insecure. If
@ -300,7 +303,7 @@ class KeyConfig(Config):
def read_old_signing_keys(
self, old_signing_keys: Optional[JsonDict]
) -> Dict[str, VerifyKey]:
) -> Dict[str, "VerifyKeyWithExpiry"]:
if old_signing_keys is None:
return {}
keys = {}
@ -308,8 +311,8 @@ class KeyConfig(Config):
if is_signing_algorithm_supported(key_id):
key_base64 = key_data["key"]
key_bytes = decode_base64(key_base64)
verify_key = decode_verify_key_bytes(key_id, key_bytes)
verify_key.expired_ts = key_data["expired_ts"]
verify_key: "VerifyKeyWithExpiry" = decode_verify_key_bytes(key_id, key_bytes) # type: ignore[assignment]
verify_key.expired = key_data["expired_ts"]
keys[key_id] = verify_key
else:
raise ConfigError(
@ -422,7 +425,7 @@ def _parse_key_servers(
server_name = server["server_name"]
result = TrustedKeyServer(server_name=server_name)
verify_keys = server.get("verify_keys")
verify_keys: Optional[Dict[str, str]] = server.get("verify_keys")
if verify_keys is not None:
result.verify_keys = {}
for key_id, key_base64 in verify_keys.items():

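A hedged sketch of the mapping shape read_old_signing_keys() consumes, inferred from the field accesses above (key_data["key"] and key_data["expired_ts"]); the key ID, base64 material and timestamp below are invented, and no signedjson calls are made so the snippet stands alone.

# Illustrative old signing keys mapping: key ID -> base64 key material + expiry (ms).
old_signing_keys = {
    "ed25519:old_key": {
        "key": "MDEyMzQ1Njc4OWFiY2RlZg",  # made-up base64, not a real key
        "expired_ts": 1577836800000,
    }
}

for key_id, key_data in old_signing_keys.items():
    # read_old_signing_keys() additionally checks the algorithm and decodes the
    # key bytes; here we just show the structure being iterated.
    algorithm = key_id.split(":", 1)[0]
    print(algorithm, key_id, key_data["expired_ts"])
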
View File

@ -680,6 +680,18 @@ class ServerConfig(Config):
config.get("use_account_validity_in_account_status") or False
)
# This is a temporary option that enables full use of the new
# `device_lists_changes_in_room` table, without the backwards-compatibility
# code. It is primarily for testing. If enabled, the server should *not* be
# downgraded, as doing so may lead to missing device list updates.
self.use_new_device_lists_changes_in_room = (
config.get("use_new_device_lists_changes_in_room") or False
)
self.rooms_to_exclude_from_sync: List[str] = (
config.get("exclude_rooms_from_sync") or []
)
def has_tls_listener(self) -> bool:
return any(listener.tls for listener in self.listeners)
@ -1234,6 +1246,15 @@ class ServerConfig(Config):
# information about using custom templates.
#
#custom_template_directory: /path/to/custom/templates/
# List of rooms to exclude from sync responses. This is useful for server
# administrators wishing to group users into a room without these users being able
# to see it from their client.
#
# By default, no room is excluded.
#
#exclude_rooms_from_sync:
# - !foo:example.com
"""
% locals()
)

Some files were not shown because too many files have changed in this diff