Merge branch 'develop' into matrix-org-hotfixes

pull/8675/head
Richard van der Hoff 2020-07-09 11:06:52 +01:00
commit cc86fbc9ad
170 changed files with 2123 additions and 1169 deletions


@ -1,3 +1,120 @@
Synapse 1.16.0 (2020-07-08)
===========================
No significant changes since 1.16.0rc2.
Note that this release deprecates the `m.login.jwt` login method, renaming it
to `org.matrix.login.jwt`, as `m.login.jwt` is not part of the Matrix spec.
Otherwise the behaviour is identical. Synapse will accept both names for now,
but this may change in a future release.
Synapse 1.16.0rc2 (2020-07-02)
==============================
Synapse 1.16.0rc2 includes the security fixes released with Synapse 1.15.2.
Please see [below](#synapse-1152-2020-07-02) for more details.
Improved Documentation
----------------------
- Update postgres image in example `docker-compose.yaml` to tag `12-alpine`. ([\#7696](https://github.com/matrix-org/synapse/issues/7696))
Internal Changes
----------------
- Add some metrics for inbound and outbound federation latencies: `synapse_federation_server_pdu_process_time` and `synapse_event_processing_lag_by_event`. ([\#7771](https://github.com/matrix-org/synapse/issues/7771))
Synapse 1.15.2 (2020-07-02)
===========================
Due to the two security issues highlighted below, server administrators are
encouraged to update Synapse. We are not aware of these vulnerabilities being
exploited in the wild.
Security advisory
-----------------
* A malicious homeserver could force Synapse to reset the state in a room to a
small subset of the correct state. This affects all Synapse deployments which
federate with untrusted servers. ([96e9afe6](https://github.com/matrix-org/synapse/commit/96e9afe62500310977dc3cbc99a8d16d3d2fa15c))
* HTML pages served via Synapse were vulnerable to clickjacking attacks. This
predominantly affects homeservers with single-sign-on enabled, but all server
administrators are encouraged to upgrade. ([ea26e9a9](https://github.com/matrix-org/synapse/commit/ea26e9a98b0541fc886a1cb826a38352b7599dbe))
This was reported by [Quentin Gliech](https://sandhose.fr/).
Synapse 1.16.0rc1 (2020-07-01)
==============================
Features
--------
- Add an option to enable encryption by default for new rooms. ([\#7639](https://github.com/matrix-org/synapse/issues/7639))
- Add support for running multiple media repository workers. See [docs/workers.md](https://github.com/matrix-org/synapse/blob/release-v1.16.0/docs/workers.md) for instructions. ([\#7706](https://github.com/matrix-org/synapse/issues/7706))
- Media can now be marked as safe from being quarantined. ([\#7718](https://github.com/matrix-org/synapse/issues/7718))
- Expand the configuration options for auto-join rooms. ([\#7763](https://github.com/matrix-org/synapse/issues/7763))
Bugfixes
--------
- Remove `user_id` from the response to `GET /_matrix/client/r0/presence/{userId}/status` to match the specification. ([\#7606](https://github.com/matrix-org/synapse/issues/7606))
- In worker mode, ensure that replicated data has not already been received. ([\#7648](https://github.com/matrix-org/synapse/issues/7648))
- Fix intermittent exception during startup, introduced in Synapse 1.14.0. ([\#7663](https://github.com/matrix-org/synapse/issues/7663))
- Include a user-agent for federation and well-known requests. ([\#7677](https://github.com/matrix-org/synapse/issues/7677))
- Accept the proper field (`phone`) for the `m.id.phone` identifier type. The legacy field of `number` is still accepted as a fallback. Bug introduced in v0.20.0. ([\#7687](https://github.com/matrix-org/synapse/issues/7687))
- Fix "Starting db txn 'get_completed_ui_auth_stages' from sentinel context" warning. The bug was introduced in 1.13.0. ([\#7688](https://github.com/matrix-org/synapse/issues/7688))
- Compare the URI and method during user interactive authentication (instead of the URI twice). Bug introduced in 1.13.0. ([\#7689](https://github.com/matrix-org/synapse/issues/7689))
- Fix a long standing bug where the response to the `GET room_keys/version` endpoint had the incorrect type for the `etag` field. ([\#7691](https://github.com/matrix-org/synapse/issues/7691))
- Fix logged error during device resync in opentracing. Broke in v1.14.0. ([\#7698](https://github.com/matrix-org/synapse/issues/7698))
- Do not break push rule evaluation when receiving an event with a non-string body. This is a long-standing bug. ([\#7701](https://github.com/matrix-org/synapse/issues/7701))
- Fix a long-standing bug which resulted in an exception: "TypeError: argument of type 'ObservableDeferred' is not iterable". ([\#7708](https://github.com/matrix-org/synapse/issues/7708))
- The `synapse_port_db` script no longer fails when the `ui_auth_sessions` table is non-empty. This bug has existed since v1.13.0. ([\#7711](https://github.com/matrix-org/synapse/issues/7711))
- Synapse will now fetch media from the proper specified URL (using the r0 prefix instead of the unspecified v1). ([\#7714](https://github.com/matrix-org/synapse/issues/7714))
- Fix the tables ignored by `synapse_port_db` to be in sync with the current database schema. ([\#7717](https://github.com/matrix-org/synapse/issues/7717))
- Fix missing `Content-Length` on HTTP responses from the metrics handler. ([\#7730](https://github.com/matrix-org/synapse/issues/7730))
- Fix large state resolutions from stalling Synapse for seconds at a time. ([\#7735](https://github.com/matrix-org/synapse/issues/7735), [\#7746](https://github.com/matrix-org/synapse/issues/7746))
Improved Documentation
----------------------
- Spelling correction in sample_config.yaml. ([\#7652](https://github.com/matrix-org/synapse/issues/7652))
- Added instructions for how to use Keycloak via OpenID Connect to authenticate with Synapse. ([\#7659](https://github.com/matrix-org/synapse/issues/7659))
- Corrected misspelling of PostgreSQL. ([\#7724](https://github.com/matrix-org/synapse/issues/7724))
Deprecations and Removals
-------------------------
- Deprecate `m.login.jwt` login method in favour of `org.matrix.login.jwt`, as `m.login.jwt` is not part of the Matrix spec. ([\#7675](https://github.com/matrix-org/synapse/issues/7675))
Internal Changes
----------------
- Refactor getting replication updates from database. ([\#7636](https://github.com/matrix-org/synapse/issues/7636))
- Clean-up the login fallback code. ([\#7657](https://github.com/matrix-org/synapse/issues/7657))
- Increase the default SAML session expiry time to 15 minutes. ([\#7664](https://github.com/matrix-org/synapse/issues/7664))
- Convert the device message and pagination handlers to async/await. ([\#7678](https://github.com/matrix-org/synapse/issues/7678))
- Convert typing handler to async/await. ([\#7679](https://github.com/matrix-org/synapse/issues/7679))
- Require `parameterized` package version to be at least 0.7.0. ([\#7680](https://github.com/matrix-org/synapse/issues/7680))
- Refactor handling of `listeners` configuration settings. ([\#7681](https://github.com/matrix-org/synapse/issues/7681))
- Replace uses of `six.iterkeys`/`iteritems`/`itervalues` with `keys()`/`items()`/`values()`. ([\#7692](https://github.com/matrix-org/synapse/issues/7692))
- Add support for using `rust-python-jaeger-reporter` library to reduce jaeger tracing overhead. ([\#7697](https://github.com/matrix-org/synapse/issues/7697))
- Make Tox actions work on Debian 10. ([\#7703](https://github.com/matrix-org/synapse/issues/7703))
- Replace all remaining uses of `six` with native Python 3 equivalents. Contributed by @ilmari. ([\#7704](https://github.com/matrix-org/synapse/issues/7704))
- Fix broken link in sample config. ([\#7712](https://github.com/matrix-org/synapse/issues/7712))
- Speed up state res v2 across large state differences. ([\#7725](https://github.com/matrix-org/synapse/issues/7725))
- Convert directory handler to async/await. ([\#7727](https://github.com/matrix-org/synapse/issues/7727))
- Move `flake8` to the end of `scripts-dev/lint.sh` as it takes the longest and could cause the script to exit early. ([\#7738](https://github.com/matrix-org/synapse/issues/7738))
- Explain that the "test" conditional dependency set does not include all of the modules necessary to run the unit tests. ([\#7751](https://github.com/matrix-org/synapse/issues/7751))
- Add some metrics for inbound and outbound federation latencies: `synapse_federation_server_pdu_process_time` and `synapse_event_processing_lag_by_event`. ([\#7755](https://github.com/matrix-org/synapse/issues/7755))
Synapse 1.15.1 (2020-06-16)
===========================


@ -215,7 +215,7 @@ Using a reverse proxy with Synapse
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/proxy>`_ or
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_ or
`HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.

changelog.d/7021.bugfix Normal file

@ -0,0 +1 @@
Fix inconsistent handling of upper and lower case in email addresses when used as identifiers for login, etc. Contributed by @dklimpel.


@ -1 +0,0 @@
Remove `user_id` from the response to `GET /_matrix/client/r0/presence/{userId}/status` to match the specification.


@ -1 +0,0 @@
Add an option to enable encryption by default for new rooms.


@ -1 +0,0 @@
In worker mode, ensure that replicated data has not already been received.


@ -1 +0,0 @@
Spelling correction in sample_config.yaml.


@ -1 +0,0 @@
Clean-up the login fallback code.


@ -1 +0,0 @@
Added instructions for how to use Keycloak via OpenID Connect to authenticate with Synapse.


@ -1 +0,0 @@
Fix intermittent exception during startup, introduced in Synapse 1.14.0.


@ -1 +0,0 @@
Increase the default SAML session expiry time to 15 minutes.


@ -1 +0,0 @@
Deprecate `m.login.jwt` login method in favour of `org.matrix.login.jwt`, as `m.login.jwt` is not part of the Matrix spec.


@ -1 +0,0 @@
Include a user-agent for federation and well-known requests.


@ -1 +0,0 @@
Convert the device message and pagination handlers to async/await.


@ -1 +0,0 @@
Convert typing handler to async/await.


@ -1 +0,0 @@
Require `parameterized` package version to be at least 0.7.0.


@ -1 +0,0 @@
Refactor handling of `listeners` configuration settings.


@ -1 +0,0 @@
Accept the proper field (`phone`) for the `m.id.phone` identifier type. The legacy field of `number` is still accepted as a fallback. Bug introduced in v0.20.0-rc1.


@ -1 +0,0 @@
Fix "Starting db txn 'get_completed_ui_auth_stages' from sentinel context" warning. The bug was introduced in 1.13.0rc1.


@ -1 +0,0 @@
Compare the URI and method during user interactive authentication (instead of the URI twice). Bug introduced in 1.13.0rc1.


@ -1 +0,0 @@
Fix a long standing bug where the response to the `GET room_keys/version` endpoint had the incorrect type for the `etag` field.


@ -1 +0,0 @@
Replace uses of `six.iterkeys`/`iteritems`/`itervalues` with `keys()`/`items()`/`values()`.


@ -1 +0,0 @@
Add support for using `rust-python-jaeger-reporter` library to reduce jaeger tracing overhead.


@ -1 +0,0 @@
Fix logged error during device resync in opentracing. Broke in v1.14.0.


@ -1 +0,0 @@
Do not break push rule evaluation when receiving an event with a non-string body. This is a long-standing bug.


@ -1 +0,0 @@
Make Tox actions work on Debian 10.


@ -1 +0,0 @@
Replace all remaining uses of `six` with native Python 3 equivalents. Contributed by @ilmari.


@ -1 +0,0 @@
Add support for running multiple media repository workers. See [docs/workers.md](docs/workers.md) for instructions.


@ -1 +0,0 @@
Fix a long-standing bug which resulted in an exception: "TypeError: argument of type 'ObservableDeferred' is not iterable".


@ -1 +0,0 @@
The `synapse_port_db` script no longer fails when the `ui_auth_sessions` table is non-empty. This bug has existed since v1.13.0rc1.


@ -1 +0,0 @@
Fix broken link in sample config.


@ -1 +0,0 @@
Synapse will now fetch media from the proper specified URL (using the r0 prefix instead of the unspecified v1).


@ -1 +0,0 @@
Fix the tables ignored by `synapse_port_db` to be in sync with the current database schema.


@ -1 +0,0 @@
Media can now be marked as safe from being quarantined.


@ -1 +0,0 @@
Corrected misspelling of PostgreSQL.


@ -1 +0,0 @@
Speed up state res v2 across large state differences.


@ -1 +0,0 @@
Convert directory handler to async/await.


@ -1 +0,0 @@
Fix missing `Content-Length` on HTTP responses from the metrics handler.

changelog.d/7732.bugfix Normal file

@ -0,0 +1 @@
Fix "Tried to close a non-active scope!" error messages when opentracing is enabled.


@ -1 +0,0 @@
Fix large state resolutions from stalling Synapse for seconds at a time.


@ -1 +0,0 @@
Move `flake8` to the end of `scripts-dev/lint.sh` as it takes the longest and could cause the script to exit early.


@ -1 +0,0 @@
Fix large state resolutions from stalling Synapse for seconds at a time.


@ -1 +0,0 @@
Explain that the "test" conditional dependency set does not include all of the modules necessary to run the unit tests.


@ -1 +0,0 @@
Add some metrics for inbound and outbound federation latencies: `synapse_federation_server_pdu_process_time` and `synapse_event_processing_lag_by_event`.

changelog.d/7760.bugfix Normal file

@ -0,0 +1 @@
Fix incorrect error message when database CTYPE was set incorrectly.


@ -1 +0,0 @@
Add unread messages count to sync responses.


@ -1 +0,0 @@
Expand the configuration options for auto-join rooms.

changelog.d/7765.misc Normal file

@ -0,0 +1 @@
Send push notifications with a high or low priority depending upon whether they may generate user-observable effects.

changelog.d/7766.bugfix Normal file

@ -0,0 +1 @@
Fix to not ignore `set_tweak` actions in Push Rules that have no `value`, as permitted by the specification.
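The fix above concerns `set_tweak` actions whose optional `value` key is absent, which the push rules section of the Matrix spec permits. A hedged sketch of what honouring a value-less tweak looks like (an illustrative helper, not Synapse's actual evaluator; the default-to-`True` behaviour follows the spec's treatment of the `highlight` tweak):

```python
# Illustrative only: collect 'set_tweak' actions from a push rule's
# action list, tolerating actions that omit the optional "value" key.
def extract_tweaks(actions):
    tweaks = {}
    for action in actions:
        if isinstance(action, dict) and "set_tweak" in action:
            # A missing "value" implies a default rather than an error.
            tweaks[action["set_tweak"]] = action.get("value", True)
    return tweaks

# An action list where set_tweak carries no "value", as the spec permits:
print(extract_tweaks(["notify", {"set_tweak": "highlight"}]))  # {'highlight': True}
```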

changelog.d/7768.misc Normal file

@ -0,0 +1 @@
Use symbolic names for replication stream names.

changelog.d/7769.misc Normal file

@ -0,0 +1 @@
Add early returns to `_check_for_soft_fail`.

changelog.d/7770.misc Normal file

@ -0,0 +1 @@
Fix up `synapse.handlers.federation` to pass mypy.

changelog.d/7775.misc Normal file

@ -0,0 +1 @@
Convert the appserver handler to async/await.

changelog.d/7776.doc Normal file

@ -0,0 +1 @@
Improve the documentation of the non-standard JSON web token login type.

changelog.d/7779.bugfix Normal file

@ -0,0 +1 @@
Fix synctl to handle empty config files correctly. Contributed by @kotovalexarian.

changelog.d/7780.misc Normal file

@ -0,0 +1 @@
Allow higher versions of `prometheus_client` below 0.9.0, which are expected to introduce no breaking changes. Contributed by Oliver Kurz.

changelog.d/7786.misc Normal file

@ -0,0 +1 @@
Update linting scripts and codebase to be compatible with `isort` v5.

changelog.d/7789.doc Normal file

@ -0,0 +1 @@
Update doc links for Caddy. Contributed by Nicolai Søborg.

changelog.d/7791.docker Normal file

@ -0,0 +1 @@
Include libwebp in the Dockerfile to properly handle webp image uploads.

changelog.d/7793.misc Normal file

@ -0,0 +1 @@
Stop populating unused table `local_invites`.

changelog.d/7799.misc Normal file

@ -0,0 +1 @@
Ensure that strings (not bytes) are passed into JSON serialization.

changelog.d/7800.misc Normal file

@ -0,0 +1 @@
Switch from simplejson to the standard library json.

changelog.d/7804.bugfix Normal file

@ -0,0 +1 @@
Fix 'stuck invites' which happen when we are unable to reject a room invite received over federation.

changelog.d/7805.misc Normal file

@ -0,0 +1 @@
Add `signing_key` property to `HomeServer` to save code duplication.


@ -50,7 +50,7 @@ services:
- traefik.http.routers.https-synapse.tls.certResolver=le-ssl
db:
image: docker.io/postgres:10-alpine
image: docker.io/postgres:12-alpine
# Change that password, of course!
environment:
- POSTGRES_USER=synapse

debian/changelog vendored

@ -1,3 +1,15 @@
matrix-synapse-py3 (1.16.0) stable; urgency=medium
* New synapse release 1.16.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 08 Jul 2020 11:03:48 +0100
matrix-synapse-py3 (1.15.2) stable; urgency=medium
* New synapse release 1.15.2.
-- Synapse Packaging team <packages@matrix.org> Thu, 02 Jul 2020 10:34:00 -0400
matrix-synapse-py3 (1.15.1) stable; urgency=medium
* New synapse release 1.15.1.


@ -24,6 +24,7 @@ RUN apk add \
build-base \
libffi-dev \
libjpeg-turbo-dev \
libwebp-dev \
libressl-dev \
libxslt-dev \
linux-headers \
@ -61,6 +62,7 @@ FROM docker.io/python:${PYTHON_VERSION}-alpine3.11
RUN apk add --no-cache --virtual .runtime_deps \
libffi \
libjpeg-turbo \
libwebp \
libressl \
libxslt \
libpq \

docs/jwt.md Normal file

@ -0,0 +1,90 @@
# JWT Login Type
Synapse comes with a non-standard login type to support
[JSON Web Tokens](https://en.wikipedia.org/wiki/JSON_Web_Token). In general the
documentation for
[the login endpoint](https://matrix.org/docs/spec/client_server/r0.6.1#login)
is still valid (and the mechanism works similarly to the
[token based login](https://matrix.org/docs/spec/client_server/r0.6.1#token-based)).
To log in using a JSON Web Token, clients should submit a `/login` request as
follows:
```json
{
"type": "org.matrix.login.jwt",
"token": "<jwt>"
}
```
Note that the login type of `m.login.jwt` is supported, but is deprecated. This
will be removed in a future version of Synapse.
The `jwt` should encode the local part of the user ID as the standard `sub`
claim. In the case that the token is not valid, the homeserver must respond with
`401 Unauthorized` and an error code of `M_UNAUTHORIZED`.
(Note that this differs from the token based logins which return a
`403 Forbidden` and an error code of `M_FORBIDDEN` if an error occurs.)
As with other login types, there are additional fields (e.g. `device_id` and
`initial_device_display_name`) which can be included in the above request.
## Preparing Synapse
The JSON Web Token integration in Synapse uses the
[`PyJWT`](https://pypi.org/project/pyjwt/) library, which must be installed
as follows:
* The relevant libraries are included in the Docker images and Debian packages
provided by `matrix.org` so no further action is needed.
* If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
install synapse[pyjwt]` to install the necessary dependencies.
* For other installation mechanisms, see the documentation provided by the
maintainer.
To enable the JSON web token integration, you should then add a `jwt_config` section
to your configuration file (or uncomment the `enabled: true` line in the
existing section). See [sample_config.yaml](./sample_config.yaml) for some
sample settings.
## How to test JWT as a developer
Although JSON Web Tokens are typically generated from an external server, the
examples below use [PyJWT](https://pyjwt.readthedocs.io/en/latest/) directly.
1. Configure Synapse with JWT logins:
```yaml
jwt_config:
enabled: true
secret: "my-secret-token"
algorithm: "HS256"
```
2. Generate a JSON web token:
```bash
$ pyjwt --key=my-secret-token --alg=HS256 encode sub=test-user
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc
```
3. Query for the login types and ensure `org.matrix.login.jwt` is there:
```bash
curl http://localhost:8080/_matrix/client/r0/login
```
4. Log in using the generated JSON web token from above:
```bash
$ curl http://localhost:8080/_matrix/client/r0/login -X POST \
--data '{"type":"org.matrix.login.jwt","token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc"}'
{
"access_token": "<access token>",
"device_id": "ACBDEFGHI",
"home_server": "localhost:8080",
"user_id": "@test-user:localhost:8080"
}
```
You should now be able to use the returned access token to query the client API.
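For readers without the `pyjwt` CLI, the token from step 2 can be reconstructed with the Python standard library alone. This is a sketch of HS256 JWT construction (what `pyjwt encode` does under the hood), using the `my-secret-token` secret from the config above:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT segments are unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def hs256_jwt(payload: dict, secret: str) -> str:
    header = {"typ": "JWT", "alg": "HS256"}
    segments = [
        b64url(json.dumps(header, separators=(",", ":")).encode()),
        b64url(json.dumps(payload, separators=(",", ":")).encode()),
    ]
    # The signature is HMAC-SHA256 over "<header>.<payload>"
    signing_input = ".".join(segments).encode("ascii")
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return ".".join(segments + [b64url(sig)])

print(hs256_jwt({"sub": "test-user"}, "my-secret-token"))
```

The first two dot-separated segments match the header and payload of the token shown in step 2.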


@ -3,7 +3,7 @@
It is recommended to put a reverse proxy such as
[nginx](https://nginx.org/en/docs/http/ngx_http_proxy_module.html),
[Apache](https://httpd.apache.org/docs/current/mod/mod_proxy_http.html),
[Caddy](https://caddyserver.com/docs/proxy) or
[Caddy](https://caddyserver.com/docs/quick-starts/reverse-proxy) or
[HAProxy](https://www.haproxy.org/) in front of Synapse. One advantage
of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root


@ -1804,12 +1804,39 @@ sso:
#template_dir: "res/templates"
# The JWT needs to contain a globally unique "sub" (subject) claim.
# JSON web token integration. The following settings can be used to make
# Synapse use JSON web tokens for authentication, instead of its internal
# password database.
#
# Each JSON Web Token needs to contain a "sub" (subject) claim, which is
# used as the localpart of the mxid.
#
# Note that this is a non-standard login type and client support is
# expected to be non-existent.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
#
#jwt_config:
# enabled: true
# secret: "a secret"
# algorithm: "HS256"
# Uncomment the following to enable authorization using JSON web
# tokens. Defaults to false.
#
#enabled: true
# This is either the private shared secret or the public key used to
# decode the contents of the JSON web token.
#
# Required if 'enabled' is true.
#
#secret: "provided-by-your-issuer"
# The algorithm used to sign the JSON web token.
#
# Supported algorithms are listed at
# https://pyjwt.readthedocs.io/en/latest/algorithms.html
#
# Required if 'enabled' is true.
#
#algorithm: "provided-by-your-issuer"
password_config:


@ -2,9 +2,9 @@ import argparse
import json
import logging
import sys
import urllib2
import dns.resolver
import urllib2
from signedjson.key import decode_verify_key_bytes, write_signing_keys
from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64


@ -15,7 +15,7 @@ else
fi
echo "Linting these locations: $files"
isort -y -rc $files
isort $files
python3 -m black $files
./scripts-dev/config-lint.sh
flake8 $files


@ -26,7 +26,6 @@ ignore=W503,W504,E203,E731,E501
[isort]
line_length = 88
not_skip = __init__.py
sections=FUTURE,STDLIB,COMPAT,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
default_section=THIRDPARTY
known_first_party = synapse


@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.15.1"
__version__ = "1.16.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@ -12,7 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Optional
@ -22,7 +21,6 @@ from netaddr import IPAddress
from twisted.internet import defer
from twisted.web.server import Request
import synapse.logging.opentracing as opentracing
import synapse.types
from synapse import event_auth
from synapse.api.auth_blocking import AuthBlocking
@ -35,6 +33,7 @@ from synapse.api.errors import (
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase
from synapse.logging import opentracing as opentracing
from synapse.types import StateMap, UserID
from synapse.util.caches import register_cache
from synapse.util.caches.lrucache import LruCache


@ -98,7 +98,6 @@ class ApplicationServiceApi(SimpleHttpClient):
if service.url is None:
return False
uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
response = None
try:
response = yield self.get_json(uri, {"access_token": service.hs_token})
if response is not None: # just an empty json object


@ -16,6 +16,7 @@ from synapse.config._base import ConfigError
if __name__ == "__main__":
import sys
from synapse.config.homeserver import HomeServerConfig
action = sys.argv[1]


@ -14,7 +14,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
# This file can't be called email.py because if it is, we cannot:
@ -145,8 +144,8 @@ class EmailConfig(Config):
or self.threepid_behaviour_email == ThreepidBehaviour.LOCAL
):
# make sure we can import the required deps
import jinja2
import bleach
import jinja2
# prevent unused warnings
jinja2


@ -45,10 +45,37 @@ class JWTConfig(Config):
def generate_config_section(self, **kwargs):
return """\
# The JWT needs to contain a globally unique "sub" (subject) claim.
# JSON web token integration. The following settings can be used to make
# Synapse use JSON web tokens for authentication, instead of its internal
# password database.
#
# Each JSON Web Token needs to contain a "sub" (subject) claim, which is
# used as the localpart of the mxid.
#
# Note that this is a non-standard login type and client support is
# expected to be non-existent.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
#
#jwt_config:
# enabled: true
# secret: "a secret"
# algorithm: "HS256"
# Uncomment the following to enable authorization using JSON web
# tokens. Defaults to false.
#
#enabled: true
# This is either the private shared secret or the public key used to
# decode the contents of the JSON web token.
#
# Required if 'enabled' is true.
#
#secret: "provided-by-your-issuer"
# The algorithm used to sign the JSON web token.
#
# Supported algorithms are listed at
# https://pyjwt.readthedocs.io/en/latest/algorithms.html
#
# Required if 'enabled' is true.
#
#algorithm: "provided-by-your-issuer"
"""


@ -162,7 +162,7 @@ class EventBuilderFactory(object):
def __init__(self, hs):
self.clock = hs.get_clock()
self.hostname = hs.hostname
self.signing_key = hs.config.signing_key[0]
self.signing_key = hs.signing_key
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
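This hunk (and the matching `changelog.d/7805.misc` entry) replaces repeated `hs.config.signing_key[0]` lookups with a `signing_key` property on `HomeServer`. A minimal sketch of the pattern, with hypothetical stand-in classes (the real ones are far richer):

```python
# Hypothetical stand-ins to illustrate the property extraction.
class Config:
    def __init__(self):
        # Synapse allows multiple signing keys; the first is the active one.
        self.signing_key = ["ed25519:a_key_object"]

class HomeServer:
    def __init__(self, config):
        self.config = config

    @property
    def signing_key(self):
        # Call sites write hs.signing_key instead of repeating
        # the hs.config.signing_key[0] indexing everywhere.
        return self.config.signing_key[0]

hs = HomeServer(Config())
print(hs.signing_key)  # ed25519:a_key_object
```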


@ -87,7 +87,7 @@ class FederationClient(FederationBase):
self.transport_layer = hs.get_federation_transport_client()
self.hostname = hs.hostname
self.signing_key = hs.config.signing_key[0]
self.signing_key = hs.signing_key
self._get_pdu_cache = ExpiringCache(
cache_name="get_pdu_cache",


@ -224,7 +224,7 @@ class FederationSender(object):
synapse.metrics.event_processing_lag_by_event.labels(
"federation_sender"
).observe(now - ts)
).observe((now - ts) / 1000)
async def handle_room_events(events: Iterable[EventBase]) -> None:
with Measure(self.clock, "handle_room_events"):
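The `/ 1000` added in this hunk converts the millisecond timestamps from Synapse's clock into the seconds that Prometheus histograms conventionally record. A small worked example (the timestamp values are made up):

```python
# Hypothetical millisecond timestamps, as returned by clock.time_msec()
now = 1_594_290_000_500   # when the event was processed
ts = 1_594_290_000_000    # when the event was received

lag_ms = now - ts                  # what the old code observed
lag_seconds = (now - ts) / 1000    # what the histogram should observe
print(lag_ms, lag_seconds)  # 500 0.5
```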


@ -361,11 +361,7 @@ class BaseFederationServlet(object):
continue
server.register_paths(
method,
(pattern,),
self._wrap(code),
self.__class__.__name__,
trace=False,
method, (pattern,), self._wrap(code), self.__class__.__name__,
)


@ -70,7 +70,7 @@ class GroupAttestationSigning(object):
self.keyring = hs.get_keyring()
self.clock = hs.get_clock()
self.server_name = hs.hostname
self.signing_key = hs.config.signing_key[0]
self.signing_key = hs.signing_key
@defer.inlineCallbacks
def verify_attestation(self, attestation, group_id, user_id, server_name=None):


@ -41,7 +41,7 @@ class GroupsServerWorkerHandler(object):
self.clock = hs.get_clock()
self.keyring = hs.get_keyring()
self.is_mine_id = hs.is_mine_id
self.signing_key = hs.config.signing_key[0]
self.signing_key = hs.signing_key
self.server_name = hs.hostname
self.attestations = hs.get_groups_attestation_signing()
self.transport_client = hs.get_federation_transport_client()


@ -48,8 +48,7 @@ class ApplicationServicesHandler(object):
self.current_max = 0
self.is_processing = False
@defer.inlineCallbacks
def notify_interested_services(self, current_id):
async def notify_interested_services(self, current_id):
"""Notifies (pushes) all application services interested in this event.
Pushing is done asynchronously, so this method won't block for any
@ -74,7 +73,7 @@ class ApplicationServicesHandler(object):
(
upper_bound,
events,
) = yield self.store.get_new_events_for_appservice(
) = await self.store.get_new_events_for_appservice(
self.current_max, limit
)
@ -85,10 +84,9 @@ class ApplicationServicesHandler(object):
for event in events:
events_by_room.setdefault(event.room_id, []).append(event)
@defer.inlineCallbacks
def handle_event(event):
async def handle_event(event):
# Gather interested services
services = yield self._get_services_for_event(event)
services = await self._get_services_for_event(event)
if len(services) == 0:
return # no services need notifying
@ -96,9 +94,9 @@ class ApplicationServicesHandler(object):
# query API for all services which match that user regex.
# This needs to block as these user queries need to be
# made BEFORE pushing the event.
yield self._check_user_exists(event.sender)
await self._check_user_exists(event.sender)
if event.type == EventTypes.Member:
yield self._check_user_exists(event.state_key)
await self._check_user_exists(event.state_key)
if not self.started_scheduler:
@ -115,17 +113,16 @@ class ApplicationServicesHandler(object):
self.scheduler.submit_event_for_as(service, event)
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(event.event_id)
ts = await self.store.get_received_ts(event.event_id)
synapse.metrics.event_processing_lag_by_event.labels(
"appservice_sender"
).observe(now - ts)
).observe((now - ts) / 1000)
@defer.inlineCallbacks
def handle_room_events(events):
async def handle_room_events(events):
for event in events:
yield handle_event(event)
await handle_event(event)
yield make_deferred_yieldable(
await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(handle_room_events, evs)
@ -135,10 +132,10 @@ class ApplicationServicesHandler(object):
)
)
yield self.store.set_appservice_last_pos(upper_bound)
await self.store.set_appservice_last_pos(upper_bound)
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
ts = await self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_positions.labels(
"appservice_sender"
@ -161,8 +158,7 @@ class ApplicationServicesHandler(object):
finally:
self.is_processing = False
@defer.inlineCallbacks
def query_user_exists(self, user_id):
async def query_user_exists(self, user_id):
"""Check if any application service knows this user_id exists.
Args:
@ -170,15 +166,14 @@ class ApplicationServicesHandler(object):
Returns:
True if this user exists on at least one application service.
"""
user_query_services = yield self._get_services_for_user(user_id=user_id)
user_query_services = self._get_services_for_user(user_id=user_id)
for user_service in user_query_services:
is_known_user = yield self.appservice_api.query_user(user_service, user_id)
is_known_user = await self.appservice_api.query_user(user_service, user_id)
if is_known_user:
return True
return False
@defer.inlineCallbacks
def query_room_alias_exists(self, room_alias):
async def query_room_alias_exists(self, room_alias):
"""Check if an application service knows this room alias exists.
Args:
@ -193,19 +188,18 @@ class ApplicationServicesHandler(object):
s for s in services if (s.is_interested_in_alias(room_alias_str))
]
for alias_service in alias_query_services:
is_known_alias = yield self.appservice_api.query_alias(
is_known_alias = await self.appservice_api.query_alias(
alias_service, room_alias_str
)
if is_known_alias:
# the alias exists now so don't query more ASes.
result = yield self.store.get_association_from_room_alias(room_alias)
result = await self.store.get_association_from_room_alias(room_alias)
return result
@defer.inlineCallbacks
def query_3pe(self, kind, protocol, fields):
services = yield self._get_services_for_3pn(protocol)
async def query_3pe(self, kind, protocol, fields):
services = self._get_services_for_3pn(protocol)
results = yield make_deferred_yieldable(
results = await make_deferred_yieldable(
defer.DeferredList(
[
run_in_background(
@ -224,8 +218,7 @@ class ApplicationServicesHandler(object):
return ret
@defer.inlineCallbacks
def get_3pe_protocols(self, only_protocol=None):
async def get_3pe_protocols(self, only_protocol=None):
services = self.store.get_app_services()
protocols = {}
@ -238,7 +231,7 @@ class ApplicationServicesHandler(object):
if p not in protocols:
protocols[p] = []
info = yield self.appservice_api.get_3pe_protocol(s, p)
info = await self.appservice_api.get_3pe_protocol(s, p)
if info is not None:
protocols[p].append(info)
@ -263,8 +256,7 @@ class ApplicationServicesHandler(object):
return protocols
@defer.inlineCallbacks
def _get_services_for_event(self, event):
async def _get_services_for_event(self, event):
"""Retrieve a list of application services interested in this event.
Args:
@ -280,7 +272,7 @@ class ApplicationServicesHandler(object):
# inside of a list comprehension anymore.
interested_list = []
for s in services:
if (yield s.is_interested(event, self.store)):
if await s.is_interested(event, self.store):
interested_list.append(s)
return interested_list
@ -288,21 +280,20 @@ class ApplicationServicesHandler(object):
def _get_services_for_user(self, user_id):
services = self.store.get_app_services()
interested_list = [s for s in services if (s.is_interested_in_user(user_id))]
return defer.succeed(interested_list)
return interested_list
def _get_services_for_3pn(self, protocol):
services = self.store.get_app_services()
interested_list = [s for s in services if s.is_interested_in_protocol(protocol)]
return defer.succeed(interested_list)
return interested_list
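The two helpers above stop wrapping their results in `defer.succeed`, since with native coroutines an ordinary return value no longer needs to be a Deferred. A reduced sketch of the simplification (class and names are illustrative, not Synapse's):

```python
class Service:
    """Minimal stand-in for an application service registration."""

    def __init__(self, users):
        self._users = set(users)

    def is_interested_in_user(self, user_id):
        return user_id in self._users

def get_services_for_user(services, user_id):
    # Before: returned defer.succeed(interested_list), forcing callers to
    # yield it. After: a plain function returning the list directly.
    return [s for s in services if s.is_interested_in_user(user_id)]

services = [Service({"@a:example.org"}), Service({"@b:example.org"})]
matched = get_services_for_user(services, "@a:example.org")
```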
@defer.inlineCallbacks
def _is_unknown_user(self, user_id):
async def _is_unknown_user(self, user_id):
if not self.is_mine_id(user_id):
# we don't know if they are unknown or not since it isn't one of our
# users. We can't poke ASes.
return False
user_info = yield self.store.get_user_by_id(user_id)
user_info = await self.store.get_user_by_id(user_id)
if user_info:
return False
@ -311,10 +302,9 @@ class ApplicationServicesHandler(object):
service_list = [s for s in services if s.sender == user_id]
return len(service_list) == 0
@defer.inlineCallbacks
def _check_user_exists(self, user_id):
unknown_user = yield self._is_unknown_user(user_id)
async def _check_user_exists(self, user_id):
unknown_user = await self._is_unknown_user(user_id)
if unknown_user:
exists = yield self.query_user_exists(user_id)
exists = await self.query_user_exists(user_id)
return exists
return True
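The application-service handler above is converted from Twisted `@defer.inlineCallbacks` generators to native coroutines: each `yield` becomes an `await` and the decorator disappears. A minimal sketch of the same pattern using plain asyncio (the names and the millisecond timestamp are illustrative, not Synapse's actual storage layer):

```python
import asyncio

async def get_received_ts(event_id):
    # Stand-in for the database lookup; returns a timestamp in milliseconds.
    return 1_594_000_000_000

async def handle_event(event_id, now_ms):
    # Before the change this was an @defer.inlineCallbacks generator using
    # `yield`; afterwards it is a native coroutine using `await`.
    ts = await get_received_ts(event_id)
    # The latency metric expects seconds, hence the division by 1000
    # added alongside the observe() call in the diff.
    return (now_ms - ts) / 1000

lag = asyncio.run(handle_event("$event:example.org", 1_594_000_000_500))
```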


@ -13,7 +13,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import time
import unicodedata
@ -24,7 +23,6 @@ import attr
import bcrypt # type: ignore[import]
import pymacaroons
import synapse.util.stringutils as stringutils
from synapse.api.constants import LoginType
from synapse.api.errors import (
AuthError,
@ -45,6 +43,8 @@ from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.module_api import ModuleApi
from synapse.push.mailer import load_jinja2_templates
from synapse.types import Requester, UserID
from synapse.util import stringutils as stringutils
from synapse.util.threepids import canonicalise_email
from ._base import BaseHandler
@ -928,7 +928,7 @@ class AuthHandler(BaseHandler):
# for the presence of an email address during password reset was
# case sensitive).
if medium == "email":
address = address.lower()
address = canonicalise_email(address)
await self.store.user_add_threepid(
user_id, medium, address, validated_at, self.hs.get_clock().time_msec()
@ -956,7 +956,7 @@ class AuthHandler(BaseHandler):
# 'Canonicalise' email addresses as per above
if medium == "email":
address = address.lower()
address = canonicalise_email(address)
identity_handler = self.hs.get_handlers().identity_handler
result = await identity_handler.try_unbind_threepid(
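Both hunks above replace a bare `address.lower()` with `canonicalise_email`, so that storing, binding, and looking up a threepid all agree on one canonical form. A hedged sketch of what such a canonicaliser might do (this is an illustration of the idea, not Synapse's exact implementation in `synapse.util.threepids`):

```python
def canonicalise_email(address: str) -> str:
    # Illustrative canonicalisation: trim whitespace, check the basic
    # local@domain shape, and lower-case the result, raising on clearly
    # malformed input instead of silently storing it.
    address = address.strip()
    if address.count("@") != 1:
        raise ValueError("Unable to parse email address")
    local, domain = address.split("@")
    if not local or not domain:
        raise ValueError("Unable to parse email address")
    return f"{local.lower()}@{domain.lower()}"

canon = canonicalise_email("User@Example.COM ")
```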


@ -12,11 +12,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import urllib
import xml.etree.ElementTree as ET
from typing import Dict, Optional, Tuple
from xml.etree import ElementTree as ET
from twisted.web.client import PartialDownloadError


@ -19,8 +19,9 @@
import itertools
import logging
from collections import Container
from http import HTTPStatus
from typing import Dict, Iterable, List, Optional, Sequence, Tuple
from typing import Dict, Iterable, List, Optional, Sequence, Tuple, Union
import attr
from signedjson.key import decode_verify_key_bytes
@ -742,6 +743,9 @@ class FederationHandler(BaseHandler):
# device and recognize the algorithm then we can work out the
# exact key to expect. Otherwise check it matches any key we
# have for that device.
current_keys = [] # type: Container[str]
if device:
keys = device.get("keys", {}).get("keys", {})
@ -758,15 +762,15 @@ class FederationHandler(BaseHandler):
current_keys = keys.values()
elif device_id:
# We don't have any keys for the device ID.
current_keys = []
pass
else:
# The event didn't include a device ID, so we just look for
# keys across all devices.
current_keys = (
current_keys = [
key
for device in cached_devices
for key in device.get("keys", {}).get("keys", {}).values()
)
]
# We now check that the sender key matches (one of) the expected
# keys.
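`current_keys` changes from a generator expression to a list comprehension: a generator can be consumed only once, while the subsequent sender-key check may need to inspect the collection again (and the `Container[str]` annotation expects a real container). A small demonstration of the pitfall:

```python
devices = [
    {"keys": {"keys": {"ed25519:AAA": "key1"}}},
    {"keys": {"keys": {"ed25519:BBB": "key2"}}},
]

# Generator: each membership test consumes items, so a second lookup of the
# same key can silently fail.
gen = (k for d in devices for k in d.get("keys", {}).get("keys", {}).values())
first = "key1" in gen   # True, but advances the generator past "key1"
second = "key1" in gen  # False: the generator cannot rewind

# List: safe to test membership repeatedly.
keys = [k for d in devices for k in d.get("keys", {}).get("keys", {}).values()]
```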
@ -1011,7 +1015,7 @@ class FederationHandler(BaseHandler):
if e_type == EventTypes.Member and event.membership == Membership.JOIN
]
joined_domains = {}
joined_domains = {} # type: Dict[str, int]
for u, d in joined_users:
try:
dom = get_domain_from_id(u)
@ -1277,14 +1281,15 @@ class FederationHandler(BaseHandler):
try:
# Try the host we successfully got a response to /make_join/
# request first.
host_list = list(target_hosts)
try:
target_hosts.remove(origin)
target_hosts.insert(0, origin)
host_list.remove(origin)
host_list.insert(0, origin)
except ValueError:
pass
ret = await self.federation_client.send_join(
target_hosts, event, room_version_obj
host_list, event, room_version_obj
)
origin = ret["origin"]
@ -1562,7 +1567,7 @@ class FederationHandler(BaseHandler):
room_version,
event.get_pdu_json(),
self.hs.hostname,
self.hs.config.signing_key[0],
self.hs.signing_key,
)
)
@ -1584,13 +1589,14 @@ class FederationHandler(BaseHandler):
# Try the host that we successfully called /make_leave/ on first for

# the /send_leave/ request.
host_list = list(target_hosts)
try:
target_hosts.remove(origin)
target_hosts.insert(0, origin)
host_list.remove(origin)
host_list.insert(0, origin)
except ValueError:
pass
await self.federation_client.send_leave(target_hosts, event)
await self.federation_client.send_leave(host_list, event)
context = await self.state_handler.compute_event_context(event)
stream_id = await self.persist_events_and_notify([(event, context)])
@ -1604,7 +1610,7 @@ class FederationHandler(BaseHandler):
user_id: str,
membership: str,
content: JsonDict = {},
params: Optional[Dict[str, str]] = None,
params: Optional[Dict[str, Union[str, Iterable[str]]]] = None,
) -> Tuple[str, EventBase, RoomVersion]:
(
origin,
@ -2018,8 +2024,8 @@ class FederationHandler(BaseHandler):
auth_events_ids = await self.auth.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events = await self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events.values()}
auth_events_x = await self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_x.values()}
# This is a hack to fix some old rooms where the initial join event
# didn't reference the create event in its auth events.
@ -2055,76 +2061,67 @@ class FederationHandler(BaseHandler):
# For new (non-backfilled and non-outlier) events we check if the event
# passes auth based on the current state. If it doesn't then we
# "soft-fail" the event.
do_soft_fail_check = not backfilled and not event.internal_metadata.is_outlier()
if do_soft_fail_check:
extrem_ids = await self.store.get_latest_event_ids_in_room(event.room_id)
if backfilled or event.internal_metadata.is_outlier():
return
extrem_ids = set(extrem_ids)
prev_event_ids = set(event.prev_event_ids())
extrem_ids = await self.store.get_latest_event_ids_in_room(event.room_id)
extrem_ids = set(extrem_ids)
prev_event_ids = set(event.prev_event_ids())
if extrem_ids == prev_event_ids:
# If they're the same then the current state is the same as the
# state at the event, so no point rechecking auth for soft fail.
do_soft_fail_check = False
if extrem_ids == prev_event_ids:
# If they're the same then the current state is the same as the
# state at the event, so no point rechecking auth for soft fail.
return
if do_soft_fail_check:
room_version = await self.store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
room_version = await self.store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
# Calculate the "current state".
if state is not None:
# If we're explicitly given the state then we won't have all the
# prev events, and so we have a gap in the graph. In this case
# we want to be a little careful as we might have been down for
# a while and have an incorrect view of the current state,
# however we still want to do checks as gaps are easy to
# maliciously manufacture.
#
# So we use a "current state" that is actually a state
# resolution across the current forward extremities and the
# given state at the event. This should correctly handle cases
# like bans, especially with state res v2.
# Calculate the "current state".
if state is not None:
# If we're explicitly given the state then we won't have all the
# prev events, and so we have a gap in the graph. In this case
# we want to be a little careful as we might have been down for
# a while and have an incorrect view of the current state,
# however we still want to do checks as gaps are easy to
# maliciously manufacture.
#
# So we use a "current state" that is actually a state
# resolution across the current forward extremities and the
# given state at the event. This should correctly handle cases
# like bans, especially with state res v2.
state_sets = await self.state_store.get_state_groups(
event.room_id, extrem_ids
)
state_sets = list(state_sets.values())
state_sets.append(state)
current_state_ids = await self.state_handler.resolve_events(
room_version, state_sets, event
)
current_state_ids = {
k: e.event_id for k, e in current_state_ids.items()
}
else:
current_state_ids = await self.state_handler.get_current_state_ids(
event.room_id, latest_event_ids=extrem_ids
)
logger.debug(
"Doing soft-fail check for %s: state %s",
event.event_id,
current_state_ids,
state_sets = await self.state_store.get_state_groups(
event.room_id, extrem_ids
)
state_sets = list(state_sets.values())
state_sets.append(state)
current_state_ids = await self.state_handler.resolve_events(
room_version, state_sets, event
)
current_state_ids = {k: e.event_id for k, e in current_state_ids.items()}
else:
current_state_ids = await self.state_handler.get_current_state_ids(
event.room_id, latest_event_ids=extrem_ids
)
# Now check if event pass auth against said current state
auth_types = auth_types_for_event(event)
current_state_ids = [
e for k, e in current_state_ids.items() if k in auth_types
]
logger.debug(
"Doing soft-fail check for %s: state %s", event.event_id, current_state_ids,
)
current_auth_events = await self.store.get_events(current_state_ids)
current_auth_events = {
(e.type, e.state_key): e for e in current_auth_events.values()
}
# Now check if event pass auth against said current state
auth_types = auth_types_for_event(event)
current_state_ids = [e for k, e in current_state_ids.items() if k in auth_types]
try:
event_auth.check(
room_version_obj, event, auth_events=current_auth_events
)
except AuthError as e:
logger.warning("Soft-failing %r because %s", event, e)
event.internal_metadata.soft_failed = True
current_auth_events = await self.store.get_events(current_state_ids)
current_auth_events = {
(e.type, e.state_key): e for e in current_auth_events.values()
}
try:
event_auth.check(room_version_obj, event, auth_events=current_auth_events)
except AuthError as e:
logger.warning("Soft-failing %r because %s", event, e)
event.internal_metadata.soft_failed = True
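The soft-fail hunk above is refactored from a `do_soft_fail_check` flag with nested `if`s into early returns, flattening the indentation without changing behaviour. A reduced sketch of the guard-clause style (the predicate is a simplification of the real checks):

```python
def should_soft_fail_check(backfilled, is_outlier, extrem_ids, prev_event_ids):
    # Early returns replace the old `do_soft_fail_check` flag.
    if backfilled or is_outlier:
        # Backfilled and outlier events are never soft-failed.
        return False
    if set(extrem_ids) == set(prev_event_ids):
        # Current state equals the state at the event, so there is no
        # point rechecking auth for soft fail.
        return False
    return True

check = should_soft_fail_check(False, False, {"$a"}, {"$b"})
```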
async def on_query_auth(
self, origin, event_id, room_id, remote_auth_chain, rejects, missing
@ -2293,10 +2290,10 @@ class FederationHandler(BaseHandler):
remote_auth_chain = await self.federation_client.get_event_auth(
origin, event.room_id, event.event_id
)
except RequestSendFailed as e:
except RequestSendFailed as e1:
# The other side isn't around or doesn't implement the
# endpoint, so lets just bail out.
logger.info("Failed to get event auth from remote: %s", e)
logger.info("Failed to get event auth from remote: %s", e1)
return context
seen_remotes = await self.store.have_seen_events(
@ -2774,7 +2771,8 @@ class FederationHandler(BaseHandler):
logger.debug("Checking auth on event %r", event.content)
last_exception = None
last_exception = None # type: Optional[Exception]
# for each public key in the 3pid invite event
for public_key_object in self.hs.get_auth().get_public_keys(invite_event):
try:
@ -2828,6 +2826,12 @@ class FederationHandler(BaseHandler):
return
except Exception as e:
last_exception = e
if last_exception is None:
# we can only get here if get_public_keys() returned an empty list
# TODO: make this better
raise RuntimeError("no public key in invite event")
raise last_exception
async def _check_key_revocation(self, public_key, url):


@ -70,7 +70,7 @@ class GroupsLocalWorkerHandler(object):
self.clock = hs.get_clock()
self.keyring = hs.get_keyring()
self.is_mine_id = hs.is_mine_id
self.signing_key = hs.config.signing_key[0]
self.signing_key = hs.signing_key
self.server_name = hs.hostname
self.notifier = hs.get_notifier()
self.attestations = hs.get_groups_attestation_signing()


@ -251,10 +251,10 @@ class IdentityHandler(BaseHandler):
# 'browser-like' HTTPS.
auth_headers = self.federation_http_client.build_auth_headers(
destination=None,
method="POST",
method=b"POST",
url_bytes=url_bytes,
content=content,
destination_is=id_server,
destination_is=id_server.encode("ascii"),
)
headers = {b"Authorization": auth_headers}


@ -15,7 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Optional, Tuple
from typing import TYPE_CHECKING, Optional, Tuple
from canonicaljson import encode_canonical_json, json
@ -55,6 +55,9 @@ from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@ -349,7 +352,7 @@ _DUMMY_EVENT_ROOM_EXCLUSION_EXPIRY = 7 * 24 * 60 * 60 * 1000
class EventCreationHandler(object):
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
@ -816,11 +819,17 @@ class EventCreationHandler(object):
403, "This event is not allowed in this context", Codes.FORBIDDEN
)
try:
await self.auth.check_from_context(room_version, event, context)
except AuthError as err:
logger.warning("Denying new event %r because %s", event, err)
raise err
if event.internal_metadata.is_out_of_band_membership():
# the only sort of out-of-band-membership events we expect to see here
# are invite rejections we have generated ourselves.
assert event.type == EventTypes.Member
assert event.content["membership"] == Membership.LEAVE
else:
try:
await self.auth.check_from_context(room_version, event, context)
except AuthError as err:
logger.warning("Denying new event %r because %s", event, err)
raise err
# Ensure that we can round trip before trying to persist in db
try:

View File

@ -1,7 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright 2016-2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -18,17 +16,21 @@
import abc
import logging
from http import HTTPStatus
from typing import Dict, Iterable, List, Optional, Tuple
from typing import Dict, Iterable, List, Optional, Tuple, Union
from unpaddedbase64 import encode_base64
from synapse import types
from synapse.api.constants import EventTypes, Membership
from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.api.room_versions import EventFormatVersions
from synapse.crypto.event_signing import compute_event_reference_hash
from synapse.events import EventBase
from synapse.events.builder import create_local_event_from_event_dict
from synapse.events.snapshot import EventContext
from synapse.replication.http.membership import (
ReplicationLocallyRejectInviteRestServlet,
)
from synapse.types import Collection, Requester, RoomAlias, RoomID, UserID
from synapse.events.validator import EventValidator
from synapse.storage.roommember import RoomsForUser
from synapse.types import Collection, JsonDict, Requester, RoomAlias, RoomID, UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_joined_room, user_left_room
@ -75,10 +77,6 @@ class RoomMemberHandler(object):
)
if self._is_on_event_persistence_instance:
self.persist_event_storage = hs.get_storage().persistence
else:
self._locally_reject_client = ReplicationLocallyRejectInviteRestServlet.make_client(
hs
)
# This is only used to get at ratelimit function, and
# maybe_kick_guest_users. It's fine there are multiple of these as
@ -106,46 +104,28 @@ class RoomMemberHandler(object):
raise NotImplementedError()
@abc.abstractmethod
async def _remote_reject_invite(
async def remote_reject_invite(
self,
invite_event_id: str,
txn_id: Optional[str],
requester: Requester,
remote_room_hosts: List[str],
room_id: str,
target: UserID,
content: dict,
content: JsonDict,
) -> Tuple[Optional[str], int]:
"""Attempt to reject an invite for a room this server is not in. If we
fail to do so we locally mark the invite as rejected.
"""
Rejects an out-of-band invite we have received from a remote server
Args:
requester
remote_room_hosts: List of servers to use to try and reject invite
room_id
target: The user rejecting the invite
content: The content for the rejection event
invite_event_id: ID of the invite to be rejected
txn_id: optional transaction ID supplied by the client
requester: user making the rejection request, according to the access token
content: additional content to include in the rejection event.
Normally an empty dict.
Returns:
A dictionary to be returned to the client, may
include event_id etc, or nothing if we locally rejected
event id, stream_id of the leave event
"""
raise NotImplementedError()
async def locally_reject_invite(self, user_id: str, room_id: str) -> int:
"""Mark the invite has having been rejected even though we failed to
create a leave event for it.
"""
if self._is_on_event_persistence_instance:
return await self.persist_event_storage.locally_reject_invite(
user_id, room_id
)
else:
result = await self._locally_reject_client(
instance_name=self._event_stream_writer_instance,
user_id=user_id,
room_id=room_id,
)
return result["stream_id"]
@abc.abstractmethod
async def _user_joined_room(self, target: UserID, room_id: str) -> None:
"""Notifies distributor on master process that the user has joined the
@ -505,11 +485,17 @@ class RoomMemberHandler(object):
elif effective_membership_state == Membership.LEAVE:
if not is_host_in_room:
# perhaps we've been invited
inviter = await self._get_inviter(target.to_string(), room_id)
if not inviter:
invite = await self.store.get_invite_for_local_user_in_room(
user_id=target.to_string(), room_id=room_id
) # type: Optional[RoomsForUser]
if not invite:
raise SynapseError(404, "Not a known room")
if self.hs.is_mine(inviter):
logger.info(
"%s rejects invite to %s from %s", target, room_id, invite.sender
)
if self.hs.is_mine_id(invite.sender):
# the inviter was on our server, but has now left. Carry on
# with the normal rejection codepath.
#
@ -517,10 +503,10 @@ class RoomMemberHandler(object):
# active on other servers.
pass
else:
# send the rejection to the inviter's HS.
remote_room_hosts = remote_room_hosts + [inviter.domain]
return await self._remote_reject_invite(
requester, remote_room_hosts, room_id, target, content,
# send the rejection to the inviter's HS (with fallback to
# local event)
return await self.remote_reject_invite(
invite.event_id, txn_id, requester, content,
)
return await self._local_membership_update(
@ -1034,33 +1020,119 @@ class RoomMemberMasterHandler(RoomMemberHandler):
return event_id, stream_id
async def _remote_reject_invite(
async def remote_reject_invite(
self,
invite_event_id: str,
txn_id: Optional[str],
requester: Requester,
remote_room_hosts: List[str],
room_id: str,
target: UserID,
content: dict,
content: JsonDict,
) -> Tuple[Optional[str], int]:
"""Implements RoomMemberHandler._remote_reject_invite
"""
Rejects an out-of-band invite received from a remote user
Implements RoomMemberHandler.remote_reject_invite
"""
invite_event = await self.store.get_event(invite_event_id)
room_id = invite_event.room_id
target_user = invite_event.state_key
# first of all, try doing a rejection via the inviting server
fed_handler = self.federation_handler
try:
inviter_id = UserID.from_string(invite_event.sender)
event, stream_id = await fed_handler.do_remotely_reject_invite(
remote_room_hosts, room_id, target.to_string(), content=content,
[inviter_id.domain], room_id, target_user, content=content
)
return event.event_id, stream_id
except Exception as e:
# if we were unable to reject the exception, just mark
# it as rejected on our end and plough ahead.
# if we were unable to reject the invite, we will generate our own
# leave event.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warning("Failed to reject invite: %s", e)
stream_id = await self.locally_reject_invite(target.to_string(), room_id)
return None, stream_id
return await self._locally_reject_invite(
invite_event, txn_id, requester, content
)
async def _locally_reject_invite(
self,
invite_event: EventBase,
txn_id: Optional[str],
requester: Requester,
content: JsonDict,
) -> Tuple[str, int]:
"""Generate a local invite rejection
This is called after we fail to reject an invite via a remote server. It
generates an out-of-band membership event locally.
Args:
invite_event: the invite to be rejected
txn_id: optional transaction ID supplied by the client
requester: user making the rejection request, according to the access token
content: additional content to include in the rejection event.
Normally an empty dict.
"""
room_id = invite_event.room_id
target_user = invite_event.state_key
room_version = await self.store.get_room_version(room_id)
content["membership"] = Membership.LEAVE
# the auth events for the new event are the same as that of the invite, plus
# the invite itself.
#
# the prev_events are just the invite.
invite_hash = invite_event.event_id # type: Union[str, Tuple]
if room_version.event_format == EventFormatVersions.V1:
alg, h = compute_event_reference_hash(invite_event)
invite_hash = (invite_event.event_id, {alg: encode_base64(h)})
auth_events = invite_event.auth_events + (invite_hash,)
prev_events = (invite_hash,)
# we cap depth of generated events, to ensure that they are not
# rejected by other servers (and so that they can be persisted in
# the db)
depth = min(invite_event.depth + 1, MAX_DEPTH)
event_dict = {
"depth": depth,
"auth_events": auth_events,
"prev_events": prev_events,
"type": EventTypes.Member,
"room_id": room_id,
"sender": target_user,
"content": content,
"state_key": target_user,
}
event = create_local_event_from_event_dict(
clock=self.clock,
hostname=self.hs.hostname,
signing_key=self.hs.signing_key,
room_version=room_version,
event_dict=event_dict,
)
event.internal_metadata.outlier = True
event.internal_metadata.out_of_band_membership = True
if txn_id is not None:
event.internal_metadata.txn_id = txn_id
if requester.access_token_id is not None:
event.internal_metadata.token_id = requester.access_token_id
EventValidator().validate_new(event, self.config)
context = await self.state_handler.compute_event_context(event)
context.app_service = requester.app_service
stream_id = await self.event_creation_handler.handle_new_client_event(
requester, event, context, extra_users=[UserID.from_string(target_user)],
)
return event.event_id, stream_id
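`_locally_reject_invite` caps the generated leave event's depth at `MAX_DEPTH`, so that a locally produced out-of-band event is not rejected by other servers or by the database. The cap itself is a one-liner; the constant's value here is illustrative (Synapse defines it in `synapse.api.constants`):

```python
# Assumed bound: the largest integer allowed by canonical JSON, which is
# the limit Synapse uses for event depth.
MAX_DEPTH = 2 ** 63 - 1

def next_depth(parent_depth):
    # A new event sits one step below its parent, but never beyond MAX_DEPTH.
    return min(parent_depth + 1, MAX_DEPTH)

capped = next_depth(MAX_DEPTH)
```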
async def _user_joined_room(self, target: UserID, room_id: str) -> None:
"""Implements RoomMemberHandler._user_joined_room


@ -61,21 +61,22 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
return ret["event_id"], ret["stream_id"]
async def _remote_reject_invite(
async def remote_reject_invite(
self,
invite_event_id: str,
txn_id: Optional[str],
requester: Requester,
remote_room_hosts: List[str],
room_id: str,
target: UserID,
content: dict,
) -> Tuple[Optional[str], int]:
"""Implements RoomMemberHandler._remote_reject_invite
"""
Rejects an out-of-band invite received from a remote user
Implements RoomMemberHandler.remote_reject_invite
"""
ret = await self._remote_reject_client(
invite_event_id=invite_event_id,
txn_id=txn_id,
requester=requester,
remote_room_hosts=remote_room_hosts,
room_id=room_id,
user_id=target.to_string(),
content=content,
)
return ret["event_id"], ret["stream_id"]


@ -294,6 +294,9 @@ class TypingHandler(object):
rows.sort()
limited = False
# We, unusually, use a strict limit here as we have all the rows in
# memory rather than pulling them out of the database with a `LIMIT ?`
# clause.
if len(rows) > limit:
rows = rows[:limit]
current_id = rows[-1][0]


@ -13,13 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
from synapse.http.server import wrap_json_request_handler
from synapse.http.server import DirectServeJsonResource
class AdditionalResource(Resource):
class AdditionalResource(DirectServeJsonResource):
"""Resource wrapper for additional_resources
If the user has configured additional_resources, we need to wrap the
@ -41,16 +38,10 @@ class AdditionalResource(Resource):
handler ((twisted.web.server.Request) -> twisted.internet.defer.Deferred):
function to be called to handle the request.
"""
Resource.__init__(self)
super().__init__()
self._handler = handler
# required by the request_handler wrapper
self.clock = hs.get_clock()
def render(self, request):
self._async_render(request)
return NOT_DONE_YET
@wrap_json_request_handler
def _async_render(self, request):
# Cheekily pass the result straight through, so we don't need to worry
# if its an awaitable or not.
return self._handler(request)


@ -176,7 +176,7 @@ class MatrixFederationHttpClient(object):
def __init__(self, hs, tls_client_options_factory):
self.hs = hs
self.signing_key = hs.config.signing_key[0]
self.signing_key = hs.signing_key
self.server_name = hs.hostname
real_reactor = hs.get_reactor()
@ -562,13 +562,17 @@ class MatrixFederationHttpClient(object):
Returns:
list[bytes]: a list of headers to be added as "Authorization:" headers
"""
request = {"method": method, "uri": url_bytes, "origin": self.server_name}
request = {
"method": method.decode("ascii"),
"uri": url_bytes.decode("ascii"),
"origin": self.server_name,
}
if destination is not None:
request["destination"] = destination
request["destination"] = destination.decode("ascii")
if destination_is is not None:
request["destination_is"] = destination_is
request["destination_is"] = destination_is.decode("ascii")
if content is not None:
request["content"] = content


@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import collections
import html
import logging
@ -21,7 +22,7 @@ import types
import urllib
from http import HTTPStatus
from io import BytesIO
from typing import Awaitable, Callable, TypeVar, Union
from typing import Any, Callable, Dict, Tuple, Union
import jinja2
from canonicaljson import encode_canonical_json, encode_pretty_printed_json, json
@ -62,99 +63,43 @@ HTML_ERROR_TEMPLATE = """<!DOCTYPE html>
"""
def wrap_json_request_handler(h):
"""Wraps a request handler method with exception handling.
Also does the wrapping with request.processing as per wrap_async_request_handler.
The handler method must have a signature of "handle_foo(self, request)",
where "request" must be a SynapseRequest.
The handler must return a deferred or a coroutine. If the deferred succeeds
we assume that a response has been sent. If the deferred fails with a SynapseError we use
it to send a JSON response with the appropriate HTTP response code. If the
deferred fails with any other type of error we send a 500 response.
def return_json_error(f: failure.Failure, request: SynapseRequest) -> None:
"""Sends a JSON error response to clients.
"""
async def wrapped_request_handler(self, request):
try:
await h(self, request)
except SynapseError as e:
code = e.code
logger.info("%s SynapseError: %s - %s", request, code, e.msg)
if f.check(SynapseError):
error_code = f.value.code
error_dict = f.value.error_dict()
# Only respond with an error response if we haven't already started
# writing, otherwise lets just kill the connection
if request.startedWriting:
if request.transport:
try:
request.transport.abortConnection()
except Exception:
# abortConnection throws if the connection is already closed
pass
else:
respond_with_json(
request,
code,
e.error_dict(),
send_cors=True,
pretty_print=_request_user_agent_is_curl(request),
)
logger.info("%s SynapseError: %s - %s", request, error_code, f.value.msg)
else:
error_code = 500
error_dict = {"error": "Internal server error", "errcode": Codes.UNKNOWN}
except Exception:
# failure.Failure() fishes the original Failure out
# of our stack, and thus gives us a sensible stack
# trace.
f = failure.Failure()
logger.error(
"Failed handle request via %r: %r",
request.request_metrics.name,
request,
exc_info=(f.type, f.value, f.getTracebackObject()),
)
# Only respond with an error response if we haven't already started
# writing, otherwise lets just kill the connection
if request.startedWriting:
if request.transport:
try:
request.transport.abortConnection()
except Exception:
# abortConnection throws if the connection is already closed
pass
else:
respond_with_json(
request,
500,
{"error": "Internal server error", "errcode": Codes.UNKNOWN},
send_cors=True,
pretty_print=_request_user_agent_is_curl(request),
)
logger.error(
"Failed handle request via %r: %r",
request.request_metrics.name,
request,
exc_info=(f.type, f.value, f.getTracebackObject()),
)
return wrap_async_request_handler(wrapped_request_handler)
TV = TypeVar("TV")
def wrap_html_request_handler(
h: Callable[[TV, SynapseRequest], Awaitable]
) -> Callable[[TV, SynapseRequest], Awaitable[None]]:
"""Wraps a request handler method with exception handling.
Also does the wrapping with request.processing as per wrap_async_request_handler.
The handler method must have a signature of "handle_foo(self, request)",
where "request" must be a SynapseRequest.
"""
async def wrapped_request_handler(self, request):
try:
await h(self, request)
except Exception:
f = failure.Failure()
return_html_error(f, request, HTML_ERROR_TEMPLATE)
return wrap_async_request_handler(wrapped_request_handler)
# Only respond with an error response if we haven't already started writing,
# otherwise let's just kill the connection
if request.startedWriting:
if request.transport:
try:
request.transport.abortConnection()
except Exception:
# abortConnection throws if the connection is already closed
pass
else:
respond_with_json(
request,
error_code,
error_dict,
send_cors=True,
pretty_print=_request_user_agent_is_curl(request),
)
def return_html_error(
@@ -249,7 +194,113 @@ class HttpServer(object):
pass
class JsonResource(HttpServer, resource.Resource):
class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
"""Base class for resources that have async handlers.
Subclasses can either implement `_async_render_<METHOD>` to handle
requests by method, or override `_async_render` to handle all requests.
Args:
extract_context: Whether to attempt to extract the opentracing
context from the request the servlet is handling.
"""
def __init__(self, extract_context=False):
super().__init__()
self._extract_context = extract_context
def render(self, request):
""" This gets called by twisted every time someone sends us a request.
"""
defer.ensureDeferred(self._async_render_wrapper(request))
return NOT_DONE_YET
@wrap_async_request_handler
async def _async_render_wrapper(self, request):
"""This is a wrapper that delegates to `_async_render` and handles
exceptions, return values, metrics, etc.
"""
try:
request.request_metrics.name = self.__class__.__name__
with trace_servlet(request, self._extract_context):
callback_return = await self._async_render(request)
if callback_return is not None:
code, response = callback_return
self._send_response(request, code, response)
except Exception:
# failure.Failure() fishes the original Failure out
# of our stack, and thus gives us a sensible stack
# trace.
f = failure.Failure()
self._send_error_response(f, request)
async def _async_render(self, request):
"""Delegates to `_async_render_<METHOD>` methods, or returns a 400 if
no appropriate method exists. Can be overridden in subclasses for
different routing.
"""
method_handler = getattr(
self, "_async_render_%s" % (request.method.decode("ascii"),), None
)
if method_handler:
raw_callback_return = method_handler(request)
# Is it synchronous? We'll allow this for now.
if isinstance(raw_callback_return, (defer.Deferred, types.CoroutineType)):
callback_return = await raw_callback_return
else:
callback_return = raw_callback_return
return callback_return
_unrecognised_request_handler(request)
@abc.abstractmethod
def _send_response(
self, request: SynapseRequest, code: int, response_object: Any,
) -> None:
raise NotImplementedError()
@abc.abstractmethod
def _send_error_response(
self, f: failure.Failure, request: SynapseRequest,
) -> None:
raise NotImplementedError()
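The `_async_render_<METHOD>` lookup that `_AsyncResource` documents can be sketched in plain asyncio. The class and method names below are illustrative, not Synapse's; the point is the `getattr` dispatch with a 400 fallback:

```python
import asyncio

class MethodDispatcher:
    """Toy sketch of _AsyncResource's dispatch: look up a handler named
    after the HTTP method, fall back to a 400 when none is registered."""

    async def dispatch(self, method: str):
        handler = getattr(self, "_async_render_%s" % method, None)
        if handler is None:
            # Mirrors the "no one wanted to handle that" 400 fallback.
            return 400, {"errcode": "M_UNRECOGNIZED"}
        return await handler()

class HelloResource(MethodDispatcher):
    async def _async_render_GET(self):
        return 200, {"hello": "world"}

print(asyncio.run(HelloResource().dispatch("GET")))
# (200, {'hello': 'world'})
print(asyncio.run(HelloResource().dispatch("POST")))
# (400, {'errcode': 'M_UNRECOGNIZED'})
```

Overriding `dispatch` itself corresponds to a subclass overriding `_async_render` for different routing, as the docstring describes.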
class DirectServeJsonResource(_AsyncResource):
"""A resource that will call `self._async_on_<METHOD>` on new requests,
formatting responses and errors as JSON.
"""
def _send_response(
self, request, code, response_object,
):
"""Implements _AsyncResource._send_response
"""
# TODO: Only enable CORS for the requests that need it.
respond_with_json(
request,
code,
response_object,
send_cors=True,
pretty_print=_request_user_agent_is_curl(request),
canonical_json=self.canonical_json,
)
def _send_error_response(
self, f: failure.Failure, request: SynapseRequest,
) -> None:
"""Implements _AsyncResource._send_error_response
"""
return_json_error(f, request)
class JsonResource(DirectServeJsonResource):
""" This implements the HttpServer interface and provides JSON support for
Resources.
@@ -269,17 +320,15 @@ class JsonResource(HttpServer, resource.Resource):
"_PathEntry", ["pattern", "callback", "servlet_classname"]
)
def __init__(self, hs, canonical_json=True):
resource.Resource.__init__(self)
def __init__(self, hs, canonical_json=True, extract_context=False):
super().__init__(extract_context)
self.canonical_json = canonical_json
self.clock = hs.get_clock()
self.path_regexs = {}
self.hs = hs
def register_paths(
self, method, path_patterns, callback, servlet_classname, trace=True
):
def register_paths(self, method, path_patterns, callback, servlet_classname):
"""
Registers a request handler against a regular expression. Later request URLs are
checked against these regular expressions in order to identify an appropriate
@@ -295,74 +344,23 @@ class JsonResource(HttpServer, resource.Resource):
servlet_classname (str): The name of the handler to be used in prometheus
and opentracing logs.
trace (bool): Whether we should start a span to trace the servlet.
"""
method = method.encode("utf-8") # method is bytes on py3
if trace:
# We don't extract the context from the servlet because we can't
# trust the sender
callback = trace_servlet(servlet_classname)(callback)
for path_pattern in path_patterns:
logger.debug("Registering for %s %s", method, path_pattern.pattern)
self.path_regexs.setdefault(method, []).append(
self._PathEntry(path_pattern, callback, servlet_classname)
)
def render(self, request):
""" This gets called by twisted every time someone sends us a request.
"""
defer.ensureDeferred(self._async_render(request))
return NOT_DONE_YET
@wrap_json_request_handler
async def _async_render(self, request):
""" This gets called from render() every time someone sends us a request.
This checks if anyone has registered a callback for that method and
path.
"""
callback, servlet_classname, group_dict = self._get_handler_for_request(request)
# Make sure we have a name for this handler in prometheus.
request.request_metrics.name = servlet_classname
# Now trigger the callback. If it returns a response, we send it
# here. If it throws an exception, that is handled by the wrapper
# installed by @request_handler.
kwargs = intern_dict(
{
name: urllib.parse.unquote(value) if value else value
for name, value in group_dict.items()
}
)
callback_return = callback(request, **kwargs)
# Is it synchronous? We'll allow this for now.
if isinstance(callback_return, (defer.Deferred, types.CoroutineType)):
callback_return = await callback_return
if callback_return is not None:
code, response = callback_return
self._send_response(request, code, response)
def _get_handler_for_request(self, request):
"""Finds a callback method to handle the given request
Args:
request (twisted.web.http.Request):
def _get_handler_for_request(
self, request: SynapseRequest
) -> Tuple[Callable, str, Dict[str, str]]:
"""Finds a callback method to handle the given request.
Returns:
Tuple[Callable, str, dict[unicode, unicode]]: callback method, the
label to use for that method in prometheus metrics, and the
dict mapping keys to path components as specified in the
handler's path match regexp.
The callback will normally be a method registered via
register_paths, so will return (possibly via Deferred) either
None, or a tuple of (http code, response body).
A tuple of the callback to use, the name of the servlet, and the
key word arguments to pass to the callback
"""
request_path = request.path.decode("ascii")
@@ -377,42 +375,59 @@ class JsonResource(HttpServer, resource.Resource):
# Huh. No one wanted to handle that? Fiiiiiine. Send 400.
return _unrecognised_request_handler, "unrecognised_request_handler", {}
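The registration-and-lookup pair (`register_paths` / `_get_handler_for_request`) can be sketched with the standard `re` module. The `Router` below is a toy with illustrative names: patterns are tried in registration order per method, and named regex groups become the keyword arguments later passed to the callback:

```python
import re
from collections import namedtuple

_PathEntry = namedtuple("_PathEntry", ["pattern", "callback"])

class Router:
    """Toy sketch of JsonResource-style routing."""

    def __init__(self):
        self.path_regexs = {}

    def register(self, method, pattern, callback):
        # Handlers for each method are kept in registration order.
        self.path_regexs.setdefault(method, []).append(
            _PathEntry(re.compile(pattern), callback)
        )

    def get_handler(self, method, path):
        for entry in self.path_regexs.get(method, []):
            m = entry.pattern.match(path)
            if m:
                # Named groups become the callback's kwargs.
                return entry.callback, m.groupdict()
        # No match: the real code falls back to a 400 handler here.
        return None, {}

router = Router()
router.register("GET", r"^/rooms/(?P<room_id>[^/]+)$", lambda room_id: room_id)
cb, kwargs = router.get_handler("GET", "/rooms/!abc:example.org")
print(kwargs)  # {'room_id': '!abc:example.org'}
```

The real implementation additionally URL-unquotes and interns the matched values before calling the handler, as shown in `_async_render`.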
def _send_response(
self, request, code, response_json_object, response_code_message=None
):
# TODO: Only enable CORS for the requests that need it.
respond_with_json(
request,
code,
response_json_object,
send_cors=True,
response_code_message=response_code_message,
pretty_print=_request_user_agent_is_curl(request),
canonical_json=self.canonical_json,
async def _async_render(self, request):
callback, servlet_classname, group_dict = self._get_handler_for_request(request)
# Make sure we have an appropriate name for this handler in prometheus
# (rather than the default of JsonResource).
request.request_metrics.name = servlet_classname
# Now trigger the callback. If it returns a response, we send it
# here. If it throws an exception, that is handled by the wrapper
# installed by @request_handler.
kwargs = intern_dict(
{
name: urllib.parse.unquote(value) if value else value
for name, value in group_dict.items()
}
)
raw_callback_return = callback(request, **kwargs)
class DirectServeResource(resource.Resource):
def render(self, request):
# Is it synchronous? We'll allow this for now.
if isinstance(raw_callback_return, (defer.Deferred, types.CoroutineType)):
callback_return = await raw_callback_return
else:
callback_return = raw_callback_return
return callback_return
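The "is it synchronous?" check above can be sketched in isolation. This toy helper only handles coroutines (the real code also accepts Twisted `Deferred`s, omitted here to stay stdlib-only); plain return values pass straight through:

```python
import asyncio
import types

async def maybe_await(raw):
    """Sketch of the synchronous-handler allowance: await coroutines,
    pass plain return values through unchanged."""
    if isinstance(raw, types.CoroutineType):
        return await raw
    return raw

def sync_handler():
    return (200, "ok")

async def async_handler():
    return (200, "ok")

print(asyncio.run(maybe_await(sync_handler())))   # (200, 'ok')
print(asyncio.run(maybe_await(async_handler())))  # (200, 'ok')
```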
class DirectServeHtmlResource(_AsyncResource):
"""A resource that will call `self._async_on_<METHOD>` on new requests,
formatting responses and errors as HTML.
"""
# The error template to use for this resource
ERROR_TEMPLATE = HTML_ERROR_TEMPLATE
def _send_response(
self, request: SynapseRequest, code: int, response_object: Any,
):
"""Implements _AsyncResource._send_response
"""
Render the request, using an asynchronous render handler if it exists.
# We expect to get bytes for us to write
assert isinstance(response_object, bytes)
html_bytes = response_object
respond_with_html_bytes(request, 200, html_bytes)
def _send_error_response(
self, f: failure.Failure, request: SynapseRequest,
) -> None:
"""Implements _AsyncResource._send_error_response
"""
async_render_callback_name = "_async_render_" + request.method.decode("ascii")
# Try and get the async renderer
callback = getattr(self, async_render_callback_name, None)
# No async renderer for this request method.
if not callback:
return super().render(request)
resp = trace_servlet(self.__class__.__name__)(callback)(request)
# If it's a coroutine, turn it into a Deferred
if isinstance(resp, types.CoroutineType):
defer.ensureDeferred(resp)
return NOT_DONE_YET
return_html_error(f, request, self.ERROR_TEMPLATE)
class StaticResource(File):
@@ -164,12 +164,10 @@ Gotchas
than one caller? Will all of those calling functions have to be in a context
with an active span?
"""
import contextlib
import inspect
import logging
import re
import types
from functools import wraps
from typing import TYPE_CHECKING, Dict, Optional, Type
@@ -181,6 +179,7 @@ from twisted.internet import defer
from synapse.config import ConfigError
if TYPE_CHECKING:
from synapse.http.site import SynapseRequest
from synapse.server import HomeServer
# Helper class
@@ -227,6 +226,7 @@ except ImportError:
tags = _DummyTagNames
try:
from jaeger_client import Config as JaegerConfig
from synapse.logging.scopecontextmanager import LogContextScopeManager
except ImportError:
JaegerConfig = None # type: ignore
@@ -793,48 +793,42 @@ def tag_args(func):
return _tag_args_inner
def trace_servlet(servlet_name, extract_context=False):
"""Decorator which traces a serlet. It starts a span with some servlet specific
tags such as the servlet_name and request information
@contextlib.contextmanager
def trace_servlet(request: "SynapseRequest", extract_context: bool = False):
"""Returns a context manager which traces a request. It starts a span
with some servlet specific tags such as the request metrics name and
request information.
Args:
servlet_name (str): The name to be used for the span's operation_name
extract_context (bool): Whether to attempt to extract the opentracing
request
extract_context: Whether to attempt to extract the opentracing
context from the request the servlet is handling.
"""
def _trace_servlet_inner_1(func):
if not opentracing:
return func
if opentracing is None:
yield
return
@wraps(func)
async def _trace_servlet_inner(request, *args, **kwargs):
request_tags = {
"request_id": request.get_request_id(),
tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER,
tags.HTTP_METHOD: request.get_method(),
tags.HTTP_URL: request.get_redacted_uri(),
tags.PEER_HOST_IPV6: request.getClientIP(),
}
request_tags = {
"request_id": request.get_request_id(),
tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER,
tags.HTTP_METHOD: request.get_method(),
tags.HTTP_URL: request.get_redacted_uri(),
tags.PEER_HOST_IPV6: request.getClientIP(),
}
if extract_context:
scope = start_active_span_from_request(
request, servlet_name, tags=request_tags
)
else:
scope = start_active_span(servlet_name, tags=request_tags)
request_name = request.request_metrics.name
if extract_context:
scope = start_active_span_from_request(request, request_name, tags=request_tags)
else:
scope = start_active_span(request_name, tags=request_tags)
with scope:
result = func(request, *args, **kwargs)
with scope:
try:
yield
finally:
# We set the operation name again in case it's changed (which happens
# with JsonResource).
scope.span.set_operation_name(request.request_metrics.name)
if not isinstance(result, (types.CoroutineType, defer.Deferred)):
# Some servlets aren't async and just return results
# directly, so we handle that here.
return result
return await result
return _trace_servlet_inner
return _trace_servlet_inner_1
scope.span.set_tag("request_tag", request.request_metrics.start_context.tag)