Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes
commit
859663565c
|
@ -34,33 +34,8 @@ Device list doesn't change if remote server is down
|
|||
Remote servers cannot set power levels in rooms without existing powerlevels
|
||||
Remote servers should reject attempts by non-creators to set the power levels
|
||||
|
||||
# new failures as of https://github.com/matrix-org/sytest/pull/753
|
||||
GET /rooms/:room_id/messages returns a message
|
||||
GET /rooms/:room_id/messages lazy loads members correctly
|
||||
Read receipts are sent as events
|
||||
Only original members of the room can see messages from erased users
|
||||
Device deletion propagates over federation
|
||||
If user leaves room, remote user changes device and rejoins we see update in /sync and /keys/changes
|
||||
Changing user-signing key notifies local users
|
||||
Newly updated tags appear in an incremental v2 /sync
|
||||
# https://buildkite.com/matrix-dot-org/synapse/builds/6134#6f67bf47-e234-474d-80e8-c6e1868b15c5
|
||||
Server correctly handles incoming m.device_list_update
|
||||
Local device key changes get to remote servers with correct prev_id
|
||||
AS-ghosted users can use rooms via AS
|
||||
Ghost user must register before joining room
|
||||
Test that a message is pushed
|
||||
Invites are pushed
|
||||
Rooms with aliases are correctly named in pushed
|
||||
Rooms with names are correctly named in pushed
|
||||
Rooms with canonical alias are correctly named in pushed
|
||||
Rooms with many users are correctly pushed
|
||||
Don't get pushed for rooms you've muted
|
||||
Rejected events are not pushed
|
||||
Test that rejected pushers are removed.
|
||||
Events come down the correct room
|
||||
|
||||
# https://buildkite.com/matrix-dot-org/sytest/builds/326#cca62404-a88a-4fcb-ad41-175fd3377603
|
||||
Presence changes to UNAVAILABLE are reported to remote room members
|
||||
If remote user leaves room, changes device and rejoins we see update in sync
|
||||
uploading self-signing key notifies over federation
|
||||
Inbound federation can receive redacted events
|
||||
Outbound federation can request missing events
|
||||
# this fails reliably with a torture level of 100 due to https://github.com/matrix-org/synapse/issues/6536
|
||||
Outbound federation requests missing prev_events and then asks for /state_ids and resolves the state
|
||||
|
|
|
@ -46,3 +46,6 @@ Joseph Weston <joseph at weston.cloud>
|
|||
|
||||
Benjamin Saunders <ben.e.saunders at gmail dot com>
|
||||
* Documentation improvements
|
||||
|
||||
Werner Sembach <werner.sembach at fau dot de>
|
||||
* Automatically remove a group/community when it is empty
|
||||
|
|
|
@ -0,0 +1 @@
|
|||
Split out state storage into separate data store.
|
|
@ -0,0 +1 @@
|
|||
Implement v2 APIs for the `send_join` and `send_leave` federation endpoints (as described in [MSC1802](https://github.com/matrix-org/matrix-doc/pull/1802)).
|
|
@ -0,0 +1 @@
|
|||
Prevent redacted events from being returned during message search.
|
|
@ -0,0 +1 @@
|
|||
Prevent error on trying to search a upgraded room when the server is not in the predecessor room.
|
|
@ -0,0 +1 @@
|
|||
Add a develop script to generate full SQL schemas.
|
|
@ -0,0 +1 @@
|
|||
Allow custom SAML username mapping functionality through an external provider plugin.
|
|
@ -0,0 +1 @@
|
|||
Automatically delete empty groups/communities.
|
|
@ -0,0 +1 @@
|
|||
Improve performance of looking up cross-signing keys.
|
|
@ -0,0 +1 @@
|
|||
Port synapse.handlers.initial_sync to async/await.
|
|
@ -0,0 +1 @@
|
|||
Remove redundant code from event authorisation implementation.
|
|
@ -0,0 +1 @@
|
|||
Port handlers.account_data and handlers.account_validity to async/await.
|
|
@ -0,0 +1 @@
|
|||
Make `make_deferred_yieldable` work with async/await.
|
|
@ -0,0 +1 @@
|
|||
Remove `SnapshotCache` in favour of `ResponseCache`.
|
|
@ -0,0 +1 @@
|
|||
Change phone home stats to not assume there is a single database and report information about the database used by the main data store.
|
|
@ -0,0 +1 @@
|
|||
Move database config from apps into HomeServer object.
|
|
@ -0,0 +1 @@
|
|||
Silence mypy errors for files outside those specified.
|
|
@ -0,0 +1 @@
|
|||
Remove all assumptions of there being a single physical DB apart from the `synapse.config`.
|
|
@ -0,0 +1 @@
|
|||
Fix race which occasionally caused deleted devices to reappear.
|
|
@ -0,0 +1 @@
|
|||
Clean up some logging when handling incoming events over federation.
|
|
@ -0,0 +1 @@
|
|||
Port some of FederationHandler to async/await.
|
|
@ -0,0 +1 @@
|
|||
Prevent redacted events from being returned during message search.
|
|
@ -0,0 +1 @@
|
|||
Add option `limit_profile_requests_to_users_who_share_rooms` to require that a local user share a room with another user in order to query their profile information.
|
|
@ -0,0 +1 @@
|
|||
Test more folders against mypy.
|
|
@ -0,0 +1 @@
|
|||
Update `mypy` to new version.
|
|
@ -0,0 +1 @@
|
|||
Adjust the sytest blacklist for worker mode.
|
|
@ -0,0 +1 @@
|
|||
Document the Room Shutdown Admin API.
|
|
@ -0,0 +1 @@
|
|||
Add an export_signing_key script to extract the public part of signing keys when rotating them.
|
|
@ -0,0 +1 @@
|
|||
Fix missing row in `device_max_stream_id` that could cause 'unable to decrypt' errors after server restart.
|
|
@ -0,0 +1 @@
|
|||
Remove unused `get_pagination_rows` methods from `EventSource` classes.
|
|
@ -0,0 +1 @@
|
|||
Clean up logs from the push notifier at startup.
|
|
@ -0,0 +1 @@
|
|||
Port `synapse.handlers.admin` and `synapse.handlers.deactivate_account` to async/await.
|
|
@ -0,0 +1 @@
|
|||
Change `EventContext` to use the `Storage` class, in preparation for moving state database queries to a separate data store.
|
|
@ -0,0 +1 @@
|
|||
Add assertion that schema delta file names are unique.
|
|
@ -0,0 +1 @@
|
|||
Improve diagnostics on database upgrade failure.
|
|
@ -0,0 +1 @@
|
|||
Fix a bug which meant that we did not send systemd notifications on startup if acme was enabled.
|
|
@ -0,0 +1 @@
|
|||
Add experimental config option to specify multiple databases.
|
|
@ -0,0 +1 @@
|
|||
Reword sections of federate.md that explained delegation at the time of the Synapse 1.0 transition.
|
|
@ -0,0 +1 @@
|
|||
Add a 'Configuration' section to `docs/turn-howto.md`.
|
|
@ -0,0 +1 @@
|
|||
Reduce the reconnect time when worker replication fails, to make it easier to catch up.
|
|
@ -0,0 +1 @@
|
|||
Simplify http handling by removing redundant SynapseRequestFactory.
|
|
@ -0,0 +1 @@
|
|||
Add a workaround for synapse raising exceptions when fetching the notary's own key from the notary.
|
|
@ -0,0 +1 @@
|
|||
Fix exception when fetching the `matrix.org:ed25519:auto` key.
|
|
@ -0,0 +1 @@
|
|||
Raise an error if someone tries to use the `log_file` config option.
|
|
@ -0,0 +1 @@
|
|||
Automate generation of the sample log config.
|
|
@ -0,0 +1 @@
|
|||
Remove unused, undocumented /_matrix/content API.
|
|
@ -0,0 +1 @@
|
|||
Fix bug where a moderator upgraded a room and became an admin in the new room.
|
|
@ -0,0 +1 @@
|
|||
Fix an error which was thrown by the `PresenceHandler` `_on_shutdown` handler.
|
|
@ -0,0 +1 @@
|
|||
Fix errors when `frozen_dicts` are enabled.
|
|
@ -85,6 +85,9 @@ PYTHONPATH="$tmpdir" \
|
|||
|
||||
' > "${PACKAGE_BUILD_DIR}/etc/matrix-synapse/homeserver.yaml"
|
||||
|
||||
# build the log config file
|
||||
"${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_log_config" \
|
||||
--output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml"
|
||||
|
||||
# add a dependency on the right version of python to substvars.
|
||||
PYPKG=`basename $SNAKE`
|
||||
|
|
|
@ -1,3 +1,9 @@
|
|||
matrix-synapse-py3 (1.7.3ubuntu1) UNRELEASED; urgency=medium
|
||||
|
||||
* Automate generation of the default log configuration file.
|
||||
|
||||
-- Richard van der Hoff <richard@matrix.org> Fri, 03 Jan 2020 13:55:38 +0000
|
||||
|
||||
matrix-synapse-py3 (1.7.3) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.7.3.
|
||||
|
|
|
@ -1,2 +1 @@
|
|||
debian/log.yaml etc/matrix-synapse
|
||||
debian/manage_debconf.pl /opt/venvs/matrix-synapse/lib/
|
||||
|
|
|
@ -1,36 +0,0 @@
|
|||
|
||||
version: 1
|
||||
|
||||
formatters:
|
||||
precise:
|
||||
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s'
|
||||
|
||||
filters:
|
||||
context:
|
||||
(): synapse.logging.context.LoggingContextFilter
|
||||
request: ""
|
||||
|
||||
handlers:
|
||||
file:
|
||||
class: logging.handlers.RotatingFileHandler
|
||||
formatter: precise
|
||||
filename: /var/log/matrix-synapse/homeserver.log
|
||||
maxBytes: 104857600
|
||||
backupCount: 10
|
||||
filters: [context]
|
||||
encoding: utf8
|
||||
console:
|
||||
class: logging.StreamHandler
|
||||
formatter: precise
|
||||
level: WARN
|
||||
|
||||
loggers:
|
||||
synapse:
|
||||
level: INFO
|
||||
|
||||
synapse.storage.SQL:
|
||||
level: INFO
|
||||
|
||||
root:
|
||||
level: INFO
|
||||
handlers: [file, console]
|
|
@ -0,0 +1,72 @@
|
|||
# Shutdown room API
|
||||
|
||||
Shuts down a room: it prevents new joins and automatically moves local users and room
|
||||
aliases to a new room. The new room will be created with the user specified by the
|
||||
`new_room_user_id` parameter as room administrator and will contain a message
|
||||
explaining what happened. Users invited to the new room will have power level
|
||||
-10 by default, and thus be unable to speak. The old room's power levels will be changed to
|
||||
disallow any further invites or joins.
|
||||
|
||||
The local server will only have the power to move local users and room aliases to
|
||||
the new room. Users on other servers will be unaffected.
|
||||
|
||||
## API
|
||||
|
||||
You will need to authenticate with an access token for an admin user.
|
||||
|
||||
### URL
|
||||
|
||||
`POST /_synapse/admin/v1/shutdown_room/{room_id}`
|
||||
|
||||
### URL Parameters
|
||||
|
||||
* `room_id` - The ID of the room (e.g. `!someroom:example.com`)
|
||||
|
||||
### JSON Body Parameters
|
||||
|
||||
* `new_room_user_id` - Required. A string representing the user ID of the user that will admin
|
||||
the new room that all users in the old room will be moved to.
|
||||
* `room_name` - Optional. A string representing the name of the room that new users will be
|
||||
invited to.
|
||||
* `message` - Optional. A string containing the first message that will be sent as
|
||||
`new_room_user_id` in the new room. Ideally this will clearly convey why the
|
||||
original room was shut down.
|
||||
|
||||
If not specified, the default value of `room_name` is "Content Violation
|
||||
Notification". The default value of `message` is "Sharing illegal content on
|
||||
this server is not permitted and rooms in violation will be blocked."
|
||||
|
||||
### Response Parameters
|
||||
|
||||
* `kicked_users` - An integer representing the number of users that
|
||||
were kicked.
|
||||
* `failed_to_kick_users` - An integer representing the number of users
|
||||
that were not kicked.
|
||||
* `local_aliases` - An array of strings representing the local aliases that were migrated from
|
||||
the old room to the new.
|
||||
* `new_room_id` - A string representing the room ID of the new room.
|
||||
|
||||
## Example
|
||||
|
||||
Request:
|
||||
|
||||
```
|
||||
POST /_synapse/admin/v1/shutdown_room/!somebadroom%3Aexample.com
|
||||
|
||||
{
|
||||
"new_room_user_id": "@someuser:example.com",
|
||||
"room_name": "Content Violation Notification",
|
||||
"message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service."
|
||||
}
|
||||
```
|
||||
|
||||
Response:
|
||||
|
||||
```
|
||||
{
|
||||
"kicked_users": 5,
|
||||
"failed_to_kick_users": 0,
|
||||
"local_aliases": ["#badroom:example.com", "#evilsaloon:example.com],
|
||||
"new_room_id": "!newroomid:example.com",
|
||||
}
|
||||
```
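
For reference, the same request can be made with `curl`; the admin access token and the homeserver URL below are placeholders:

```
curl --request POST \
     --header "Authorization: Bearer <admin_access_token>" \
     --header "Content-Type: application/json" \
     --data '{"new_room_user_id": "@someuser:example.com"}' \
     "https://homeserver.example.com/_synapse/admin/v1/shutdown_room/!somebadroom%3Aexample.com"
```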
|
|
@ -137,6 +137,7 @@ Some guidelines follow:
|
|||
correctly handles the top-level option being set to `None` (as it
|
||||
will be if no sub-options are enabled).
|
||||
- Lines should be wrapped at 80 characters.
|
||||
- Use two-space indents.
|
||||
|
||||
Example:
|
||||
|
||||
|
@ -155,13 +156,13 @@ Example:
|
|||
# Settings for the frobber
|
||||
#
|
||||
frobber:
|
||||
# frobbing speed. Defaults to 1.
|
||||
#
|
||||
#speed: 10
|
||||
# frobbing speed. Defaults to 1.
|
||||
#
|
||||
#speed: 10
|
||||
|
||||
# frobbing distance. Defaults to 1000.
|
||||
#
|
||||
#distance: 100
|
||||
# frobbing distance. Defaults to 1000.
|
||||
#
|
||||
#distance: 100
|
||||
|
||||
Note that the sample configuration is generated from the synapse code
|
||||
and is maintained by a script, `scripts-dev/generate_sample_config`.
|
||||
|
|
|
@ -66,10 +66,6 @@ therefore cannot gain access to the necessary certificate. With .well-known,
|
|||
federation servers will check for a valid TLS certificate for the delegated
|
||||
hostname (in our example: ``synapse.example.com``).
|
||||
|
||||
.well-known support first appeared in Synapse v0.99.0. To federate with older
|
||||
servers you may need to additionally configure SRV delegation. Alternatively,
|
||||
encourage the server admin in question to upgrade :).
|
||||
|
||||
### DNS SRV delegation
|
||||
|
||||
To use this delegation method, you need to have write access to your
|
||||
|
@ -111,29 +107,15 @@ giving it a `server_name` of `example.com`, and once [ACME](acme.md) support is
|
|||
it would automatically generate a valid TLS certificate for you via Let's Encrypt
|
||||
and no SRV record or .well-known URI would be needed.
|
||||
|
||||
This is the common case, although you can add an SRV record or
|
||||
`.well-known/matrix/server` URI for completeness if you wish.
|
||||
|
||||
**However**, if your server does not listen on port 8448, or if your `server_name`
|
||||
does not point to the host that your homeserver runs on, you will need to let
|
||||
other servers know how to find it. The way to do this is via .well-known or an
|
||||
SRV record.
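
As a sketch (reusing the `example.com` / `synapse.example.com` names from the example above, and assuming port 443), the two delegation mechanisms look like this:

```
# Served at https://example.com/.well-known/matrix/server
{ "m.server": "synapse.example.com:443" }

# Or, equivalently, a DNS SRV record:
_matrix._tcp.example.com. 3600 IN SRV 10 5 443 synapse.example.com.
```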
|
||||
|
||||
#### I have created a .well-known URI. Do I still need an SRV record?
|
||||
#### I have created a .well-known URI. Do I also need an SRV record?
|
||||
|
||||
As of Synapse 0.99, Synapse will first check for the existence of a .well-known
|
||||
URI and follow any delegation it suggests. It will only then check for the
|
||||
existence of an SRV record.
|
||||
|
||||
That means that the SRV record will often be redundant. However, you should
|
||||
remember that there may still be older versions of Synapse in the federation
|
||||
which do not understand .well-known URIs, so if you removed your SRV record
|
||||
you would no longer be able to federate with them.
|
||||
|
||||
It is therefore best to leave the SRV record in place for now. Synapse 0.34 and
|
||||
earlier will follow the SRV record (and not care about the invalid
|
||||
certificate). Synapse 0.99 and later will follow the .well-known URI, with the
|
||||
correct certificate chain.
|
||||
No. You can use either `.well-known` delegation or an SRV record to delegate; you
|
||||
do not need both to delegate to the same location.
|
||||
|
||||
#### Can I manage my own certificates rather than having Synapse renew certificates itself?
|
||||
|
||||
|
|
|
@ -0,0 +1,77 @@
|
|||
# SAML Mapping Providers
|
||||
|
||||
A SAML mapping provider is a Python class (loaded via a Python module) that
|
||||
works out how to map attributes of a SAML response object to Matrix-specific
|
||||
user attributes. Details such as user ID localpart, displayname, and even avatar
|
||||
URLs are all things that can be mapped from talking to an SSO service.
|
||||
|
||||
As an example, an SSO service may return the email address
|
||||
"john.smith@example.com" for a user, whereas Synapse will need to figure out how
|
||||
to turn that into a displayname when creating a Matrix user for this individual.
|
||||
It may choose `John Smith`, or `Smith, John [Example.com]` or any number of
|
||||
variations. As each Synapse configuration may want something different, this is
|
||||
where SAML mapping providers come into play.
|
||||
|
||||
## Enabling Providers
|
||||
|
||||
An external mapping provider is supplied to Synapse in the form of a
|
||||
Python module. Retrieve this module from [PyPI](https://pypi.org) or elsewhere,
|
||||
then tell Synapse where to look for the handler class by editing the
|
||||
`saml2_config.user_mapping_provider.module` config option.
|
||||
|
||||
`saml2_config.user_mapping_provider.config` allows you to provide custom
|
||||
configuration options to the module. Check with the module's documentation for
|
||||
what options it provides (if any). The options listed by default are for the
|
||||
user mapping provider built in to Synapse. If using a custom module, you should
|
||||
comment these options out and use those specified by the module instead.
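
For example, a homeserver config fragment enabling a hypothetical external module might look like this (the module path and the keys under `config` are placeholders):

```
saml2_config:
  user_mapping_provider:
    # The fully-qualified class name of the external provider.
    module: "my_mapping_provider.SamlMappingProvider"
    # Passed (as a dict) to the provider's parse_config method.
    config:
      source_attribute: "displayName"
```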
|
||||
|
||||
## Building a Custom Mapping Provider
|
||||
|
||||
A custom mapping provider must specify the following methods:
|
||||
|
||||
* `__init__(self, parsed_config)`
|
||||
- Arguments:
|
||||
- `parsed_config` - A configuration object that is the return value of the
|
||||
`parse_config` method. You should set any configuration options needed by
|
||||
the module here.
|
||||
* `saml_response_to_user_attributes(self, saml_response, failures)`
|
||||
- Arguments:
|
||||
- `saml_response` - A `saml2.response.AuthnResponse` object to extract user
|
||||
information from.
|
||||
- `failures` - An `int` that represents the number of times the returned
|
||||
mxid localpart mapping has failed. This should be used
|
||||
to create a deduplicated mxid localpart which should be
|
||||
returned instead. For example, if this method returns
|
||||
`john.doe` as the value of `mxid_localpart` in the returned
|
||||
dict, and that is already taken on the homeserver, this
|
||||
method will be called again with the same parameters but
|
||||
with `failures=1`. The method should then return a different
|
||||
`mxid_localpart` value, such as `john.doe1`.
|
||||
- This method must return a dictionary, which will then be used by Synapse
|
||||
to build a new user. The following keys are allowed:
|
||||
* `mxid_localpart` - Required. The mxid localpart of the new user.
|
||||
* `displayname` - The displayname of the new user. If not provided, will default to
|
||||
the value of `mxid_localpart`.
|
||||
* `parse_config(config)`
|
||||
- This method should have the `@staticmethod` decorator.
|
||||
- Arguments:
|
||||
- `config` - A `dict` representing the parsed content of the
|
||||
`saml2_config.user_mapping_provider.config` homeserver config option.
|
||||
Runs on homeserver startup. Providers should extract any option values
|
||||
they need here.
|
||||
- Whatever is returned will be passed back to the user mapping provider module's
|
||||
`__init__` method during construction.
|
||||
* `get_saml_attributes(config)`
|
||||
- This method should have the `@staticmethod` decorator.
|
||||
- Arguments:
|
||||
- `config` - An object resulting from a call to `parse_config`.
|
||||
- Returns a tuple of two sets. The first set contains the SAML auth
|
||||
response attributes that are required for the module to function, whereas
|
||||
the second set consists of those attributes which can be used if available,
|
||||
but are not necessary.
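
To make the interface above concrete, here is a minimal, hypothetical provider sketch. The class name and the `source_attribute` option are illustrative only, and the `.ava` attribute-dictionary access assumes the usual pysaml2 `AuthnResponse` interface:

```
class MyMappingProvider:
    def __init__(self, parsed_config):
        # parsed_config is whatever parse_config() returned below.
        self._source_attribute = parsed_config["source_attribute"]

    @staticmethod
    def parse_config(config):
        # config is the dict from saml2_config.user_mapping_provider.config.
        return {"source_attribute": config.get("source_attribute", "uid")}

    @staticmethod
    def get_saml_attributes(config):
        # (required attributes, optional attributes)
        return {"uid"}, {"displayName"}

    def saml_response_to_user_attributes(self, saml_response, failures):
        # Base the localpart on the configured SAML attribute, appending the
        # failure count so that a retry produces a different localpart.
        base = saml_response.ava[self._source_attribute][0].lower()
        localpart = base + (str(failures) if failures else "")
        displayname = saml_response.ava.get("displayName", [None])[0]
        return {"mxid_localpart": localpart, "displayname": displayname}
```

Such a class would then be enabled via the `module` option described in "Enabling Providers" above.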
|
||||
|
||||
## Synapse's Default Provider
|
||||
|
||||
Synapse has a built-in SAML mapping provider that is used if a custom provider isn't
|
||||
specified in the config. It is located at
|
||||
[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py).
|
|
@ -54,6 +54,13 @@ pid_file: DATADIR/homeserver.pid
|
|||
#
|
||||
#require_auth_for_profile_requests: true
|
||||
|
||||
# Uncomment to require a user to share a room with another user in order
|
||||
# to retrieve their profile information. Only checked on Client-Server
|
||||
# requests. Profile requests from other servers should be checked by the
|
||||
# requesting server. Defaults to 'false'.
|
||||
#
|
||||
#limit_profile_requests_to_users_who_share_rooms: true
|
||||
|
||||
# If set to 'true', removes the need for authentication to access the server's
|
||||
# public rooms directory through the client API, meaning that anyone can
|
||||
# query the room directory. Defaults to 'false'.
|
||||
|
@ -685,10 +692,6 @@ media_store_path: "DATADIR/media_store"
|
|||
# config:
|
||||
# directory: /mnt/some/other/directory
|
||||
|
||||
# Directory where in-progress uploads are stored.
|
||||
#
|
||||
uploads_path: "DATADIR/uploads"
|
||||
|
||||
# The largest allowed upload size in bytes
|
||||
#
|
||||
#max_upload_size: 10M
|
||||
|
@ -1115,14 +1118,19 @@ metrics_flags:
|
|||
signing_key_path: "CONFDIR/SERVERNAME.signing.key"
|
||||
|
||||
# The keys that the server used to sign messages with but won't use
|
||||
# to sign new messages. E.g. it has lost its private key
|
||||
# to sign new messages.
|
||||
#
|
||||
#old_signing_keys:
|
||||
# "ed25519:auto":
|
||||
# # Base64 encoded public key
|
||||
# key: "The public part of your old signing key."
|
||||
# # Millisecond POSIX timestamp when the key expired.
|
||||
# expired_ts: 123456789123
|
||||
old_signing_keys:
|
||||
# For each key, `key` should be the base64-encoded public key, and
|
||||
# `expired_ts` should be the time (in milliseconds since the unix epoch) that
|
||||
# it was last used.
|
||||
#
|
||||
# It is possible to build an entry from an old signing.key file using the
|
||||
# `export_signing_key` script which is provided with synapse.
|
||||
#
|
||||
# For example:
|
||||
#
|
||||
#"ed25519:id": { key: "base64string", expired_ts: 123456789123 }
|
||||
|
||||
# How long key response published by this server is valid for.
|
||||
# Used to set the valid_until_ts in /key/v2 APIs.
|
||||
|
@ -1250,33 +1258,58 @@ saml2_config:
|
|||
#
|
||||
#config_path: "CONFDIR/sp_conf.py"
|
||||
|
||||
# the lifetime of a SAML session. This defines how long a user has to
|
||||
# The lifetime of a SAML session. This defines how long a user has to
|
||||
# complete the authentication process, if allow_unsolicited is unset.
|
||||
# The default is 5 minutes.
|
||||
#
|
||||
#saml_session_lifetime: 5m
|
||||
|
||||
# The SAML attribute (after mapping via the attribute maps) to use to derive
|
||||
# the Matrix ID from. 'uid' by default.
|
||||
# An external module can be provided here as a custom solution to
|
||||
# mapping attributes returned from a saml provider onto a matrix user.
|
||||
#
|
||||
#mxid_source_attribute: displayName
|
||||
user_mapping_provider:
|
||||
# The custom module's class. Uncomment to use a custom module.
|
||||
#
|
||||
#module: mapping_provider.SamlMappingProvider
|
||||
|
||||
# The mapping system to use for mapping the saml attribute onto a matrix ID.
|
||||
# Options include:
|
||||
# * 'hexencode' (which maps unpermitted characters to '=xx')
|
||||
# * 'dotreplace' (which replaces unpermitted characters with '.').
|
||||
# The default is 'hexencode'.
|
||||
#
|
||||
#mxid_mapping: dotreplace
|
||||
# Custom configuration values for the module. The options below are
|
||||
# intended for the built-in provider and should be changed if
|
||||
# using a custom module. This section will be passed as a Python
|
||||
# dictionary to the module's `parse_config` method.
|
||||
#
|
||||
config:
|
||||
# The SAML attribute (after mapping via the attribute maps) to use
|
||||
# to derive the Matrix ID from. 'uid' by default.
|
||||
#
|
||||
# Note: This used to be configured by the
|
||||
# saml2_config.mxid_source_attribute option. If that is still
|
||||
# defined, its value will be used instead.
|
||||
#
|
||||
#mxid_source_attribute: displayName
|
||||
|
||||
# In previous versions of synapse, the mapping from SAML attribute to MXID was
|
||||
# always calculated dynamically rather than stored in a table. For backwards-
|
||||
# compatibility, we will look for user_ids matching such a pattern before
|
||||
# creating a new account.
|
||||
# The mapping system to use for mapping the saml attribute onto a
|
||||
# matrix ID.
|
||||
#
|
||||
# Options include:
|
||||
# * 'hexencode' (which maps unpermitted characters to '=xx')
|
||||
# * 'dotreplace' (which replaces unpermitted characters with
|
||||
# '.').
|
||||
# The default is 'hexencode'.
|
||||
#
|
||||
# Note: This used to be configured by the
|
||||
# saml2_config.mxid_mapping option. If that is still defined, its
|
||||
# value will be used instead.
|
||||
#
|
||||
#mxid_mapping: dotreplace
|
||||
|
||||
# In previous versions of synapse, the mapping from SAML attribute to
|
||||
# MXID was always calculated dynamically rather than stored in a
|
||||
# table. For backwards-compatibility, we will look for user_ids
|
||||
# matching such a pattern before creating a new account.
|
||||
#
|
||||
# This setting controls the SAML attribute which will be used for this
|
||||
# backwards-compatibility lookup. Typically it should be 'uid', but if the
|
||||
# attribute maps are changed, it may be necessary to change it.
|
||||
# backwards-compatibility lookup. Typically it should be 'uid', but if
|
||||
# the attribute maps are changed, it may be necessary to change it.
|
||||
#
|
||||
# The default is 'uid'.
|
||||
#
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
# Example log config file for synapse.
|
||||
# Log configuration for Synapse.
|
||||
#
|
||||
# This is a YAML file containing a standard Python logging configuration
|
||||
# dictionary. See [1] for details on the valid settings.
|
||||
|
@ -20,7 +20,7 @@ handlers:
|
|||
file:
|
||||
class: logging.handlers.RotatingFileHandler
|
||||
formatter: precise
|
||||
filename: /home/rav/work/synapse/homeserver.log
|
||||
filename: /var/log/matrix-synapse/homeserver.log
|
||||
maxBytes: 104857600
|
||||
backupCount: 10
|
||||
filters: [context]
|
||||
|
|
|
@ -39,6 +39,8 @@ The TURN daemon `coturn` is available from a variety of sources such as native p
|
|||
make
|
||||
make install
|
||||
|
||||
### Configuration
|
||||
|
||||
1. Create or edit the config file in `/etc/turnserver.conf`. The relevant
|
||||
lines, with example values, are:
|
||||
|
||||
|
|
mypy.ini
|
@ -1,7 +1,7 @@
|
|||
[mypy]
|
||||
namespace_packages = True
|
||||
plugins = mypy_zope:plugin
|
||||
follow_imports = normal
|
||||
follow_imports = silent
|
||||
check_untyped_defs = True
|
||||
show_error_codes = True
|
||||
show_traceback = True
|
||||
|
|
|
@ -7,12 +7,22 @@ set -e
|
|||
cd `dirname $0`/..
|
||||
|
||||
SAMPLE_CONFIG="docs/sample_config.yaml"
|
||||
SAMPLE_LOG_CONFIG="docs/sample_log_config.yaml"
|
||||
|
||||
check() {
|
||||
diff -u "$SAMPLE_LOG_CONFIG" <(./scripts/generate_log_config) >/dev/null || return 1
|
||||
}
|
||||
|
||||
if [ "$1" == "--check" ]; then
|
||||
diff -u "$SAMPLE_CONFIG" <(./scripts/generate_config --header-file docs/.sample_config_header.yaml) >/dev/null || {
|
||||
echo -e "\e[1m\e[31m$SAMPLE_CONFIG is not up-to-date. Regenerate it with \`scripts-dev/generate_sample_config\`.\e[0m" >&2
|
||||
exit 1
|
||||
}
|
||||
diff -u "$SAMPLE_LOG_CONFIG" <(./scripts/generate_log_config) >/dev/null || {
|
||||
echo -e "\e[1m\e[31m$SAMPLE_LOG_CONFIG is not up-to-date. Regenerate it with \`scripts-dev/generate_sample_config\`.\e[0m" >&2
|
||||
exit 1
|
||||
}
|
||||
else
|
||||
./scripts/generate_config --header-file docs/.sample_config_header.yaml -o "$SAMPLE_CONFIG"
|
||||
./scripts/generate_log_config -o "$SAMPLE_LOG_CONFIG"
|
||||
fi
|
||||
|
|
|
@ -0,0 +1,184 @@
|
|||
#!/bin/bash
|
||||
#
|
||||
# This script generates SQL files for creating a brand new Synapse DB with the latest
|
||||
# schema, on both SQLite3 and Postgres.
|
||||
#
|
||||
# It does so by having Synapse generate an up-to-date SQLite DB, then running
|
||||
# synapse_port_db to convert it to Postgres. It then dumps the contents of both.
|
||||
|
||||
POSTGRES_HOST="localhost"
|
||||
POSTGRES_DB_NAME="synapse_full_schema.$$"
|
||||
|
||||
SQLITE_FULL_SCHEMA_OUTPUT_FILE="full.sql.sqlite"
|
||||
POSTGRES_FULL_SCHEMA_OUTPUT_FILE="full.sql.postgres"
|
||||
|
||||
REQUIRED_DEPS=("matrix-synapse" "psycopg2")
|
||||
|
||||
usage() {
|
||||
echo
|
||||
echo "Usage: $0 -p <postgres_username> -o <path> [-c] [-n] [-h]"
|
||||
echo
|
||||
echo "-p <postgres_username>"
|
||||
echo " Username to connect to local postgres instance. The password will be requested"
|
||||
echo " during script execution."
|
||||
echo "-c"
|
||||
echo " CI mode. Enables coverage tracking and prints every command that the script runs."
|
||||
echo "-o <path>"
|
||||
echo " Directory to output full schema files to."
|
||||
echo "-h"
|
||||
echo " Display this help text."
|
||||
}
|
||||
|
||||
while getopts "p:co:h" opt; do
|
||||
case $opt in
|
||||
p)
|
||||
POSTGRES_USERNAME=$OPTARG
|
||||
;;
|
||||
c)
|
||||
# Print all commands that are being executed
|
||||
set -x
|
||||
|
||||
# Modify required dependencies for coverage
|
||||
REQUIRED_DEPS+=("coverage" "coverage-enable-subprocess")
|
||||
|
||||
COVERAGE=1
|
||||
;;
|
||||
o)
|
||||
command -v realpath > /dev/null || (echo "The -o flag requires the 'realpath' binary to be installed" && exit 1)
|
||||
OUTPUT_DIR="$(realpath "$OPTARG")"
|
||||
;;
|
||||
h)
|
||||
usage
|
||||
exit
|
||||
;;
|
||||
\?)
|
||||
echo "ERROR: Invalid option: -$OPTARG" >&2
|
||||
usage
|
||||
exit
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Check that required dependencies are installed
|
||||
unsatisfied_requirements=()
|
||||
for dep in "${REQUIRED_DEPS[@]}"; do
|
||||
pip show "$dep" --quiet || unsatisfied_requirements+=("$dep")
|
||||
done
|
||||
if [ ${#unsatisfied_requirements} -ne 0 ]; then
|
||||
echo "Please install the following python packages: ${unsatisfied_requirements[*]}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ -z "$POSTGRES_USERNAME" ]; then
|
||||
echo "No postgres username supplied"
|
||||
usage
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ -z "$OUTPUT_DIR" ]; then
|
||||
echo "No output directory supplied"
|
||||
usage
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Create the output directory if it doesn't exist
|
||||
mkdir -p "$OUTPUT_DIR"
|
||||
|
||||
read -rsp "Postgres password for '$POSTGRES_USERNAME': " POSTGRES_PASSWORD
|
||||
echo ""
|
||||
|
||||
# Exit immediately if a command fails
|
||||
set -e
|
||||
|
||||
# cd to root of the synapse directory
|
||||
cd "$(dirname "$0")/.."
|
||||
|
||||
# Create temporary SQLite and Postgres homeserver db configs and key file
|
||||
TMPDIR=$(mktemp -d)
|
||||
KEY_FILE=$TMPDIR/test.signing.key # default Synapse signing key path
|
||||
SQLITE_CONFIG=$TMPDIR/sqlite.conf
|
||||
SQLITE_DB=$TMPDIR/homeserver.db
|
||||
POSTGRES_CONFIG=$TMPDIR/postgres.conf
|
||||
|
||||
# Ensure these files are deleted on script exit
|
||||
trap 'rm -rf $TMPDIR' EXIT
|
||||
|
||||
cat > "$SQLITE_CONFIG" <<EOF
|
||||
server_name: "test"
|
||||
|
||||
signing_key_path: "$KEY_FILE"
|
||||
macaroon_secret_key: "abcde"
|
||||
|
||||
report_stats: false
|
||||
|
||||
database:
|
||||
name: "sqlite3"
|
||||
args:
|
||||
database: "$SQLITE_DB"
|
||||
|
||||
# Suppress the key server warning.
|
||||
trusted_key_servers: []
|
||||
EOF
|
||||
|
||||
cat > "$POSTGRES_CONFIG" <<EOF
|
||||
server_name: "test"
|
||||
|
||||
signing_key_path: "$KEY_FILE"
|
||||
macaroon_secret_key: "abcde"
|
||||
|
||||
report_stats: false
|
||||
|
||||
database:
|
||||
name: "psycopg2"
|
||||
args:
|
||||
user: "$POSTGRES_USERNAME"
|
||||
host: "$POSTGRES_HOST"
|
||||
password: "$POSTGRES_PASSWORD"
|
||||
database: "$POSTGRES_DB_NAME"
|
||||
|
||||
# Suppress the key server warning.
|
||||
trusted_key_servers: []
|
||||
EOF
|
||||
|
||||
# Generate the server's signing key.
|
||||
echo "Generating SQLite3 db schema..."
|
||||
python -m synapse.app.homeserver --generate-keys -c "$SQLITE_CONFIG"
|
||||
|
||||
# Make sure the SQLite3 database is using the latest schema and has no pending background update.
|
||||
echo "Running db background jobs..."
|
||||
scripts-dev/update_database --database-config "$SQLITE_CONFIG"
|
||||
|
||||
# Create the PostgreSQL database.
|
||||
echo "Creating postgres database..."
|
||||
createdb $POSTGRES_DB_NAME
|
||||
|
||||
echo "Copying data from SQLite3 to Postgres with synapse_port_db..."
|
||||
if [ -z "$COVERAGE" ]; then
|
||||
# No coverage needed
|
||||
scripts/synapse_port_db --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
|
||||
else
|
||||
# Coverage desired
|
||||
coverage run scripts/synapse_port_db --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
|
||||
fi
|
||||
|
||||
# Delete schema_version, applied_schema_deltas and applied_module_schemas tables
|
||||
# This needs to be done after synapse_port_db is run
|
||||
echo "Dropping unwanted db tables..."
|
||||
SQL="
|
||||
DROP TABLE schema_version;
|
||||
DROP TABLE applied_schema_deltas;
|
||||
DROP TABLE applied_module_schemas;
|
||||
"
|
||||
sqlite3 "$SQLITE_DB" <<< "$SQL"
|
||||
psql $POSTGRES_DB_NAME -U "$POSTGRES_USERNAME" -w <<< "$SQL"
|
||||
|
||||
echo "Dumping SQLite3 schema to '$OUTPUT_DIR/$SQLITE_FULL_SCHEMA_OUTPUT_FILE'..."
|
||||
sqlite3 "$SQLITE_DB" ".dump" > "$OUTPUT_DIR/$SQLITE_FULL_SCHEMA_OUTPUT_FILE"
|
||||
|
||||
echo "Dumping Postgres schema to '$OUTPUT_DIR/$POSTGRES_FULL_SCHEMA_OUTPUT_FILE'..."
|
||||
pg_dump --format=plain --no-tablespaces --no-acl --no-owner $POSTGRES_DB_NAME | sed -e '/^--/d' -e 's/public\.//g' -e '/^SET /d' -e '/^SELECT /d' > "$OUTPUT_DIR/$POSTGRES_FULL_SCHEMA_OUTPUT_FILE"
|
||||
|
||||
echo "Cleaning up temporary Postgres database..."
|
||||
dropdb $POSTGRES_DB_NAME
|
||||
|
||||
echo "Done! Files dumped to: $OUTPUT_DIR"
|
|
@ -26,8 +26,6 @@ from synapse.config.homeserver import HomeServerConfig
|
|||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage import DataStore
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.storage.prepare_database import prepare_database
|
||||
|
||||
logger = logging.getLogger("update_database")
|
||||
|
||||
|
@ -35,21 +33,11 @@ logger = logging.getLogger("update_database")
|
|||
class MockHomeserver(HomeServer):
|
||||
DATASTORE_CLASS = DataStore
|
||||
|
||||
def __init__(self, config, database_engine, db_conn, **kwargs):
|
||||
def __init__(self, config, **kwargs):
|
||||
super(MockHomeserver, self).__init__(
|
||||
config.server_name,
|
||||
reactor=reactor,
|
||||
config=config,
|
||||
database_engine=database_engine,
|
||||
**kwargs
|
||||
config.server_name, reactor=reactor, config=config, **kwargs
|
||||
)
|
||||
|
||||
self.database_engine = database_engine
|
||||
self.db_conn = db_conn
|
||||
|
||||
def get_db_conn(self):
|
||||
return self.db_conn
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser(
|
||||
|
@ -85,25 +73,11 @@ if __name__ == "__main__":
|
|||
config = HomeServerConfig()
|
||||
config.parse_config_dict(hs_config, "", "")
|
||||
|
||||
# Create the database engine and a connection to it.
|
||||
database_engine = create_engine(config.database_config)
|
||||
db_conn = database_engine.module.connect(
|
||||
**{
|
||||
k: v
|
||||
for k, v in config.database_config.get("args", {}).items()
|
||||
if not k.startswith("cp_")
|
||||
}
|
||||
)
|
||||
|
||||
# Update the database to the latest schema.
|
||||
prepare_database(db_conn, database_engine, config=config)
|
||||
db_conn.commit()
|
||||
|
||||
# Instantiate and initialise the homeserver object.
|
||||
hs = MockHomeserver(
|
||||
config, database_engine, db_conn, db_config=config.database_config,
|
||||
)
|
||||
# setup instantiates the store within the homeserver object.
|
||||
hs = MockHomeserver(config)
|
||||
|
||||
# Setup instantiates the store within the homeserver object and updates the
|
||||
# DB.
|
||||
hs.setup()
|
||||
store = hs.get_datastore()
|
||||
|
||||
|
|
|
@ -0,0 +1,94 @@
|
|||
#!/usr/bin/env python
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2019 The Matrix.org Foundation C.I.C.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import argparse
|
||||
import sys
|
||||
import time
|
||||
from typing import Optional
|
||||
|
||||
import nacl.signing
|
||||
from signedjson.key import encode_verify_key_base64, get_verify_key, read_signing_keys
|
||||
|
||||
|
||||
def exit(status: int = 0, message: Optional[str] = None):
|
||||
if message:
|
||||
print(message, file=sys.stderr)
|
||||
sys.exit(status)
|
||||
|
||||
|
||||
def format_plain(public_key: nacl.signing.VerifyKey):
|
||||
print(
|
||||
"%s:%s %s"
|
||||
% (public_key.alg, public_key.version, encode_verify_key_base64(public_key),)
|
||||
)
|
||||
|
||||
|
||||
def format_for_config(public_key: nacl.signing.VerifyKey, expiry_ts: int):
|
||||
print(
|
||||
' "%s:%s": { key: "%s", expired_ts: %i }'
|
||||
% (
|
||||
public_key.alg,
|
||||
public_key.version,
|
||||
encode_verify_key_base64(public_key),
|
||||
expiry_ts,
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser()
|
||||
|
||||
parser.add_argument(
|
||||
"key_file", nargs="+", type=argparse.FileType("r"), help="The key file to read",
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"-x",
|
||||
action="store_true",
|
||||
dest="for_config",
|
||||
help="format the output for inclusion in the old_signing_keys config setting",
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"--expiry-ts",
|
||||
type=int,
|
||||
default=int(time.time() * 1000) + 6*3600000,
|
||||
help=(
|
||||
"The expiry time to use for -x, in milliseconds since 1970. The default "
|
||||
"is (now+6h)."
|
||||
),
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
formatter = (
|
||||
(lambda k: format_for_config(k, args.expiry_ts))
|
||||
if args.for_config
|
||||
else format_plain
|
||||
)
|
||||
|
||||
keys = []
|
||||
for file in args.key_file:
|
||||
try:
|
||||
res = read_signing_keys(file)
|
||||
except Exception as e:
|
||||
exit(
|
||||
status=1,
|
||||
message="Error reading key from file %s: %s %s"
|
||||
% (file.name, type(e), e),
|
||||
)
|
||||
res = []
|
||||
for key in res:
|
||||
formatter(get_verify_key(key))
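
A usage sketch for the script above (the key path and output values are illustrative): passing `-x` prints each key in a form that can be pasted into the `old_signing_keys` section of the homeserver config.

```
$ export_signing_key -x /path/to/old.signing.key
    "ed25519:a_AbCd": { key: "<base64 public key>", expired_ts: 1578500000000 }
```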
|
|
@ -0,0 +1,43 @@
|
|||
#!/usr/bin/env python3
|
||||
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2020 The Matrix.org Foundation C.I.C.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import argparse
|
||||
import sys
|
||||
|
||||
from synapse.config.logger import DEFAULT_LOG_CONFIG
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser()
|
||||
|
||||
parser.add_argument(
|
||||
"-o",
|
||||
"--output-file",
|
||||
type=argparse.FileType("w"),
|
||||
default=sys.stdout,
|
||||
help="File to write the configuration to. Default: stdout",
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"-f",
|
||||
"--log-file",
|
||||
type=str,
|
||||
default="/var/log/matrix-synapse/homeserver.log",
|
||||
help="name of the log file",
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
args.output_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=args.log_file))
|
|
@ -30,6 +30,7 @@ import yaml
|
|||
from twisted.enterprise import adbapi
|
||||
from twisted.internet import defer, reactor
|
||||
|
||||
from synapse.config.database import DatabaseConnectionConfig
|
||||
from synapse.config.homeserver import HomeServerConfig
|
||||
from synapse.logging.context import PreserveLoggingContext
|
||||
from synapse.storage._base import LoggingTransaction
|
||||
|
@ -50,12 +51,13 @@ from synapse.storage.data_stores.main.registration import (
|
|||
from synapse.storage.data_stores.main.room import RoomBackgroundUpdateStore
|
||||
from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
|
||||
from synapse.storage.data_stores.main.search import SearchBackgroundUpdateStore
|
||||
from synapse.storage.data_stores.main.state import StateBackgroundUpdateStore
|
||||
from synapse.storage.data_stores.main.state import MainStateBackgroundUpdateStore
|
||||
from synapse.storage.data_stores.main.stats import StatsStore
|
||||
from synapse.storage.data_stores.main.user_directory import (
|
||||
UserDirectoryBackgroundUpdateStore,
|
||||
)
|
||||
from synapse.storage.database import Database
|
||||
from synapse.storage.data_stores.state.bg_updates import StateBackgroundUpdateStore
|
||||
from synapse.storage.database import Database, make_conn
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.storage.prepare_database import prepare_database
|
||||
from synapse.util import Clock
|
||||
|
@ -137,6 +139,7 @@ class Store(
|
|||
RoomMemberBackgroundUpdateStore,
|
||||
SearchBackgroundUpdateStore,
|
||||
StateBackgroundUpdateStore,
|
||||
MainStateBackgroundUpdateStore,
|
||||
UserDirectoryBackgroundUpdateStore,
|
||||
StatsStore,
|
||||
):
|
||||
|
@ -165,23 +168,17 @@ class Store(
|
|||
|
||||
|
||||
class MockHomeserver:
|
||||
def __init__(self, config, database_engine, db_conn, db_pool):
|
||||
self.database_engine = database_engine
|
||||
self.db_conn = db_conn
|
||||
self.db_pool = db_pool
|
||||
def __init__(self, config):
|
||||
self.clock = Clock(reactor)
|
||||
self.config = config
|
||||
self.hostname = config.server_name
|
||||
|
||||
def get_db_conn(self):
|
||||
return self.db_conn
|
||||
|
||||
def get_db_pool(self):
|
||||
return self.db_pool
|
||||
|
||||
def get_clock(self):
|
||||
return self.clock
|
||||
|
||||
def get_reactor(self):
|
||||
return reactor
|
||||
|
||||
|
||||
class Porter(object):
|
||||
def __init__(self, **kwargs):
|
||||
|
@ -445,45 +442,36 @@ class Porter(object):
|
|||
else:
|
||||
return
|
||||
|
||||
def setup_db(self, db_config, database_engine):
|
||||
db_conn = database_engine.module.connect(
|
||||
**{
|
||||
k: v
|
||||
for k, v in db_config.get("args", {}).items()
|
||||
if not k.startswith("cp_")
|
||||
}
|
||||
)
|
||||
|
||||
prepare_database(db_conn, database_engine, config=None)
|
||||
def setup_db(self, db_config: DatabaseConnectionConfig, engine):
|
||||
db_conn = make_conn(db_config, engine)
|
||||
prepare_database(db_conn, engine, config=None)
|
||||
|
||||
db_conn.commit()
|
||||
|
||||
return db_conn
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def build_db_store(self, config):
|
||||
def build_db_store(self, db_config: DatabaseConnectionConfig):
|
||||
"""Builds and returns a database store using the provided configuration.
|
||||
|
||||
Args:
|
||||
config: The database configuration, i.e. a dict following the structure of
|
||||
the "database" section of Synapse's configuration file.
|
||||
config: The database configuration
|
||||
|
||||
Returns:
|
||||
The built Store object.
|
||||
"""
|
||||
engine = create_engine(config)
|
||||
self.progress.set_state("Preparing %s" % db_config.config["name"])
|
||||
|
||||
self.progress.set_state("Preparing %s" % config["name"])
|
||||
conn = self.setup_db(config, engine)
|
||||
engine = create_engine(db_config.config)
|
||||
conn = self.setup_db(db_config, engine)
|
||||
|
||||
db_pool = adbapi.ConnectionPool(config["name"], **config["args"])
|
||||
hs = MockHomeserver(self.hs_config)
|
||||
|
||||
hs = MockHomeserver(self.hs_config, engine, conn, db_pool)
|
||||
|
||||
store = Store(Database(hs), conn, hs)
|
||||
store = Store(Database(hs, db_config, engine), conn, hs)
|
||||
|
||||
yield store.db.runInteraction(
|
||||
"%s_engine.check_database" % config["name"], engine.check_database,
|
||||
"%s_engine.check_database" % db_config.config["name"],
|
||||
engine.check_database,
|
||||
)
|
||||
|
||||
return store
|
||||
|
@ -509,7 +497,9 @@ class Porter(object):
|
|||
@defer.inlineCallbacks
|
||||
def run(self):
|
||||
try:
|
||||
self.sqlite_store = yield self.build_db_store(self.sqlite_config)
|
||||
self.sqlite_store = yield self.build_db_store(
|
||||
DatabaseConnectionConfig("master-sqlite", self.sqlite_config)
|
||||
)
|
||||
|
||||
# Check if all background updates are done, abort if not.
|
||||
updates_complete = (
|
||||
|
@ -524,7 +514,7 @@ class Porter(object):
|
|||
defer.returnValue(None)
|
||||
|
||||
self.postgres_store = yield self.build_db_store(
|
||||
self.hs_config.database_config
|
||||
self.hs_config.get_single_database()
|
||||
)
|
||||
|
||||
yield self.run_background_updates_on_postgres()
|
||||
|
|
|
@ -79,7 +79,7 @@ class Auth(object):
|
|||
|
||||
@defer.inlineCallbacks
|
||||
def check_from_context(self, room_version, event, context, do_sig_check=True):
|
||||
prev_state_ids = yield context.get_prev_state_ids(self.store)
|
||||
prev_state_ids = yield context.get_prev_state_ids()
|
||||
auth_events_ids = yield self.compute_auth_events(
|
||||
event, prev_state_ids, for_verification=True
|
||||
)
|
||||
|
|
|
@ -29,7 +29,6 @@ FEDERATION_V2_PREFIX = FEDERATION_PREFIX + "/v2"
|
|||
FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
|
||||
STATIC_PREFIX = "/_matrix/static"
|
||||
WEB_CLIENT_PREFIX = "/_matrix/client"
|
||||
CONTENT_REPO_PREFIX = "/_matrix/content"
|
||||
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
|
||||
MEDIA_PREFIX = "/_matrix/media/r0"
|
||||
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
|
||||
|
|
|
@ -237,6 +237,12 @@ def start(hs, listeners=None):
|
|||
"""
|
||||
Start a Synapse server or worker.
|
||||
|
||||
Should be called once the reactor is running and (if we're using ACME) the
|
||||
TLS certificates are in place.
|
||||
|
||||
Will start the main HTTP listeners and do some other startup tasks, and then
|
||||
notify systemd.
|
||||
|
||||
Args:
|
||||
hs (synapse.server.HomeServer)
|
||||
listeners (list[dict]): Listener configuration ('listeners' in homeserver.yaml)
|
||||
|
@ -311,9 +317,7 @@ def setup_sdnotify(hs):
|
|||
|
||||
# Tell systemd our state, if we're using it. This will silently fail if
|
||||
# we're not using systemd.
|
||||
hs.get_reactor().addSystemEventTrigger(
|
||||
"after", "startup", sdnotify, b"READY=1\nMAINPID=%i" % (os.getpid(),)
|
||||
)
|
||||
sdnotify(b"READY=1\nMAINPID=%i" % (os.getpid(),))
|
||||
|
||||
hs.get_reactor().addSystemEventTrigger(
|
||||
"before", "shutdown", sdnotify, b"STOPPING=1"
|
||||
|
|
|
@ -45,7 +45,6 @@ from synapse.replication.slave.storage.registration import SlavedRegistrationSto
|
|||
from synapse.replication.slave.storage.room import RoomStore
|
||||
from synapse.replication.tcp.client import ReplicationClientHandler
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.util.logcontext import LoggingContext
|
||||
from synapse.util.versionstring import get_version_string
|
||||
|
||||
|
@ -105,8 +104,10 @@ def export_data_command(hs, args):
|
|||
user_id = args.user_id
|
||||
directory = args.output_directory
|
||||
|
||||
res = yield hs.get_handlers().admin_handler.export_user_data(
|
||||
user_id, FileExfiltrationWriter(user_id, directory=directory)
|
||||
res = yield defer.ensureDeferred(
|
||||
hs.get_handlers().admin_handler.export_user_data(
|
||||
user_id, FileExfiltrationWriter(user_id, directory=directory)
|
||||
)
|
||||
)
|
||||
print(res)
|
||||
|
||||
|
@ -229,14 +230,10 @@ def start(config_options):
|
|||
|
||||
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
|
||||
|
||||
database_engine = create_engine(config.database_config)
|
||||
|
||||
ss = AdminCmdServer(
|
||||
config.server_name,
|
||||
db_config=config.database_config,
|
||||
config=config,
|
||||
version_string="Synapse/" + get_version_string(synapse),
|
||||
database_engine=database_engine,
|
||||
)
|
||||
|
||||
setup_logging(ss, config, use_worker_options=True)
|
||||
|
|
|
@ -34,7 +34,6 @@ from synapse.replication.slave.storage.events import SlavedEventStore
|
|||
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
|
||||
from synapse.replication.tcp.client import ReplicationClientHandler
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.util.httpresourcetree import create_resource_tree
|
||||
from synapse.util.manhole import manhole
|
||||
from synapse.util.versionstring import get_version_string
|
||||
|
@ -143,8 +142,6 @@ def start(config_options):
|
|||
|
||||
events.USE_FROZEN_DICTS = config.use_frozen_dicts
|
||||
|
||||
database_engine = create_engine(config.database_config)
|
||||
|
||||
if config.notify_appservices:
|
||||
sys.stderr.write(
|
||||
"\nThe appservices must be disabled in the main synapse process"
|
||||
|
@ -159,10 +156,8 @@ def start(config_options):
|
|||
|
||||
ps = AppserviceServer(
|
||||
config.server_name,
|
||||
db_config=config.database_config,
|
||||
config=config,
|
||||
version_string="Synapse/" + get_version_string(synapse),
|
||||
database_engine=database_engine,
|
||||
)
|
||||
|
||||
setup_logging(ps, config, use_worker_options=True)
|
||||
|
|
|
@ -62,7 +62,6 @@ from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet
|
|||
from synapse.rest.client.v2_alpha.register import RegisterRestServlet
|
||||
from synapse.rest.client.versions import VersionsRestServlet
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.util.httpresourcetree import create_resource_tree
|
||||
from synapse.util.manhole import manhole
|
||||
from synapse.util.versionstring import get_version_string
|
||||
|
@ -181,14 +180,10 @@ def start(config_options):
|
|||
|
||||
events.USE_FROZEN_DICTS = config.use_frozen_dicts
|
||||
|
||||
database_engine = create_engine(config.database_config)
|
||||
|
||||
ss = ClientReaderServer(
|
||||
config.server_name,
|
||||
db_config=config.database_config,
|
||||
config=config,
|
||||
version_string="Synapse/" + get_version_string(synapse),
|
||||
database_engine=database_engine,
|
||||
)
|
||||
|
||||
setup_logging(ss, config, use_worker_options=True)
|
||||
|
|
|
@ -57,7 +57,6 @@ from synapse.rest.client.v1.room import (
|
|||
)
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.util.httpresourcetree import create_resource_tree
|
||||
from synapse.util.manhole import manhole
|
||||
from synapse.util.versionstring import get_version_string
|
||||
|
@ -180,14 +179,10 @@ def start(config_options):
|
|||
|
||||
events.USE_FROZEN_DICTS = config.use_frozen_dicts
|
||||
|
||||
database_engine = create_engine(config.database_config)
|
||||
|
||||
ss = EventCreatorServer(
|
||||
config.server_name,
|
||||
db_config=config.database_config,
|
||||
config=config,
|
||||
version_string="Synapse/" + get_version_string(synapse),
|
||||
database_engine=database_engine,
|
||||
)
|
||||
|
||||
setup_logging(ss, config, use_worker_options=True)
|
||||
|
|
|
@ -46,7 +46,6 @@ from synapse.replication.slave.storage.transactions import SlavedTransactionStor
|
|||
from synapse.replication.tcp.client import ReplicationClientHandler
|
||||
from synapse.rest.key.v2 import KeyApiV2Resource
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.util.httpresourcetree import create_resource_tree
|
||||
from synapse.util.manhole import manhole
|
||||
from synapse.util.versionstring import get_version_string
|
||||
|
@ -162,14 +161,10 @@ def start(config_options):
|
|||
|
||||
events.USE_FROZEN_DICTS = config.use_frozen_dicts
|
||||
|
||||
database_engine = create_engine(config.database_config)
|
||||
|
||||
ss = FederationReaderServer(
|
||||
config.server_name,
|
||||
db_config=config.database_config,
|
||||
config=config,
|
||||
version_string="Synapse/" + get_version_string(synapse),
|
||||
database_engine=database_engine,
|
||||
)
|
||||
|
||||
setup_logging(ss, config, use_worker_options=True)
|
||||
|
|
|
@ -41,7 +41,6 @@ from synapse.replication.tcp.client import ReplicationClientHandler
|
|||
from synapse.replication.tcp.streams._base import ReceiptsStream
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.database import Database
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.types import ReadReceipt
|
||||
from synapse.util.async_helpers import Linearizer
|
||||
from synapse.util.httpresourcetree import create_resource_tree
|
||||
|
@ -174,8 +173,6 @@ def start(config_options):
|
|||
|
||||
events.USE_FROZEN_DICTS = config.use_frozen_dicts
|
||||
|
||||
database_engine = create_engine(config.database_config)
|
||||
|
||||
if config.send_federation:
|
||||
sys.stderr.write(
|
||||
"\nThe send_federation must be disabled in the main synapse process"
|
||||
|
@ -190,10 +187,8 @@ def start(config_options):
|
|||
|
||||
ss = FederationSenderServer(
|
||||
config.server_name,
|
||||
db_config=config.database_config,
|
||||
config=config,
|
||||
version_string="Synapse/" + get_version_string(synapse),
|
||||
database_engine=database_engine,
|
||||
)
|
||||
|
||||
setup_logging(ss, config, use_worker_options=True)
|
||||
|
|
|
@ -39,7 +39,6 @@ from synapse.replication.slave.storage.registration import SlavedRegistrationSto
|
|||
from synapse.replication.tcp.client import ReplicationClientHandler
|
||||
from synapse.rest.client.v2_alpha._base import client_patterns
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.util.httpresourcetree import create_resource_tree
|
||||
from synapse.util.manhole import manhole
|
||||
from synapse.util.versionstring import get_version_string
|
||||
|
@ -234,14 +233,10 @@ def start(config_options):
|
|||
|
||||
events.USE_FROZEN_DICTS = config.use_frozen_dicts
|
||||
|
||||
database_engine = create_engine(config.database_config)
|
||||
|
||||
ss = FrontendProxyServer(
|
||||
config.server_name,
|
||||
db_config=config.database_config,
|
||||
config=config,
|
||||
version_string="Synapse/" + get_version_string(synapse),
|
||||
database_engine=database_engine,
|
||||
)
|
||||
|
||||
setup_logging(ss, config, use_worker_options=True)
|
||||
|
|
|
@@ -39,7 +39,6 @@ import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import (
CONTENT_REPO_PREFIX,
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,

@@ -65,11 +64,10 @@ from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.admin import AdminRestResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.well_known import WellKnownResource
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.engines import IncorrectDatabaseSetup
from synapse.storage.prepare_database import UpgradeDatabaseException
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree

@@ -223,13 +221,7 @@ class SynapseHomeServer(HomeServer):
if self.get_config().enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update(
{
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
}
{MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo}
)
elif name == "media":
raise ConfigError(

@@ -318,7 +310,7 @@ def setup(config_options):
"Synapse Homeserver", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.stderr.write("\nERROR: %s\n" % (e,))
sys.exit(1)

if not config:

@@ -328,15 +320,10 @@ def setup(config_options):

events.USE_FROZEN_DICTS = config.use_frozen_dicts

database_engine = create_engine(config.database_config)
config.database_config["args"]["cp_openfun"] = database_engine.on_new_connection

hs = SynapseHomeServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)

synapse.config.logger.setup_logging(hs, config, use_worker_options=False)

@@ -347,13 +334,8 @@ def setup(config_options):
hs.setup()
except IncorrectDatabaseSetup as e:
quit_with_error(str(e))
except UpgradeDatabaseException:
sys.stderr.write(
"\nFailed to upgrade database.\n"
"Have you checked for version specific instructions in"
" UPGRADES.rst?\n"
)
sys.exit(1)
except UpgradeDatabaseException as e:
quit_with_error("Failed to upgrade database: %s" % (e,))

hs.setup_master()

@@ -519,8 +501,10 @@ def phone_stats_home(hs, stats, stats_process=_stats_process):
# Database version
#

stats["database_engine"] = hs.database_engine.module.__name__
stats["database_server_version"] = hs.database_engine.server_version
# This only reports info about the *main* database.
stats["database_engine"] = hs.get_datastore().db.engine.module.__name__
stats["database_server_version"] = hs.get_datastore().db.engine.server_version

logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
try:
yield hs.get_proxied_http_client().put_json(

@@ -21,7 +21,7 @@ from twisted.web.resource import NoResource

import synapse
from synapse import events
from synapse.api.urls import CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
from synapse.api.urls import LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig

@@ -37,10 +37,8 @@ from synapse.replication.slave.storage.registration import SlavedRegistrationSto
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage.data_stores.main.media_repository import MediaRepositoryStore
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string

@@ -83,9 +81,6 @@ class MediaRepositoryServer(HomeServer):
{
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
"/_synapse/admin": admin_resource,
}
)

@@ -157,14 +152,10 @@ def start(config_options):

events.USE_FROZEN_DICTS = config.use_frozen_dicts

database_engine = create_engine(config.database_config)

ss = MediaRepositoryServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)

setup_logging(ss, config, use_worker_options=True)

@@ -37,7 +37,6 @@ from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string

@@ -203,14 +202,10 @@ def start(config_options):
# Force the pushers to start since they will be disabled in the main config
config.start_pushers = True

database_engine = create_engine(config.database_config)

ps = PusherServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)

setup_logging(ps, config, use_worker_options=True)

@@ -55,7 +55,6 @@ from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.data_stores.main.presence import UserPresenceState
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string

@@ -437,14 +436,10 @@ def start(config_options):

synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts

database_engine = create_engine(config.database_config)

ss = SynchrotronServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
application_service_handler=SynchrotronApplicationService(),
)

@@ -44,7 +44,6 @@ from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.storage.database import Database
from synapse.storage.engines import create_engine
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole

@@ -200,8 +199,6 @@ def start(config_options):

events.USE_FROZEN_DICTS = config.use_frozen_dicts

database_engine = create_engine(config.database_config)

if config.update_user_directory:
sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process"

@@ -216,10 +213,8 @@ def start(config_options):

ss = UserDirectoryServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)

setup_logging(ss, config, use_worker_options=True)

@@ -12,12 +12,45 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from textwrap import indent

import yaml

from ._base import Config
from synapse.config._base import Config, ConfigError

logger = logging.getLogger(__name__)


class DatabaseConnectionConfig:
"""Contains the connection config for a particular database.

Args:
name: A label for the database, used for logging.
db_config: The config for a particular database, as per `database`
section of main config. Has three fields: `name` for database
module name, `args` for the args to give to the database
connector, and optional `data_stores` that is a list of stores to
provision on this database (defaulting to all).
"""

def __init__(self, name: str, db_config: dict):
if db_config["name"] not in ("sqlite3", "psycopg2"):
raise ConfigError("Unsupported database type %r" % (db_config["name"],))

if db_config["name"] == "sqlite3":
db_config.setdefault("args", {}).update(
{"cp_min": 1, "cp_max": 1, "check_same_thread": False}
)

data_stores = db_config.get("data_stores")
if data_stores is None:
data_stores = ["main", "state"]

self.name = name
self.config = db_config
self.data_stores = data_stores


class DatabaseConfig(Config):

@@ -26,22 +59,43 @@ class DatabaseConfig(Config):
def read_config(self, config, **kwargs):
self.event_cache_size = self.parse_size(config.get("event_cache_size", "10K"))

self.database_config = config.get("database")
# We *experimentally* support specifying multiple databases via the
# `databases` key. This is a map from a label to database config in the
# same format as the `database` config option, plus an extra
# `data_stores` key to specify which data store goes where. For example:
#
# databases:
# master:
# name: psycopg2
# data_stores: ["main"]
# args: {}
# state:
# name: psycopg2
# data_stores: ["state"]
# args: {}

if self.database_config is None:
self.database_config = {"name": "sqlite3", "args": {}}
multi_database_config = config.get("databases")
database_config = config.get("database")

if multi_database_config and database_config:
raise ConfigError("Can't specify both 'database' and 'databases' in config")

if multi_database_config:
if config.get("database_path"):
raise ConfigError("Can't specify 'database_path' with 'databases'")

self.databases = [
DatabaseConnectionConfig(name, db_conf)
for name, db_conf in multi_database_config.items()
]

name = self.database_config.get("name", None)
if name == "psycopg2":
pass
elif name == "sqlite3":
self.database_config.setdefault("args", {}).update(
{"cp_min": 1, "cp_max": 1, "check_same_thread": False}
)
else:
raise RuntimeError("Unsupported database type '%s'" % (name,))
if database_config is None:
database_config = {"name": "sqlite3", "args": {}}

self.set_databasepath(config.get("database_path"))
self.databases = [DatabaseConnectionConfig("master", database_config)]

self.set_databasepath(config.get("database_path"))

def generate_config_section(self, data_dir_path, database_conf, **kwargs):
if not database_conf:

@@ -76,11 +130,24 @@ class DatabaseConfig(Config):
self.set_databasepath(args.database_path)

def set_databasepath(self, database_path):
if database_path is None:
return

if database_path != ":memory:":
database_path = self.abspath(database_path)
if self.database_config.get("name", None) == "sqlite3":
if database_path is not None:
self.database_config["args"]["database"] = database_path

# We only support setting a database path if we have a single sqlite3
# database.
if len(self.databases) != 1:
raise ConfigError("Cannot specify 'database_path' with multiple databases")

database = self.get_single_database()
if database.config["name"] != "sqlite3":
# We don't raise here as we haven't done so before for this case.
logger.warn("Ignoring 'database_path' for non-sqlite3 database")
return

database.config["args"]["database"] = database_path

@staticmethod
def add_arguments(parser):

@@ -91,3 +158,11 @@ class DatabaseConfig(Config):
metavar="SQLITE_DATABASE_PATH",
help="The path to a sqlite database to use.",
)

def get_single_database(self) -> DatabaseConnectionConfig:
"""Returns the database if there is only one, useful for e.g. tests
"""
if len(self.databases) != 1:
raise Exception("More than one database exists")

return self.databases[0]

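A minimal sketch, not part of the diff, of how the new DatabaseConnectionConfig behaves for a single sqlite3 entry; the label and file path below are made up for illustration and only the fields shown in the constructor above are relied on.

    # Hypothetical usage of the class added above.
    conn = DatabaseConnectionConfig(
        "master",
        {"name": "sqlite3", "args": {"database": "/data/homeserver.db"}},
    )
    print(conn.data_stores)               # ['main', 'state'] - defaults to every data store
    print(conn.config["args"]["cp_max"])  # 1 - sqlite3 is pinned to a single connection
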
@@ -21,6 +21,7 @@ from __future__ import print_function
import email.utils
import os
from enum import Enum
from typing import Optional

import pkg_resources

@@ -101,7 +102,7 @@ class EmailConfig(Config):
# both in RegistrationConfig and here. We should factor this bit out
self.account_threepid_delegate_email = self.trusted_third_party_id_servers[
0
]
] # type: Optional[str]
self.using_identity_server_from_trusted_list = True
else:
raise ConfigError(

@@ -108,7 +108,7 @@ class KeyConfig(Config):
self.signing_key = self.read_signing_keys(signing_key_path, "signing_key")

self.old_signing_keys = self.read_old_signing_keys(
config.get("old_signing_keys", {})
config.get("old_signing_keys")
)
self.key_refresh_interval = self.parse_duration(
config.get("key_refresh_interval", "1d")

@@ -199,14 +199,19 @@ class KeyConfig(Config):
signing_key_path: "%(base_key_name)s.signing.key"

# The keys that the server used to sign messages with but won't use
# to sign new messages. E.g. it has lost its private key
# to sign new messages.
#
#old_signing_keys:
# "ed25519:auto":
# # Base64 encoded public key
# key: "The public part of your old signing key."
# # Millisecond POSIX timestamp when the key expired.
# expired_ts: 123456789123
old_signing_keys:
# For each key, `key` should be the base64-encoded public key, and
# `expired_ts` should be the time (in milliseconds since the unix epoch) that
# it was last used.
#
# It is possible to build an entry from an old signing.key file using the
# `export_signing_key` script which is provided with synapse.
#
# For example:
#
#"ed25519:id": { key: "base64string", expired_ts: 123456789123 }

# How long key response published by this server is valid for.
# Used to set the valid_until_ts in /key/v2 APIs.

@@ -290,6 +295,8 @@ class KeyConfig(Config):
raise ConfigError("Error reading %s: %s" % (name, str(e)))

def read_old_signing_keys(self, old_signing_keys):
if old_signing_keys is None:
return {}
keys = {}
for key_id, key_data in old_signing_keys.items():
if is_signing_algorithm_supported(key_id):

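One consequence worth spelling out with a sketch (an illustration, not part of the diff): the generated config now ships an uncommented `old_signing_keys:` heading with only comments beneath it, which YAML parses as None, so the call site stops supplying a `{}` default and read_old_signing_keys() guards against None itself.

    # Why the None guard is needed (illustrative).
    import yaml

    parsed = yaml.safe_load("old_signing_keys:\n")  # an empty section parses to None
    value = parsed.get("old_signing_keys", {})      # the default is NOT used: the key exists
    print(value)                                    # None
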
@@ -12,7 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import logging
import logging.config
import os

@@ -37,10 +37,17 @@ from synapse.logging._structured import (
from synapse.logging.context import LoggingContextFilter
from synapse.util.versionstring import get_version_string

from ._base import Config
from ._base import Config, ConfigError

DEFAULT_LOG_CONFIG = Template(
"""
"""\
# Log configuration for Synapse.
#
# This is a YAML file containing a standard Python logging configuration
# dictionary. See [1] for details on the valid settings.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema

version: 1

formatters:

@@ -81,11 +88,18 @@ disable_existing_loggers: false
"""
)

LOG_FILE_ERROR = """\
Support for the log_file configuration option and --log-file command-line option was
removed in Synapse 1.3.0. You should instead set up a separate log configuration file.
"""


class LoggingConfig(Config):
section = "logging"

def read_config(self, config, **kwargs):
if config.get("log_file"):
raise ConfigError(LOG_FILE_ERROR)
self.log_config = self.abspath(config.get("log_config"))
self.no_redirect_stdio = config.get("no_redirect_stdio", False)

@@ -106,6 +120,8 @@ class LoggingConfig(Config):
def read_arguments(self, args):
if args.no_redirect_stdio is not None:
self.no_redirect_stdio = args.no_redirect_stdio
if args.log_file is not None:
raise ConfigError(LOG_FILE_ERROR)

@staticmethod
def add_arguments(parser):

@@ -118,6 +134,10 @@ class LoggingConfig(Config):
help="Do not redirect stdout/stderr to the log",
)

logging_group.add_argument(
"-f", "--log-file", dest="log_file", help=argparse.SUPPRESS,
)

def generate_files(self, config, config_dir_path):
log_config = config.get("log_config")
if log_config and not os.path.exists(log_config):

@@ -83,10 +83,9 @@ class RatelimitConfig(Config):
)

rc_admin_redaction = config.get("rc_admin_redaction")
self.rc_admin_redaction = None
if rc_admin_redaction:
self.rc_admin_redaction = RateLimitConfig(rc_admin_redaction)
else:
self.rc_admin_redaction = None

def generate_config_section(self, **kwargs):
return """\

@@ -156,7 +156,6 @@ class ContentRepositoryConfig(Config):
(provider_class, parsed_config, wrapper_config)
)

self.uploads_path = self.ensure_directory(config.get("uploads_path", "uploads"))
self.dynamic_thumbnails = config.get("dynamic_thumbnails", False)
self.thumbnail_requirements = parse_thumbnail_requirements(
config.get("thumbnail_sizes", DEFAULT_THUMBNAIL_SIZES)

@@ -231,10 +230,6 @@ class ContentRepositoryConfig(Config):
# config:
# directory: /mnt/some/other/directory

# Directory where in-progress uploads are stored.
#
uploads_path: "%(uploads_path)s"

# The largest allowed upload size in bytes
#
#max_upload_size: 10M

@@ -14,17 +14,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import re
import logging

from synapse.python_dependencies import DependencyException, check_requirements
from synapse.types import (
map_username_to_mxid_localpart,
mxid_localpart_allowed_characters,
)
from synapse.util.module_loader import load_python_module
from synapse.util.module_loader import load_module, load_python_module

from ._base import Config, ConfigError

logger = logging.getLogger(__name__)

DEFAULT_USER_MAPPING_PROVIDER = (
"synapse.handlers.saml_handler.DefaultSamlMappingProvider"
)


def _dict_merge(merge_dict, into_dict):
"""Do a deep merge of two dicts

@@ -75,15 +77,69 @@ class SAML2Config(Config):

self.saml2_enabled = True

self.saml2_mxid_source_attribute = saml2_config.get(
"mxid_source_attribute", "uid"
)

self.saml2_grandfathered_mxid_source_attribute = saml2_config.get(
"grandfathered_mxid_source_attribute", "uid"
)

saml2_config_dict = self._default_saml_config_dict()
# user_mapping_provider may be None if the key is present but has no value
ump_dict = saml2_config.get("user_mapping_provider") or {}

# Use the default user mapping provider if not set
ump_dict.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER)

# Ensure a config is present
ump_dict["config"] = ump_dict.get("config") or {}

if ump_dict["module"] == DEFAULT_USER_MAPPING_PROVIDER:
# Load deprecated options for use by the default module
old_mxid_source_attribute = saml2_config.get("mxid_source_attribute")
if old_mxid_source_attribute:
logger.warning(
"The config option saml2_config.mxid_source_attribute is deprecated. "
"Please use saml2_config.user_mapping_provider.config"
".mxid_source_attribute instead."
)
ump_dict["config"]["mxid_source_attribute"] = old_mxid_source_attribute

old_mxid_mapping = saml2_config.get("mxid_mapping")
if old_mxid_mapping:
logger.warning(
"The config option saml2_config.mxid_mapping is deprecated. Please "
"use saml2_config.user_mapping_provider.config.mxid_mapping instead."
)
ump_dict["config"]["mxid_mapping"] = old_mxid_mapping

# Retrieve an instance of the module's class
# Pass the config dictionary to the module for processing
(
self.saml2_user_mapping_provider_class,
self.saml2_user_mapping_provider_config,
) = load_module(ump_dict)

# Ensure loaded user mapping module has defined all necessary methods
# Note parse_config() is already checked during the call to load_module
required_methods = [
"get_saml_attributes",
"saml_response_to_user_attributes",
]
missing_methods = [
method
for method in required_methods
if not hasattr(self.saml2_user_mapping_provider_class, method)
]
if missing_methods:
raise ConfigError(
"Class specified by saml2_config."
"user_mapping_provider.module is missing required "
"methods: %s" % (", ".join(missing_methods),)
)

# Get the desired saml auth response attributes from the module
saml2_config_dict = self._default_saml_config_dict(
*self.saml2_user_mapping_provider_class.get_saml_attributes(
self.saml2_user_mapping_provider_config
)
)
_dict_merge(
merge_dict=saml2_config.get("sp_config", {}), into_dict=saml2_config_dict
)

@@ -103,22 +159,27 @@ class SAML2Config(Config):
saml2_config.get("saml_session_lifetime", "5m")
)

mapping = saml2_config.get("mxid_mapping", "hexencode")
try:
self.saml2_mxid_mapper = MXID_MAPPER_MAP[mapping]
except KeyError:
raise ConfigError("%s is not a known mxid_mapping" % (mapping,))
def _default_saml_config_dict(
self, required_attributes: set, optional_attributes: set
):
"""Generate a configuration dictionary with required and optional attributes that
will be needed to process new user registration

def _default_saml_config_dict(self):
Args:
required_attributes: SAML auth response attributes that are
necessary to function
optional_attributes: SAML auth response attributes that can be used to add
additional information to Synapse user accounts, but are not required

Returns:
dict: A SAML configuration dictionary
"""
import saml2

public_baseurl = self.public_baseurl
if public_baseurl is None:
raise ConfigError("saml2_config requires a public_baseurl to be set")

required_attributes = {"uid", self.saml2_mxid_source_attribute}

optional_attributes = {"displayName"}
if self.saml2_grandfathered_mxid_source_attribute:
optional_attributes.add(self.saml2_grandfathered_mxid_source_attribute)
optional_attributes -= required_attributes

@@ -207,33 +268,58 @@ class SAML2Config(Config):
#
#config_path: "%(config_dir_path)s/sp_conf.py"

# the lifetime of a SAML session. This defines how long a user has to
# The lifetime of a SAML session. This defines how long a user has to
# complete the authentication process, if allow_unsolicited is unset.
# The default is 5 minutes.
#
#saml_session_lifetime: 5m

# The SAML attribute (after mapping via the attribute maps) to use to derive
# the Matrix ID from. 'uid' by default.
# An external module can be provided here as a custom solution to
# mapping attributes returned from a saml provider onto a matrix user.
#
#mxid_source_attribute: displayName
user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
#
#module: mapping_provider.SamlMappingProvider

# The mapping system to use for mapping the saml attribute onto a matrix ID.
# Options include:
# * 'hexencode' (which maps unpermitted characters to '=xx')
# * 'dotreplace' (which replaces unpermitted characters with '.').
# The default is 'hexencode'.
#
#mxid_mapping: dotreplace
# Custom configuration values for the module. Below options are
# intended for the built-in provider, they should be changed if
# using a custom module. This section will be passed as a Python
# dictionary to the module's `parse_config` method.
#
config:
# The SAML attribute (after mapping via the attribute maps) to use
# to derive the Matrix ID from. 'uid' by default.
#
# Note: This used to be configured by the
# saml2_config.mxid_source_attribute option. If that is still
# defined, its value will be used instead.
#
#mxid_source_attribute: displayName

# In previous versions of synapse, the mapping from SAML attribute to MXID was
# always calculated dynamically rather than stored in a table. For backwards-
# compatibility, we will look for user_ids matching such a pattern before
# creating a new account.
# The mapping system to use for mapping the saml attribute onto a
# matrix ID.
#
# Options include:
# * 'hexencode' (which maps unpermitted characters to '=xx')
# * 'dotreplace' (which replaces unpermitted characters with
# '.').
# The default is 'hexencode'.
#
# Note: This used to be configured by the
# saml2_config.mxid_mapping option. If that is still defined, its
# value will be used instead.
#
#mxid_mapping: dotreplace

# In previous versions of synapse, the mapping from SAML attribute to
# MXID was always calculated dynamically rather than stored in a
# table. For backwards- compatibility, we will look for user_ids
# matching such a pattern before creating a new account.
#
# This setting controls the SAML attribute which will be used for this
# backwards-compatibility lookup. Typically it should be 'uid', but if the
# attribute maps are changed, it may be necessary to change it.
# backwards-compatibility lookup. Typically it should be 'uid', but if
# the attribute maps are changed, it may be necessary to change it.
#
# The default is 'uid'.
#

@@ -241,23 +327,3 @@ class SAML2Config(Config):
""" % {
"config_dir_path": config_dir_path
}


DOT_REPLACE_PATTERN = re.compile(
("[^%s]" % (re.escape("".join(mxid_localpart_allowed_characters)),))
)


def dot_replace_for_mxid(username: str) -> str:
username = username.lower()
username = DOT_REPLACE_PATTERN.sub(".", username)

# regular mxids aren't allowed to start with an underscore either
username = re.sub("^_", "", username)
return username


MXID_MAPPER_MAP = {
"hexencode": map_username_to_mxid_localpart,
"dotreplace": dot_replace_for_mxid,
}

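For orientation, a rough, hypothetical skeleton of the external module that the new `user_mapping_provider.module` option can point at, inferred only from the checks above (load_module validates parse_config(), and get_saml_attributes / saml_response_to_user_attributes must exist). The exact signatures and return values are defined by synapse.handlers.saml_handler.DefaultSamlMappingProvider and should be copied from there rather than from this sketch.

    # mapping_provider.py - illustrative sketch only; signatures are assumptions.
    class SamlMappingProvider:
        def __init__(self, parsed_config):
            self._config = parsed_config

        @staticmethod
        def parse_config(config: dict):
            # Receives the user_mapping_provider.config section verbatim.
            return config

        @staticmethod
        def get_saml_attributes(config):
            # (required attributes, optional attributes) - unpacked with `*`
            # into _default_saml_config_dict() in the diff above.
            return {"uid"}, {"displayName"}

        def saml_response_to_user_attributes(self, saml_response, failures):
            # Assumed shape: derive a Matrix localpart from a SAML attribute.
            uid = saml_response.ava["uid"][0]
            return {"mxid_localpart": uid.lower(), "displayname": None}
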
@@ -102,6 +102,12 @@ class ServerConfig(Config):
"require_auth_for_profile_requests", False
)

# Whether to require sharing a room with a user to retrieve their
# profile data
self.limit_profile_requests_to_users_who_share_rooms = config.get(
"limit_profile_requests_to_users_who_share_rooms", False,
)

if "restrict_public_rooms_to_local_users" in config and (
"allow_public_rooms_without_auth" in config
or "allow_public_rooms_over_federation" in config

@@ -200,7 +206,7 @@ class ServerConfig(Config):
self.admin_contact = config.get("admin_contact", None)

# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist = None
self.federation_domain_whitelist = None # type: Optional[dict]
federation_domain_whitelist = config.get("federation_domain_whitelist", None)

if federation_domain_whitelist is not None:

@@ -621,6 +627,13 @@ class ServerConfig(Config):
#
#require_auth_for_profile_requests: true

# Uncomment to require a user to share a room with another user in order
# to retrieve their profile information. Only checked on Client-Server
# requests. Profile requests from other servers should be checked by the
# requesting server. Defaults to 'false'.
#
#limit_profile_requests_to_users_who_share_rooms: true

# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.

@@ -14,7 +14,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import collections.abc
import hashlib
import logging

@@ -40,8 +40,11 @@ def check_event_content_hash(event, hash_algorithm=hashlib.sha256):
# some malformed events lack a 'hashes'. Protect against it being missing
# or a weird type by basically treating it the same as an unhashed event.
hashes = event.get("hashes")
if not isinstance(hashes, dict):
raise SynapseError(400, "Malformed 'hashes'", Codes.UNAUTHORIZED)
# nb it might be a frozendict or a dict
if not isinstance(hashes, collections.abc.Mapping):
raise SynapseError(
400, "Malformed 'hashes': %s" % (type(hashes),), Codes.UNAUTHORIZED
)

if name not in hashes:
raise SynapseError(

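The point of widening the check, shown with a small illustration that is not part of the diff: an event's `hashes` may arrive as a read-only mapping rather than a plain dict (the frozendict mentioned in the new comment is likewise a Mapping), which the old isinstance test rejected.

    import collections.abc
    import types

    # A read-only mapping standing in for the frozendict case.
    hashes = types.MappingProxyType({"sha256": "abc123"})

    print(isinstance(hashes, dict))                     # False - old check would reject it
    print(isinstance(hashes, collections.abc.Mapping))  # True  - new check accepts it
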
@@ -511,17 +511,18 @@ class BaseV2KeyFetcher(object):
server_name = response_json["server_name"]
verified = False
for key_id in response_json["signatures"].get(server_name, {}):
# each of the keys used for the signature must be present in the response
# json.
key = verify_keys.get(key_id)
if not key:
raise KeyLookupError(
"Key response is signed by key id %s:%s but that key is not "
"present in the response" % (server_name, key_id)
)
# the key may not be present in verify_keys if:
# * we got the key from the notary server, and:
# * the key belongs to the notary server, and:
# * the notary server is using a different key to sign notary
# responses.
continue

verify_signed_json(response_json, server_name, key.verify_key)
verified = True
break

if not verified:
raise KeyLookupError(

@@ -43,6 +43,8 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
Returns:
if the auth checks pass.
"""
assert isinstance(auth_events, dict)

if do_size_check:
_check_size_limits(event)

@@ -87,12 +89,6 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
if not event.signatures.get(event_id_domain):
raise AuthError(403, "Event not signed by sending server")

if auth_events is None:
# Oh, we don't know what the state of the room was, so we
# are trusting that this is allowed (at least for now)
logger.warning("Trusting event: %s", event.event_id)
return

if event.type == EventTypes.Create:
sender_domain = get_domain_from_id(event.sender)
room_id_domain = get_domain_from_id(event.room_id)

@@ -149,7 +149,7 @@ class EventContext:
# the prev_state_ids, so if we're a state event we include the event
# id that we replaced in the state.
if event.is_state():
prev_state_ids = yield self.get_prev_state_ids(store)
prev_state_ids = yield self.get_prev_state_ids()
prev_state_id = prev_state_ids.get((event.type, event.state_key))
else:
prev_state_id = None

@@ -167,12 +167,13 @@ class EventContext:
}

@staticmethod
def deserialize(store, input):
def deserialize(storage, input):
"""Converts a dict that was produced by `serialize` back into a
EventContext.

Args:
store (DataStore): Used to convert AS ID to AS object
storage (Storage): Used to convert AS ID to AS object and fetch
state.
input (dict): A dict produced by `serialize`

Returns:

@@ -181,6 +182,7 @@ class EventContext:
context = _AsyncEventContextImpl(
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
storage=storage,
prev_state_id=input["prev_state_id"],
event_type=input["event_type"],
event_state_key=input["event_state_key"],

@@ -193,7 +195,7 @@ class EventContext:

app_service_id = input["app_service_id"]
if app_service_id:
context.app_service = store.get_app_service_by_id(app_service_id)
context.app_service = storage.main.get_app_service_by_id(app_service_id)

return context

@@ -216,7 +218,7 @@ class EventContext:
return self._state_group

@defer.inlineCallbacks
def get_current_state_ids(self, store):
def get_current_state_ids(self):
"""
Gets the room state map, including this event - ie, the state in ``state_group``

@@ -234,11 +236,11 @@ class EventContext:
if self.rejected:
raise RuntimeError("Attempt to access state_ids of rejected event")

yield self._ensure_fetched(store)
yield self._ensure_fetched()
return self._current_state_ids

@defer.inlineCallbacks
def get_prev_state_ids(self, store):
def get_prev_state_ids(self):
"""
Gets the room state map, excluding this event.

@@ -250,7 +252,7 @@ class EventContext:
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
yield self._ensure_fetched(store)
yield self._ensure_fetched()
return self._prev_state_ids

def get_cached_current_state_ids(self):

@@ -270,7 +272,7 @@ class EventContext:

return self._current_state_ids

def _ensure_fetched(self, store):
def _ensure_fetched(self):
return defer.succeed(None)


@@ -282,6 +284,8 @@ class _AsyncEventContextImpl(EventContext):

Attributes:

_storage (Storage)

_fetching_state_deferred (Deferred|None): Resolves when *_state_ids have
been calculated. None if we haven't started calculating yet

@@ -295,28 +299,30 @@ class _AsyncEventContextImpl(EventContext):
that was replaced.
"""

# This needs to have a default as we're inheriting
_storage = attr.ib(default=None)
_prev_state_id = attr.ib(default=None)
_event_type = attr.ib(default=None)
_event_state_key = attr.ib(default=None)
_fetching_state_deferred = attr.ib(default=None)

def _ensure_fetched(self, store):
def _ensure_fetched(self):
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store
)
self._fetching_state_deferred = run_in_background(self._fill_out_state)

return make_deferred_yieldable(self._fetching_state_deferred)

@defer.inlineCallbacks
def _fill_out_state(self, store):
def _fill_out_state(self):
"""Called to populate the _current_state_ids and _prev_state_ids
attributes by loading from the database.
"""
if self.state_group is None:
return

self._current_state_ids = yield store.get_state_ids_for_group(self.state_group)
self._current_state_ids = yield self._storage.state.get_state_ids_for_group(
self.state_group
)
if self._prev_state_id and self._event_state_key is not None:
self._prev_state_ids = dict(self._current_state_ids)

@@ -53,7 +53,7 @@ class ThirdPartyEventRules(object):
if self.third_party_rules is None:
return True

prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_ids = yield context.get_prev_state_ids()

# Retrieve the state events from the database.
state_events = {}

@@ -526,13 +526,7 @@ class FederationClient(FederationBase):

@defer.inlineCallbacks
def send_request(destination):
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_join(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
content = yield self._do_send_join(destination, pdu)

logger.debug("Got content: %s", content)

@@ -599,6 +593,44 @@ class FederationClient(FederationBase):

return self._try_destination_list("send_join", destinations, send_request)

@defer.inlineCallbacks
def _do_send_join(self, destination, pdu):
time_now = self._clock.time_msec()

try:
content = yield self.transport_layer.send_join_v2(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)

return content
except HttpResponseException as e:
if e.code in [400, 404]:
err = e.to_synapse_error()

# If we receive an error response that isn't a generic error, or an
# unrecognised endpoint error, we assume that the remote understands
# the v2 invite API and this is a legitimate error.
if err.errcode not in [Codes.UNKNOWN, Codes.UNRECOGNIZED]:
raise err
else:
raise e.to_synapse_error()

logger.debug("Couldn't send_join with the v2 API, falling back to the v1 API")

resp = yield self.transport_layer.send_join_v1(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)

# We expect the v1 API to respond with [200, content], so we only return the
# content.
return resp[1]

@defer.inlineCallbacks
def send_invite(self, destination, room_id, event_id, pdu):
room_version = yield self.store.get_room_version(room_id)

@@ -708,18 +740,50 @@ class FederationClient(FederationBase):

@defer.inlineCallbacks
def send_request(destination):
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_leave(
content = yield self._do_send_leave(destination, pdu)

logger.debug("Got content: %s", content)
return None

return self._try_destination_list("send_leave", destinations, send_request)

@defer.inlineCallbacks
def _do_send_leave(self, destination, pdu):
time_now = self._clock.time_msec()

try:
content = yield self.transport_layer.send_leave_v2(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)

logger.debug("Got content: %s", content)
return None
return content
except HttpResponseException as e:
if e.code in [400, 404]:
err = e.to_synapse_error()

return self._try_destination_list("send_leave", destinations, send_request)
# If we receive an error response that isn't a generic error, or an
# unrecognised endpoint error, we assume that the remote understands
# the v2 invite API and this is a legitimate error.
if err.errcode not in [Codes.UNKNOWN, Codes.UNRECOGNIZED]:
raise err
else:
raise e.to_synapse_error()

logger.debug("Couldn't send_leave with the v2 API, falling back to the v1 API")

resp = yield self.transport_layer.send_leave_v1(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)

# We expect the v1 API to respond with [200, content], so we only return the
# content.
return resp[1]

def get_public_rooms(
self,

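A compressed view of the fallback rule shared by _do_send_join and _do_send_leave above (a sketch; the predicate name is made up): only a 400/404 whose Matrix error code is M_UNKNOWN or M_UNRECOGNIZED is read as "the remote has no v2 endpoint", and everything else is surfaced as a real error.

    def should_fall_back_to_v1(http_code: int, matrix_errcode: str) -> bool:
        # Mirrors the checks in the diff: Codes.UNKNOWN == "M_UNKNOWN",
        # Codes.UNRECOGNIZED == "M_UNRECOGNIZED".
        return http_code in (400, 404) and matrix_errcode in ("M_UNKNOWN", "M_UNRECOGNIZED")

    print(should_fall_back_to_v1(404, "M_UNRECOGNIZED"))  # True: retry via the v1 path
    print(should_fall_back_to_v1(403, "M_FORBIDDEN"))     # False: raise to the caller
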
@@ -384,15 +384,10 @@ class FederationServer(FederationBase):

res_pdus = await self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
return (
200,
{
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
"auth_chain": [
p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]
],
},
)
return {
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
"auth_chain": [p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]],
}

async def on_make_leave_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)

@@ -419,7 +414,7 @@ class FederationServer(FederationBase):
pdu = await self._check_sigs_and_hash(room_version, pdu)

await self.handler.on_send_leave_request(origin, pdu)
return 200, {}
return {}

async def on_event_auth(self, origin, room_id, event_id):
with (await self._server_linearizer.queue((origin, room_id))):

@@ -243,7 +243,7 @@ class TransportLayerClient(object):

@defer.inlineCallbacks
@log_function
def send_join(self, destination, room_id, event_id, content):
def send_join_v1(self, destination, room_id, event_id, content):
path = _create_v1_path("/send_join/%s/%s", room_id, event_id)

response = yield self.client.put_json(

@@ -254,7 +254,18 @@ class TransportLayerClient(object):

@defer.inlineCallbacks
@log_function
def send_leave(self, destination, room_id, event_id, content):
def send_join_v2(self, destination, room_id, event_id, content):
path = _create_v2_path("/send_join/%s/%s", room_id, event_id)

response = yield self.client.put_json(
destination=destination, path=path, data=content
)

return response

@defer.inlineCallbacks
@log_function
def send_leave_v1(self, destination, room_id, event_id, content):
path = _create_v1_path("/send_leave/%s/%s", room_id, event_id)

response = yield self.client.put_json(

@@ -270,6 +281,24 @@ class TransportLayerClient(object):

return response

@defer.inlineCallbacks
@log_function
def send_leave_v2(self, destination, room_id, event_id, content):
path = _create_v2_path("/send_leave/%s/%s", room_id, event_id)

response = yield self.client.put_json(
destination=destination,
path=path,
data=content,
# we want to do our best to send this through. The problem is
# that if it fails, we won't retry it later, so if the remote
# server was just having a momentary blip, the room will be out of
# sync.
ignore_backoff=True,
)

return response

@defer.inlineCallbacks
@log_function
def send_invite_v1(self, destination, room_id, event_id, content):

@@ -506,9 +506,19 @@ class FederationMakeLeaveServlet(BaseFederationServlet):
return 200, content


class FederationSendLeaveServlet(BaseFederationServlet):
class FederationV1SendLeaveServlet(BaseFederationServlet):
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"

async def on_PUT(self, origin, content, query, room_id, event_id):
content = await self.handler.on_send_leave_request(origin, content, room_id)
return 200, (200, content)


class FederationV2SendLeaveServlet(BaseFederationServlet):
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"

PREFIX = FEDERATION_V2_PREFIX

async def on_PUT(self, origin, content, query, room_id, event_id):
content = await self.handler.on_send_leave_request(origin, content, room_id)
return 200, content

@@ -521,9 +531,21 @@ class FederationEventAuthServlet(BaseFederationServlet):
return await self.handler.on_event_auth(origin, context, event_id)


class FederationSendJoinServlet(BaseFederationServlet):
class FederationV1SendJoinServlet(BaseFederationServlet):
PATH = "/send_join/(?P<context>[^/]*)/(?P<event_id>[^/]*)"

async def on_PUT(self, origin, content, query, context, event_id):
# TODO(paul): assert that context/event_id parsed from path actually
# match those given in content
content = await self.handler.on_send_join_request(origin, content, context)
return 200, (200, content)


class FederationV2SendJoinServlet(BaseFederationServlet):
PATH = "/send_join/(?P<context>[^/]*)/(?P<event_id>[^/]*)"

PREFIX = FEDERATION_V2_PREFIX

async def on_PUT(self, origin, content, query, context, event_id):
# TODO(paul): assert that context/event_id parsed from path actually
# match those given in content

@@ -1367,8 +1389,10 @@ FEDERATION_SERVLET_CLASSES = (
FederationMakeJoinServlet,
FederationMakeLeaveServlet,
FederationEventServlet,
FederationSendJoinServlet,
FederationSendLeaveServlet,
FederationV1SendJoinServlet,
FederationV2SendJoinServlet,
FederationV1SendLeaveServlet,
FederationV2SendLeaveServlet,
FederationV1InviteServlet,
FederationV2InviteServlet,
FederationQueryAuthServlet,

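The externally visible difference between the two servlet generations (per the code above and MSC1802): the v1 endpoints keep wrapping the response body in a legacy [200, body] pair, while v2 returns the body directly, which is why the client-side fallback unwraps v1 with resp[1]. A toy illustration with a hypothetical body:

    v1_response = [200, {"state": [], "auth_chain": []}]  # shape of a /v1/send_join or send_leave reply
    v2_response = {"state": [], "auth_chain": []}         # shape of the /v2 reply

    assert v1_response[1] == v2_response                  # matches `return resp[1]` in the client diff
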
Some files were not shown because too many files have changed in this diff.