Merge branch 'develop' of github.com:matrix-org/synapse into neilj/reserved_users

pull/3662/head
Neil Johnson 2018-08-08 13:45:21 +01:00
commit be59910b93
16 changed files with 225 additions and 156 deletions

MANIFEST.in

@@ -35,3 +35,4 @@ recursive-include changelog.d *
prune .github
prune demo/etc
+prune docker

README.rst

@@ -157,12 +157,19 @@ if you prefer.

In case of problems, please see the _`Troubleshooting` section below.

-There is an offical synapse image available at https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with the docker-compose file available at `contrib/docker`. Further information on this including configuration options is available in `contrib/docker/README.md`.
+There is an official synapse image available at
+https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with
+the docker-compose file available at `contrib/docker <contrib/docker>`_. Further information on
+this including configuration options is available in the README on
+hub.docker.com.

-Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate a synapse server in a single Docker image, at https://hub.docker.com/r/avhost/docker-matrix/tags/
+Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
+Dockerfile to automate a synapse server in a single Docker image, at
+https://hub.docker.com/r/avhost/docker-matrix/tags/

Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
-tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
+tested with VirtualBox/AWS/DigitalOcean - see
+https://github.com/EMnify/matrix-synapse-auto-deploy
for details.

Configuring synapse

changelog.d/3585.bugfix (new file)

@@ -0,0 +1 @@
Respond with M_NOT_FOUND when profiles are not found locally or over federation. Fixes #3585

changelog.d/3644.misc (new file)

@@ -0,0 +1 @@
Refactor location of docker build script.

changelog.d/3658.bugfix (new file)

@@ -0,0 +1 @@
Fix occasional glitches in the synapse_event_persisted_position metric

contrib/docker/README.md

@@ -1,23 +1,5 @@
# Synapse Docker
-The `matrixdotorg/synapse` Docker image will run Synapse as a single process. It does not provide a
-database server or a TURN server, you should run these separately.
-If you run a Postgres server, you should simply include it in the same Compose
-project or set the proper environment variables and the image will automatically
-use that server.
-## Build
-Build the docker image with the `docker-compose build` command.
-You may have a local Python wheel cache available, in which case copy the relevant packages in the ``cache/`` directory at the root of the project.
-## Run
-This image is designed to run either with an automatically generated configuration
-file or with a custom configuration that requires manual edition.
### Automated configuration
It is recommended that you use Docker Compose to run your containers, including
@@ -54,94 +36,6 @@ Then, customize your configuration and run the server:
docker-compose up -d
```
-### Without Compose
-If you do not wish to use Compose, you may still run this image using plain
-Docker commands. Note that the following is just a guideline and you may need
-to add parameters to the docker run command to account for the network situation
-with your postgres database.
-```
-docker run \
-    -d \
-    --name synapse \
-    -v ${DATA_PATH}:/data \
-    -e SYNAPSE_SERVER_NAME=my.matrix.host \
-    -e SYNAPSE_REPORT_STATS=yes \
-    docker.io/matrixdotorg/synapse:latest
-```
-## Volumes
-The image expects a single volume, located at ``/data``, that will hold:
-* temporary files during uploads;
-* uploaded media and thumbnails;
-* the SQLite database if you do not configure postgres;
-* the appservices configuration.
-You are free to use separate volumes depending on storage endpoints at your
-disposal. For instance, ``/data/media`` coud be stored on a large but low
-performance hdd storage while other files could be stored on high performance
-endpoints.
-In order to setup an application service, simply create an ``appservices``
-directory in the data volume and write the application service Yaml
-configuration file there. Multiple application services are supported.
-## Environment
-Unless you specify a custom path for the configuration file, a very generic
-file will be generated, based on the following environment settings.
-These are a good starting point for setting up your own deployment.
-Global settings:
-* ``UID``, the user id Synapse will run as [default 991]
-* ``GID``, the group id Synapse will run as [default 991]
-* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
-If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
-then customize it manually. No other environment variable is required.
-Otherwise, a dynamic configuration file will be used. The following environment
-variables are available for configuration:
-* ``SYNAPSE_SERVER_NAME`` (mandatory), the current server public hostname.
-* ``SYNAPSE_REPORT_STATS``, (mandatory, ``yes`` or ``no``), enable anonymous
-statistics reporting back to the Matrix project which helps us to get funding.
-* ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
-you run your own TLS-capable reverse proxy).
-* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
-the Synapse instance.
-* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guest joining this server.
-* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
-* ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
-* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
-key in order to enable recaptcha upon registration.
-* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
-key in order to enable recaptcha upon registration.
-* ``SYNAPSE_TURN_URIS``, set this variable to the coma-separated list of TURN
-uris to enable TURN for this homeserver.
-* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
-Shared secrets, that will be initialized to random values if not set:
-* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registrering users if
-registration is disable.
-* ``SYNAPSE_MACAROON_SECRET_KEY`` secret for signing access tokens
-to the server.
-Database specific values (will use SQLite if not set):
-* `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
-* `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
-* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
-* `POSTGRES_USER` - The user for the synapse postgres database. [default: `matrix`]
-Mail server specific values (will not send emails if not set):
-* ``SYNAPSE_SMTP_HOST``, hostname to the mail server.
-* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
-* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if any.
-* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server if any.
+### More information
+For more information on required environment variables and mounts, see the main docker documentation at [/docker/README.md](../../docker/README.md)

contrib/grafana/synapse.json

@@ -54,7 +54,7 @@
"gnetId": null,
"graphTooltip": 0,
"id": null,
-"iteration": 1533026624326,
+"iteration": 1533598785368,
"links": [
{
"asDropdown": true,
@@ -4629,7 +4629,7 @@
"h": 9,
"w": 12,
"x": 0,
-"y": 11
+"y": 29
},
"id": 67,
"legend": {
@@ -4655,11 +4655,11 @@
"steppedLine": false,
"targets": [
{
-"expr": " synapse_event_persisted_position{instance=\"$instance\"} - ignoring(index, job, name) group_right(instance) synapse_event_processing_positions{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
+"expr": " synapse_event_persisted_position{instance=\"$instance\",job=\"synapse\"} - ignoring(index, job, name) group_right() synapse_event_processing_positions{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
-"legendFormat": "{{job}}-{{index}}",
+"legendFormat": "{{job}}-{{index}} ",
"refId": "A"
}
],
@@ -4697,7 +4697,11 @@
"min": null,
"show": true
}
-]
+],
+"yaxis": {
+"align": false,
+"alignLevel": null
+}
},
{
"aliasColors": {},
@@ -4710,7 +4714,7 @@
"h": 9,
"w": 12,
"x": 12,
-"y": 11
+"y": 29
},
"id": 71,
"legend": {
@@ -4778,7 +4782,11 @@
"min": null,
"show": true
}
-]
+],
+"yaxis": {
+"align": false,
+"alignLevel": null
+}
}
],
"title": "Event processing loop positions",
@@ -4957,5 +4965,5 @@
"timezone": "",
"title": "Synapse",
"uid": "000000012",
-"version": 125
+"version": 127
}

docker/Dockerfile

@@ -22,7 +22,7 @@ RUN cd /synapse \
        setuptools \
    && mkdir -p /synapse/cache \
    && pip install -f /synapse/cache --upgrade --process-dependency-links . \
-   && mv /synapse/contrib/docker/start.py /synapse/contrib/docker/conf / \
+   && mv /synapse/docker/start.py /synapse/docker/conf / \
    && rm -rf \
        setup.cfg \
        setup.py \

docker/README.md (new file)

@@ -0,0 +1,124 @@
# Synapse Docker
This Docker image will run Synapse as a single process. It does not provide a database
server or a TURN server; you should run these separately.
## Run
We do not currently offer a `latest` image, as this has somewhat undefined semantics.
We instead release only tagged versions so upgrading between releases is entirely
within your control.
### Using docker-compose (easier)
This image is designed to run either with an automatically generated configuration
file or with a custom configuration that requires manual editing.
An easy way to make use of this image is via docker-compose. See the
[contrib/docker](../contrib/docker)
section of the synapse project for examples.
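For instance, a minimal workflow might look like the following sketch. It assumes you have copied `contrib/docker/docker-compose.yml` out of the synapse repository and filled in its environment section (server name, stats reporting, postgres password) for your own deployment; the service name `synapse` is whatever the compose file defines, not something fixed by the image.

```
# Fetch the example compose file from the synapse repository.
curl -O https://raw.githubusercontent.com/matrix-org/synapse/master/contrib/docker/docker-compose.yml

# Edit the environment variables in docker-compose.yml, then start the stack.
docker-compose up -d

# Tail the homeserver logs to confirm a clean start (service name per the compose file).
docker-compose logs -f synapse
```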
### Without Compose (harder)
If you do not wish to use Compose, you may still run this image using plain
Docker commands. Note that the following is just a guideline and you may need
to add parameters to the docker run command to account for the network situation
with your postgres database.
```
docker run \
    -d \
    --name synapse \
    -v ${DATA_PATH}:/data \
    -e SYNAPSE_SERVER_NAME=my.matrix.host \
    -e SYNAPSE_REPORT_STATS=yes \
    docker.io/matrixdotorg/synapse:latest
```
## Volumes
The image expects a single volume, located at ``/data``, that will hold:
* temporary files during uploads;
* uploaded media and thumbnails;
* the SQLite database if you do not configure postgres;
* the appservices configuration.
You are free to use separate volumes depending on storage endpoints at your
disposal. For instance, ``/data/media`` could be stored on a large but
low-performance HDD while other files could be stored on high-performance
endpoints.
To set up an application service, create an ``appservices``
directory in the data volume and write the application service YAML
configuration file there. Multiple application services are supported.
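As an illustration, the volume might be prepared like this before the first run. This is only a sketch: `/srv/synapse-data` and `my-bridge-registration.yaml` are placeholder names, not paths the image requires.

```
# Create the data directory plus an appservices subdirectory for registration files.
mkdir -p /srv/synapse-data/appservices
cp my-bridge-registration.yaml /srv/synapse-data/appservices/

# Mount the directory at /data when starting the container.
docker run -d --name synapse \
    -v /srv/synapse-data:/data \
    -e SYNAPSE_SERVER_NAME=my.matrix.host \
    -e SYNAPSE_REPORT_STATS=yes \
    docker.io/matrixdotorg/synapse:latest
```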
## Environment
Unless you specify a custom path for the configuration file, a very generic
file will be generated, based on the following environment settings.
These are a good starting point for setting up your own deployment.
Global settings:
* ``UID``, the user id Synapse will run as [default 991]
* ``GID``, the group id Synapse will run as [default 991]
* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
then customize it manually. No other environment variable is required.
Otherwise, a dynamic configuration file will be used. The following environment
variables are available for configuration:
* ``SYNAPSE_SERVER_NAME`` (mandatory), the public hostname of this server.
* ``SYNAPSE_REPORT_STATS`` (mandatory, ``yes`` or ``no``), enable anonymous
statistics reporting back to the Matrix project, which helps us to get funding.
* ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
you run your own TLS-capable reverse proxy).
* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
the Synapse instance.
* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guests to join this server.
* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
* ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
key in order to enable recaptcha upon registration.
* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
key in order to enable recaptcha upon registration.
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
URIs to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
Shared secrets, which will be initialized to random values if not set:
* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
registration is disabled.
* ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
to the server.
Database specific values (will use SQLite if not set):
* `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
* `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
* `POSTGRES_USER` - The user for the synapse postgres database. [default: `matrix`]
Mail server specific values (will not send emails if not set):
* ``SYNAPSE_SMTP_HOST``, hostname of the mail server.
* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if any.
* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server if any.
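Putting several of the variables above together, a run against an external Postgres server might look roughly like this. This is a sketch only: the hostname, password and data path are placeholders to substitute, and setting `POSTGRES_PASSWORD` is what switches the container from SQLite to Postgres, as noted above.

```
docker run -d --name synapse \
    -v /srv/synapse-data:/data \
    -e SYNAPSE_SERVER_NAME=my.matrix.host \
    -e SYNAPSE_REPORT_STATS=yes \
    -e SYNAPSE_ENABLE_REGISTRATION=yes \
    -e POSTGRES_HOST=postgres.internal \
    -e POSTGRES_DB=synapse \
    -e POSTGRES_USER=matrix \
    -e POSTGRES_PASSWORD=change-me \
    docker.io/matrixdotorg/synapse:latest
```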
## Build
Build the docker image with the `docker build` command from the root of the synapse repository.
```
docker build -t docker.io/matrixdotorg/synapse . -f docker/Dockerfile
```
The `-t` option sets the image tag. Official images are tagged `matrixdotorg/synapse:<version>` where `<version>` is the same as the release tag in the synapse git repository.
You may have a local Python wheel cache available, in which case copy the relevant
packages into the ``cache/`` directory at the root of the project.
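One way to seed that cache, if you want to (an assumption about your local workflow, not something the image requires), is to pre-build wheels into `cache/` before building the image:

```
# Build wheels for synapse and its dependencies into the local cache directory;
# the Dockerfile's `pip install -f /synapse/cache` will then pick them up.
pip wheel --wheel-dir cache/ .
docker build -t docker.io/matrixdotorg/synapse . -f docker/Dockerfile
```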

synapse/handlers/profile.py

@@ -17,7 +17,13 @@ import logging

from twisted.internet import defer

-from synapse.api.errors import AuthError, CodeMessageException, SynapseError
+from synapse.api.errors import (
+    AuthError,
+    CodeMessageException,
+    Codes,
+    StoreError,
+    SynapseError,
+)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import UserID, get_domain_from_id
@@ -49,12 +55,17 @@ class ProfileHandler(BaseHandler):
    def get_profile(self, user_id):
        target_user = UserID.from_string(user_id)
        if self.hs.is_mine(target_user):
-            displayname = yield self.store.get_profile_displayname(
-                target_user.localpart
-            )
-            avatar_url = yield self.store.get_profile_avatar_url(
-                target_user.localpart
-            )
+            try:
+                displayname = yield self.store.get_profile_displayname(
+                    target_user.localpart
+                )
+                avatar_url = yield self.store.get_profile_avatar_url(
+                    target_user.localpart
+                )
+            except StoreError as e:
+                if e.code == 404:
+                    raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+                raise

            defer.returnValue({
                "displayname": displayname,
@@ -74,7 +85,6 @@
            except CodeMessageException as e:
                if e.code != 404:
                    logger.exception("Failed to get displayname")
-
                raise

    @defer.inlineCallbacks
@@ -85,12 +95,17 @@
        """
        target_user = UserID.from_string(user_id)
        if self.hs.is_mine(target_user):
-            displayname = yield self.store.get_profile_displayname(
-                target_user.localpart
-            )
-            avatar_url = yield self.store.get_profile_avatar_url(
-                target_user.localpart
-            )
+            try:
+                displayname = yield self.store.get_profile_displayname(
+                    target_user.localpart
+                )
+                avatar_url = yield self.store.get_profile_avatar_url(
+                    target_user.localpart
+                )
+            except StoreError as e:
+                if e.code == 404:
+                    raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+                raise

            defer.returnValue({
                "displayname": displayname,
@@ -103,9 +118,14 @@
    @defer.inlineCallbacks
    def get_displayname(self, target_user):
        if self.hs.is_mine(target_user):
-            displayname = yield self.store.get_profile_displayname(
-                target_user.localpart
-            )
+            try:
+                displayname = yield self.store.get_profile_displayname(
+                    target_user.localpart
+                )
+            except StoreError as e:
+                if e.code == 404:
+                    raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+                raise

            defer.returnValue(displayname)
        else:
@@ -122,7 +142,6 @@
            except CodeMessageException as e:
                if e.code != 404:
                    logger.exception("Failed to get displayname")
-
                raise
            except Exception:
                logger.exception("Failed to get displayname")
@@ -157,10 +176,14 @@
    @defer.inlineCallbacks
    def get_avatar_url(self, target_user):
        if self.hs.is_mine(target_user):
-            avatar_url = yield self.store.get_profile_avatar_url(
-                target_user.localpart
-            )
+            try:
+                avatar_url = yield self.store.get_profile_avatar_url(
+                    target_user.localpart
+                )
+            except StoreError as e:
+                if e.code == 404:
+                    raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+                raise

            defer.returnValue(avatar_url)
        else:
            try:
@@ -213,16 +236,20 @@
        just_field = args.get("field", None)

        response = {}
-        if just_field is None or just_field == "displayname":
-            response["displayname"] = yield self.store.get_profile_displayname(
-                user.localpart
-            )
-        if just_field is None or just_field == "avatar_url":
-            response["avatar_url"] = yield self.store.get_profile_avatar_url(
-                user.localpart
-            )
+        try:
+            if just_field is None or just_field == "displayname":
+                response["displayname"] = yield self.store.get_profile_displayname(
+                    user.localpart
+                )
+
+            if just_field is None or just_field == "avatar_url":
+                response["avatar_url"] = yield self.store.get_profile_avatar_url(
+                    user.localpart
+                )
+        except StoreError as e:
+            if e.code == 404:
+                raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
+            raise

        defer.returnValue(response)
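The practical effect of the profile changes above is that looking up a user with no local profile (or one that cannot be found over federation) now yields a structured Matrix error rather than an internal server error. A quick check against a running homeserver might look like this; `matrix.example.org` is a placeholder, and the `error` string simply mirrors the code above:

```
curl -s https://matrix.example.org/_matrix/client/r0/profile/@no-such-user:matrix.example.org
# Expected response once this change is deployed:
# {"errcode": "M_NOT_FOUND", "error": "Profile was not found"}
```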

synapse/storage/events.py

@@ -485,9 +485,14 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
                new_forward_extremeties=new_forward_extremeties,
            )
            persist_event_counter.inc(len(chunk))
-            synapse.metrics.event_persisted_position.set(
-                chunk[-1][0].internal_metadata.stream_ordering,
-            )
+
+            if not backfilled:
+                # backfilled events have negative stream orderings, so we don't
+                # want to set the event_persisted_position to that.
+                synapse.metrics.event_persisted_position.set(
+                    chunk[-1][0].internal_metadata.stream_ordering,
+                )

            for event, context in chunk:
                if context.app_service:
                    origin_type = "local"

synapse/storage/monthly_active_users.py

@@ -87,7 +87,7 @@ class MonthlyActiveUsersStore(SQLBaseStore):
            # If MAU user count still exceeds the MAU threshold, then delete on
            # a least recently active basis.
            # Note it is not possible to write this query using OFFSET due to
-            # incompatibilities in how sqlite an postgres support the feature.
+            # incompatibilities in how sqlite and postgres support the feature.
            # sqlite requires 'LIMIT -1 OFFSET ?', the LIMIT must be present
            # While Postgres does not require 'LIMIT', but also does not support
            # negative LIMIT values. So there is no way to write it that both can

synapse/storage/schema/delta/51/monthly_active_users.sql

@@ -18,7 +18,7 @@ CREATE TABLE monthly_active_users (
    user_id TEXT NOT NULL,
    -- Last time we saw the user. Not guaranteed to be accurate due to rate limiting
    -- on updates, Granularity of updates governed by
-    -- syanpse.storage.monthly_active_users.LAST_SEEN_GRANULARITY
+    -- synapse.storage.monthly_active_users.LAST_SEEN_GRANULARITY
    -- Measured in ms since epoch.
    timestamp BIGINT NOT NULL
);