Merge remote-tracking branch 'origin/develop' into rav/warn_on_logcontext_fail

pull/3007/head
Richard van der Hoff 2018-05-03 14:59:29 +01:00
commit 093d8c415a
173 changed files with 3282 additions and 1451 deletions


@@ -1,14 +1,22 @@
 sudo: false
 language: python
-python: 2.7

 # tell travis to cache ~/.cache/pip
 cache: pip

-env:
- - TOX_ENV=packaging
- - TOX_ENV=pep8
- - TOX_ENV=py27
+matrix:
+  include:
+  - python: 2.7
+    env: TOX_ENV=packaging
+  - python: 2.7
+    env: TOX_ENV=pep8
+  - python: 2.7
+    env: TOX_ENV=py27
+  - python: 3.6
+    env: TOX_ENV=py36

 install:
 - pip install tox


@@ -1,11 +1,245 @@
-Unreleased
-==========
-synctl no longer starts the main synapse when using ``-a`` option with workers.
-A new worker file should be added with ``worker_app: synapse.app.homeserver``.

Changes in synapse <unreleased>
===============================

Potentially breaking change:

* Make Client-Server API return 401 for invalid token (PR #3161).
  This changes the Client-Server spec to return a 401 error code instead of 403
  when the access token is unrecognised. This is the behaviour required by the
  specification, but some clients may be relying on the old, incorrect
  behaviour.

  Thanks to @NotAFile for fixing this.
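For example (illustrative only: the hostname and token below are placeholders, not part of this release), a client author can verify the new behaviour like this:

.. code:: python

    import requests

    # Probe an authenticated endpoint with a bogus access token.
    resp = requests.get(
        "https://matrix.example.com/_matrix/client/r0/account/whoami",
        headers={"Authorization": "Bearer not-a-real-token"},
    )

    # Synapse now returns 401 (as the spec requires) rather than 403
    # for an unrecognised token.
    assert resp.status_code == 401
    assert resp.json().get("errcode") == "M_UNKNOWN_TOKEN"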
Changes in synapse v0.28.1 (2018-05-01)
=======================================

SECURITY UPDATE

* Clamp the allowed values of event depth received over federation to be
  [0, 2^63 - 1]. This mitigates an attack where malicious events injected with
  depth = 2^63 - 1 render rooms unusable. Depth is used to determine the
  cosmetic ordering of events within a room, and so the ordering of events in
  such a room will default to using stream_ordering rather than depth
  (topological_ordering). (See the sketch after this list.)

  This is a temporary solution to mitigate abuse in the wild, whilst a long
  term solution is being implemented to improve how the depth parameter is
  used.

  Full details at
  https://docs.google.com/document/d/1I3fi2S-XnpO45qrpCsowZv8P8dHcNZ4fsBsbOW7KABI

* Pin Twisted to <18.4 until we stop using the private _OpenSSLECCurve API.
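Conceptually, the clamp amounts to the following (a minimal sketch, not the actual federation code path; ``MAX_DEPTH`` is the constant this release adds to ``synapse.api.constants``):

.. code:: python

    from synapse.api.constants import MAX_DEPTH  # 2**63 - 1

    def clamp_depth(depth):
        # Clamp the depth claimed by a remote event into [0, MAX_DEPTH],
        # so a malicious value cannot exhaust the ordering space.
        return min(max(int(depth), 0), MAX_DEPTH)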
Changes in synapse v0.28.0 (2018-04-26)
=======================================

Bug Fixes:

* Fix quarantine media admin API and search reindex (PR #3130)
* Fix media admin APIs (PR #3134)

Changes in synapse v0.28.0-rc1 (2018-04-24)
===========================================

Minor performance improvement to federation sending and bug fixes.

(Note: this release does not include the delta state resolution implementation
discussed in Matrix Live.)

Features:

* Add metrics for event processing lag (PR #3090)
* Add metrics for ResponseCache (PR #3092)

Changes:

* Synapse on PyPy (PR #2760) Thanks to @Valodim!
* Move handling of auto_join_rooms to RegisterHandler (PR #2996) Thanks to @krombel!
* Improve handling of SRV records for federation connections (PR #3016) Thanks to @silkeh!
* Document the behaviour of ResponseCache (PR #3059)
* Preparation for py3 (PR #3061, #3073, #3074, #3075, #3103, #3104, #3106, #3107, #3109, #3110) Thanks to @NotAFile!
* Update prometheus dashboard to use new metric names (PR #3069) Thanks to @krombel!
* Use python3-compatible prints (PR #3074) Thanks to @NotAFile!
* Send federation events concurrently (PR #3078)
* Limit concurrent event sends for a room (PR #3079)
* Improve R30 stat definition (PR #3086)
* Send events to ASes concurrently (PR #3088)
* Refactor ResponseCache usage (PR #3093)
* Clarify that SRV may not point to a CNAME (PR #3100) Thanks to @silkeh!
* Use str(e) instead of e.message (PR #3103) Thanks to @NotAFile!
* Use six.itervalues in some places (PR #3106) Thanks to @NotAFile!
* Refactor store.have_events (PR #3117)

Bug Fixes:

* Return 401 for invalid access_token on logout (PR #2938) Thanks to @dklug!
* Return a 404 rather than a 500 on rejoining empty rooms (PR #3080)
* Fix federation_domain_whitelist (PR #3099)
* Avoid creating events with huge numbers of prev_events (PR #3113)
* Reject events which have lots of prev_events (PR #3118)

Changes in synapse v0.27.4 (2018-04-13)
=======================================

Changes:

* Update canonicaljson dependency (#3095)

Changes in synapse v0.27.3 (2018-04-11)
=======================================

Bug fixes:

* URL quote path segments over federation (#3082)

Changes in synapse v0.27.3-rc2 (2018-04-09)
===========================================

v0.27.3-rc1 used a stale version of the develop branch, so its changelog
overstates the functionality. v0.27.3-rc2 is up to date; rc1 should be ignored.

Changes in synapse v0.27.3-rc1 (2018-04-09)
===========================================
Notable changes include API support for joinability of groups, along with new
metrics and phone-home stats. The phone-home stats give better visibility of
system usage, so we can tune synapse to work better for all users rather than
relying on our own experience with matrix.org alone. We are also recording the
'r30' stat, which is the measure we use to track overall growth of the Matrix
ecosystem. It is defined as follows (a rough restatement in code appears after
this list):

Counts the number of native 30 day retained users, defined as:

* Users who have created their accounts more than 30 days ago
* Where last seen at most 30 days ago
* Where account creation and last_seen are > 30 days apart
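Restated as a predicate (illustrative only; the real implementation is a database query behind ``count_r30_users``, per PR #3041):

.. code:: python

    from datetime import datetime, timedelta

    THIRTY_DAYS = timedelta(days=30)

    def is_r30_user(created_at, last_seen, now=None):
        """Rough restatement of the r30 definition above (not the real query)."""
        now = now or datetime.utcnow()
        return (
            now - created_at > THIRTY_DAYS            # account older than 30 days
            and now - last_seen <= THIRTY_DAYS        # active within the last 30 days
            and last_seen - created_at > THIRTY_DAYS  # retained past the first 30 days
        )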
Features:

* Add joinability for groups (PR #3045)
* Implement group join API (PR #3046)
* Add counter metrics for calculating state delta (PR #3033)
* R30 stats (PR #3041)
* Measure time it takes to calculate state group ID (PR #3043)
* Add basic performance statistics to phone home (PR #3044)
* Add response size metrics (PR #3071)
* Phone home cache size configurations (PR #3063)

Changes:

* Add a blurb explaining the main synapse worker (PR #2886) Thanks to @turt2live!
* Replace old style error catching with 'as' keyword (PR #3000) Thanks to @NotAFile!
* Use .iter* to avoid copies in StateHandler (PR #3006)
* Linearize calls to _generate_user_id (PR #3029)
* Remove last usage of ujson (PR #3030)
* Use simplejson throughout (PR #3048)
* Use static JSONEncoders (PR #3049)
* Remove uses of events.content (PR #3060)
* Improve database cache performance (PR #3068)

Bug fixes:

* Add room_id to the response of `rooms/{roomId}/join` (PR #2986) Thanks to @jplatte!
* Fix replication after switch to simplejson (PR #3015)
* 404 correctly on missing paths via NoResource (PR #3022)
* Fix error when claiming e2e keys from offline servers (PR #3034)
* Fix tests/storage/test_user_directory.py (PR #3042)
* Use PUT instead of POST for federating groups/m.join_policy (PR #3070) Thanks to @krombel!
* Postgres port script: fix state_groups_pkey error (PR #3072)

Changes in synapse v0.27.2 (2018-03-26)
=======================================

Bug fixes:

* Fix bug which broke TCP replication between workers (PR #3015)

Changes in synapse v0.27.1 (2018-03-26)
=======================================

Meta release, as v0.27.0 temporarily pointed to the wrong commit.

Changes in synapse v0.27.0 (2018-03-26)
=======================================

No changes since v0.27.0-rc2.

Changes in synapse v0.27.0-rc2 (2018-03-19)
===========================================

Pulls in v0.26.1.

Bug fixes:

* Fix bug introduced in v0.27.0-rc1 that causes much increased memory usage in state cache (PR #3005)

Changes in synapse v0.26.1 (2018-03-15)
=======================================

Bug fixes:

* Fix bug where an invalid event caused the server to stop functioning correctly,
  due to parsing and serializing bugs in the ujson library (PR #3008)

Changes in synapse v0.27.0-rc1 (2018-03-14)
===========================================

The common case for running Synapse is not to run separate workers, but for
those that do, be aware that synctl no longer starts the main synapse when
using the ``-a`` option with workers. A new worker file should be added with
``worker_app: synapse.app.homeserver``.

This release also begins the process of renaming a number of the metrics
reported to prometheus. See `docs/metrics-howto.rst
<docs/metrics-howto.rst#block-and-response-metrics-renamed-for-0-27-0>`_.
Note that the v0.28.0 release will remove the deprecated metric names.

Features:

* Add ability for ASes to override message send time (PR #2754)
* Add support for custom storage providers for media repository (PR #2867, #2777, #2783, #2789, #2791, #2804, #2812, #2814, #2857, #2868, #2767)
* Add purge API features, see `docs/admin_api/purge_history_api.rst <docs/admin_api/purge_history_api.rst>`_ for full details (PR #2858, #2867, #2882, #2946, #2962, #2943)
* Add support for whitelisting 3PIDs that users can register (PR #2813)
* Add ``/room/{id}/event/{id}`` API (PR #2766)
* Add an admin API to get all the media in a room (PR #2818) Thanks to @turt2live!
* Add ``federation_domain_whitelist`` option (PR #2820, #2821)

Changes:

* Continue to factor out processing from main process and into worker processes. See updated `docs/workers.rst <docs/workers.rst>`_ (PR #2892 - #2904, #2913, #2920 - #2926, #2947, #2847, #2854, #2872, #2873, #2874, #2928, #2929, #2934, #2856, #2976 - #2984, #2987 - #2989, #2991 - #2993, #2995, #2784)
* Ensure state cache is used when persisting events (PR #2864, #2871, #2802, #2835, #2836, #2841, #2842, #2849)
* Change the default config to bind on both IPv4 and IPv6 on all platforms (PR #2435) Thanks to @silkeh!
* No longer require a specific version of saml2 (PR #2695) Thanks to @okurz!
* Remove ``verbosity``/``log_file`` from generated config (PR #2755)
* Add and improve metrics and logging (PR #2770, #2778, #2785, #2786, #2787, #2793, #2794, #2795, #2809, #2810, #2833, #2834, #2844, #2965, #2927, #2975, #2790, #2796, #2838)
* When using synctl with workers, don't start the main synapse automatically (PR #2774)
* Minor performance improvements (PR #2773, #2792)
* Use a connection pool for non-federation outbound connections (PR #2817)
* Make it possible to run unit tests against postgres (PR #2829)
* Update pynacl dependency to 1.2.1 or higher (PR #2888) Thanks to @bachp!
* Remove ability for AS users to call /events and /sync (PR #2948)
* Use bcrypt.checkpw (PR #2949) Thanks to @krombel!

Bug fixes:

* Fix broken ``ldap_config`` config option (PR #2683) Thanks to @seckrv!
* Fix error message when user is not allowed to unban (PR #2761) Thanks to @turt2live!
* Fix publicised groups GET API (singular) over federation (PR #2772)
* Fix user directory when using ``user_directory_search_all_users`` config option (PR #2803, #2831)
* Fix error on ``/publicRooms`` when no rooms exist (PR #2827)
* Fix bug in quarantine_media (PR #2837)
* Fix url_previews when no Content-Type is returned from URL (PR #2845)
* Fix rare race in sync API when joining room (PR #2944)
* Fix slow event search, switch back from GIST to GIN indexes (PR #2769, #2848)

Changes in synapse v0.26.0 (2018-01-05)


@@ -30,8 +30,12 @@ use github's pull request workflow to review the contribution, and either ask
 you to make any refinements needed or merge it and make them ourselves. The
 changes will then land on master when we next do a release.

-We use Jenkins for continuous integration (http://matrix.org/jenkins), and
-typically all pull requests get automatically tested: if your change breaks
-the build, Jenkins will yell about it in #matrix-dev:matrix.org so please
-lurk there and keep an eye open.
+We use `Jenkins <http://matrix.org/jenkins>`_ and
+`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
+integration. All pull requests to synapse get automatically tested by Travis;
+the Jenkins builds require an administrator to start them. If your change
+breaks the build, this will be shown in github, so please keep an eye on the
+pull request for feedback.

 Code style
 ~~~~~~~~~~


@@ -157,8 +157,8 @@ if you prefer.

 In case of problems, please see the _`Troubleshooting` section below.

-Alternatively, Silvio Fricke has contributed a Dockerfile to automate the
-above in Docker at https://registry.hub.docker.com/u/silviof/docker-matrix/.
+Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate the
+above in Docker at https://hub.docker.com/r/avhost/docker-matrix/tags/

 Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
 tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy

@@ -354,6 +354,10 @@ https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one
 Fedora
 ------

+Synapse is in the Fedora repositories as ``matrix-synapse``::
+
+    sudo dnf install matrix-synapse
+
 Oleg Girko provides Fedora RPMs at
 https://obs.infoserver.lv/project/monitor/matrix-synapse

@@ -610,6 +614,9 @@ should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
    $ dig -t srv _matrix._tcp.example.com
    _matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com.

+Note that the server hostname cannot be an alias (CNAME record): it has to point
+directly to the server hosting the synapse instance.
+
 You can then configure your homeserver to use ``<yourdomain.com>`` as the domain in
 its user-ids, by setting ``server_name``::

@@ -890,6 +897,17 @@ This should end with a 'PASSED' result::

    PASSED (successes=143)

+Running the Integration Tests
+=============================
+
+Synapse is accompanied by `SyTest <https://github.com/matrix-org/sytest>`_,
+a Matrix homeserver integration testing suite, which uses HTTP requests to
+access the API as a Matrix client would. It is able to run Synapse directly from
+the source tree, so installation of the server is not required.
+
+Testing with SyTest is recommended for verifying that changes related to the
+Client-Server API are functioning correctly. See the `installation instructions
+<https://github.com/matrix-org/sytest#installing>`_ for details.
+
 Building Internal API Documentation
 ===================================


@@ -48,6 +48,18 @@ returned by the Client-Server API:
    # configured on port 443.
    curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"

+Upgrading to $NEXT_VERSION
+==========================
+
+This release expands the anonymous usage stats sent if the opt-in
+``report_stats`` configuration is set to ``true``. We now capture RSS memory
+and cpu use at a very coarse level. This requires administrators to install
+the optional ``psutil`` python module.
+
+We would appreciate it if you could assist by ensuring this module is available
+and ``report_stats`` is enabled. This will let us see if performance changes to
+synapse are having an impact on the general community.
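The coarse stats in question look roughly like this (a sketch using the psutil API, mirroring the code this release adds to ``synapse/app/homeserver.py``):

.. code:: python

    import psutil  # optional dependency; reporting skips these stats if missing

    process = psutil.Process()

    # Prime cpu_percent: the first call establishes the baseline, so a later
    # call returns a meaningful average since the previous one.
    process.cpu_percent(interval=None)

    stats = {
        "memory_rss": process.memory_info().rss,  # resident set size, in bytes
        "cpu_average": int(process.cpu_percent(interval=None)),
    }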
 Upgrading to v0.15.0
 ====================

contrib/README.rst (new file)

@@ -0,0 +1,10 @@
+Community Contributions
+=======================
+
+Everything in this directory is a community-submitted project that may be useful
+to others. As such, the project maintainers cannot guarantee support, stability
+or backwards compatibility of these projects.
+
+Files in this directory should *not* be relied on directly, as they may not
+continue to work or exist in future. If you wish to use any of these files then
+they should be copied to avoid them breaking from underneath you.


@@ -22,6 +22,8 @@ import argparse
 from synapse.events import FrozenEvent
 from synapse.util.frozenutils import unfreeze

+from six import string_types
+

 def make_graph(file_name, room_id, file_prefix, limit):
     print "Reading lines"

@@ -58,7 +60,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
         for key, value in unfreeze(event.get_dict()["content"]).items():
             if value is None:
                 value = "<null>"
-            elif isinstance(value, basestring):
+            elif isinstance(value, string_types):
                 pass
             else:
                 value = json.dumps(value)


@@ -202,11 +202,11 @@ new PromConsole.Graph({
 <h1>Requests</h1>

 <h3>Requests by Servlet</h3>
-<div id="synapse_http_server_requests_servlet"></div>
+<div id="synapse_http_server_request_count_servlet"></div>
 <script>
 new PromConsole.Graph({
-  node: document.querySelector("#synapse_http_server_requests_servlet"),
-  expr: "rate(synapse_http_server_requests:servlet[2m])",
+  node: document.querySelector("#synapse_http_server_request_count_servlet"),
+  expr: "rate(synapse_http_server_request_count:servlet[2m])",
   name: "[[servlet]]",
   yAxisFormatter: PromConsole.NumberFormatter.humanize,
   yHoverFormatter: PromConsole.NumberFormatter.humanize,

@@ -215,11 +215,11 @@ new PromConsole.Graph({
 })
 </script>
 <h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
-<div id="synapse_http_server_requests_servlet_minus_events"></div>
+<div id="synapse_http_server_request_count_servlet_minus_events"></div>
 <script>
 new PromConsole.Graph({
-  node: document.querySelector("#synapse_http_server_requests_servlet_minus_events"),
-  expr: "rate(synapse_http_server_requests:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
+  node: document.querySelector("#synapse_http_server_request_count_servlet_minus_events"),
+  expr: "rate(synapse_http_server_request_count:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
   name: "[[servlet]]",
   yAxisFormatter: PromConsole.NumberFormatter.humanize,
   yHoverFormatter: PromConsole.NumberFormatter.humanize,

@@ -233,7 +233,7 @@ new PromConsole.Graph({
 <script>
 new PromConsole.Graph({
   node: document.querySelector("#synapse_http_server_response_time_avg"),
-  expr: "rate(synapse_http_server_response_time:total[2m]) / rate(synapse_http_server_response_time:count[2m]) / 1000",
+  expr: "rate(synapse_http_server_response_time_seconds[2m]) / rate(synapse_http_server_response_count[2m]) / 1000",
   name: "[[servlet]]",
   yAxisFormatter: PromConsole.NumberFormatter.humanize,
   yHoverFormatter: PromConsole.NumberFormatter.humanize,

@@ -276,7 +276,7 @@ new PromConsole.Graph({
 <script>
 new PromConsole.Graph({
   node: document.querySelector("#synapse_http_server_response_ru_utime"),
-  expr: "rate(synapse_http_server_response_ru_utime:total[2m])",
+  expr: "rate(synapse_http_server_response_ru_utime_seconds[2m])",
   name: "[[servlet]]",
   yAxisFormatter: PromConsole.NumberFormatter.humanize,
   yHoverFormatter: PromConsole.NumberFormatter.humanize,

@@ -291,7 +291,7 @@ new PromConsole.Graph({
 <script>
 new PromConsole.Graph({
   node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
-  expr: "rate(synapse_http_server_response_db_txn_duration:total[2m])",
+  expr: "rate(synapse_http_server_response_db_txn_duration_seconds[2m])",
   name: "[[servlet]]",
   yAxisFormatter: PromConsole.NumberFormatter.humanize,
   yHoverFormatter: PromConsole.NumberFormatter.humanize,

@@ -306,7 +306,7 @@ new PromConsole.Graph({
 <script>
 new PromConsole.Graph({
   node: document.querySelector("#synapse_http_server_send_time_avg"),
-  expr: "rate(synapse_http_server_response_time:total{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_time:count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
+  expr: "rate(synapse_http_server_response_time_seconds{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
   name: "[[servlet]]",
   yAxisFormatter: PromConsole.NumberFormatter.humanize,
   yHoverFormatter: PromConsole.NumberFormatter.humanize,


@@ -1,10 +1,10 @@
 synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
 synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
-synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method)
-synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet)
-synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet)
+synapse_http_server_request_count:method{servlet=""} = sum(synapse_http_server_request_count) by (method)
+synapse_http_server_request_count:servlet{method=""} = sum(synapse_http_server_request_count) by (servlet)
+synapse_http_server_request_count:total{servlet=""} = sum(synapse_http_server_request_count:by_method) by (servlet)

 synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
 synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])


@@ -5,19 +5,19 @@ groups:
     expr: "sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)"
   - record: "synapse_federation_transaction_queue_pendingPdus:total"
     expr: "sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)"
-  - record: 'synapse_http_server_requests:method'
+  - record: 'synapse_http_server_request_count:method'
     labels:
       servlet: ""
-    expr: "sum(synapse_http_server_requests) by (method)"
+    expr: "sum(synapse_http_server_request_count) by (method)"
-  - record: 'synapse_http_server_requests:servlet'
+  - record: 'synapse_http_server_request_count:servlet'
     labels:
       method: ""
-    expr: 'sum(synapse_http_server_requests) by (servlet)'
+    expr: 'sum(synapse_http_server_request_count) by (servlet)'
-  - record: 'synapse_http_server_requests:total'
+  - record: 'synapse_http_server_request_count:total'
     labels:
       servlet: ""
-    expr: 'sum(synapse_http_server_requests:by_method) by (servlet)'
+    expr: 'sum(synapse_http_server_request_count:by_method) by (servlet)'
   - record: 'synapse_cache:hit_ratio_5m'
     expr: 'rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])'


@@ -2,6 +2,9 @@
 # (e.g. https://www.archlinux.org/packages/community/any/matrix-synapse/ for ArchLinux)
 # rather than in a user home directory or similar under virtualenv.

+# **NOTE:** This is an example service file that may change in the future. If you
+# wish to use this please copy rather than symlink it.
+
 [Unit]
 Description=Synapse Matrix homeserver

@@ -12,6 +15,7 @@ Group=synapse
 WorkingDirectory=/var/lib/synapse
 ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml
 ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml
+# EnvironmentFile=-/etc/sysconfig/synapse # Can be used to e.g. set SYNAPSE_CACHE_FACTOR

 [Install]
 WantedBy=multi-user.target


@@ -16,9 +16,11 @@ including an ``access_token`` of a server admin.

 By default, events sent by local users are not deleted, as they may represent
 the only copies of this content in existence. (Events sent by remote users are
-deleted, and room state data before the cutoff is always removed).
+deleted.)

+Room state data (such as joins, leaves, topic) is always preserved.
+
-To delete local events as well, set ``delete_local_events`` in the body:
+To delete local message events as well, set ``delete_local_events`` in the body:

 .. code:: json


@@ -55,7 +55,12 @@ synapse process.)

 You then create a set of configs for the various worker processes. These
 should be worker configuration files, and should be stored in a dedicated
-subdirectory, to allow synctl to manipulate them.
+subdirectory, to allow synctl to manipulate them. An additional configuration
+for the master synapse process will need to be created because the process will
+not be started automatically. That configuration should look like this::
+
+    worker_app: synapse.app.homeserver
+    daemonize: true

 Each worker configuration file inherits the configuration of the main homeserver
 configuration file. You can then override configuration specific to that worker,

@@ -230,9 +235,11 @@ file. For example::

 ``synapse.app.event_creator``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Handles non-state event creation. It can handle REST endpoints matching:
+Handles some event creation. It can handle REST endpoints matching::

     ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
+    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
+    ^/_matrix/client/(api/v1|r0|unstable)/join/

 It will create events locally and then send them on to the main synapse
 instance to be persisted and handled.


@@ -1,5 +1,7 @@
 #! /bin/bash

+set -eux
+
 cd "`dirname $0`/.."

 TOX_DIR=$WORKSPACE/.tox

@@ -14,7 +16,20 @@ fi
 tox -e py27 --notest -v

 TOX_BIN=$TOX_DIR/py27/bin
-$TOX_BIN/pip install setuptools
+
+# cryptography 2.2 requires setuptools >= 18.5.
+#
+# older versions of virtualenv (?) give us a virtualenv with the same version
+# of setuptools as is installed on the system python (and tox runs virtualenv
+# under python3, so we get the version of setuptools that is installed on that).
+#
+# anyway, make sure that we have a recent enough setuptools.
+$TOX_BIN/pip install 'setuptools>=18.5'
+
+# we also need a semi-recent version of pip, because old ones fail to install
+# the "enum34" dependency of cryptography.
+$TOX_BIN/pip install 'pip>=10'
+
 { python synapse/python_dependencies.py
   echo lxml psycopg2
 } | xargs $TOX_BIN/pip install


@@ -1,6 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -29,6 +30,8 @@ import time
 import traceback
 import yaml

+from six import string_types
+
 logger = logging.getLogger("synapse_port_db")

@@ -250,6 +253,12 @@ class Porter(object):
     @defer.inlineCallbacks
     def handle_table(self, table, postgres_size, table_size, forward_chunk,
                      backward_chunk):
+        logger.info(
+            "Table %s: %i/%i (rows %i-%i) already ported",
+            table, postgres_size, table_size,
+            backward_chunk + 1, forward_chunk - 1,
+        )
+
         if not table_size:
             return

@@ -467,31 +476,10 @@ class Porter(object):
             self.progress.set_state("Preparing PostgreSQL")
             self.setup_db(postgres_config, postgres_engine)

-            # Step 2. Get tables.
-            self.progress.set_state("Fetching tables")
-            sqlite_tables = yield self.sqlite_store._simple_select_onecol(
-                table="sqlite_master",
-                keyvalues={
-                    "type": "table",
-                },
-                retcol="name",
-            )
-
-            postgres_tables = yield self.postgres_store._simple_select_onecol(
-                table="information_schema.tables",
-                keyvalues={},
-                retcol="distinct table_name",
-            )
-
-            tables = set(sqlite_tables) & set(postgres_tables)
-
-            self.progress.set_state("Creating tables")
-
-            logger.info("Found %d tables", len(tables))
+            self.progress.set_state("Creating port tables")

             def create_port_table(txn):
                 txn.execute(
-                    "CREATE TABLE port_from_sqlite3 ("
+                    "CREATE TABLE IF NOT EXISTS port_from_sqlite3 ("
                     " table_name varchar(100) NOT NULL UNIQUE,"
                     " forward_rowid bigint NOT NULL,"
                     " backward_rowid bigint NOT NULL"

@@ -517,18 +505,33 @@ class Porter(object):
                     "alter_table", alter_table
                 )
             except Exception as e:
-                logger.info("Failed to create port table: %s", e)
+                pass

+            try:
                 yield self.postgres_store.runInteraction(
                     "create_port_table", create_port_table
                 )
+            except Exception as e:
+                logger.info("Failed to create port table: %s", e)

-            self.progress.set_state("Setting up")
+            # Step 2. Get tables.
+            self.progress.set_state("Fetching tables")
+            sqlite_tables = yield self.sqlite_store._simple_select_onecol(
+                table="sqlite_master",
+                keyvalues={
+                    "type": "table",
+                },
+                retcol="name",
+            )

-            # Set up tables.
+            postgres_tables = yield self.postgres_store._simple_select_onecol(
+                table="information_schema.tables",
+                keyvalues={},
+                retcol="distinct table_name",
+            )
+
+            tables = set(sqlite_tables) & set(postgres_tables)
+            logger.info("Found %d tables", len(tables))
+
+            # Step 3. Figure out what still needs copying
+            self.progress.set_state("Checking on port progress")
             setup_res = yield defer.gatherResults(
                 [
                     self.setup_table(table)

@@ -539,7 +542,8 @@ class Porter(object):
                 consumeErrors=True,
             )

-            # Process tables.
+            # Step 4. Do the copying.
+            self.progress.set_state("Copying to postgres")
             yield defer.gatherResults(
                 [
                     self.handle_table(*res)

@@ -548,6 +552,9 @@ class Porter(object):
                 consumeErrors=True,
             )

+            # Step 5. Do final post-processing
+            yield self._setup_state_group_id_seq()
+
             self.progress.done()
         except:
             global end_error_exec_info

@@ -569,7 +576,7 @@ class Porter(object):
         def conv(j, col):
             if j in bool_cols:
                 return bool(col)
-            elif isinstance(col, basestring) and "\0" in col:
+            elif isinstance(col, string_types) and "\0" in col:
                 logger.warn("DROPPING ROW: NUL value in table %s col %s: %r", table, headers[j], col)
                 raise BadValueException();
             return col

@@ -707,6 +714,16 @@ class Porter(object):

         defer.returnValue((done, remaining + done))

+    def _setup_state_group_id_seq(self):
+        def r(txn):
+            txn.execute("SELECT MAX(id) FROM state_groups")
+            next_id = txn.fetchone()[0] + 1
+            txn.execute(
+                "ALTER SEQUENCE state_group_id_seq RESTART WITH %s",
+                (next_id,),
+            )
+
+        return self.postgres_store.runInteraction("setup_state_group_id_seq", r)
+
 ##############################################
 ###### The following is simply UI stuff ######


@@ -16,4 +16,4 @@
 """ This is a reference implementation of a Matrix home server.
 """

-__version__ = "0.26.0"
+__version__ = "0.28.1"


@@ -204,8 +204,8 @@ class Auth(object):
             ip_addr = self.hs.get_ip_from_request(request)
             user_agent = request.requestHeaders.getRawHeaders(
-                "User-Agent",
-                default=[""]
+                b"User-Agent",
+                default=[b""]
             )[0]
             if user and access_token and ip_addr:
                 self.store.insert_client_ip(

@@ -672,7 +672,7 @@ def has_access_token(request):
         bool: False if no access_token was given, True otherwise.
     """
     query_params = request.args.get("access_token")
-    auth_headers = request.requestHeaders.getRawHeaders("Authorization")
+    auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
     return bool(query_params) or bool(auth_headers)

@@ -692,8 +692,8 @@ def get_access_token_from_request(request, token_not_found_http_status=401):
         AuthError: If there isn't an access_token in the request.
     """

-    auth_headers = request.requestHeaders.getRawHeaders("Authorization")
-    query_params = request.args.get("access_token")
+    auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
+    query_params = request.args.get(b"access_token")
     if auth_headers:
         # Try the get the access_token from a "Authorization: Bearer"
         # header
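The move to byte-string keys matters because, on Python 3, Twisted parses the query string into a dict keyed by bytes, so a ``str`` key silently misses. A standalone illustration (not synapse code; it uses ``parse_qs`` over bytes to show the same shape, and the token value is a placeholder):

```python
from urllib.parse import parse_qs

# Parsing a bytes query string yields bytes keys and values,
# much like request.args on Python 3.
args = parse_qs(b"access_token=abc123")

assert args.get("access_token") is None          # str key misses
assert args.get(b"access_token") == [b"abc123"]  # bytes key matches
```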


@@ -16,6 +16,9 @@
 """Contains constants from the specification."""

+# the "depth" field on events is limited to 2**63 - 1
+MAX_DEPTH = 2**63 - 1
+

 class Membership(object):


@@ -15,9 +15,11 @@

 """Contains exceptions and error codes."""

-import json
 import logging

+import simplejson as json
+from six import iteritems
+
 logger = logging.getLogger(__name__)

@@ -296,7 +298,7 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
         A dict representing the error response JSON.
     """
     err = {"error": msg, "errcode": code}
-    for key, value in kwargs.iteritems():
+    for key, value in iteritems(kwargs):
         err[key] = value
     return err


@@ -17,7 +17,7 @@ from synapse.storage.presence import UserPresenceState
 from synapse.types import UserID, RoomID
 from twisted.internet import defer

-import ujson as json
+import simplejson as json
 import jsonschema
 from jsonschema import FormatChecker


@@ -32,11 +32,11 @@ from synapse.replication.tcp.client import ReplicationClientHandler
 from synapse.server import HomeServer
 from synapse.storage.engines import create_engine
 from synapse.util.httpresourcetree import create_resource_tree
-from synapse.util.logcontext import LoggingContext, preserve_fn
+from synapse.util.logcontext import LoggingContext, run_in_background
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
-from twisted.internet import reactor
-from twisted.web.resource import Resource
+from twisted.internet import reactor, defer
+from twisted.web.resource import NoResource

 logger = logging.getLogger("synapse.app.appservice")

@@ -64,7 +64,7 @@ class AppserviceServer(HomeServer):
         if name == "metrics":
             resources[METRICS_PREFIX] = MetricsResource(self)

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,

@@ -112,9 +112,14 @@ class ASReplicationHandler(ReplicationClientHandler):
         if stream_name == "events":
             max_stream_id = self.store.get_room_max_stream_ordering()
-            preserve_fn(
-                self.appservice_handler.notify_interested_services
-            )(max_stream_id)
+            run_in_background(self._notify_app_services, max_stream_id)
+
+    @defer.inlineCallbacks
+    def _notify_app_services(self, room_stream_id):
+        try:
+            yield self.appservice_handler.notify_interested_services(room_stream_id)
+        except Exception:
+            logger.exception("Error notifying application services of event")

 def start(config_options):
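The same pattern recurs in the worker changes that follow: ``preserve_fn(f)(args)`` becomes ``run_in_background(f, args)``, and errors are caught and logged inside the background task itself, since it has no caller to report to. A minimal sketch of the shape (the handler and its ``process`` method are hypothetical names):

```python
import logging

from twisted.internet import defer

from synapse.util.logcontext import run_in_background

logger = logging.getLogger(__name__)


@defer.inlineCallbacks
def _do_work(handler, stream_id):
    try:
        yield handler.process(stream_id)  # hypothetical async handler
    except Exception:
        # Swallow and log: failures in a fire-and-forget task would
        # otherwise be silently dropped.
        logger.exception("Error processing stream update")


def on_update(handler, stream_id):
    # Schedule without yielding; errors are handled inside _do_work.
    run_in_background(_do_work, handler, stream_id)
```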


@@ -44,7 +44,7 @@ from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
 from twisted.internet import reactor
-from twisted.web.resource import Resource
+from twisted.web.resource import NoResource

 logger = logging.getLogger("synapse.app.client_reader")

@@ -88,7 +88,7 @@ class ClientReaderServer(HomeServer):
             "/_matrix/client/api/v1": resource,
         })

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,
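This one-line swap repeats in each worker below. The difference it makes, in an illustrative snippet (not synapse code): ``NoResource`` renders a proper 404 for unregistered paths, where a bare ``Resource`` root would serve an empty page.

```python
from twisted.web.resource import NoResource, Resource

root = NoResource()                  # unknown paths now yield a real 404
root.putChild(b"known", Resource())  # registered children still resolve
```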


@@ -52,7 +52,7 @@ from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
 from twisted.internet import reactor
-from twisted.web.resource import Resource
+from twisted.web.resource import NoResource

 logger = logging.getLogger("synapse.app.event_creator")

@@ -104,7 +104,7 @@ class EventCreatorServer(HomeServer):
             "/_matrix/client/api/v1": resource,
         })

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,


@@ -41,7 +41,7 @@ from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
 from twisted.internet import reactor
-from twisted.web.resource import Resource
+from twisted.web.resource import NoResource

 logger = logging.getLogger("synapse.app.federation_reader")

@@ -77,7 +77,7 @@ class FederationReaderServer(HomeServer):
             FEDERATION_PREFIX: TransportLayerServer(self),
         })

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,


@@ -38,11 +38,11 @@ from synapse.server import HomeServer
 from synapse.storage.engines import create_engine
 from synapse.util.async import Linearizer
 from synapse.util.httpresourcetree import create_resource_tree
-from synapse.util.logcontext import LoggingContext, preserve_fn
+from synapse.util.logcontext import LoggingContext, run_in_background
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
 from twisted.internet import defer, reactor
-from twisted.web.resource import Resource
+from twisted.web.resource import NoResource

 logger = logging.getLogger("synapse.app.federation_sender")

@@ -91,7 +91,7 @@ class FederationSenderServer(HomeServer):
         if name == "metrics":
             resources[METRICS_PREFIX] = MetricsResource(self)

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,

@@ -229,7 +229,7 @@ class FederationSenderHandler(object):
         # presence, typing, etc.
         if stream_name == "federation":
             send_queue.process_rows_for_federation(self.federation_sender, rows)
-            preserve_fn(self.update_token)(token)
+            run_in_background(self.update_token, token)

         # We also need to poke the federation sender when new events happen
         elif stream_name == "events":

@@ -237,6 +237,7 @@ class FederationSenderHandler(object):
     @defer.inlineCallbacks
     def update_token(self, token):
+        try:
             self.federation_position = token

             # We linearize here to ensure we don't have races updating the token

@@ -250,6 +251,8 @@ class FederationSenderHandler(object):
                 # its in memory queues
                 self.replication_client.send_federation_ack(self.federation_position)
                 self._last_ack = self.federation_position
+        except Exception:
+            logger.exception("Error updating federation stream position")

 if __name__ == '__main__':


@@ -44,7 +44,7 @@ from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
 from twisted.internet import defer, reactor
-from twisted.web.resource import Resource
+from twisted.web.resource import NoResource

 logger = logging.getLogger("synapse.app.frontend_proxy")

@@ -90,7 +90,7 @@ class KeyUploadServlet(RestServlet):
         # They're actually trying to upload something, proxy to main synapse.
         # Pass through the auth headers, if any, in case the access token
         # is there.
-        auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
+        auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
         headers = {
             "Authorization": auth_headers,
         }

@@ -142,7 +142,7 @@ class FrontendProxyServer(HomeServer):
             "/_matrix/client/api/v1": resource,
         })

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,


@@ -48,6 +48,7 @@ from synapse.server import HomeServer
 from synapse.storage import are_all_users_on_domain
 from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
 from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
+from synapse.util.caches import CACHE_SIZE_FACTOR
 from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole

@@ -56,7 +57,7 @@ from synapse.util.rlimit import change_resource_limit
 from synapse.util.versionstring import get_version_string
 from twisted.application import service
 from twisted.internet import defer, reactor
-from twisted.web.resource import EncodingResourceWrapper, Resource
+from twisted.web.resource import EncodingResourceWrapper, NoResource
 from twisted.web.server import GzipEncoderFactory
 from twisted.web.static import File

@@ -126,7 +127,7 @@ class SynapseHomeServer(HomeServer):
         if WEB_CLIENT_PREFIX in resources:
             root_resource = RootRedirect(WEB_CLIENT_PREFIX)
         else:
-            root_resource = Resource()
+            root_resource = NoResource()

         root_resource = create_resource_tree(resources, root_resource)

@@ -402,6 +403,10 @@ def run(hs):
     stats = {}

+    # Contains the list of processes we will be monitoring
+    # currently either 0 or 1
+    stats_process = []
+
     @defer.inlineCallbacks
     def phone_stats_home():
         logger.info("Gathering stats for reporting")

@@ -425,8 +430,21 @@ def run(hs):
         stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
         stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()

+        r30_results = yield hs.get_datastore().count_r30_users()
+        for name, count in r30_results.iteritems():
+            stats["r30_users_" + name] = count
+
         daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
         stats["daily_sent_messages"] = daily_sent_messages
+        stats["cache_factor"] = CACHE_SIZE_FACTOR
+        stats["event_cache_size"] = hs.config.event_cache_size
+
+        if len(stats_process) > 0:
+            stats["memory_rss"] = 0
+            stats["cpu_average"] = 0
+            for process in stats_process:
+                stats["memory_rss"] += process.memory_info().rss
+                stats["cpu_average"] += int(process.cpu_percent(interval=None))

         logger.info("Reporting stats to matrix.org: %s" % (stats,))
         try:

@@ -437,10 +455,32 @@ def run(hs):
         except Exception as e:
             logger.warn("Error reporting stats: %s", e)

+    def performance_stats_init():
+        try:
+            import psutil
+            process = psutil.Process()
+            # Ensure we can fetch both, and make the initial request for cpu_percent
+            # so the next request will use this as the initial point.
+            process.memory_info().rss
+            process.cpu_percent(interval=None)
+            logger.info("report_stats can use psutil")
+            stats_process.append(process)
+        except (ImportError, AttributeError):
+            logger.warn(
+                "report_stats enabled but psutil is not installed or incorrect version."
+                " Disabling reporting of memory/cpu stats."
+                " Ensuring psutil is available will help matrix.org track performance"
+                " changes across releases."
+            )
+
     if hs.config.report_stats:
         logger.info("Scheduling stats reporting for 3 hour intervals")
         clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)

+        # We need to defer this init for the cases that we daemonize
+        # otherwise the process ID we get is that of the non-daemon process
+        clock.call_later(0, performance_stats_init)
+
         # We wait 5 minutes to send the first set of stats as the server can
         # be quite busy the first few minutes
         clock.call_later(5 * 60, phone_stats_home)


@@ -43,7 +43,7 @@ from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
 from twisted.internet import reactor
-from twisted.web.resource import Resource
+from twisted.web.resource import NoResource

 logger = logging.getLogger("synapse.app.media_repository")

@@ -84,7 +84,7 @@ class MediaRepositoryServer(HomeServer):
             ),
         })

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,


@@ -33,11 +33,11 @@ from synapse.server import HomeServer
 from synapse.storage import DataStore
 from synapse.storage.engines import create_engine
 from synapse.util.httpresourcetree import create_resource_tree
-from synapse.util.logcontext import LoggingContext, preserve_fn
+from synapse.util.logcontext import LoggingContext, run_in_background
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
 from twisted.internet import defer, reactor
-from twisted.web.resource import Resource
+from twisted.web.resource import NoResource

 logger = logging.getLogger("synapse.app.pusher")

@@ -94,7 +94,7 @@ class PusherServer(HomeServer):
         if name == "metrics":
             resources[METRICS_PREFIX] = MetricsResource(self)

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,

@@ -140,10 +140,11 @@ class PusherReplicationHandler(ReplicationClientHandler):
     def on_rdata(self, stream_name, token, rows):
         super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
-        preserve_fn(self.poke_pushers)(stream_name, token, rows)
+        run_in_background(self.poke_pushers, stream_name, token, rows)

     @defer.inlineCallbacks
     def poke_pushers(self, stream_name, token, rows):
+        try:
             if stream_name == "pushers":
                 for row in rows:
                     if row.deleted:

@@ -158,6 +159,8 @@ class PusherReplicationHandler(ReplicationClientHandler):
                 yield self.pusher_pool.on_new_receipts(
                     token, token, set(row.room_id for row in rows)
                 )
+        except Exception:
+            logger.exception("Error poking pushers")

     def stop_pusher(self, user_id, app_id, pushkey):
         key = "%s:%s" % (app_id, pushkey)


@@ -51,12 +51,14 @@ from synapse.storage.engines import create_engine
 from synapse.storage.presence import UserPresenceState
 from synapse.storage.roommember import RoomMemberStore
 from synapse.util.httpresourcetree import create_resource_tree
-from synapse.util.logcontext import LoggingContext, preserve_fn
+from synapse.util.logcontext import LoggingContext, run_in_background
 from synapse.util.manhole import manhole
 from synapse.util.stringutils import random_string
 from synapse.util.versionstring import get_version_string
 from twisted.internet import defer, reactor
-from twisted.web.resource import Resource
+from twisted.web.resource import NoResource
+
+from six import iteritems

 logger = logging.getLogger("synapse.app.synchrotron")

@@ -211,7 +213,7 @@ class SynchrotronPresence(object):
     def get_currently_syncing_users(self):
         return [
-            user_id for user_id, count in self.user_to_num_current_syncs.iteritems()
+            user_id for user_id, count in iteritems(self.user_to_num_current_syncs)
             if count > 0
         ]

@@ -269,7 +271,7 @@ class SynchrotronServer(HomeServer):
             "/_matrix/client/api/v1": resource,
         })

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,

@@ -325,8 +327,7 @@ class SyncReplicationHandler(ReplicationClientHandler):
     def on_rdata(self, stream_name, token, rows):
         super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
-
-        preserve_fn(self.process_and_notify)(stream_name, token, rows)
+        run_in_background(self.process_and_notify, stream_name, token, rows)

     def get_streams_to_replicate(self):
         args = super(SyncReplicationHandler, self).get_streams_to_replicate()

@@ -338,6 +339,7 @@ class SyncReplicationHandler(ReplicationClientHandler):
     @defer.inlineCallbacks
     def process_and_notify(self, stream_name, token, rows):
+        try:
             if stream_name == "events":
                 # We shouldn't get multiple rows per token for events stream, so
                 # we don't need to optimise this for multiple rows.

@@ -387,6 +389,8 @@ class SyncReplicationHandler(ReplicationClientHandler):
             self.notifier.on_new_event(
                 "groups_key", token, users=[row.user_id for row in rows],
             )
+        except Exception:
+            logger.exception("Error processing replication")

 def start(config_options):
def start(config_options): def start(config_options):


@@ -38,7 +38,7 @@ def pid_running(pid):
     try:
         os.kill(pid, 0)
         return True
-    except OSError, err:
+    except OSError as err:
         if err.errno == errno.EPERM:
             return True
         return False
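For reference, os.kill(pid, 0) in the hunk above sends no signal at all: signal 0 only performs the existence and permission check, so an EPERM error means the process exists but is owned by a user we may not signal. The complete check, as a standalone sketch:

    import errno
    import os

    def pid_running(pid):
        try:
            os.kill(pid, 0)  # signal 0: existence/permission check only
            return True
        except OSError as err:
            # EPERM: the process exists, but we may not signal it
            return err.errno == errno.EPERM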
@@ -98,7 +98,7 @@ def stop(pidfile, app):
         try:
             os.kill(pid, signal.SIGTERM)
             write("stopped %s" % (app,), colour=GREEN)
-        except OSError, err:
+        except OSError as err:
            if err.errno == errno.ESRCH:
                write("%s not running" % (app,), colour=YELLOW)
            elif err.errno == errno.EPERM:
@@ -252,6 +252,7 @@ def main():
         for running_pid in running_pids:
             while pid_running(running_pid):
                 time.sleep(0.2)
+        write("All processes exited; now restarting...")

     if action == "start" or action == "restart":
         if start_stop_synapse:


@@ -39,11 +39,11 @@
 from synapse.storage.engines import create_engine
 from synapse.storage.user_directory import UserDirectoryStore
 from synapse.util.caches.stream_change_cache import StreamChangeCache
 from synapse.util.httpresourcetree import create_resource_tree
-from synapse.util.logcontext import LoggingContext, preserve_fn
+from synapse.util.logcontext import LoggingContext, run_in_background
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
-from twisted.internet import reactor
-from twisted.web.resource import Resource
+from twisted.internet import reactor, defer
+from twisted.web.resource import NoResource

 logger = logging.getLogger("synapse.app.user_dir")

@@ -116,7 +116,7 @@ class UserDirectoryServer(HomeServer):
                     "/_matrix/client/api/v1": resource,
                 })

-        root_resource = create_resource_tree(resources, Resource())
+        root_resource = create_resource_tree(resources, NoResource())

         _base.listen_tcp(
             bind_addresses,
@@ -164,7 +164,14 @@ class UserDirectoryReplicationHandler(ReplicationClientHandler):
             stream_name, token, rows
         )
         if stream_name == "current_state_deltas":
-            preserve_fn(self.user_directory.notify_new_event)()
+            run_in_background(self._notify_directory)
+
+    @defer.inlineCallbacks
+    def _notify_directory(self):
+        try:
+            yield self.user_directory.notify_new_event()
+        except Exception:
+            logger.exception("Error notifying user directory of state update")

 def start(config_options):


@@ -21,6 +21,8 @@ from twisted.internet import defer
 import logging
 import re

+from six import string_types
+
 logger = logging.getLogger(__name__)

@@ -146,7 +148,7 @@ class ApplicationService(object):
             )
             regex = regex_obj.get("regex")
-            if isinstance(regex, basestring):
+            if isinstance(regex, string_types):
                 regex_obj["regex"] = re.compile(regex)  # Pre-compile regex
             else:
                 raise ValueError(


@@ -18,7 +18,6 @@ from synapse.api.constants import ThirdPartyEntityKind
 from synapse.api.errors import CodeMessageException
 from synapse.http.client import SimpleHttpClient
 from synapse.events.utils import serialize_event
-from synapse.util.logcontext import preserve_fn, make_deferred_yieldable
 from synapse.util.caches.response_cache import ResponseCache
 from synapse.types import ThirdPartyInstanceID

@@ -73,7 +72,8 @@ class ApplicationServiceApi(SimpleHttpClient):
         super(ApplicationServiceApi, self).__init__(hs)
         self.clock = hs.get_clock()

-        self.protocol_meta_cache = ResponseCache(hs, timeout_ms=HOUR_IN_MS)
+        self.protocol_meta_cache = ResponseCache(hs, "as_protocol_meta",
+                                                 timeout_ms=HOUR_IN_MS)

     @defer.inlineCallbacks
     def query_user(self, service, user_id):
@@ -193,12 +193,7 @@ class ApplicationServiceApi(SimpleHttpClient):
                 defer.returnValue(None)

         key = (service.id, protocol)
-        result = self.protocol_meta_cache.get(key)
-        if not result:
-            result = self.protocol_meta_cache.set(
-                key, preserve_fn(_get)()
-            )
-        return make_deferred_yieldable(result)
+        return self.protocol_meta_cache.wrap(key, _get)
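The wrap() call above subsumes the old get/set dance: it returns any cached or in-flight result for the key, and otherwise starts the callback and caches its deferred so concurrent callers share it. Roughly, and ignoring the expiry and log-context details the real ResponseCache handles:

    def wrap(self, key, callback, *args, **kwargs):
        # get-or-compute: reuse an in-flight result for `key` if there is one...
        result = self.get(key)
        if not result:
            # ...otherwise start the computation and cache its deferred
            result = self.set(key, run_in_background(callback, *args, **kwargs))
        return make_deferred_yieldable(result)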
     @defer.inlineCallbacks
     def push_bulk(self, service, events, txn_id=None):


@@ -51,7 +51,7 @@ components.
 from twisted.internet import defer

 from synapse.appservice import ApplicationServiceState
-from synapse.util.logcontext import preserve_fn
+from synapse.util.logcontext import run_in_background
 from synapse.util.metrics import Measure

 import logging

@@ -106,7 +106,7 @@ class _ServiceQueuer(object):
     def enqueue(self, service, event):
         # if this service isn't being sent something
         self.queued_events.setdefault(service.id, []).append(event)
-        preserve_fn(self._send_request)(service)
+        run_in_background(self._send_request, service)

     @defer.inlineCallbacks
     def _send_request(self, service):
@@ -152,10 +152,10 @@ class _TransactionController(object):
                 if sent:
                     yield txn.complete(self.store)
                 else:
-                    preserve_fn(self._start_recoverer)(service)
-        except Exception as e:
-            logger.exception(e)
-            preserve_fn(self._start_recoverer)(service)
+                    run_in_background(self._start_recoverer, service)
+        except Exception:
+            logger.exception("Error creating appservice transaction")
+            run_in_background(self._start_recoverer, service)

     @defer.inlineCallbacks
     def on_recovered(self, recoverer):
@@ -176,6 +176,7 @@ class _TransactionController(object):
     @defer.inlineCallbacks
     def _start_recoverer(self, service):
+        try:
             yield self.store.set_appservice_state(
                 service,
                 ApplicationServiceState.DOWN
@@ -187,6 +188,8 @@ class _TransactionController(object):
             recoverer = self.recoverer_fn(service, self.on_recovered)
             self.add_recoverers([recoverer])
             recoverer.recover()
+        except Exception:
+            logger.exception("Error starting AS recoverer")

     @defer.inlineCallbacks
     def _is_service_up(self, service):


@@ -19,6 +19,8 @@
 import os
 import yaml
 from textwrap import dedent

+from six import integer_types
+
 class ConfigError(Exception):
     pass

@@ -49,7 +51,7 @@ Missing mandatory `server_name` config option.
 class Config(object):
     @staticmethod
     def parse_size(value):
-        if isinstance(value, int) or isinstance(value, long):
+        if isinstance(value, integer_types):
             return value
         sizes = {"K": 1024, "M": 1024 * 1024}
         size = 1
@@ -61,7 +63,7 @@ class Config(object):
     @staticmethod
     def parse_duration(value):
-        if isinstance(value, int) or isinstance(value, long):
+        if isinstance(value, integer_types):
             return value
         second = 1000
         minute = 60 * second
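For concreteness, parse_size accepts either a plain integer, returned unchanged, or a string with a K/M suffix, and parse_duration does the same for time suffixes, returning milliseconds:

    Config.parse_size(1024)      # -> 1024
    Config.parse_size("10M")     # -> 10 * 1024 * 1024
    Config.parse_duration("5m")  # -> 5 * 60 * 1000 (milliseconds)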
@@ -279,31 +281,31 @@ class Config(object):
             )
             if not cls.path_exists(config_dir_path):
                 os.makedirs(config_dir_path)
-            with open(config_path, "wb") as config_file:
-                config_bytes, config = obj.generate_config(
+            with open(config_path, "w") as config_file:
+                config_str, config = obj.generate_config(
                     config_dir_path=config_dir_path,
                     server_name=server_name,
                     report_stats=(config_args.report_stats == "yes"),
                     is_generating_file=True
                 )
                 obj.invoke_all("generate_files", config)
-                config_file.write(config_bytes)
-            print (
+                config_file.write(config_str)
+            print((
                 "A config file has been generated in %r for server name"
                 " %r with corresponding SSL keys and self-signed"
                 " certificates. Please review this file and customise it"
                 " to your needs."
-            ) % (config_path, server_name)
-            print (
+            ) % (config_path, server_name))
+            print(
                 "If this server name is incorrect, you will need to"
                 " regenerate the SSL certificates"
             )
             return
         else:
-            print (
+            print((
                 "Config file %r already exists. Generating any missing key"
                 " files."
-            ) % (config_path,)
+            ) % (config_path,))
             generate_keys = True

     parser = argparse.ArgumentParser(


@@ -17,10 +17,12 @@ from ._base import Config, ConfigError
 from synapse.appservice import ApplicationService
 from synapse.types import UserID

-import urllib
 import yaml
 import logging

+from six import string_types
+from six.moves.urllib import parse as urlparse
+
 logger = logging.getLogger(__name__)

@@ -89,21 +91,21 @@ def _load_appservice(hostname, as_info, config_filename):
         "id", "as_token", "hs_token", "sender_localpart"
     ]
     for field in required_string_fields:
-        if not isinstance(as_info.get(field), basestring):
+        if not isinstance(as_info.get(field), string_types):
             raise KeyError("Required string field: '%s' (%s)" % (
                 field, config_filename,
             ))

     # 'url' must either be a string or explicitly null, not missing
     # to avoid accidentally turning off push for ASes.
-    if (not isinstance(as_info.get("url"), basestring) and
+    if (not isinstance(as_info.get("url"), string_types) and
             as_info.get("url", "") is not None):
         raise KeyError(
             "Required string field or explicit null: 'url' (%s)" % (config_filename,)
         )

     localpart = as_info["sender_localpart"]
-    if urllib.quote(localpart) != localpart:
+    if urlparse.quote(localpart) != localpart:
         raise ValueError(
             "sender_localpart needs characters which are not URL encoded."
         )
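The quote() round-trip above is a compact way to require a URL-safe localpart: quote() returns its argument unchanged exactly when nothing needed escaping. For example:

    from six.moves.urllib import parse as urlparse

    urlparse.quote("alice")      # -> 'alice'       (unchanged: accepted)
    urlparse.quote("alice bob")  # -> 'alice%20bob' (changed: rejected)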
@@ -128,7 +130,7 @@ def _load_appservice(hostname, as_info, config_filename):
                     "Expected namespace entry in %s to be an object,"
                     " but got %s", ns, regex_obj
                 )
-            if not isinstance(regex_obj.get("regex"), basestring):
+            if not isinstance(regex_obj.get("regex"), string_types):
                 raise ValueError(
                     "Missing/bad type 'regex' key in %s", regex_obj
                 )


@@ -117,7 +117,7 @@ class LoggingConfig(Config):
         log_config = config.get("log_config")
         if log_config and not os.path.exists(log_config):
             log_file = self.abspath("homeserver.log")
-            with open(log_config, "wb") as log_config_file:
+            with open(log_config, "w") as log_config_file:
                 log_config_file.write(
                     DEFAULT_LOG_CONFIG.substitute(log_file=log_file)
                 )


@@ -77,7 +77,9 @@ class RegistrationConfig(Config):
         # Set the number of bcrypt rounds used to generate password hash.
         # Larger numbers increase the work factor needed to generate the hash.
-        # The default number of rounds is 12.
+        # The default number is 12 (which equates to 2^12 rounds).
+        # N.B. that increasing this will exponentially increase the time required
+        # to register or login - e.g. 24 => 2^24 rounds which will take >20 mins.
         bcrypt_rounds: 12

         # Allows users to register as guests without a password/email/etc, and

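The exponential cost called out in the new comment is easy to verify by timing hashes at increasing round counts; a sketch using the bcrypt package (each extra round roughly doubles the time):

    import time

    import bcrypt

    for rounds in (4, 8, 12):
        start = time.time()
        bcrypt.hashpw(b"s3cret", bcrypt.gensalt(rounds))
        print(rounds, "rounds:", time.time() - start, "seconds")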

@@ -133,7 +133,7 @@ class TlsConfig(Config):
         tls_dh_params_path = config["tls_dh_params_path"]

         if not self.path_exists(tls_private_key_path):
-            with open(tls_private_key_path, "w") as private_key_file:
+            with open(tls_private_key_path, "wb") as private_key_file:
                 tls_private_key = crypto.PKey()
                 tls_private_key.generate_key(crypto.TYPE_RSA, 2048)
                 private_key_pem = crypto.dump_privatekey(
@@ -148,7 +148,7 @@ class TlsConfig(Config):
             )

         if not self.path_exists(tls_certificate_path):
-            with open(tls_certificate_path, "w") as certificate_file:
+            with open(tls_certificate_path, "wb") as certificate_file:
                 cert = crypto.X509()
                 subject = cert.get_subject()
                 subject.CN = config["server_name"]


@@ -13,8 +13,8 @@
 # limitations under the License.

 from twisted.internet import ssl
-from OpenSSL import SSL
-from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName
+from OpenSSL import SSL, crypto
+from twisted.internet._sslverify import _defaultCurveName

 import logging

@@ -32,8 +32,9 @@ class ServerContextFactory(ssl.ContextFactory):
     @staticmethod
     def configure_context(context, config):
         try:
-            _ecCurve = _OpenSSLECCurve(_defaultCurveName)
-            _ecCurve.addECKeyToContext(context)
+            _ecCurve = crypto.get_elliptic_curve(_defaultCurveName)
+            context.set_tmp_ecdh(_ecCurve)
         except Exception:
             logger.exception("Failed to enable elliptic curve for TLS")
         context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)


@@ -19,7 +19,8 @@ from synapse.api.errors import SynapseError, Codes
 from synapse.util import unwrapFirstError, logcontext
 from synapse.util.logcontext import (
     PreserveLoggingContext,
-    preserve_fn
+    preserve_fn,
+    run_in_background,
 )
 from synapse.util.metrics import Measure

@@ -127,7 +128,7 @@ class Keyring(object):
                 verify_requests.append(verify_request)

-        preserve_fn(self._start_key_lookups)(verify_requests)
+        run_in_background(self._start_key_lookups, verify_requests)

         # Pass those keys to handle_key_deferred so that the json object
         # signatures can be verified
@@ -146,6 +147,7 @@ class Keyring(object):
             verify_requests (List[VerifyKeyRequest]):
         """

+        try:
             # create a deferred for each server we're going to look up the keys
             # for; we'll resolve them once we have completed our lookups.
             # These will be passed into wait_for_previous_lookups to block
@@ -192,6 +194,8 @@ class Keyring(object):
                 verify_request.deferred.addBoth(
                     remove_deferreds, verify_request,
                 )
+        except Exception:
+            logger.exception("Error starting key lookups")

     @defer.inlineCallbacks
     def wait_for_previous_lookups(self, server_names, server_to_deferred):
@@ -313,7 +317,7 @@ class Keyring(object):
                 if not verify_request.deferred.called:
                     verify_request.deferred.errback(err)

-        preserve_fn(do_iterations)().addErrback(on_err)
+        run_in_background(do_iterations).addErrback(on_err)

     @defer.inlineCallbacks
     def get_keys_from_store(self, server_name_and_key_ids):
@@ -329,8 +333,9 @@ class Keyring(object):
         """
         res = yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                preserve_fn(self.store.get_server_verify_keys)(
-                    server_name, key_ids
+                run_in_background(
+                    self.store.get_server_verify_keys,
+                    server_name, key_ids,
                 ).addCallback(lambda ks, server: (server, ks), server_name)
                 for server_name, key_ids in server_name_and_key_ids
             ],
@@ -352,13 +357,13 @@ class Keyring(object):
                 logger.exception(
                     "Unable to get key from %r: %s %s",
                     perspective_name,
-                    type(e).__name__, str(e.message),
+                    type(e).__name__, str(e),
                 )
                 defer.returnValue({})

         results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                preserve_fn(get_key)(p_name, p_keys)
+                run_in_background(get_key, p_name, p_keys)
                 for p_name, p_keys in self.perspective_servers.items()
             ],
             consumeErrors=True,
@@ -384,7 +389,7 @@ class Keyring(object):
                 logger.info(
                     "Unable to get key %r for %r directly: %s %s",
                     key_ids, server_name,
-                    type(e).__name__, str(e.message),
+                    type(e).__name__, str(e),
                 )

             if not keys:
@@ -398,7 +403,7 @@ class Keyring(object):
         results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                preserve_fn(get_key)(server_name, key_ids)
+                run_in_background(get_key, server_name, key_ids)
                 for server_name, key_ids in server_name_and_key_ids
             ],
             consumeErrors=True,
@@ -481,7 +486,8 @@ class Keyring(object):
         yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                preserve_fn(self.store_keys)(
+                run_in_background(
+                    self.store_keys,
                     server_name=server_name,
                     from_server=perspective_name,
                     verify_keys=response_keys,
@@ -539,7 +545,8 @@ class Keyring(object):
         yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                preserve_fn(self.store_keys)(
+                run_in_background(
+                    self.store_keys,
                     server_name=key_server_name,
                     from_server=server_name,
                     verify_keys=verify_keys,
@@ -615,7 +622,8 @@ class Keyring(object):
         yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                preserve_fn(self.store.store_server_keys_json)(
+                run_in_background(
+                    self.store.store_server_keys_json,
                     server_name=server_name,
                     key_id=key_id,
                     from_server=server_name,
@@ -716,7 +724,8 @@ class Keyring(object):
         # TODO(markjh): Store whether the keys have expired.
         return logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                preserve_fn(self.store.store_server_verify_key)(
+                run_in_background(
+                    self.store.store_server_verify_key,
                     server_name, server_name, key.time_added, key
                 )
                 for key_id, key in verify_keys.items()
@@ -734,7 +743,7 @@ def _handle_key_deferred(verify_request):
     except IOError as e:
         logger.warn(
             "Got IOError when downloading keys for %s: %s %s",
-            server_name, type(e).__name__, str(e.message),
+            server_name, type(e).__name__, str(e),
         )
         raise SynapseError(
             502,
@@ -744,7 +753,7 @@ def _handle_key_deferred(verify_request):
     except Exception as e:
         logger.exception(
             "Got Exception when downloading keys for %s: %s %s",
-            server_name, type(e).__name__, str(e.message),
+            server_name, type(e).__name__, str(e),
         )
         raise SynapseError(
             401,


@@ -47,14 +47,26 @@ class _EventInternalMetadata(object):

 def _event_dict_property(key):
+    # We want to be able to use hasattr with the event dict properties.
+    # However, (on python3) hasattr expects AttributeError to be raised. Hence,
+    # we need to transform the KeyError into an AttributeError
     def getter(self):
+        try:
             return self._event_dict[key]
+        except KeyError:
+            raise AttributeError(key)

     def setter(self, v):
+        try:
             self._event_dict[key] = v
+        except KeyError:
+            raise AttributeError(key)

     def delete(self):
+        try:
             del self._event_dict[key]
+        except KeyError:
+            raise AttributeError(key)

     return property(
         getter,

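The new comment deserves a concrete illustration: on Python 3, hasattr() treats only AttributeError as "attribute missing", so a property whose getter leaks KeyError would make hasattr raise instead of returning False. A minimal demonstration with a hypothetical class:

    class Event(object):
        def __init__(self):
            self._event_dict = {}

        @property
        def leaky(self):
            return self._event_dict["leaky"]  # KeyError escapes

        @property
        def safe(self):
            try:
                return self._event_dict["safe"]
            except KeyError:
                raise AttributeError("safe")

    e = Event()
    hasattr(e, "safe")   # False, as intended
    hasattr(e, "leaky")  # raises KeyError on Python 3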

@@ -14,7 +14,10 @@
 # limitations under the License.
 import logging

-from synapse.api.errors import SynapseError
+import six
+
+from synapse.api.constants import MAX_DEPTH
+from synapse.api.errors import SynapseError, Codes
 from synapse.crypto.event_signing import check_event_content_hash
 from synapse.events import FrozenEvent
 from synapse.events.utils import prune_event
@@ -190,11 +193,23 @@ def event_from_pdu_json(pdu_json, outlier=False):
         FrozenEvent

     Raises:
-        SynapseError: if the pdu is missing required fields
+        SynapseError: if the pdu is missing required fields or is otherwise
+            not a valid matrix event
     """
     # we could probably enforce a bunch of other fields here (room_id, sender,
     # origin, etc etc)
-    assert_params_in_request(pdu_json, ('event_id', 'type'))
+    assert_params_in_request(pdu_json, ('event_id', 'type', 'depth'))
+
+    depth = pdu_json['depth']
+    if not isinstance(depth, six.integer_types):
+        raise SynapseError(400, "Depth %r not an integer" % (depth, ),
+                           Codes.BAD_JSON)
+
+    if depth < 0:
+        raise SynapseError(400, "Depth too small", Codes.BAD_JSON)
+    elif depth > MAX_DEPTH:
+        raise SynapseError(400, "Depth too large", Codes.BAD_JSON)
+
     event = FrozenEvent(
         pdu_json
     )

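With these checks in place, a PDU whose depth is missing, non-integral, negative, or above MAX_DEPTH is rejected with a 400 before a FrozenEvent is ever built. A usage sketch, assuming the helper is importable from synapse.federation.federation_base as the hunk suggests:

    from synapse.api.errors import SynapseError
    from synapse.federation.federation_base import event_from_pdu_json

    pdu = {"event_id": "$x:example.com", "type": "m.room.message", "depth": 2 ** 63}
    try:
        event_from_pdu_json(pdu)
    except SynapseError as e:
        print(e.code)  # 400: depth exceeds MAX_DEPTH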

@@ -19,6 +19,8 @@
 import itertools
 import logging
 import random

+from six.moves import range
+
 from twisted.internet import defer

 from synapse.api.constants import Membership
@@ -33,7 +35,7 @@ from synapse.federation.federation_base import (
 import synapse.metrics
 from synapse.util import logcontext, unwrapFirstError
 from synapse.util.caches.expiringcache import ExpiringCache
-from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
+from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 from synapse.util.logutils import log_function
 from synapse.util.retryutils import NotRetryingDestination
@@ -394,7 +396,7 @@ class FederationClient(FederationBase):
             seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
             signed_events = seen_events.values()
         else:
-            seen_events = yield self.store.have_events(event_ids)
+            seen_events = yield self.store.have_seen_events(event_ids)
             signed_events = []
             failed_to_fetch = set()
@@ -413,11 +415,12 @@ class FederationClient(FederationBase):
         batch_size = 20
         missing_events = list(missing_events)
-        for i in xrange(0, len(missing_events), batch_size):
+        for i in range(0, len(missing_events), batch_size):
             batch = set(missing_events[i:i + batch_size])

             deferreds = [
-                preserve_fn(self.get_pdu)(
+                run_in_background(
+                    self.get_pdu,
                     destinations=random_server_list(),
                     event_id=e_id,
                 )


@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -30,9 +31,10 @@ import synapse.metrics
 from synapse.types import get_domain_from_id
 from synapse.util import async
 from synapse.util.caches.response_cache import ResponseCache
-from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
 from synapse.util.logutils import log_function

+from six import iteritems
+
 # when processing incoming transactions, we try to handle multiple rooms in
 # parallel, up to this limit.
 TRANSACTION_CONCURRENCY_LIMIT = 10
@@ -65,7 +67,7 @@ class FederationServer(FederationBase):

         # We cache responses to state queries, as they take a while and often
         # come in waves.
-        self._state_resp_cache = ResponseCache(hs, timeout_ms=30000)
+        self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)

     @defer.inlineCallbacks
     @log_function
@@ -212,16 +214,17 @@ class FederationServer(FederationBase):
         if not in_room:
             raise AuthError(403, "Host not in room.")

-        result = self._state_resp_cache.get((room_id, event_id))
-        if not result:
-            with (yield self._server_linearizer.queue((origin, room_id))):
-                d = self._state_resp_cache.set(
-                    (room_id, event_id),
-                    preserve_fn(self._on_context_state_request_compute)(room_id, event_id)
-                )
-                resp = yield make_deferred_yieldable(d)
-        else:
-            resp = yield make_deferred_yieldable(result)
+        # we grab the linearizer to protect ourselves from servers which hammer
+        # us. In theory we might already have the response to this query
+        # in the cache so we could return it without waiting for the linearizer
+        # - but that's non-trivial to get right, and anyway somewhat defeats
+        # the point of the linearizer.
+        with (yield self._server_linearizer.queue((origin, room_id))):
+            resp = yield self._state_resp_cache.wrap(
+                (room_id, event_id),
+                self._on_context_state_request_compute,
+                room_id, event_id,
+            )

         defer.returnValue((200, resp))
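The linearizer named in the new comment serialises handling per key, so repeated state queries from one origin for one room queue up instead of racing. The usage pattern, sketched with the Linearizer from synapse.util.async (as imported in this file) and a hypothetical worker function:

    from twisted.internet import defer

    from synapse.util.async import Linearizer

    linearizer = Linearizer(name="state_requests")

    @defer.inlineCallbacks
    def handle(origin, room_id):
        # only one body runs at a time for a given (origin, room_id)
        with (yield linearizer.queue((origin, room_id))):
            yield compute_state_response()  # hypothetical expensive work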
@@ -425,9 +428,9 @@ class FederationServer(FederationBase):
             "Claimed one-time-keys: %s",
             ",".join((
                 "%s for %s:%s" % (key_id, user_id, device_id)
-                for user_id, user_keys in json_result.iteritems()
-                for device_id, device_keys in user_keys.iteritems()
-                for key_id, _ in device_keys.iteritems()
+                for user_id, user_keys in iteritems(json_result)
+                for device_id, device_keys in iteritems(user_keys)
+                for key_id, _ in iteritems(device_keys)
             )),
         )
@@ -494,12 +497,32 @@ class FederationServer(FederationBase):
     def _handle_received_pdu(self, origin, pdu):
         """ Process a PDU received in a federation /send/ transaction.

+        If the event is invalid, then this method throws a FederationError.
+        (The error will then be logged and sent back to the sender (which
+        probably won't do anything with it), and other events in the
+        transaction will be processed as normal).
+
+        It is likely that we'll then receive other events which refer to
+        this rejected_event in their prev_events, etc. When that happens,
+        we'll attempt to fetch the rejected event again, which will presumably
+        fail, so those second-generation events will also get rejected.
+
+        Eventually, we get to the point where there are more than 10 events
+        between any new events and the original rejected event. Since we
+        only try to backfill 10 events deep on received pdu, we then accept the
+        new event, possibly introducing a discontinuity in the DAG, with new
+        forward extremities, so normal service is approximately returned,
+        until we try to backfill across the discontinuity.
+
         Args:
             origin (str): server which sent the pdu
             pdu (FrozenEvent): received pdu

         Returns (Deferred): completes with None
-        Raises: FederationError if the signatures / hash do not match
+
+        Raises: FederationError if the signatures / hash do not match, or
+            if the event was unacceptable for any other reason (eg, too large,
+            too many prev_events, couldn't find the prev_events)
         """

         # check that it's actually being sent from a valid destination to
         # workaround bug #1753 in 0.18.5 and 0.18.6


@@ -40,6 +40,8 @@ from collections import namedtuple
 import logging

+from six import itervalues, iteritems
+
 logger = logging.getLogger(__name__)

@@ -122,7 +124,7 @@ class FederationRemoteSendQueue(object):
             user_ids = set(
                 user_id
-                for uids in self.presence_changed.itervalues()
+                for uids in itervalues(self.presence_changed)
                 for user_id in uids
             )

@@ -276,7 +278,7 @@ class FederationRemoteSendQueue(object):
             # stream position.
             keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]}

-            for ((destination, edu_key), pos) in keyed_edus.iteritems():
+            for ((destination, edu_key), pos) in iteritems(keyed_edus):
                 rows.append((pos, KeyedEduRow(
                     key=edu_key,
                     edu=self.keyed_edu[(destination, edu_key)],
@@ -309,7 +311,7 @@ class FederationRemoteSendQueue(object):
             j = keys.bisect_right(to_token) + 1
             device_messages = {self.device_messages[k]: k for k in keys[i:j]}

-            for (destination, pos) in device_messages.iteritems():
+            for (destination, pos) in iteritems(device_messages):
                 rows.append((pos, DeviceRow(
                     destination=destination,
                 )))
@@ -528,19 +530,19 @@ def process_rows_for_federation(transaction_queue, rows):
     if buff.presence:
         transaction_queue.send_presence(buff.presence)

-    for destination, edu_map in buff.keyed_edus.iteritems():
+    for destination, edu_map in iteritems(buff.keyed_edus):
         for key, edu in edu_map.items():
             transaction_queue.send_edu(
                 edu.destination, edu.edu_type, edu.content, key=key,
             )

-    for destination, edu_list in buff.edus.iteritems():
+    for destination, edu_list in iteritems(buff.edus):
         for edu in edu_list:
             transaction_queue.send_edu(
                 edu.destination, edu.edu_type, edu.content, key=None,
             )

-    for destination, failure_list in buff.failures.iteritems():
+    for destination, failure_list in iteritems(buff.failures):
         for failure in failure_list:
             transaction_queue.send_failure(destination, failure)


@@ -169,7 +169,7 @@ class TransactionQueue(object):
             while True:
                 last_token = yield self.store.get_federation_out_pos("events")
                 next_token, events = yield self.store.get_all_new_events_stream(
-                    last_token, self._last_poked_id, limit=20,
+                    last_token, self._last_poked_id, limit=100,
                 )

                 logger.debug("Handling %s -> %s", last_token, next_token)
@@ -177,13 +177,15 @@ class TransactionQueue(object):
                 if not events and next_token >= self._last_poked_id:
                     break

-                for event in events:
+                @defer.inlineCallbacks
+                def handle_event(event):
                     # Only send events for this server.
                     send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of()
                     is_mine = self.is_mine_id(event.event_id)
                     if not is_mine and send_on_behalf_of is None:
-                        continue
+                        return

+                    try:
                         # Get the state from before the event.
                         # We need to make sure that this is the state from before
                         # the event and not from after it.
@@ -195,6 +197,13 @@ class TransactionQueue(object):
                                 prev_id for prev_id, _ in event.prev_events
                             ],
                         )
+                    except Exception:
+                        logger.exception(
+                            "Failed to calculate hosts in room for event: %s",
+                            event.event_id,
+                        )
+                        return

                     destinations = set(destinations)

                     if send_on_behalf_of is not None:
@@ -207,12 +216,44 @@ class TransactionQueue(object):
                     self._send_pdu(event, destinations)

-                events_processed_counter.inc_by(len(events))
+                @defer.inlineCallbacks
+                def handle_room_events(events):
+                    for event in events:
+                        yield handle_event(event)
+
+                events_by_room = {}
+                for event in events:
+                    events_by_room.setdefault(event.room_id, []).append(event)
+
+                yield logcontext.make_deferred_yieldable(defer.gatherResults(
+                    [
+                        logcontext.run_in_background(handle_room_events, evs)
+                        for evs in events_by_room.itervalues()
+                    ],
+                    consumeErrors=True
+                ))
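The restructuring above groups outgoing events by room and fans the groups out with run_in_background, so different rooms are handled in parallel while events within one room keep their order. Reduced to its essentials (toy handler names):

    from twisted.internet import defer

    from synapse.util import logcontext

    @defer.inlineCallbacks
    def handle_in_order(items):
        for item in items:          # sequential within one room
            yield handle_one(item)  # hypothetical per-event handler

    @defer.inlineCallbacks
    def handle_all(events):
        by_room = {}
        for event in events:
            by_room.setdefault(event.room_id, []).append(event)
        # one background task per room, all awaited together
        yield logcontext.make_deferred_yieldable(defer.gatherResults(
            [logcontext.run_in_background(handle_in_order, evs)
             for evs in by_room.values()],
            consumeErrors=True,
        ))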
                 yield self.store.update_federation_out_pos(
                     "events", next_token
                 )

+                if events:
+                    now = self.clock.time_msec()
+                    ts = yield self.store.get_received_ts(events[-1].event_id)
+
+                    synapse.metrics.event_processing_lag.set(
+                        now - ts, "federation_sender",
+                    )
+                    synapse.metrics.event_processing_last_ts.set(
+                        ts, "federation_sender",
+                    )
+
+                    events_processed_counter.inc_by(len(events))
+
+                    synapse.metrics.event_processing_positions.set(
+                        next_token, "federation_sender",
+                    )
+
         finally:
             self._is_processing = False
@@ -282,6 +323,8 @@ class TransactionQueue(object):
                     break
                 yield self._process_presence_inner(states_map.values())
+        except Exception:
+            logger.exception("Error sending presence states to servers")
         finally:
             self._processing_pending_presence = False


@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -20,6 +21,7 @@ from synapse.api.urls import FEDERATION_PREFIX as PREFIX
 from synapse.util.logutils import log_function

 import logging
+import urllib

 logger = logging.getLogger(__name__)

@@ -49,7 +51,7 @@ class TransportLayerClient(object):
         logger.debug("get_room_state dest=%s, room=%s",
                      destination, room_id)

-        path = PREFIX + "/state/%s/" % room_id
+        path = _create_path(PREFIX, "/state/%s/", room_id)
         return self.client.get_json(
             destination, path=path, args={"event_id": event_id},
         )
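The _create_path helper itself is outside this hunk; judging from the call sites and the new `import urllib`, it %-interpolates the template with each argument URL-escaped first, something like this sketch (an assumption, the real helper may differ in detail):

    import urllib

    def _create_path(prefix, path, *args):
        """Build a federation path, URL-escaping each interpolated argument."""
        return prefix + path % tuple(urllib.quote(arg, "") for arg in args)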
@ -71,7 +73,7 @@ class TransportLayerClient(object):
logger.debug("get_room_state_ids dest=%s, room=%s", logger.debug("get_room_state_ids dest=%s, room=%s",
destination, room_id) destination, room_id)
path = PREFIX + "/state_ids/%s/" % room_id path = _create_path(PREFIX, "/state_ids/%s/", room_id)
return self.client.get_json( return self.client.get_json(
destination, path=path, args={"event_id": event_id}, destination, path=path, args={"event_id": event_id},
) )
@ -93,7 +95,7 @@ class TransportLayerClient(object):
logger.debug("get_pdu dest=%s, event_id=%s", logger.debug("get_pdu dest=%s, event_id=%s",
destination, event_id) destination, event_id)
path = PREFIX + "/event/%s/" % (event_id, ) path = _create_path(PREFIX, "/event/%s/", event_id)
return self.client.get_json(destination, path=path, timeout=timeout) return self.client.get_json(destination, path=path, timeout=timeout)
@log_function @log_function
@ -119,7 +121,7 @@ class TransportLayerClient(object):
# TODO: raise? # TODO: raise?
return return
path = PREFIX + "/backfill/%s/" % (room_id,) path = _create_path(PREFIX, "/backfill/%s/", room_id)
args = { args = {
"v": event_tuples, "v": event_tuples,
@ -157,9 +159,11 @@ class TransportLayerClient(object):
# generated by the json_data_callback. # generated by the json_data_callback.
json_data = transaction.get_dict() json_data = transaction.get_dict()
path = _create_path(PREFIX, "/send/%s/", transaction.transaction_id)
response = yield self.client.put_json( response = yield self.client.put_json(
transaction.destination, transaction.destination,
path=PREFIX + "/send/%s/" % transaction.transaction_id, path=path,
data=json_data, data=json_data,
json_data_callback=json_data_callback, json_data_callback=json_data_callback,
long_retries=True, long_retries=True,
@ -177,7 +181,7 @@ class TransportLayerClient(object):
@log_function @log_function
def make_query(self, destination, query_type, args, retry_on_dns_fail, def make_query(self, destination, query_type, args, retry_on_dns_fail,
ignore_backoff=False): ignore_backoff=False):
path = PREFIX + "/query/%s" % query_type path = _create_path(PREFIX, "/query/%s", query_type)
content = yield self.client.get_json( content = yield self.client.get_json(
destination=destination, destination=destination,
@ -222,7 +226,7 @@ class TransportLayerClient(object):
"make_membership_event called with membership='%s', must be one of %s" % "make_membership_event called with membership='%s', must be one of %s" %
(membership, ",".join(valid_memberships)) (membership, ",".join(valid_memberships))
) )
path = PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id) path = _create_path(PREFIX, "/make_%s/%s/%s", membership, room_id, user_id)
ignore_backoff = False ignore_backoff = False
retry_on_dns_fail = False retry_on_dns_fail = False
@ -248,7 +252,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
def send_join(self, destination, room_id, event_id, content): def send_join(self, destination, room_id, event_id, content):
path = PREFIX + "/send_join/%s/%s" % (room_id, event_id) path = _create_path(PREFIX, "/send_join/%s/%s", room_id, event_id)
response = yield self.client.put_json( response = yield self.client.put_json(
destination=destination, destination=destination,
@ -261,7 +265,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
def send_leave(self, destination, room_id, event_id, content): def send_leave(self, destination, room_id, event_id, content):
path = PREFIX + "/send_leave/%s/%s" % (room_id, event_id) path = _create_path(PREFIX, "/send_leave/%s/%s", room_id, event_id)
response = yield self.client.put_json( response = yield self.client.put_json(
destination=destination, destination=destination,
@ -280,7 +284,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
def send_invite(self, destination, room_id, event_id, content): def send_invite(self, destination, room_id, event_id, content):
path = PREFIX + "/invite/%s/%s" % (room_id, event_id) path = _create_path(PREFIX, "/invite/%s/%s", room_id, event_id)
response = yield self.client.put_json( response = yield self.client.put_json(
destination=destination, destination=destination,
@ -322,7 +326,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
def exchange_third_party_invite(self, destination, room_id, event_dict): def exchange_third_party_invite(self, destination, room_id, event_dict):
path = PREFIX + "/exchange_third_party_invite/%s" % (room_id,) path = _create_path(PREFIX, "/exchange_third_party_invite/%s", room_id,)
response = yield self.client.put_json( response = yield self.client.put_json(
destination=destination, destination=destination,
@ -335,7 +339,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
def get_event_auth(self, destination, room_id, event_id): def get_event_auth(self, destination, room_id, event_id):
path = PREFIX + "/event_auth/%s/%s" % (room_id, event_id) path = _create_path(PREFIX, "/event_auth/%s/%s", room_id, event_id)
content = yield self.client.get_json( content = yield self.client.get_json(
destination=destination, destination=destination,
@ -347,7 +351,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
def send_query_auth(self, destination, room_id, event_id, content): def send_query_auth(self, destination, room_id, event_id, content):
path = PREFIX + "/query_auth/%s/%s" % (room_id, event_id) path = _create_path(PREFIX, "/query_auth/%s/%s", room_id, event_id)
content = yield self.client.post_json( content = yield self.client.post_json(
destination=destination, destination=destination,
@ -409,7 +413,7 @@ class TransportLayerClient(object):
Returns: Returns:
A dict containg the device keys. A dict containg the device keys.
""" """
path = PREFIX + "/user/devices/" + user_id path = _create_path(PREFIX, "/user/devices/%s", user_id)
content = yield self.client.get_json( content = yield self.client.get_json(
destination=destination, destination=destination,
@ -459,7 +463,7 @@ class TransportLayerClient(object):
@log_function @log_function
def get_missing_events(self, destination, room_id, earliest_events, def get_missing_events(self, destination, room_id, earliest_events,
latest_events, limit, min_depth, timeout): latest_events, limit, min_depth, timeout):
path = PREFIX + "/get_missing_events/%s" % (room_id,) path = _create_path(PREFIX, "/get_missing_events/%s", room_id,)
content = yield self.client.post_json( content = yield self.client.post_json(
destination=destination, destination=destination,
@ -479,7 +483,7 @@ class TransportLayerClient(object):
def get_group_profile(self, destination, group_id, requester_user_id): def get_group_profile(self, destination, group_id, requester_user_id):
"""Get a group profile """Get a group profile
""" """
path = PREFIX + "/groups/%s/profile" % (group_id,) path = _create_path(PREFIX, "/groups/%s/profile", group_id,)
return self.client.get_json( return self.client.get_json(
destination=destination, destination=destination,
@ -498,7 +502,7 @@ class TransportLayerClient(object):
requester_user_id (str) requester_user_id (str)
content (dict): The new profile of the group content (dict): The new profile of the group
""" """
path = PREFIX + "/groups/%s/profile" % (group_id,) path = _create_path(PREFIX, "/groups/%s/profile", group_id,)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@ -512,7 +516,7 @@ class TransportLayerClient(object):
def get_group_summary(self, destination, group_id, requester_user_id): def get_group_summary(self, destination, group_id, requester_user_id):
"""Get a group summary """Get a group summary
""" """
path = PREFIX + "/groups/%s/summary" % (group_id,) path = _create_path(PREFIX, "/groups/%s/summary", group_id,)
return self.client.get_json( return self.client.get_json(
destination=destination, destination=destination,
@ -525,7 +529,7 @@ class TransportLayerClient(object):
def get_rooms_in_group(self, destination, group_id, requester_user_id): def get_rooms_in_group(self, destination, group_id, requester_user_id):
"""Get all rooms in a group """Get all rooms in a group
""" """
path = PREFIX + "/groups/%s/rooms" % (group_id,) path = _create_path(PREFIX, "/groups/%s/rooms", group_id,)
return self.client.get_json( return self.client.get_json(
destination=destination, destination=destination,
@ -538,7 +542,7 @@ class TransportLayerClient(object):
content): content):
"""Add a room to a group """Add a room to a group
""" """
path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,) path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@ -552,7 +556,10 @@ class TransportLayerClient(object):
config_key, content): config_key, content):
"""Update room in group """Update room in group
""" """
path = PREFIX + "/groups/%s/room/%s/config/%s" % (group_id, room_id, config_key,) path = _create_path(
PREFIX, "/groups/%s/room/%s/config/%s",
group_id, room_id, config_key,
)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@ -565,7 +572,7 @@ class TransportLayerClient(object):
def remove_room_from_group(self, destination, group_id, requester_user_id, room_id): def remove_room_from_group(self, destination, group_id, requester_user_id, room_id):
"""Remove a room from a group """Remove a room from a group
""" """
path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,) path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)
return self.client.delete_json( return self.client.delete_json(
destination=destination, destination=destination,
@ -578,7 +585,7 @@ class TransportLayerClient(object):
def get_users_in_group(self, destination, group_id, requester_user_id): def get_users_in_group(self, destination, group_id, requester_user_id):
"""Get users in a group """Get users in a group
""" """
path = PREFIX + "/groups/%s/users" % (group_id,) path = _create_path(PREFIX, "/groups/%s/users", group_id,)
return self.client.get_json( return self.client.get_json(
destination=destination, destination=destination,
@ -591,7 +598,7 @@ class TransportLayerClient(object):
def get_invited_users_in_group(self, destination, group_id, requester_user_id): def get_invited_users_in_group(self, destination, group_id, requester_user_id):
"""Get users that have been invited to a group """Get users that have been invited to a group
""" """
path = PREFIX + "/groups/%s/invited_users" % (group_id,) path = _create_path(PREFIX, "/groups/%s/invited_users", group_id,)
return self.client.get_json( return self.client.get_json(
destination=destination, destination=destination,
@ -604,7 +611,23 @@ class TransportLayerClient(object):
def accept_group_invite(self, destination, group_id, user_id, content): def accept_group_invite(self, destination, group_id, user_id, content):
"""Accept a group invite """Accept a group invite
""" """
path = PREFIX + "/groups/%s/users/%s/accept_invite" % (group_id, user_id) path = _create_path(
PREFIX, "/groups/%s/users/%s/accept_invite",
group_id, user_id,
)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def join_group(self, destination, group_id, user_id, content):
"""Attempts to join a group
"""
path = _create_path(PREFIX, "/groups/%s/users/%s/join", group_id, user_id)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@ -617,7 +640,7 @@ class TransportLayerClient(object):
def invite_to_group(self, destination, group_id, user_id, requester_user_id, content): def invite_to_group(self, destination, group_id, user_id, requester_user_id, content):
"""Invite a user to a group """Invite a user to a group
""" """
path = PREFIX + "/groups/%s/users/%s/invite" % (group_id, user_id) path = _create_path(PREFIX, "/groups/%s/users/%s/invite", group_id, user_id)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@ -633,7 +656,7 @@ class TransportLayerClient(object):
invited. invited.
""" """
path = PREFIX + "/groups/local/%s/users/%s/invite" % (group_id, user_id) path = _create_path(PREFIX, "/groups/local/%s/users/%s/invite", group_id, user_id)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@ -647,7 +670,7 @@ class TransportLayerClient(object):
user_id, content): user_id, content):
"""Remove a user fron a group """Remove a user fron a group
""" """
path = PREFIX + "/groups/%s/users/%s/remove" % (group_id, user_id) path = _create_path(PREFIX, "/groups/%s/users/%s/remove", group_id, user_id)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@ -664,7 +687,7 @@ class TransportLayerClient(object):
kicked from the group. kicked from the group.
""" """
path = PREFIX + "/groups/local/%s/users/%s/remove" % (group_id, user_id) path = _create_path(PREFIX, "/groups/local/%s/users/%s/remove", group_id, user_id)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@ -679,7 +702,7 @@ class TransportLayerClient(object):
the attestations the attestations
""" """
path = PREFIX + "/groups/%s/renew_attestation/%s" % (group_id, user_id) path = _create_path(PREFIX, "/groups/%s/renew_attestation/%s", group_id, user_id)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@ -694,11 +717,12 @@ class TransportLayerClient(object):
"""Update a room entry in a group summary """Update a room entry in a group summary
""" """
if category_id: if category_id:
path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % ( path = _create_path(
PREFIX, "/groups/%s/summary/categories/%s/rooms/%s",
group_id, category_id, room_id, group_id, category_id, room_id,
) )
else: else:
path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,) path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)
return self.client.post_json( return self.client.post_json(
destination=destination, destination=destination,
@@ -714,11 +738,12 @@ class TransportLayerClient(object):
         """Delete a room entry in a group summary
         """
         if category_id:
-            path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % (
+            path = _create_path(
+                PREFIX, "/groups/%s/summary/categories/%s/rooms/%s",
                 group_id, category_id, room_id,
             )
         else:
-            path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,)
+            path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)

         return self.client.delete_json(
             destination=destination,
@@ -731,7 +756,7 @@ class TransportLayerClient(object):
     def get_group_categories(self, destination, group_id, requester_user_id):
         """Get all categories in a group
         """
-        path = PREFIX + "/groups/%s/categories" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/categories", group_id,)

         return self.client.get_json(
             destination=destination,
@@ -744,7 +769,7 @@ class TransportLayerClient(object):
     def get_group_category(self, destination, group_id, requester_user_id, category_id):
         """Get category info in a group
         """
-        path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
+        path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)

         return self.client.get_json(
             destination=destination,
@@ -758,7 +783,7 @@ class TransportLayerClient(object):
                               content):
         """Update a category in a group
         """
-        path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
+        path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)

         return self.client.post_json(
             destination=destination,
@@ -773,7 +798,7 @@ class TransportLayerClient(object):
                               category_id):
         """Delete a category in a group
         """
-        path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
+        path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)

         return self.client.delete_json(
             destination=destination,
@@ -786,7 +811,7 @@ class TransportLayerClient(object):
     def get_group_roles(self, destination, group_id, requester_user_id):
         """Get all roles in a group
         """
-        path = PREFIX + "/groups/%s/roles" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/roles", group_id,)

         return self.client.get_json(
             destination=destination,
@@ -799,7 +824,7 @@ class TransportLayerClient(object):
     def get_group_role(self, destination, group_id, requester_user_id, role_id):
         """Get a roles info
         """
-        path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
+        path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)

         return self.client.get_json(
             destination=destination,
@@ -813,7 +838,7 @@ class TransportLayerClient(object):
                           content):
         """Update a role in a group
         """
-        path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
+        path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)

         return self.client.post_json(
             destination=destination,
@@ -827,7 +852,7 @@ class TransportLayerClient(object):
     def delete_group_role(self, destination, group_id, requester_user_id, role_id):
         """Delete a role in a group
         """
-        path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
+        path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)

         return self.client.delete_json(
             destination=destination,
@@ -842,11 +867,12 @@ class TransportLayerClient(object):
         """Update a users entry in a group
         """
         if role_id:
-            path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
+            path = _create_path(
+                PREFIX, "/groups/%s/summary/roles/%s/users/%s",
                 group_id, role_id, user_id,
             )
         else:
-            path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
+            path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)

         return self.client.post_json(
             destination=destination,
@@ -856,17 +882,33 @@ class TransportLayerClient(object):
             ignore_backoff=True,
         )

+    @log_function
+    def set_group_join_policy(self, destination, group_id, requester_user_id,
+                              content):
+        """Sets the join policy for a group
+        """
+        path = _create_path(PREFIX, "/groups/%s/settings/m.join_policy", group_id,)
+
+        return self.client.put_json(
+            destination=destination,
+            path=path,
+            args={"requester_user_id": requester_user_id},
+            data=content,
+            ignore_backoff=True,
+        )
+
     @log_function
     def delete_group_summary_user(self, destination, group_id, requester_user_id,
                                   user_id, role_id):
         """Delete a users entry in a group
         """
         if role_id:
-            path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
+            path = _create_path(
+                PREFIX, "/groups/%s/summary/roles/%s/users/%s",
                 group_id, role_id, user_id,
             )
         else:
-            path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
+            path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)

         return self.client.delete_json(
             destination=destination,
@@ -889,3 +931,22 @@ class TransportLayerClient(object):
             data=content,
             ignore_backoff=True,
         )
+
+
+def _create_path(prefix, path, *args):
+    """Creates a path from the prefix, path template and args. Ensures that
+    all args are url encoded.
+
+    Example:
+
+        _create_path(PREFIX, "/event/%s/", event_id)
+
+    Args:
+        prefix (str)
+        path (str): String template for the path
+        args: ([str]): Args to insert into path. Each arg will be url encoded
+
+    Returns:
+        str
+    """
+    return prefix + path % tuple(urllib.quote(arg, "") for arg in args)
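The helper above is the crux of this file's change: every user-supplied path component is now percent-encoded before being interpolated into the request path, so a crafted group or user id cannot inject extra path segments. A minimal sketch of the effect, assuming PREFIX carries the federation prefix "/_matrix/federation/v1" (Python 2, matching the codebase):

import urllib

PREFIX = "/_matrix/federation/v1"  # assumed value of the module constant

def _create_path(prefix, path, *args):
    # quote with safe="" so that "/" and ":" inside ids get encoded too
    return prefix + path % tuple(urllib.quote(arg, "") for arg in args)

print _create_path(PREFIX, "/groups/%s/users/%s/join",
                   "+group:example.com", "@bob/../x:example.com")
# /_matrix/federation/v1/groups/%2Bgroup%3Aexample.com/users/%40bob%2F..%2Fx%3Aexample.com/join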

View File

@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -24,7 +25,7 @@ from synapse.http.servlet import (
 )
 from synapse.util.ratelimitutils import FederationRateLimiter
 from synapse.util.versionstring import get_version_string
-from synapse.util.logcontext import preserve_fn
+from synapse.util.logcontext import run_in_background
 from synapse.types import ThirdPartyInstanceID, get_domain_from_id

 import functools
@@ -93,12 +94,6 @@ class Authenticator(object):
             "signatures": {},
         }

-        if (
-            self.federation_domain_whitelist is not None and
-            self.server_name not in self.federation_domain_whitelist
-        ):
-            raise FederationDeniedError(self.server_name)
-
         if content is not None:
             json_request["content"] = content

@@ -137,6 +132,12 @@ class Authenticator(object):
             json_request["origin"] = origin
             json_request["signatures"].setdefault(origin, {})[key] = sig

+        if (
+            self.federation_domain_whitelist is not None and
+            origin not in self.federation_domain_whitelist
+        ):
+            raise FederationDeniedError(origin)
+
         if not json_request["signatures"]:
             raise NoAuthenticationError(
                 401, "Missing Authorization headers", Codes.UNAUTHORIZED,
@@ -151,11 +152,18 @@ class Authenticator(object):
         # alive
         retry_timings = yield self.store.get_destination_retry_timings(origin)
         if retry_timings and retry_timings["retry_last_ts"]:
-            logger.info("Marking origin %r as up", origin)
-            preserve_fn(self.store.set_destination_retry_timings)(origin, 0, 0)
+            run_in_background(self._reset_retry_timings, origin)

         defer.returnValue(origin)

+    @defer.inlineCallbacks
+    def _reset_retry_timings(self, origin):
+        try:
+            logger.info("Marking origin %r as up", origin)
+            yield self.store.set_destination_retry_timings(origin, 0, 0)
+        except Exception:
+            logger.exception("Error resetting retry timings on %s", origin)
+

 class BaseFederationServlet(object):
     REQUIRE_AUTH = True
@@ -802,6 +810,23 @@ class FederationGroupsAcceptInviteServlet(BaseFederationServlet):
         defer.returnValue((200, new_content))


+class FederationGroupsJoinServlet(BaseFederationServlet):
+    """Attempt to join a group
+    """
+    PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/join$"
+
+    @defer.inlineCallbacks
+    def on_POST(self, origin, content, query, group_id, user_id):
+        if get_domain_from_id(user_id) != origin:
+            raise SynapseError(403, "user_id doesn't match origin")
+
+        new_content = yield self.handler.join_group(
+            group_id, user_id, content,
+        )
+
+        defer.returnValue((200, new_content))
+
+
 class FederationGroupsRemoveUserServlet(BaseFederationServlet):
     """Leave or kick a user from the group
     """
@@ -1124,6 +1149,24 @@ class FederationGroupsBulkPublicisedServlet(BaseFederationServlet):
         defer.returnValue((200, resp))


+class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
+    """Sets whether a group is joinable without an invite or knock
+    """
+    PATH = "/groups/(?P<group_id>[^/]*)/settings/m.join_policy$"
+
+    @defer.inlineCallbacks
+    def on_PUT(self, origin, content, query, group_id):
+        requester_user_id = parse_string_from_args(query, "requester_user_id")
+        if get_domain_from_id(requester_user_id) != origin:
+            raise SynapseError(403, "requester_user_id doesn't match origin")
+
+        new_content = yield self.handler.set_group_join_policy(
+            group_id, requester_user_id, content
+        )
+
+        defer.returnValue((200, new_content))
+
+
 FEDERATION_SERVLET_CLASSES = (
     FederationSendServlet,
     FederationPullServlet,
@@ -1163,6 +1206,7 @@ GROUP_SERVER_SERVLET_CLASSES = (
     FederationGroupsInvitedUsersServlet,
     FederationGroupsInviteServlet,
     FederationGroupsAcceptInviteServlet,
+    FederationGroupsJoinServlet,
     FederationGroupsRemoveUserServlet,
     FederationGroupsSummaryRoomsServlet,
     FederationGroupsCategoriesServlet,
@@ -1172,6 +1216,7 @@ GROUP_SERVER_SERVLET_CLASSES = (
     FederationGroupsSummaryUsersServlet,
     FederationGroupsAddRoomsServlet,
     FederationGroupsAddRoomsConfigServlet,
+    FederationGroupsSettingJoinPolicyServlet,
 )
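The preserve_fn to run_in_background rewrite recurs through the rest of this merge. The two are broadly equivalent fire-and-forget idioms; run_in_background folds the wrap-and-call into one step. A hedged sketch with a toy coroutine (both helpers live in synapse.util.logcontext):

from twisted.internet import defer

from synapse.util.logcontext import preserve_fn, run_in_background

@defer.inlineCallbacks
def ping(destination):
    # stand-in for a storage or federation call
    yield defer.succeed(destination)

preserve_fn(ping)("example.com")        # old style: wrap the function, then call it
run_in_background(ping, "example.com")  # new style: a single call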

View File

@@ -42,7 +42,7 @@ from twisted.internet import defer

 from synapse.api.errors import SynapseError
 from synapse.types import get_domain_from_id
-from synapse.util.logcontext import preserve_fn
+from synapse.util.logcontext import run_in_background

 from signedjson.sign import sign_json

@@ -165,6 +165,7 @@ class GroupAttestionRenewer(object):
         @defer.inlineCallbacks
         def _renew_attestation(group_id, user_id):
+            try:
                 if not self.is_mine_id(group_id):
                     destination = get_domain_from_id(group_id)
                 elif not self.is_mine_id(user_id):
@@ -187,9 +188,12 @@ class GroupAttestionRenewer(object):
                 yield self.store.update_attestation_renewal(
                     group_id, user_id, attestation
                 )
+            except Exception:
+                logger.exception("Error renewing attestation of %r in %r",
+                                 user_id, group_id)

         for row in rows:
             group_id = row["group_id"]
             user_id = row["user_id"]
-            preserve_fn(_renew_attestation)(group_id, user_id)
+            run_in_background(_renew_attestation, group_id, user_id)

View File

@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -205,6 +206,28 @@ class GroupsServerHandler(object):

         defer.returnValue({})

+    @defer.inlineCallbacks
+    def set_group_join_policy(self, group_id, requester_user_id, content):
+        """Sets the group join policy.
+
+        Currently supported policies are:
+         - "invite": an invite must be received and accepted in order to join.
+         - "open": anyone can join.
+        """
+        yield self.check_group_is_ours(
+            group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
+        )
+
+        join_policy = _parse_join_policy_from_contents(content)
+        if join_policy is None:
+            raise SynapseError(
+                400, "No value specified for 'm.join_policy'"
+            )
+
+        yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
+
+        defer.returnValue({})
+
     @defer.inlineCallbacks
     def get_group_categories(self, group_id, requester_user_id):
         """Get all categories in a group (as seen by user)
@@ -381,9 +404,16 @@ class GroupsServerHandler(object):

         yield self.check_group_is_ours(group_id, requester_user_id)

-        group_description = yield self.store.get_group(group_id)
+        group = yield self.store.get_group(group_id)
+
+        if group:
+            cols = [
+                "name", "short_description", "long_description",
+                "avatar_url", "is_public",
+            ]
+            group_description = {key: group[key] for key in cols}
+            group_description["is_openly_joinable"] = group["join_policy"] == "open"

         if group_description:
             defer.returnValue(group_description)
         else:
             raise SynapseError(404, "Unknown group")
@@ -654,6 +684,40 @@ class GroupsServerHandler(object):
         else:
             raise SynapseError(502, "Unknown state returned by HS")

+    @defer.inlineCallbacks
+    def _add_user(self, group_id, user_id, content):
+        """Add a user to a group based on a content dict.
+
+        See accept_invite, join_group.
+        """
+        if not self.hs.is_mine_id(user_id):
+            local_attestation = self.attestations.create_attestation(
+                group_id, user_id,
+            )
+
+            remote_attestation = content["attestation"]
+
+            yield self.attestations.verify_attestation(
+                remote_attestation,
+                user_id=user_id,
+                group_id=group_id,
+            )
+        else:
+            local_attestation = None
+            remote_attestation = None
+
+        is_public = _parse_visibility_from_contents(content)
+
+        yield self.store.add_user_to_group(
+            group_id, user_id,
+            is_admin=False,
+            is_public=is_public,
+            local_attestation=local_attestation,
+            remote_attestation=remote_attestation,
+        )
+
+        defer.returnValue(local_attestation)
+
     @defer.inlineCallbacks
     def accept_invite(self, group_id, requester_user_id, content):
         """User tries to accept an invite to the group.
@@ -670,30 +734,27 @@ class GroupsServerHandler(object):
         if not is_invited:
             raise SynapseError(403, "User not invited to group")

-        if not self.hs.is_mine_id(requester_user_id):
-            local_attestation = self.attestations.create_attestation(
-                group_id, requester_user_id,
-            )
-            remote_attestation = content["attestation"]
-
-            yield self.attestations.verify_attestation(
-                remote_attestation,
-                user_id=requester_user_id,
-                group_id=group_id,
-            )
-        else:
-            local_attestation = None
-            remote_attestation = None
-
-        is_public = _parse_visibility_from_contents(content)
-
-        yield self.store.add_user_to_group(
-            group_id, requester_user_id,
-            is_admin=False,
-            is_public=is_public,
-            local_attestation=local_attestation,
-            remote_attestation=remote_attestation,
-        )
+        local_attestation = yield self._add_user(group_id, requester_user_id, content)
+
+        defer.returnValue({
+            "state": "join",
+            "attestation": local_attestation,
+        })
+
+    @defer.inlineCallbacks
+    def join_group(self, group_id, requester_user_id, content):
+        """User tries to join the group.
+
+        This will error if the group requires an invite/knock to join
+        """
+
+        group_info = yield self.check_group_is_ours(
+            group_id, requester_user_id, and_exists=True
+        )
+        if group_info['join_policy'] != "open":
+            raise SynapseError(403, "Group is not publicly joinable")
+
+        local_attestation = yield self._add_user(group_id, requester_user_id, content)

         defer.returnValue({
             "state": "join",
@@ -835,6 +896,31 @@ class GroupsServerHandler(object):
         })


+def _parse_join_policy_from_contents(content):
+    """Given a content for a request, return the specified join policy or None
+    """
+
+    join_policy_dict = content.get("m.join_policy")
+    if join_policy_dict:
+        return _parse_join_policy_dict(join_policy_dict)
+    else:
+        return None
+
+
+def _parse_join_policy_dict(join_policy_dict):
+    """Given a dict for the "m.join_policy" config return the join policy specified
+    """
+    join_policy_type = join_policy_dict.get("type")
+    if not join_policy_type:
+        return "invite"
+
+    if join_policy_type not in ("invite", "open"):
+        raise SynapseError(
+            400, "Synapse only supports 'invite'/'open' join rule"
+        )
+    return join_policy_type
+
+
 def _parse_visibility_from_contents(content):
     """Given a content for a request parse out whether the entity should be
     public or not
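For reference, the shape the new parsers accept: a minimal re-implementation in plain Python, mirroring _parse_join_policy_from_contents and _parse_join_policy_dict above (ValueError stands in for the 400 SynapseError):

def parse_join_policy(content):
    # a missing or empty "m.join_policy" dict means "no policy specified"
    join_policy_dict = content.get("m.join_policy")
    if not join_policy_dict:
        return None
    # a dict without "type" defaults to the invite-only policy
    join_policy_type = join_policy_dict.get("type")
    if not join_policy_type:
        return "invite"
    if join_policy_type not in ("invite", "open"):
        raise ValueError("unsupported join policy %r" % (join_policy_type,))
    return join_policy_type

assert parse_join_policy({}) is None
assert parse_join_policy({"m.join_policy": {"type": "open"}}) == "open"
assert parse_join_policy({"m.join_policy": {"unrelated": 1}}) == "invite"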

View File

@@ -18,7 +18,9 @@ from twisted.internet import defer
 import synapse
 from synapse.api.constants import EventTypes
 from synapse.util.metrics import Measure
-from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
+from synapse.util.logcontext import (
+    make_deferred_yieldable, run_in_background,
+)

 import logging

@@ -84,11 +86,16 @@ class ApplicationServicesHandler(object):
                     if not events:
                         break

+                    events_by_room = {}
                     for event in events:
+                        events_by_room.setdefault(event.room_id, []).append(event)
+
+                    @defer.inlineCallbacks
+                    def handle_event(event):
                         # Gather interested services
                         services = yield self._get_services_for_event(event)
                         if len(services) == 0:
-                            continue  # no services need notifying
+                            return  # no services need notifying

                         # Do we know this user exists? If not, poke the user
                         # query API for all services which match that user regex.
@@ -104,13 +111,35 @@ class ApplicationServicesHandler(object):
                         # Fork off pushes to these services
                         for service in services:
-                            preserve_fn(self.scheduler.submit_event_for_as)(
-                                service, event
-                            )
+                            self.scheduler.submit_event_for_as(service, event)
+
+                    @defer.inlineCallbacks
+                    def handle_room_events(events):
+                        for event in events:
+                            yield handle_event(event)
+
+                    yield make_deferred_yieldable(defer.gatherResults([
+                        run_in_background(handle_room_events, evs)
+                        for evs in events_by_room.itervalues()
+                    ], consumeErrors=True))
+
+                    yield self.store.set_appservice_last_pos(upper_bound)
+
+                    now = self.clock.time_msec()
+                    ts = yield self.store.get_received_ts(events[-1].event_id)
+
+                    synapse.metrics.event_processing_positions.set(
+                        upper_bound, "appservice_sender",
+                    )

                     events_processed_counter.inc_by(len(events))

-                    yield self.store.set_appservice_last_pos(upper_bound)
+                    synapse.metrics.event_processing_lag.set(
+                        now - ts, "appservice_sender",
+                    )
+                    synapse.metrics.event_processing_last_ts.set(
+                        ts, "appservice_sender",
+                    )
             finally:
                 self.is_processing = False
@@ -167,7 +196,10 @@ class ApplicationServicesHandler(object):
             services = yield self._get_services_for_3pn(protocol)

             results = yield make_deferred_yieldable(defer.DeferredList([
-                preserve_fn(self.appservice_api.query_3pe)(service, kind, protocol, fields)
+                run_in_background(
+                    self.appservice_api.query_3pe,
+                    service, kind, protocol, fields,
+                )
                 for service in services
             ], consumeErrors=True))

@@ -228,11 +260,15 @@ class ApplicationServicesHandler(object):
         event based on the service regex.
         """
         services = self.store.get_app_services()
-        interested_list = [
-            s for s in services if (
-                yield s.is_interested(event, self.store)
-            )
-        ]
+
+        # we can't use a list comprehension here. Since python 3, list
+        # comprehensions use a generator internally. This means you can't yield
+        # inside of a list comprehension anymore.
+        interested_list = []
+        for s in services:
+            if (yield s.is_interested(event, self.store)):
+                interested_list.append(s)
+
         defer.returnValue(interested_list)

     def _get_services_for_user(self, user_id):
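The restructuring above buckets events by room and then processes each bucket in order, while separate rooms run concurrently. A toy version of the fan-out (the Event type and payloads are invented for illustration):

from collections import namedtuple

from twisted.internet import defer
from synapse.util.logcontext import make_deferred_yieldable, run_in_background

Event = namedtuple("Event", ["room_id", "event_id"])  # hypothetical stand-in

@defer.inlineCallbacks
def handle_room_events(events):
    for event in events:
        yield defer.succeed(event)  # events stay ordered within a room

@defer.inlineCallbacks
def process(events):
    events_by_room = {}
    for event in events:
        events_by_room.setdefault(event.room_id, []).append(event)

    # rooms proceed in parallel; consumeErrors keeps one room's failure from
    # leaving unhandled-error noise in the other deferreds
    yield make_deferred_yieldable(defer.gatherResults([
        run_in_background(handle_room_events, evs)
        for evs in events_by_room.itervalues()
    ], consumeErrors=True))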

View File

@@ -155,7 +155,7 @@ class DeviceHandler(BaseHandler):

         try:
             yield self.store.delete_device(user_id, device_id)
-        except errors.StoreError, e:
+        except errors.StoreError as e:
             if e.code == 404:
                 # no match
                 pass
@@ -204,7 +204,7 @@ class DeviceHandler(BaseHandler):

         try:
             yield self.store.delete_devices(user_id, device_ids)
-        except errors.StoreError, e:
+        except errors.StoreError as e:
             if e.code == 404:
                 # no match
                 pass
@@ -243,7 +243,7 @@ class DeviceHandler(BaseHandler):
                 new_display_name=content.get("display_name")
             )
             yield self.notify_device_update(user_id, [device_id])
-        except errors.StoreError, e:
+        except errors.StoreError as e:
             if e.code == 404:
                 raise errors.NotFoundError()
             else:
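These three one-line changes are pure Python 3 groundwork: the comma form `except errors.StoreError, e:` is a syntax error under Python 3, while the `as` form parses on 2.6+ and 3.x alike. For example:

try:
    {}["missing"]
except KeyError as e:  # "except KeyError, e:" would not even compile on py3
    print("no match: %r" % (e,))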

View File

@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -13,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import ujson as json
+import simplejson as json
 import logging

 from canonicaljson import encode_canonical_json
@@ -23,7 +24,7 @@ from synapse.api.errors import (
     SynapseError, CodeMessageException, FederationDeniedError,
 )
 from synapse.types import get_domain_from_id, UserID
-from synapse.util.logcontext import preserve_fn, make_deferred_yieldable
+from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 from synapse.util.retryutils import NotRetryingDestination

 logger = logging.getLogger(__name__)
@@ -134,28 +135,13 @@ class E2eKeysHandler(object):
                     if user_id in destination_query:
                         results[user_id] = keys

-            except CodeMessageException as e:
-                failures[destination] = {
-                    "status": e.code, "message": e.message
-                }
-            except NotRetryingDestination as e:
-                failures[destination] = {
-                    "status": 503, "message": "Not ready for retry",
-                }
-            except FederationDeniedError as e:
-                failures[destination] = {
-                    "status": 403, "message": "Federation Denied",
-                }
             except Exception as e:
-                # include ConnectionRefused and other errors
-                failures[destination] = {
-                    "status": 503, "message": e.message
-                }
+                failures[destination] = _exception_to_failure(e)

         yield make_deferred_yieldable(defer.gatherResults([
-            preserve_fn(do_remote_query)(destination)
+            run_in_background(do_remote_query, destination)
             for destination in remote_queries_not_in_cache
-        ]))
+        ], consumeErrors=True))

         defer.returnValue({
             "device_keys": results, "failures": failures,
@@ -252,24 +238,13 @@ class E2eKeysHandler(object):
                 for user_id, keys in remote_result["one_time_keys"].items():
                     if user_id in device_keys:
                         json_result[user_id] = keys
-            except CodeMessageException as e:
-                failures[destination] = {
-                    "status": e.code, "message": e.message
-                }
-            except NotRetryingDestination as e:
-                failures[destination] = {
-                    "status": 503, "message": "Not ready for retry",
-                }
             except Exception as e:
-                # include ConnectionRefused and other errors
-                failures[destination] = {
-                    "status": 503, "message": e.message
-                }
+                failures[destination] = _exception_to_failure(e)

         yield make_deferred_yieldable(defer.gatherResults([
-            preserve_fn(claim_client_keys)(destination)
+            run_in_background(claim_client_keys, destination)
             for destination in remote_queries
-        ]))
+        ], consumeErrors=True))

         logger.info(
             "Claimed one-time-keys: %s",
@@ -362,6 +337,31 @@ class E2eKeysHandler(object):
     )


+def _exception_to_failure(e):
+    if isinstance(e, CodeMessageException):
+        return {
+            "status": e.code, "message": e.message,
+        }
+
+    if isinstance(e, NotRetryingDestination):
+        return {
+            "status": 503, "message": "Not ready for retry",
+        }
+
+    if isinstance(e, FederationDeniedError):
+        return {
+            "status": 403, "message": "Federation Denied",
+        }
+
+    # include ConnectionRefused and other errors
+    #
+    # Note that some Exceptions (notably twisted's ResponseFailed etc) don't
+    # give a string for e.message, which simplejson then fails to serialize.
+    return {
+        "status": 503, "message": str(e.message),
+    }
+
+
 def _one_time_keys_match(old_key_json, new_key):
     old_key = json.loads(old_key_json)
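Collapsing the four except arms into _exception_to_failure keeps the "failures" map in /keys/query and /keys/claim responses in one shape. A self-contained sketch of the resulting entries (the exception class is faked for illustration, and str() replaces the str(e.message) guard used above):

class CodeMessageException(Exception):  # stand-in for the synapse class
    def __init__(self, code, message):
        super(CodeMessageException, self).__init__(message)
        self.code, self.message = code, message

def exception_to_failure(e):
    if isinstance(e, CodeMessageException):
        return {"status": e.code, "message": e.message}
    # str() guards against exceptions whose message is not a plain string,
    # which simplejson would refuse to serialize
    return {"status": 503, "message": str(e)}

print(exception_to_failure(CodeMessageException(401, "Unrecognised token")))
# {'status': 401, 'message': 'Unrecognised token'}
print(exception_to_failure(IOError("connection refused")))
# {'status': 503, 'message': 'connection refused'}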

View File

@@ -15,8 +15,16 @@
 # limitations under the License.

 """Contains handlers for federation events."""
+
+import itertools
+import logging
+import sys

 from signedjson.key import decode_verify_key_bytes
 from signedjson.sign import verify_signed_json
+import six
+from six.moves import http_client
+from twisted.internet import defer
 from unpaddedbase64 import decode_base64

 from ._base import BaseHandler
@@ -43,10 +51,6 @@ from synapse.util.retryutils import NotRetryingDestination
 from synapse.util.distributor import user_joined_room

-from twisted.internet import defer
-
-import itertools
-import logging
-
 logger = logging.getLogger(__name__)

@@ -115,6 +119,19 @@ class FederationHandler(BaseHandler):
             logger.debug("Already seen pdu %s", pdu.event_id)
             return

+        # do some initial sanity-checking of the event. In particular, make
+        # sure it doesn't have hundreds of prev_events or auth_events, which
+        # could cause a huge state resolution or cascade of event fetches.
+        try:
+            self._sanity_check_event(pdu)
+        except SynapseError as err:
+            raise FederationError(
+                "ERROR",
+                err.code,
+                err.msg,
+                affected=pdu.event_id,
+            )
+
         # If we are currently in the process of joining this room, then we
         # queue up events for later processing.
         if pdu.room_id in self.room_queues:
@@ -149,10 +166,6 @@ class FederationHandler(BaseHandler):

             auth_chain = []

-            have_seen = yield self.store.have_events(
-                [ev for ev, _ in pdu.prev_events]
-            )
-
             fetch_state = False

             # Get missing pdus if necessary.
@@ -168,7 +181,7 @@ class FederationHandler(BaseHandler):
                 )

                 prevs = {e_id for e_id, _ in pdu.prev_events}
-                seen = set(have_seen.keys())
+                seen = yield self.store.have_seen_events(prevs)

                 if min_depth and pdu.depth < min_depth:
                     # This is so that we don't notify the user about this
@@ -196,8 +209,7 @@ class FederationHandler(BaseHandler):

                         # Update the set of things we've seen after trying to
                         # fetch the missing stuff
-                        have_seen = yield self.store.have_events(prevs)
-                        seen = set(have_seen.iterkeys())
+                        seen = yield self.store.have_seen_events(prevs)

                         if not prevs - seen:
                             logger.info(
@@ -248,8 +260,7 @@ class FederationHandler(BaseHandler):
             min_depth (int): Minimum depth of events to return.
         """
         # We recalculate seen, since it may have changed.
-        have_seen = yield self.store.have_events(prevs)
-        seen = set(have_seen.keys())
+        seen = yield self.store.have_seen_events(prevs)

         if not prevs - seen:
             return
@@ -361,9 +372,7 @@ class FederationHandler(BaseHandler):
         if auth_chain:
             event_ids |= {e.event_id for e in auth_chain}

-        seen_ids = set(
-            (yield self.store.have_events(event_ids)).keys()
-        )
+        seen_ids = yield self.store.have_seen_events(event_ids)

         if state and auth_chain is not None:
             # If we have any state or auth_chain given to us by the replication
@@ -527,9 +536,16 @@ class FederationHandler(BaseHandler):
     def backfill(self, dest, room_id, limit, extremities):
         """ Trigger a backfill request to `dest` for the given `room_id`

-        This will attempt to get more events from the remote. This may return
-        be successfull and still return no events if the other side has no new
-        events to offer.
+        This will attempt to get more events from the remote. If the other side
+        has no new events to offer, this will return an empty list.
+
+        As the events are received, we check their signatures, and also do some
+        sanity-checking on them. If any of the backfilled events are invalid,
+        this method throws a SynapseError.
+
+        TODO: make this more useful to distinguish failures of the remote
+        server from invalid events (there is probably no point in trying to
+        re-fetch invalid events from every other HS in the room.)
         """
         if dest == self.server_name:
             raise SynapseError(400, "Can't backfill from self.")
@@ -541,6 +557,16 @@ class FederationHandler(BaseHandler):
             extremities=extremities,
         )

+        # ideally we'd sanity check the events here for excess prev_events etc,
+        # but it's hard to reject events at this point without completely
+        # breaking backfill in the same way that it is currently broken by
+        # events whose signature we cannot verify (#3121).
+        #
+        # So for now we accept the events anyway. #3124 tracks this.
+        #
+        # for ev in events:
+        #     self._sanity_check_event(ev)
+
         # Don't bother processing events we already have.
         seen_events = yield self.store.have_events_in_timeline(
             set(e.event_id for e in events)
@@ -613,7 +639,8 @@ class FederationHandler(BaseHandler):
         results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                logcontext.preserve_fn(self.replication_layer.get_pdu)(
+                logcontext.run_in_background(
+                    self.replication_layer.get_pdu,
                     [dest],
                     event_id,
                     outlier=True,
@@ -633,7 +660,7 @@ class FederationHandler(BaseHandler):

         failed_to_fetch = missing_auth - set(auth_events)

-        seen_events = yield self.store.have_events(
+        seen_events = yield self.store.have_seen_events(
             set(auth_events.keys()) | set(state_events.keys())
         )

@@ -843,6 +870,38 @@ class FederationHandler(BaseHandler):

         defer.returnValue(False)

+    def _sanity_check_event(self, ev):
+        """
+        Do some early sanity checks of a received event
+
+        In particular, checks it doesn't have an excessive number of
+        prev_events or auth_events, which could cause a huge state resolution
+        or cascade of event fetches.
+
+        Args:
+            ev (synapse.events.EventBase): event to be checked
+
+        Returns: None
+
+        Raises:
+            SynapseError if the event does not pass muster
+        """
+        if len(ev.prev_events) > 20:
+            logger.warn("Rejecting event %s which has %i prev_events",
+                        ev.event_id, len(ev.prev_events))
+            raise SynapseError(
+                http_client.BAD_REQUEST,
+                "Too many prev_events",
+            )
+
+        if len(ev.auth_events) > 10:
+            logger.warn("Rejecting event %s which has %i auth_events",
+                        ev.event_id, len(ev.auth_events))
+            raise SynapseError(
+                http_client.BAD_REQUEST,
+                "Too many auth_events",
+            )
+
     @defer.inlineCallbacks
     def send_invite(self, target_host, event):
         """ Sends the invite to the remote server for signing.
@@ -967,7 +1026,7 @@ class FederationHandler(BaseHandler):
         # lots of requests for missing prev_events which we do actually
         # have. Hence we fire off the deferred, but don't wait for it.

-        logcontext.preserve_fn(self._handle_queued_pdus)(room_queue)
+        logcontext.run_in_background(self._handle_queued_pdus, room_queue)

         defer.returnValue(True)

@@ -1457,18 +1516,21 @@ class FederationHandler(BaseHandler):
                 backfilled=backfilled,
             )
         except:  # noqa: E722, as we reraise the exception this is fine.
-            # Ensure that we actually remove the entries in the push actions
-            # staging area
-            logcontext.preserve_fn(
-                self.store.remove_push_actions_from_staging
-            )(event.event_id)
-            raise
+            tp, value, tb = sys.exc_info()
+
+            logcontext.run_in_background(
+                self.store.remove_push_actions_from_staging,
+                event.event_id,
+            )
+
+            six.reraise(tp, value, tb)

         if not backfilled:
             # this intentionally does not yield: we don't care about the result
             # and don't need to wait for it.
-            logcontext.preserve_fn(self.pusher_pool.on_new_notifications)(
-                event_stream_id, max_stream_id
+            logcontext.run_in_background(
+                self.pusher_pool.on_new_notifications,
+                event_stream_id, max_stream_id,
             )

         defer.returnValue((context, event_stream_id, max_stream_id))
@@ -1482,7 +1544,8 @@ class FederationHandler(BaseHandler):
         """
         contexts = yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                logcontext.preserve_fn(self._prep_event)(
+                logcontext.run_in_background(
+                    self._prep_event,
                     origin,
                     ev_info["event"],
                     state=ev_info.get("state"),
@@ -1736,7 +1799,8 @@ class FederationHandler(BaseHandler):
         event_key = None

         if event_auth_events - current_state:
-            have_events = yield self.store.have_events(
+            # TODO: can we use store.have_seen_events here instead?
+            have_events = yield self.store.get_seen_events_with_rejections(
                 event_auth_events - current_state
             )
         else:
@@ -1759,12 +1823,12 @@ class FederationHandler(BaseHandler):
                     origin, event.room_id, event.event_id
                 )

-                seen_remotes = yield self.store.have_events(
+                seen_remotes = yield self.store.have_seen_events(
                     [e.event_id for e in remote_auth_chain]
                 )

                 for e in remote_auth_chain:
-                    if e.event_id in seen_remotes.keys():
+                    if e.event_id in seen_remotes:
                         continue

                     if e.event_id == event.event_id:
@@ -1791,7 +1855,7 @@ class FederationHandler(BaseHandler):
                 except AuthError:
                     pass

-                have_events = yield self.store.have_events(
+                have_events = yield self.store.get_seen_events_with_rejections(
                     [e_id for e_id, _ in event.auth_events]
                 )
                 seen_events = set(have_events.keys())
@@ -1810,7 +1874,8 @@ class FederationHandler(BaseHandler):
         different_events = yield logcontext.make_deferred_yieldable(
             defer.gatherResults([
-                logcontext.preserve_fn(self.store.get_event)(
+                logcontext.run_in_background(
+                    self.store.get_event,
                    d,
                     allow_none=True,
                     allow_rejected=False,
@@ -1876,13 +1941,13 @@ class FederationHandler(BaseHandler):
                 local_auth_chain,
             )

-            seen_remotes = yield self.store.have_events(
+            seen_remotes = yield self.store.have_seen_events(
                 [e.event_id for e in result["auth_chain"]]
             )

             # 3. Process any remote auth chain events we haven't seen.
             for ev in result["auth_chain"]:
-                if ev.event_id in seen_remotes.keys():
+                if ev.event_id in seen_remotes:
                     continue

                 if ev.event_id == event.event_id:
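Most of the churn in this file is one mechanical substitution: the old have_events returned a dict of event_id to rejection reason (hence the .keys() and .iterkeys() dances), whereas the new have_seen_events returns a set directly, and get_seen_events_with_rejections keeps the dict behaviour for the callers that actually inspect rejections. Roughly:

# old-style return value: mapping of seen event_id -> rejection reason or None
have_events = {"$a:hs": None, "$b:hs": "rejected"}
seen = set(have_events.keys())

# new-style: a set of seen event ids, no second pass needed
have_seen_events = {"$a:hs", "$b:hs"}
assert seen == have_seen_events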

View File

@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -90,6 +91,8 @@ class GroupsLocalHandler(object):
     get_group_role = _create_rerouter("get_group_role")
     get_group_roles = _create_rerouter("get_group_roles")

+    set_group_join_policy = _create_rerouter("set_group_join_policy")
+
     @defer.inlineCallbacks
     def get_group_summary(self, group_id, requester_user_id):
         """Get the group summary for a group.
@@ -226,7 +229,45 @@ class GroupsLocalHandler(object):
     def join_group(self, group_id, user_id, content):
         """Request to join a group
         """
-        raise NotImplementedError()  # TODO
+        if self.is_mine_id(group_id):
+            yield self.groups_server_handler.join_group(
+                group_id, user_id, content
+            )
+            local_attestation = None
+            remote_attestation = None
+        else:
+            local_attestation = self.attestations.create_attestation(group_id, user_id)
+            content["attestation"] = local_attestation
+
+            res = yield self.transport_client.join_group(
+                get_domain_from_id(group_id), group_id, user_id, content,
+            )
+
+            remote_attestation = res["attestation"]
+
+            yield self.attestations.verify_attestation(
+                remote_attestation,
+                group_id=group_id,
+                user_id=user_id,
+                server_name=get_domain_from_id(group_id),
+            )
+
+        # TODO: Check that the group is public and we're being added publically
+        is_publicised = content.get("publicise", False)
+
+        token = yield self.store.register_user_group_membership(
+            group_id, user_id,
+            membership="join",
+            is_admin=False,
+            local_attestation=local_attestation,
+            remote_attestation=remote_attestation,
+            is_publicised=is_publicised,
+        )
+        self.notifier.on_new_event(
+            "groups_key", token, users=[user_id],
+        )
+
+        defer.returnValue({})

     @defer.inlineCallbacks
     def accept_invite(self, group_id, user_id, content):
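set_group_join_policy simply joins the existing block of rerouted methods. The _create_rerouter factory is defined earlier in this file, outside this hunk; in rough outline (a hypothetical reconstruction, not the verbatim definition) it dispatches on whether the group id is local:

def _create_rerouter_sketch(func_name):
    # dispatch to the local group server when the group is ours, otherwise
    # to the transport client with the owning homeserver as destination
    def f(self, group_id, *args, **kwargs):
        if self.is_mine_id(group_id):
            return getattr(self.groups_server_handler, func_name)(
                group_id, *args, **kwargs
            )
        destination = group_id.split(":", 1)[1]  # get_domain_from_id, roughly
        return getattr(self.transport_client, func_name)(
            destination, group_id, *args, **kwargs
        )
    return f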

View File

@@ -15,6 +15,11 @@
 # limitations under the License.

 """Utilities for interacting with Identity Servers"""
+
+import logging
+
+import simplejson as json
+
 from twisted.internet import defer

 from synapse.api.errors import (
@@ -24,9 +29,6 @@ from ._base import BaseHandler
 from synapse.util.async import run_on_reactor
 from synapse.api.errors import SynapseError, Codes

-import json
-import logging
-
 logger = logging.getLogger(__name__)

View File

@@ -27,7 +27,7 @@ from synapse.types import (
 from synapse.util import unwrapFirstError
 from synapse.util.async import concurrently_execute
 from synapse.util.caches.snapshot_cache import SnapshotCache
-from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
+from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 from synapse.visibility import filter_events_for_client

 from ._base import BaseHandler
@@ -166,7 +166,8 @@ class InitialSyncHandler(BaseHandler):
             (messages, token), current_state = yield make_deferred_yieldable(
                 defer.gatherResults(
                     [
-                        preserve_fn(self.store.get_recent_events_for_room)(
+                        run_in_background(
+                            self.store.get_recent_events_for_room,
                             event.room_id,
                             limit=limit,
                             end_token=room_end_token,
@@ -391,9 +392,10 @@ class InitialSyncHandler(BaseHandler):
         presence, receipts, (messages, token) = yield defer.gatherResults(
             [
-                preserve_fn(get_presence)(),
-                preserve_fn(get_receipts)(),
-                preserve_fn(self.store.get_recent_events_for_room)(
+                run_in_background(get_presence),
+                run_in_background(get_receipts),
+                run_in_background(
+                    self.store.get_recent_events_for_room,
                     room_id,
                     limit=limit,
                     end_token=now_token.room_key,

View File

@@ -13,10 +13,16 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+
+import logging
+import simplejson
+import sys
+
+from canonicaljson import encode_canonical_json
+import six
 from twisted.internet import defer, reactor
 from twisted.python.failure import Failure

-from synapse.api.constants import EventTypes, Membership
+from synapse.api.constants import EventTypes, Membership, MAX_DEPTH
 from synapse.api.errors import AuthError, Codes, SynapseError
 from synapse.crypto.event_signing import add_hashes_and_signatures
 from synapse.events.utils import serialize_event
@@ -25,21 +31,15 @@ from synapse.types import (
     UserID, RoomAlias, RoomStreamToken,
 )
 from synapse.util.async import run_on_reactor, ReadWriteLock, Limiter
-from synapse.util.logcontext import preserve_fn, run_in_background
+from synapse.util.logcontext import run_in_background
 from synapse.util.metrics import measure_func
-from synapse.util.frozenutils import unfreeze
+from synapse.util.frozenutils import frozendict_json_encoder
 from synapse.util.stringutils import random_string
 from synapse.visibility import filter_events_for_client
 from synapse.replication.http.send_event import send_event_to_master

 from ._base import BaseHandler

-from canonicaljson import encode_canonical_json
-
-import logging
-import random
-import ujson
-
 logger = logging.getLogger(__name__)

@@ -433,7 +433,7 @@ class EventCreationHandler(object):
     @defer.inlineCallbacks
     def create_event(self, requester, event_dict, token_id=None, txn_id=None,
-                     prev_event_ids=None):
+                     prev_events_and_hashes=None):
         """
         Given a dict from a client, create a new event.

@@ -447,14 +447,19 @@ class EventCreationHandler(object):
             event_dict (dict): An entire event
             token_id (str)
             txn_id (str)
-            prev_event_ids (list): The prev event ids to use when creating the event
+
+            prev_events_and_hashes (list[(str, dict[str, str], int)]|None):
+                the forward extremities to use as the prev_events for the
+                new event. For each event, a tuple of (event_id, hashes, depth)
+                where *hashes* is a map from algorithm to hash.
+
+                If None, they will be requested from the database.

         Returns:
             Tuple of created event (FrozenEvent), Context
         """
         builder = self.event_builder_factory.new(event_dict)

-        with (yield self.limiter.queue(builder.room_id)):
-            self.validator.validate_new(builder)
+        self.validator.validate_new(builder)

         if builder.type == EventTypes.Member:
@@ -486,7 +491,7 @@ class EventCreationHandler(object):
         event, context = yield self.create_new_client_event(
             builder=builder,
             requester=requester,
-            prev_event_ids=prev_event_ids,
+            prev_events_and_hashes=prev_events_and_hashes,
         )

         defer.returnValue((event, context))
@@ -557,6 +562,13 @@ class EventCreationHandler(object):

         See self.create_event and self.send_nonmember_event.
         """
+
+        # We limit the number of concurrent event sends in a room so that we
+        # don't fork the DAG too much. If we don't limit then we can end up in
+        # a situation where event persistence can't keep up, causing
+        # extremities to pile up, which in turn leads to state resolution
+        # taking longer.
+        with (yield self.limiter.queue(event_dict["room_id"])):
             event, context = yield self.create_event(
                 requester,
                 event_dict,
@@ -582,38 +594,47 @@ class EventCreationHandler(object):

     @measure_func("create_new_client_event")
     @defer.inlineCallbacks
-    def create_new_client_event(self, builder, requester=None, prev_event_ids=None):
-        if prev_event_ids:
-            prev_events = yield self.store.add_event_hashes(prev_event_ids)
-            prev_max_depth = yield self.store.get_max_depth_of_events(prev_event_ids)
-            depth = prev_max_depth + 1
-        else:
-            latest_ret = yield self.store.get_latest_event_ids_and_hashes_in_room(
-                builder.room_id,
-            )
-
-            # We want to limit the max number of prev events we point to in our
-            # new event
-            if len(latest_ret) > 10:
-                # Sort by reverse depth, so we point to the most recent.
-                latest_ret.sort(key=lambda a: -a[2])
-                new_latest_ret = latest_ret[:5]
-
-                # We also randomly point to some of the older events, to make
-                # sure that we don't completely ignore the older events.
-                if latest_ret[5:]:
-                    sample_size = min(5, len(latest_ret[5:]))
-                    new_latest_ret.extend(random.sample(latest_ret[5:], sample_size))
-                latest_ret = new_latest_ret
-
-            if latest_ret:
-                depth = max([d for _, _, d in latest_ret]) + 1
+    def create_new_client_event(self, builder, requester=None,
+                                prev_events_and_hashes=None):
+        """Create a new event for a local client
+
+        Args:
+            builder (EventBuilder):
+
+            requester (synapse.types.Requester|None):
+
+            prev_events_and_hashes (list[(str, dict[str, str], int)]|None):
+                the forward extremities to use as the prev_events for the
+                new event. For each event, a tuple of (event_id, hashes, depth)
+                where *hashes* is a map from algorithm to hash.
+
+                If None, they will be requested from the database.
+
+        Returns:
+            Deferred[(synapse.events.EventBase, synapse.events.snapshot.EventContext)]
+        """
+
+        if prev_events_and_hashes is not None:
+            assert len(prev_events_and_hashes) <= 10, \
+                "Attempting to create an event with %i prev_events" % (
+                    len(prev_events_and_hashes),
+                )
+        else:
+            prev_events_and_hashes = \
+                yield self.store.get_prev_events_for_room(builder.room_id)
+
+        if prev_events_and_hashes:
+            depth = max([d for _, _, d in prev_events_and_hashes]) + 1
+            # we cap depth of generated events, to ensure that they are not
+            # rejected by other servers (and so that they can be persisted in
+            # the db)
+            depth = min(depth, MAX_DEPTH)
         else:
             depth = 1

         prev_events = [
             (event_id, prev_hashes)
-            for event_id, prev_hashes, _ in latest_ret
+            for event_id, prev_hashes, _ in prev_events_and_hashes
         ]

         builder.prev_events = prev_events
@@ -678,8 +699,8 @@ class EventCreationHandler(object):

         # Ensure that we can round trip before trying to persist in db
         try:
-            dump = ujson.dumps(unfreeze(event.content))
-            ujson.loads(dump)
+            dump = frozendict_json_encoder.encode(event.content)
+            simplejson.loads(dump)
         except Exception:
             logger.exception("Failed to encode content: %r", event.content)
             raise
@@ -713,8 +734,14 @@ class EventCreationHandler(object):
         except:  # noqa: E722, as we reraise the exception this is fine.
             # Ensure that we actually remove the entries in the push actions
             # staging area, if we calculated them.
-            preserve_fn(self.store.remove_push_actions_from_staging)(event.event_id)
-            raise
+            tp, value, tb = sys.exc_info()
+
+            run_in_background(
+                self.store.remove_push_actions_from_staging,
+                event.event_id,
+            )
+
+            six.reraise(tp, value, tb)

     @defer.inlineCallbacks
     def persist_and_notify_client_event(
@@ -834,22 +861,33 @@ class EventCreationHandler(object):

         # this intentionally does not yield: we don't care about the result
         # and don't need to wait for it.
-        preserve_fn(self.pusher_pool.on_new_notifications)(
+        run_in_background(
+            self.pusher_pool.on_new_notifications,
             event_stream_id, max_stream_id
         )

         @defer.inlineCallbacks
         def _notify():
             yield run_on_reactor()
+            try:
                 self.notifier.on_new_room_event(
                     event, event_stream_id, max_stream_id,
                     extra_users=extra_users
                 )
+            except Exception:
+                logger.exception("Error notifying about new room event")

-        preserve_fn(_notify)()
+        run_in_background(_notify)

         if event.type == EventTypes.Message:
-            presence = self.hs.get_presence_handler()
             # We don't want to block sending messages on any presence code. This
             # matters as sometimes presence code can take a while.
-            preserve_fn(presence.bump_presence_active_time)(requester.user)
+            run_in_background(self._bump_active_time, requester.user)
+
+    @defer.inlineCallbacks
+    def _bump_active_time(self, user):
+        try:
+            presence = self.hs.get_presence_handler()
+            yield presence.bump_presence_active_time(user)
+        except Exception:
+            logger.exception("Error bumping presence active time")

View File

@@ -31,7 +31,7 @@ from synapse.storage.presence import UserPresenceState

 from synapse.util.caches.descriptors import cachedInlineCallbacks
 from synapse.util.async import Linearizer
-from synapse.util.logcontext import preserve_fn
+from synapse.util.logcontext import run_in_background
 from synapse.util.logutils import log_function
 from synapse.util.metrics import Measure
 from synapse.util.wheel_timer import WheelTimer
@@ -254,6 +254,14 @@ class PresenceHandler(object):

         logger.info("Finished _persist_unpersisted_changes")

+    @defer.inlineCallbacks
+    def _update_states_and_catch_exception(self, new_states):
+        try:
+            res = yield self._update_states(new_states)
+            defer.returnValue(res)
+        except Exception:
+            logger.exception("Error updating presence")
+
     @defer.inlineCallbacks
     def _update_states(self, new_states):
         """Updates presence of users. Sets the appropriate timeouts. Pokes
@@ -364,7 +372,7 @@ class PresenceHandler(object):
                 now=now,
             )

-            preserve_fn(self._update_states)(changes)
+            run_in_background(self._update_states_and_catch_exception, changes)
         except Exception:
             logger.exception("Exception in _handle_timeouts loop")

@@ -422,20 +430,23 @@ class PresenceHandler(object):

         @defer.inlineCallbacks
         def _end():
-            if affect_presence:
+            try:
                 self.user_to_num_current_syncs[user_id] -= 1

                 prev_state = yield self.current_state_for_user(user_id)
                 yield self._update_states([prev_state.copy_and_replace(
                     last_user_sync_ts=self.clock.time_msec(),
                 )])
+            except Exception:
+                logger.exception("Error updating presence after sync")

         @contextmanager
         def _user_syncing():
             try:
                 yield
             finally:
-                preserve_fn(_end)()
+                if affect_presence:
+                    run_in_background(_end)

         defer.returnValue(_user_syncing())
@@ -135,6 +135,7 @@ class ReceiptsHandler(BaseHandler):
         """Given a list of receipts, works out which remote servers should be
         poked and pokes them.
         """
+        try:
             # TODO: Some of this stuff should be coallesced.
             for receipt in receipts:
                 room_id = receipt["room_id"]
@@ -166,6 +167,8 @@ class ReceiptsHandler(BaseHandler):
                     },
                     key=(room_id, receipt_type, user_id),
                 )
+        except Exception:
+            logger.exception("Error pushing receipts to remote servers")

     @defer.inlineCallbacks
     def get_receipts_for_room(self, room_id, to_key):
@@ -23,8 +23,8 @@ from synapse.api.errors import (
 )
 from synapse.http.client import CaptchaServerHttpClient
 from synapse import types
-from synapse.types import UserID
-from synapse.util.async import run_on_reactor
+from synapse.types import UserID, create_requester, RoomID, RoomAlias
+from synapse.util.async import run_on_reactor, Linearizer
 from synapse.util.threepids import check_3pid_allowed
 from ._base import BaseHandler
@@ -46,6 +46,10 @@ class RegistrationHandler(BaseHandler):
         self.macaroon_gen = hs.get_macaroon_generator()

+        self._generate_user_id_linearizer = Linearizer(
+            name="_generate_user_id_linearizer",
+        )
+
     @defer.inlineCallbacks
     def check_username(self, localpart, guest_access_token=None,
                        assigned_user_id=None):
@@ -201,10 +205,17 @@ class RegistrationHandler(BaseHandler):
                 token = None
                 attempts += 1

+        # auto-join the user to any rooms we're supposed to dump them into
+        fake_requester = create_requester(user_id)
+        for r in self.hs.config.auto_join_rooms:
+            try:
+                yield self._join_user_to_room(fake_requester, r)
+            except Exception as e:
+                logger.error("Failed to join new user to %r: %r", r, e)
+
         # We used to generate default identicons here, but nowadays
         # we want clients to generate their own as part of their branding
         # rather than there being consistent matrix-wide ones, so we don't.
         defer.returnValue((user_id, token))

     @defer.inlineCallbacks
@@ -344,6 +355,8 @@ class RegistrationHandler(BaseHandler):
     @defer.inlineCallbacks
     def _generate_user_id(self, reseed=False):
         if reseed or self._next_generated_user_id is None:
+            with (yield self._generate_user_id_linearizer.queue(())):
+                if reseed or self._next_generated_user_id is None:
                     self._next_generated_user_id = (
                         yield self.store.find_next_generated_user_id_localpart()
                     )
@@ -477,3 +490,28 @@ class RegistrationHandler(BaseHandler):
         )
         defer.returnValue((user_id, access_token))
+
+    @defer.inlineCallbacks
+    def _join_user_to_room(self, requester, room_identifier):
+        room_id = None
+        room_member_handler = self.hs.get_room_member_handler()
+        if RoomID.is_valid(room_identifier):
+            room_id = room_identifier
+        elif RoomAlias.is_valid(room_identifier):
+            room_alias = RoomAlias.from_string(room_identifier)
+            room_id, remote_room_hosts = (
+                yield room_member_handler.lookup_room_alias(room_alias)
+            )
+            room_id = room_id.to_string()
+        else:
+            raise SynapseError(400, "%s was not legal room ID or room alias" % (
+                room_identifier,
+            ))
+
+        yield room_member_handler.update_membership(
+            requester=requester,
+            target=requester.user,
+            room_id=room_id,
+            remote_room_hosts=remote_room_hosts,
+            action="join",
+        )
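The ``Linearizer`` guards ``_generate_user_id`` with a double-checked pattern: a cheap test first, then the same test again once the linearizer is held, so concurrent registrations cannot both reseed the counter. A rough sketch of the idiom with a toy allocator (``Allocator`` and ``_load_initial_id`` are hypothetical):

    from twisted.internet import defer

    from synapse.util.async import Linearizer


    class Allocator(object):
        def __init__(self):
            self._linearizer = Linearizer(name="allocator")
            self._next_id = None

        @defer.inlineCallbacks
        def get_next_id(self):
            # cheap check first: only serialise when initialisation may be needed
            if self._next_id is None:
                with (yield self._linearizer.queue(())):
                    # re-check once we hold the linearizer: another caller may
                    # have initialised the counter while we were queued
                    if self._next_id is None:
                        self._next_id = yield self._load_initial_id()
            next_id = self._next_id
            self._next_id += 1
            defer.returnValue(next_id)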
@@ -15,12 +15,13 @@
 from twisted.internet import defer

+from six.moves import range
+
 from ._base import BaseHandler

 from synapse.api.constants import (
     EventTypes, JoinRules,
 )
-from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
 from synapse.util.async import concurrently_execute
 from synapse.util.caches.descriptors import cachedInlineCallbacks
 from synapse.util.caches.response_cache import ResponseCache
@@ -44,8 +45,9 @@ EMTPY_THIRD_PARTY_ID = ThirdPartyInstanceID(None, None)
 class RoomListHandler(BaseHandler):
     def __init__(self, hs):
         super(RoomListHandler, self).__init__(hs)
-        self.response_cache = ResponseCache(hs)
-        self.remote_response_cache = ResponseCache(hs, timeout_ms=30 * 1000)
+        self.response_cache = ResponseCache(hs, "room_list")
+        self.remote_response_cache = ResponseCache(hs, "remote_room_list",
+                                                   timeout_ms=30 * 1000)

     def get_local_public_room_list(self, limit=None, since_token=None,
                                    search_filter=None,
@@ -77,18 +79,11 @@ class RoomListHandler(BaseHandler):
         )

         key = (limit, since_token, network_tuple)
-        result = self.response_cache.get(key)
-        if not result:
-            logger.info("No cached result, calculating one.")
-            result = self.response_cache.set(
-                key,
-                preserve_fn(self._get_public_room_list)(
-                    limit, since_token, network_tuple=network_tuple
-                )
-            )
-        else:
-            logger.info("Using cached deferred result.")
-        return make_deferred_yieldable(result)
+        return self.response_cache.wrap(
+            key,
+            self._get_public_room_list,
+            limit, since_token, network_tuple=network_tuple,
+        )

     @defer.inlineCallbacks
     def _get_public_room_list(self, limit=None, since_token=None,
@@ -207,7 +202,7 @@ class RoomListHandler(BaseHandler):
         step = len(rooms_to_scan) if len(rooms_to_scan) != 0 else 1

         chunk = []
-        for i in xrange(0, len(rooms_to_scan), step):
+        for i in range(0, len(rooms_to_scan), step):
             batch = rooms_to_scan[i:i + step]
             logger.info("Processing %i rooms for result", len(batch))
             yield concurrently_execute(
@@ -422,18 +417,14 @@ class RoomListHandler(BaseHandler):
             server_name, limit, since_token, include_all_networks,
             third_party_instance_id,
         )
-        result = self.remote_response_cache.get(key)
-        if not result:
-            result = self.remote_response_cache.set(
-                key,
-                repl_layer.get_public_rooms(
-                    server_name, limit=limit, since_token=since_token,
-                    search_filter=search_filter,
-                    include_all_networks=include_all_networks,
-                    third_party_instance_id=third_party_instance_id,
-                )
-            )
-        return result
+        return self.remote_response_cache.wrap(
+            key,
+            repl_layer.get_public_rooms,
+            server_name, limit=limit, since_token=since_token,
+            search_filter=search_filter,
+            include_all_networks=include_all_networks,
+            third_party_instance_id=third_party_instance_id,
+        )

 class RoomListNextBatch(namedtuple("RoomListNextBatch", (
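Several call sites in this merge collapse the manual ``ResponseCache`` get/set dance into a single ``wrap()`` call, which also gives each cache a name for metrics. A minimal sketch of the resulting shape (``SomeHandler`` and ``_get_thing`` are illustrative):

    from synapse.util.caches.response_cache import ResponseCache


    class SomeHandler(object):
        def __init__(self, hs):
            # the cache now takes a name, so hit rates can show up in metrics
            self.response_cache = ResponseCache(hs, "some_handler")

        def get_thing(self, key):
            # returns the in-flight deferred for `key` if there is one,
            # otherwise calls self._get_thing(key) and caches the result;
            # wrap() also does the logcontext bookkeeping that the old
            # get()/set() callers did by hand with preserve_fn and
            # make_deferred_yieldable
            return self.response_cache.wrap(key, self._get_thing, key)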
@@ -149,7 +149,7 @@ class RoomMemberHandler(object):
     @defer.inlineCallbacks
     def _local_membership_update(
         self, requester, target, room_id, membership,
-        prev_event_ids,
+        prev_events_and_hashes,
         txn_id=None,
         ratelimit=True,
         content=None,
@@ -175,7 +175,7 @@ class RoomMemberHandler(object):
             },
             token_id=requester.access_token_id,
             txn_id=txn_id,
-            prev_event_ids=prev_event_ids,
+            prev_events_and_hashes=prev_events_and_hashes,
         )

         # Check if this event matches the previous membership event for the user.
@@ -314,7 +314,12 @@ class RoomMemberHandler(object):
                     403, "Invites have been disabled on this server",
                 )

-        latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
+        prev_events_and_hashes = yield self.store.get_prev_events_for_room(
+            room_id,
+        )
+        latest_event_ids = (
+            event_id for (event_id, _, _) in prev_events_and_hashes
+        )
+
         current_state_ids = yield self.state_handler.get_current_state_ids(
             room_id, latest_event_ids=latest_event_ids,
         )
@@ -403,7 +408,7 @@ class RoomMemberHandler(object):
             membership=effective_membership_state,
             txn_id=txn_id,
             ratelimit=ratelimit,
-            prev_event_ids=latest_event_ids,
+            prev_events_and_hashes=prev_events_and_hashes,
             content=content,
         )
         defer.returnValue(res)
@@ -852,6 +857,14 @@ class RoomMemberMasterHandler(RoomMemberHandler):
     def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
         """Implements RoomMemberHandler._remote_join
         """
+        # filter ourselves out of remote_room_hosts: do_invite_join ignores it
+        # and if it is the only entry we'd like to return a 404 rather than a
+        # 500.
+        remote_room_hosts = [
+            host for host in remote_room_hosts if host != self.hs.hostname
+        ]
+
         if len(remote_room_hosts) == 0:
             raise SynapseError(404, "No known servers")
@@ -15,7 +15,7 @@
 from synapse.api.constants import Membership, EventTypes
 from synapse.util.async import concurrently_execute
-from synapse.util.logcontext import LoggingContext, make_deferred_yieldable, preserve_fn
+from synapse.util.logcontext import LoggingContext
 from synapse.util.metrics import Measure, measure_func
 from synapse.util.caches.response_cache import ResponseCache
 from synapse.push.clientformat import format_push_rules_for_user
@@ -52,6 +52,7 @@ class TimelineBatch(collections.namedtuple("TimelineBatch", [
         to tell if room needs to be part of the sync result.
         """
         return bool(self.events)
+    __bool__ = __nonzero__  # python3

 class JoinedSyncResult(collections.namedtuple("JoinedSyncResult", [
@@ -76,6 +77,7 @@ class JoinedSyncResult(collections.namedtuple("JoinedSyncResult", [
             # nb the notification count does not, er, count: if there's nothing
             # else in the result, we don't need to send it.
         )
+    __bool__ = __nonzero__  # python3

 class ArchivedSyncResult(collections.namedtuple("ArchivedSyncResult", [
@@ -95,6 +97,7 @@ class ArchivedSyncResult(collections.namedtuple("ArchivedSyncResult", [
             or self.state
             or self.account_data
         )
+    __bool__ = __nonzero__  # python3

 class InvitedSyncResult(collections.namedtuple("InvitedSyncResult", [
@@ -106,6 +109,7 @@ class InvitedSyncResult(collections.namedtuple("InvitedSyncResult", [
     def __nonzero__(self):
         """Invited rooms should always be reported to the client"""
         return True
+    __bool__ = __nonzero__  # python3

 class GroupsSyncResult(collections.namedtuple("GroupsSyncResult", [
@@ -117,6 +121,7 @@ class GroupsSyncResult(collections.namedtuple("GroupsSyncResult", [
     def __nonzero__(self):
         return bool(self.join or self.invite or self.leave)
+    __bool__ = __nonzero__  # python3

 class DeviceLists(collections.namedtuple("DeviceLists", [
@@ -127,6 +132,7 @@ class DeviceLists(collections.namedtuple("DeviceLists", [
     def __nonzero__(self):
         return bool(self.changed or self.left)
+    __bool__ = __nonzero__  # python3

 class SyncResult(collections.namedtuple("SyncResult", [
@@ -159,6 +165,7 @@ class SyncResult(collections.namedtuple("SyncResult", [
             self.device_lists or
             self.groups
         )
+    __bool__ = __nonzero__  # python3
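All of these ``__bool__ = __nonzero__`` assignments are the standard shim for classes that customise truthiness under Python 2: Python 3 consults ``__bool__`` instead of ``__nonzero__``, so aliasing one to the other keeps ``if sync_result:`` working on both. For example:

    import collections

    class Batch(collections.namedtuple("Batch", ["events"])):
        def __nonzero__(self):
            # Python 2 truthiness hook
            return bool(self.events)
        __bool__ = __nonzero__  # Python 3 consults __bool__ instead

    assert not Batch(events=[])
    assert Batch(events=["an event"])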
 class SyncHandler(object):
@@ -169,7 +176,7 @@ class SyncHandler(object):
         self.presence_handler = hs.get_presence_handler()
         self.event_sources = hs.get_event_sources()
         self.clock = hs.get_clock()
-        self.response_cache = ResponseCache(hs)
+        self.response_cache = ResponseCache(hs, "sync")
         self.state = hs.get_state_handler()

     def wait_for_sync_for_user(self, sync_config, since_token=None, timeout=0,
@@ -180,15 +187,11 @@ class SyncHandler(object):
         Returns:
             A Deferred SyncResult.
         """
-        result = self.response_cache.get(sync_config.request_key)
-        if not result:
-            result = self.response_cache.set(
-                sync_config.request_key,
-                preserve_fn(self._wait_for_sync_for_user)(
-                    sync_config, since_token, timeout, full_state
-                )
-            )
-        return make_deferred_yieldable(result)
+        return self.response_cache.wrap(
+            sync_config.request_key,
+            self._wait_for_sync_for_user,
+            sync_config, since_token, timeout, full_state,
+        )

     @defer.inlineCallbacks
     def _wait_for_sync_for_user(self, sync_config, since_token, timeout,
@@ -16,7 +16,7 @@
 from twisted.internet import defer

 from synapse.api.errors import SynapseError, AuthError
-from synapse.util.logcontext import preserve_fn
+from synapse.util.logcontext import run_in_background
 from synapse.util.metrics import Measure
 from synapse.util.wheel_timer import WheelTimer
 from synapse.types import UserID, get_domain_from_id
@@ -97,7 +97,8 @@ class TypingHandler(object):
             if self.hs.is_mine_id(member.user_id):
                 last_fed_poke = self._member_last_federation_poke.get(member, None)
                 if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL <= now:
-                    preserve_fn(self._push_remote)(
+                    run_in_background(
+                        self._push_remote,
                         member=member,
                         typing=True
                     )
@@ -196,7 +197,7 @@ class TypingHandler(object):
     def _push_update(self, member, typing):
         if self.hs.is_mine_id(member.user_id):
             # Only send updates for changes to our own users.
-            preserve_fn(self._push_remote)(member, typing)
+            run_in_background(self._push_remote, member, typing)

         self._push_update_local(
             member=member,
@@ -205,6 +206,7 @@ class TypingHandler(object):
     @defer.inlineCallbacks
     def _push_remote(self, member, typing):
+        try:
             users = yield self.state.get_current_user_in_room(member.room_id)
             self._member_last_federation_poke[member] = self.clock.time_msec()
@@ -227,6 +229,8 @@ class TypingHandler(object):
                 },
                 key=member,
             )
+        except Exception:
+            logger.exception("Error pushing typing notif to remotes")

     @defer.inlineCallbacks
     def _recv_edu(self, origin, content):
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -12,3 +13,24 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+from twisted.internet.defer import CancelledError
+from twisted.python import failure
+
+from synapse.api.errors import SynapseError
+
+
+class RequestTimedOutError(SynapseError):
+    """Exception representing timeout of an outbound request"""
+    def __init__(self):
+        super(RequestTimedOutError, self).__init__(504, "Timed out")
+
+
+def cancelled_to_request_timed_out_error(value, timeout):
+    """Turns CancelledErrors into RequestTimedOutErrors.
+
+    For use with async.add_timeout_to_deferred
+    """
+    if isinstance(value, failure.Failure):
+        value.trap(CancelledError)
+        raise RequestTimedOutError()
+    return value
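The converter is designed to be passed as the timeout-cancel hook of ``add_timeout_to_deferred``: the timeout cancels the pending Deferred, and the resulting ``CancelledError`` is rewritten into a 504. A rough usage sketch, mirroring the client changes below (``agent`` is assumed to be a ``twisted.web.client.Agent``):

    from twisted.internet import defer

    from synapse.http import cancelled_to_request_timed_out_error
    from synapse.util.async import add_timeout_to_deferred
    from synapse.util.logcontext import make_deferred_yieldable


    @defer.inlineCallbacks
    def _send(agent, method, uri):
        request_deferred = agent.request(method, uri)
        add_timeout_to_deferred(
            request_deferred,
            60, cancelled_to_request_timed_out_error,
        )
        # a request still pending after 60 seconds now fails with a 504
        # RequestTimedOutError rather than a bare CancelledError
        response = yield make_deferred_yieldable(request_deferred)
        defer.returnValue(response)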
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -18,9 +19,10 @@ from OpenSSL.SSL import VERIFY_NONE
 from synapse.api.errors import (
     CodeMessageException, MatrixCodeMessageException, SynapseError, Codes,
 )
+from synapse.http import cancelled_to_request_timed_out_error
+from synapse.util.async import add_timeout_to_deferred
 from synapse.util.caches import CACHE_SIZE_FACTOR
 from synapse.util.logcontext import make_deferred_yieldable
-from synapse.util import logcontext
 import synapse.metrics
 from synapse.http.endpoint import SpiderEndpoint

@@ -38,7 +40,7 @@ from twisted.web.http import PotentialDataLoss
 from twisted.web.http_headers import Headers
 from twisted.web._newclient import ResponseDone

-from StringIO import StringIO
+from six import StringIO

 import simplejson as json
 import logging
@@ -95,21 +97,17 @@ class SimpleHttpClient(object):
         # counters to it
         outgoing_requests_counter.inc(method)

-        def send_request():
-            request_deferred = self.agent.request(
-                method, uri, *args, **kwargs
-            )
-
-            return self.clock.time_bound_deferred(
-                request_deferred,
-                time_out=60,
-            )
-
         logger.info("Sending request %s %s", method, uri)

         try:
-            with logcontext.PreserveLoggingContext():
-                response = yield send_request()
+            request_deferred = self.agent.request(
+                method, uri, *args, **kwargs
+            )
+            add_timeout_to_deferred(
+                request_deferred,
+                60, cancelled_to_request_timed_out_error,
+            )
+            response = yield make_deferred_yieldable(request_deferred)

             incoming_responses_counter.inc(method, response.code)
             logger.info(
@@ -509,7 +507,7 @@ class SpiderHttpClient(SimpleHttpClient):
                 reactor,
                 SpiderEndpointFactory(hs)
             )
-        ), [('gzip', GzipDecoder)]
+        ), [(b'gzip', GzipDecoder)]
     )
     # We could look like Chrome:
     # self.user_agent = ("Mozilla/5.0 (%s) (KHTML, like Gecko)
@@ -12,8 +12,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import socket
-
 from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
 from twisted.internet import defer, reactor
 from twisted.internet.error import ConnectError
@@ -33,7 +31,7 @@ SERVER_CACHE = {}

 # our record of an individual server which can be tried to reach a destination.
 #
-# "host" is actually a dotted-quad or ipv6 address string. Except when there's
+# "host" is the hostname acquired from the SRV record. Except when there's
 # no SRV record, in which case it is the original hostname.
 _Server = collections.namedtuple(
     "_Server", "priority weight host port expires"
@@ -117,9 +115,14 @@ class _WrappedConnection(object):
         if time.time() - self.last_request >= 2.5 * 60:
             self.abort()
             # Abort the underlying TLS connection. The abort() method calls
-            # loseConnection() on the underlying TLS connection which tries to
+            # loseConnection() on the TLS connection which tries to
             # shutdown the connection cleanly. We call abortConnection()
-            # since that will promptly close the underlying TCP connection.
+            # since that will promptly close the TLS connection.
+            #
+            # In Twisted >18.4; the TLS connection will be None if it has closed
+            # which will make abortConnection() throw. Check that the TLS connection
+            # is not None before trying to close it.
+            if self.transport.getHandle() is not None:
                 self.transport.abortConnection()

     def request(self, request):
@@ -288,7 +291,7 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t
         if (len(answers) == 1
                 and answers[0].type == dns.SRV
                 and answers[0].payload
-                and answers[0].payload.target == dns.Name('.')):
+                and answers[0].payload.target == dns.Name(b'.')):
             raise ConnectError("Service %s unavailable" % service_name)

         for answer in answers:
@@ -297,19 +300,12 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t

             payload = answer.payload

-            hosts = yield _get_hosts_for_srv_record(
-                dns_client, str(payload.target)
-            )
-
-            for (ip, ttl) in hosts:
-                host_ttl = min(answer.ttl, ttl)
-
-                servers.append(_Server(
-                    host=ip,
-                    port=int(payload.port),
-                    priority=int(payload.priority),
-                    weight=int(payload.weight),
-                    expires=int(clock.time()) + host_ttl,
-                ))
+            servers.append(_Server(
+                host=str(payload.target),
+                port=int(payload.port),
+                priority=int(payload.priority),
+                weight=int(payload.weight),
+                expires=int(clock.time()) + answer.ttl,
+            ))

         servers.sort()
@@ -328,81 +324,3 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t
             raise e

     defer.returnValue(servers)
-
-
-@defer.inlineCallbacks
-def _get_hosts_for_srv_record(dns_client, host):
-    """Look up each of the hosts in a SRV record
-
-    Args:
-        dns_client (twisted.names.dns.IResolver):
-        host (basestring): host to look up
-
-    Returns:
-        Deferred[list[(str, int)]]: a list of (host, ttl) pairs
-    """
-    ip4_servers = []
-    ip6_servers = []
-
-    def cb(res):
-        # lookupAddress and lookupIP6Address return a three-tuple
-        # giving the answer, authority, and additional sections of the
-        # response.
-        #
-        # we only care about the answers.
-        return res[0]
-
-    def eb(res, record_type):
-        if res.check(DNSNameError):
-            return []
-        logger.warn("Error looking up %s for %s: %s", record_type, host, res)
-        return res
-
-    # no logcontexts here, so we can safely fire these off and gatherResults
-    d1 = dns_client.lookupAddress(host).addCallbacks(
-        cb, eb, errbackArgs=("A", ))
-    d2 = dns_client.lookupIPV6Address(host).addCallbacks(
-        cb, eb, errbackArgs=("AAAA", ))
-    results = yield defer.DeferredList(
-        [d1, d2], consumeErrors=True)
-
-    # if all of the lookups failed, raise an exception rather than blowing out
-    # the cache with an empty result.
-    if results and all(s == defer.FAILURE for (s, _) in results):
-        defer.returnValue(results[0][1])
-
-    for (success, result) in results:
-        if success == defer.FAILURE:
-            continue
-
-        for answer in result:
-            if not answer.payload:
-                continue
-
-            try:
-                if answer.type == dns.A:
-                    ip = answer.payload.dottedQuad()
-                    ip4_servers.append((ip, answer.ttl))
-                elif answer.type == dns.AAAA:
-                    ip = socket.inet_ntop(
-                        socket.AF_INET6, answer.payload.address,
-                    )
-                    ip6_servers.append((ip, answer.ttl))
-                else:
-                    # the most likely candidate here is a CNAME record.
-                    # rfc2782 says srvs may not point to aliases.
-                    logger.warn(
-                        "Ignoring unexpected DNS record type %s for %s",
-                        answer.type, host,
-                    )
-                    continue
-            except Exception as e:
-                logger.warn("Ignoring invalid DNS response for %s: %s",
-                            host, e)
-                continue
-
-    # keep the ipv4 results before the ipv6 results, mostly to match historical
-    # behaviour.
-    defer.returnValue(ip4_servers + ip6_servers)
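The net effect of this hunk is that federation connections now keep the SRV target hostname, letting the endpoint machinery resolve A/AAAA records itself, instead of pre-resolving every target to IP addresses. A rough sketch of the simplified lookup, under the assumption that ``twisted.names.client.lookupService`` is used for the SRV query:

    from twisted.internet import defer
    from twisted.names import client, dns


    @defer.inlineCallbacks
    def resolve_srv_targets(service_name):
        """Minimal sketch: return (hostname, port) pairs from a SRV lookup,
        without resolving the targets to IP addresses ourselves."""
        answers, _authority, _additional = yield client.lookupService(service_name)
        targets = []
        for answer in answers:
            if answer.type != dns.SRV or not answer.payload:
                continue
            targets.append((str(answer.payload.target), int(answer.payload.port)))
        defer.returnValue(targets)

    # e.g. resolve_srv_targets("_matrix._tcp.example.com")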
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -12,17 +13,19 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import synapse.util.retryutils
 from twisted.internet import defer, reactor, protocol
 from twisted.internet.error import DNSLookupError
 from twisted.web.client import readBody, HTTPConnectionPool, Agent
 from twisted.web.http_headers import Headers
 from twisted.web._newclient import ResponseDone

+from synapse.http import cancelled_to_request_timed_out_error
 from synapse.http.endpoint import matrix_federation_endpoint
-from synapse.util.async import sleep
-from synapse.util import logcontext
 import synapse.metrics
+from synapse.util.async import sleep, add_timeout_to_deferred
+from synapse.util import logcontext
+from synapse.util.logcontext import make_deferred_yieldable
+import synapse.util.retryutils

 from canonicaljson import encode_canonical_json

@@ -38,8 +41,7 @@ import logging
 import random
 import sys
 import urllib
-import urlparse
+from six.moves.urllib import parse as urlparse

 logger = logging.getLogger(__name__)
 outbound_logger = logging.getLogger("synapse.http.outbound")

@@ -184,21 +186,20 @@ class MatrixFederationHttpClient(object):
                     producer = body_callback(method, http_url_bytes, headers_dict)

                 try:
-                    def send_request():
-                        request_deferred = self.agent.request(
-                            method,
-                            url_bytes,
-                            Headers(headers_dict),
-                            producer
-                        )
-
-                        return self.clock.time_bound_deferred(
-                            request_deferred,
-                            time_out=timeout / 1000. if timeout else 60,
-                        )
-
-                    with logcontext.PreserveLoggingContext():
-                        response = yield send_request()
+                    request_deferred = self.agent.request(
+                        method,
+                        url_bytes,
+                        Headers(headers_dict),
+                        producer
+                    )
+                    add_timeout_to_deferred(
+                        request_deferred,
+                        timeout / 1000. if timeout else 60,
+                        cancelled_to_request_timed_out_error,
+                    )
+                    response = yield make_deferred_yieldable(
+                        request_deferred,
+                    )

                     log_result = "%d %s" % (response.code, response.phrase,)
                     break
@@ -286,7 +287,8 @@ class MatrixFederationHttpClient(object):
         headers_dict[b"Authorization"] = auth_headers

     @defer.inlineCallbacks
-    def put_json(self, destination, path, data={}, json_data_callback=None,
+    def put_json(self, destination, path, args={}, data={},
+                 json_data_callback=None,
                  long_retries=False, timeout=None,
                  ignore_backoff=False,
                  backoff_on_404=False):
@@ -296,6 +298,7 @@ class MatrixFederationHttpClient(object):
             destination (str): The remote server to send the HTTP request
                 to.
             path (str): The HTTP path.
+            args (dict): query params
             data (dict): A dict containing the data that will be used as
                 the request body. This will be encoded as JSON.
             json_data_callback (callable): A callable returning the dict to
@@ -342,6 +345,7 @@ class MatrixFederationHttpClient(object):
             path,
             body_callback=body_callback,
             headers_dict={"Content-Type": ["application/json"]},
+            query_bytes=encode_query_args(args),
             long_retries=long_retries,
             timeout=timeout,
             ignore_backoff=ignore_backoff,
@@ -373,6 +377,7 @@ class MatrixFederationHttpClient(object):
                 giving up. None indicates no timeout.
             ignore_backoff (bool): true to ignore the historical backoff data and
                 try the request anyway.
+            args (dict): query params

         Returns:
             Deferred: Succeeds when we get a 2xx HTTP response. The result
                 will be the decoded JSON body.
@@ -37,7 +37,7 @@ from twisted.web.util import redirectTo
 import collections
 import logging
 import urllib
-import ujson
+import simplejson

 logger = logging.getLogger(__name__)

@@ -113,6 +113,11 @@ response_db_sched_duration = metrics.register_counter(
     "response_db_sched_duration_seconds", labels=["method", "servlet", "tag"]
 )

+# size in bytes of the response written
+response_size = metrics.register_counter(
+    "response_size", labels=["method", "servlet", "tag"]
+)
+
 _next_request_id = 0

@@ -324,7 +329,7 @@ class JsonResource(HttpServer, resource.Resource):
             register_paths, so will return (possibly via Deferred) either
             None, or a tuple of (http code, response body).
         """
-        if request.method == "OPTIONS":
+        if request.method == b"OPTIONS":
             return _options_handler, {}

         # Loop through all the registered callbacks to check if the method
@@ -426,6 +431,8 @@ class RequestMetrics(object):
             context.db_sched_duration_ms / 1000., request.method, self.name, tag
         )

+        response_size.inc_by(request.sentLength, request.method, self.name, tag)
+
 class RootRedirect(resource.Resource):
     """Redirects the root '/' path to another path."""
@@ -461,8 +468,7 @@ def respond_with_json(request, code, json_object, send_cors=False,
     if canonical_json or synapse.events.USE_FROZEN_DICTS:
         json_bytes = encode_canonical_json(json_object)
     else:
-        # ujson doesn't like frozen_dicts.
-        json_bytes = ujson.dumps(json_object, ensure_ascii=False)
+        json_bytes = simplejson.dumps(json_object)

     return respond_with_json_bytes(
         request, code, json_bytes,
@@ -489,6 +495,7 @@ def respond_with_json_bytes(request, code, json_bytes, send_cors=False,
     request.setHeader(b"Content-Type", b"application/json")
     request.setHeader(b"Server", version_string)
     request.setHeader(b"Content-Length", b"%d" % (len(json_bytes),))
+    request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")

     if send_cors:
         set_cors_headers(request)
@@ -536,9 +543,9 @@ def finish_request(request):

 def _request_user_agent_is_curl(request):
     user_agents = request.requestHeaders.getRawHeaders(
-        "User-Agent", default=[]
+        b"User-Agent", default=[]
     )
     for user_agent in user_agents:
-        if "curl" in user_agent:
+        if b"curl" in user_agent:
             return True
     return False
@@ -20,7 +20,7 @@ import logging
 import re
 import time

-ACCESS_TOKEN_RE = re.compile(r'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
+ACCESS_TOKEN_RE = re.compile(br'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')

 class SynapseRequest(Request):
@@ -43,12 +43,12 @@ class SynapseRequest(Request):
     def get_redacted_uri(self):
         return ACCESS_TOKEN_RE.sub(
-            r'\1<redacted>\3',
+            br'\1<redacted>\3',
             self.uri
         )

     def get_user_agent(self):
-        return self.requestHeaders.getRawHeaders("User-Agent", [None])[-1]
+        return self.requestHeaders.getRawHeaders(b"User-Agent", [None])[-1]

     def started_processing(self):
         self.site.access_logger.info(
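The move to a bytes pattern is needed because Twisted hands ``Request.uri`` over as bytes under Python 3, where applying a str pattern would raise a TypeError. For example:

    import re

    ACCESS_TOKEN_RE = re.compile(br'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')

    uri = b"/_matrix/client/r0/sync?access_token=secret&since=s1"
    print(ACCESS_TOKEN_RE.sub(br'\1<redacted>\3', uri))
    # b'/_matrix/client/r0/sync?access_token=<redacted>&since=s1'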
@@ -17,12 +17,13 @@ import logging
 import functools
 import time
 import gc
+import platform

 from twisted.internet import reactor

 from .metric import (
     CounterMetric, CallbackMetric, DistributionMetric, CacheMetric,
-    MemoryUsageMetric,
+    MemoryUsageMetric, GaugeMetric,
 )
 from .process_collector import register_process_collector

@@ -30,6 +31,7 @@ from .process_collector import register_process_collector

 logger = logging.getLogger(__name__)

+running_on_pypy = platform.python_implementation() == 'PyPy'
 all_metrics = []
 all_collectors = []

@@ -63,6 +65,13 @@ class Metrics(object):
         """
         return self._register(CounterMetric, *args, **kwargs)

+    def register_gauge(self, *args, **kwargs):
+        """
+        Returns:
+            GaugeMetric
+        """
+        return self._register(GaugeMetric, *args, **kwargs)
+
     def register_callback(self, *args, **kwargs):
         """
         Returns:
@@ -142,6 +151,32 @@ reactor_metrics = get_metrics_for("python.twisted.reactor")
 tick_time = reactor_metrics.register_distribution("tick_time")
 pending_calls_metric = reactor_metrics.register_distribution("pending_calls")

+synapse_metrics = get_metrics_for("synapse")
+
+# Used to track where various components have processed in the event stream,
+# e.g. federation sending, appservice sending, etc.
+event_processing_positions = synapse_metrics.register_gauge(
+    "event_processing_positions", labels=["name"],
+)
+
+# Used to track the current max events stream position
+event_persisted_position = synapse_metrics.register_gauge(
+    "event_persisted_position",
+)
+
+# Used to track the received_ts of the last event processed by various
+# components
+event_processing_last_ts = synapse_metrics.register_gauge(
+    "event_processing_last_ts", labels=["name"],
+)
+
+# Used to track the lag processing events. This is the time difference
+# between the last processed event's received_ts and the time it was
+# finished being processed.
+event_processing_lag = synapse_metrics.register_gauge(
+    "event_processing_lag", labels=["name"],
+)
+
 def runUntilCurrentTimer(func):
@@ -174,6 +209,9 @@ def runUntilCurrentTimer(func):
             tick_time.inc_by(end - start)
             pending_calls_metric.inc_by(num_pending)

+            if running_on_pypy:
+                return ret
+
             # Check if we need to do a manual GC (since its been disabled), and do
             # one if necessary.
             threshold = gc.get_threshold()
@@ -206,6 +244,7 @@ try:

     # We manually run the GC each reactor tick so that we can get some metrics
     # about time spent doing GC,
-    gc.disable()
+    if not running_on_pypy:
+        gc.disable()
 except AttributeError:
     pass
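Unlike the existing counters, a gauge can move in both directions, which is exactly what stream positions and lag need. Roughly how the new registrations are used by the event-processing components (the variables and the label value here are illustrative):

    # how far the federation sender has got through the event stream
    event_processing_positions.set(stream_position, "federation_sender")

    # lag between an event being received and finishing processing, in ms
    event_processing_lag.set(now_ms - received_ts, "federation_sender")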
@@ -16,6 +16,7 @@

 from itertools import chain
 import logging
+import re

 logger = logging.getLogger(__name__)

@@ -56,8 +57,7 @@ class BaseMetric(object):
         return not len(self.labels)

     def _render_labelvalue(self, value):
-        # TODO: escape backslashes, quotes and newlines
-        return '"%s"' % (value)
+        return '"%s"' % (_escape_label_value(value),)

     def _render_key(self, values):
         if self.is_scalar():
@@ -115,7 +115,7 @@ class CounterMetric(BaseMetric):
         # dict[list[str]]: value for each set of label values. the keys are the
         # label values, in the same order as the labels in self.labels.
         #
-        # (if the metric is a scalar, the (single) key is the empty list).
+        # (if the metric is a scalar, the (single) key is the empty tuple).
         self.counts = {}

         # Scalar metrics are never empty
@@ -145,6 +145,36 @@ class CounterMetric(BaseMetric):
         )

+class GaugeMetric(BaseMetric):
+    """A metric that can go up or down
+    """
+
+    def __init__(self, *args, **kwargs):
+        super(GaugeMetric, self).__init__(*args, **kwargs)
+
+        # dict[list[str]]: value for each set of label values. the keys are the
+        # label values, in the same order as the labels in self.labels.
+        #
+        # (if the metric is a scalar, the (single) key is the empty tuple).
+        self.guages = {}
+
+    def set(self, v, *values):
+        if len(values) != self.dimension():
+            raise ValueError(
+                "Expected as many values to inc() as labels (%d)" % (self.dimension())
+            )
+
+        # TODO: should assert that the tag values are all strings
+        self.guages[values] = v
+
+    def render(self):
+        return flatten(
+            self._render_for_labels(k, self.guages[k])
+            for k in sorted(self.guages.keys())
+        )
+
 class CallbackMetric(BaseMetric):
     """A metric that returns the numeric value returned by a callback whenever
     it is rendered. Typically this is used to implement gauges that yield the
@@ -269,3 +299,29 @@ class MemoryUsageMetric(object):
             "process_psutil_rss:total %d" % sum_rss,
             "process_psutil_rss:count %d" % len_rss,
         ]
+
+
+def _escape_character(m):
+    """Replaces a single character with its escape sequence.
+
+    Args:
+        m (re.MatchObject): A match object whose first group is the single
+            character to replace
+
+    Returns:
+        str
+    """
+    c = m.group(1)
+    if c == "\\":
+        return "\\\\"
+    elif c == "\"":
+        return "\\\""
+    elif c == "\n":
+        return "\\n"
+    return c
+
+
+def _escape_label_value(value):
+    """Takes a label value and escapes quotes, newlines and backslashes
+    """
+    return re.sub(r"([\n\"\\])", _escape_character, value)
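This escaping matches the Prometheus text exposition format, in which backslashes, double quotes and newlines in label values must appear in escaped form. A few concrete cases:

    assert _escape_label_value('say "hi"') == 'say \\"hi\\"'
    assert _escape_label_value("back\\slash") == "back\\\\slash"
    assert _escape_label_value("line\nbreak") == "line\\nbreak"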
@@ -14,14 +14,17 @@
 # limitations under the License.

 from twisted.internet import defer

 from synapse.api.constants import EventTypes, Membership
 from synapse.api.errors import AuthError
 from synapse.handlers.presence import format_user_presence_state
-from synapse.util import DeferredTimedOutError
 from synapse.util.logutils import log_function
-from synapse.util.async import ObservableDeferred
-from synapse.util.logcontext import PreserveLoggingContext, preserve_fn
+from synapse.util.async import (
+    ObservableDeferred, add_timeout_to_deferred,
+    DeferredTimeoutError,
+)
+from synapse.util.logcontext import PreserveLoggingContext, run_in_background
 from synapse.util.metrics import Measure
 from synapse.types import StreamToken
 from synapse.visibility import filter_events_for_client
@@ -144,6 +147,7 @@ class _NotifierUserStream(object):
 class EventStreamResult(namedtuple("EventStreamResult", ("events", "tokens"))):
     def __nonzero__(self):
         return bool(self.events)
+    __bool__ = __nonzero__  # python3

 class Notifier(object):
@@ -250,9 +254,7 @@ class Notifier(object):
     def _on_new_room_event(self, event, room_stream_id, extra_users=[]):
         """Notify any user streams that are interested in this room event"""
         # poke any interested application service.
-        preserve_fn(self.appservice_handler.notify_interested_services)(
-            room_stream_id
-        )
+        run_in_background(self._notify_app_services, room_stream_id)

         if self.federation_sender:
             self.federation_sender.notify_new_events(room_stream_id)
@@ -266,6 +268,13 @@ class Notifier(object):
             rooms=[event.room_id],
         )

+    @defer.inlineCallbacks
+    def _notify_app_services(self, room_stream_id):
+        try:
+            yield self.appservice_handler.notify_interested_services(room_stream_id)
+        except Exception:
+            logger.exception("Error notifying application services of event")
+
     def on_new_event(self, stream_key, new_token, users=[], rooms=[]):
         """ Used to inform listeners that something has happend event wise.
@@ -330,11 +339,12 @@ class Notifier(object):
                     # Now we wait for the _NotifierUserStream to be told there
                     # is a new token.
                     listener = user_stream.new_listener(prev_token)
+                    add_timeout_to_deferred(
+                        listener.deferred,
+                        (end_time - now) / 1000.,
+                    )
                     with PreserveLoggingContext():
-                        yield self.clock.time_bound_deferred(
-                            listener.deferred,
-                            time_out=(end_time - now) / 1000.
-                        )
+                        yield listener.deferred

                     current_token = user_stream.current_token

@@ -345,7 +355,7 @@ class Notifier(object):
                     # Update the prev_token to the current_token since nothing
                     # has happened between the old prev_token and the current_token
                     prev_token = current_token
-                except DeferredTimedOutError:
+                except DeferredTimeoutError:
                     break
                 except defer.CancelledError:
                     break
@@ -550,13 +560,14 @@ class Notifier(object):
                 if end_time <= now:
                     break

+                add_timeout_to_deferred(
+                    listener.deferred,
+                    (end_time - now) / 1000.,
+                )
                 try:
                     with PreserveLoggingContext():
-                        yield self.clock.time_bound_deferred(
-                            listener.deferred,
-                            time_out=(end_time - now) / 1000.
-                        )
-                except DeferredTimedOutError:
+                        yield listener.deferred
+                except DeferredTimeoutError:
                     break
                 except defer.CancelledError:
                     break
@@ -77,10 +77,13 @@ class EmailPusher(object):
     @defer.inlineCallbacks
     def on_started(self):
         if self.mailer is not None:
+            try:
                 self.throttle_params = yield self.store.get_throttle_params_by_room(
                     self.pusher_id
                 )
                 yield self._process()
+            except Exception:
+                logger.exception("Error starting email pusher")

     def on_stop(self):
         if self.timed_call:
@@ -18,8 +18,8 @@ import logging
 from twisted.internet import defer, reactor
 from twisted.internet.error import AlreadyCalled, AlreadyCancelled

-import push_rule_evaluator
-import push_tools
+from . import push_rule_evaluator
+from . import push_tools
 import synapse
 from synapse.push import PusherConfigException
 from synapse.util.logcontext import LoggingContext
@@ -94,7 +94,10 @@ class HttpPusher(object):
     @defer.inlineCallbacks
     def on_started(self):
+        try:
             yield self._process()
+        except Exception:
+            logger.exception("Error starting http pusher")

     @defer.inlineCallbacks
     def on_new_notifications(self, min_stream_ordering, max_stream_ordering):
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from httppusher import HttpPusher
+from .httppusher import HttpPusher

 import logging
 logger = logging.getLogger(__name__)
@@ -14,13 +14,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import logging
+
 from twisted.internet import defer

-from .pusher import PusherFactory
-from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
+from synapse.push.pusher import PusherFactory
 from synapse.util.async import run_on_reactor
-
-import logging
+from synapse.util.logcontext import make_deferred_yieldable, run_in_background

 logger = logging.getLogger(__name__)

@@ -137,12 +137,15 @@ class PusherPool:
                 if u in self.pushers:
                     for p in self.pushers[u].values():
                         deferreds.append(
-                            preserve_fn(p.on_new_notifications)(
-                                min_stream_id, max_stream_id
+                            run_in_background(
+                                p.on_new_notifications,
+                                min_stream_id, max_stream_id,
                             )
                         )

-            yield make_deferred_yieldable(defer.gatherResults(deferreds))
+            yield make_deferred_yieldable(
+                defer.gatherResults(deferreds, consumeErrors=True),
+            )
         except Exception:
             logger.exception("Exception in pusher on_new_notifications")

@@ -164,10 +167,15 @@ class PusherPool:
                 if u in self.pushers:
                     for p in self.pushers[u].values():
                         deferreds.append(
-                            preserve_fn(p.on_new_receipts)(min_stream_id, max_stream_id)
+                            run_in_background(
+                                p.on_new_receipts,
+                                min_stream_id, max_stream_id,
+                            )
                         )

-            yield make_deferred_yieldable(defer.gatherResults(deferreds))
+            yield make_deferred_yieldable(
+                defer.gatherResults(deferreds, consumeErrors=True),
+            )
         except Exception:
             logger.exception("Exception in pusher on_new_receipts")

@@ -207,7 +215,7 @@ class PusherPool:
             if appid_pushkey in byuser:
                 byuser[appid_pushkey].on_stop()
             byuser[appid_pushkey] = p
-            preserve_fn(p.on_started)()
+            run_in_background(p.on_started)

         logger.info("Started pushers")
@@ -1,5 +1,6 @@
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2017 Vector Creations Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -18,28 +19,43 @@ from distutils.version import LooseVersion

 logger = logging.getLogger(__name__)

+# this dict maps from python package name to a list of modules we expect it to
+# provide.
+#
+# the key is a "requirement specifier", as used as a parameter to `pip
+# install`[1], or an `install_requires` argument to `setuptools.setup` [2].
+#
+# the value is a sequence of strings; each entry should be the name of the
+# python module, optionally followed by a version assertion which can be either
+# ">=<ver>" or "==<ver>".
+#
+# [1] https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers.
+# [2] https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-dependencies
 REQUIREMENTS = {
     "jsonschema>=2.5.1": ["jsonschema>=2.5.1"],
     "frozendict>=0.4": ["frozendict"],
     "unpaddedbase64>=1.1.0": ["unpaddedbase64>=1.1.0"],
-    "canonicaljson>=1.0.0": ["canonicaljson>=1.0.0"],
+    "canonicaljson>=1.1.3": ["canonicaljson>=1.1.3"],
     "signedjson>=1.0.0": ["signedjson>=1.0.0"],
     "pynacl>=1.2.1": ["nacl>=1.2.1", "nacl.bindings"],
     "service_identity>=1.0.0": ["service_identity>=1.0.0"],
     "Twisted>=16.0.0": ["twisted>=16.0.0"],
-    "pyopenssl>=0.14": ["OpenSSL>=0.14"],
+
+    # We use crypto.get_elliptic_curve which is only supported in >=0.15
+    "pyopenssl>=0.15": ["OpenSSL>=0.15"],
+
     "pyyaml": ["yaml"],
     "pyasn1": ["pyasn1"],
     "daemonize": ["daemonize"],
     "bcrypt": ["bcrypt>=3.1.0"],
     "pillow": ["PIL"],
     "pydenticon": ["pydenticon"],
-    "ujson": ["ujson"],
     "blist": ["blist"],
     "pysaml2>=3.0.0": ["saml2>=3.0.0"],
     "pymacaroons-pynacl": ["pymacaroons"],
     "msgpack-python>=0.3.0": ["msgpack"],
     "phonenumbers>=8.2.0": ["phonenumbers"],
+    "six": ["six"],
 }
 CONDITIONAL_REQUIREMENTS = {
     "web_client": {
@@ -23,7 +23,6 @@ from synapse.events.snapshot import EventContext
 from synapse.http.servlet import RestServlet, parse_json_object_from_request
 from synapse.util.async import sleep
 from synapse.util.caches.response_cache import ResponseCache
-from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
 from synapse.util.metrics import Measure
 from synapse.types import Requester, UserID

@@ -115,20 +114,15 @@ class ReplicationSendEventRestServlet(RestServlet):
         self.clock = hs.get_clock()

         # The responses are tiny, so we may as well cache them for a while
-        self.response_cache = ResponseCache(hs, timeout_ms=30 * 60 * 1000)
+        self.response_cache = ResponseCache(hs, "send_event", timeout_ms=30 * 60 * 1000)

     def on_PUT(self, request, event_id):
-        result = self.response_cache.get(event_id)
-        if not result:
-            result = self.response_cache.set(
-                event_id,
-                self._handle_request(request)
-            )
-        else:
-            logger.warn("Returning cached response")
-        return make_deferred_yieldable(result)
-
-    @preserve_fn
+        return self.response_cache.wrap(
+            event_id,
+            self._handle_request,
+            request
+        )
+
     @defer.inlineCallbacks
     def _handle_request(self, request):
         with Measure(self.clock, "repl_send_event_parse"):
@@ -19,11 +19,13 @@ allowed to be sent by which side.
 """

 import logging
-import ujson as json
+import simplejson

 logger = logging.getLogger(__name__)

+_json_encoder = simplejson.JSONEncoder(namedtuple_as_object=False)
+

 class Command(object):
     """The base command class.
@@ -100,14 +102,14 @@ class RdataCommand(Command):
         return cls(
             stream_name,
             None if token == "batch" else int(token),
-            json.loads(row_json)
+            simplejson.loads(row_json)
         )

     def to_line(self):
         return " ".join((
             self.stream_name,
             str(self.token) if self.token is not None else "batch",
-            json.dumps(self.row),
+            _json_encoder.encode(self.row),
         ))
@@ -298,10 +300,12 @@ class InvalidateCacheCommand(Command):
     def from_line(cls, line):
         cache_func, keys_json = line.split(" ", 1)

-        return cls(cache_func, json.loads(keys_json))
+        return cls(cache_func, simplejson.loads(keys_json))

     def to_line(self):
-        return " ".join((self.cache_func, json.dumps(self.keys)))
+        return " ".join((
+            self.cache_func, _json_encoder.encode(self.keys),
+        ))


 class UserIpCommand(Command):
@@ -325,14 +329,14 @@ class UserIpCommand(Command):
     def from_line(cls, line):
         user_id, jsn = line.split(" ", 1)

-        access_token, ip, user_agent, device_id, last_seen = json.loads(jsn)
+        access_token, ip, user_agent, device_id, last_seen = simplejson.loads(jsn)

         return cls(
             user_id, access_token, ip, user_agent, device_id, last_seen
         )

     def to_line(self):
-        return self.user_id + " " + json.dumps((
+        return self.user_id + " " + _json_encoder.encode((
             self.access_token, self.ip, self.user_agent, self.device_id,
             self.last_seen,
         ))
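
The dedicated encoder matters because simplejson, unlike the ujson it replaces, serialises namedtuples as JSON objects by default, which would silently change the replication wire format. A small demonstration:

import collections

import simplejson

Row = collections.namedtuple("Row", ("token", "user_id"))
row = Row(5, "@alice:example.com")

# Default simplejson behaviour turns namedtuples into objects...
print(simplejson.dumps(row))   # {"token": 5, "user_id": "@alice:example.com"}

# ...while namedtuple_as_object=False keeps them as plain arrays, matching
# the array encoding ujson produced before this change.
encoder = simplejson.JSONEncoder(namedtuple_as_object=False)
print(encoder.encode(row))     # [5, "@alice:example.com"]
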


@@ -53,12 +53,12 @@ from twisted.internet import defer
 from twisted.protocols.basic import LineOnlyReceiver
 from twisted.python.failure import Failure

-from commands import (
+from .commands import (
     COMMAND_MAP, VALID_CLIENT_COMMANDS, VALID_SERVER_COMMANDS,
     ErrorCommand, ServerCommand, RdataCommand, PositionCommand, PingCommand,
     NameCommand, ReplicateCommand, UserSyncCommand, SyncCommand,
 )
-from streams import STREAMS_MAP
+from .streams import STREAMS_MAP

 from synapse.util.stringutils import random_string
 from synapse.metrics.metric import CounterMetric


@@ -18,8 +18,8 @@
 from twisted.internet import defer, reactor
 from twisted.internet.protocol import Factory

-from streams import STREAMS_MAP, FederationStream
-from protocol import ServerReplicationStreamProtocol
+from .streams import STREAMS_MAP, FederationStream
+from .protocol import ServerReplicationStreamProtocol

 from synapse.util.metrics import Measure, measure_func
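
The dotted imports above are py3 groundwork: Python 2 falls back to implicit relative imports, while Python 3 treats a bare "from streams import ..." as absolute and raises ImportError. Illustrated below for a hypothetical module inside the same package (shown for the import rule only, since it assumes a sibling streams module):

# e.g. resource.py sitting next to streams.py in one package
from __future__ import absolute_import  # opts Python 2 into Python 3 semantics

# from streams import STREAMS_MAP    # implicit relative import: Python 2 only
from .streams import STREAMS_MAP     # explicit relative import: works on both
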


@@ -168,11 +168,24 @@ class PurgeHistoryRestServlet(ClientV1RestServlet):
                 yield self.store.find_first_stream_ordering_after_ts(ts)
             )

-            (_, depth, _) = (
+            room_event_after_stream_ordering = (
                 yield self.store.get_room_event_after_stream_ordering(
                     room_id, stream_ordering,
                 )
             )
+            if room_event_after_stream_ordering:
+                (_, depth, _) = room_event_after_stream_ordering
+            else:
+                logger.warn(
+                    "[purge] purging events not possible: No event found "
+                    "(received_ts %i => stream_ordering %i)",
+                    ts, stream_ordering,
+                )
+                raise SynapseError(
+                    404,
+                    "there is no event to be purged",
+                    errcode=Codes.NOT_FOUND,
+                )

             logger.info(
                 "[purge] purging up to depth %i (received_ts %i => "
                 "stream_ordering %i)",


@@ -52,6 +52,10 @@ class ClientV1RestServlet(RestServlet):
     """A base Synapse REST Servlet for the client version 1 API.
     """

+    # This subclass was presumably created to allow the auth for the v1
+    # protocol version to be different, however this behaviour was removed.
+    # it may no longer be necessary
+
     def __init__(self, hs):
         """
         Args:
@@ -59,5 +63,5 @@ class ClientV1RestServlet(RestServlet):
         """
         self.hs = hs
         self.builder_factory = hs.get_event_builder_factory()
-        self.auth = hs.get_v1auth()
+        self.auth = hs.get_auth()
         self.txns = HttpTransactionCache(hs.get_clock())


@@ -25,7 +25,7 @@ from .base import ClientV1RestServlet, client_path_patterns

 import simplejson as json
 import urllib
-import urlparse
+from six.moves.urllib import parse as urlparse

 import logging

 from saml2 import BINDING_HTTP_POST
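
six.moves papers over the stdlib rename: Python 2's urlparse module became urllib.parse in Python 3, and one import line now covers both. For example:

from six.moves.urllib import parse as urlparse

# Resolves to the urlparse module on Python 2 and urllib.parse on Python 3.
print(urlparse.unquote("%21room%3Aexample.com"))                # !room:example.com
print(urlparse.urlparse("https://example.com/_matrix").netloc)  # example.com
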


@@ -44,7 +44,10 @@ class LogoutRestServlet(ClientV1RestServlet):
             requester = yield self.auth.get_user_by_req(request)
         except AuthError:
             # this implies the access token has already been deleted.
-            pass
+            defer.returnValue((401, {
+                "errcode": "M_UNKNOWN_TOKEN",
+                "error": "Access Token unknown or expired"
+            }))
         else:
             if requester.device_id is None:
                 # the acccess token wasn't associated with a device.


@@ -150,7 +150,7 @@ class PushersRemoveRestServlet(RestServlet):
         super(RestServlet, self).__init__()
         self.hs = hs
         self.notifier = hs.get_notifier()
-        self.auth = hs.get_v1auth()
+        self.auth = hs.get_auth()
         self.pusher_pool = self.hs.get_pusherpool()

     @defer.inlineCallbacks


@@ -30,6 +30,8 @@ from hashlib import sha1
 import hmac
 import logging

+from six import string_types
+
 logger = logging.getLogger(__name__)
@@ -333,11 +335,11 @@ class RegisterRestServlet(ClientV1RestServlet):
     def _do_shared_secret(self, request, register_json, session):
         yield run_on_reactor()

-        if not isinstance(register_json.get("mac", None), basestring):
+        if not isinstance(register_json.get("mac", None), string_types):
             raise SynapseError(400, "Expected mac.")

-        if not isinstance(register_json.get("user", None), basestring):
+        if not isinstance(register_json.get("user", None), string_types):
             raise SynapseError(400, "Expected 'user' key.")

-        if not isinstance(register_json.get("password", None), basestring):
+        if not isinstance(register_json.get("password", None), string_types):
             raise SynapseError(400, "Expected 'password' key.")

         if not self.hs.config.registration_shared_secret:
@@ -348,9 +350,9 @@ class RegisterRestServlet(ClientV1RestServlet):
         admin = register_json.get("admin", None)

         # Its important to check as we use null bytes as HMAC field separators
-        if "\x00" in user:
+        if b"\x00" in user:
             raise SynapseError(400, "Invalid user")
-        if "\x00" in password:
+        if b"\x00" in password:
             raise SynapseError(400, "Invalid password")

         # str() because otherwise hmac complains that 'unicode' does not
@@ -358,14 +360,14 @@ class RegisterRestServlet(ClientV1RestServlet):
         got_mac = str(register_json["mac"])

         want_mac = hmac.new(
-            key=self.hs.config.registration_shared_secret,
+            key=self.hs.config.registration_shared_secret.encode(),
             digestmod=sha1,
         )
         want_mac.update(user)
-        want_mac.update("\x00")
+        want_mac.update(b"\x00")
         want_mac.update(password)
-        want_mac.update("\x00")
-        want_mac.update("admin" if admin else "notadmin")
+        want_mac.update(b"\x00")
+        want_mac.update(b"admin" if admin else b"notadmin")
         want_mac = want_mac.hexdigest()

         if compare_digest(want_mac, got_mac):
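
With Python 3 in mind, every hmac input has to be bytes; mixing str and bytes raises TypeError there. A self-contained sketch of the MAC scheme shown above (registration_mac is a hypothetical helper, not part of Synapse):

import hmac
from hashlib import sha1


def registration_mac(shared_secret, user, password, admin=False):
    # Fields are joined with NUL bytes, which is why the code above
    # switches every literal to b"..." and encodes the secret.
    mac = hmac.new(key=shared_secret.encode(), digestmod=sha1)
    mac.update(user)
    mac.update(b"\x00")
    mac.update(password)
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()


print(registration_mac("secret", b"alice", b"hunter2"))
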


@@ -28,9 +28,10 @@ from synapse.http.servlet import (
     parse_json_object_from_request, parse_string, parse_integer
 )

+from six.moves.urllib import parse as urlparse
+
 import logging
-import urllib
-import ujson as json
+import simplejson as json

 logger = logging.getLogger(__name__)
@@ -165,17 +166,12 @@ class RoomStateEventRestServlet(ClientV1RestServlet):
                 content=content,
             )
         else:
-            event, context = yield self.event_creation_hander.create_event(
+            event = yield self.event_creation_hander.create_and_send_nonmember_event(
                 requester,
                 event_dict,
-                token_id=requester.access_token_id,
                 txn_id=txn_id,
             )
-            yield self.event_creation_hander.send_nonmember_event(
-                requester, event, context,
-            )

         ret = {}
         if event:
             ret = {"event_id": event.event_id}
@@ -438,7 +434,7 @@ class RoomMessageListRestServlet(ClientV1RestServlet):
         as_client_event = "raw" not in request.args
         filter_bytes = request.args.get("filter", None)
         if filter_bytes:
-            filter_json = urllib.unquote(filter_bytes[-1]).decode("UTF-8")
+            filter_json = urlparse.unquote(filter_bytes[-1]).decode("UTF-8")
             event_filter = Filter(json.loads(filter_json))
         else:
             event_filter = None
@@ -655,7 +651,12 @@ class RoomMembershipRestServlet(ClientV1RestServlet):
             content=event_content,
         )

-        defer.returnValue((200, {}))
+        return_value = {}
+
+        if membership_action == "join":
+            return_value["room_id"] = room_id
+
+        defer.returnValue((200, return_value))

     def _has_3pid_invite_keys(self, content):
         for key in {"id_server", "medium", "address"}:
@@ -718,8 +719,8 @@ class RoomTypingRestServlet(ClientV1RestServlet):
     def on_PUT(self, request, room_id, user_id):
         requester = yield self.auth.get_user_by_req(request)

-        room_id = urllib.unquote(room_id)
-        target_user = UserID.from_string(urllib.unquote(user_id))
+        room_id = urlparse.unquote(room_id)
+        target_user = UserID.from_string(urlparse.unquote(user_id))

         content = parse_json_object_from_request(request)


@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -401,6 +402,32 @@ class GroupInvitedUsersServlet(RestServlet):
         defer.returnValue((200, result))


+class GroupSettingJoinPolicyServlet(RestServlet):
+    """Set group join policy
+    """
+    PATTERNS = client_v2_patterns("/groups/(?P<group_id>[^/]*)/settings/m.join_policy$")
+
+    def __init__(self, hs):
+        super(GroupSettingJoinPolicyServlet, self).__init__()
+        self.auth = hs.get_auth()
+        self.groups_handler = hs.get_groups_local_handler()
+
+    @defer.inlineCallbacks
+    def on_PUT(self, request, group_id):
+        requester = yield self.auth.get_user_by_req(request)
+        requester_user_id = requester.user.to_string()
+
+        content = parse_json_object_from_request(request)
+        result = yield self.groups_handler.set_group_join_policy(
+            group_id,
+            requester_user_id,
+            content,
+        )
+
+        defer.returnValue((200, result))
+
+
 class GroupCreateServlet(RestServlet):
     """Create a group
     """
@@ -738,6 +765,7 @@ def register_servlets(hs, http_server):
     GroupInvitedUsersServlet(hs).register(http_server)
     GroupUsersServlet(hs).register(http_server)
     GroupRoomServlet(hs).register(http_server)
+    GroupSettingJoinPolicyServlet(hs).register(http_server)
     GroupCreateServlet(hs).register(http_server)
     GroupAdminRoomsServlet(hs).register(http_server)
     GroupAdminRoomsConfigServlet(hs).register(http_server)
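
A hypothetical client call for the new endpoint; the request body shape is an assumption based on the servlet name, since the diff only shows the parsed JSON being forwarded to groups_handler.set_group_join_policy():

import json

import requests

resp = requests.put(
    "https://example.com/_matrix/client/r0"
    "/groups/+example:example.com/settings/m.join_policy",
    headers={"Authorization": "Bearer <access_token>"},
    # Assumed body; not taken from this diff.
    data=json.dumps({"m.join_policy": {"type": "invite"}}),
)
print(resp.status_code, resp.json())
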


@@ -20,7 +20,6 @@ import synapse
 import synapse.types
 from synapse.api.auth import get_access_token_from_request, has_access_token
 from synapse.api.constants import LoginType
-from synapse.types import RoomID, RoomAlias
 from synapse.api.errors import SynapseError, Codes, UnrecognizedRequestError
 from synapse.http.servlet import (
     RestServlet, parse_json_object_from_request, assert_params_in_request, parse_string
@@ -36,6 +35,8 @@ from hashlib import sha1
 from synapse.util.async import run_on_reactor
 from synapse.util.ratelimitutils import FederationRateLimiter

+from six import string_types
+
 # We ought to be using hmac.compare_digest() but on older pythons it doesn't
 # exist. It's a _really minor_ security flaw to use plain string comparison
@@ -211,14 +212,14 @@ class RegisterRestServlet(RestServlet):
         # in sessions. Pull out the username/password provided to us.
         desired_password = None
         if 'password' in body:
-            if (not isinstance(body['password'], basestring) or
+            if (not isinstance(body['password'], string_types) or
                     len(body['password']) > 512):
                 raise SynapseError(400, "Invalid password")
             desired_password = body["password"]

         desired_username = None
         if 'username' in body:
-            if (not isinstance(body['username'], basestring) or
+            if (not isinstance(body['username'], string_types) or
                     len(body['username']) > 512):
                 raise SynapseError(400, "Invalid username")
             desired_username = body['username']
@@ -244,7 +245,7 @@ class RegisterRestServlet(RestServlet):

             access_token = get_access_token_from_request(request)

-            if isinstance(desired_username, basestring):
+            if isinstance(desired_username, string_types):
                 result = yield self._do_appservice_registration(
                     desired_username, access_token, body
                 )
@@ -405,14 +406,6 @@ class RegisterRestServlet(RestServlet):
                 generate_token=False,
             )

-            # auto-join the user to any rooms we're supposed to dump them into
-            fake_requester = synapse.types.create_requester(registered_user_id)
-            for r in self.hs.config.auto_join_rooms:
-                try:
-                    yield self._join_user_to_room(fake_requester, r)
-                except Exception as e:
-                    logger.error("Failed to join new user to %r: %r", r, e)
-
             # remember that we've now registered that user account, and with
             # what user ID (since the user may not have specified)
             self.auth_handler.set_session_data(
@@ -445,29 +438,6 @@ class RegisterRestServlet(RestServlet):
     def on_OPTIONS(self, _):
         return 200, {}

-    @defer.inlineCallbacks
-    def _join_user_to_room(self, requester, room_identifier):
-        room_id = None
-        if RoomID.is_valid(room_identifier):
-            room_id = room_identifier
-        elif RoomAlias.is_valid(room_identifier):
-            room_alias = RoomAlias.from_string(room_identifier)
-            room_id, remote_room_hosts = (
-                yield self.room_member_handler.lookup_room_alias(room_alias)
-            )
-            room_id = room_id.to_string()
-        else:
-            raise SynapseError(400, "%s was not legal room ID or room alias" % (
-                room_identifier,
-            ))
-
-        yield self.room_member_handler.update_membership(
-            requester=requester,
-            target=requester.user,
-            room_id=room_id,
-            action="join",
-        )
-
     @defer.inlineCallbacks
     def _do_appservice_registration(self, username, as_token, body):
         user_id = yield self.registration_handler.appservice_register(
@@ -496,7 +466,7 @@ class RegisterRestServlet(RestServlet):
         # includes the password and admin flag in the hashed text. Why are
         # these different?
         want_mac = hmac.new(
-            key=self.hs.config.registration_shared_secret,
+            key=self.hs.config.registration_shared_secret.encode(),
             msg=user,
             digestmod=sha1,
         ).hexdigest()
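
string_types is six's replacement for the py2-only basestring: it is (basestring,) on Python 2 and (str,) on Python 3, so a single isinstance() check covers both text types. For example:

from six import string_types

for value in ["ascii", u"unic\u00f6de", b"bytes", 42]:
    print(repr(value), isinstance(value, string_types))
# Python 2: the first three all count as strings. Python 3: b"bytes" prints
# False, because byte strings are no longer text there.
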


@@ -33,7 +33,7 @@ from ._base import set_timeline_upper_limit
 import itertools
 import logging

-import ujson as json
+import simplejson as json

 logger = logging.getLogger(__name__)


@@ -28,7 +28,7 @@ import os

 import logging
 import urllib
-import urlparse
+from six.moves.urllib import parse as urlparse

 logger = logging.getLogger(__name__)
@@ -143,6 +143,7 @@ def respond_with_responder(request, responder, media_type, file_size, upload_name
         respond_404(request)
         return

+    logger.debug("Responding to media request with responder %s")
     add_file_headers(request, media_type, file_size, upload_name)
     with responder:
         yield responder.write_to_consumer(request)


@@ -47,7 +47,7 @@ import shutil

 import cgi
 import logging
-import urlparse
+from six.moves.urllib import parse as urlparse

 logger = logging.getLogger(__name__)


@@ -16,6 +16,8 @@
 from twisted.internet import defer, threads
 from twisted.protocols.basic import FileSender

+import six
+
 from ._base import Responder

 from synapse.util.file_consumer import BackgroundFileConsumer
@@ -119,7 +121,7 @@ class MediaStorage(object):
                     os.remove(fname)
                 except Exception:
                     pass
-                raise t, v, tb
+                six.reraise(t, v, tb)

         if not finished_called:
             raise Exception("Finished callback not called")
@@ -253,7 +255,9 @@ class FileResponder(Responder):
         self.open_file = open_file

     def write_to_consumer(self, consumer):
-        return FileSender().beginFileTransfer(self.open_file, consumer)
+        return make_deferred_yieldable(
+            FileSender().beginFileTransfer(self.open_file, consumer)
+        )

     def __exit__(self, exc_type, exc_val, exc_tb):
         self.open_file.close()
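
"raise t, v, tb" is Python 2-only syntax (a SyntaxError on Python 3), so six.reraise is the portable way to re-raise with the original traceback after cleanup. A runnable sketch of the pattern:

import sys

import six


def cleanup_and_reraise():
    # Capture the active exception, do cleanup, then re-raise with the
    # original traceback intact on both Python 2 and Python 3.
    t, v, tb = sys.exc_info()
    # ... cleanup such as os.remove(fname) would go here ...
    six.reraise(t, v, tb)


try:
    try:
        1 / 0
    except Exception:
        cleanup_and_reraise()
except ZeroDivisionError as e:
    print("re-raised:", e)
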


@@ -23,7 +23,7 @@ import re
 import shutil
 import sys
 import traceback
-import ujson as json
+import simplejson as json
 import urlparse

 from twisted.web.server import NOT_DONE_YET
@@ -35,7 +35,7 @@ from ._base import FileInfo
 from synapse.api.errors import (
     SynapseError, Codes,
 )
-from synapse.util.logcontext import preserve_fn, make_deferred_yieldable
+from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 from synapse.util.stringutils import random_string
 from synapse.util.caches.expiringcache import ExpiringCache
 from synapse.http.client import SpiderHttpClient
@@ -144,7 +144,8 @@ class PreviewUrlResource(Resource):
         observable = self._cache.get(url)

         if not observable:
-            download = preserve_fn(self._do_preview)(
+            download = run_in_background(
+                self._do_preview,
                 url, requester.user, ts,
             )
             observable = ObservableDeferred(

@@ -18,7 +18,7 @@ from twisted.internet import defer, threads
 from .media_storage import FileResponder

 from synapse.config._base import Config
-from synapse.util.logcontext import preserve_fn
+from synapse.util.logcontext import run_in_background

 import logging
 import os
@@ -87,7 +87,12 @@ class StorageProviderWrapper(StorageProvider):
             return self.backend.store_file(path, file_info)
         else:
             # TODO: Handle errors.
-            preserve_fn(self.backend.store_file)(path, file_info)
+            def store():
+                try:
+                    return self.backend.store_file(path, file_info)
+                except Exception:
+                    logger.exception("Error storing file")
+
+            run_in_background(store)
             return defer.succeed(None)

     def fetch(self, path, file_info):
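
run_in_background starts the store without waiting for it, and wrapping the call in a local function that logs its own failures stops errors from vanishing silently. A simplified, self-contained sketch; the stand-in run_in_background below skips the log-context handling the real synapse.util.logcontext helper performs:

import logging

from twisted.internet import defer, task

logging.basicConfig()
logger = logging.getLogger(__name__)


def run_in_background(f, *args, **kwargs):
    # Simplified stand-in: start f immediately and hand back its deferred
    # without waiting on it.
    return defer.maybeDeferred(f, *args, **kwargs)


def store_file(path):
    raise IOError("disk full")  # simulate a backend failure


def store(path):
    try:
        return store_file(path)
    except Exception:
        # Logged here, so the fire-and-forget caller never loses the error.
        logger.exception("Error storing file")


def main(reactor):
    run_in_background(store, "/tmp/example")  # caller does not block on this
    return defer.succeed(None)


task.react(main)
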

Some files were not shown because too many files have changed in this diff.