Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes

commit 34406cf22c

 CHANGES.md | 63
@@ -1,3 +1,66 @@
+Synapse 0.33.8 (2018-11-01)
+===========================
+
+No significant changes.
+
+
+Synapse 0.33.8rc2 (2018-10-31)
+==============================
+
+Bugfixes
+--------
+
+- Searches that request profile info now no longer fail with a 500. Fixes
+  a regression in 0.33.8rc1. ([\#4122](https://github.com/matrix-org/synapse/issues/4122))
+
+
+Synapse 0.33.8rc1 (2018-10-29)
+==============================
+
+Features
+--------
+
+- Servers with auto-join rooms will now automatically create those rooms when the first user registers ([\#3975](https://github.com/matrix-org/synapse/issues/3975))
+- Add config option to control alias creation ([\#4051](https://github.com/matrix-org/synapse/issues/4051))
+- The register_new_matrix_user script is now ported to Python 3. ([\#4085](https://github.com/matrix-org/synapse/issues/4085))
+- Configure Docker image to listen on both ipv4 and ipv6. ([\#4089](https://github.com/matrix-org/synapse/issues/4089))
+
+
+Bugfixes
+--------
+
+- Fix HTTP error response codes for federated group requests. ([\#3969](https://github.com/matrix-org/synapse/issues/3969))
+- Fix issue where Python 3 users couldn't paginate /publicRooms ([\#4046](https://github.com/matrix-org/synapse/issues/4046))
+- Fix URL previewing to work in Python 3.7 ([\#4050](https://github.com/matrix-org/synapse/issues/4050))
+- synctl will use the right python executable to run worker processes ([\#4057](https://github.com/matrix-org/synapse/issues/4057))
+- Manhole now works again on Python 3, instead of failing with a "couldn't match all kex parts" when connecting. ([\#4060](https://github.com/matrix-org/synapse/issues/4060), [\#4067](https://github.com/matrix-org/synapse/issues/4067))
+- Fix some metrics being racy and causing exceptions when polled by Prometheus. ([\#4061](https://github.com/matrix-org/synapse/issues/4061))
+- Fix bug which prevented email notifications from being sent unless an absolute path was given for `email_templates`. ([\#4068](https://github.com/matrix-org/synapse/issues/4068))
+- Correctly account for cpu usage by background threads ([\#4074](https://github.com/matrix-org/synapse/issues/4074))
+- Fix race condition where config defined reserved users were not being added to
+  the monthly active user list prior to the homeserver reactor firing up ([\#4081](https://github.com/matrix-org/synapse/issues/4081))
+- Fix bug which prevented backslashes being used in event field filters ([\#4083](https://github.com/matrix-org/synapse/issues/4083))
+
+
+Internal Changes
+----------------
+
+- Add information about the [matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy) playbook ([\#3698](https://github.com/matrix-org/synapse/issues/3698))
+- Add initial implementation of new state resolution algorithm ([\#3786](https://github.com/matrix-org/synapse/issues/3786))
+- Reduce database load when fetching state groups ([\#4011](https://github.com/matrix-org/synapse/issues/4011))
+- Various cleanups in the federation client code ([\#4031](https://github.com/matrix-org/synapse/issues/4031))
+- Run the CircleCI builds in docker containers ([\#4041](https://github.com/matrix-org/synapse/issues/4041))
+- Only colourise synctl output when attached to tty ([\#4049](https://github.com/matrix-org/synapse/issues/4049))
+- Refactor room alias creation code ([\#4063](https://github.com/matrix-org/synapse/issues/4063))
+- Make the Python scripts in the top-level scripts folders meet pep8 and pass flake8. ([\#4068](https://github.com/matrix-org/synapse/issues/4068))
+- The README now contains example for the Caddy web server. Contributed by steamp0rt. ([\#4072](https://github.com/matrix-org/synapse/issues/4072))
+- Add psutil as an explicit dependency ([\#4073](https://github.com/matrix-org/synapse/issues/4073))
+- Clean up threading and logcontexts in pushers ([\#4075](https://github.com/matrix-org/synapse/issues/4075))
+- Correctly manage logcontexts during startup to fix some "Unexpected logging context" warnings ([\#4076](https://github.com/matrix-org/synapse/issues/4076))
+- Give some more things logcontexts ([\#4077](https://github.com/matrix-org/synapse/issues/4077))
+- Clean up some bits of code which were flagged by the linter ([\#4082](https://github.com/matrix-org/synapse/issues/4082))
+
+
 Synapse 0.33.7 (2018-10-18)
 ===========================
 
@@ -1 +0,0 @@
-Add information about the [matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy) playbook

@@ -0,0 +1 @@
+Fix build of Docker image with docker-compose

@@ -1 +0,0 @@
-Add initial implementation of new state resolution algorithm

@@ -1 +0,0 @@
-Fix HTTP error response codes for federated group requests.

@@ -1 +0,0 @@
-Servers with auto-join rooms will now automatically create those rooms when the first user registers

@@ -0,0 +1 @@
+Include flags to optionally add `m.login.terms` to the registration flow when consent tracking is enabled.

@@ -1 +0,0 @@
-Reduce database load when fetching state groups

@@ -1 +0,0 @@
-Various cleanups in the federation client code

@@ -1 +0,0 @@
-Run the CircleCI builds in docker containers

@@ -1 +0,0 @@
-Fix issue where Python 3 users couldn't paginate /publicRooms

@@ -1 +0,0 @@
-Only colourise synctl output when attached to tty

@@ -1 +0,0 @@
-Fix URL priewing to work in Python 3.7

@@ -1 +0,0 @@
-Add config option to control alias creation

@@ -1 +0,0 @@
-synctl will use the right python executable to run worker processes

@@ -1 +0,0 @@
-Manhole now works again on Python 3, instead of failing with a "couldn't match all kex parts" when connecting.

@@ -1 +0,0 @@
-Fix some metrics being racy and causing exceptions when polled by Prometheus.

@@ -1 +0,0 @@
-Refactor room alias creation code

@@ -1 +0,0 @@
-Manhole now works again on Python 3, instead of failing with a "couldn't match all kex parts" when connecting.

@@ -1 +0,0 @@
-Fix bug which prevented email notifications from being sent unless an absolute path was given for `email_templates`.

@@ -1 +0,0 @@
-Make the Python scripts in the top-level scripts folders meet pep8 and pass flake8.

@@ -1 +0,0 @@
-The README now contains example for the Caddy web server. Contributed by steamp0rt.

@@ -1 +0,0 @@
-Add psutil as an explicit dependency

@@ -1 +0,0 @@
-Correctly account for cpu usage by background threads

@@ -1 +0,0 @@
-Clean up threading and logcontexts in pushers

@@ -1 +0,0 @@
-Correctly manage logcontexts during startup to fix some "Unexpected logging context" warnings

@@ -1 +0,0 @@
-Give some more things logcontexts

@@ -1,2 +0,0 @@
-Fix race condition where config defined reserved users were not being added to
-the monthly active user list prior to the homeserver reactor firing up

@@ -1 +0,0 @@
-Clean up some bits of code which were flagged by the linter

@@ -1 +0,0 @@
-Fix bug which prevented backslashes being used in event field filters

@@ -1 +0,0 @@
-The register_new_matrix_user script is now ported to Python 3.

@@ -1 +0,0 @@
-Configure Docker image to listen on both ipv4 and ipv6.

@@ -0,0 +1 @@
+Support for replacing rooms with new ones

@@ -0,0 +1 @@
+The deprecated v1 key exchange endpoints have been removed.

@@ -0,0 +1 @@
+Synapse will no longer fetch keys using the fallback deprecated v1 key exchange method and will now always use v2.

@@ -0,0 +1 @@
+Log some bits about room creation

@@ -0,0 +1 @@
+fix return code of empty key backups

@@ -0,0 +1 @@
+Fix `tox` failure on old systems

@@ -0,0 +1 @@
+If the typing stream ID goes backwards (as on a worker when the master restarts), the worker's typing handler will no longer erroneously report rooms containing new typing events.

@@ -0,0 +1 @@
+Add STATE_V2_TEST room version

@@ -0,0 +1 @@
+Fix table lock of device_lists_remote_cache which could freeze the application

@@ -0,0 +1 @@
+Include flags to optionally add `m.login.terms` to the registration flow when consent tracking is enabled.

@@ -0,0 +1 @@
+Fix exception when using state res v2 algorithm

@@ -0,0 +1 @@
+Clean up event accesses and tests

@@ -0,0 +1 @@
+The default logging config will now set an explicit log file encoding of UTF-8.

@@ -0,0 +1 @@
+Add helpers functions for getting prev and auth events of an event

@@ -0,0 +1 @@
+Generating the user consent URI no longer fails on Python 3.

@@ -0,0 +1 @@
+Include flags to optionally add `m.login.terms` to the registration flow when consent tracking is enabled.

@@ -0,0 +1 @@
+Add some tests for the HTTP pusher.
@@ -6,9 +6,11 @@ version: '3'
 services:
 
   synapse:
-    build: ../..
+    build:
+      context: ../..
+      dockerfile: docker/Dockerfile
     image: docker.io/matrixdotorg/synapse:latest
-    # Since snyapse does not retry to connect to the database, restart upon
+    # Since synapse does not retry to connect to the database, restart upon
     # failure
     restart: unless-stopped
     # See the readme for a full documentation of the environment settings

@@ -47,4 +49,4 @@ services:
       # You may store the database tables in a local folder..
       - ./schemas:/var/lib/postgresql/data
       # .. or store them on some high performance storage for better results
-      # - /path/to/ssd/storage:/var/lib/postfesql/data
+      # - /path/to/ssd/storage:/var/lib/postgresql/data
@@ -31,7 +31,7 @@ Note that the templates must be stored under a name giving the language of the
 template - currently this must always be `en` (for "English");
 internationalisation support is intended for the future.
 
 The template for the policy itself should be versioned and named according to
 the version: for example `1.0.html`. The version of the policy which the user
 has agreed to is stored in the database.
 
@@ -85,6 +85,37 @@ Once this is complete, and the server has been restarted, try visiting
 an error "Missing string query parameter 'u'". It is now possible to manually
 construct URIs where users can give their consent.
 
+### Enabling consent tracking at registration
+
+1. Add the following to your configuration:
+
+   ```yaml
+   user_consent:
+     require_at_registration: true
+     policy_name: "Privacy Policy" # or whatever you'd like to call the policy
+   ```
+
+2. In your consent templates, make use of the `public_version` variable to
+   see if an unauthenticated user is viewing the page. This is typically
+   wrapped around the form that would be used to actually agree to the document:
+
+   ```
+   {% if not public_version %}
+     <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
+     <form method="post" action="consent">
+       <input type="hidden" name="v" value="{{version}}"/>
+       <input type="hidden" name="u" value="{{user}}"/>
+       <input type="hidden" name="h" value="{{userhmac}}"/>
+       <input type="submit" value="Sure thing!"/>
+     </form>
+   {% endif %}
+   ```
+
+3. Restart Synapse to apply the changes.
+
+Visiting `https://<server>/_matrix/consent` should now give you a view of the privacy
+document. This is what users will be able to see when registering for accounts.
+
 ### Constructing the consent URI
 
 It may be useful to manually construct the "consent URI" for a given user - for

@@ -106,6 +137,12 @@ query parameters:
 `https://<server>/_matrix/consent?u=<user>&h=68a152465a4d...`.
 
 
+Note that not providing a `u` parameter will be interpreted as wanting to view
+the document from an unauthenticated perspective, such as prior to registration.
+Therefore, the `h` parameter is not required in this scenario. To enable this
+behaviour, set `require_at_registration` to `true` in your `user_consent` config.
+
+
 Sending users a server notice asking them to agree to the policy
 ----------------------------------------------------------------
 
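Reviewer note (not part of the diff): the documentation above treats the `h` parameter as opaque. As a hedged illustration, Synapse derives `h` as an HMAC-SHA256 over the user identifier, keyed with the homeserver's `form_secret`; if that assumption holds, a consent URI can be built by hand like so (all concrete values below are invented):

```python
# Sketch: manually constructing a consent URI (assumed key derivation;
# form_secret comes from homeserver.yaml and is a placeholder here).
import hmac
from hashlib import sha256

form_secret = "<form_secret from homeserver.yaml>"  # placeholder
user_id = "@user:example.com"

h = hmac.new(
    key=form_secret.encode("utf-8"),
    msg=user_id.encode("utf-8"),
    digestmod=sha256,
).hexdigest()

print("https://example.com/_matrix/consent?u=%s&h=%s&v=1.0" % (user_id, h))
```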
@@ -12,12 +12,15 @@
     <p>
       All your base are belong to us.
     </p>
-    <form method="post" action="consent">
-      <input type="hidden" name="v" value="{{version}}"/>
-      <input type="hidden" name="u" value="{{user}}"/>
-      <input type="hidden" name="h" value="{{userhmac}}"/>
-      <input type="submit" value="Sure thing!"/>
-    </form>
+    {% if not public_version %}
+      <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
+      <form method="post" action="consent">
+        <input type="hidden" name="v" value="{{version}}"/>
+        <input type="hidden" name="u" value="{{user}}"/>
+        <input type="hidden" name="h" value="{{userhmac}}"/>
+        <input type="submit" value="Sure thing!"/>
+      </form>
+    {% endif %}
     {% endif %}
   </body>
 </html>
@@ -14,22 +14,3 @@ fi
 
 # set up the virtualenv
 tox -e py27 --notest -v
-
-TOX_BIN=$TOX_DIR/py27/bin
-
-# cryptography 2.2 requires setuptools >= 18.5.
-#
-# older versions of virtualenv (?) give us a virtualenv with the same version
-# of setuptools as is installed on the system python (and tox runs virtualenv
-# under python3, so we get the version of setuptools that is installed on that).
-#
-# anyway, make sure that we have a recent enough setuptools.
-$TOX_BIN/pip install 'setuptools>=18.5'
-
-# we also need a semi-recent version of pip, because old ones fail to install
-# the "enum34" dependency of cryptography.
-$TOX_BIN/pip install 'pip>=10'
-
-{ python synapse/python_dependencies.py
-  echo lxml
-} | xargs $TOX_BIN/pip install
@@ -27,4 +27,4 @@ try:
 except ImportError:
     pass
 
-__version__ = "0.33.7"
+__version__ = "0.33.8"
@@ -51,6 +51,7 @@ class LoginType(object):
     EMAIL_IDENTITY = u"m.login.email.identity"
     MSISDN = u"m.login.msisdn"
     RECAPTCHA = u"m.login.recaptcha"
+    TERMS = u"m.login.terms"
     DUMMY = u"m.login.dummy"
 
     # Only for C/S API v1

@@ -102,6 +103,7 @@ class ThirdPartyEntityKind(object):
 class RoomVersions(object):
     V1 = "1"
     VDH_TEST = "vdh-test-version"
+    STATE_V2_TEST = "state-v2-test"
 
 
 # the version we will give rooms which are created on this server

@@ -109,7 +111,11 @@ DEFAULT_ROOM_VERSION = RoomVersions.V1
 
 # vdh-test-version is a placeholder to get room versioning support working and tested
 # until we have a working v2.
-KNOWN_ROOM_VERSIONS = {RoomVersions.V1, RoomVersions.VDH_TEST}
+KNOWN_ROOM_VERSIONS = {
+    RoomVersions.V1,
+    RoomVersions.VDH_TEST,
+    RoomVersions.STATE_V2_TEST,
+}
 
 ServerNoticeMsgType = "m.server_notice"
 ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"
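Reviewer note: a quick sketch of what the expanded `KNOWN_ROOM_VERSIONS` set is for. A server cannot participate in rooms whose version it does not recognise; this stand-in (not Synapse's actual validation code) mirrors the constants above.

```python
# Stand-in sketch; the names mirror the constants in the hunk above.
class RoomVersions(object):
    V1 = "1"
    VDH_TEST = "vdh-test-version"
    STATE_V2_TEST = "state-v2-test"

KNOWN_ROOM_VERSIONS = {
    RoomVersions.V1,
    RoomVersions.VDH_TEST,
    RoomVersions.STATE_V2_TEST,
}

def check_room_version(version):
    # a server cannot participate in a room whose version it does not know
    if version not in KNOWN_ROOM_VERSIONS:
        raise ValueError("Unknown room version %r" % (version,))

check_room_version(RoomVersions.STATE_V2_TEST)  # accepted after this change
```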
@@ -28,7 +28,6 @@ FEDERATION_PREFIX = "/_matrix/federation/v1"
 STATIC_PREFIX = "/_matrix/static"
 WEB_CLIENT_PREFIX = "/_matrix/client"
 CONTENT_REPO_PREFIX = "/_matrix/content"
-SERVER_KEY_PREFIX = "/_matrix/key/v1"
 SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
 MEDIA_PREFIX = "/_matrix/media/r0"
 LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
@@ -37,7 +37,6 @@ from synapse.api.urls import (
     FEDERATION_PREFIX,
     LEGACY_MEDIA_PREFIX,
     MEDIA_PREFIX,
-    SERVER_KEY_PREFIX,
     SERVER_KEY_V2_PREFIX,
     STATIC_PREFIX,
     WEB_CLIENT_PREFIX,

@@ -59,7 +58,6 @@ from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, check_requirements
 from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
 from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
 from synapse.rest import ClientRestResource
-from synapse.rest.key.v1.server_key_resource import LocalKey
 from synapse.rest.key.v2 import KeyApiV2Resource
 from synapse.rest.media.v0.content_repository import ContentRepoResource
 from synapse.server import HomeServer

@@ -236,10 +234,7 @@ class SynapseHomeServer(HomeServer):
             )
 
         if name in ["keys", "federation"]:
-            resources.update({
-                SERVER_KEY_PREFIX: LocalKey(self),
-                SERVER_KEY_V2_PREFIX: KeyApiV2Resource(self),
-            })
+            resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
 
         if name == "webclient":
             resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
@@ -226,7 +226,15 @@ class SynchrotronPresence(object):
 class SynchrotronTyping(object):
     def __init__(self, hs):
         self._latest_room_serial = 0
+        self._reset()
+
+    def _reset(self):
+        """
+        Reset the typing handler's data caches.
+        """
+        # map room IDs to serial numbers
         self._room_serials = {}
+        # map room IDs to sets of users currently typing
         self._room_typing = {}
 
     def stream_positions(self):

@@ -236,6 +244,12 @@ class SynchrotronTyping(object):
         return {"typing": self._latest_room_serial}
 
     def process_replication_rows(self, token, rows):
+        if self._latest_room_serial > token:
+            # The master has gone backwards. To prevent inconsistent data, just
+            # clear everything.
+            self._reset()
+
+        # Set the latest serial token to whatever the server gave us.
         self._latest_room_serial = token
 
         for row in rows:
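Reviewer note: to see why the `_reset()` matters, here is a minimal stand-in (not the real `SynchrotronTyping` class) demonstrating the caches being dropped when the replication token goes backwards, e.g. after the master restarts.

```python
class TypingCache(object):
    """Minimal stand-in for the worker-side typing cache."""

    def __init__(self):
        self._latest_room_serial = 0
        self._reset()

    def _reset(self):
        self._room_serials = {}  # room_id -> serial of last typing update
        self._room_typing = {}   # room_id -> set of user_ids currently typing

    def process_replication_rows(self, token, rows):
        if self._latest_room_serial > token:
            # the master has gone backwards; drop everything rather than
            # report stale rooms as if they contained new typing events
            self._reset()
        self._latest_room_serial = token
        for room_id, typing in rows:
            self._room_serials[room_id] = token
            self._room_typing[room_id] = typing

cache = TypingCache()
cache.process_replication_rows(5, [("!room:example.com", {"@alice:example.com"})])
cache.process_replication_rows(2, [])  # master restarted: token went backwards
assert cache._room_serials == {}       # caches were cleared
```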
@@ -42,6 +42,14 @@ DEFAULT_CONFIG = """\
 # until the user consents to the privacy policy. The value of the setting is
 # used as the text of the error.
 #
+# 'require_at_registration', if enabled, will add a step to the registration
+# process, similar to how captcha works. Users will be required to accept the
+# policy before their account is created.
+#
+# 'policy_name' is the display name of the policy users will see when registering
+# for an account. Has no effect unless `require_at_registration` is enabled.
+# Defaults to "Privacy Policy".
+#
 # user_consent:
 #   template_dir: res/templates/privacy
 #   version: 1.0

@@ -54,6 +62,8 @@ DEFAULT_CONFIG = """\
 #   block_events_error: >-
 #     To continue using this homeserver you must review and agree to the
 #     terms and conditions at %(consent_uri)s
+#   require_at_registration: False
+#   policy_name: Privacy Policy
 #
 """
 

@@ -67,6 +77,8 @@ class ConsentConfig(Config):
         self.user_consent_server_notice_content = None
         self.user_consent_server_notice_to_guests = False
         self.block_events_without_consent_error = None
+        self.user_consent_at_registration = False
+        self.user_consent_policy_name = "Privacy Policy"
 
     def read_config(self, config):
         consent_config = config.get("user_consent")

@@ -83,6 +95,12 @@ class ConsentConfig(Config):
         self.user_consent_server_notice_to_guests = bool(consent_config.get(
             "send_server_notice_to_guests", False,
         ))
+        self.user_consent_at_registration = bool(consent_config.get(
+            "require_at_registration", False,
+        ))
+        self.user_consent_policy_name = consent_config.get(
+            "policy_name", "Privacy Policy",
+        )
 
     def default_config(self, **kwargs):
         return DEFAULT_CONFIG
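Reviewer note: the parsing logic above reduces to two `get()`s with defaults. A hedged, self-contained sketch (not the real `ConsentConfig` class):

```python
def read_consent_config(config):
    consent_config = config.get("user_consent") or {}
    return {
        # both keys default to off / "Privacy Policy", matching the diff above
        "require_at_registration": bool(
            consent_config.get("require_at_registration", False)
        ),
        "policy_name": consent_config.get("policy_name", "Privacy Policy"),
    }

print(read_consent_config({"user_consent": {"require_at_registration": True}}))
# {'require_at_registration': True, 'policy_name': 'Privacy Policy'}
```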
@@ -50,6 +50,7 @@ handlers:
         maxBytes: 104857600
         backupCount: 10
         filters: [context]
+        encoding: utf8
     console:
         class: logging.StreamHandler
         formatter: precise
@@ -15,6 +15,8 @@
 
 import logging
 
+from six.moves import urllib
+
 from canonicaljson import json
 
 from twisted.internet import defer, reactor

@@ -28,15 +30,15 @@ from synapse.util import logcontext
 
 logger = logging.getLogger(__name__)
 
-KEY_API_V1 = b"/_matrix/key/v1/"
+KEY_API_V2 = "/_matrix/key/v2/server/%s"
 
 
 @defer.inlineCallbacks
-def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
+def fetch_server_key(server_name, tls_client_options_factory, key_id):
     """Fetch the keys for a remote server."""
 
     factory = SynapseKeyClientFactory()
-    factory.path = path
+    factory.path = KEY_API_V2 % (urllib.parse.quote(key_id), )
     factory.host = server_name
     endpoint = matrix_federation_endpoint(
         reactor, server_name, tls_client_options_factory, timeout=30
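Reviewer note: worth spelling out why the `urllib.parse.quote()` is needed here. Key IDs such as `ed25519:auto` contain a colon, which must be percent-encoded when placed in the URL path. A small sketch, assuming `six` is available (as it is in Synapse's environment):

```python
from six.moves import urllib  # matching the import added in the hunk above

KEY_API_V2 = "/_matrix/key/v2/server/%s"

def key_path(key_id):
    # quote() percent-encodes the ":" in key IDs like "ed25519:auto"
    return KEY_API_V2 % (urllib.parse.quote(key_id),)

print(key_path("ed25519:auto"))  # /_matrix/key/v2/server/ed25519%3Aauto
```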
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
-# Copyright 2017 New Vector Ltd.
+# Copyright 2017, 2018 New Vector Ltd.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -18,8 +18,6 @@ import hashlib
 import logging
 from collections import namedtuple
 
-from six.moves import urllib
-
 from signedjson.key import (
     decode_verify_key_bytes,
     encode_verify_key_base64,

@@ -395,32 +393,13 @@ class Keyring(object):
 
     @defer.inlineCallbacks
     def get_keys_from_server(self, server_name_and_key_ids):
-        @defer.inlineCallbacks
-        def get_key(server_name, key_ids):
-            keys = None
-            try:
-                keys = yield self.get_server_verify_key_v2_direct(
-                    server_name, key_ids
-                )
-            except Exception as e:
-                logger.info(
-                    "Unable to get key %r for %r directly: %s %s",
-                    key_ids, server_name,
-                    type(e).__name__, str(e),
-                )
-
-            if not keys:
-                keys = yield self.get_server_verify_key_v1_direct(
-                    server_name, key_ids
-                )
-
-                keys = {server_name: keys}
-
-            defer.returnValue(keys)
-
         results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                run_in_background(get_key, server_name, key_ids)
+                run_in_background(
+                    self.get_server_verify_key_v2_direct,
+                    server_name,
+                    key_ids,
+                )
                 for server_name, key_ids in server_name_and_key_ids
             ],
             consumeErrors=True,

@@ -525,10 +504,7 @@ class Keyring(object):
                 continue
 
             (response, tls_certificate) = yield fetch_server_key(
-                server_name, self.hs.tls_client_options_factory,
-                path=("/_matrix/key/v2/server/%s" % (
-                    urllib.parse.quote(requested_key_id),
-                )).encode("ascii"),
+                server_name, self.hs.tls_client_options_factory, requested_key_id
             )
 
             if (u"signatures" not in response

@@ -657,78 +633,6 @@ class Keyring(object):
 
         defer.returnValue(results)
 
-    @defer.inlineCallbacks
-    def get_server_verify_key_v1_direct(self, server_name, key_ids):
-        """Finds a verification key for the server with one of the key ids.
-        Args:
-            server_name (str): The name of the server to fetch a key for.
-            keys_ids (list of str): The key_ids to check for.
-        """
-
-        # Try to fetch the key from the remote server.
-
-        (response, tls_certificate) = yield fetch_server_key(
-            server_name, self.hs.tls_client_options_factory
-        )
-
-        # Check the response.
-
-        x509_certificate_bytes = crypto.dump_certificate(
-            crypto.FILETYPE_ASN1, tls_certificate
-        )
-
-        if ("signatures" not in response
-                or server_name not in response["signatures"]):
-            raise KeyLookupError("Key response not signed by remote server")
-
-        if "tls_certificate" not in response:
-            raise KeyLookupError("Key response missing TLS certificate")
-
-        tls_certificate_b64 = response["tls_certificate"]
-
-        if encode_base64(x509_certificate_bytes) != tls_certificate_b64:
-            raise KeyLookupError("TLS certificate doesn't match")
-
-        # Cache the result in the datastore.
-
-        time_now_ms = self.clock.time_msec()
-
-        verify_keys = {}
-        for key_id, key_base64 in response["verify_keys"].items():
-            if is_signing_algorithm_supported(key_id):
-                key_bytes = decode_base64(key_base64)
-                verify_key = decode_verify_key_bytes(key_id, key_bytes)
-                verify_key.time_added = time_now_ms
-                verify_keys[key_id] = verify_key
-
-        for key_id in response["signatures"][server_name]:
-            if key_id not in response["verify_keys"]:
-                raise KeyLookupError(
-                    "Key response must include verification keys for all"
-                    " signatures"
-                )
-            if key_id in verify_keys:
-                verify_signed_json(
-                    response,
-                    server_name,
-                    verify_keys[key_id]
-                )
-
-        yield self.store.store_server_certificate(
-            server_name,
-            server_name,
-            time_now_ms,
-            tls_certificate,
-        )
-
-        yield self.store_keys(
-            server_name=server_name,
-            from_server=server_name,
-            verify_keys=verify_keys,
-        )
-
-        defer.returnValue(verify_keys)
-
     def store_keys(self, server_name, from_server, verify_keys):
         """Store a collection of verify keys for a given server
         Args:
@@ -200,11 +200,11 @@ def _is_membership_change_allowed(event, auth_events):
     membership = event.content["membership"]
 
     # Check if this is the room creator joining:
-    if len(event.prev_events) == 1 and Membership.JOIN == membership:
+    if len(event.prev_event_ids()) == 1 and Membership.JOIN == membership:
         # Get room creation event:
         key = (EventTypes.Create, "", )
         create = auth_events.get(key)
-        if create and event.prev_events[0][0] == create.event_id:
+        if create and event.prev_event_ids()[0] == create.event_id:
             if create.content["creator"] == event.state_key:
                 return
 
@@ -159,6 +159,24 @@ class EventBase(object):
     def keys(self):
         return six.iterkeys(self._event_dict)
 
+    def prev_event_ids(self):
+        """Returns the list of prev event IDs. The order matches the order
+        specified in the event, though there is no meaning to it.
+
+        Returns:
+            list[str]: The list of event IDs of this event's prev_events
+        """
+        return [e for e, _ in self.prev_events]
+
+    def auth_event_ids(self):
+        """Returns the list of auth event IDs. The order matches the order
+        specified in the event, though there is no meaning to it.
+
+        Returns:
+            list[str]: The list of event IDs of this event's auth_events
+        """
+        return [e for e, _ in self.auth_events]
+
 
 class FrozenEvent(EventBase):
     def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):
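Reviewer note: these accessors encapsulate the `(event_id, hashes)` tuple format of `prev_events`/`auth_events`, so the many call-site changes later in this diff are mechanical substitutions. A stand-in sketch of the equivalence:

```python
class Event(object):
    """Stand-in for EventBase: prev/auth events are (event_id, hashes) pairs."""

    def __init__(self, prev_events, auth_events):
        self.prev_events = prev_events
        self.auth_events = auth_events

    def prev_event_ids(self):
        return [e for e, _ in self.prev_events]

    def auth_event_ids(self):
        return [e for e, _ in self.auth_events]

ev = Event([("$p1:x", {}), ("$p2:x", {})], [("$a1:x", {})])

# the old tuple-unpacking idiom and the new accessor give the same answer
assert [e_id for e_id, _ in ev.prev_events] == ev.prev_event_ids()
assert ev.auth_event_ids() == ["$a1:x"]
```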
@@ -183,9 +183,7 @@ class TransactionQueue(object):
                     # banned then it won't receive the event because it won't
                     # be in the room after the ban.
                     destinations = yield self.state.get_current_hosts_in_room(
-                        event.room_id, latest_event_ids=[
-                            prev_id for prev_id, _ in event.prev_events
-                        ],
+                        event.room_id, latest_event_ids=event.prev_event_ids(),
                     )
                 except Exception:
                     logger.exception(
@@ -117,9 +117,6 @@ class Transaction(JsonEncodedObject):
                 "Require 'transaction_id' to construct a Transaction"
             )
 
-        for p in pdus:
-            p.transaction_id = kwargs["transaction_id"]
-
         kwargs["pdus"] = [p.get_pdu_json() for p in pdus]
 
         return Transaction(**kwargs)
@@ -59,6 +59,7 @@ class AuthHandler(BaseHandler):
             LoginType.EMAIL_IDENTITY: self._check_email_identity,
             LoginType.MSISDN: self._check_msisdn,
             LoginType.DUMMY: self._check_dummy_auth,
+            LoginType.TERMS: self._check_terms_auth,
         }
         self.bcrypt_rounds = hs.config.bcrypt_rounds
 

@@ -431,6 +432,9 @@ class AuthHandler(BaseHandler):
     def _check_dummy_auth(self, authdict, _):
         return defer.succeed(True)
 
+    def _check_terms_auth(self, authdict, _):
+        return defer.succeed(True)
+
     @defer.inlineCallbacks
     def _check_threepid(self, medium, authdict):
         if 'threepid_creds' not in authdict:

@@ -462,6 +466,22 @@ class AuthHandler(BaseHandler):
     def _get_params_recaptcha(self):
         return {"public_key": self.hs.config.recaptcha_public_key}
 
+    def _get_params_terms(self):
+        return {
+            "policies": {
+                "privacy_policy": {
+                    "version": self.hs.config.user_consent_version,
+                    "en": {
+                        "name": self.hs.config.user_consent_policy_name,
+                        "url": "%s/_matrix/consent?v=%s" % (
+                            self.hs.config.public_baseurl,
+                            self.hs.config.user_consent_version,
+                        ),
+                    },
+                },
+            },
+        }
+
     def _auth_dict_for_flows(self, flows, session):
         public_flows = []
         for f in flows:

@@ -469,6 +489,7 @@ class AuthHandler(BaseHandler):
 
         get_params = {
            LoginType.RECAPTCHA: self._get_params_recaptcha,
+            LoginType.TERMS: self._get_params_terms,
        }
 
         params = {}
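Reviewer note: for reference, here is roughly what `_get_params_terms` would hand to a client during UI auth for a hypothetical config. The values are invented; the structure is the one built in the hunk above.

```python
# assumed config values, for illustration only
user_consent_version = "1.0"
user_consent_policy_name = "Privacy Policy"
public_baseurl = "https://example.com"

params = {
    "policies": {
        "privacy_policy": {
            "version": user_consent_version,
            "en": {
                "name": user_consent_policy_name,
                "url": "%s/_matrix/consent?v=%s" % (
                    public_baseurl,
                    user_consent_version,
                ),
            },
        },
    },
}

print(params["policies"]["privacy_policy"]["en"]["url"])
# https://example.com/_matrix/consent?v=1.0
```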
@@ -138,9 +138,30 @@ class DirectoryHandler(BaseHandler):
         )
 
     @defer.inlineCallbacks
-    def delete_association(self, requester, room_alias):
-        # association deletion for human users
+    def delete_association(self, requester, room_alias, send_event=True):
+        """Remove an alias from the directory
+
+        (this is only meant for human users; AS users should call
+        delete_appservice_association)
+
+        Args:
+            requester (Requester):
+            room_alias (RoomAlias):
+            send_event (bool): Whether to send an updated m.room.aliases event.
+                Note that, if we delete the canonical alias, we will always attempt
+                to send an m.room.canonical_alias event
+
+        Returns:
+            Deferred[unicode]: room id that the alias used to point to
+
+        Raises:
+            NotFoundError: if the alias doesn't exist
+
+            AuthError: if the user doesn't have perms to delete the alias (ie, the
+                user is neither the creator of the alias, nor a server admin.
+
+            SynapseError: if the alias belongs to an AS
+        """
         user_id = requester.user.to_string()
 
         try:

@@ -168,10 +189,11 @@ class DirectoryHandler(BaseHandler):
         room_id = yield self._delete_association(room_alias)
 
         try:
-            yield self.send_room_alias_update_event(
-                requester,
-                room_id
-            )
+            if send_event:
+                yield self.send_room_alias_update_event(
+                    requester,
+                    room_id
+                )
 
             yield self._update_canonical_alias(
                 requester,
@@ -19,7 +19,7 @@ from six import iteritems
 
 from twisted.internet import defer
 
-from synapse.api.errors import RoomKeysVersionError, StoreError, SynapseError
+from synapse.api.errors import NotFoundError, RoomKeysVersionError, StoreError
 from synapse.util.async_helpers import Linearizer
 
 logger = logging.getLogger(__name__)

@@ -55,6 +55,8 @@ class E2eRoomKeysHandler(object):
             room_id(string): room ID to get keys for, for None to get keys for all rooms
             session_id(string): session ID to get keys for, for None to get keys for all
                 sessions
+        Raises:
+            NotFoundError: if the backup version does not exist
         Returns:
             A deferred list of dicts giving the session_data and message metadata for
             these room keys.

@@ -63,13 +65,19 @@ class E2eRoomKeysHandler(object):
         # we deliberately take the lock to get keys so that changing the version
         # works atomically
         with (yield self._upload_linearizer.queue(user_id)):
+            # make sure the backup version exists
+            try:
+                yield self.store.get_e2e_room_keys_version_info(user_id, version)
+            except StoreError as e:
+                if e.code == 404:
+                    raise NotFoundError("Unknown backup version")
+                else:
+                    raise
+
             results = yield self.store.get_e2e_room_keys(
                 user_id, version, room_id, session_id
             )
 
-            if results['rooms'] == {}:
-                raise SynapseError(404, "No room_keys found")
-
             defer.returnValue(results)
 
     @defer.inlineCallbacks

@@ -120,7 +128,7 @@ class E2eRoomKeysHandler(object):
         }
 
         Raises:
-            SynapseError: with code 404 if there are no versions defined
+            NotFoundError: if there are no versions defined
             RoomKeysVersionError: if the uploaded version is not the current version
         """
 

@@ -134,7 +142,7 @@ class E2eRoomKeysHandler(object):
             version_info = yield self.store.get_e2e_room_keys_version_info(user_id)
         except StoreError as e:
             if e.code == 404:
-                raise SynapseError(404, "Version '%s' not found" % (version,))
+                raise NotFoundError("Version '%s' not found" % (version,))
             else:
                 raise
 

@@ -148,7 +156,7 @@ class E2eRoomKeysHandler(object):
             raise RoomKeysVersionError(current_version=version_info['version'])
         except StoreError as e:
             if e.code == 404:
-                raise SynapseError(404, "Version '%s' not found" % (version,))
+                raise NotFoundError("Version '%s' not found" % (version,))
             else:
                 raise
 
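Reviewer note: this is the "fix return code of empty key backups" changelog entry in action. An unknown backup version is now a 404 via `NotFoundError`, while a backup that exists but is empty returns an empty result instead of a 404. A stand-in sketch of the new behaviour:

```python
class NotFoundError(Exception):
    """Stand-in for synapse.api.errors.NotFoundError (a 404)."""

BACKUPS = {"1": {}}  # version -> room keys; version "1" exists but is empty

def get_room_keys(version):
    if version not in BACKUPS:
        raise NotFoundError("Unknown backup version")
    return {"rooms": BACKUPS[version]}      # may legitimately be empty

assert get_room_keys("1") == {"rooms": {}}  # empty backup: no longer a 404
try:
    get_room_keys("2")
except NotFoundError:
    print("missing version still 404s")
```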
@@ -239,7 +239,7 @@ class FederationHandler(BaseHandler):
             room_id, event_id, min_depth,
         )
 
-        prevs = {e_id for e_id, _ in pdu.prev_events}
+        prevs = set(pdu.prev_event_ids())
         seen = yield self.store.have_seen_events(prevs)
 
         if min_depth and pdu.depth < min_depth:

@@ -607,7 +607,7 @@ class FederationHandler(BaseHandler):
                 if e.event_id in seen_ids:
                     continue
                 e.internal_metadata.outlier = True
-                auth_ids = [e_id for e_id, _ in e.auth_events]
+                auth_ids = e.auth_event_ids()
                 auth = {
                     (e.type, e.state_key): e for e in auth_chain
                     if e.event_id in auth_ids or e.type == EventTypes.Create

@@ -726,7 +726,7 @@ class FederationHandler(BaseHandler):
         edges = [
             ev.event_id
             for ev in events
-            if set(e_id for e_id, _ in ev.prev_events) - event_ids
+            if set(ev.prev_event_ids()) - event_ids
         ]
 
         logger.info(

@@ -753,7 +753,7 @@ class FederationHandler(BaseHandler):
         required_auth = set(
             a_id
             for event in events + list(state_events.values()) + list(auth_events.values())
-            for a_id, _ in event.auth_events
+            for a_id in event.auth_event_ids()
         )
         auth_events.update({
             e_id: event_map[e_id] for e_id in required_auth if e_id in event_map

@@ -769,7 +769,7 @@ class FederationHandler(BaseHandler):
             auth_events.update(ret_events)
 
             required_auth.update(
-                a_id for event in ret_events.values() for a_id, _ in event.auth_events
+                a_id for event in ret_events.values() for a_id in event.auth_event_ids()
             )
             missing_auth = required_auth - set(auth_events)
 

@@ -796,7 +796,7 @@ class FederationHandler(BaseHandler):
                 required_auth.update(
                     a_id
                     for event in results if event
-                    for a_id, _ in event.auth_events
+                    for a_id in event.auth_event_ids()
                 )
                 missing_auth = required_auth - set(auth_events)
 

@@ -816,7 +816,7 @@ class FederationHandler(BaseHandler):
                 "auth_events": {
                     (auth_events[a_id].type, auth_events[a_id].state_key):
                         auth_events[a_id]
-                    for a_id, _ in a.auth_events
+                    for a_id in a.auth_event_ids()
                     if a_id in auth_events
                 }
             })

@@ -828,7 +828,7 @@ class FederationHandler(BaseHandler):
                 "auth_events": {
                     (auth_events[a_id].type, auth_events[a_id].state_key):
                         auth_events[a_id]
-                    for a_id, _ in event_map[e_id].auth_events
+                    for a_id in event_map[e_id].auth_event_ids()
                     if a_id in auth_events
                 }
             })

@@ -1041,17 +1041,17 @@ class FederationHandler(BaseHandler):
         Raises:
             SynapseError if the event does not pass muster
         """
-        if len(ev.prev_events) > 20:
+        if len(ev.prev_event_ids()) > 20:
             logger.warn("Rejecting event %s which has %i prev_events",
-                        ev.event_id, len(ev.prev_events))
+                        ev.event_id, len(ev.prev_event_ids()))
             raise SynapseError(
                 http_client.BAD_REQUEST,
                 "Too many prev_events",
             )
 
-        if len(ev.auth_events) > 10:
+        if len(ev.auth_event_ids()) > 10:
             logger.warn("Rejecting event %s which has %i auth_events",
-                        ev.event_id, len(ev.auth_events))
+                        ev.event_id, len(ev.auth_event_ids()))
             raise SynapseError(
                 http_client.BAD_REQUEST,
                 "Too many auth_events",

@@ -1076,7 +1076,7 @@ class FederationHandler(BaseHandler):
     def on_event_auth(self, event_id):
         event = yield self.store.get_event(event_id)
         auth = yield self.store.get_auth_chain(
-            [auth_id for auth_id, _ in event.auth_events],
+            [auth_id for auth_id in event.auth_event_ids()],
             include_given=True
         )
         defer.returnValue([e for e in auth])

@@ -1698,7 +1698,7 @@ class FederationHandler(BaseHandler):
 
         missing_auth_events = set()
         for e in itertools.chain(auth_events, state, [event]):
-            for e_id, _ in e.auth_events:
+            for e_id in e.auth_event_ids():
                 if e_id not in event_map:
                     missing_auth_events.add(e_id)
 

@@ -1717,7 +1717,7 @@ class FederationHandler(BaseHandler):
         for e in itertools.chain(auth_events, state, [event]):
             auth_for_e = {
                 (event_map[e_id].type, event_map[e_id].state_key): event_map[e_id]
-                for e_id, _ in e.auth_events
+                for e_id in e.auth_event_ids()
                 if e_id in event_map
             }
             if create_event:

@@ -1785,10 +1785,10 @@ class FederationHandler(BaseHandler):
 
         # This is a hack to fix some old rooms where the initial join event
         # didn't reference the create event in its auth events.
-        if event.type == EventTypes.Member and not event.auth_events:
-            if len(event.prev_events) == 1 and event.depth < 5:
+        if event.type == EventTypes.Member and not event.auth_event_ids():
+            if len(event.prev_event_ids()) == 1 and event.depth < 5:
                 c = yield self.store.get_event(
-                    event.prev_events[0][0],
+                    event.prev_event_ids()[0],
                     allow_none=True,
                 )
                 if c and c.type == EventTypes.Create:

@@ -1835,7 +1835,7 @@ class FederationHandler(BaseHandler):
 
         # Now get the current auth_chain for the event.
         local_auth_chain = yield self.store.get_auth_chain(
-            [auth_id for auth_id, _ in event.auth_events],
+            [auth_id for auth_id in event.auth_event_ids()],
             include_given=True
         )
 

@@ -1891,7 +1891,7 @@ class FederationHandler(BaseHandler):
         """
         # Check if we have all the auth events.
         current_state = set(e.event_id for e in auth_events.values())
-        event_auth_events = set(e_id for e_id, _ in event.auth_events)
+        event_auth_events = set(event.auth_event_ids())
 
         if event.is_state():
             event_key = (event.type, event.state_key)

@@ -1935,7 +1935,7 @@ class FederationHandler(BaseHandler):
                     continue
 
                 try:
-                    auth_ids = [e_id for e_id, _ in e.auth_events]
+                    auth_ids = e.auth_event_ids()
                     auth = {
                         (e.type, e.state_key): e for e in remote_auth_chain
                         if e.event_id in auth_ids or e.type == EventTypes.Create

@@ -1956,7 +1956,7 @@ class FederationHandler(BaseHandler):
                     pass
 
             have_events = yield self.store.get_seen_events_with_rejections(
-                [e_id for e_id, _ in event.auth_events]
+                event.auth_event_ids()
             )
             seen_events = set(have_events.keys())
         except Exception:

@@ -2058,7 +2058,7 @@ class FederationHandler(BaseHandler):
                     continue
 
                 try:
-                    auth_ids = [e_id for e_id, _ in ev.auth_events]
+                    auth_ids = ev.auth_event_ids()
                     auth = {
                         (e.type, e.state_key): e
                         for e in result["auth_chain"]

@@ -2250,7 +2250,7 @@ class FederationHandler(BaseHandler):
             missing_remote_ids = [e.event_id for e in missing_remotes]
             base_remote_rejected = list(missing_remotes)
             for e in missing_remotes:
-                for e_id, _ in e.auth_events:
+                for e_id in e.auth_event_ids():
                     if e_id in missing_remote_ids:
                         try:
                             base_remote_rejected.remove(e)
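Reviewer note: one of the touched call sites is the federation event sanity check. A stand-in sketch of the limits it enforces (20 prev_events, 10 auth_events, per the hunk above), using plain lists of IDs rather than real events:

```python
def sanity_check_event(prev_event_ids, auth_event_ids):
    # stand-in for FederationHandler's check on incoming events
    if len(prev_event_ids) > 20:
        raise ValueError("Too many prev_events")
    if len(auth_event_ids) > 10:
        raise ValueError("Too many auth_events")

sanity_check_event(["$p%d:x" % i for i in range(5)], ["$a:x"])  # passes
try:
    sanity_check_event(["$p%d:x" % i for i in range(21)], [])
except ValueError as e:
    print(e)  # Too many prev_events
```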
@@ -427,6 +427,9 @@ class EventCreationHandler(object):
 
         if event.is_state():
             prev_state = yield self.deduplicate_state_event(event, context)
             if prev_state is not None:
+                logger.info(
+                    "Not bothering to persist duplicate state event %s", event.event_id,
+                )
                 defer.returnValue(prev_state)
 
@@ -104,6 +104,8 @@ class RoomCreationHandler(BaseHandler):
             creator_id=user_id, is_public=r["is_public"],
         )

+        logger.info("Creating new room %s to replace %s", new_room_id, old_room_id)
+
         # we create and auth the tombstone event before properly creating the new
         # room, to check our user has perms in the old room.
         tombstone_event, tombstone_context = (

@@ -136,10 +138,15 @@ class RoomCreationHandler(BaseHandler):
             requester, tombstone_event, tombstone_context,
         )

-        # and finally, shut down the PLs in the old room, and update them in the new
-        # room.
         old_room_state = yield tombstone_context.get_current_state_ids(self.store)

+        # update any aliases
+        yield self._move_aliases_to_new_room(
+            requester, old_room_id, new_room_id, old_room_state,
+        )
+
+        # and finally, shut down the PLs in the old room, and update them in the new
+        # room.
         yield self._update_upgraded_room_pls(
             requester, old_room_id, new_room_id, old_room_state,
         )

@@ -245,11 +252,6 @@ class RoomCreationHandler(BaseHandler):
         if not self.spam_checker.user_may_create_room(user_id):
             raise SynapseError(403, "You are not permitted to create rooms")

-        # XXX check alias is free
-        # canonical_alias = None
-
-        # XXX create association in directory handler
-
         creation_content = {
             "room_version": new_room_version,
             "predecessor": {
@@ -295,7 +297,111 @@ class RoomCreationHandler(BaseHandler):

         # XXX invites/joins
         # XXX 3pid invites
-        # XXX directory_handler.send_room_alias_update_event
+
+    @defer.inlineCallbacks
+    def _move_aliases_to_new_room(
+            self, requester, old_room_id, new_room_id, old_room_state,
+    ):
+        directory_handler = self.hs.get_handlers().directory_handler
+
+        aliases = yield self.store.get_aliases_for_room(old_room_id)
+
+        # check to see if we have a canonical alias.
+        canonical_alias = None
+        canonical_alias_event_id = old_room_state.get((EventTypes.CanonicalAlias, ""))
+        if canonical_alias_event_id:
+            canonical_alias_event = yield self.store.get_event(canonical_alias_event_id)
+            if canonical_alias_event:
+                canonical_alias = canonical_alias_event.content.get("alias", "")
+
+        # first we try to remove the aliases from the old room (we suppress sending
+        # the room_aliases event until the end).
+        #
+        # Note that we'll only be able to remove aliases that (a) aren't owned by an AS,
+        # and (b) unless the user is a server admin, which the user created.
+        #
+        # This is probably correct - given we don't allow such aliases to be deleted
+        # normally, it would be odd to allow it in the case of doing a room upgrade -
+        # but it makes the upgrade less effective, and you have to wonder why a room
+        # admin can't remove aliases that point to that room anyway.
+        # (cf https://github.com/matrix-org/synapse/issues/2360)
+        #
+        removed_aliases = []
+        for alias_str in aliases:
+            alias = RoomAlias.from_string(alias_str)
+            try:
+                yield directory_handler.delete_association(
+                    requester, alias, send_event=False,
+                )
+                removed_aliases.append(alias_str)
+            except SynapseError as e:
+                logger.warning(
+                    "Unable to remove alias %s from old room: %s",
+                    alias, e,
+                )

+        # if we didn't find any aliases, or couldn't remove anyway, we can skip the rest
+        # of this.
+        if not removed_aliases:
+            return
+
+        try:
+            # this can fail if, for some reason, our user doesn't have perms to send
+            # m.room.aliases events in the old room (note that we've already checked that
+            # they have perms to send a tombstone event, so that's not terribly likely).
+            #
+            # If that happens, it's regrettable, but we should carry on: it's the same
+            # as when you remove an alias from the directory normally - it just means that
+            # the aliases event gets out of sync with the directory
+            # (cf https://github.com/vector-im/riot-web/issues/2369)
+            yield directory_handler.send_room_alias_update_event(
+                requester, old_room_id,
+            )
+        except AuthError as e:
+            logger.warning(
+                "Failed to send updated alias event on old room: %s", e,
+            )
+
+        # we can now add any aliases we successfully removed to the new room.
+        for alias in removed_aliases:
+            try:
+                yield directory_handler.create_association(
+                    requester, RoomAlias.from_string(alias),
+                    new_room_id, servers=(self.hs.hostname, ),
+                    send_event=False,
+                )
+                logger.info("Moved alias %s to new room", alias)
+            except SynapseError as e:
+                # I'm not really expecting this to happen, but it could if the spam
+                # checking module decides it shouldn't, or similar.
+                logger.error(
+                    "Error adding alias %s to new room: %s",
+                    alias, e,
+                )
+
+        try:
+            if canonical_alias and (canonical_alias in removed_aliases):
+                yield self.event_creation_handler.create_and_send_nonmember_event(
+                    requester,
+                    {
+                        "type": EventTypes.CanonicalAlias,
+                        "state_key": "",
+                        "room_id": new_room_id,
+                        "sender": requester.user.to_string(),
+                        "content": {"alias": canonical_alias, },
+                    },
+                    ratelimit=False
+                )
+
+            yield directory_handler.send_room_alias_update_event(
+                requester, new_room_id,
+            )
+        except SynapseError as e:
+            # again I'm not really expecting this to fail, but if it does, I'd rather
+            # we returned the new room to the client at this point.
+            logger.error(
+                "Unable to send updated alias events in new room: %s", e,
+            )

     @defer.inlineCallbacks
     def create_room(self, requester, config, ratelimit=True,
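A condensed view of the ordering contract in `_move_aliases_to_new_room` above: delete each association with per-alias events suppressed, send one update for the old room, recreate the removed aliases in the new room, then send one update there. A plain-Python sketch with a hypothetical stand-in for the directory handler:

    class DirectoryStub(object):
        # hypothetical stand-in for Synapse's directory handler
        def __init__(self):
            self.assoc = {}
            self.updates = []

        def delete_association(self, alias, send_event=True):
            del self.assoc[alias]

        def create_association(self, alias, room_id, servers=(), send_event=True):
            self.assoc[alias] = room_id

        def send_room_alias_update_event(self, room_id):
            self.updates.append(room_id)

    def move_aliases(directory, aliases, old_room, new_room, servers):
        removed = []
        for alias in aliases:
            try:
                # per-alias m.room.aliases events are suppressed; one update
                # event is sent per room once all the moves are done
                directory.delete_association(alias, send_event=False)
                removed.append(alias)
            except KeyError:
                pass  # e.g. an alias we are not allowed to delete
        if not removed:
            return
        directory.send_room_alias_update_event(old_room)
        for alias in removed:
            directory.create_association(alias, new_room, servers=servers,
                                         send_event=False)
        directory.send_room_alias_update_event(new_room)

    d = DirectoryStub()
    d.assoc["#old:hs"] = "!old:hs"
    move_aliases(d, ["#old:hs"], "!old:hs", "!new:hs", ("hs",))
    assert d.assoc == {"#old:hs": "!new:hs"}
    assert d.updates == ["!old:hs", "!new:hs"]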
@@ -522,6 +628,7 @@ class RoomCreationHandler(BaseHandler):
         @defer.inlineCallbacks
         def send(etype, content, **kwargs):
             event = create(etype, content, **kwargs)
+            logger.info("Sending %s in new room", etype)
             yield self.event_creation_handler.create_and_send_nonmember_event(
                 creator,
                 event,

@@ -544,6 +651,7 @@ class RoomCreationHandler(BaseHandler):
             content=creation_content,
         )

+        logger.info("Sending %s in new room", EventTypes.Member)
         yield self.room_member_handler.update_membership(
             creator,
             creator.user,
@@ -63,11 +63,8 @@ class TypingHandler(object):
         self._member_typing_until = {}  # clock time we expect to stop
         self._member_last_federation_poke = {}

-        # map room IDs to serial numbers
-        self._room_serials = {}
         self._latest_room_serial = 0
-        # map room IDs to sets of users currently typing
-        self._room_typing = {}
+        self._reset()

         # caches which room_ids changed at which serials
         self._typing_stream_change_cache = StreamChangeCache(

@@ -79,6 +76,15 @@ class TypingHandler(object):
             5000,
         )

+    def _reset(self):
+        """
+        Reset the typing handler's data caches.
+        """
+        # map room IDs to serial numbers
+        self._room_serials = {}
+        # map room IDs to sets of users currently typing
+        self._room_typing = {}
+
     def _handle_timeouts(self):
         logger.info("Checking for typing timeouts")
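Pulling the cache initialisation out into `_reset()` means one code path serves both construction and any later wipe of the in-memory typing state. A minimal sketch of the pattern, with illustrative names:

    class TypingStateSketch(object):
        def __init__(self):
            self._latest_room_serial = 0
            self._reset()

        def _reset(self):
            # map room IDs to serial numbers
            self._room_serials = {}
            # map room IDs to sets of users currently typing
            self._room_typing = {}

    handler = TypingStateSketch()
    handler._room_typing["!room:hs"] = {"@user:hs"}
    handler._reset()          # e.g. to clear stale state in tests
    assert handler._room_typing == {}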
@@ -311,10 +311,10 @@ class HttpPusher(object):
                 ]
             }
         }
-        if event.type == 'm.room.member':
+        if event.type == 'm.room.member' and event.is_state():
             d['notification']['membership'] = event.content['membership']
             d['notification']['user_is_target'] = event.state_key == self.user_id
-        if self.hs.config.push_include_content and 'content' in event:
+        if self.hs.config.push_include_content and event.content:
             d['notification']['content'] = event.content

         # We no longer send aliases separately, instead, we send the human
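Both changes here harden attribute access on the event object: only state events carry a `state_key`, so the `is_state()` guard protects the read two lines below, and (on the reasonable assumption that `content` is exposed as an attribute of the event rather than a dict key) a truthiness test replaces the mapping-style `'content' in event` check. A small sketch of the failure mode being avoided, using a hypothetical event class:

    class EventSketch(object):
        # hypothetical stand-in: content is an attribute, not a dict entry
        def __init__(self, etype, content, state_key=None):
            self.type = etype
            self.content = content
            self._state_key = state_key

        def is_state(self):
            return self._state_key is not None

        @property
        def state_key(self):
            if self._state_key is None:
                raise AttributeError("non-state event has no state_key")
            return self._state_key

    ev = EventSketch('m.room.member', {'membership': 'join'})  # no state_key
    if ev.type == 'm.room.member' and ev.is_state():
        pass  # guard prevents the AttributeError on ev.state_key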
@@ -124,7 +124,7 @@ class PushRuleEvaluatorForEvent(object):

         # XXX: optimisation: cache our pattern regexps
         if condition['key'] == 'content.body':
-            body = self._event["content"].get("body", None)
+            body = self._event.content.get("body", None)
             if not body:
                 return False

@@ -140,7 +140,7 @@ class PushRuleEvaluatorForEvent(object):
         if not display_name:
             return False

-        body = self._event["content"].get("body", None)
+        body = self._event.content.get("body", None)
         if not body:
             return False
@@ -68,6 +68,29 @@ function captchaDone() {
 </html>
 """

+TERMS_TEMPLATE = """
+<html>
+<head>
+<title>Authentication</title>
+<meta name='viewport' content='width=device-width, initial-scale=1,
+    user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
+<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
+</head>
+<body>
+<form id="registrationForm" method="post" action="%(myurl)s">
+    <div>
+        <p>
+            Please click the button below if you agree to the
+            <a href="%(terms_url)s">privacy policy of this homeserver.</a>
+        </p>
+        <input type="hidden" name="session" value="%(session)s" />
+        <input type="submit" value="Agree" />
+    </div>
+</form>
+</body>
+</html>
+"""
+
 SUCCESS_TEMPLATE = """
 <html>
 <head>
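The template uses old-style %(name)s placeholders, so every render must supply all three keys. A quick illustration with a trimmed-down template (values are illustrative):

    template = (
        "<form action='%(myurl)s'>"
        "<a href='%(terms_url)s'>policy</a>"
        "<input type='hidden' name='session' value='%(session)s' />"
        "</form>"
    )
    html = template % {
        'session': 'abc123',
        'terms_url': 'https://hs.example.com/_matrix/consent?v=1.0',
        'myurl': '/_matrix/client/v2_alpha/auth/m.login.terms/fallback/web',
    }
    assert 'abc123' in html  # a missing key would raise KeyError instead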
@@ -130,6 +153,27 @@ class AuthRestServlet(RestServlet):
             request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
             request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))

+            request.write(html_bytes)
+            finish_request(request)
+            defer.returnValue(None)
+        elif stagetype == LoginType.TERMS:
+            session = request.args['session'][0]
+
+            html = TERMS_TEMPLATE % {
+                'session': session,
+                'terms_url': "%s/_matrix/consent?v=%s" % (
+                    self.hs.config.public_baseurl,
+                    self.hs.config.user_consent_version,
+                ),
+                'myurl': "%s/auth/%s/fallback/web" % (
+                    CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
+                ),
+            }
+            html_bytes = html.encode("utf8")
+            request.setResponseCode(200)
+            request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
+            request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
+
             request.write(html_bytes)
             finish_request(request)
             defer.returnValue(None)
@@ -139,7 +183,7 @@ class AuthRestServlet(RestServlet):
     @defer.inlineCallbacks
     def on_POST(self, request, stagetype):
         yield
-        if stagetype == "m.login.recaptcha":
+        if stagetype == LoginType.RECAPTCHA:
             if ('g-recaptcha-response' not in request.args or
                     len(request.args['g-recaptcha-response'])) == 0:
                 raise SynapseError(400, "No captcha response supplied")
@@ -178,6 +222,41 @@ class AuthRestServlet(RestServlet):
             request.write(html_bytes)
             finish_request(request)

+            defer.returnValue(None)
+        elif stagetype == LoginType.TERMS:
+            if ('session' not in request.args or
+                    len(request.args['session'])) == 0:
+                raise SynapseError(400, "No session supplied")
+
+            session = request.args['session'][0]
+            authdict = {'session': session}
+
+            success = yield self.auth_handler.add_oob_auth(
+                LoginType.TERMS,
+                authdict,
+                self.hs.get_ip_from_request(request)
+            )
+
+            if success:
+                html = SUCCESS_TEMPLATE
+            else:
+                html = TERMS_TEMPLATE % {
+                    'session': session,
+                    'terms_url': "%s/_matrix/consent?v=%s" % (
+                        self.hs.config.public_baseurl,
+                        self.hs.config.user_consent_version,
+                    ),
+                    'myurl': "%s/auth/%s/fallback/web" % (
+                        CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
+                    ),
+                }
+            html_bytes = html.encode("utf8")
+            request.setResponseCode(200)
+            request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
+            request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
+
+            request.write(html_bytes)
+            finish_request(request)
             defer.returnValue(None)
         else:
             raise SynapseError(404, "Unknown auth stage type")
@@ -359,6 +359,13 @@ class RegisterRestServlet(RestServlet):
                 [LoginType.MSISDN, LoginType.EMAIL_IDENTITY]
             ])

+        # Append m.login.terms to all flows if we're requiring consent
+        if self.hs.config.user_consent_at_registration:
+            new_flows = []
+            for flow in flows:
+                flow.append(LoginType.TERMS)
+            flows.extend(new_flows)
+
         auth_result, params, session_id = yield self.auth_handler.check_auth(
             flows, body, self.hs.get_ip_from_request(request)
         )

@@ -445,6 +452,12 @@ class RegisterRestServlet(RestServlet):
                 params.get("bind_msisdn")
             )

+        if auth_result and LoginType.TERMS in auth_result:
+            logger.info("%s has consented to the privacy policy" % registered_user_id)
+            yield self.store.user_set_consent_version(
+                registered_user_id, self.hs.config.user_consent_version,
+            )
+
         defer.returnValue((200, return_dict))

     def on_OPTIONS(self, _):
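Worth noting in the first hunk above: the in-place `flow.append(LoginType.TERMS)` is what actually adds the terms stage to every flow, while `new_flows` is never populated, so the trailing `flows.extend(new_flows)` extends by an empty list and is effectively a no-op. A minimal demonstration:

    flows = [["m.login.email.identity"], ["m.login.msisdn"]]
    new_flows = []
    for flow in flows:
        flow.append("m.login.terms")   # mutates each flow in place
    flows.extend(new_flows)            # new_flows is empty; nothing added
    assert flows == [
        ["m.login.email.identity", "m.login.terms"],
        ["m.login.msisdn", "m.login.terms"],
    ]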
@@ -17,7 +17,7 @@ import logging

 from twisted.internet import defer

-from synapse.api.errors import Codes, SynapseError
+from synapse.api.errors import Codes, NotFoundError, SynapseError
 from synapse.http.servlet import (
     RestServlet,
     parse_json_object_from_request,
@@ -208,10 +208,25 @@ class RoomKeysServlet(RestServlet):
             user_id, version, room_id, session_id
         )

+        # Convert room_keys to the right format to return.
         if session_id:
-            room_keys = room_keys['rooms'][room_id]['sessions'][session_id]
+            # If the client requests a specific session, but that session was
+            # not backed up, then return an M_NOT_FOUND.
+            if room_keys['rooms'] == {}:
+                raise NotFoundError("No room_keys found")
+            else:
+                room_keys = room_keys['rooms'][room_id]['sessions'][session_id]
         elif room_id:
-            room_keys = room_keys['rooms'][room_id]
+            # If the client requests all sessions from a room, but no sessions
+            # are found, then return an empty result rather than an error, so
+            # that clients don't have to handle an error condition, and an
+            # empty result is valid. (Similarly if the client requests all
+            # sessions from the backup, but in that case, room_keys is already
+            # in the right format, so we don't need to do anything about it.)
+            if room_keys['rooms'] == {}:
+                room_keys = {'sessions': {}}
+            else:
+                room_keys = room_keys['rooms'][room_id]

         defer.returnValue((200, room_keys))
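The shaping rule above is easiest to see as a pure function: asking for a specific session that was never backed up is an error, while asking for a room with no sessions yields an empty-but-valid result. A sketch, with a plain exception standing in for Synapse's NotFoundError:

    def shape_room_keys(room_keys, room_id=None, session_id=None):
        if session_id:
            if room_keys['rooms'] == {}:
                raise LookupError("No room_keys found")  # M_NOT_FOUND in the servlet
            return room_keys['rooms'][room_id]['sessions'][session_id]
        if room_id:
            if room_keys['rooms'] == {}:
                return {'sessions': {}}
            return room_keys['rooms'][room_id]
        return room_keys  # full-backup queries are already in the right shape

    assert shape_room_keys({'rooms': {}}, room_id="!a:hs") == {'sessions': {}}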
@@ -137,27 +137,31 @@ class ConsentResource(Resource):
             request (twisted.web.http.Request):
         """

-        version = parse_string(request, "v",
-                               default=self._default_consent_version)
-        username = parse_string(request, "u", required=True)
-        userhmac = parse_string(request, "h", required=True, encoding=None)
-
-        self._check_hash(username, userhmac)
+        version = parse_string(request, "v", default=self._default_consent_version)
+        username = parse_string(request, "u", required=False, default="")
+        userhmac = None
+        has_consented = False
+        public_version = username == ""
+        if not public_version or not self.hs.config.user_consent_at_registration:
+            userhmac = parse_string(request, "h", required=True, encoding=None)
+
+            self._check_hash(username, userhmac)

-        if username.startswith('@'):
-            qualified_user_id = username
-        else:
-            qualified_user_id = UserID(username, self.hs.hostname).to_string()
+            if username.startswith('@'):
+                qualified_user_id = username
+            else:
+                qualified_user_id = UserID(username, self.hs.hostname).to_string()

-        u = yield self.store.get_user_by_id(qualified_user_id)
-        if u is None:
-            raise NotFoundError("Unknown user")
+            u = yield self.store.get_user_by_id(qualified_user_id)
+            if u is None:
+                raise NotFoundError("Unknown user")
+
+            has_consented = u["consent_version"] == version

         try:
             self._render_template(
                 request, "%s.html" % (version,),
                 user=username, userhmac=userhmac, version=version,
-                has_consented=(u["consent_version"] == version),
+                has_consented=has_consented, public_version=public_version,
             )
         except TemplateNotFound:
             raise NotFoundError("Unknown policy version")
@@ -223,7 +227,7 @@ class ConsentResource(Resource):
             key=self._hmac_secret,
             msg=userid.encode('utf-8'),
             digestmod=sha256,
-        ).hexdigest()
+        ).hexdigest().encode('ascii')

         if not compare_digest(want_mac, userhmac):
             raise SynapseError(http_client.FORBIDDEN, "HMAC incorrect")
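The `.encode('ascii')` matters because the `h` parameter is parsed with `encoding=None` and so arrives as bytes, while `hexdigest()` returns a text string; `compare_digest` needs both arguments to be the same type. A standalone sketch of the check:

    from hashlib import sha256
    from hmac import HMAC, compare_digest

    def check_hash(secret, userid, userhmac_bytes):
        want_mac = HMAC(
            key=secret,
            msg=userid.encode('utf-8'),
            digestmod=sha256,
        ).hexdigest().encode('ascii')
        # constant-time comparison, bytes vs bytes
        return compare_digest(want_mac, userhmac_bytes)

    secret = b"123abc"
    mac = HMAC(key=secret, msg=b"user", digestmod=sha256).hexdigest().encode('ascii')
    assert check_hash(secret, "user", mac)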
@@ -1,14 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2015, 2016 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
@@ -1,92 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2014-2016 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import logging
-
-from canonicaljson import encode_canonical_json
-from signedjson.sign import sign_json
-from unpaddedbase64 import encode_base64
-
-from OpenSSL import crypto
-from twisted.web.resource import Resource
-
-from synapse.http.server import respond_with_json_bytes
-
-logger = logging.getLogger(__name__)
-
-
-class LocalKey(Resource):
-    """HTTP resource containing encoding the TLS X.509 certificate and NACL
-    signature verification keys for this server::
-
-        GET /key HTTP/1.1
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-        {
-            "server_name": "this.server.example.com"
-            "verify_keys": {
-                "algorithm:version": # base64 encoded NACL verification key.
-            },
-            "tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
-            "signatures": {
-                "this.server.example.com": {
-                    "algorithm:version": # NACL signature for this server.
-                }
-            }
-        }
-    """
-
-    def __init__(self, hs):
-        self.response_body = encode_canonical_json(
-            self.response_json_object(hs.config)
-        )
-        Resource.__init__(self)
-
-    @staticmethod
-    def response_json_object(server_config):
-        verify_keys = {}
-        for key in server_config.signing_key:
-            verify_key_bytes = key.verify_key.encode()
-            key_id = "%s:%s" % (key.alg, key.version)
-            verify_keys[key_id] = encode_base64(verify_key_bytes)
-
-        x509_certificate_bytes = crypto.dump_certificate(
-            crypto.FILETYPE_ASN1,
-            server_config.tls_certificate
-        )
-        json_object = {
-            u"server_name": server_config.server_name,
-            u"verify_keys": verify_keys,
-            u"tls_certificate": encode_base64(x509_certificate_bytes)
-        }
-        for key in server_config.signing_key:
-            json_object = sign_json(
-                json_object,
-                server_config.server_name,
-                key,
-            )
-
-        return json_object
-
-    def render_GET(self, request):
-        return respond_with_json_bytes(
-            request, 200, self.response_body,
-        )
-
-    def getChild(self, name, request):
-        if name == b'':
-            return self
@@ -261,7 +261,7 @@ class StateHandler(object):
         logger.debug("calling resolve_state_groups from compute_event_context")

         entry = yield self.resolve_state_groups_for_events(
-            event.room_id, [e for e, _ in event.prev_events],
+            event.room_id, event.prev_event_ids(),
         )

         prev_state_ids = entry.state

@@ -607,7 +607,7 @@ def resolve_events_with_store(room_version, state_sets, event_map, state_res_store):
         return v1.resolve_events_with_store(
             state_sets, event_map, state_res_store.get_events,
         )
-    elif room_version == RoomVersions.VDH_TEST:
+    elif room_version in (RoomVersions.VDH_TEST, RoomVersions.STATE_V2_TEST):
         return v2.resolve_events_with_store(
             state_sets, event_map, state_res_store,
         )
@@ -53,6 +53,10 @@ def resolve_events_with_store(state_sets, event_map, state_res_store):

     logger.debug("Computing conflicted state")

+    # We use event_map as a cache, so if its None we need to initialize it
+    if event_map is None:
+        event_map = {}
+
     # First split up the un/conflicted state
     unconflicted_state, conflicted_state = _seperate(state_sets)
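Normalising `event_map` to an empty dict up front lets every later fetch share one memo cache without re-checking for None. The idiom in isolation (the `fetch` callable is a stand-in for the store lookup):

    def get_event(event_id, event_map, fetch):
        # fill the cache on miss; later lookups for the same ID are free
        if event_id not in event_map:
            event_map[event_id] = fetch(event_id)
        return event_map[event_id]

    calls = []
    cache = {}
    fetch = lambda eid: calls.append(eid) or {"event_id": eid}
    get_event("$a:hs", cache, fetch)
    get_event("$a:hs", cache, fetch)
    assert calls == ["$a:hs"]  # fetched only once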
@@ -155,7 +159,7 @@ def _get_power_level_for_sender(event_id, event_map, state_res_store):
     event = yield _get_event(event_id, event_map, state_res_store)

     pl = None
-    for aid, _ in event.auth_events:
+    for aid in event.auth_event_ids():
         aev = yield _get_event(aid, event_map, state_res_store)
         if (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
             pl = aev

@@ -163,7 +167,7 @@ def _get_power_level_for_sender(event_id, event_map, state_res_store):

     if pl is None:
         # Couldn't find power level. Check if they're the creator of the room
-        for aid, _ in event.auth_events:
+        for aid in event.auth_event_ids():
             aev = yield _get_event(aid, event_map, state_res_store)
             if (aev.type, aev.state_key) == (EventTypes.Create, ""):
                 if aev.content.get("creator") == event.sender:

@@ -295,7 +299,7 @@ def _add_event_and_auth_chain_to_graph(graph, event_id, event_map,
         graph.setdefault(eid, set())

         event = yield _get_event(eid, event_map, state_res_store)
-        for aid, _ in event.auth_events:
+        for aid in event.auth_event_ids():
             if aid in auth_diff:
                 if aid not in graph:
                     state.append(aid)

@@ -365,7 +369,7 @@ def _iterative_auth_checks(event_ids, base_state, event_map, state_res_store):
         event = event_map[event_id]

         auth_events = {}
-        for aid, _ in event.auth_events:
+        for aid in event.auth_event_ids():
             ev = yield _get_event(aid, event_map, state_res_store)

             if ev.rejected_reason is None:

@@ -413,9 +417,9 @@ def _mainline_sort(event_ids, resolved_power_event_id, event_map,
     while pl:
         mainline.append(pl)
         pl_ev = yield _get_event(pl, event_map, state_res_store)
-        auth_events = pl_ev.auth_events
+        auth_events = pl_ev.auth_event_ids()
         pl = None
-        for aid, _ in auth_events:
+        for aid in auth_events:
             ev = yield _get_event(aid, event_map, state_res_store)
             if (ev.type, ev.state_key) == (EventTypes.PowerLevels, ""):
                 pl = aid

@@ -460,10 +464,10 @@ def _get_mainline_depth_for_event(event, mainline_map, event_map, state_res_store):
         if depth is not None:
             defer.returnValue(depth)

-        auth_events = event.auth_events
+        auth_events = event.auth_event_ids()
         event = None

-        for aid, _ in auth_events:
+        for aid in auth_events:
             aev = yield _get_event(aid, event_map, state_res_store)
             if (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
                 event = aev
@@ -22,14 +22,19 @@ from twisted.internet import defer

 from synapse.api.errors import StoreError
 from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.storage.background_updates import BackgroundUpdateStore
 from synapse.util.caches.descriptors import cached, cachedInlineCallbacks, cachedList

-from ._base import Cache, SQLBaseStore, db_to_json
+from ._base import Cache, db_to_json

 logger = logging.getLogger(__name__)

+DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES = (
+    "drop_device_list_streams_non_unique_indexes"
+)
+

-class DeviceStore(SQLBaseStore):
+class DeviceStore(BackgroundUpdateStore):
     def __init__(self, db_conn, hs):
         super(DeviceStore, self).__init__(db_conn, hs)
@@ -52,6 +57,30 @@ class DeviceStore(SQLBaseStore):
             columns=["user_id", "device_id"],
         )

+        # create a unique index on device_lists_remote_cache
+        self.register_background_index_update(
+            "device_lists_remote_cache_unique_idx",
+            index_name="device_lists_remote_cache_unique_id",
+            table="device_lists_remote_cache",
+            columns=["user_id", "device_id"],
+            unique=True,
+        )
+
+        # And one on device_lists_remote_extremeties
+        self.register_background_index_update(
+            "device_lists_remote_extremeties_unique_idx",
+            index_name="device_lists_remote_extremeties_unique_idx",
+            table="device_lists_remote_extremeties",
+            columns=["user_id"],
+            unique=True,
+        )
+
+        # once they complete, we can remove the old non-unique indexes.
+        self.register_background_update_handler(
+            DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES,
+            self._drop_device_list_streams_non_unique_indexes,
+        )
+
     @defer.inlineCallbacks
     def store_device(self, user_id, device_id,
                      initial_device_display_name):
@@ -239,7 +268,19 @@ class DeviceStore(SQLBaseStore):

     def update_remote_device_list_cache_entry(self, user_id, device_id, content,
                                               stream_id):
-        """Updates a single user's device in the cache.
+        """Updates a single device in the cache of a remote user's devicelist.
+
+        Note: assumes that we are the only thread that can be updating this user's
+        device list.
+
+        Args:
+            user_id (str): User to update device list for
+            device_id (str): ID of device being updated
+            content (dict): new data on this device
+            stream_id (int): the version of the device list
+
+        Returns:
+            Deferred[None]
         """
         return self.runInteraction(
             "update_remote_device_list_cache_entry",
@@ -272,7 +313,11 @@ class DeviceStore(SQLBaseStore):
             },
             values={
                 "content": json.dumps(content),
-            }
+            },
+
+            # we don't need to lock, because we assume we are the only thread
+            # updating this user's devices.
+            lock=False,
         )

         txn.call_after(self._get_cached_user_device.invalidate, (user_id, device_id,))
@@ -289,11 +334,26 @@ class DeviceStore(SQLBaseStore):
             },
             values={
                 "stream_id": stream_id,
-            }
+            },
+
+            # again, we can assume we are the only thread updating this user's
+            # extremity.
+            lock=False,
         )

     def update_remote_device_list_cache(self, user_id, devices, stream_id):
-        """Replace the cache of the remote user's devices.
+        """Replace the entire cache of the remote user's devices.
+
+        Note: assumes that we are the only thread that can be updating this user's
+        device list.
+
+        Args:
+            user_id (str): User to update device list for
+            devices (list[dict]): list of device objects supplied over federation
+            stream_id (int): the version of the device list
+
+        Returns:
+            Deferred[None]
         """
         return self.runInteraction(
             "update_remote_device_list_cache",
@@ -338,7 +398,11 @@ class DeviceStore(SQLBaseStore):
             },
             values={
                 "stream_id": stream_id,
-            }
+            },
+
+            # we don't need to lock, because we can assume we are the only thread
+            # updating this user's extremity.
+            lock=False,
         )

     def get_devices_by_remote(self, destination, from_stream_id):
@@ -722,3 +786,19 @@ class DeviceStore(SQLBaseStore):
             "_prune_old_outbound_device_pokes",
             _prune_txn,
         )
+
+    @defer.inlineCallbacks
+    def _drop_device_list_streams_non_unique_indexes(self, progress, batch_size):
+        def f(conn):
+            txn = conn.cursor()
+            txn.execute(
+                "DROP INDEX IF EXISTS device_lists_remote_cache_id"
+            )
+            txn.execute(
+                "DROP INDEX IF EXISTS device_lists_remote_extremeties_id"
+            )
+            txn.close()
+
+        yield self.runWithConnection(f)
+        yield self._end_background_update(DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES)
+        defer.returnValue(1)
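A background update handler of this shape follows a simple contract: do the work, mark the update finished, and return a count of items processed so the scheduler can pace itself. A minimal sketch of that contract, with a hypothetical scheduler stand-in:

    class BackgroundUpdaterSketch(object):
        def __init__(self):
            self.handlers = {}
            self.finished = set()

        def register(self, name, handler):
            self.handlers[name] = handler

        def run(self, name, batch_size=100):
            # the handler returns how many items it processed this batch
            return self.handlers[name](progress={}, batch_size=batch_size)

        def end(self, name):
            self.finished.add(name)

    updater = BackgroundUpdaterSketch()

    def drop_old_indexes(progress, batch_size):
        # one-shot update: do the work, mark complete, report one item
        updater.end("drop_old_indexes")
        return 1

    updater.register("drop_old_indexes", drop_old_indexes)
    assert updater.run("drop_old_indexes") == 1
    assert "drop_old_indexes" in updater.finished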
@@ -477,7 +477,7 @@ class EventFederationStore(EventFederationWorkerStore):
                     "is_state": False,
                 }
                 for ev in events
-                for e_id, _ in ev.prev_events
+                for e_id in ev.prev_event_ids()
             ],
         )

@@ -510,7 +510,7 @@ class EventFederationStore(EventFederationWorkerStore):

         txn.executemany(query, [
             (e_id, ev.room_id, e_id, ev.room_id, e_id, ev.room_id, False)
-            for ev in events for e_id, _ in ev.prev_events
+            for ev in events for e_id in ev.prev_event_ids()
             if not ev.internal_metadata.is_outlier()
         ])
@@ -416,7 +416,7 @@ class EventsStore(StateGroupWorkerStore, EventFederationStore, EventsWorkerStore
                 )
                 if len_1:
                     all_single_prev_not_state = all(
-                        len(event.prev_events) == 1
+                        len(event.prev_event_ids()) == 1
                         and not event.is_state()
                         for event, ctx in ev_ctx_rm
                     )

@@ -440,7 +440,7 @@ class EventsStore(StateGroupWorkerStore, EventFederationStore, EventsWorkerStore
                     # guess this by looking at the prev_events and checking
                     # if they match the current forward extremities.
                     for ev, _ in ev_ctx_rm:
-                        prev_event_ids = set(e for e, _ in ev.prev_events)
+                        prev_event_ids = set(ev.prev_event_ids())
                         if latest_event_ids == prev_event_ids:
                             state_delta_reuse_delta_counter.inc()
                             break

@@ -551,7 +551,7 @@ class EventsStore(StateGroupWorkerStore, EventFederationStore, EventsWorkerStore
         result.difference_update(
             e_id
             for event in new_events
-            for e_id, _ in event.prev_events
+            for e_id in event.prev_event_ids()
         )

         # Finally, remove any events which are prev_events of any existing events.

@@ -869,7 +869,7 @@ class EventsStore(StateGroupWorkerStore, EventFederationStore, EventsWorkerStore
                     "auth_id": auth_id,
                 }
                 for event, _ in events_and_contexts
-                for auth_id, _ in event.auth_events
+                for auth_id in event.auth_event_ids()
                 if event.is_state()
             ],
         )
@@ -20,9 +20,6 @@ CREATE TABLE device_lists_remote_cache (
     content TEXT NOT NULL
 );

-CREATE INDEX device_lists_remote_cache_id ON device_lists_remote_cache(user_id, device_id);
-
-
 -- The last update we got for a user. Empty if we're not receiving updates for
 -- that user.
 CREATE TABLE device_lists_remote_extremeties (

@@ -30,7 +27,11 @@ CREATE TABLE device_lists_remote_extremeties (
     stream_id TEXT NOT NULL
 );

-CREATE INDEX device_lists_remote_extremeties_id ON device_lists_remote_extremeties(user_id, stream_id);
+-- we used to create non-unique indexes on these tables, but as of update 52 we create
+-- unique indexes concurrently:
+--
+-- CREATE INDEX device_lists_remote_cache_id ON device_lists_remote_cache(user_id, device_id);
+-- CREATE INDEX device_lists_remote_extremeties_id ON device_lists_remote_extremeties(user_id, stream_id);


 -- Stream of device lists updates. Includes both local and remotes
@@ -0,0 +1,36 @@
+/* Copyright 2018 New Vector Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+-- register a background update which will create a unique index on
+-- device_lists_remote_cache
+INSERT into background_updates (update_name, progress_json)
+    VALUES ('device_lists_remote_cache_unique_idx', '{}');
+
+-- and one on device_lists_remote_extremeties
+INSERT into background_updates (update_name, progress_json, depends_on)
+    VALUES (
+        'device_lists_remote_extremeties_unique_idx', '{}',
+
+        -- doesn't really depend on this, but we need to make sure both happen
+        -- before we drop the old indexes.
+        'device_lists_remote_cache_unique_idx'
+    );
+
+-- once they complete, we can drop the old indexes.
+INSERT into background_updates (update_name, progress_json, depends_on)
+    VALUES (
+        'drop_device_list_streams_non_unique_indexes', '{}',
+        'device_lists_remote_extremeties_unique_idx'
+    );
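The three rows above form a linear dependency chain: each update only becomes runnable once its depends_on predecessor has finished, which guarantees both unique indexes exist before the old non-unique ones are dropped. A toy resolver for that ordering:

    updates = {
        "device_lists_remote_cache_unique_idx": None,
        "device_lists_remote_extremeties_unique_idx":
            "device_lists_remote_cache_unique_idx",
        "drop_device_list_streams_non_unique_indexes":
            "device_lists_remote_extremeties_unique_idx",
    }

    def run_order(updates):
        done, order = set(), []
        while len(done) < len(updates):
            for name, dep in sorted(updates.items()):
                if name not in done and (dep is None or dep in done):
                    done.add(name)
                    order.append(name)
        return order

    assert run_order(updates)[-1] == "drop_device_list_streams_non_unique_indexes"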
@@ -169,8 +169,8 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         self.assertEqual(res, 404)

     @defer.inlineCallbacks
-    def test_get_missing_room_keys(self):
-        """Check that we get a 404 on querying missing room_keys
+    def test_get_missing_backup(self):
+        """Check that we get a 404 on querying missing backup
         """
         res = None
         try:

@@ -179,19 +179,20 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
             res = e.code
         self.assertEqual(res, 404)

-        # check we also get a 404 even if the version is valid
+    @defer.inlineCallbacks
+    def test_get_missing_room_keys(self):
+        """Check we get an empty response from an empty backup
+        """
         version = yield self.handler.create_version(self.local_user, {
             "algorithm": "m.megolm_backup.v1",
             "auth_data": "first_version_auth_data",
         })
         self.assertEqual(version, "1")

-        res = None
-        try:
-            yield self.handler.get_room_keys(self.local_user, version)
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        res = yield self.handler.get_room_keys(self.local_user, version)
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })

         # TODO: test the locking semantics when uploading room_keys,
         # although this is probably best done in sytest

@@ -345,17 +346,15 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         # check for bulk-delete
         yield self.handler.upload_room_keys(self.local_user, version, room_keys)
         yield self.handler.delete_room_keys(self.local_user, version)
-        res = None
-        try:
-            yield self.handler.get_room_keys(
-                self.local_user,
-                version,
-                room_id="!abc:matrix.org",
-                session_id="c0ff33",
-            )
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        res = yield self.handler.get_room_keys(
+            self.local_user,
+            version,
+            room_id="!abc:matrix.org",
+            session_id="c0ff33",
+        )
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })

         # check for bulk-delete per room
         yield self.handler.upload_room_keys(self.local_user, version, room_keys)

@@ -364,17 +363,15 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
             version,
             room_id="!abc:matrix.org",
         )
-        res = None
-        try:
-            yield self.handler.get_room_keys(
-                self.local_user,
-                version,
-                room_id="!abc:matrix.org",
-                session_id="c0ff33",
-            )
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        res = yield self.handler.get_room_keys(
+            self.local_user,
+            version,
+            room_id="!abc:matrix.org",
+            session_id="c0ff33",
+        )
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })

         # check for bulk-delete per session
         yield self.handler.upload_room_keys(self.local_user, version, room_keys)

@@ -384,14 +381,12 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
             room_id="!abc:matrix.org",
             session_id="c0ff33",
         )
-        res = None
-        try:
-            yield self.handler.get_room_keys(
-                self.local_user,
-                version,
-                room_id="!abc:matrix.org",
-                session_id="c0ff33",
-            )
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        res = yield self.handler.get_room_keys(
+            self.local_user,
+            version,
+            room_id="!abc:matrix.org",
+            session_id="c0ff33",
+        )
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })
@@ -0,0 +1,159 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018 New Vector
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from mock import Mock
+
+from twisted.internet.defer import Deferred
+
+from synapse.rest.client.v1 import admin, login, room
+
+from tests.unittest import HomeserverTestCase
+
+try:
+    from synapse.push.mailer import load_jinja2_templates
+except Exception:
+    load_jinja2_templates = None
+
+
+class HTTPPusherTests(HomeserverTestCase):
+
+    skip = "No Jinja installed" if not load_jinja2_templates else None
+    servlets = [
+        admin.register_servlets,
+        room.register_servlets,
+        login.register_servlets,
+    ]
+    user_id = True
+    hijack_auth = False
+
+    def make_homeserver(self, reactor, clock):
+
+        self.push_attempts = []
+
+        m = Mock()
+
+        def post_json_get_json(url, body):
+            d = Deferred()
+            self.push_attempts.append((d, url, body))
+            return d
+
+        m.post_json_get_json = post_json_get_json
+
+        config = self.default_config()
+        config.start_pushers = True
+
+        hs = self.setup_test_homeserver(config=config, simple_http_client=m)
+
+        return hs
+
+    def test_sends_http(self):
+        """
+        The HTTP pusher will send pushes for each message to a HTTP endpoint
+        when configured to do so.
+        """
+        # Register the user who gets notified
+        user_id = self.register_user("user", "pass")
+        access_token = self.login("user", "pass")
+
+        # Register the user who sends the message
+        other_user_id = self.register_user("otheruser", "pass")
+        other_access_token = self.login("otheruser", "pass")
+
+        # Register the pusher
+        user_tuple = self.get_success(
+            self.hs.get_datastore().get_user_by_access_token(access_token)
+        )
+        token_id = user_tuple["token_id"]
+
+        self.get_success(
+            self.hs.get_pusherpool().add_pusher(
+                user_id=user_id,
+                access_token=token_id,
+                kind="http",
+                app_id="m.http",
+                app_display_name="HTTP Push Notifications",
+                device_display_name="pushy push",
+                pushkey="a@example.com",
+                lang=None,
+                data={"url": "example.com"},
+            )
+        )
+
+        # Create a room
+        room = self.helper.create_room_as(user_id, tok=access_token)
+
+        # Invite the other person
+        self.helper.invite(room=room, src=user_id, tok=access_token, targ=other_user_id)
+
+        # The other user joins
+        self.helper.join(room=room, user=other_user_id, tok=other_access_token)
+
+        # The other user sends some messages
+        self.helper.send(room, body="Hi!", tok=other_access_token)
+        self.helper.send(room, body="There!", tok=other_access_token)
+
+        # Get the stream ordering before it gets sent
+        pushers = self.get_success(
+            self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
+        )
+        self.assertEqual(len(pushers), 1)
+        last_stream_ordering = pushers[0]["last_stream_ordering"]
+
+        # Advance time a bit, so the pusher will register something has happened
+        self.pump()
+
+        # It hasn't succeeded yet, so the stream ordering shouldn't have moved
+        pushers = self.get_success(
+            self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
+        )
+        self.assertEqual(len(pushers), 1)
+        self.assertEqual(last_stream_ordering, pushers[0]["last_stream_ordering"])
+
+        # One push was attempted to be sent -- it'll be the first message
+        self.assertEqual(len(self.push_attempts), 1)
+        self.assertEqual(self.push_attempts[0][1], "example.com")
+        self.assertEqual(
+            self.push_attempts[0][2]["notification"]["content"]["body"], "Hi!"
+        )
+
+        # Make the push succeed
+        self.push_attempts[0][0].callback({})
+        self.pump()
+
+        # The stream ordering has increased
+        pushers = self.get_success(
+            self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
+        )
+        self.assertEqual(len(pushers), 1)
+        self.assertTrue(pushers[0]["last_stream_ordering"] > last_stream_ordering)
+        last_stream_ordering = pushers[0]["last_stream_ordering"]
+
+        # Now it'll try and send the second push message, which will be the second one
+        self.assertEqual(len(self.push_attempts), 2)
+        self.assertEqual(self.push_attempts[1][1], "example.com")
+        self.assertEqual(
+            self.push_attempts[1][2]["notification"]["content"]["body"], "There!"
+        )
+
+        # Make the second push succeed
+        self.push_attempts[1][0].callback({})
+        self.pump()
+
+        # The stream ordering has increased, again
+        pushers = self.get_success(
+            self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
+        )
+        self.assertEqual(len(pushers), 1)
+        self.assertTrue(pushers[0]["last_stream_ordering"] > last_stream_ordering)
@@ -28,8 +28,8 @@ ROOM_ID = "!room:blue"


 def dict_equals(self, other):
-    me = encode_canonical_json(self._event_dict)
-    them = encode_canonical_json(other._event_dict)
+    me = encode_canonical_json(self.get_pdu_json())
+    them = encode_canonical_json(other.get_pdu_json())
     return me == them
@@ -0,0 +1,111 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from synapse.api.urls import ConsentURIBuilder
from synapse.rest.client.v1 import admin, login, room
from synapse.rest.consent import consent_resource

from tests import unittest
from tests.server import render

try:
    from synapse.push.mailer import load_jinja2_templates
except Exception:
    load_jinja2_templates = None


class ConsentResourceTestCase(unittest.HomeserverTestCase):
    skip = "No Jinja installed" if not load_jinja2_templates else None
    servlets = [
        admin.register_servlets,
        room.register_servlets,
        login.register_servlets,
    ]
    user_id = True
    hijack_auth = False

    def make_homeserver(self, reactor, clock):

        config = self.default_config()
        config.user_consent_version = "1"
        config.public_baseurl = ""
        config.form_secret = "123abc"

        # Make some temporary templates...
        temp_consent_path = self.mktemp()
        os.mkdir(temp_consent_path)
        os.mkdir(os.path.join(temp_consent_path, 'en'))
        config.user_consent_template_dir = os.path.abspath(temp_consent_path)

        with open(os.path.join(temp_consent_path, "en/1.html"), 'w') as f:
            f.write("{{version}},{{has_consented}}")

        with open(os.path.join(temp_consent_path, "en/success.html"), 'w') as f:
            f.write("yay!")

        hs = self.setup_test_homeserver(config=config)
        return hs

    def test_accept_consent(self):
        """
        A user can use the consent form to accept the terms.
        """
        uri_builder = ConsentURIBuilder(self.hs.config)
        resource = consent_resource.ConsentResource(self.hs)

        # Register a user
        user_id = self.register_user("user", "pass")
        access_token = self.login("user", "pass")

        # Fetch the consent page, to get the consent version
        consent_uri = (
            uri_builder.build_user_consent_uri(user_id).replace("_matrix/", "")
            + "&u=user"
        )
        request, channel = self.make_request(
            "GET", consent_uri, access_token=access_token, shorthand=False
        )
        render(request, resource, self.reactor)
        self.assertEqual(channel.code, 200)

        # Get the version from the body, and whether we've consented
        version, consented = channel.result["body"].decode('ascii').split(",")
        self.assertEqual(consented, "False")

        # POST to the consent page, saying we've agreed
        request, channel = self.make_request(
            "POST",
            consent_uri + "&v=" + version,
            access_token=access_token,
            shorthand=False,
        )
        render(request, resource, self.reactor)
        self.assertEqual(channel.code, 200)

        # Fetch the consent page, to get the consent version -- it should have
        # changed
        request, channel = self.make_request(
            "GET", consent_uri, access_token=access_token, shorthand=False
        )
        render(request, resource, self.reactor)
        self.assertEqual(channel.code, 200)

        # Get the version from the body, and check that it's the version we
        # agreed to, and that we've consented to it.
        version, consented = channel.result["body"].decode('ascii').split(",")
        self.assertEqual(consented, "True")
        self.assertEqual(version, "1")
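The `.replace("_matrix/", "")` dance is needed because `build_user_consent_uri` returns an absolute URI while the test renders the resource directly. For orientation, a sketch of the kind of URI it builds, under the assumption that the user identifier is protected by an HMAC-SHA256 keyed with `form_secret` (which is why the config above sets one):

    import hmac
    from hashlib import sha256

    form_secret = b"123abc"
    user = b"user"
    # The MAC lets the consent resource trust the "u" parameter without auth.
    mac = hmac.new(form_secret, user, sha256).hexdigest()
    consent_uri = "/_matrix/consent?u=%s&h=%s" % (user.decode("ascii"), mac)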
@@ -15,9 +15,11 @@

 from mock import Mock

+from synapse.rest.client.v1 import admin, login, room
 from synapse.rest.client.v2_alpha import sync

 from tests import unittest
+from tests.server import TimedOutException


 class FilterTestCase(unittest.HomeserverTestCase):
@@ -65,3 +67,124 @@ class FilterTestCase(unittest.HomeserverTestCase):
                ["next_batch", "rooms", "account_data", "to_device", "device_lists"]
            ).issubset(set(channel.json_body.keys()))
        )


class SyncTypingTests(unittest.HomeserverTestCase):

    servlets = [
        admin.register_servlets,
        room.register_servlets,
        login.register_servlets,
        sync.register_servlets,
    ]
    user_id = True
    hijack_auth = False

    def test_sync_backwards_typing(self):
        """
        If the typing serial goes backwards and the typing handler is then reset
        (such as when the master restarts and sets the typing serial to 0), we
        do not incorrectly return typing information that had a serial greater
        than the now-reset serial.
        """
        typing_url = "/rooms/%s/typing/%s?access_token=%s"
        sync_url = "/sync?timeout=3000000&access_token=%s&since=%s"

        # Register the user who gets notified
        user_id = self.register_user("user", "pass")
        access_token = self.login("user", "pass")

        # Register the user who sends the message
        other_user_id = self.register_user("otheruser", "pass")
        other_access_token = self.login("otheruser", "pass")

        # Create a room
        room = self.helper.create_room_as(user_id, tok=access_token)

        # Invite the other person
        self.helper.invite(room=room, src=user_id, tok=access_token, targ=other_user_id)

        # The other user joins
        self.helper.join(room=room, user=other_user_id, tok=other_access_token)

        # The other user sends some messages
        self.helper.send(room, body="Hi!", tok=other_access_token)
        self.helper.send(room, body="There!", tok=other_access_token)

        # Start typing.
        request, channel = self.make_request(
            "PUT",
            typing_url % (room, other_user_id, other_access_token),
            b'{"typing": true, "timeout": 30000}',
        )
        self.render(request)
        self.assertEquals(200, channel.code)

        request, channel = self.make_request(
            "GET", "/sync?access_token=%s" % (access_token,)
        )
        self.render(request)
        self.assertEquals(200, channel.code)
        next_batch = channel.json_body["next_batch"]

        # Stop typing.
        request, channel = self.make_request(
            "PUT",
            typing_url % (room, other_user_id, other_access_token),
            b'{"typing": false}',
        )
        self.render(request)
        self.assertEquals(200, channel.code)

        # Start typing.
        request, channel = self.make_request(
            "PUT",
            typing_url % (room, other_user_id, other_access_token),
            b'{"typing": true, "timeout": 30000}',
        )
        self.render(request)
        self.assertEquals(200, channel.code)

        # Should return immediately
        request, channel = self.make_request(
            "GET", sync_url % (access_token, next_batch)
        )
        self.render(request)
        self.assertEquals(200, channel.code)
        next_batch = channel.json_body["next_batch"]

        # Reset typing serial back to 0, as if the master had.
        typing = self.hs.get_typing_handler()
        typing._latest_room_serial = 0

        # Since it checks the state token, we need some state to update to
        # invalidate the stream token.
        self.helper.send(room, body="There!", tok=other_access_token)

        request, channel = self.make_request(
            "GET", sync_url % (access_token, next_batch)
        )
        self.render(request)
        self.assertEquals(200, channel.code)
        next_batch = channel.json_body["next_batch"]

        # This should time out! But it does not, because our stream token is
        # ahead, and therefore it's saying the typing (that we've actually
        # already seen) is new, since it's got a token above our new, now-reset
        # stream token.
        request, channel = self.make_request(
            "GET", sync_url % (access_token, next_batch)
        )
        self.render(request)
        self.assertEquals(200, channel.code)
        next_batch = channel.json_body["next_batch"]

        # Clear the typing information, so that it doesn't think everything is
        # in the future.
        typing._reset()

        # Now it SHOULD fail as it never completes!
        request, channel = self.make_request(
            "GET", sync_url % (access_token, next_batch)
        )
        self.assertRaises(TimedOutException, self.render, request)
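The failure mode this test pins down is easiest to see with the serials written out. A minimal sketch (not the typing handler's real data structures): cached per-room serials survive the reset, so they keep comparing as "newer" than any post-reset token, and the same typing notification is replayed on every sync.

    room_serials = {"!room:blue": 5}   # left over from before the restart
    latest_room_serial = 0             # reset when the master restarts

    def rooms_with_new_typing(since_token):
        return [r for r, s in room_serials.items() if s > since_token]

    # Every sync with a post-reset token sees the stale entry as "new" again.
    assert rooms_with_new_typing(latest_room_serial) == ["!room:blue"]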
@@ -21,6 +21,12 @@ from synapse.util import Clock
 from tests.utils import setup_test_homeserver as _sth


+class TimedOutException(Exception):
+    """
+    A web query timed out.
+    """
+
+
 @attr.s
 class FakeChannel(object):
     """
@@ -98,10 +104,24 @@ class FakeSite:
         return FakeLogger()


-def make_request(method, path, content=b"", access_token=None, request=SynapseRequest):
+def make_request(
+    method, path, content=b"", access_token=None, request=SynapseRequest, shorthand=True
+):
     """
     Make a web request using the given method and path, feed it the
     content, and return the Request and the Channel underneath.
+
+    Args:
+        method (bytes/unicode): The HTTP request method ("verb").
+        path (bytes/unicode): The HTTP path, suitably URL encoded (e.g.
+            escaped UTF-8 & spaces and such).
+        content (bytes or dict): The body of the request. JSON-encoded, if
+            a dict.
+        shorthand: Whether to try and be helpful and prefix the given URL
+            with the usual REST API path, if it doesn't contain it.
+
+    Returns:
+        A synapse.http.site.SynapseRequest.
     """
     if not isinstance(method, bytes):
         method = method.encode('ascii')
@@ -109,8 +129,8 @@ def make_request(method, path, content=b"", access_token=None, request=SynapseRe
     if not isinstance(path, bytes):
         path = path.encode('ascii')

-    # Decorate it to be the full path
-    if not path.startswith(b"/_matrix"):
+    # Decorate it to be the full path, if we're using shorthand
+    if shorthand and not path.startswith(b"/_matrix"):
         path = b"/_matrix/client/r0/" + path
         path = path.replace(b"//", b"/")
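The effect of the new `shorthand` flag, sketched with a hypothetical consent path: the default keeps existing tests terse, while `shorthand=False` lets the consent and terms tests above hit resources outside the client API prefix.

    from tests.server import make_request

    # shorthand=True (the default): a bare endpoint is expanded, so this
    # requests /_matrix/client/r0/sync
    request, channel = make_request("GET", "sync")

    # shorthand=False: the path is used verbatim, so tests can reach
    # non-client-API resources such as /consent
    request, channel = make_request("GET", "/consent?u=user", shorthand=False)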
@@ -153,7 +173,7 @@ def wait_until_result(clock, request, timeout=100):
         x += 1

         if x > timeout:
-            raise Exception("Timed out waiting for request to finish.")
+            raise TimedOutException("Timed out waiting for request to finish.")

         clock.advance(0.1)
@@ -544,8 +544,7 @@ class StateTestCase(unittest.TestCase):
             state_res_store=TestStateResolutionStore(event_map),
         )

-        self.assertTrue(state_d.called)
-        state_before = state_d.result
+        state_before = self.successResultOf(state_d)

         state_after = dict(state_before)
         if fake_event.state_key is not None:
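`successResultOf` comes from Twisted's trial `TestCase` and folds the two old lines into one: it fails the test if the Deferred has not fired (or fired with a Failure) and otherwise returns the result, instead of reaching into the private `state_d.result`. A small sketch of the pattern being replaced:

    from twisted.internet import defer

    d = defer.succeed({"state": "before"})
    # Old style, poking at Deferred internals:
    assert d.called
    state = d.result
    # New style, inside a trial TestCase:
    #     state = self.successResultOf(d)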
@@ -599,6 +598,103 @@ class LexicographicalTestCase(unittest.TestCase):
        self.assertEqual(["o", "l", "n", "m", "p"], res)


class SimpleParamStateTestCase(unittest.TestCase):
    def setUp(self):
        # We build up a simple DAG.

        event_map = {}

        create_event = FakeEvent(
            id="CREATE",
            sender=ALICE,
            type=EventTypes.Create,
            state_key="",
            content={"creator": ALICE},
        ).to_event([], [])
        event_map[create_event.event_id] = create_event

        alice_member = FakeEvent(
            id="IMA",
            sender=ALICE,
            type=EventTypes.Member,
            state_key=ALICE,
            content=MEMBERSHIP_CONTENT_JOIN,
        ).to_event([create_event.event_id], [create_event.event_id])
        event_map[alice_member.event_id] = alice_member

        join_rules = FakeEvent(
            id="IJR",
            sender=ALICE,
            type=EventTypes.JoinRules,
            state_key="",
            content={"join_rule": JoinRules.PUBLIC},
        ).to_event(
            auth_events=[create_event.event_id, alice_member.event_id],
            prev_events=[alice_member.event_id],
        )
        event_map[join_rules.event_id] = join_rules

        # Bob and Charlie join at the same time, so there is a fork
        bob_member = FakeEvent(
            id="IMB",
            sender=BOB,
            type=EventTypes.Member,
            state_key=BOB,
            content=MEMBERSHIP_CONTENT_JOIN,
        ).to_event(
            auth_events=[create_event.event_id, join_rules.event_id],
            prev_events=[join_rules.event_id],
        )
        event_map[bob_member.event_id] = bob_member

        charlie_member = FakeEvent(
            id="IMC",
            sender=CHARLIE,
            type=EventTypes.Member,
            state_key=CHARLIE,
            content=MEMBERSHIP_CONTENT_JOIN,
        ).to_event(
            auth_events=[create_event.event_id, join_rules.event_id],
            prev_events=[join_rules.event_id],
        )
        event_map[charlie_member.event_id] = charlie_member

        self.event_map = event_map
        self.create_event = create_event
        self.alice_member = alice_member
        self.join_rules = join_rules
        self.bob_member = bob_member
        self.charlie_member = charlie_member

        self.state_at_bob = {
            (e.type, e.state_key): e.event_id
            for e in [create_event, alice_member, join_rules, bob_member]
        }

        self.state_at_charlie = {
            (e.type, e.state_key): e.event_id
            for e in [create_event, alice_member, join_rules, charlie_member]
        }

        self.expected_combined_state = {
            (e.type, e.state_key): e.event_id
            for e in [create_event, alice_member, join_rules, bob_member, charlie_member]
        }

    def test_event_map_none(self):
        # Test that we correctly handle passing `None` as the event_map

        state_d = resolve_events_with_store(
            [self.state_at_bob, self.state_at_charlie],
            event_map=None,
            state_res_store=TestStateResolutionStore(self.event_map),
        )

        state = self.successResultOf(state_d)

        self.assert_dict(self.expected_combined_state, state)


def pairwise(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ..."
    a, b = itertools.tee(iterable)
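For orientation, the event DAG that `setUp` builds, sketched as a comment:

    #      CREATE
    #        |
    #       IMA   (Alice joins)
    #        |
    #       IJR   (public join rules)
    #       /  \
    #     IMB    IMC   (Bob and Charlie join concurrently -- the fork whose
    #                   two branches resolve_events_with_store must merge)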
@@ -657,7 +753,7 @@ class TestStateResolutionStore(object):
                 result.add(event_id)

                 event = self.event_map[event_id]
-                for aid, _ in event.auth_events:
+                for aid in event.auth_event_ids():
                     stack.append(aid)

         return list(result)
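This hunk swaps direct iteration over `event.auth_events` tuples for the `auth_event_ids()` accessor. A sketch of why, assuming the old representation stored `(event_id, hashes)` pairs: the accessor hides the tuple format, so the event wire format can change without touching every caller.

    class ExampleEvent(object):
        def __init__(self, auth_events):
            # e.g. [("$abc:server", {"sha256": "..."})] in the old format
            self.auth_events = auth_events

        def auth_event_ids(self):
            # Callers only ever want the IDs.
            return [eid for eid, _hashes in self.auth_events]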
@@ -112,7 +112,7 @@ class MessageAcceptTests(unittest.TestCase):
                 "origin_server_ts": 1,
                 "type": "m.room.message",
                 "origin": "test.serv",
-                "content": "hewwo?",
+                "content": {"body": "hewwo?"},
                 "auth_events": [],
                 "prev_events": [("two:test.serv", {}), (most_recent, {})],
             }
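The fix here is that an event's `content` must be a JSON object, not a bare string; `{"body": "hewwo?"}` is the minimal shape the test needs. A fully spec-shaped `m.room.message` content would also carry a `msgtype`:

    content = {"msgtype": "m.text", "body": "hewwo?"}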
@@ -0,0 +1,124 @@
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json

import six
from mock import Mock

from twisted.test.proto_helpers import MemoryReactorClock

from synapse.rest.client.v2_alpha.register import register_servlets
from synapse.util import Clock

from tests import unittest
from tests.server import make_request


class TermsTestCase(unittest.HomeserverTestCase):
    servlets = [register_servlets]

    def prepare(self, reactor, clock, hs):
        self.clock = MemoryReactorClock()
        self.hs_clock = Clock(self.clock)
        self.url = "/_matrix/client/r0/register"
        self.registration_handler = Mock()
        self.auth_handler = Mock()
        self.device_handler = Mock()
        hs.config.enable_registration = True
        hs.config.registrations_require_3pid = []
        hs.config.auto_join_rooms = []
        hs.config.enable_registration_captcha = False

    def test_ui_auth(self):
        self.hs.config.user_consent_at_registration = True
        self.hs.config.user_consent_policy_name = "My Cool Privacy Policy"
        self.hs.config.public_baseurl = "https://example.org"
        self.hs.config.user_consent_version = "1.0"

        # Do a UI auth request
        request, channel = self.make_request(b"POST", self.url, b"{}")
        self.render(request)

        self.assertEquals(channel.result["code"], b"401", channel.result)

        self.assertTrue(channel.json_body is not None)
        self.assertIsInstance(channel.json_body["session"], six.text_type)

        self.assertIsInstance(channel.json_body["flows"], list)
        for flow in channel.json_body["flows"]:
            self.assertIsInstance(flow["stages"], list)
            self.assertTrue(len(flow["stages"]) > 0)
            self.assertEquals(flow["stages"][-1], "m.login.terms")

        expected_params = {
            "m.login.terms": {
                "policies": {
                    "privacy_policy": {
                        "en": {
                            "name": "My Cool Privacy Policy",
                            "url": "https://example.org/_matrix/consent?v=1.0",
                        },
                        "version": "1.0",
                    },
                },
            },
        }
        self.assertIsInstance(channel.json_body["params"], dict)
        self.assertDictContainsSubset(channel.json_body["params"], expected_params)

        # We have to complete the dummy auth stage before completing the terms stage
        request_data = json.dumps(
            {
                "username": "kermit",
                "password": "monkey",
                "auth": {
                    "session": channel.json_body["session"],
                    "type": "m.login.dummy",
                },
            }
        )

        self.registration_handler.check_username = Mock(return_value=True)

        request, channel = make_request(b"POST", self.url, request_data)
        self.render(request)

        # We don't bother checking that the response is correct - we'll leave that to
        # other tests. We just want to make sure we're on the right path.
        self.assertEquals(channel.result["code"], b"401", channel.result)

        # Finish the UI auth for terms
        request_data = json.dumps(
            {
                "username": "kermit",
                "password": "monkey",
                "auth": {
                    "session": channel.json_body["session"],
                    "type": "m.login.terms",
                },
            }
        )
        request, channel = make_request(b"POST", self.url, request_data)
        self.render(request)

        # We're interested in getting a response that looks like a successful
        # registration, not so much that the details are exactly what we want.

        self.assertEquals(channel.result["code"], b"200", channel.result)

        self.assertTrue(channel.json_body is not None)
        self.assertIsInstance(channel.json_body["user_id"], six.text_type)
        self.assertIsInstance(channel.json_body["access_token"], six.text_type)
        self.assertIsInstance(channel.json_body["device_id"], six.text_type)
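The assertions above are probing the standard UI-auth challenge body. For reference, the 401 response the test expects has roughly this shape (values illustrative, mirroring `expected_params` above):

    ui_auth_challenge = {
        "session": "<opaque session id>",
        "flows": [{"stages": ["m.login.dummy", "m.login.terms"]}],
        "params": {
            "m.login.terms": {
                "policies": {
                    "privacy_policy": {
                        "en": {
                            "name": "My Cool Privacy Policy",
                            "url": "https://example.org/_matrix/consent?v=1.0",
                        },
                        "version": "1.0",
                    },
                },
            },
        },
    }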
@@ -258,7 +258,13 @@ class HomeserverTestCase(TestCase):
         """

     def make_request(
-        self, method, path, content=b"", access_token=None, request=SynapseRequest
+        self,
+        method,
+        path,
+        content=b"",
+        access_token=None,
+        request=SynapseRequest,
+        shorthand=True,
     ):
         """
         Create a SynapseRequest at the path using the method and containing the
@@ -270,6 +276,8 @@ class HomeserverTestCase(TestCase):
                 escaped UTF-8 & spaces and such).
             content (bytes or dict): The body of the request. JSON-encoded, if
                 a dict.
+            shorthand: Whether to try and be helpful and prefix the given URL
+                with the usual REST API path, if it doesn't contain it.

         Returns:
             A synapse.http.site.SynapseRequest.
@@ -277,7 +285,7 @@ class HomeserverTestCase(TestCase):
         if isinstance(content, dict):
             content = json.dumps(content).encode('utf8')

-        return make_request(method, path, content, access_token, request)
+        return make_request(method, path, content, access_token, request, shorthand)

     def render(self, request):
         """
@@ -123,6 +123,8 @@ def default_config(name):
     config.user_directory_search_all_users = False
     config.user_consent_server_notice_content = None
     config.block_events_without_consent_error = None
+    config.user_consent_at_registration = False
+    config.user_consent_policy_name = "Privacy Policy"
     config.media_storage_providers = []
     config.autocreate_auto_join_rooms = True
     config.auto_join_rooms = []
tox.ini
@@ -11,6 +11,20 @@ deps =
     # needed by some of the tests
     lxml

+    # cryptography 2.2 requires setuptools >= 18.5
+    #
+    # older versions of virtualenv (?) give us a virtualenv with the same
+    # version of setuptools as is installed on the system python (and tox runs
+    # virtualenv under python3, so we get the version of setuptools that is
+    # installed on that).
+    #
+    # anyway, make sure that we have a recent enough setuptools.
+    setuptools>=18.5
+
+    # we also need a semi-recent version of pip, because old ones fail to
+    # install the "enum34" dependency of cryptography.
+    pip>=10

 setenv =
     PYTHONDONTWRITEBYTECODE = no_byte_code