Merge remote-tracking branch 'origin/develop' into erikj/py312_asyncio

erikj/py312_asyncio
Erik Johnston 2023-12-04 14:12:08 +00:00
commit 85151a345d
76 changed files with 912 additions and 3072 deletions

View File

@ -22,7 +22,7 @@ jobs:
path: book
- name: 📤 Deploy to Netlify
uses: matrix-org/netlify-pr-preview@v2
uses: matrix-org/netlify-pr-preview@v3
with:
path: book
owner: ${{ github.event.workflow_run.head_repository.owner.login }}

View File

@ -6,6 +6,7 @@ on:
- docs/**
- book.toml
- .github/workflows/docs-pr.yaml
- scripts-dev/schema_versions.py
jobs:
pages:
@ -13,12 +14,22 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
- name: Setup mdbook
uses: peaceiris/actions-mdbook@adeb05db28a0c0004681db83893d56c0388ea9ea # v1.2.0
with:
mdbook-version: '0.4.17'
- name: Setup python
uses: actions/setup-python@v4
with:
python-version: "3.x"
- run: "pip install 'packaging>=20.0' 'GitPython>=3.1.20'"
- name: Build the documentation
# mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
# However, we're using docs/README.md for other purposes and need to pick a new page

View File

@ -51,12 +51,22 @@ jobs:
- pre
steps:
- uses: actions/checkout@v4
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
- name: Setup mdbook
uses: peaceiris/actions-mdbook@adeb05db28a0c0004681db83893d56c0388ea9ea # v1.2.0
with:
mdbook-version: '0.4.17'
- name: Setup python
uses: actions/setup-python@v4
with:
python-version: "3.x"
- run: "pip install 'packaging>=20.0' 'GitPython>=3.1.20'"
- name: Build the documentation
# mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
# However, we're using docs/README.md for other purposes and need to pick a new page

52
.github/workflows/fix_lint.yaml vendored Normal file
View File

@ -0,0 +1,52 @@
# A helper workflow to automatically fix up any linting errors on a PR. Must be
# triggered manually.
name: Attempt to automatically fix linting errors
on:
workflow_dispatch:
jobs:
fixup:
name: Fix up
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@master
with:
# We use nightly so that `fmt` correctly groups together imports, and
# clippy correctly fixes up the benchmarks.
toolchain: nightly-2022-12-01
components: rustfmt
- uses: Swatinem/rust-cache@v2
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
with:
install-project: "false"
- name: Import order (isort)
continue-on-error: true
run: poetry run isort .
- name: Code style (black)
continue-on-error: true
run: poetry run black .
- name: Semantic checks (ruff)
continue-on-error: true
run: poetry run ruff --fix .
- run: cargo clippy --all-features --fix -- -D warnings
continue-on-error: true
- run: cargo fmt
continue-on-error: true
- uses: stefanzweifel/git-auto-commit-action@v5
with:
commit_message: "Attempt to fix linting"

View File

@ -1,4 +1,4 @@
# Synapse 1.97.0rc1 (2023-11-21)
# Synapse 1.97.0 (2023-11-28)
Synapse will soon be forked by Element under an AGPLv3.0 licence (with CLA, for
proprietary dual licensing). You can read more about this here:
@ -10,6 +10,12 @@ The Matrix.org Foundation copy of the project will be archived. Any changes need
by server administrators will be communicated via our usual announcements channels,
but we are striving to make this as seamless as possible.
No significant changes since 1.97.0rc1.
# Synapse 1.97.0rc1 (2023-11-21)
### Features
- Add support for asynchronous uploads as defined by [MSC2246](https://github.com/matrix-org/matrix-spec-proposals/pull/2246). Contributed by @sumnerevans at @beeper. ([\#15503](https://github.com/matrix-org/synapse/issues/15503))

View File

@ -36,4 +36,7 @@ additional-css = [
"docs/website_files/indent-section-headers.css",
]
additional-js = ["docs/website_files/table-of-contents.js"]
theme = "docs/website_files/theme"
theme = "docs/website_files/theme"
[preprocessor.schema_versions]
command = "./scripts-dev/schema_versions.py"

View File

@ -0,0 +1 @@
Add an `on_user_login` ModuleAPI callback that allows modules to execute custom code after a user authenticates.

1
changelog.d/16522.misc Normal file
View File

@ -0,0 +1 @@
Clean up unused tables.

View File

@ -0,0 +1 @@
Support MSC4069: Inhibit profile propagation.

1
changelog.d/16661.doc Normal file
View File

@ -0,0 +1 @@
Add schema rollback information to documentation.

1
changelog.d/16667.misc Normal file
View File

@ -0,0 +1 @@
Reduce database load of pruning old `user_ips`.

1
changelog.d/16668.misc Normal file
View File

@ -0,0 +1 @@
Reduce DB load when the `forget_on_leave` setting is disabled.

1
changelog.d/16677.misc Normal file
View File

@ -0,0 +1 @@
Ignore the `encryption_enabled_by_default_for_room_type` setting when creating the server notices room, since the notices will be sent unencrypted anyway.

1
changelog.d/16695.doc Normal file
View File

@ -0,0 +1 @@
Fix poetry version typo in [contributors' guide](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html).

1
changelog.d/16697.misc Normal file
View File

@ -0,0 +1 @@
Remove old full schema dumps which are no longer used.

View File

@ -0,0 +1 @@
Add an autojoin setting for the notices room so users get joined directly instead of receiving an invite.

1
changelog.d/16700.doc Normal file
View File

@ -0,0 +1 @@
Switch the example UNIX socket paths to /run. Add HAProxy example configuration for UNIX sockets.

View File

@ -0,0 +1 @@
Follow redirects when downloading media over federation (per [MSC3860](https://github.com/matrix-org/matrix-spec-proposals/pull/3860)).

1
changelog.d/16702.misc Normal file
View File

@ -0,0 +1 @@
Raise poetry-core upper bound to <=1.8.1. This allows contributors to import Synapse after `poetry install`ing with Poetry 1.6 and above. Contributed by Mo Balaa.

1
changelog.d/16704.misc Normal file
View File

@ -0,0 +1 @@
Add a workflow to try to automatically fix up linting errors in a PR.

View File

@ -0,0 +1 @@
Synapse now declares support for Matrix v1.7, v1.8, and v1.9.

6
debian/changelog vendored
View File

@ -1,3 +1,9 @@
matrix-synapse-py3 (1.97.0) stable; urgency=medium
* New Synapse release 1.97.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 28 Nov 2023 14:08:58 +0000
matrix-synapse-py3 (1.97.0~rc1) stable; urgency=medium
* New Synapse release 1.97.0rc1.

View File

@ -66,7 +66,7 @@ Of their installation methods, we recommend
```shell
pip install --user pipx
pipx install poetry==1.5.2 # Problems with Poetry 1.6, see https://github.com/matrix-org/synapse/issues/16147
pipx install poetry==1.5.1 # Problems with Poetry 1.6, see https://github.com/matrix-org/synapse/issues/16147
```
but see poetry's [installation instructions](https://python-poetry.org/docs/#installation)

View File

@ -42,3 +42,16 @@ operations to keep track of them. (e.g. add them to a database table). The user
represented by their Matrix user ID.
If multiple modules implement this callback, Synapse runs them all in order.
### `on_user_login`
_First introduced in Synapse v1.98.0_
```python
async def on_user_login(user_id: str, auth_provider_type: Optional[str], auth_provider_id: Optional[str]) -> None
```
Called after a user successfully logs in or registers, for cases when a module needs
to perform extra operations after auth. The user is represented by their Matrix user ID.
If multiple modules implement this callback, Synapse runs them all in order.
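For illustration, a module might register the callback like so. This is a minimal sketch: the module class and its logging are hypothetical, while `ModuleApi.register_account_validity_callbacks` is the existing registration method that gains the `on_user_login` keyword argument in this change.
```python
import logging
from typing import Optional

from synapse.module_api import ModuleApi

logger = logging.getLogger(__name__)


class ExampleAuditModule:
    """A hypothetical module that records every successful login."""

    def __init__(self, config: dict, api: ModuleApi):
        # Ask Synapse to invoke our coroutine after each successful login.
        api.register_account_validity_callbacks(
            on_user_login=self.on_user_login,
        )

    async def on_user_login(
        self,
        user_id: str,
        auth_provider_type: Optional[str],
        auth_provider_id: Optional[str],
    ) -> None:
        logger.info(
            "User %s logged in (type=%s, provider=%s)",
            user_id,
            auth_provider_type,
            auth_provider_id,
        )
```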

View File

@ -181,7 +181,11 @@ frontend matrix-federation
backend matrix
server matrix 127.0.0.1:8008
```
Example configuration, if using a UNIX socket. The configuration lines regarding the frontends do not need to be modified.
```
backend matrix
server matrix unix@/run/synapse/main_public.sock
```
[Delegation](delegate.md) example:
```

View File

@ -46,6 +46,7 @@ server_notices:
system_mxid_display_name: "Server Notices"
system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
room_name: "Server Notices"
auto_join: true
```
The only compulsory setting is `system_mxid_localpart`, which defines the user
@ -55,6 +56,8 @@ room which will be created.
`system_mxid_display_name` and `system_mxid_avatar_url` can be used to set the
displayname and avatar of the Server Notices user.
`auto_join` will automatically join users to the notices room instead of sending them an invite.
## Sending notices
To send server notices to users you can use the

View File

@ -88,6 +88,15 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
Generally Synapse database schemas are compatible across multiple versions, but once
a version of Synapse is deployed you may not be able to roll back automatically.
The following table gives the version ranges and the earliest version they can
be rolled back to. E.g. Synapse versions v1.58.0 through v1.61.1 can be rolled
back safely to v1.57.0, but starting with v1.62.0 it is only safe to roll back
to v1.61.0.
<!-- REPLACE_WITH_SCHEMA_VERSIONS -->
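The table is generated at documentation build time by `scripts-dev/schema_versions.py`, which is wired up as an mdbook preprocessor in `book.toml`. Per the script's own usage notes, the table can also be previewed locally:
```shell
./scripts-dev/schema_versions.py dump
```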
# Upgrading to v1.93.0
## Minimum supported Rust version

View File

@ -566,7 +566,7 @@ listeners:
# Note that x_forwarded will default to true, when using a UNIX socket. Please see
# https://matrix-org.github.io/synapse/latest/reverse_proxy.html.
#
- path: /var/run/synapse/main_public.sock
- path: /run/synapse/main_public.sock
type: http
resources:
- names: [client, federation]
@ -3815,6 +3815,8 @@ Sub-options for this setting include:
* `system_mxid_display_name`: set the display name of the "notices" user
* `system_mxid_avatar_url`: set the avatar for the "notices" user
* `room_name`: set the room name of the server notices room
* `auto_join`: boolean. If true, the user will be automatically joined to the room instead of being invited.
Defaults to false. _Added in Synapse 1.98.0._
Example configuration:
```yaml
@ -3823,6 +3825,7 @@ server_notices:
system_mxid_display_name: "Server Notices"
system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
room_name: "Server Notices"
auto_join: true
```
---
### `enable_room_list_search`
@ -4215,9 +4218,9 @@ Example configuration(#2, for UNIX sockets):
```yaml
instance_map:
main:
path: /var/run/synapse/main_replication.sock
path: /run/synapse/main_replication.sock
worker1:
path: /var/run/synapse/worker1_replication.sock
path: /run/synapse/worker1_replication.sock
```
---
### `stream_writers`
@ -4403,13 +4406,13 @@ Example configuration(#2, using UNIX sockets with a `replication` listener):
```yaml
worker_listeners:
- type: http
path: /var/run/synapse/worker_public.sock
resources:
- names: [client, federation]
- type: http
path: /var/run/synapse/worker_replication.sock
path: /run/synapse/worker_replication.sock
resources:
- names: [replication]
- type: http
path: /run/synapse/worker_public.sock
resources:
- names: [client, federation]
```
---
### `worker_manhole`

84
poetry.lock generated
View File

@ -454,34 +454,34 @@ files = [
[[package]]
name = "cryptography"
version = "41.0.5"
version = "41.0.7"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = ">=3.7"
files = [
{file = "cryptography-41.0.5-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:da6a0ff8f1016ccc7477e6339e1d50ce5f59b88905585f77193ebd5068f1e797"},
{file = "cryptography-41.0.5-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:b948e09fe5fb18517d99994184854ebd50b57248736fd4c720ad540560174ec5"},
{file = "cryptography-41.0.5-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d38e6031e113b7421db1de0c1b1f7739564a88f1684c6b89234fbf6c11b75147"},
{file = "cryptography-41.0.5-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e270c04f4d9b5671ebcc792b3ba5d4488bf7c42c3c241a3748e2599776f29696"},
{file = "cryptography-41.0.5-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ec3b055ff8f1dce8e6ef28f626e0972981475173d7973d63f271b29c8a2897da"},
{file = "cryptography-41.0.5-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:7d208c21e47940369accfc9e85f0de7693d9a5d843c2509b3846b2db170dfd20"},
{file = "cryptography-41.0.5-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:8254962e6ba1f4d2090c44daf50a547cd5f0bf446dc658a8e5f8156cae0d8548"},
{file = "cryptography-41.0.5-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:a48e74dad1fb349f3dc1d449ed88e0017d792997a7ad2ec9587ed17405667e6d"},
{file = "cryptography-41.0.5-cp37-abi3-win32.whl", hash = "sha256:d3977f0e276f6f5bf245c403156673db103283266601405376f075c849a0b936"},
{file = "cryptography-41.0.5-cp37-abi3-win_amd64.whl", hash = "sha256:73801ac9736741f220e20435f84ecec75ed70eda90f781a148f1bad546963d81"},
{file = "cryptography-41.0.5-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3be3ca726e1572517d2bef99a818378bbcf7d7799d5372a46c79c29eb8d166c1"},
{file = "cryptography-41.0.5-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:e886098619d3815e0ad5790c973afeee2c0e6e04b4da90b88e6bd06e2a0b1b72"},
{file = "cryptography-41.0.5-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:573eb7128cbca75f9157dcde974781209463ce56b5804983e11a1c462f0f4e88"},
{file = "cryptography-41.0.5-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:0c327cac00f082013c7c9fb6c46b7cc9fa3c288ca702c74773968173bda421bf"},
{file = "cryptography-41.0.5-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:227ec057cd32a41c6651701abc0328135e472ed450f47c2766f23267b792a88e"},
{file = "cryptography-41.0.5-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:22892cc830d8b2c89ea60148227631bb96a7da0c1b722f2aac8824b1b7c0b6b8"},
{file = "cryptography-41.0.5-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:5a70187954ba7292c7876734183e810b728b4f3965fbe571421cb2434d279179"},
{file = "cryptography-41.0.5-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:88417bff20162f635f24f849ab182b092697922088b477a7abd6664ddd82291d"},
{file = "cryptography-41.0.5-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c707f7afd813478e2019ae32a7c49cd932dd60ab2d2a93e796f68236b7e1fbf1"},
{file = "cryptography-41.0.5-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:580afc7b7216deeb87a098ef0674d6ee34ab55993140838b14c9b83312b37b86"},
{file = "cryptography-41.0.5-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:fba1e91467c65fe64a82c689dc6cf58151158993b13eb7a7f3f4b7f395636723"},
{file = "cryptography-41.0.5-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:0d2a6a598847c46e3e321a7aef8af1436f11c27f1254933746304ff014664d84"},
{file = "cryptography-41.0.5.tar.gz", hash = "sha256:392cb88b597247177172e02da6b7a63deeff1937fa6fec3bbf902ebd75d97ec7"},
{file = "cryptography-41.0.7-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:3c78451b78313fa81607fa1b3f1ae0a5ddd8014c38a02d9db0616133987b9cdf"},
{file = "cryptography-41.0.7-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:928258ba5d6f8ae644e764d0f996d61a8777559f72dfeb2eea7e2fe0ad6e782d"},
{file = "cryptography-41.0.7-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a1b41bc97f1ad230a41657d9155113c7521953869ae57ac39ac7f1bb471469a"},
{file = "cryptography-41.0.7-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:841df4caa01008bad253bce2a6f7b47f86dc9f08df4b433c404def869f590a15"},
{file = "cryptography-41.0.7-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5429ec739a29df2e29e15d082f1d9ad683701f0ec7709ca479b3ff2708dae65a"},
{file = "cryptography-41.0.7-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:43f2552a2378b44869fe8827aa19e69512e3245a219104438692385b0ee119d1"},
{file = "cryptography-41.0.7-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:af03b32695b24d85a75d40e1ba39ffe7db7ffcb099fe507b39fd41a565f1b157"},
{file = "cryptography-41.0.7-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:49f0805fc0b2ac8d4882dd52f4a3b935b210935d500b6b805f321addc8177406"},
{file = "cryptography-41.0.7-cp37-abi3-win32.whl", hash = "sha256:f983596065a18a2183e7f79ab3fd4c475205b839e02cbc0efbbf9666c4b3083d"},
{file = "cryptography-41.0.7-cp37-abi3-win_amd64.whl", hash = "sha256:90452ba79b8788fa380dfb587cca692976ef4e757b194b093d845e8d99f612f2"},
{file = "cryptography-41.0.7-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:079b85658ea2f59c4f43b70f8119a52414cdb7be34da5d019a77bf96d473b960"},
{file = "cryptography-41.0.7-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:b640981bf64a3e978a56167594a0e97db71c89a479da8e175d8bb5be5178c003"},
{file = "cryptography-41.0.7-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e3114da6d7f95d2dee7d3f4eec16dacff819740bbab931aff8648cb13c5ff5e7"},
{file = "cryptography-41.0.7-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:d5ec85080cce7b0513cfd233914eb8b7bbd0633f1d1703aa28d1dd5a72f678ec"},
{file = "cryptography-41.0.7-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:7a698cb1dac82c35fcf8fe3417a3aaba97de16a01ac914b89a0889d364d2f6be"},
{file = "cryptography-41.0.7-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:37a138589b12069efb424220bf78eac59ca68b95696fc622b6ccc1c0a197204a"},
{file = "cryptography-41.0.7-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:68a2dec79deebc5d26d617bfdf6e8aab065a4f34934b22d3b5010df3ba36612c"},
{file = "cryptography-41.0.7-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:09616eeaef406f99046553b8a40fbf8b1e70795a91885ba4c96a70793de5504a"},
{file = "cryptography-41.0.7-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:48a0476626da912a44cc078f9893f292f0b3e4c739caf289268168d8f4702a39"},
{file = "cryptography-41.0.7-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c7f3201ec47d5207841402594f1d7950879ef890c0c495052fa62f58283fde1a"},
{file = "cryptography-41.0.7-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c5ca78485a255e03c32b513f8c2bc39fedb7f5c5f8535545bdc223a03b24f248"},
{file = "cryptography-41.0.7-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:d6c391c021ab1f7a82da5d8d0b3cee2f4b2c455ec86c8aebbc84837a631ff309"},
{file = "cryptography-41.0.7.tar.gz", hash = "sha256:13f93ce9bea8016c253b34afc6bd6a75993e5c40672ed5405a9c832f0d4a00bc"},
]
[package.dependencies]
@ -712,13 +712,13 @@ idna = ">=2.5"
[[package]]
name = "idna"
version = "3.4"
version = "3.6"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.5"
files = [
{file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
{file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
{file = "idna-3.6-py3-none-any.whl", hash = "sha256:c05567e9c24a6b9faaa835c4821bad0590fbb9d5779e7caa6e1cc4978e7eb24f"},
{file = "idna-3.6.tar.gz", hash = "sha256:9ecdbbd083b06798ae1e86adcbfe8ab1479cf864e4ee30fe4e46a003d12491ca"},
]
[[package]]
@ -1611,13 +1611,13 @@ files = [
[[package]]
name = "phonenumbers"
version = "8.13.23"
version = "8.13.26"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.23-py2.py3-none-any.whl", hash = "sha256:34d6cb279dd4a64714e324c71350f96e5bda3237be28d11b4c555c44701544cd"},
{file = "phonenumbers-8.13.23.tar.gz", hash = "sha256:869e44fcaaf276eca6b953a401e2b27d57461f3a18a66cf5f13377e7bb0e228c"},
{file = "phonenumbers-8.13.26-py2.py3-none-any.whl", hash = "sha256:b2308c9c5750b8f10dd30d94547afd66bce60ac5e93aff227f95740557f32752"},
{file = "phonenumbers-8.13.26.tar.gz", hash = "sha256:937d70aeceb317f5831dfec28de855a60260ef4a9d551964bec8e7a7d0cf81cd"},
]
[[package]]
@ -1729,13 +1729,13 @@ test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytes
[[package]]
name = "prometheus-client"
version = "0.18.0"
version = "0.19.0"
description = "Python client for the Prometheus monitoring system."
optional = false
python-versions = ">=3.8"
files = [
{file = "prometheus_client-0.18.0-py3-none-any.whl", hash = "sha256:8de3ae2755f890826f4b6479e5571d4f74ac17a81345fe69a6778fdb92579184"},
{file = "prometheus_client-0.18.0.tar.gz", hash = "sha256:35f7a8c22139e2bb7ca5a698e92d38145bc8dc74c1c0bf56f25cca886a764e17"},
{file = "prometheus_client-0.19.0-py3-none-any.whl", hash = "sha256:c88b1e6ecf6b41cd8fb5731c7ae919bf66df6ec6fafa555cd6c0e16ca169ae92"},
{file = "prometheus_client-0.19.0.tar.gz", hash = "sha256:4585b0d1223148c27a225b10dbec5ae9bc4c81a99a3fa80774fa6209935324e1"},
]
[package.extras]
@ -2691,17 +2691,17 @@ test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
[[package]]
name = "sphinx-autodoc2"
version = "0.4.2"
version = "0.5.0"
description = "Analyse a python project and create documentation for it."
optional = false
python-versions = ">=3.8"
files = [
{file = "sphinx-autodoc2-0.4.2.tar.gz", hash = "sha256:06da226a25a4339e173b34bb0e590e0ba9b4570b414796140aee1939d09acb3a"},
{file = "sphinx_autodoc2-0.4.2-py3-none-any.whl", hash = "sha256:00835ba8c980b9c510ea794c3e2060e5a254a74c6c22badc9bfd3642dc1034b4"},
{file = "sphinx_autodoc2-0.5.0-py3-none-any.whl", hash = "sha256:e867013b1512f9d6d7e6f6799f8b537d6884462acd118ef361f3f619a60b5c9e"},
{file = "sphinx_autodoc2-0.5.0.tar.gz", hash = "sha256:7d76044aa81d6af74447080182b6868c7eb066874edc835e8ddf810735b6565a"},
]
[package.dependencies]
astroid = ">=2.7"
astroid = ">=2.7,<4"
tomli = {version = "*", markers = "python_version < \"3.11\""}
typing-extensions = "*"
@ -2709,7 +2709,7 @@ typing-extensions = "*"
cli = ["typer[all]"]
docs = ["furo", "myst-parser", "sphinx (>=4.0.0)"]
sphinx = ["sphinx (>=4.0.0)"]
testing = ["pytest", "pytest-cov", "pytest-regressions", "sphinx (>=4.0.0)"]
testing = ["pytest", "pytest-cov", "pytest-regressions", "sphinx (>=4.0.0,<7)"]
[[package]]
name = "sphinx-basic-ng"
@ -3054,13 +3054,13 @@ files = [
[[package]]
name = "types-jsonschema"
version = "4.19.0.4"
version = "4.20.0.0"
description = "Typing stubs for jsonschema"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-jsonschema-4.19.0.4.tar.gz", hash = "sha256:994feb6632818259c4b5dbd733867824cb475029a6abc2c2b5201a2268b6e7d2"},
{file = "types_jsonschema-4.19.0.4-py3-none-any.whl", hash = "sha256:b73c3f4ba3cd8108602d1198a438e2698d5eb6b9db206ed89a33e24729b0abe7"},
{file = "types-jsonschema-4.20.0.0.tar.gz", hash = "sha256:0de1032d243f1d3dba8b745ad84efe8c1af71665a9deb1827636ac535dcb79c1"},
{file = "types_jsonschema-4.20.0.0-py3-none-any.whl", hash = "sha256:e6d5df18aaca4412f0aae246a294761a92040e93d7bc840f002b7329a8b72d26"},
]
[package.dependencies]
@ -3476,4 +3476,4 @@ user-search = ["pyicu"]
[metadata]
lock-version = "2.0"
python-versions = "^3.8.0"
content-hash = "2f289275f52d181bc0168528c6ce820c9147f67ab11a035098865d1ed8334aef"
content-hash = "64b56c01cea86e00d35b3105a2253732ffbf30fa3c4c01ccf1a03bffd1796ab5"

View File

@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.97.0rc1"
version = "1.97.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@ -372,7 +372,7 @@ optional = true
[tool.poetry.group.dev-docs.dependencies]
sphinx = {version = "^6.1", python = "^3.8"}
sphinx-autodoc2 = {version = "^0.4.2", python = "^3.8"}
sphinx-autodoc2 = {version = ">=0.4.2,<0.6.0", python = "^3.8"}
myst-parser = {version = "^1.0.0", python = "^3.8"}
furo = ">=2022.12.7,<2024.0.0"
@ -384,7 +384,7 @@ furo = ">=2022.12.7,<2024.0.0"
# runtime errors caused by build system changes.
# We are happy to raise these upper bounds upon request,
# provided we check that it's safe to do so (i.e. that CI passes).
requires = ["poetry-core>=1.1.0,<=1.7.0", "setuptools_rust>=1.3,<=1.8.1"]
requires = ["poetry-core>=1.1.0,<=1.8.1", "setuptools_rust>=1.3,<=1.8.1"]
build-backend = "poetry.core.masonry.api"

View File

@ -296,8 +296,7 @@ impl<'source> FromPyObject<'source> for JsonValue {
match l.iter().map(SimpleJsonValue::extract).collect() {
Ok(a) => Ok(JsonValue::Array(a)),
Err(e) => Err(PyTypeError::new_err(format!(
"Can't convert to JsonValue::Array: {}",
e
"Can't convert to JsonValue::Array: {e}"
))),
}
} else if let Ok(v) = SimpleJsonValue::extract(ob) {

181
scripts-dev/schema_versions.py Executable file
View File

@ -0,0 +1,181 @@
#!/usr/bin/env python
# Copyright 2023 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A script to calculate which versions of Synapse have backwards-compatible
database schemas. It creates a Markdown table of Synapse versions and the earliest
compatible version.
It is compatible with the mdbook protocol for preprocessors (see
https://rust-lang.github.io/mdBook/for_developers/preprocessors.html#implementing-a-preprocessor-with-a-different-language):
Exit 0 to denote support for all renderers:
./scripts-dev/schema_versions.py supports <mdbook renderer>
Parse a JSON list from stdin and add the table to the proper documentation page:
./scripts-dev/schema_versions.py
Additionally, the script supports dumping the table to stdout for debugging:
./scripts-dev/schema_versions.py dump
"""
import io
import json
import sys
from collections import defaultdict
from typing import Any, Dict, Iterator, Optional, Tuple
import git
from packaging import version
# The schema version has moved around over the years.
SCHEMA_VERSION_FILES = (
"synapse/storage/schema/__init__.py",
"synapse/storage/prepare_database.py",
"synapse/storage/__init__.py",
"synapse/app/homeserver.py",
)
# Skip versions of Synapse < v1.0, as they're old and essentially not
# compatible with today's federation.
OLDEST_SHOWN_VERSION = version.parse("v1.0")
def get_schema_versions(tag: git.Tag) -> Tuple[Optional[int], Optional[int]]:
"""Get the schema and schema compat versions for a tag."""
schema_version = None
schema_compat_version = None
for file in SCHEMA_VERSION_FILES:
try:
schema_file = tag.commit.tree / file
except KeyError:
continue
# We (usually) can't execute the code since it might have unknown imports.
if file != "synapse/storage/schema/__init__.py":
with io.BytesIO(schema_file.data_stream.read()) as f:
for line in f.readlines():
if line.startswith(b"SCHEMA_VERSION"):
schema_version = int(line.split()[2])
# Bail early.
break
else:
# SCHEMA_COMPAT_VERSION is sometimes spread across multiple lines, so the easiest
# thing to do is exec the code. Luckily it has only ever existed in
# a file which imports nothing else from Synapse.
locals: Dict[str, Any] = {}
exec(schema_file.data_stream.read().decode("utf-8"), {}, locals)
schema_version = locals["SCHEMA_VERSION"]
schema_compat_version = locals.get("SCHEMA_COMPAT_VERSION")
return schema_version, schema_compat_version
def get_tags(repo: git.Repo) -> Iterator[git.Tag]:
"""Return an iterator of tags sorted by version."""
tags = []
for tag in repo.tags:
# All "real" Synapse tags are of the form vX.Y.Z.
if not tag.name.startswith("v"):
continue
# There's a weird tag from the initial react UI.
if tag.name == "v0.1":
continue
try:
tag_version = version.parse(tag.name)
except version.InvalidVersion:
# Skip invalid versions.
continue
# Skip pre- and post-release versions.
if tag_version.is_prerelease or tag_version.is_postrelease or tag_version.local:
continue
# Skip old versions.
if tag_version < OLDEST_SHOWN_VERSION:
continue
tags.append((tag_version, tag))
# Sort based on the version number (not lexically).
return (tag for _, tag in sorted(tags, key=lambda t: t[0]))
def calculate_version_chart() -> str:
repo = git.Repo(path=".")
# Map of schema version -> Synapse versions which are at that schema version.
schema_versions = defaultdict(list)
# Map of schema version -> Synapse versions which are compatible with that
# schema version.
schema_compat_versions = defaultdict(list)
# Find ranges of versions which are compatible with a schema version.
#
# There are two modes of operation:
#
# 1. Pre-schema_compat_version (i.e. schema_compat_version of None), then
# Synapse is compatible up/downgrading to a version with
# schema_version >= its current version.
#
# 2. Post-schema_compat_version (i.e. schema_compat_version is *not* None),
# then Synapse is compatible up/downgrading to a version with
# schema version >= schema_compat_version.
#
# This is more generous and avoids versions that cannot be rolled back.
#
# See https://github.com/matrix-org/synapse/pull/9933 which was included in v1.37.0.
for tag in get_tags(repo):
schema_version, schema_compat_version = get_schema_versions(tag)
# If a schema compat version is given, prefer that over the schema version.
schema_versions[schema_version].append(tag.name)
schema_compat_versions[schema_compat_version or schema_version].append(tag.name)
# Generate a table which maps the latest Synapse version compatible with each
# schema version.
result = f"| {'Versions': ^19} | Compatible version |\n"
result += f"|{'-' * (19 + 2)}|{'-' * (18 + 2)}|\n"
for schema_version, synapse_versions in schema_compat_versions.items():
result += f"| {synapse_versions[0] + ' ' + synapse_versions[-1]: ^19} | {schema_versions[schema_version][0]: ^18} |\n"
return result
if __name__ == "__main__":
if len(sys.argv) == 3 and sys.argv[1] == "supports":
# We don't care which renderer is being used (the second argument).
sys.exit(0)
elif len(sys.argv) == 2 and sys.argv[1] == "dump":
print(calculate_version_chart())
else:
# Expect JSON data on stdin.
context, book = json.load(sys.stdin)
for section in book["sections"]:
if "Chapter" in section and section["Chapter"]["path"] == "upgrade.md":
section["Chapter"]["content"] = section["Chapter"]["content"].replace(
"<!-- REPLACE_WITH_SCHEMA_VERSIONS -->", calculate_version_chart()
)
# Print the result back out to stdout.
print(json.dumps(book))

View File

@ -419,3 +419,7 @@ class ExperimentalConfig(Config):
self.msc4028_push_encrypted_events = experimental.get(
"msc4028_push_encrypted_events", False
)
self.msc4069_profile_inhibit_propagation = experimental.get(
"msc4069_profile_inhibit_propagation", False
)

View File

@ -48,6 +48,7 @@ class ServerNoticesConfig(Config):
self.server_notices_mxid_display_name: Optional[str] = None
self.server_notices_mxid_avatar_url: Optional[str] = None
self.server_notices_room_name: Optional[str] = None
self.server_notices_auto_join: bool = False
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
c = config.get("server_notices")
@ -62,3 +63,4 @@ class ServerNoticesConfig(Config):
self.server_notices_mxid_avatar_url = c.get("system_mxid_avatar_url", None)
# todo: i18n
self.server_notices_room_name = c.get("room_name", "Server Notices")
self.server_notices_auto_join = c.get("auto_join", False)

View File

@ -21,6 +21,7 @@ from typing import (
TYPE_CHECKING,
AbstractSet,
Awaitable,
BinaryIO,
Callable,
Collection,
Container,
@ -1862,6 +1863,43 @@ class FederationClient(FederationBase):
return filtered_statuses, filtered_failures
async def download_media(
self,
destination: str,
media_id: str,
output_stream: BinaryIO,
max_size: int,
max_timeout_ms: int,
) -> Tuple[int, Dict[bytes, List[bytes]]]:
try:
return await self.transport_layer.download_media_v3(
destination,
media_id,
output_stream=output_stream,
max_size=max_size,
max_timeout_ms=max_timeout_ms,
)
except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint,
# fallback to the r0 endpoint. Otherwise, consider it a legitimate error
# and raise.
if not is_unknown_endpoint(e):
raise
logger.debug(
"Couldn't download media %s/%s with the v3 API, falling back to the r0 API",
destination,
media_id,
)
return await self.transport_layer.download_media_r0(
destination,
media_id,
output_stream=output_stream,
max_size=max_size,
max_timeout_ms=max_timeout_ms,
)
@attr.s(frozen=True, slots=True, auto_attribs=True)
class TimestampToEventResponse:

View File

@ -18,6 +18,7 @@ import urllib
from typing import (
TYPE_CHECKING,
Any,
BinaryIO,
Callable,
Collection,
Dict,
@ -804,6 +805,58 @@ class TransportLayerClient:
destination=destination, path=path, data={"user_ids": user_ids}
)
async def download_media_r0(
self,
destination: str,
media_id: str,
output_stream: BinaryIO,
max_size: int,
max_timeout_ms: int,
) -> Tuple[int, Dict[bytes, List[bytes]]]:
path = f"/_matrix/media/r0/download/{destination}/{media_id}"
return await self.client.get_file(
destination,
path,
output_stream=output_stream,
max_size=max_size,
args={
# tell the remote server to 404 if it doesn't
# recognise the server_name, to make sure we don't
# end up with a routing loop.
"allow_remote": "false",
"timeout_ms": str(max_timeout_ms),
},
)
async def download_media_v3(
self,
destination: str,
media_id: str,
output_stream: BinaryIO,
max_size: int,
max_timeout_ms: int,
) -> Tuple[int, Dict[bytes, List[bytes]]]:
path = f"/_matrix/media/v3/download/{destination}/{media_id}"
return await self.client.get_file(
destination,
path,
output_stream=output_stream,
max_size=max_size,
args={
# tell the remote server to 404 if it doesn't
# recognise the server_name, to make sure we don't
# end up with a routing loop.
"allow_remote": "false",
"timeout_ms": str(max_timeout_ms),
# Matrix 1.7 allows for this to redirect to another URL, this should
# just be ignored for an old homeserver, so always provide it.
"allow_redirect": "true",
},
follow_redirects=True,
)
def _create_path(federation_prefix: str, path: str, *args: str) -> str:
"""

View File

@ -98,6 +98,22 @@ class AccountValidityHandler:
for callback in self._module_api_callbacks.on_user_registration_callbacks:
await callback(user_id)
async def on_user_login(
self,
user_id: str,
auth_provider_type: Optional[str],
auth_provider_id: Optional[str],
) -> None:
"""Tell third-party modules about a user logins.
Args:
user_id: The mxID of the user.
auth_provider_type: The type of login.
auth_provider_id: The ID of the auth provider.
"""
for callback in self._module_api_callbacks.on_user_login_callbacks:
await callback(user_id, auth_provider_type, auth_provider_id)
@wrap_as_background_process("send_renewals")
async def _send_renewal_emails(self) -> None:
"""Gets the list of users whose account is expiring in the amount of time

View File

@ -212,6 +212,7 @@ class AuthHandler:
self._password_enabled_for_reauth = hs.config.auth.password_enabled_for_reauth
self._password_localdb_enabled = hs.config.auth.password_localdb_enabled
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
self._account_validity_handler = hs.get_account_validity_handler()
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.
@ -1783,6 +1784,13 @@ class AuthHandler:
client_redirect_url, "loginToken", login_token
)
# Run post-login module callback handlers
await self._account_validity_handler.on_user_login(
user_id=registered_user_id,
auth_provider_type=LoginType.SSO,
auth_provider_id=auth_provider_id,
)
# if the client is whitelisted, we can redirect straight to it
if client_redirect_url.startswith(self._whitelisted_sso_clients):
request.redirect(redirect_url)

View File

@ -693,13 +693,9 @@ class EventCreationHandler:
if require_consent and not is_exempt:
await self.assert_accepted_privacy_policy(requester)
# Save the access token ID, the device ID and the transaction ID in the event
# internal metadata. This is useful to determine if we should echo the
# transaction_id in events.
# Save the the device ID and the transaction ID in the event internal metadata.
# This is useful to determine if we should echo the transaction_id in events.
# See `synapse.events.utils.EventClientSerializer.serialize_event`
if requester.access_token_id is not None:
builder.internal_metadata.token_id = requester.access_token_id
if requester.device_id is not None:
builder.internal_metadata.device_id = requester.device_id

View File

@ -129,6 +129,7 @@ class ProfileHandler:
new_displayname: str,
by_admin: bool = False,
deactivation: bool = False,
propagate: bool = True,
) -> None:
"""Set the displayname of a user
@ -138,6 +139,7 @@ class ProfileHandler:
new_displayname: The displayname to give this user.
by_admin: Whether this change was made by an administrator.
deactivation: Whether this change was made while deactivating the user.
propagate: Whether this change also applies to the user's membership events.
"""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this homeserver")
@ -188,7 +190,8 @@ class ProfileHandler:
target_user.to_string(), profile, by_admin, deactivation
)
await self._update_join_states(requester, target_user)
if propagate:
await self._update_join_states(requester, target_user)
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
if self.hs.is_mine(target_user):
@ -221,6 +224,7 @@ class ProfileHandler:
new_avatar_url: str,
by_admin: bool = False,
deactivation: bool = False,
propagate: bool = True,
) -> None:
"""Set a new avatar URL for a user.
@ -230,6 +234,7 @@ class ProfileHandler:
new_avatar_url: The avatar URL to give this user.
by_admin: Whether this change was made by an administrator.
deactivation: Whether this change was made while deactivating the user.
propagate: Whether this change also applies to the user's membership events.
"""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this homeserver")
@ -278,7 +283,8 @@ class ProfileHandler:
target_user.to_string(), profile, by_admin, deactivation
)
await self._update_join_states(requester, target_user)
if propagate:
await self._update_join_states(requester, target_user)
@cached()
async def check_avatar_size_and_mime_type(self, mxc: str) -> bool:

View File

@ -698,6 +698,7 @@ class RoomCreationHandler:
config: JsonDict,
ratelimit: bool = True,
creator_join_profile: Optional[JsonDict] = None,
ignore_forced_encryption: bool = False,
) -> Tuple[str, Optional[RoomAlias], int]:
"""Creates a new room.
@ -714,6 +715,8 @@ class RoomCreationHandler:
derived from the user's profile. If set, should contain the
values to go in the body of the 'join' event (typically
`avatar_url` and/or `displayname`.
ignore_forced_encryption:
Ignore encryption forced by `encryption_enabled_by_default_for_room_type` setting.
Returns:
A 3-tuple containing:
@ -1015,6 +1018,7 @@ class RoomCreationHandler:
room_alias: Optional[RoomAlias] = None,
power_level_content_override: Optional[JsonDict] = None,
creator_join_profile: Optional[JsonDict] = None,
ignore_forced_encryption: bool = False,
) -> Tuple[int, str, int]:
"""Sends the initial events into a new room. Sends the room creation, membership,
and power level events into the room sequentially, then creates and batches up the
@ -1049,6 +1053,8 @@ class RoomCreationHandler:
creator_join_profile:
Set to override the displayname and avatar for the creating
user in this room.
ignore_forced_encryption:
Ignore encryption forced by `encryption_enabled_by_default_for_room_type` setting.
Returns:
A tuple containing the stream ID, event ID and depth of the last
@ -1251,7 +1257,7 @@ class RoomCreationHandler:
)
events_to_send.append((event, context))
if config["encrypted"]:
if config["encrypted"] and not ignore_forced_encryption:
encryption_event, encryption_context = await create_event(
EventTypes.RoomEncryption,
{"algorithm": RoomEncryptionAlgorithms.DEFAULT},

View File

@ -2111,9 +2111,14 @@ class RoomForgetterHandler(StateDeltasHandler):
self.pos = room_max_stream_ordering
if not self._hs.config.room.forget_on_leave:
# Update the processing position, so that if the server admin turns the
# feature on at a later date, we don't decide to forget every room that
# has ever been left in the past.
# Update the processing position, so that if the server admin turns
# the feature on at a later date, we don't decide to forget every
# room that has ever been left in the past.
#
# We wait for a short time so that we don't tight-loop just
# keeping the table up to date.
await self._clock.sleep(0.5)
self.pos = self._store.get_room_max_stream_ordering()
await self._store.update_room_forgetter_stream_pos(self.pos)
return

View File

@ -153,12 +153,18 @@ class MatrixFederationRequest:
"""Query arguments.
"""
txn_id: Optional[str] = None
"""Unique ID for this request (for logging)
txn_id: str = attr.ib(init=False)
"""Unique ID for this request (for logging), this is autogenerated.
"""
uri: bytes = attr.ib(init=False)
"""The URI of this request
uri: bytes = b""
"""The URI of this request, usually generated from the above information.
"""
_generate_uri: bool = True
"""True to automatically generate the uri field based on the above information.
Set to False if manually configuring the URI.
"""
def __attrs_post_init__(self) -> None:
@ -168,22 +174,23 @@ class MatrixFederationRequest:
object.__setattr__(self, "txn_id", txn_id)
destination_bytes = self.destination.encode("ascii")
path_bytes = self.path.encode("ascii")
query_bytes = encode_query_args(self.query)
if self._generate_uri:
destination_bytes = self.destination.encode("ascii")
path_bytes = self.path.encode("ascii")
query_bytes = encode_query_args(self.query)
# The object is frozen so we can pre-compute this.
uri = urllib.parse.urlunparse(
(
b"matrix-federation",
destination_bytes,
path_bytes,
None,
query_bytes,
b"",
# The object is frozen so we can pre-compute this.
uri = urllib.parse.urlunparse(
(
b"matrix-federation",
destination_bytes,
path_bytes,
None,
query_bytes,
b"",
)
)
)
object.__setattr__(self, "uri", uri)
object.__setattr__(self, "uri", uri)
def get_json(self) -> Optional[JsonDict]:
if self.json_callback:
@ -513,6 +520,7 @@ class MatrixFederationHttpClient:
ignore_backoff: bool = False,
backoff_on_404: bool = False,
backoff_on_all_error_codes: bool = False,
follow_redirects: bool = False,
) -> IResponse:
"""
Sends a request to the given server.
@ -555,6 +563,9 @@ class MatrixFederationHttpClient:
backoff_on_404: Back off if we get a 404
backoff_on_all_error_codes: Back off if we get any error response
follow_redirects: True to follow the Location header of 307/308 redirect
responses. This does not recurse.
Returns:
Resolves with the HTTP response object on success.
@ -714,6 +725,26 @@ class MatrixFederationHttpClient:
response.code,
response_phrase,
)
elif (
response.code in (307, 308)
and follow_redirects
and response.headers.hasHeader("Location")
):
# The Location header *might* be relative so resolve it.
location = response.headers.getRawHeaders(b"Location")[0]
new_uri = urllib.parse.urljoin(request.uri, location)
return await self._send_request(
attr.evolve(request, uri=new_uri, generate_uri=False),
retry_on_dns_fail,
timeout,
long_retries,
ignore_backoff,
backoff_on_404,
backoff_on_all_error_codes,
# Do not continue following redirects.
follow_redirects=False,
)
else:
logger.info(
"{%s} [%s] Got response headers: %d %s",
@ -1383,6 +1414,7 @@ class MatrixFederationHttpClient:
retry_on_dns_fail: bool = True,
max_size: Optional[int] = None,
ignore_backoff: bool = False,
follow_redirects: bool = False,
) -> Tuple[int, Dict[bytes, List[bytes]]]:
"""GETs a file from a given homeserver
Args:
@ -1392,6 +1424,8 @@ class MatrixFederationHttpClient:
args: Optional dictionary used to create the query string.
ignore_backoff: true to ignore the historical backoff data
and try the request anyway.
follow_redirects: True to follow the Location header of 307/308 redirect
responses. This does not recurse.
Returns:
Resolves with an (int,dict) tuple of
@ -1412,7 +1446,10 @@ class MatrixFederationHttpClient:
)
response = await self._send_request(
request, retry_on_dns_fail=retry_on_dns_fail, ignore_backoff=ignore_backoff
request,
retry_on_dns_fail=retry_on_dns_fail,
ignore_backoff=ignore_backoff,
follow_redirects=follow_redirects,
)
headers = dict(response.headers.getAllRawHeaders())

View File

@ -77,7 +77,7 @@ class MediaRepository:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self.client = hs.get_federation_http_client()
self.client = hs.get_federation_client()
self.clock = hs.get_clock()
self.server_name = hs.hostname
self.store = hs.get_datastores().main
@ -644,22 +644,13 @@ class MediaRepository:
file_info = FileInfo(server_name=server_name, file_id=file_id)
with self.media_storage.store_into_file(file_info) as (f, fname, finish):
request_path = "/".join(
("/_matrix/media/r0/download", server_name, media_id)
)
try:
length, headers = await self.client.get_file(
length, headers = await self.client.download_media(
server_name,
request_path,
media_id,
output_stream=f,
max_size=self.max_upload_size,
args={
# tell the remote server to 404 if it doesn't
# recognise the server_name, to make sure we don't
# end up with a routing loop.
"allow_remote": "false",
"timeout_ms": str(max_timeout_ms),
},
max_timeout_ms=max_timeout_ms,
)
except RequestSendFailed as e:
logger.warning(

View File

@ -80,6 +80,7 @@ from synapse.module_api.callbacks.account_validity_callbacks import (
ON_LEGACY_ADMIN_REQUEST,
ON_LEGACY_RENEW_CALLBACK,
ON_LEGACY_SEND_MAIL_CALLBACK,
ON_USER_LOGIN_CALLBACK,
ON_USER_REGISTRATION_CALLBACK,
)
from synapse.module_api.callbacks.spamchecker_callbacks import (
@ -334,6 +335,7 @@ class ModuleApi:
*,
is_user_expired: Optional[IS_USER_EXPIRED_CALLBACK] = None,
on_user_registration: Optional[ON_USER_REGISTRATION_CALLBACK] = None,
on_user_login: Optional[ON_USER_LOGIN_CALLBACK] = None,
on_legacy_send_mail: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None,
on_legacy_renew: Optional[ON_LEGACY_RENEW_CALLBACK] = None,
on_legacy_admin_request: Optional[ON_LEGACY_ADMIN_REQUEST] = None,
@ -345,6 +347,7 @@ class ModuleApi:
return self._callbacks.account_validity.register_callbacks(
is_user_expired=is_user_expired,
on_user_registration=on_user_registration,
on_user_login=on_user_login,
on_legacy_send_mail=on_legacy_send_mail,
on_legacy_renew=on_legacy_renew,
on_legacy_admin_request=on_legacy_admin_request,

View File

@ -22,6 +22,7 @@ logger = logging.getLogger(__name__)
# Types for callbacks to be registered via the module api
IS_USER_EXPIRED_CALLBACK = Callable[[str], Awaitable[Optional[bool]]]
ON_USER_REGISTRATION_CALLBACK = Callable[[str], Awaitable]
ON_USER_LOGIN_CALLBACK = Callable[[str, Optional[str], Optional[str]], Awaitable]
# Temporary hooks to allow for a transition from `/_matrix/client` endpoints
# to `/_synapse/client/account_validity`. See `register_callbacks` below.
ON_LEGACY_SEND_MAIL_CALLBACK = Callable[[str], Awaitable]
@ -33,6 +34,7 @@ class AccountValidityModuleApiCallbacks:
def __init__(self) -> None:
self.is_user_expired_callbacks: List[IS_USER_EXPIRED_CALLBACK] = []
self.on_user_registration_callbacks: List[ON_USER_REGISTRATION_CALLBACK] = []
self.on_user_login_callbacks: List[ON_USER_LOGIN_CALLBACK] = []
self.on_legacy_send_mail_callback: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None
self.on_legacy_renew_callback: Optional[ON_LEGACY_RENEW_CALLBACK] = None
@ -44,6 +46,7 @@ class AccountValidityModuleApiCallbacks:
self,
is_user_expired: Optional[IS_USER_EXPIRED_CALLBACK] = None,
on_user_registration: Optional[ON_USER_REGISTRATION_CALLBACK] = None,
on_user_login: Optional[ON_USER_LOGIN_CALLBACK] = None,
on_legacy_send_mail: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None,
on_legacy_renew: Optional[ON_LEGACY_RENEW_CALLBACK] = None,
on_legacy_admin_request: Optional[ON_LEGACY_ADMIN_REQUEST] = None,
@ -55,6 +58,9 @@ class AccountValidityModuleApiCallbacks:
if on_user_registration is not None:
self.on_user_registration_callbacks.append(on_user_registration)
if on_user_login is not None:
self.on_user_login_callbacks.append(on_user_login)
# The builtin account validity feature exposes 3 endpoints (send_mail, renew, and
# an admin one). As part of moving the feature into a module, we need to change
# the path from /_matrix/client/unstable/account_validity/... to

View File

@ -115,6 +115,7 @@ class LoginRestServlet(RestServlet):
self.registration_handler = hs.get_registration_handler()
self._sso_handler = hs.get_sso_handler()
self._spam_checker = hs.get_module_api_callbacks().spam_checker
self._account_validity_handler = hs.get_account_validity_handler()
self._well_known_builder = WellKnownBuilder(hs)
self._address_ratelimiter = Ratelimiter(
@ -470,6 +471,13 @@ class LoginRestServlet(RestServlet):
device_id=device_id,
)
# execute the callback
await self._account_validity_handler.on_user_login(
user_id,
auth_provider_type=login_submission.get("type"),
auth_provider_id=auth_provider_id,
)
if valid_until_ms is not None:
expires_in_ms = valid_until_ms - self.clock.time_msec()
result["expires_in_ms"] = expires_in_ms

View File

@ -13,12 +13,17 @@
# limitations under the License.
""" This module contains REST servlets to do with profile: /profile/<paths> """
from http import HTTPStatus
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import Codes, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.servlet import (
RestServlet,
parse_boolean,
parse_json_object_from_request,
)
from synapse.http.site import SynapseRequest
from synapse.rest.client._base import client_patterns
from synapse.types import JsonDict, UserID
@ -27,6 +32,20 @@ if TYPE_CHECKING:
from synapse.server import HomeServer
def _read_propagate(hs: "HomeServer", request: SynapseRequest) -> bool:
# This will always be set by the time Twisted calls us.
assert request.args is not None
propagate = True
if hs.config.experimental.msc4069_profile_inhibit_propagation:
do_propagate = request.args.get(b"org.matrix.msc4069.propagate")
if do_propagate is not None:
propagate = parse_boolean(
request, "org.matrix.msc4069.propagate", default=False
)
return propagate
class ProfileDisplaynameRestServlet(RestServlet):
PATTERNS = client_patterns("/profile/(?P<user_id>[^/]*)/displayname", v1=True)
CATEGORY = "Event sending requests"
@ -80,7 +99,11 @@ class ProfileDisplaynameRestServlet(RestServlet):
errcode=Codes.BAD_JSON,
)
await self.profile_handler.set_displayname(user, requester, new_name, is_admin)
propagate = _read_propagate(self.hs, request)
await self.profile_handler.set_displayname(
user, requester, new_name, is_admin, propagate=propagate
)
return 200, {}
@ -135,8 +158,10 @@ class ProfileAvatarURLRestServlet(RestServlet):
400, "Missing key 'avatar_url'", errcode=Codes.MISSING_PARAM
)
propagate = _read_propagate(self.hs, request)
await self.profile_handler.set_avatar_url(
user, requester, new_avatar_url, is_admin
user, requester, new_avatar_url, is_admin, propagate=propagate
)
return 200, {}

View File

@ -80,6 +80,9 @@ class VersionsRestServlet(RestServlet):
"v1.4",
"v1.5",
"v1.6",
"v1.7",
"v1.8",
"v1.9",
],
# as per MSC1497:
"unstable_features": {
@ -126,6 +129,8 @@ class VersionsRestServlet(RestServlet):
"org.matrix.msc3981": self.config.experimental.msc3981_recurse_relations,
# Adds support for deleting account data.
"org.matrix.msc3391": self.config.experimental.msc3391_enabled,
# Allows clients to inhibit profile update propagation.
"org.matrix.msc4069": self.config.experimental.msc4069_profile_inhibit_propagation,
},
},
)

View File

@ -178,6 +178,8 @@ class ServerNoticesManager:
"avatar_url": self._config.servernotices.server_notices_mxid_avatar_url,
}
# `ignore_forced_encryption` is used to bypass the `encryption_enabled_by_default_for_room_type`
# setting if it is set, since the server notices will not be encrypted anyway.
room_id, _, _ = await self._room_creation_handler.create_room(
requester,
config={
@ -187,6 +189,7 @@ class ServerNoticesManager:
},
ratelimit=False,
creator_join_profile=join_profile,
ignore_forced_encryption=True,
)
self.maybe_get_notice_room_for_user.invalidate((user_id,))
@ -221,14 +224,27 @@ class ServerNoticesManager:
if room.room_id == room_id:
return
user_id_obj = UserID.from_string(user_id)
await self._room_member_handler.update_membership(
requester=requester,
target=UserID.from_string(user_id),
target=user_id_obj,
room_id=room_id,
action="invite",
ratelimit=False,
)
if self._config.servernotices.server_notices_auto_join:
user_requester = create_requester(
user_id, authenticated_entity=self._server_name
)
await self._room_member_handler.update_membership(
requester=user_requester,
target=user_id_obj,
room_id=room_id,
action="join",
ratelimit=False,
)
async def _update_notice_user_profile_if_changed(
self,
requester: Requester,

View File

@ -465,18 +465,15 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
#
# This works by finding the max last_seen that is less than the given
# time, but has no more than N rows before it, deleting all rows with
# a lesser last_seen time. (We COALESCE so that the sub-SELECT always
# returns exactly one row).
# a lesser last_seen time. (We use an `IN` clause to force postgres to
# use the index, otherwise it tends to do a seq scan).
sql = """
DELETE FROM user_ips
WHERE last_seen <= (
SELECT COALESCE(MAX(last_seen), -1)
FROM (
SELECT last_seen FROM user_ips
WHERE last_seen <= ?
ORDER BY last_seen ASC
LIMIT 5000
) AS u
WHERE last_seen IN (
SELECT last_seen FROM user_ips
WHERE last_seen <= ?
ORDER BY last_seen ASC
LIMIT 5000
)
"""

View File

@ -129,8 +129,8 @@ Changes in SCHEMA_VERSION = 83
SCHEMA_COMPAT_VERSION = (
# The `event_txn_id_device_id` must be written to for new events.
80
# The event_txn_id table and tables from MSC2716 no longer exist.
83
)
"""Limit on how far the synapse codebase can be rolled back without breaking db compat

View File

@ -1,8 +0,0 @@
CREATE TABLE background_updates (
update_name text NOT NULL,
progress_json text NOT NULL,
depends_on text,
CONSTRAINT background_updates_uniqueness UNIQUE (update_name)
);

View File

@ -1,4 +1,4 @@
/* Copyright 2014-2016 OpenMarket Ltd
/* Copyright 2023 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -13,17 +13,12 @@
* limitations under the License.
*/
CREATE TABLE IF NOT EXISTS room_aliases(
room_alias TEXT NOT NULL,
room_id TEXT NOT NULL,
UNIQUE (room_alias)
);
-- Drop the old event transaction ID table, the event_txn_id_device_id table
-- should be used instead.
DROP TABLE IF EXISTS event_txn_id;
CREATE INDEX room_aliases_id ON room_aliases(room_id);
CREATE TABLE IF NOT EXISTS room_alias_servers(
room_alias TEXT NOT NULL,
server TEXT NOT NULL
);
CREATE INDEX room_alias_servers_alias ON room_alias_servers(room_alias);
-- Drop tables related to MSC2716 since the implementation is being removed
DROP TABLE IF EXISTS insertion_events;
DROP TABLE IF EXISTS insertion_event_edges;
DROP TABLE IF EXISTS insertion_event_extremities;
DROP TABLE IF EXISTS batch_events;

View File

@ -1,37 +0,0 @@
/* Copyright 2015, 2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/* We used to create tables called application_services and
* application_services_regex, but these are no longer used and are removed in
* delta 54.
*/
CREATE TABLE IF NOT EXISTS application_services_state(
as_id TEXT PRIMARY KEY,
state VARCHAR(5),
last_txn INTEGER
);
CREATE TABLE IF NOT EXISTS application_services_txns(
as_id TEXT NOT NULL,
txn_id INTEGER NOT NULL,
event_ids TEXT NOT NULL,
UNIQUE(as_id, txn_id)
);
CREATE INDEX application_services_txns_id ON application_services_txns (
as_id
);

View File

@ -1,70 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/* We used to create tables called event_destinations and
* state_forward_extremities, but these are no longer used and are removed in
* delta 54.
*/
CREATE TABLE IF NOT EXISTS event_forward_extremities(
event_id TEXT NOT NULL,
room_id TEXT NOT NULL,
UNIQUE (event_id, room_id)
);
CREATE INDEX ev_extrem_room ON event_forward_extremities(room_id);
CREATE INDEX ev_extrem_id ON event_forward_extremities(event_id);
CREATE TABLE IF NOT EXISTS event_backward_extremities(
event_id TEXT NOT NULL,
room_id TEXT NOT NULL,
UNIQUE (event_id, room_id)
);
CREATE INDEX ev_b_extrem_room ON event_backward_extremities(room_id);
CREATE INDEX ev_b_extrem_id ON event_backward_extremities(event_id);
CREATE TABLE IF NOT EXISTS event_edges(
event_id TEXT NOT NULL,
prev_event_id TEXT NOT NULL,
room_id TEXT NOT NULL,
is_state BOOL NOT NULL, -- true if this is a prev_state edge rather than a regular
-- event dag edge.
UNIQUE (event_id, prev_event_id, room_id, is_state)
);
CREATE INDEX ev_edges_id ON event_edges(event_id);
CREATE INDEX ev_edges_prev_id ON event_edges(prev_event_id);
CREATE TABLE IF NOT EXISTS room_depth(
room_id TEXT NOT NULL,
min_depth INTEGER NOT NULL,
UNIQUE (room_id)
);
CREATE INDEX room_depth_room ON room_depth(room_id);
CREATE TABLE IF NOT EXISTS event_auth(
event_id TEXT NOT NULL,
auth_id TEXT NOT NULL,
room_id TEXT NOT NULL,
UNIQUE (event_id, auth_id, room_id)
);
CREATE INDEX evauth_edges_id ON event_auth(event_id);
CREATE INDEX evauth_edges_auth_id ON event_auth(auth_id);

View File

@ -1,38 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/* We used to create tables called event_content_hashes and event_edge_hashes,
* but these are no longer used and are removed in delta 54.
*/
CREATE TABLE IF NOT EXISTS event_reference_hashes (
event_id TEXT,
algorithm TEXT,
hash bytea,
UNIQUE (event_id, algorithm)
);
CREATE INDEX event_reference_hashes_id ON event_reference_hashes(event_id);
CREATE TABLE IF NOT EXISTS event_signatures (
event_id TEXT,
signature_name TEXT,
key_id TEXT,
signature bytea,
UNIQUE (event_id, signature_name, key_id)
);
CREATE INDEX event_signatures_id ON event_signatures(event_id);

View File

@ -1,120 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/* We used to create tables called room_hosts and feedback,
* but these are no longer used and are removed in delta 54.
*/
CREATE TABLE IF NOT EXISTS events(
stream_ordering INTEGER PRIMARY KEY,
topological_ordering BIGINT NOT NULL,
event_id TEXT NOT NULL,
type TEXT NOT NULL,
room_id TEXT NOT NULL,
-- 'content' used to be created NULLable, but as of delta 50 we drop that constraint.
-- the hack we use to drop the constraint doesn't work for an in-memory sqlite
-- database, which breaks the sytests. Hence, we no longer make it nullable.
content TEXT,
unrecognized_keys TEXT,
processed BOOL NOT NULL,
outlier BOOL NOT NULL,
depth BIGINT DEFAULT 0 NOT NULL,
UNIQUE (event_id)
);
CREATE INDEX events_stream_ordering ON events (stream_ordering);
CREATE INDEX events_topological_ordering ON events (topological_ordering);
CREATE INDEX events_order ON events (topological_ordering, stream_ordering);
CREATE INDEX events_room_id ON events (room_id);
CREATE INDEX events_order_room ON events (
room_id, topological_ordering, stream_ordering
);
CREATE TABLE IF NOT EXISTS event_json(
event_id TEXT NOT NULL,
room_id TEXT NOT NULL,
internal_metadata TEXT NOT NULL,
json TEXT NOT NULL,
UNIQUE (event_id)
);
CREATE INDEX event_json_room_id ON event_json(room_id);
CREATE TABLE IF NOT EXISTS state_events(
event_id TEXT NOT NULL,
room_id TEXT NOT NULL,
type TEXT NOT NULL,
state_key TEXT NOT NULL,
prev_state TEXT,
UNIQUE (event_id)
);
CREATE INDEX state_events_room_id ON state_events (room_id);
CREATE INDEX state_events_type ON state_events (type);
CREATE INDEX state_events_state_key ON state_events (state_key);
CREATE TABLE IF NOT EXISTS current_state_events(
event_id TEXT NOT NULL,
room_id TEXT NOT NULL,
type TEXT NOT NULL,
state_key TEXT NOT NULL,
UNIQUE (event_id),
UNIQUE (room_id, type, state_key)
);
CREATE INDEX current_state_events_room_id ON current_state_events (room_id);
CREATE INDEX current_state_events_type ON current_state_events (type);
CREATE INDEX current_state_events_state_key ON current_state_events (state_key);
CREATE TABLE IF NOT EXISTS room_memberships(
event_id TEXT NOT NULL,
user_id TEXT NOT NULL,
sender TEXT NOT NULL,
room_id TEXT NOT NULL,
membership TEXT NOT NULL,
UNIQUE (event_id)
);
CREATE INDEX room_memberships_room_id ON room_memberships (room_id);
CREATE INDEX room_memberships_user_id ON room_memberships (user_id);
CREATE TABLE IF NOT EXISTS topics(
event_id TEXT NOT NULL,
room_id TEXT NOT NULL,
topic TEXT NOT NULL,
UNIQUE (event_id)
);
CREATE INDEX topics_room_id ON topics(room_id);
CREATE TABLE IF NOT EXISTS room_names(
event_id TEXT NOT NULL,
room_id TEXT NOT NULL,
name TEXT NOT NULL,
UNIQUE (event_id)
);
CREATE INDEX room_names_room_id ON room_names(room_id);
CREATE TABLE IF NOT EXISTS rooms(
room_id TEXT PRIMARY KEY NOT NULL,
is_public BOOL,
creator TEXT
);

View File

@ -1,26 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- we used to create a table called server_tls_certificates, but this is no
-- longer used, and is removed in delta 54.
CREATE TABLE IF NOT EXISTS server_signature_keys(
server_name TEXT, -- Server name.
key_id TEXT, -- Key version.
from_server TEXT, -- Which key server the key was fetched from.
ts_added_ms BIGINT, -- When the key was added.
verify_key bytea, -- NACL verification key.
UNIQUE (server_name, key_id)
);

View File

@ -1,68 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE IF NOT EXISTS local_media_repository (
media_id TEXT, -- The id used to refer to the media.
media_type TEXT, -- The MIME-type of the media.
media_length INTEGER, -- Length of the media in bytes.
created_ts BIGINT, -- When the content was uploaded in ms.
upload_name TEXT, -- The name the media was uploaded with.
user_id TEXT, -- The user who uploaded the file.
UNIQUE (media_id)
);
CREATE TABLE IF NOT EXISTS local_media_repository_thumbnails (
media_id TEXT, -- The id used to refer to the media.
thumbnail_width INTEGER, -- The width of the thumbnail in pixels.
thumbnail_height INTEGER, -- The height of the thumbnail in pixels.
thumbnail_type TEXT, -- The MIME-type of the thumbnail.
thumbnail_method TEXT, -- The method used to make the thumbnail.
thumbnail_length INTEGER, -- The length of the thumbnail in bytes.
UNIQUE (
media_id, thumbnail_width, thumbnail_height, thumbnail_type
)
);
CREATE INDEX local_media_repository_thumbnails_media_id
ON local_media_repository_thumbnails (media_id);
CREATE TABLE IF NOT EXISTS remote_media_cache (
media_origin TEXT, -- The remote HS the media came from.
media_id TEXT, -- The id used to refer to the media on that server.
media_type TEXT, -- The MIME-type of the media.
created_ts BIGINT, -- When the content was uploaded in ms.
upload_name TEXT, -- The name the media was uploaded with.
media_length INTEGER, -- Length of the media in bytes.
filesystem_id TEXT, -- The name used to store the media on disk.
UNIQUE (media_origin, media_id)
);
CREATE TABLE IF NOT EXISTS remote_media_cache_thumbnails (
media_origin TEXT, -- The remote HS the media came from.
media_id TEXT, -- The id used to refer to the media.
thumbnail_width INTEGER, -- The width of the thumbnail in pixels.
thumbnail_height INTEGER, -- The height of the thumbnail in pixels.
thumbnail_method TEXT, -- The method used to make the thumbnail.
thumbnail_type TEXT, -- The MIME-type of the thumbnail.
thumbnail_length INTEGER, -- The length of the thumbnail in bytes.
filesystem_id TEXT, -- The name used to store the media on disk.
UNIQUE (
media_origin, media_id, thumbnail_width, thumbnail_height,
thumbnail_type
)
);
CREATE INDEX remote_media_cache_thumbnails_media_id
ON remote_media_cache_thumbnails (media_id);

View File

@ -1,32 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE IF NOT EXISTS presence(
user_id TEXT NOT NULL,
state VARCHAR(20),
status_msg TEXT,
mtime BIGINT, -- milliseconds since last state change
UNIQUE (user_id)
);
-- For each of /my/ users which possibly-remote users are allowed to see their
-- presence state
CREATE TABLE IF NOT EXISTS presence_allow_inbound(
observed_user_id TEXT NOT NULL,
observer_user_id TEXT NOT NULL, -- a UserID,
UNIQUE (observed_user_id, observer_user_id)
);
-- We used to create a table called presence_list, but this is no longer used
-- and is removed in delta 54.

View File

@ -1,20 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE IF NOT EXISTS profiles(
user_id TEXT NOT NULL,
displayname TEXT,
avatar_url TEXT,
UNIQUE(user_id)
);

View File

@ -1,74 +0,0 @@
/* Copyright 2015, 2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE IF NOT EXISTS rejections(
event_id TEXT NOT NULL,
reason TEXT NOT NULL,
last_check TEXT NOT NULL,
UNIQUE (event_id)
);
-- Push notification endpoints that users have configured
CREATE TABLE IF NOT EXISTS pushers (
id BIGINT PRIMARY KEY,
user_name TEXT NOT NULL,
access_token BIGINT DEFAULT NULL,
profile_tag VARCHAR(32) NOT NULL,
kind VARCHAR(8) NOT NULL,
app_id VARCHAR(64) NOT NULL,
app_display_name VARCHAR(64) NOT NULL,
device_display_name VARCHAR(128) NOT NULL,
pushkey bytea NOT NULL,
ts BIGINT NOT NULL,
lang VARCHAR(8),
data bytea,
last_token TEXT,
last_success BIGINT,
failing_since BIGINT,
UNIQUE (app_id, pushkey)
);
CREATE TABLE IF NOT EXISTS push_rules (
id BIGINT PRIMARY KEY,
user_name TEXT NOT NULL,
rule_id TEXT NOT NULL,
priority_class SMALLINT NOT NULL,
priority INTEGER NOT NULL DEFAULT 0,
conditions TEXT NOT NULL,
actions TEXT NOT NULL,
UNIQUE(user_name, rule_id)
);
CREATE INDEX push_rules_user_name on push_rules (user_name);
CREATE TABLE IF NOT EXISTS user_filters(
user_id TEXT,
filter_id BIGINT,
filter_json bytea
);
CREATE INDEX user_filters_by_user_id_filter_id ON user_filters(
user_id, filter_id
);
CREATE TABLE IF NOT EXISTS push_rules_enable (
id BIGINT PRIMARY KEY,
user_name TEXT NOT NULL,
rule_id TEXT NOT NULL,
enabled SMALLINT,
UNIQUE(user_name, rule_id)
);
CREATE INDEX push_rules_enable_user_name on push_rules_enable (user_name);

View File

@ -1,22 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE IF NOT EXISTS redactions (
event_id TEXT NOT NULL,
redacts TEXT NOT NULL,
UNIQUE (event_id)
);
CREATE INDEX redactions_event_id ON redactions (event_id);
CREATE INDEX redactions_redacts ON redactions (redacts);

View File

@ -1,40 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE IF NOT EXISTS state_groups(
id BIGINT PRIMARY KEY,
room_id TEXT NOT NULL,
event_id TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS state_groups_state(
state_group BIGINT NOT NULL,
room_id TEXT NOT NULL,
type TEXT NOT NULL,
state_key TEXT NOT NULL,
event_id TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS event_to_state_groups(
event_id TEXT NOT NULL,
state_group BIGINT NOT NULL,
UNIQUE (event_id)
);
CREATE INDEX state_groups_id ON state_groups(id);
CREATE INDEX state_groups_state_id ON state_groups_state(state_group);
CREATE INDEX state_groups_state_tuple ON state_groups_state(room_id, type, state_key);
CREATE INDEX event_to_state_groups_id ON event_to_state_groups(event_id);

View File

@ -1,44 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- Stores what transaction ids we have received and what our response was
CREATE TABLE IF NOT EXISTS received_transactions(
transaction_id TEXT,
origin TEXT,
ts BIGINT,
response_code INTEGER,
response_json bytea,
has_been_referenced smallint default 0, -- Whether this has been referenced by a prev_tx
UNIQUE (transaction_id, origin)
);
CREATE INDEX transactions_have_ref ON received_transactions(origin, has_been_referenced); -- WHERE has_been_referenced = 0;
-- For sent transactions only.
CREATE TABLE IF NOT EXISTS transaction_id_to_pdu(
transaction_id INTEGER,
destination TEXT,
pdu_id TEXT,
pdu_origin TEXT,
UNIQUE (transaction_id, destination)
);
CREATE INDEX transaction_id_to_pdu_dest ON transaction_id_to_pdu(destination);
-- To track destination health
CREATE TABLE IF NOT EXISTS destinations(
destination TEXT PRIMARY KEY,
retry_last_ts BIGINT,
retry_interval INTEGER
);

View File

@ -1,42 +0,0 @@
/* Copyright 2014-2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE IF NOT EXISTS users(
name TEXT,
password_hash TEXT,
creation_ts BIGINT,
admin SMALLINT DEFAULT 0 NOT NULL,
UNIQUE(name)
);
CREATE TABLE IF NOT EXISTS access_tokens(
id BIGINT PRIMARY KEY,
user_id TEXT NOT NULL,
device_id TEXT,
token TEXT NOT NULL,
last_used BIGINT,
UNIQUE(token)
);
CREATE TABLE IF NOT EXISTS user_ips (
user_id TEXT NOT NULL,
access_token TEXT NOT NULL,
device_id TEXT,
ip TEXT NOT NULL,
user_agent TEXT NOT NULL,
last_seen BIGINT NOT NULL
);
CREATE INDEX user_ips_user ON user_ips(user_id);
CREATE INDEX user_ips_user_ip ON user_ips(user_id, access_token, ip);

File diff suppressed because it is too large

View File

@ -1,243 +0,0 @@
CREATE TABLE application_services_state( as_id TEXT PRIMARY KEY, state VARCHAR(5), last_txn INTEGER );
CREATE TABLE application_services_txns( as_id TEXT NOT NULL, txn_id INTEGER NOT NULL, event_ids TEXT NOT NULL, UNIQUE(as_id, txn_id) );
CREATE INDEX application_services_txns_id ON application_services_txns ( as_id );
CREATE TABLE presence( user_id TEXT NOT NULL, state VARCHAR(20), status_msg TEXT, mtime BIGINT, UNIQUE (user_id) );
CREATE TABLE presence_allow_inbound( observed_user_id TEXT NOT NULL, observer_user_id TEXT NOT NULL, UNIQUE (observed_user_id, observer_user_id) );
CREATE TABLE users( name TEXT, password_hash TEXT, creation_ts BIGINT, admin SMALLINT DEFAULT 0 NOT NULL, upgrade_ts BIGINT, is_guest SMALLINT DEFAULT 0 NOT NULL, appservice_id TEXT, consent_version TEXT, consent_server_notice_sent TEXT, user_type TEXT DEFAULT NULL, UNIQUE(name) );
CREATE TABLE access_tokens( id BIGINT PRIMARY KEY, user_id TEXT NOT NULL, device_id TEXT, token TEXT NOT NULL, last_used BIGINT, UNIQUE(token) );
CREATE TABLE user_ips ( user_id TEXT NOT NULL, access_token TEXT NOT NULL, device_id TEXT, ip TEXT NOT NULL, user_agent TEXT NOT NULL, last_seen BIGINT NOT NULL );
CREATE TABLE profiles( user_id TEXT NOT NULL, displayname TEXT, avatar_url TEXT, UNIQUE(user_id) );
CREATE TABLE received_transactions( transaction_id TEXT, origin TEXT, ts BIGINT, response_code INTEGER, response_json bytea, has_been_referenced smallint default 0, UNIQUE (transaction_id, origin) );
CREATE TABLE destinations( destination TEXT PRIMARY KEY, retry_last_ts BIGINT, retry_interval INTEGER );
CREATE TABLE events( stream_ordering INTEGER PRIMARY KEY, topological_ordering BIGINT NOT NULL, event_id TEXT NOT NULL, type TEXT NOT NULL, room_id TEXT NOT NULL, content TEXT, unrecognized_keys TEXT, processed BOOL NOT NULL, outlier BOOL NOT NULL, depth BIGINT DEFAULT 0 NOT NULL, origin_server_ts BIGINT, received_ts BIGINT, sender TEXT, contains_url BOOLEAN, UNIQUE (event_id) );
CREATE INDEX events_order_room ON events ( room_id, topological_ordering, stream_ordering );
CREATE TABLE event_json( event_id TEXT NOT NULL, room_id TEXT NOT NULL, internal_metadata TEXT NOT NULL, json TEXT NOT NULL, format_version INTEGER, UNIQUE (event_id) );
CREATE INDEX event_json_room_id ON event_json(room_id);
CREATE TABLE state_events( event_id TEXT NOT NULL, room_id TEXT NOT NULL, type TEXT NOT NULL, state_key TEXT NOT NULL, prev_state TEXT, UNIQUE (event_id) );
CREATE TABLE current_state_events( event_id TEXT NOT NULL, room_id TEXT NOT NULL, type TEXT NOT NULL, state_key TEXT NOT NULL, UNIQUE (event_id), UNIQUE (room_id, type, state_key) );
CREATE TABLE room_memberships( event_id TEXT NOT NULL, user_id TEXT NOT NULL, sender TEXT NOT NULL, room_id TEXT NOT NULL, membership TEXT NOT NULL, forgotten INTEGER DEFAULT 0, display_name TEXT, avatar_url TEXT, UNIQUE (event_id) );
CREATE INDEX room_memberships_room_id ON room_memberships (room_id);
CREATE INDEX room_memberships_user_id ON room_memberships (user_id);
CREATE TABLE topics( event_id TEXT NOT NULL, room_id TEXT NOT NULL, topic TEXT NOT NULL, UNIQUE (event_id) );
CREATE INDEX topics_room_id ON topics(room_id);
CREATE TABLE room_names( event_id TEXT NOT NULL, room_id TEXT NOT NULL, name TEXT NOT NULL, UNIQUE (event_id) );
CREATE INDEX room_names_room_id ON room_names(room_id);
CREATE TABLE rooms( room_id TEXT PRIMARY KEY NOT NULL, is_public BOOL, creator TEXT );
CREATE TABLE server_signature_keys( server_name TEXT, key_id TEXT, from_server TEXT, ts_added_ms BIGINT, verify_key bytea, ts_valid_until_ms BIGINT, UNIQUE (server_name, key_id) );
CREATE TABLE rejections( event_id TEXT NOT NULL, reason TEXT NOT NULL, last_check TEXT NOT NULL, UNIQUE (event_id) );
CREATE TABLE push_rules ( id BIGINT PRIMARY KEY, user_name TEXT NOT NULL, rule_id TEXT NOT NULL, priority_class SMALLINT NOT NULL, priority INTEGER NOT NULL DEFAULT 0, conditions TEXT NOT NULL, actions TEXT NOT NULL, UNIQUE(user_name, rule_id) );
CREATE INDEX push_rules_user_name on push_rules (user_name);
CREATE TABLE user_filters( user_id TEXT, filter_id BIGINT, filter_json bytea );
CREATE INDEX user_filters_by_user_id_filter_id ON user_filters( user_id, filter_id );
CREATE TABLE push_rules_enable ( id BIGINT PRIMARY KEY, user_name TEXT NOT NULL, rule_id TEXT NOT NULL, enabled SMALLINT, UNIQUE(user_name, rule_id) );
CREATE INDEX push_rules_enable_user_name on push_rules_enable (user_name);
CREATE TABLE event_forward_extremities( event_id TEXT NOT NULL, room_id TEXT NOT NULL, UNIQUE (event_id, room_id) );
CREATE INDEX ev_extrem_room ON event_forward_extremities(room_id);
CREATE INDEX ev_extrem_id ON event_forward_extremities(event_id);
CREATE TABLE event_backward_extremities( event_id TEXT NOT NULL, room_id TEXT NOT NULL, UNIQUE (event_id, room_id) );
CREATE INDEX ev_b_extrem_room ON event_backward_extremities(room_id);
CREATE INDEX ev_b_extrem_id ON event_backward_extremities(event_id);
CREATE TABLE event_edges( event_id TEXT NOT NULL, prev_event_id TEXT NOT NULL, room_id TEXT NOT NULL, is_state BOOL NOT NULL, UNIQUE (event_id, prev_event_id, room_id, is_state) );
CREATE INDEX ev_edges_id ON event_edges(event_id);
CREATE INDEX ev_edges_prev_id ON event_edges(prev_event_id);
CREATE TABLE room_depth( room_id TEXT NOT NULL, min_depth INTEGER NOT NULL, UNIQUE (room_id) );
CREATE INDEX room_depth_room ON room_depth(room_id);
CREATE TABLE event_to_state_groups( event_id TEXT NOT NULL, state_group BIGINT NOT NULL, UNIQUE (event_id) );
CREATE TABLE local_media_repository ( media_id TEXT, media_type TEXT, media_length INTEGER, created_ts BIGINT, upload_name TEXT, user_id TEXT, quarantined_by TEXT, url_cache TEXT, last_access_ts BIGINT, UNIQUE (media_id) );
CREATE TABLE local_media_repository_thumbnails ( media_id TEXT, thumbnail_width INTEGER, thumbnail_height INTEGER, thumbnail_type TEXT, thumbnail_method TEXT, thumbnail_length INTEGER, UNIQUE ( media_id, thumbnail_width, thumbnail_height, thumbnail_type ) );
CREATE INDEX local_media_repository_thumbnails_media_id ON local_media_repository_thumbnails (media_id);
CREATE TABLE remote_media_cache ( media_origin TEXT, media_id TEXT, media_type TEXT, created_ts BIGINT, upload_name TEXT, media_length INTEGER, filesystem_id TEXT, last_access_ts BIGINT, quarantined_by TEXT, UNIQUE (media_origin, media_id) );
CREATE TABLE remote_media_cache_thumbnails ( media_origin TEXT, media_id TEXT, thumbnail_width INTEGER, thumbnail_height INTEGER, thumbnail_method TEXT, thumbnail_type TEXT, thumbnail_length INTEGER, filesystem_id TEXT, UNIQUE ( media_origin, media_id, thumbnail_width, thumbnail_height, thumbnail_type ) );
CREATE TABLE redactions ( event_id TEXT NOT NULL, redacts TEXT NOT NULL, UNIQUE (event_id) );
CREATE INDEX redactions_redacts ON redactions (redacts);
CREATE TABLE room_aliases( room_alias TEXT NOT NULL, room_id TEXT NOT NULL, creator TEXT, UNIQUE (room_alias) );
CREATE INDEX room_aliases_id ON room_aliases(room_id);
CREATE TABLE room_alias_servers( room_alias TEXT NOT NULL, server TEXT NOT NULL );
CREATE INDEX room_alias_servers_alias ON room_alias_servers(room_alias);
CREATE TABLE event_reference_hashes ( event_id TEXT, algorithm TEXT, hash bytea, UNIQUE (event_id, algorithm) );
CREATE INDEX event_reference_hashes_id ON event_reference_hashes(event_id);
CREATE TABLE IF NOT EXISTS "server_keys_json" ( server_name TEXT NOT NULL, key_id TEXT NOT NULL, from_server TEXT NOT NULL, ts_added_ms BIGINT NOT NULL, ts_valid_until_ms BIGINT NOT NULL, key_json bytea NOT NULL, CONSTRAINT server_keys_json_uniqueness UNIQUE (server_name, key_id, from_server) );
CREATE TABLE e2e_device_keys_json ( user_id TEXT NOT NULL, device_id TEXT NOT NULL, ts_added_ms BIGINT NOT NULL, key_json TEXT NOT NULL, CONSTRAINT e2e_device_keys_json_uniqueness UNIQUE (user_id, device_id) );
CREATE TABLE e2e_one_time_keys_json ( user_id TEXT NOT NULL, device_id TEXT NOT NULL, algorithm TEXT NOT NULL, key_id TEXT NOT NULL, ts_added_ms BIGINT NOT NULL, key_json TEXT NOT NULL, CONSTRAINT e2e_one_time_keys_json_uniqueness UNIQUE (user_id, device_id, algorithm, key_id) );
CREATE TABLE receipts_graph( room_id TEXT NOT NULL, receipt_type TEXT NOT NULL, user_id TEXT NOT NULL, event_ids TEXT NOT NULL, data TEXT NOT NULL, CONSTRAINT receipts_graph_uniqueness UNIQUE (room_id, receipt_type, user_id) );
CREATE TABLE receipts_linearized ( stream_id BIGINT NOT NULL, room_id TEXT NOT NULL, receipt_type TEXT NOT NULL, user_id TEXT NOT NULL, event_id TEXT NOT NULL, data TEXT NOT NULL, CONSTRAINT receipts_linearized_uniqueness UNIQUE (room_id, receipt_type, user_id) );
CREATE INDEX receipts_linearized_id ON receipts_linearized( stream_id );
CREATE INDEX receipts_linearized_room_stream ON receipts_linearized( room_id, stream_id );
CREATE TABLE IF NOT EXISTS "user_threepids" ( user_id TEXT NOT NULL, medium TEXT NOT NULL, address TEXT NOT NULL, validated_at BIGINT NOT NULL, added_at BIGINT NOT NULL, CONSTRAINT medium_address UNIQUE (medium, address) );
CREATE INDEX user_threepids_user_id ON user_threepids(user_id);
CREATE VIRTUAL TABLE event_search USING fts4 ( event_id, room_id, sender, key, value )
/* event_search(event_id,room_id,sender,"key",value) */;
CREATE TABLE guest_access( event_id TEXT NOT NULL, room_id TEXT NOT NULL, guest_access TEXT NOT NULL, UNIQUE (event_id) );
CREATE TABLE history_visibility( event_id TEXT NOT NULL, room_id TEXT NOT NULL, history_visibility TEXT NOT NULL, UNIQUE (event_id) );
CREATE TABLE room_tags( user_id TEXT NOT NULL, room_id TEXT NOT NULL, tag TEXT NOT NULL, content TEXT NOT NULL, CONSTRAINT room_tag_uniqueness UNIQUE (user_id, room_id, tag) );
CREATE TABLE room_tags_revisions ( user_id TEXT NOT NULL, room_id TEXT NOT NULL, stream_id BIGINT NOT NULL, CONSTRAINT room_tag_revisions_uniqueness UNIQUE (user_id, room_id) );
CREATE TABLE IF NOT EXISTS "account_data_max_stream_id"( Lock CHAR(1) NOT NULL DEFAULT 'X' UNIQUE, stream_id BIGINT NOT NULL, CHECK (Lock='X') );
CREATE TABLE account_data( user_id TEXT NOT NULL, account_data_type TEXT NOT NULL, stream_id BIGINT NOT NULL, content TEXT NOT NULL, CONSTRAINT account_data_uniqueness UNIQUE (user_id, account_data_type) );
CREATE TABLE room_account_data( user_id TEXT NOT NULL, room_id TEXT NOT NULL, account_data_type TEXT NOT NULL, stream_id BIGINT NOT NULL, content TEXT NOT NULL, CONSTRAINT room_account_data_uniqueness UNIQUE (user_id, room_id, account_data_type) );
CREATE INDEX account_data_stream_id on account_data(user_id, stream_id);
CREATE INDEX room_account_data_stream_id on room_account_data(user_id, stream_id);
CREATE INDEX events_ts ON events(origin_server_ts, stream_ordering);
CREATE TABLE event_push_actions( room_id TEXT NOT NULL, event_id TEXT NOT NULL, user_id TEXT NOT NULL, profile_tag VARCHAR(32), actions TEXT NOT NULL, topological_ordering BIGINT, stream_ordering BIGINT, notif SMALLINT, highlight SMALLINT, CONSTRAINT event_id_user_id_profile_tag_uniqueness UNIQUE (room_id, event_id, user_id, profile_tag) );
CREATE INDEX event_push_actions_room_id_user_id on event_push_actions(room_id, user_id);
CREATE INDEX events_room_stream on events(room_id, stream_ordering);
CREATE INDEX public_room_index on rooms(is_public);
CREATE INDEX receipts_linearized_user ON receipts_linearized( user_id );
CREATE INDEX event_push_actions_rm_tokens on event_push_actions( user_id, room_id, topological_ordering, stream_ordering );
CREATE TABLE presence_stream( stream_id BIGINT, user_id TEXT, state TEXT, last_active_ts BIGINT, last_federation_update_ts BIGINT, last_user_sync_ts BIGINT, status_msg TEXT, currently_active BOOLEAN );
CREATE INDEX presence_stream_id ON presence_stream(stream_id, user_id);
CREATE INDEX presence_stream_user_id ON presence_stream(user_id);
CREATE TABLE push_rules_stream( stream_id BIGINT NOT NULL, event_stream_ordering BIGINT NOT NULL, user_id TEXT NOT NULL, rule_id TEXT NOT NULL, op TEXT NOT NULL, priority_class SMALLINT, priority INTEGER, conditions TEXT, actions TEXT );
CREATE INDEX push_rules_stream_id ON push_rules_stream(stream_id);
CREATE INDEX push_rules_stream_user_stream_id on push_rules_stream(user_id, stream_id);
CREATE TABLE ex_outlier_stream( event_stream_ordering BIGINT PRIMARY KEY NOT NULL, event_id TEXT NOT NULL, state_group BIGINT NOT NULL );
CREATE TABLE threepid_guest_access_tokens( medium TEXT, address TEXT, guest_access_token TEXT, first_inviter TEXT );
CREATE UNIQUE INDEX threepid_guest_access_tokens_index ON threepid_guest_access_tokens(medium, address);
CREATE TABLE local_invites( stream_id BIGINT NOT NULL, inviter TEXT NOT NULL, invitee TEXT NOT NULL, event_id TEXT NOT NULL, room_id TEXT NOT NULL, locally_rejected TEXT, replaced_by TEXT );
CREATE INDEX local_invites_id ON local_invites(stream_id);
CREATE INDEX local_invites_for_user_idx ON local_invites(invitee, locally_rejected, replaced_by, room_id);
CREATE INDEX event_push_actions_stream_ordering on event_push_actions( stream_ordering, user_id );
CREATE TABLE open_id_tokens ( token TEXT NOT NULL PRIMARY KEY, ts_valid_until_ms bigint NOT NULL, user_id TEXT NOT NULL, UNIQUE (token) );
CREATE INDEX open_id_tokens_ts_valid_until_ms ON open_id_tokens(ts_valid_until_ms);
CREATE TABLE pusher_throttle( pusher BIGINT NOT NULL, room_id TEXT NOT NULL, last_sent_ts BIGINT, throttle_ms BIGINT, PRIMARY KEY (pusher, room_id) );
CREATE TABLE event_reports( id BIGINT NOT NULL PRIMARY KEY, received_ts BIGINT NOT NULL, room_id TEXT NOT NULL, event_id TEXT NOT NULL, user_id TEXT NOT NULL, reason TEXT, content TEXT );
CREATE TABLE devices ( user_id TEXT NOT NULL, device_id TEXT NOT NULL, display_name TEXT, CONSTRAINT device_uniqueness UNIQUE (user_id, device_id) );
CREATE TABLE appservice_stream_position( Lock CHAR(1) NOT NULL DEFAULT 'X' UNIQUE, stream_ordering BIGINT, CHECK (Lock='X') );
CREATE TABLE device_inbox ( user_id TEXT NOT NULL, device_id TEXT NOT NULL, stream_id BIGINT NOT NULL, message_json TEXT NOT NULL );
CREATE INDEX device_inbox_user_stream_id ON device_inbox(user_id, device_id, stream_id);
CREATE INDEX received_transactions_ts ON received_transactions(ts);
CREATE TABLE device_federation_outbox ( destination TEXT NOT NULL, stream_id BIGINT NOT NULL, queued_ts BIGINT NOT NULL, messages_json TEXT NOT NULL );
CREATE INDEX device_federation_outbox_destination_id ON device_federation_outbox(destination, stream_id);
CREATE TABLE device_federation_inbox ( origin TEXT NOT NULL, message_id TEXT NOT NULL, received_ts BIGINT NOT NULL );
CREATE INDEX device_federation_inbox_sender_id ON device_federation_inbox(origin, message_id);
CREATE TABLE device_max_stream_id ( stream_id BIGINT NOT NULL );
CREATE TABLE public_room_list_stream ( stream_id BIGINT NOT NULL, room_id TEXT NOT NULL, visibility BOOLEAN NOT NULL , appservice_id TEXT, network_id TEXT);
CREATE INDEX public_room_list_stream_idx on public_room_list_stream( stream_id );
CREATE INDEX public_room_list_stream_rm_idx on public_room_list_stream( room_id, stream_id );
CREATE TABLE stream_ordering_to_exterm ( stream_ordering BIGINT NOT NULL, room_id TEXT NOT NULL, event_id TEXT NOT NULL );
CREATE INDEX stream_ordering_to_exterm_idx on stream_ordering_to_exterm( stream_ordering );
CREATE INDEX stream_ordering_to_exterm_rm_idx on stream_ordering_to_exterm( room_id, stream_ordering );
CREATE TABLE IF NOT EXISTS "event_auth"( event_id TEXT NOT NULL, auth_id TEXT NOT NULL, room_id TEXT NOT NULL );
CREATE INDEX evauth_edges_id ON event_auth(event_id);
CREATE INDEX user_threepids_medium_address on user_threepids (medium, address);
CREATE TABLE appservice_room_list( appservice_id TEXT NOT NULL, network_id TEXT NOT NULL, room_id TEXT NOT NULL );
CREATE UNIQUE INDEX appservice_room_list_idx ON appservice_room_list( appservice_id, network_id, room_id );
CREATE INDEX device_federation_outbox_id ON device_federation_outbox(stream_id);
CREATE TABLE federation_stream_position( type TEXT NOT NULL, stream_id INTEGER NOT NULL );
CREATE TABLE device_lists_remote_cache ( user_id TEXT NOT NULL, device_id TEXT NOT NULL, content TEXT NOT NULL );
CREATE TABLE device_lists_remote_extremeties ( user_id TEXT NOT NULL, stream_id TEXT NOT NULL );
CREATE TABLE device_lists_stream ( stream_id BIGINT NOT NULL, user_id TEXT NOT NULL, device_id TEXT NOT NULL );
CREATE INDEX device_lists_stream_id ON device_lists_stream(stream_id, user_id);
CREATE TABLE device_lists_outbound_pokes ( destination TEXT NOT NULL, stream_id BIGINT NOT NULL, user_id TEXT NOT NULL, device_id TEXT NOT NULL, sent BOOLEAN NOT NULL, ts BIGINT NOT NULL );
CREATE INDEX device_lists_outbound_pokes_id ON device_lists_outbound_pokes(destination, stream_id);
CREATE INDEX device_lists_outbound_pokes_user ON device_lists_outbound_pokes(destination, user_id);
CREATE TABLE event_push_summary ( user_id TEXT NOT NULL, room_id TEXT NOT NULL, notif_count BIGINT NOT NULL, stream_ordering BIGINT NOT NULL );
CREATE INDEX event_push_summary_user_rm ON event_push_summary(user_id, room_id);
CREATE TABLE event_push_summary_stream_ordering ( Lock CHAR(1) NOT NULL DEFAULT 'X' UNIQUE, stream_ordering BIGINT NOT NULL, CHECK (Lock='X') );
CREATE TABLE IF NOT EXISTS "pushers" ( id BIGINT PRIMARY KEY, user_name TEXT NOT NULL, access_token BIGINT DEFAULT NULL, profile_tag TEXT NOT NULL, kind TEXT NOT NULL, app_id TEXT NOT NULL, app_display_name TEXT NOT NULL, device_display_name TEXT NOT NULL, pushkey TEXT NOT NULL, ts BIGINT NOT NULL, lang TEXT, data TEXT, last_stream_ordering INTEGER, last_success BIGINT, failing_since BIGINT, UNIQUE (app_id, pushkey, user_name) );
CREATE INDEX device_lists_outbound_pokes_stream ON device_lists_outbound_pokes(stream_id);
CREATE TABLE ratelimit_override ( user_id TEXT NOT NULL, messages_per_second BIGINT, burst_count BIGINT );
CREATE UNIQUE INDEX ratelimit_override_idx ON ratelimit_override(user_id);
CREATE TABLE current_state_delta_stream ( stream_id BIGINT NOT NULL, room_id TEXT NOT NULL, type TEXT NOT NULL, state_key TEXT NOT NULL, event_id TEXT, prev_event_id TEXT );
CREATE INDEX current_state_delta_stream_idx ON current_state_delta_stream(stream_id);
CREATE TABLE device_lists_outbound_last_success ( destination TEXT NOT NULL, user_id TEXT NOT NULL, stream_id BIGINT NOT NULL );
CREATE INDEX device_lists_outbound_last_success_idx ON device_lists_outbound_last_success( destination, user_id, stream_id );
CREATE TABLE user_directory_stream_pos ( Lock CHAR(1) NOT NULL DEFAULT 'X' UNIQUE, stream_id BIGINT, CHECK (Lock='X') );
CREATE VIRTUAL TABLE user_directory_search USING fts4 ( user_id, value )
/* user_directory_search(user_id,value) */;
CREATE TABLE blocked_rooms ( room_id TEXT NOT NULL, user_id TEXT NOT NULL );
CREATE UNIQUE INDEX blocked_rooms_idx ON blocked_rooms(room_id);
CREATE TABLE IF NOT EXISTS "local_media_repository_url_cache"( url TEXT, response_code INTEGER, etag TEXT, expires_ts BIGINT, og TEXT, media_id TEXT, download_ts BIGINT );
CREATE INDEX local_media_repository_url_cache_expires_idx ON local_media_repository_url_cache(expires_ts);
CREATE INDEX local_media_repository_url_cache_by_url_download_ts ON local_media_repository_url_cache(url, download_ts);
CREATE INDEX local_media_repository_url_cache_media_idx ON local_media_repository_url_cache(media_id);
CREATE TABLE group_users ( group_id TEXT NOT NULL, user_id TEXT NOT NULL, is_admin BOOLEAN NOT NULL, is_public BOOLEAN NOT NULL );
CREATE TABLE group_invites ( group_id TEXT NOT NULL, user_id TEXT NOT NULL );
CREATE TABLE group_rooms ( group_id TEXT NOT NULL, room_id TEXT NOT NULL, is_public BOOLEAN NOT NULL );
CREATE TABLE group_summary_rooms ( group_id TEXT NOT NULL, room_id TEXT NOT NULL, category_id TEXT NOT NULL, room_order BIGINT NOT NULL, is_public BOOLEAN NOT NULL, UNIQUE (group_id, category_id, room_id, room_order), CHECK (room_order > 0) );
CREATE UNIQUE INDEX group_summary_rooms_g_idx ON group_summary_rooms(group_id, room_id, category_id);
CREATE TABLE group_summary_room_categories ( group_id TEXT NOT NULL, category_id TEXT NOT NULL, cat_order BIGINT NOT NULL, UNIQUE (group_id, category_id, cat_order), CHECK (cat_order > 0) );
CREATE TABLE group_room_categories ( group_id TEXT NOT NULL, category_id TEXT NOT NULL, profile TEXT NOT NULL, is_public BOOLEAN NOT NULL, UNIQUE (group_id, category_id) );
CREATE TABLE group_summary_users ( group_id TEXT NOT NULL, user_id TEXT NOT NULL, role_id TEXT NOT NULL, user_order BIGINT NOT NULL, is_public BOOLEAN NOT NULL );
CREATE INDEX group_summary_users_g_idx ON group_summary_users(group_id);
CREATE TABLE group_summary_roles ( group_id TEXT NOT NULL, role_id TEXT NOT NULL, role_order BIGINT NOT NULL, UNIQUE (group_id, role_id, role_order), CHECK (role_order > 0) );
CREATE TABLE group_roles ( group_id TEXT NOT NULL, role_id TEXT NOT NULL, profile TEXT NOT NULL, is_public BOOLEAN NOT NULL, UNIQUE (group_id, role_id) );
CREATE TABLE group_attestations_renewals ( group_id TEXT NOT NULL, user_id TEXT NOT NULL, valid_until_ms BIGINT NOT NULL );
CREATE INDEX group_attestations_renewals_g_idx ON group_attestations_renewals(group_id, user_id);
CREATE INDEX group_attestations_renewals_u_idx ON group_attestations_renewals(user_id);
CREATE INDEX group_attestations_renewals_v_idx ON group_attestations_renewals(valid_until_ms);
CREATE TABLE group_attestations_remote ( group_id TEXT NOT NULL, user_id TEXT NOT NULL, valid_until_ms BIGINT NOT NULL, attestation_json TEXT NOT NULL );
CREATE INDEX group_attestations_remote_g_idx ON group_attestations_remote(group_id, user_id);
CREATE INDEX group_attestations_remote_u_idx ON group_attestations_remote(user_id);
CREATE INDEX group_attestations_remote_v_idx ON group_attestations_remote(valid_until_ms);
CREATE TABLE local_group_membership ( group_id TEXT NOT NULL, user_id TEXT NOT NULL, is_admin BOOLEAN NOT NULL, membership TEXT NOT NULL, is_publicised BOOLEAN NOT NULL, content TEXT NOT NULL );
CREATE INDEX local_group_membership_u_idx ON local_group_membership(user_id, group_id);
CREATE INDEX local_group_membership_g_idx ON local_group_membership(group_id);
CREATE TABLE local_group_updates ( stream_id BIGINT NOT NULL, group_id TEXT NOT NULL, user_id TEXT NOT NULL, type TEXT NOT NULL, content TEXT NOT NULL );
CREATE TABLE remote_profile_cache ( user_id TEXT NOT NULL, displayname TEXT, avatar_url TEXT, last_check BIGINT NOT NULL );
CREATE UNIQUE INDEX remote_profile_cache_user_id ON remote_profile_cache(user_id);
CREATE INDEX remote_profile_cache_time ON remote_profile_cache(last_check);
CREATE TABLE IF NOT EXISTS "deleted_pushers" ( stream_id BIGINT NOT NULL, app_id TEXT NOT NULL, pushkey TEXT NOT NULL, user_id TEXT NOT NULL );
CREATE INDEX deleted_pushers_stream_id ON deleted_pushers (stream_id);
CREATE TABLE IF NOT EXISTS "groups" ( group_id TEXT NOT NULL, name TEXT, avatar_url TEXT, short_description TEXT, long_description TEXT, is_public BOOL NOT NULL , join_policy TEXT NOT NULL DEFAULT 'invite');
CREATE UNIQUE INDEX groups_idx ON groups(group_id);
CREATE TABLE IF NOT EXISTS "user_directory" ( user_id TEXT NOT NULL, room_id TEXT, display_name TEXT, avatar_url TEXT );
CREATE INDEX user_directory_room_idx ON user_directory(room_id);
CREATE UNIQUE INDEX user_directory_user_idx ON user_directory(user_id);
CREATE TABLE event_push_actions_staging ( event_id TEXT NOT NULL, user_id TEXT NOT NULL, actions TEXT NOT NULL, notif SMALLINT NOT NULL, highlight SMALLINT NOT NULL );
CREATE INDEX event_push_actions_staging_id ON event_push_actions_staging(event_id);
CREATE TABLE users_pending_deactivation ( user_id TEXT NOT NULL );
CREATE UNIQUE INDEX group_invites_g_idx ON group_invites(group_id, user_id);
CREATE UNIQUE INDEX group_users_g_idx ON group_users(group_id, user_id);
CREATE INDEX group_users_u_idx ON group_users(user_id);
CREATE INDEX group_invites_u_idx ON group_invites(user_id);
CREATE UNIQUE INDEX group_rooms_g_idx ON group_rooms(group_id, room_id);
CREATE INDEX group_rooms_r_idx ON group_rooms(room_id);
CREATE TABLE user_daily_visits ( user_id TEXT NOT NULL, device_id TEXT, timestamp BIGINT NOT NULL );
CREATE INDEX user_daily_visits_uts_idx ON user_daily_visits(user_id, timestamp);
CREATE INDEX user_daily_visits_ts_idx ON user_daily_visits(timestamp);
CREATE TABLE erased_users ( user_id TEXT NOT NULL );
CREATE UNIQUE INDEX erased_users_user ON erased_users(user_id);
CREATE TABLE monthly_active_users ( user_id TEXT NOT NULL, timestamp BIGINT NOT NULL );
CREATE UNIQUE INDEX monthly_active_users_users ON monthly_active_users(user_id);
CREATE INDEX monthly_active_users_time_stamp ON monthly_active_users(timestamp);
CREATE TABLE IF NOT EXISTS "e2e_room_keys_versions" ( user_id TEXT NOT NULL, version BIGINT NOT NULL, algorithm TEXT NOT NULL, auth_data TEXT NOT NULL, deleted SMALLINT DEFAULT 0 NOT NULL );
CREATE UNIQUE INDEX e2e_room_keys_versions_idx ON e2e_room_keys_versions(user_id, version);
CREATE TABLE IF NOT EXISTS "e2e_room_keys" ( user_id TEXT NOT NULL, room_id TEXT NOT NULL, session_id TEXT NOT NULL, version BIGINT NOT NULL, first_message_index INT, forwarded_count INT, is_verified BOOLEAN, session_data TEXT NOT NULL );
CREATE UNIQUE INDEX e2e_room_keys_idx ON e2e_room_keys(user_id, room_id, session_id);
CREATE TABLE users_who_share_private_rooms ( user_id TEXT NOT NULL, other_user_id TEXT NOT NULL, room_id TEXT NOT NULL );
CREATE UNIQUE INDEX users_who_share_private_rooms_u_idx ON users_who_share_private_rooms(user_id, other_user_id, room_id);
CREATE INDEX users_who_share_private_rooms_r_idx ON users_who_share_private_rooms(room_id);
CREATE INDEX users_who_share_private_rooms_o_idx ON users_who_share_private_rooms(other_user_id);
CREATE TABLE user_threepid_id_server ( user_id TEXT NOT NULL, medium TEXT NOT NULL, address TEXT NOT NULL, id_server TEXT NOT NULL );
CREATE UNIQUE INDEX user_threepid_id_server_idx ON user_threepid_id_server( user_id, medium, address, id_server );
CREATE TABLE users_in_public_rooms ( user_id TEXT NOT NULL, room_id TEXT NOT NULL );
CREATE UNIQUE INDEX users_in_public_rooms_u_idx ON users_in_public_rooms(user_id, room_id);
CREATE TABLE account_validity ( user_id TEXT PRIMARY KEY, expiration_ts_ms BIGINT NOT NULL, email_sent BOOLEAN NOT NULL, renewal_token TEXT );
CREATE TABLE event_relations ( event_id TEXT NOT NULL, relates_to_id TEXT NOT NULL, relation_type TEXT NOT NULL, aggregation_key TEXT );
CREATE UNIQUE INDEX event_relations_id ON event_relations(event_id);
CREATE INDEX event_relations_relates ON event_relations(relates_to_id, relation_type, aggregation_key);
CREATE TABLE stats_stream_pos ( Lock CHAR(1) NOT NULL DEFAULT 'X' UNIQUE, stream_id BIGINT, CHECK (Lock='X') );
CREATE TABLE user_stats ( user_id TEXT NOT NULL, ts BIGINT NOT NULL, bucket_size INT NOT NULL, public_rooms INT NOT NULL, private_rooms INT NOT NULL );
CREATE UNIQUE INDEX user_stats_user_ts ON user_stats(user_id, ts);
CREATE TABLE room_stats ( room_id TEXT NOT NULL, ts BIGINT NOT NULL, bucket_size INT NOT NULL, current_state_events INT NOT NULL, joined_members INT NOT NULL, invited_members INT NOT NULL, left_members INT NOT NULL, banned_members INT NOT NULL, state_events INT NOT NULL );
CREATE UNIQUE INDEX room_stats_room_ts ON room_stats(room_id, ts);
CREATE TABLE room_state ( room_id TEXT NOT NULL, join_rules TEXT, history_visibility TEXT, encryption TEXT, name TEXT, topic TEXT, avatar TEXT, canonical_alias TEXT );
CREATE UNIQUE INDEX room_state_room ON room_state(room_id);
CREATE TABLE room_stats_earliest_token ( room_id TEXT NOT NULL, token BIGINT NOT NULL );
CREATE UNIQUE INDEX room_stats_earliest_token_idx ON room_stats_earliest_token(room_id);
CREATE INDEX access_tokens_device_id ON access_tokens (user_id, device_id);
CREATE INDEX user_ips_device_id ON user_ips (user_id, device_id, last_seen);
CREATE INDEX event_contains_url_index ON events (room_id, topological_ordering, stream_ordering);
CREATE INDEX event_push_actions_u_highlight ON event_push_actions (user_id, stream_ordering);
CREATE INDEX event_push_actions_highlights_index ON event_push_actions (user_id, room_id, topological_ordering, stream_ordering);
CREATE INDEX current_state_events_member_index ON current_state_events (state_key);
CREATE INDEX device_inbox_stream_id_user_id ON device_inbox (stream_id, user_id);
CREATE INDEX device_lists_stream_user_id ON device_lists_stream (user_id, device_id);
CREATE INDEX local_media_repository_url_idx ON local_media_repository (created_ts);
CREATE INDEX user_ips_last_seen ON user_ips (user_id, last_seen);
CREATE INDEX user_ips_last_seen_only ON user_ips (last_seen);
CREATE INDEX users_creation_ts ON users (creation_ts);
CREATE INDEX event_to_state_groups_sg_index ON event_to_state_groups (state_group);
CREATE UNIQUE INDEX device_lists_remote_cache_unique_id ON device_lists_remote_cache (user_id, device_id);
CREATE UNIQUE INDEX device_lists_remote_extremeties_unique_idx ON device_lists_remote_extremeties (user_id);
CREATE UNIQUE INDEX user_ips_user_token_ip_unique_index ON user_ips (user_id, access_token, ip);

View File

@ -1,8 +0,0 @@
INSERT INTO appservice_stream_position (stream_ordering) SELECT COALESCE(MAX(stream_ordering), 0) FROM events;
INSERT INTO federation_stream_position (type, stream_id) VALUES ('federation', -1);
INSERT INTO federation_stream_position (type, stream_id) SELECT 'events', coalesce(max(stream_ordering), -1) FROM events;
INSERT INTO user_directory_stream_pos (stream_id) VALUES (0);
INSERT INTO stats_stream_pos (stream_id) VALUES (0);
INSERT INTO event_push_summary_stream_ordering (stream_ordering) VALUES (0);
-- device_max_stream_id is handled separately in 56/device_stream_id_insert.sql
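
The inserts above seed stream-position bookkeeping rows from whatever data already exists, falling back to a default when the source table is empty. A runnable sketch of the COALESCE(MAX(...), default) seeding pattern, assuming SQLite:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (stream_ordering INTEGER)")
conn.execute("CREATE TABLE appservice_stream_position (stream_ordering BIGINT)")
# Seed the position from existing events, defaulting to 0 on an empty table.
conn.execute(
    "INSERT INTO appservice_stream_position (stream_ordering) "
    "SELECT COALESCE(MAX(stream_ordering), 0) FROM events"
)
print(conn.execute("SELECT * FROM appservice_stream_position").fetchone())  # (0,)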

View File

@ -1,37 +0,0 @@
/* Copyright 2019 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE state_groups (
id BIGINT PRIMARY KEY,
room_id TEXT NOT NULL,
event_id TEXT NOT NULL
);
CREATE TABLE state_groups_state (
state_group BIGINT NOT NULL,
room_id TEXT NOT NULL,
type TEXT NOT NULL,
state_key TEXT NOT NULL,
event_id TEXT NOT NULL
);
CREATE TABLE state_group_edges (
state_group BIGINT NOT NULL,
prev_state_group BIGINT NOT NULL
);
CREATE INDEX state_group_edges_idx ON state_group_edges (state_group);
CREATE INDEX state_group_edges_prev_idx ON state_group_edges (prev_state_group);
CREATE INDEX state_groups_state_type_idx ON state_groups_state (state_group, type, state_key);

View File

@ -1,21 +0,0 @@
/* Copyright 2019 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE SEQUENCE state_group_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
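
For context, a sketch of how such a Postgres sequence is typically consumed. The DSN and values are illustrative, and this assumes the psycopg2 driver and a reachable database rather than Synapse's own storage layer:

import psycopg2  # assumed driver, not Synapse's engine wrapper

conn = psycopg2.connect("dbname=synapse")  # illustrative DSN
with conn, conn.cursor() as cur:
    # Atomically allocate the next state group ID from the sequence.
    cur.execute("SELECT nextval('state_group_id_seq')")
    (state_group_id,) = cur.fetchone()
    cur.execute(
        "INSERT INTO state_groups (id, room_id, event_id) VALUES (%s, %s, %s)",
        (state_group_id, "!room:example.com", "$event:example.com"),
    )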

View File

@ -27,10 +27,11 @@ from typing_extensions import Literal
from twisted.internet import defer
from twisted.internet.defer import Deferred
from twisted.python.failure import Failure
from twisted.test.proto_helpers import MemoryReactor
from twisted.web.resource import Resource
from synapse.api.errors import Codes
from synapse.api.errors import Codes, HttpResponseException
from synapse.events import EventBase
from synapse.http.types import QueryParams
from synapse.logging.context import make_deferred_yieldable
@ -247,6 +248,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
retry_on_dns_fail: bool = True,
max_size: Optional[int] = None,
ignore_backoff: bool = False,
follow_redirects: bool = False,
) -> "Deferred[Tuple[int, Dict[bytes, List[bytes]]]]":
"""A mock for MatrixFederationHttpClient.get_file."""
@ -257,10 +259,15 @@ class MediaRepoTests(unittest.HomeserverTestCase):
output_stream.write(data)
return response
def write_err(f: Failure) -> Failure:
f.trap(HttpResponseException)
output_stream.write(f.value.response)
return f
d: Deferred[Tuple[bytes, Tuple[int, Dict[bytes, List[bytes]]]]] = Deferred()
self.fetches.append((d, destination, path, args))
# Note that this callback changes the value held by d.
d_after_callback = d.addCallback(write_to)
d_after_callback = d.addCallbacks(write_to, write_err)
return make_deferred_yieldable(d_after_callback)
# Mock out the homeserver's MatrixFederationHttpClient
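
The switch from addCallback to addCallbacks above wires an error path into the mock. A self-contained sketch of that Twisted pattern, independent of the test:

from twisted.internet.defer import Deferred
from twisted.python.failure import Failure

d: Deferred = Deferred()

def on_success(value: int) -> int:
    return value * 2  # runs when d.callback(...) fires

def on_error(f: Failure) -> Failure:
    print("failed:", f.value)  # runs when d.errback(...) fires
    return f

# Unlike chaining addCallback then addErrback, addCallbacks attaches both
# handlers at the same point in the chain: exactly one of them runs.
d.addCallbacks(on_success, on_error)
d.callback(21)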
@ -316,10 +323,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
self.assertEqual(len(self.fetches), 1)
self.assertEqual(self.fetches[0][1], "example.com")
self.assertEqual(
self.fetches[0][2], "/_matrix/media/r0/download/" + self.media_id
self.fetches[0][2], "/_matrix/media/v3/download/" + self.media_id
)
self.assertEqual(
self.fetches[0][3], {"allow_remote": "false", "timeout_ms": "20000"}
self.fetches[0][3],
{"allow_remote": "false", "timeout_ms": "20000", "allow_redirect": "true"},
)
headers = {
@ -671,6 +679,52 @@ class MediaRepoTests(unittest.HomeserverTestCase):
[b"cross-origin"],
)
def test_unknown_v3_endpoint(self) -> None:
"""
If the v3 endpoint fails, try the r0 one.
"""
channel = self.make_request(
"GET",
f"/_matrix/media/v3/download/{self.media_id}",
shorthand=False,
await_result=False,
)
self.pump()
# We've made one fetch, to example.com, using the media URL, and asking
# the other server not to do a remote fetch
self.assertEqual(len(self.fetches), 1)
self.assertEqual(self.fetches[0][1], "example.com")
self.assertEqual(
self.fetches[0][2], "/_matrix/media/v3/download/" + self.media_id
)
# The result which says the endpoint is unknown.
unknown_endpoint = b'{"errcode":"M_UNRECOGNIZED","error":"Unknown request"}'
self.fetches[0][0].errback(
HttpResponseException(404, "NOT FOUND", unknown_endpoint)
)
self.pump()
# There should now be another request to the r0 URL.
self.assertEqual(len(self.fetches), 2)
self.assertEqual(self.fetches[1][1], "example.com")
self.assertEqual(
self.fetches[1][2], f"/_matrix/media/r0/download/{self.media_id}"
)
headers = {
b"Content-Length": [b"%d" % (len(self.test_image.data))],
}
self.fetches[1][0].callback(
(self.test_image.data, (len(self.test_image.data), headers))
)
self.pump()
self.assertEqual(channel.code, 200)
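
A sketch of the client-side fallback behaviour this test exercises, illustrative only and using requests rather than Synapse's federation client: try the v3 download endpoint, and retry on r0 only when the remote reports the endpoint as unrecognised.

import requests  # illustrative; Synapse uses its own federation HTTP client

def download_media(base_url: str, server_name: str, media_id: str) -> bytes:
    for version in ("v3", "r0"):
        resp = requests.get(
            f"{base_url}/_matrix/media/{version}/download/{server_name}/{media_id}"
        )
        if resp.status_code == 404:
            try:
                unrecognised = resp.json().get("errcode") == "M_UNRECOGNIZED"
            except ValueError:
                unrecognised = False
            if unrecognised:
                continue  # endpoint unknown on this server; try the older one
        resp.raise_for_status()
        return resp.content
    raise RuntimeError("no supported media download endpoint")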
class TestSpamCheckerLegacy:
"""A spam checker module that rejects all media that includes the bytes

View File

@ -133,7 +133,7 @@ class MediaRepoShardTestCase(BaseMultiWorkerStreamTestCase):
self.assertEqual(request.method, b"GET")
self.assertEqual(
request.path,
f"/_matrix/media/r0/download/{target}/{media_id}".encode(),
f"/_matrix/media/v3/download/{target}/{media_id}".encode(),
)
self.assertEqual(
request.requestHeaders.getRawHeaders(b"host"), [target.encode("utf-8")]

View File

@ -477,6 +477,33 @@ class ServerNoticeTestCase(unittest.HomeserverTestCase):
# second room has new ID
self.assertNotEqual(first_room_id, second_room_id)
@override_config(
{"server_notices": {"system_mxid_localpart": "notices", "auto_join": True}}
)
def test_auto_join(self) -> None:
"""
Tests that the user gets automatically joined to the notice room
when the `auto_join` setting is used.
"""
# user has no room memberships
self._check_invite_and_join_status(self.other_user, 0, 0)
# send server notice
server_notice_request_content = {
"user_id": self.other_user,
"content": {"msgtype": "m.text", "body": "test msg one"},
}
self.make_request(
"POST",
self.url,
access_token=self.admin_user_tok,
content=server_notice_request_content,
)
# user has joined the room
self._check_invite_and_join_status(self.other_user, 0, 1)
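
For reference, a sketch of driving the same admin endpoint outside the test harness; the homeserver URL, user, and token are placeholders:

import requests

resp = requests.post(
    "https://homeserver.example/_synapse/admin/v1/send_server_notice",
    headers={"Authorization": "Bearer <admin_access_token>"},  # placeholder
    json={
        "user_id": "@alice:homeserver.example",
        "content": {"msgtype": "m.text", "body": "Maintenance at 02:00 UTC"},
    },
)
resp.raise_for_status()  # with auto_join enabled, the user is joined, not invited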
@override_config({"server_notices": {"system_mxid_localpart": "notices"}})
def test_update_notice_user_name_when_changed(self) -> None:
"""

View File

@ -312,6 +312,166 @@ class ProfileTestCase(unittest.HomeserverTestCase):
)
self.assertEqual(channel.code, 200, channel.result)
@unittest.override_config(
{"experimental_features": {"msc4069_profile_inhibit_propagation": True}}
)
def test_msc4069_inhibit_propagation(self) -> None:
"""Tests to ensure profile update propagation can be inhibited."""
for prop in ["avatar_url", "displayname"]:
room_id = self.helper.create_room_as(tok=self.owner_tok)
channel = self.make_request(
"PUT",
f"/rooms/{room_id}/state/m.room.member/{self.owner}",
content={"membership": "join", prop: "mxc://my.server/existing"},
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
channel = self.make_request(
"PUT",
f"/profile/{self.owner}/{prop}?org.matrix.msc4069.propagate=false",
content={prop: "http://my.server/pic.gif"},
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
res = (
self._get_avatar_url()
if prop == "avatar_url"
else self._get_displayname()
)
self.assertEqual(res, "http://my.server/pic.gif")
channel = self.make_request(
"GET",
f"/rooms/{room_id}/state/m.room.member/{self.owner}",
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
self.assertEqual(channel.json_body.get(prop), "mxc://my.server/existing")
def test_msc4069_inhibit_propagation_disabled(self) -> None:
"""Tests to ensure profile update propagation inhibit flags are ignored when the
experimental flag is not enabled.
"""
for prop in ["avatar_url", "displayname"]:
room_id = self.helper.create_room_as(tok=self.owner_tok)
channel = self.make_request(
"PUT",
f"/rooms/{room_id}/state/m.room.member/{self.owner}",
content={"membership": "join", prop: "mxc://my.server/existing"},
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
channel = self.make_request(
"PUT",
f"/profile/{self.owner}/{prop}?org.matrix.msc4069.propagate=false",
content={prop: "http://my.server/pic.gif"},
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
res = (
self._get_avatar_url()
if prop == "avatar_url"
else self._get_displayname()
)
self.assertEqual(res, "http://my.server/pic.gif")
channel = self.make_request(
"GET",
f"/rooms/{room_id}/state/m.room.member/{self.owner}",
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
# The ?propagate=false should be ignored by the server because the config flag
# isn't enabled.
self.assertEqual(channel.json_body.get(prop), "http://my.server/pic.gif")
def test_msc4069_inhibit_propagation_default(self) -> None:
"""Tests to ensure profile update propagation happens by default."""
for prop in ["avatar_url", "displayname"]:
room_id = self.helper.create_room_as(tok=self.owner_tok)
channel = self.make_request(
"PUT",
f"/rooms/{room_id}/state/m.room.member/{self.owner}",
content={"membership": "join", prop: "mxc://my.server/existing"},
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
channel = self.make_request(
"PUT",
f"/profile/{self.owner}/{prop}",
content={prop: "http://my.server/pic.gif"},
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
res = (
self._get_avatar_url()
if prop == "avatar_url"
else self._get_displayname()
)
self.assertEqual(res, "http://my.server/pic.gif")
channel = self.make_request(
"GET",
f"/rooms/{room_id}/state/m.room.member/{self.owner}",
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
# No ?propagate flag was given, so the update should propagate by default.
self.assertEqual(channel.json_body.get(prop), "http://my.server/pic.gif")
@unittest.override_config(
{"experimental_features": {"msc4069_profile_inhibit_propagation": True}}
)
def test_msc4069_inhibit_propagation_like_default(self) -> None:
"""Tests to ensure clients can request explicit profile propagation."""
for prop in ["avatar_url", "displayname"]:
room_id = self.helper.create_room_as(tok=self.owner_tok)
channel = self.make_request(
"PUT",
f"/rooms/{room_id}/state/m.room.member/{self.owner}",
content={"membership": "join", prop: "mxc://my.server/existing"},
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
channel = self.make_request(
"PUT",
f"/profile/{self.owner}/{prop}?org.matrix.msc4069.propagate=true",
content={prop: "http://my.server/pic.gif"},
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
res = (
self._get_avatar_url()
if prop == "avatar_url"
else self._get_displayname()
)
self.assertEqual(res, "http://my.server/pic.gif")
channel = self.make_request(
"GET",
f"/rooms/{room_id}/state/m.room.member/{self.owner}",
access_token=self.owner_tok,
)
self.assertEqual(channel.code, 200, channel.result)
# The client requested ?propagate=true, so it should have happened.
self.assertEqual(channel.json_body.get(prop), "http://my.server/pic.gif")
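
A sketch of the client request these tests model, with placeholder homeserver, user, and token; the unstable query parameter only takes effect when the server enables msc4069_profile_inhibit_propagation:

import requests

resp = requests.put(
    "https://homeserver.example/_matrix/client/v3/profile"
    "/@alice:homeserver.example/displayname",
    params={"org.matrix.msc4069.propagate": "false"},  # inhibit propagation
    headers={"Authorization": "Bearer <access_token>"},  # placeholder
    json={"displayname": "Alice (away)"},
)
resp.raise_for_status()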
def _setup_local_files(self, names_and_props: Dict[str, Dict[str, Any]]) -> None:
"""Stores metadata about files in the database.