In `MatrixFederationHttpClient._send_request()`, we make an HTTP request
using an `Agent`, wrap that request in a timeout and await the resulting
`Deferred`. On its own, the `Agent` performing the HTTP request
correctly stashes and restores the logging context while waiting.
The addition of the timeout introduces a path where the logging context
is not restored when execution resumes.
To address this, we wrap the timeout `Deferred` in a
`make_deferred_yieldable()` to stash the logging context and restore it
on completion of the `await`. However, this is not sufficient, since by
the time we construct the timeout `Deferred`, the `Agent` has already
stashed and cleared the logging context when using
`make_deferred_yieldable()` to produce its `Deferred` for the request.
Hence, we wrap the `Agent` request in a `run_in_background()` to "fork"
and preserve the logging context so that we can stash and restore it
when `await`ing the timeout `Deferred`.
This approach is similar to the one used with `defer.gatherResults`.
Note that the code is still not fully correct. When a timeout occurs,
the request keeps running in the background (existing behavior, which has
nothing to do with the new call to `run_in_background`) and may
restart the logging context after it has finished.
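A minimal sketch of the resulting pattern, assuming the helper names in `synapse.logging.context` and `synapse.util.async_helpers`; attribute names like `default_timeout_seconds` are placeholders:

```python
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.util.async_helpers import timeout_deferred


async def _send_request_sketch(self, method, uri):
    # "Fork" the logging context: the Agent's own stash/restore happens on
    # the forked context, leaving ours intact for the code below.
    request_deferred = run_in_background(self.agent.request, method, uri)

    # Wrap the forked Deferred in the timeout...
    request_deferred = timeout_deferred(
        request_deferred, timeout=self.default_timeout_seconds, reactor=self.reactor
    )

    # ...and stash/restore our logging context around the await.
    return await make_deferred_yieldable(request_deferred)
```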
Mostly this involves decorating a few Deferred declarations with extra type hints. We wrap the types in quotes to avoid runtime errors when running against older versions of Twisted that don't have generics on Deferred.
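For illustration, a quoted annotation keeps older Twisted versions (where `Deferred` is not subscriptable at runtime) from raising at import time:

```python
from twisted.internet import defer
from twisted.internet.defer import Deferred


def get_display_name() -> "Deferred[str]":
    # The quoted "Deferred[str]" is only evaluated by type checkers, so
    # Twisted versions without generics on Deferred are unaffected.
    return defer.succeed("alice")
```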
* switch from `types.CoroutineType` to `typing.Coroutine`
these should be semantically identical, and since `defer.ensureDeferred` is
defined to take a `typing.Coroutine`, this keeps mypy happy
* Fix some annotations on inlineCallbacks functions
* changelog
Improves type hints for:
* parse_{boolean,integer}
* parse_{boolean,integer}_from_args
* parse_json_{value,object}_from_request
And fixes any incorrect calls that resulted from unknown types.
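A rough sketch of what the tightened hints catch at call sites (the exact signatures live in `synapse.http.servlet`; the request object is assumed):

```python
from typing import Optional

from synapse.http.servlet import parse_boolean, parse_integer


def read_paging_args(request):
    # Previously these helpers were effectively untyped, so treating a
    # possibly-absent parameter as an int went unnoticed; with the hints
    # in place, mypy flags such calls.
    limit: Optional[int] = parse_integer(request, "limit")
    backwards = parse_boolean(request, "backwards", default=False)
    return limit, backwards
```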
* Drop Origin & Accept from Access-Control-Allow-Headers value
This change drops the Origin and Accept header names from the value of the
Access-Control-Allow-Headers response header sent by Synapse. Per the CORS
protocol, it’s not necessary or useful to include those header names.
Details:
Per the spec at https://fetch.spec.whatwg.org/#forbidden-header-name, Origin
is a “forbidden header name”: it is set by the browser itself, and frontend
JavaScript code is never allowed to set it.
So the value of Access-Control-Allow-Headers isn’t relevant to Origin, or
in general to other headers set by the browser itself: the browser never
consults the Access-Control-Allow-Headers value to confirm that it’s OK for
the request to include an Origin header.
And per the spec at https://fetch.spec.whatwg.org/#cors-safelisted-request-header,
Accept is a “CORS-safelisted request-header”, which means that browsers
allow requests to contain the Accept header regardless of whether the
Access-Control-Allow-Headers value contains "Accept".
So it’s unnecessary for the Access-Control-Allow-Headers to explicitly
include Accept. Browsers will not perform a CORS preflight for requests
containing an Accept request header.
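For illustration, the resulting header set might look like the following (the exact values are an assumption about the post-change state, not a copy of Synapse's code):

```python
def set_cors_headers(request):
    # Origin and Accept are intentionally absent from Allow-Headers: the
    # browser sets Origin itself, and Accept is CORS-safelisted.
    request.setHeader(b"Access-Control-Allow-Origin", b"*")
    request.setHeader(
        b"Access-Control-Allow-Methods", b"GET, HEAD, POST, PUT, DELETE, OPTIONS"
    )
    request.setHeader(
        b"Access-Control-Allow-Headers",
        b"X-Requested-With, Content-Type, Authorization, Date",
    )
```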
Related: https://github.com/matrix-org/matrix-doc/pull/3225
Signed-off-by: Michael[tm] Smith <mike@w3.org>
Add `federation_ip_range_whitelist`. This allows backwards compatibility if `federation_ip_range_blacklist` is set; otherwise `ip_range_whitelist` will be used for federation servers.
Signed-off-by: Michael Kutzner <1mikure@gmail.com>
Instead of parsing the full response to `/send_join` into Python objects (which can be huge for large rooms) and *then* parsing that into events, we instead use ijson to stream parse the response directly into `EventBase` objects.
Part of #9744
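A minimal sketch of the streaming approach using ijson's push interface (the `"state.item"` prefix and the plain-dict results are illustrative; the real parser builds `EventBase` objects):

```python
import ijson


def _collect(into):
    # Generator target: ijson .send()s each fully parsed item into it.
    while True:
        into.append((yield))


def parse_send_join_state(chunks):
    events = []
    target = _collect(events)
    next(target)  # prime the generator
    # Emit each element of the "state" array as soon as it is parsed,
    # instead of materialising the whole response body in memory.
    parser = ijson.items_coro(target, "state.item")
    for chunk in chunks:
        parser.send(chunk)
    parser.close()
    return events
```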
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as python 3 automatically reads source code as utf-8 now.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
### Changes proposed in this PR
- Add support for the `no_proxy` and `NO_PROXY` environment variables
- Internally rely on urllib's [`proxy_bypass_environment`](bdb941be42/Lib/urllib/request.py#L2519)
- Extract env variables using urllib's `getproxies`/[`getproxies_environment`](bdb941be42/Lib/urllib/request.py#L2488), which supports lowercase + uppercase, preferring lowercase, except for `HTTP_PROXY` in a CGI environment (sketched below)
This does contain behaviour changes for consumers, so they are called out here:
- `no_proxy`/`NO_PROXY` is now respected
- lowercase `https_proxy` is now allowed and takes precedence over `HTTPS_PROXY`
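For reference, the stdlib helpers being relied on behave roughly like this (the host and proxy values are examples):

```python
from urllib.request import getproxies_environment, proxy_bypass_environment

# Reads both lowercase and uppercase variants, preferring lowercase
# (except HTTP_PROXY in a CGI environment, per the stdlib's own rules).
proxies = getproxies_environment()  # e.g. {"https": "http://proxy:8888", "no": "..."}

host = "internal.example.com"
if proxy_bypass_environment(host, proxies):
    print(f"{host} matches no_proxy/NO_PROXY; connecting directly")
else:
    print(f"{host} will go via {proxies.get('https')}")
```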
Related to #9306 which also uses `ProxyAgent`
Signed-off-by: Timothy Leung <tim95@hotmail.co.uk>
This reduces the memory usage of previewing media files which
end up larger than the `max_spider_size` by avoiding buffering
content internally in treq.
It also checks the `Content-Length` header in additional places
instead of streaming the content to check the body length.
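A hedged sketch of the early `Content-Length` check (the function name and error handling are illustrative):

```python
from twisted.web.iweb import IResponse


def check_content_length(response: IResponse, max_spider_size: int) -> None:
    # Reject oversized downloads up front, before streaming any content.
    lengths = response.headers.getRawHeaders(b"Content-Length")
    if not lengths:
        return
    try:
        declared = int(lengths[0])
    except ValueError:
        return
    if declared > max_spider_size:
        raise ValueError("Requested file is too large (%d bytes)" % declared)
```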
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](80d6dc9783/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
There are going to be a couple of paths to get to the final step of SSO registration, and I want the URL in the browser to be consistent. So, let's move the final step onto a separate path, which we redirect to.
A reactor was being passed instead of a whitelist for the BlacklistingAgentWrapper
used by the WellKnownResolver. This could cause exceptions when attempting to
connect to IP addresses that are blacklisted, but in reality this did not have any
observable effect, since this code is not used for IP literals.
SynapseRequest is in danger of becoming a bit of a dumping-ground for "useful stuff relating to Requests",
which isn't really its intention (its purpose is to override render, finished and connectionLost to set up the
LoggingContext and write the right entries to the request log).
Putting utility functions inside SynapseRequest means that lots of our code ends up requiring a
SynapseRequest when there is nothing synapse-specific about the Request at all, and any old
twisted.web.iweb.IRequest will do. This increases code coupling and makes testing more difficult.
In short: move get_user_agent out to a utility function.
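A minimal sketch of the resulting utility (the name and default handling are assumptions; the point is that it accepts any `twisted.web.iweb.IRequest`):

```python
from twisted.web.iweb import IRequest


def get_request_user_agent(request: IRequest, default: str = "") -> str:
    """Return the request's User-Agent header, or `default` if absent."""
    headers = request.requestHeaders.getRawHeaders(b"User-Agent")
    if not headers:
        return default
    return headers[-1].decode("ascii", errors="replace")
```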
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:
* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events
The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
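A sketch of that fallback, with key names taken from the description and the surrounding config machinery elided:

```python
def read_ip_blacklists(config: dict):
    ip_range_blacklist = config.get("ip_range_blacklist", [])
    # The legacy setting, if present, still governs federation and
    # identity-server traffic; everything else uses the new setting.
    federation_ip_range_blacklist = config.get(
        "federation_ip_range_blacklist", ip_range_blacklist
    )
    return ip_range_blacklist, federation_ip_range_blacklist
```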
We do it this way round so that only the "owner" can delete the access token (i.e. `/logout/all` by the "owner" also deletes that token, but `/logout/all` by the "target user" doesn't).
A future PR will add an API for creating such a token.
When the target user and authenticated entity are different, the `Processed request` log line will include a `{@admin:server as @bob:server} ...` prefix. I'm not convinced by that format (especially since it adds spaces, making it harder to use `cut -d ' '` to chop off the start of log lines). Suggestions welcome.
Not being able to serialise `frozendicts` is fragile, and it's annoying to have
to think about which serialiser you want. There's no real downside to
supporting frozendicts, so let's just have one json encoder.
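A sketch of a single encoder that also copes with frozendicts, assuming the `frozendict` package (the separator choice is illustrative):

```python
import json

from frozendict import frozendict


def _default(obj):
    if isinstance(obj, frozendict):
        # Unwrap to a plain dict so the standard encoder can serialise it.
        return dict(obj)
    raise TypeError(
        "Object of type %s is not JSON serializable" % type(obj).__name__
    )


json_encoder = json.JSONEncoder(default=_default, separators=(",", ":"))

print(json_encoder.encode({"content": frozendict({"body": "hello"})}))
```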
This allows trailing commas in multi-line arg lists.
Minor, but we might as well keep our formatting current with regard to
our minimum supported Python version.
Signed-off-by: Dan Callahan <danc@element.io>
* Remove `on_timeout_cancel` from `timeout_deferred`
The `on_timeout_cancel` param to `timeout_deferred` wasn't always called on a
timeout (in particular if the canceller raised an exception), so it was
unreliable. It was also only used in one place, and to be honest it's easier to
do what it does a different way.
* Fix handling of connection timeouts in outgoing http requests
Turns out that if we get a timeout during connection, then a different
exception is raised, which wasn't always handled correctly.
To fix it, catch the exception in SimpleHttpClient and turn it into a
RequestTimedOutError (which is already a documented exception).
Also add a description to RequestTimedOutError so that we can see which stage
it failed at.
* Fix incorrect handling of timeouts reading federation responses
This was trapping the wrong sort of TimeoutError, so was never being hit.
The effect was relatively minor, but we should fix this so that it does the
expected thing.
* Fix inconsistent handling of `timeout` param between methods
`get_json`, `put_json` and `delete_json` were applying a different timeout to
the response body than `post_json`; bring them into line and add tests.
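A hedged sketch of the connection-timeout conversion (the exact exception types Synapse catches may differ):

```python
from twisted.internet import defer
from twisted.internet.error import ConnectingCancelledError

from synapse.api.errors import RequestTimedOutError


async def request_with_timeout(do_request):
    try:
        return await do_request()
    except (ConnectingCancelledError, defer.TimeoutError) as e:
        # A timeout while establishing the connection surfaces as a
        # different exception from one while reading the response;
        # normalise it into the documented error, noting the stage.
        raise RequestTimedOutError("Timed out connecting to remote server") from e
```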
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
Slots use less memory (and attribute access is faster), while slightly
limiting the flexibility of the class attributes. This focuses on objects
which are instantiated "often" and for short periods of time.
c.f. #8021
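For illustration, the attrs-based form of this (the class and fields are hypothetical, not a real Synapse class):

```python
import attr


@attr.s(slots=True, frozen=True)
class _StateDelta:
    # With slots=True there is no per-instance __dict__, which saves memory
    # for objects created in large numbers and discarded quickly.
    event_id = attr.ib(type=str)
    state_group = attr.ib(default=None)
```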
A lot of the code here is to change the `Completed 200 OK` logging to include the request URI so that we can drop the `Sending request...` log line.
Some notes:
1. We won't log retries, which may be confusing considering the time taken log line includes retries and sleeps.
2. The `_send_request_with_optional_trailing_slash` will always be logged *without* the forward slash, even if it succeeded only with the forward slash.
This ended up being a bit more invasive than I'd hoped for (not helped by
generic_worker duplicating some of the code from homeserver), but hopefully
it's an improvement.
The idea is that, rather than storing unstructured `dict`s in the config for
the listener configurations, we instead parse it into a structured
`ListenerConfig` object.
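A rough sketch of the structured form (field names here are illustrative; the real class carries more options):

```python
import attr


@attr.s(frozen=True)
class ListenerConfig:
    port = attr.ib(type=int)
    bind_addresses = attr.ib(type=list)
    type = attr.ib(type=str, default="http")
    tls = attr.ib(type=bool, default=False)


def parse_listener_def(listener: dict) -> ListenerConfig:
    # Validate and convert the raw YAML dict once, at config-load time,
    # instead of passing unstructured dicts around the codebase.
    return ListenerConfig(
        port=listener["port"],
        bind_addresses=listener.get("bind_addresses", ["::1", "127.0.0.1"]),
        type=listener.get("type", "http"),
        tls=listener.get("tls", False),
    )
```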
* Expose `return_html_error`, and allow it to take a Jinja2 template instead of a raw string
* Clean up exception handling in SAML2ResponseResource
* use the existing code in `return_html_error` instead of re-implementing it
(giving it a jinja2 template rather than inventing a new form of template)
* do the exception-catching in the REST layer rather than in the handler
layer, to make sure we catch all exceptions.
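A sketch of the REST-layer handling, assuming `return_html_error` accepts a `jinja2.Template` as described (the template content and handler names are illustrative):

```python
import jinja2
from twisted.python.failure import Failure

from synapse.http.server import return_html_error

ERROR_TEMPLATE = jinja2.Template("<html><body>{{ msg }} ({{ code }})</body></html>")


def handle_saml_response(request, process_response):
    try:
        process_response(request)
    except Exception:
        # Catch everything at the REST layer so no exception escapes
        # without a rendered error page; Failure() captures the current one.
        return_html_error(Failure(), request, ERROR_TEMPLATE)
```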
Splitting based on the response code means we can avoid logging information here that duplicates line 164, while still logging at INFO if we don't get a good response and need to retry.
* Pull Sentinel out of LoggingContext
... and drop a few unnecessary references to it
* Factor out LoggingContext.current_context
move `current_context` and `set_context` out to top-level functions.
Mostly this means that I can more easily trace what's actually referring to
LoggingContext, but I think it's generally neater.
* move copy-to-parent into `stop`
this really just makes `start` and `stop` more symmetric. It also means that it
behaves correctly if you manually `set_log_context` rather than using the
context manager.
* Replace `LoggingContext.alive` with `finished`
Turn `alive` into `finished` and make it a bit better defined.
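For illustration, usage after the refactor looks roughly like this (function names follow the description above; check `synapse.logging.context` for the exact spelling):

```python
from synapse.logging.context import LoggingContext, current_context


def do_work():
    with LoggingContext("my-request"):
        # current_context() is now a module-level function rather than a
        # classmethod on LoggingContext.
        ctx = current_context()
        print("running in context %s" % (ctx,))
```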
A lot of the things we log at INFO are now a bit superfluous, so let's
make them DEBUG logs to reduce the amount we log by default.
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Brendan Abolivier <github@brendanabolivier.com>
The `http_proxy` and `HTTPS_PROXY` env vars can be set to a `host[:port]` value which should point to a proxy.
The address of the proxy should be excluded from IP blacklists such as the `url_preview_ip_range_blacklist`.
The proxy will then be used for
* push
* url previews
* phone-home stats
* recaptcha validation
* CAS auth validation
It will *not* be used for:
* Application Services
* Identity servers
* Outbound federation
* In worker configurations, connections from workers to masters
Fixes #4198.
Replace `client_secret` query parameter values with `<redacted>` in the logs. Prevents a scenario where a MITM of server traffic can hoard 3pids on their account.
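A hedged sketch of the redaction (the pattern and helper name are illustrative, not Synapse's actual implementation):

```python
import re

_CLIENT_SECRET_RE = re.compile(r"(client_secret=)[^&\s]+")


def redact_uri(uri: str) -> str:
    # Replace the client_secret value before the URI reaches the request log.
    return _CLIENT_SECRET_RE.sub(r"\1<redacted>", uri)


print(redact_uri("/_matrix/client/r0/account/3pid/email/requestToken?client_secret=abc123&sid=1"))
# -> ...requestToken?client_secret=<redacted>&sid=1
```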