It turns out that `looping_call` does check the deferred returned by its
callback, and (at least in the case of `client_ips`) we were relying on this;
I broke it in #3604.
Update `run_as_background_process` to return the deferred, and make sure we
return it to `clock.looping_call`.
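The shape of the fix, as a minimal sketch (the logcontext and metrics machinery of the real `run_as_background_process` is elided, and `store.update_client_ips` is a stand-in name, not the real method):

```python
from twisted.internet import defer

def run_as_background_process(desc, func, *args, **kwargs):
    """Sketch: run `func` and return the deferred it produces, so that
    callers such as looping_call can observe completion and failures
    instead of having them silently dropped."""
    return defer.maybeDeferred(func, *args, **kwargs)

def start_client_ip_updates(clock, store):
    """Illustrative caller: store.update_client_ips is an assumed
    method standing in for the real client_ips batch update."""
    def update():
        # Returning the deferred lets looping_call wait for the
        # previous run to finish and log any failure.
        return run_as_background_process(
            "update_client_ips", store.update_client_ips,
        )

    clock.looping_call(update, 5000)
```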
In most cases, we limit the number of `prev_events` for a given event to 10.
This fixes a particular code path which created events with huge numbers of
`prev_events`.
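For illustration, the cap looks roughly like this (the constant name and the keep-the-first-N selection are assumptions, not necessarily the exact strategy used):

```python
MAX_PREV_EVENTS = 10

def limit_prev_events(prev_event_ids):
    """Cap how many prev_events a newly-created event references.

    Each entry bloats the event (and every event that later references
    it), so when a room has accumulated a huge number of forward
    extremities we only point at a subset of them.
    """
    if len(prev_event_ids) <= MAX_PREV_EVENTS:
        return prev_event_ids
    return prev_event_ids[:MAX_PREV_EVENTS]
```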
Add `db_conn` parameters to the `__init__` methods of the *Store classes, so that
they are all consistent, which makes the multiple inheritance work correctly
(and so that we can later extract mixins which can be used in the slaved stores).
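A sketch of the resulting pattern (the class names match Synapse's stores, but the bodies are illustrative): once every `__init__` has the same `(db_conn, hs)` signature, cooperative `super().__init__` calls can thread both arguments through the whole MRO, which is what makes the multiple inheritance work.

```python
class SQLBaseStore:
    def __init__(self, db_conn, hs):
        self.hs = hs
        # set up caches etc., possibly reading from db_conn

class RoomStore(SQLBaseStore):
    def __init__(self, db_conn, hs):
        super().__init__(db_conn, hs)
        # per-store initialisation that needs db_conn goes here

class EventsStore(SQLBaseStore):
    def __init__(self, db_conn, hs):
        super().__init__(db_conn, hs)

# Because the signatures are uniform, a class inheriting from many
# stores can simply pass db_conn straight up the chain:
class DataStore(RoomStore, EventsStore):
    def __init__(self, db_conn, hs):
        super().__init__(db_conn, hs)
```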
A few non-functional changes:
* A bunch of docstrings to document types
* Split `EventsStore._persist_events_txn` up a bit; hopefully it's now more
  readable.
* Rephrase `EventFederationStore._update_min_depth_for_room_txn` to avoid a
  mind-bending conditional.
* Rephrase the rejected/outlier conditional in `_update_outliers_txn` to avoid
  another mind-bending conditional (a sketch of the pattern follows this list).
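As an example of the kind of rephrasing, here is a paraphrase of the `_update_min_depth_for_room_txn` change (helper names follow Synapse's conventions, but treat this as a sketch rather than the exact code): an early return handles the nothing-to-do case instead of folding everything into one compound conditional.

```python
def _update_min_depth_for_room_txn(self, txn, room_id, depth):
    min_depth = self._get_min_depth_interaction(txn, room_id)

    # Early-out when there is already a min_depth at least this low.
    if min_depth is not None and depth >= min_depth:
        return

    self._simple_upsert_txn(
        txn,
        table="room_depth",
        keyvalues={"room_id": room_id},
        values={"min_depth": depth},
    )
```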
* `get_forward_extremeties_for_room` takes a numeric `stream_ordering`. We were
  passing a `RoomStreamToken`, which meant that it returned the *current*
  extremities rather than those corresponding to the `from_token`. However, in
  practice this was masked by the next bug:
* `get_state_ids_for_events` required a second (`types`) parameter; this meant
  that a `TypeError` was thrown and we ended up acting as though there was *no*
  prev state.
* `get_state_ids_for_events` actually returns a map from event_id to state
  dictionary; looking the state keys up directly in that map, rather than in
  each event's state dictionary, again meant that we acted as though there was
  no prev state. We now check whether each member's state has changed since
  *any* of the extremities (see the sketch after this list).
Also add/fix some comments.
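Putting the three fixes together, the corrected lookup is shaped roughly like this (the two store methods are the real ones named above; `from_token`, the member-type filter, and the surrounding function are illustrative):

```python
from twisted.internet import defer

@defer.inlineCallbacks
def _members_changed_since(store, room_id, from_token, current_state_ids):
    # Fix 1: pass the numeric stream ordering (from_token.stream), not
    # the RoomStreamToken itself, so we get the extremities as of the
    # token rather than the current ones.
    extremity_ids = yield store.get_forward_extremeties_for_room(
        room_id, from_token.stream,
    )

    # Fix 2: supply the second `types` argument, whose omission was
    # raising TypeError and silently yielding "no prev state".
    prev_state_map = yield store.get_state_ids_for_events(
        extremity_ids, types=[("m.room.member", None)],
    )

    # Fix 3: the result maps event_id -> state dict, so index into each
    # extremity's dict. A member counts as changed if their current
    # state differs from their state at *any* of the extremities.
    changed = set()
    for key, event_id in current_state_ids.items():
        if key[0] != "m.room.member":
            continue
        for prev_state_ids in prev_state_map.values():
            if prev_state_ids.get(key) != event_id:
                changed.add(key[1])
                break
    defer.returnValue(changed)
```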
Currently, we magically perform an extra database hit to find the inviter, and
use this to guess where we should send the event. Instead, fill in a valid
context so that other callers relying on the context actually have one.
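A minimal sketch of the shape of the change, assuming Synapse's `state_handler.compute_event_context` (which computes the state an event was created against); the wrapper function and its names are illustrative:

```python
from twisted.internet import defer

@defer.inlineCallbacks
def event_with_context(state_handler, event):
    # Instead of a separate DB lookup for the inviter (used only to
    # guess a destination), compute a real EventContext so downstream
    # callers that rely on `context` actually receive a valid one.
    context = yield state_handler.compute_event_context(event)
    defer.returnValue((event, context))
```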