Commit Graph

27 Commits (608f66f978e02e583f1c42c912b1daa22fbc0bad)

Author SHA1 Message Date
Jonathan de Jong 1cc512909c
Have `Follow` activities bypass availability (#27586)
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
2023-10-27 14:55:00 +00:00
Claire ddde4e0d95
Change `ActivityPub::DeliveryWorker` retries to be spread out more (#21956) 2023-03-03 21:08:22 +01:00
Claire 8cf7006d4e
Refactor ActivityPub handling to prepare for non-Account actors (#19212)
* Move ActivityPub::FetchRemoteAccountService to ActivityPub::FetchRemoteActorService

ActivityPub::FetchRemoteAccountService is kept as a wrapper for when the actor is
specifically required to be an Account

* Refactor SignatureVerification to allow non-Account actors

* fixup! Move ActivityPub::FetchRemoteAccountService to ActivityPub::FetchRemoteActorService

* Refactor ActivityPub::FetchRemoteKeyService to potentially return non-Account actors

* Refactor inbound ActivityPub payload processing to accept non-Account actors

* Refactor inbound ActivityPub processing to accept activities relayed through non-Account

* Refactor how Account key URIs are built

* Refactor Request and drop unused key_id_format parameter

* Rename ActivityPub::Dereferencer `signature_account` to `signature_actor`
2022-09-21 22:45:57 +02:00
Claire 5efb1ff337
Fix followers synchronization mechanism not working when URI has empty path (#16510)
* Fix followers synchronization mechanism not working when URI has empty path

To my knowledge, there is no current implementation on the fediverse
that can use bare domains (e.g., actor is at https://example.org instead of
something like https://example.org/actor) that also plans to support the
followers synchronization mechanism. However, Mastodon's current implementation
would exclude such accounts from the followers list.

Also add tests and rename them to reflect the proper method names.

* Move url prefix regexp to its own constant
2021-08-11 17:48:42 +02:00
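As a purely illustrative sketch of the URI-prefix matching this commit describes (the constant and helper names here are hypothetical, not the ones actually introduced), the prefix has to stop at the host so that a bare-domain actor with an empty path still matches:

```
# Hypothetical constant and helper sketching the prefix matching described
# above; a bare domain such as https://example.org yields the same kind of
# prefix as https://example.org/actor.
URL_PREFIX_RE = %r{\A(https?://[^/]+)}

def uri_prefix(actor_uri)
  actor_uri[URL_PREFIX_RE, 1]
end

uri_prefix('https://example.org/actor') # => "https://example.org"
uri_prefix('https://example.org')       # => "https://example.org"
```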
ThibG ca56527140
Add follower synchronization mechanism (#14510)
* Add support for followers synchronization on the receiving end

Check the `collectionSynchronization` attribute on `Create` and `Announce`
activities and synchronize followers from provided collection if possible.

* Add tests for followers synchronization on the receiving end

* Add support for follower synchronization on the sender's end

* Add tests for the sending end

* Switch from AS attributes to HTTP header

Replace the custom `collectionSynchronization` ActivityStreams attribute by
an HTTP header (`X-AS-Collection-Synchronization`) with the same syntax as
the `Signature` header and the following fields:
- `collectionId` to specify which collection to synchronize
- `digest` for the SHA256 hex-digest of the list of followers known on the
   receiving instance (where “receiving instance” is determined by accounts
   sharing the same host name for their ActivityPub actor `id`)
- `url` of a collection that should be fetched by the instance actor

Internally, move away from the webfinger-based `domain` attribute and use
account `uri` prefix to group accounts.

* Add environment variable to disable followers synchronization

Since the whole mechanism relies on some new preconditions that, in some
extremely rare cases, might not be met, add an environment variable
(DISABLE_FOLLOWERS_SYNCHRONIZATION) to disable the mechanism altogether and
avoid followers being incorrectly removed.

The current conditions are:
1. all managed accounts' actor `id` and inbox URL have the same URI scheme and
   netloc.
2. all accounts whose actor `id` or inbox URL share the same URI scheme and
   netloc as a managed account must be managed by the same Mastodon instance
   as well.

As far as Mastodon is concerned, breaking those preconditions requires extensive
configuration changes in the reverse proxy and might also cause other issues.

Therefore, this environment variable provides a way out for people with highly
unusual configurations, and can be safely ignored for the overwhelming majority
of Mastodon administrators.

* Only set follower synchronization header on non-public statuses

This is to avoid unnecessary computations and allow Follow-related
activities to be handled by the usual codepath instead of going through
the synchronization mechanism (otherwise, any Follow/Undo/Accept activity
would trigger the synchronization mechanism even if processing the activity
itself would be enough to re-introduce synchronization).

* Change how ActivityPub::SynchronizeFollowersService handles follow requests

If the remote lists a local follower which we only know has sent a follow
request, consider the follow request as accepted instead of sending an Undo.

* Integrate review feedback

- rename X-AS-Collection-Synchronization to Collection-Synchronization
- various minor refactoring and code style changes

* Only select required fields when computing followers_hash

* Use actor URI rather than webfinger domain in synchronization endpoint

* Change hash computation to be an XOR of individual hashes

Makes it much easier to be memory-efficient, and avoids sorting discrepancy issues.

* Marginally improve followers_hash computation speed

* Further improve hash computation performances by using pluck_each
2020-10-21 18:04:09 +02:00
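A rough sketch of the digest computation described in the commit above, assuming it is a plain SHA256 over each follower's actor URI (the exact input is not spelled out here): XOR-ing the per-follower hashes makes the result independent of ordering and keeps memory use constant. The header shape shown at the end is likewise illustrative, using the field names from the commit message.

```
require 'digest'

# Sketch only: XOR the SHA256 digest of each follower's actor URI byte by
# byte. Order does not matter, so there are no sorting discrepancies, and
# memory use stays constant regardless of the number of followers.
def followers_hash(follower_uris)
  digest = Array.new(32, 0)

  follower_uris.each do |uri|
    Digest::SHA256.digest(uri).unpack('C*').each_with_index do |byte, i|
      digest[i] ^= byte
    end
  end

  digest.pack('C*').unpack1('H*')
end

# Illustrative shape of the header introduced above (digest value elided):
# Collection-Synchronization: collectionId="https://example.com/users/alice/followers",
#   digest="…", url="https://example.com/users/alice/followers_synchronization"
```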
Eugen Rochko e9ecbca70d
Fix error within error when limiting backtrace to 3 lines (#13120)
Fix #13086, close #13113
2020-05-10 10:30:27 +02:00
Takeshi Umeda 04c8d825f6
Fix DeliveryWorker not to call failure_tracker when inbox_url is unavailable (#13482) 2020-04-16 08:04:10 +02:00
Eugen Rochko 5edff32733
Change delivery failure tracking to work with hostnames instead of URLs (#13437) 2020-04-15 20:33:24 +02:00
Daigo 3 Dango e9ea09d173 Suppress backtrace when delivering toots (#12798)
This suppresses irrelevant backtraces from errors raised when delivering
toots to remote servers. These errors are usually outside the local
server's control, and their backtraces don't provide much information.

This is similar to https://github.com/tootsuite/mastodon/pull/5174
and shortens backtraces like the ones below:

```
WARN: Mastodon::UnexpectedResponseError: https://example.com/inbox returned code 523
WARN: app/workers/activitypub/delivery_worker.rb:48:in `block (3 levels) in perform_request'
app/lib/request.rb:75:in `perform'
app/workers/activitypub/delivery_worker.rb:47:in `block (2 levels) in perform_request'
app/lib/request_pool.rb:53:in `use'
app/lib/request_pool.rb:108:in `block (2 levels) in with'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/notifications.rb:170:in `instrument'
app/lib/request_pool.rb:107:in `block in with'
app/lib/connection_pool/shared_connection_pool.rb:21:in `block (2 levels) in with'
app/lib/connection_pool/shared_connection_pool.rb:20:in `handle_interrupt'
app/lib/connection_pool/shared_connection_pool.rb:20:in `block in with'
app/lib/connection_pool/shared_connection_pool.rb:16:in `handle_interrupt'
app/lib/connection_pool/shared_connection_pool.rb:16:in `with'
app/lib/request_pool.rb:106:in `with'
app/workers/activitypub/delivery_worker.rb:46:in `block in perform_request'
vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:51:in `run_code'
vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:42:in `run_yellow'
vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:24:in `run'
app/workers/activitypub/delivery_worker.rb:57:in `perform_request'
app/workers/activitypub/delivery_worker.rb:25:in `perform'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:192:in `execute_job'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:165:in `block (2 levels) in process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/collectors/sidekiq.rb:31:in `block in call'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/statsd/publisher.rb:27:in `statsd_time'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/collectors/sidekiq.rb:30:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
app/lib/sidekiq_error_handler.rb:5:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/scout_apm-2.3.0.pre3/lib/scout_apm/background_job_integrations/sidekiq.rb:69:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-unique-jobs-6.0.18/lib/sidekiq_unique_jobs/server/middleware.rb:29:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:133:in `invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:164:in `block in process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:137:in `block (6 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:109:in `local'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:136:in `block (5 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/rails.rb:43:in `block in call'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/execution_wrapper.rb:87:in `wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/reloader.rb:73:in `block in wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/execution_wrapper.rb:87:in `wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/reloader.rb:72:in `wrap'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/rails.rb:42:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:132:in `block (4 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:250:in `stats'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:127:in `block (3 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_logger.rb:8:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:126:in `block (2 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:74:in `global'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:125:in `block in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:48:in `with_context'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:42:in `with_job_hash_context'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:124:in `dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:163:in `process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:83:in `process_one'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:71:in `run'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:16:in `watchdog'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:25:in `block in safe_thread'
```

```
WARN: Stoplight::Error::RedLight: https://example.com/inbox
WARN: vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:46:in `run_red'
vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:25:in `run'
app/workers/activitypub/delivery_worker.rb:57:in `perform_request'
app/workers/activitypub/delivery_worker.rb:25:in `perform'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:192:in `execute_job'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:165:in `block (2 levels) in process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/collectors/sidekiq.rb:31:in `block in call'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/statsd/publisher.rb:27:in `statsd_time'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/collectors/sidekiq.rb:30:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
app/lib/sidekiq_error_handler.rb:5:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/scout_apm-2.3.0.pre3/lib/scout_apm/background_job_integrations/sidekiq.rb:69:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-unique-jobs-6.0.18/lib/sidekiq_unique_jobs/server/middleware.rb:29:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:133:in `invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:164:in `block in process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:137:in `block (6 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:109:in `local'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:136:in `block (5 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/rails.rb:43:in `block in call'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/execution_wrapper.rb:87:in `wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/reloader.rb:73:in `block in wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/execution_wrapper.rb:87:in `wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/reloader.rb:72:in `wrap'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/rails.rb:42:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:132:in `block (4 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:250:in `stats'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:127:in `block (3 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_logger.rb:8:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:126:in `block (2 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:74:in `global'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:125:in `block in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:48:in `with_context'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:42:in `with_job_hash_context'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:124:in `dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:163:in `process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:83:in `process_one'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:71:in `run'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:16:in `watchdog'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:25:in `block in safe_thread'
```
2020-01-11 02:15:03 +01:00
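A hedged sketch of the suppression idea, not the exact handler code (the constant and helper name are assumptions): trim the exception's backtrace before re-raising, so the warning only shows the first few application frames instead of the full middleware stack visible in the traces above.

```
BACKTRACE_LIMIT = 3

# Sketch only: keep the first few frames of the original backtrace and
# re-raise, so Sidekiq logs the error without the middleware noise.
def limit_backtrace_and_raise(exception)
  exception.set_backtrace(exception.backtrace.first(BACKTRACE_LIMIT))
  raise exception
end
```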
Eugen Rochko 5d3feed191
Refactor fetching of remote resources (#11251) 2019-07-10 18:59:28 +02:00
Eugen Rochko 406b46395d
Fix URLs appearing twice in errors of ActivityPub::DeliveryWorker (#11231) 2019-07-07 03:37:01 +02:00
Eugen Rochko bc60d794f8
Change ActivityPub::DeliveryWorker to not retry HTTP 501 errors (#11233) 2019-07-02 00:59:53 +02:00
Eugen Rochko 0d9ffe56fb
Add request pool to improve delivery performance (#10353)
* Add request pool to improve delivery performance

Fix #7909

* Ensure connection is closed when exception interrupts execution

* Remove Timeout#timeout from socket connection

* Fix infinite retrial loop on HTTP::ConnectionError

* Close sockets on failure, reduce idle time to 90 seconds

* Add MAX_REQUEST_POOL_SIZE option to limit concurrent connections to the same server

* Use a shared pool size, 512 by default, to stay below open file limit

* Add some tests

* Add more tests

* Reduce MAX_IDLE_TIME from 90 to 30 seconds, reap every 30 seconds

* Use a shared pool that returns preferred connection but re-purposes other ones when needed

* Fix wrong connection being returned on subsequent calls within the same thread

* Reduce mutex calls on flushes from 2 to 1 and add test for reaping
2019-07-02 00:34:38 +02:00
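The pool itself is only sketched below under assumptions (class, constant and method names are hypothetical, not Mastodon's actual RequestPool): the core idea is one persistent connection per remote site, dropped again after the idle timeout mentioned above.

```
require 'http'

# Simplified sketch of the request-pool idea: reuse one persistent HTTP
# connection per remote site and reap connections idle longer than
# MAX_IDLE_TIME. Real code would also cap the pool size and handle
# connections still in use by other threads.
class SimpleRequestPool
  MAX_IDLE_TIME = 30 # seconds, matching the value mentioned above

  Entry = Struct.new(:client, :last_used_at)

  def initialize
    @entries = {}
    @mutex   = Mutex.new
  end

  def with(site)
    entry = @mutex.synchronize do
      reap_idle!
      @entries[site] ||= Entry.new(HTTP.persistent(site), Time.now)
    end

    yield entry.client
  ensure
    entry.last_used_at = Time.now if entry
  end

  private

  def reap_idle!
    @entries.delete_if do |_site, entry|
      if Time.now - entry.last_used_at > MAX_IDLE_TIME
        entry.client.close
        true
      end
    end
  end
end
```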
ThibG 9efcca3c54 Retry ActivityPub inbox delivery on HTTP 401 and 408 errors (#10812)
HTTP 401 responses returned by Mastodon's inbox controller may
be temporary if, for instance, the requesting user's actor/key json
could not be retrieved in a timely fashion. This change allows retries
instead of dropping the message entirely.

Also added HTTP 408 as that error is by nature temporary.
2019-05-23 15:00:30 +02:00
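A hedged sketch of the resulting retry decision (the exact status groupings are an assumption layered on what these commits state: 401 and 408 retried, most other 4xx not): the worker raises for retryable failures so Sidekiq re-enqueues the job, and returns quietly for unsalvageable ones.

```
# Sketch only; helper names and status groupings are assumptions.
def response_successful?(code)
  (200...300).cover?(code)
end

def response_error_unsalvageable?(code)
  # Most 4xx responses will not get better on retry, but 401 and 408 can be
  # transient (e.g. the remote could not fetch our key in time), so they are
  # excluded here and therefore retried; 429 is rate limiting and also retried.
  (400...500).cover?(code) && ![401, 408, 429].include?(code)
end

def handle_response!(response)
  return if response_successful?(response.code) || response_error_unsalvageable?(response.code)

  raise "Unexpected response: #{response.code}"
end
```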
Eugen Rochko 11955600ad
Skip deliveries to inboxes that have already been marked as unavailable (#9358) 2018-11-27 19:15:08 +01:00
Eugen Rochko cabdbb7f9c
Add CLI task for rotating keys (#8466)
* If an Update is signed with known key, skip re-following procedure

Because it means the remote actor did *not* lose their database

* Add CLI method for rotating keys

    bin/tootctl accounts rotate [USERNAME]

Generates a new RSA key per account and sends out an Update activity
signed with the old key.

* Key rotation: Space out Update fan-outs every 5 minutes per 1000 accounts

* Skip suspended accounts in key rotation
2018-08-26 20:21:03 +02:00
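A hedged sketch of one rotation step as described above (the helper and the fan-out worker name are hypothetical, not the actual tootctl code): generate a fresh RSA keypair, keep the old private key to sign the outgoing Update, and stagger fan-outs by 5 minutes per 1000 accounts.

```
require 'openssl'

# Sketch only: names and signatures are assumptions, not the real CLI code.
def rotate_key!(account, index)
  old_private_key = account.private_key

  keypair = OpenSSL::PKey::RSA.new(2048)
  account.update!(private_key: keypair.to_pem, public_key: keypair.public_key.to_pem)

  # Space out Update fan-outs: 5 minutes per 1000 accounts processed so far.
  delay_seconds = (index / 1000) * 5 * 60
  DistributeUpdateWorker.perform_in(delay_seconds, account.id, sign_with: old_private_key) # hypothetical worker
end
```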
takayamaki 587da93152 Check HTTP status code with a range (#7544) 2018-05-19 14:47:44 +02:00
Eugen Rochko 97f02f2c08
Do not raise delivery failure on 4xx errors, increase stoplight threshold (#7541)
* Do not raise delivery failure on 4xx errors, increase stoplight threshold

Raise the Stoplight failure threshold from 3 to 10
Status code 429 will still raise a failure and get retried

* Oops
2018-05-19 00:23:19 +02:00
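A sketch of the circuit breaker around delivery, using the stoplight gem seen in the traces above; the light name, cool-off time and delivery stub are illustrative, while the threshold of 10 is the value this commit raises it to.

```
require 'stoplight'

def perform_delivery(inbox_url)
  # placeholder for the actual signed POST to the inbox
end

inbox_url = 'https://example.com/inbox'

# After 10 consecutive failures the light turns red and further runs raise
# Stoplight::Error::RedLight without attempting delivery.
light = Stoplight("activitypub:delivery:#{inbox_url}") { perform_delivery(inbox_url) }
          .with_threshold(10)
          .with_cool_off_time(60)

light.run
```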
Eugen Rochko d4de2239b0
Add a circuit breaker for ActivityPub deliveries (#7053) 2018-04-07 21:36:58 +02:00
Akihiko Odaki 54b273bf99 Close http connection in perform method of Request class (#6889)
HTTP connections must be explicitly closed in many cases, and letting
the perform method close connections makes its callers less redundant and
prevents them from forgetting to close connections.
2018-03-24 12:49:54 +01:00
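A simplified sketch of the pattern described above (this is not the actual Request class): perform yields the response to the caller and always closes the underlying connection afterwards, so callers cannot forget to.

```
require 'http'

# Sketch only: a tiny request wrapper whose perform method owns the
# connection lifecycle.
class TinyRequest
  def initialize(url)
    @url = url
  end

  def perform
    client   = HTTP.timeout(connect: 5, read: 10)
    response = client.get(@url)
    yield response if block_given?
  ensure
    # Close the connection whether the block succeeded or raised.
    response&.connection&.close
  end
end

# Usage: TinyRequest.new('https://example.com/inbox').perform { |res| puts res.status }
```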
ThibG a0de3222dd Retry delivering toots over ActivityPub for about 2 days (#6298)
Currently, Mastodon will retry delivering toots for a bit over 1 hour.
This is a very short timespan when considering private and direct toots, which
cannot be seen by the recipient at all after the delivery attempts have failed.

Ideally, private and direct toots should have a different number of retries,
but I do not know how to do that.
2018-01-19 15:49:48 +01:00
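A back-of-the-envelope illustration of how a retry count maps to roughly two days (the worker name, queue name and retry count below are assumptions, not taken from the commit): with Sidekiq's default backoff of roughly (count**4) + 15 seconds plus jitter, 16 retries add up to about two days.

```
require 'sidekiq'

class ExampleDeliveryWorker
  include Sidekiq::Worker

  # Hypothetical options: 16 retries under the default backoff schedule.
  sidekiq_options queue: 'push', retry: 16
end

# Sum the default delays for 16 retries (jitter excluded).
total_seconds = (0...16).sum { |count| (count**4) + 15 }
puts "~#{(total_seconds / 86_400.0).round(1)} days of retrying"
```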
abcang 2eab41cd1a Close connection when posting succeeds (#5390)
* Close connection when posting succeeds

* Update webmock
2017-10-14 14:38:57 +02:00
ThibG f7c909e290 Retry ActivityPub delivery a few more times (#5014) 2017-09-30 16:01:46 +02:00
Eugen Rochko f4ca116ea8 After 7 days of repeated delivery failures, give up on inbox (#5131)
- A successful delivery cancels it out
- An incoming delivery from the inbox's account cancels it out
2017-09-29 03:16:20 +02:00
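A hedged sketch of the failure-tracking idea described above (class and key names are hypothetical, not the actual tracker): remember each day on which delivery to an inbox failed, clear the record on success or on an incoming delivery from that inbox's account, and treat the inbox as unavailable after 7 distinct days.

```
require 'date'

# Sketch only: backed by a Redis set of ISO dates per inbox URL.
class SimpleFailureTracker
  FAILURE_DAYS_THRESHOLD = 7

  def initialize(redis, inbox_url)
    @redis = redis
    @key   = "delivery_failures:#{inbox_url}"
  end

  def track_failure!
    @redis.sadd(@key, Date.today.iso8601)
  end

  def track_success!
    @redis.del(@key)
  end

  def unavailable?
    @redis.scard(@key) >= FAILURE_DAYS_THRESHOLD
  end
end
```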
abcang 3d9b8847d2 Flush body when sending POST requests (#5128) 2017-09-28 15:04:32 +02:00
Daigo 3 Dango a0bbeafb04 Suppress backtrace when failing to communicate with a remote instance (#5076) 2017-09-24 11:14:06 +02:00
Eugen Rochko b7370ac8ba ActivityPub delivery (#4566)
* Deliver ActivityPub Like

* Deliver ActivityPub Undo-Like

* Deliver ActivityPub Create/Announce activities

* Deliver ActivityPub creates from mentions

* Deliver ActivityPub Block/Undo-Block

* Deliver ActivityPub Accept/Reject-Follow

* Deliver ActivityPub Undo-Follow

* Deliver ActivityPub Follow

* Deliver ActivityPub Delete activities

Incidentally fix #889

* Adjust BatchedRemoveStatusService for ActivityPub

* Add tests for ActivityPub workers

* Add tests for FollowService

* Add tests for FavouriteService, UnfollowService and PostStatusService

* Add tests for ReblogService, BlockService, UnblockService, ProcessMentionsService

* Add tests for AuthorizeFollowService, RejectFollowService, RemoveStatusService

* Add tests for BatchedRemoveStatusService

* Deliver updates to a local account to ActivityPub followers

* Minor adjustments
2017-08-13 00:44:41 +02:00