When using a CAS server, users only get a temporary email address,
`change@me-foo-cas.com`, which can only be changed by an
administrator.
We need a new environment variable, like the one for SAML, to assume
that the email from CAS is verified.
* config/initializers/omniauth.rb: define a CAS option for assuming
emails are always verified (sketched below).
* .env.nanobox: add the new variable as an example.
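A minimal sketch of what that option could look like, mirroring the existing
SAML behaviour; the variable and option names below are assumptions for
illustration, not necessarily the final ones:

```ruby
# config/initializers/omniauth.rb (sketch)
if ENV['CAS_ENABLED'] == 'true'
  cas_options = {
    # Treat the CAS-provided email as verified when the operator says so.
    security: {
      assume_email_is_verified: ENV['CAS_SECURITY_ASSUME_EMAIL_IS_VERIFIED'] == 'true',
    },
    # ...host, port, SSL and attribute-mapping options would follow here...
  }
end
```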
The auto-linking code essentially rewrote the whole string, escaping
non-ASCII characters in an inefficient way and building a full
character offset map between the unescaped and escaped texts before
handing the contents to TwitterText's extractor.
Instead of doing that, this commit changes the TwitterText regexps to include
valid IRI characters in addition to valid URI characters.
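Roughly, the change amounts to widening the character classes the extractor
accepts; the constants below are illustrative, not the actual TwitterText
internals:

```ruby
# Illustrative only: accept IRI characters (any non-ASCII, non-space codepoint)
# wherever the regexps previously accepted only ASCII URL characters.
ASCII_URL_CHAR = /[a-z0-9!*';:=+,.$\/%#\[\]\-_~@|&]/i
IRI_URL_CHAR   = /#{ASCII_URL_CHAR}|[^\x00-\x7F[:space:]]/

# A URL path built from IRI_URL_CHAR+ then matches non-ASCII URLs directly,
# without any pre-escaping or offset mapping.
"https://example.com/ドキュメント".match?(/https?:\/\/(?:#{IRI_URL_CHAR})+/) # => true
```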
* Fix issues with POSIX::Spawn, Terrapin and Ruby 3.0
Also improve the Terrapin monkey-patch for the stderr/stdout issue.
* Fix keyword argument handling throughout the codebase
* Monkey-patch Paperclip to fix keyword arguments handling in validators
* Change validation_extensions to please CodeClimate
* Bump microformats from 4.2.1 to 4.3.1
* Allow Ruby 3.0
* Add Ruby 3.0 test target to CircleCI
* Add test for admin dashboard warnings
* Fix admin dashboard warnings on Ruby 3.0
* Update devise-two-factor to unreleased fork for Rails 6 support
Update tests to match new `rotp` version.
* Update nsa gem to unreleased fork for Rails 6 support
* Update rails to 6.1.3 and rails-i18n to 6.0
* Update to unreleased fork of pluck_each for Rails 6 support
* Run "rails app:update"
* Add missing ActiveStorage config file
* Use config.ssl_options instead of removed ApplicationController#force_ssl
Disabled force_ssl-related tests as they do not seem to be easily testable
anymore.
* Fix nonce directives by removing Rails 5 specific monkey-patching
* Fix fixture_file_upload deprecation warning
* Fix yield-based test failing with Rails 6
* Use Rails 6's index_with when possible
* Use ActiveRecord::Cache::Store#delete_multi from Rails 6
This yields better performance when deleting an account.
* Disable Rails 6.1's automatic preload link headers
Since Rails 6.1, ActionView adds preload links for JavaScript files
to the Link header by default.
In our case, that would bloat the headers too much and potentially
cause issues with reverse proxies. Furthermore, we don't need those
links, as we already output them as HTML link tags (see the sketch
below).
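A minimal sketch of opting out, assuming the standard Rails 6.1 flag for this
behaviour:

```ruby
# config/application.rb (sketch): don't emit automatic "rel=preload" Link headers
config.action_view.preload_links_header = false
```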
* Switch to Rails 6.0 default config
* Switch to Rails 6.1 default config
* Do not include autoload paths in the load path
* Prepare Mastodon for zeitwerk autoloader (Rails 6)
Add inflections and rename/move a few classes.
In particular, app/lib/exceptions.rb and app/lib/sanitize_config.rb
were manually loaded while still in autoload paths.
* Add inflection for Url → URL
* Fix misuse of foreign_type
* Fix use of removed "add_template_helper"
* Use response.media_type instead of response.content_type in tests
* Fix CSV export controller test on Rails 6
Rails 6 sets a "filename*" field in the Content-Disposition header to
explicitly encode the filename as UTF-8.
This change checks only the first part of the Content-Disposition header so
that it matches in both Rails 5 and Rails 6 (see the sketch below).
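For illustration, an assertion along these lines passes on both versions
because it ignores the trailing `filename*=UTF-8''…` part that Rails 6 appends
(the filename is a made-up example):

```ruby
# Sketch of the relaxed expectation (RSpec)
expect(response.headers['Content-Disposition'])
  .to start_with 'attachment; filename="bookmarks.csv"'
```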
* Fix emoji formatting with Rails 6
* Make emoji output more idiomatic and robust
* Switch from redis-rails gem to built-in Rails redis cache storage
* Update twitter-text from 1.14 to 3.1.0
* Disable emoji parsing
* Properly depend on twitter-text for url detection
* Fix some URLs being wrongly detected client-side
* Add test for server-side validation of non-autolinkable URLs
* Fix server-side status length counting
* Drop dependency on secure_headers, use always_write_cookie instead
* Fix cookies in Tor Hidden Services by moving configuration to application.rb
* Instead of setting always_write_cookie at boot, monkey-patch ActionDispatch
* Added .deepsource.toml
* Removed bad use of `alias`
* Fixed operand order in the binary expression
* Prefixed unused method arguments with an underscore
* Replaced the old OpenSSL algorithm constants with the newer string-based initializers (see the sketch after this list).
* Removed unnecessary UTF-8 encoding comment
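To illustrate the OpenSSL change mentioned above, the replacement generally
looks like this (the digest algorithm is just an example):

```ruby
require 'openssl'

# Before: algorithm-specific constant
digest = OpenSSL::Digest::SHA256.new

# After: generic class initialized with the algorithm name as a string
digest = OpenSSL::Digest.new('SHA256')
```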
Nginx can be configured to bypass the proxy cache when a special header
is present in the request. If the response is cacheable, it then
replaces the cached entry for that request. Proxy caching of media
files is desirable when using object storage as a way of minimizing
bandwidth costs, but it has the drawback of leaving deleted media files
available for the configured cache time. A cache buster can make those
media files unavailable immediately, which especially makes sense
when suspending and unsuspending an account.
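A rough sketch of what issuing such a cache-busting request could look like
from the Rails side, assuming the http.rb client; the header name is a
placeholder, since the real header (and any shared secret) would be
configurable:

```ruby
require 'http'

# Hitting the media URL with the bypass header makes nginx skip its cache and
# refetch from object storage; since the object is gone, the cached copy is
# replaced and the file becomes unavailable immediately.
def bust_cache(media_url)
  HTTP.headers('X-Proxy-Cache-Bypass' => 'true').get(media_url) # placeholder header
rescue HTTP::Error
  # Best effort: if the bust fails, the file simply stays cached until it expires.
end
```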
* feat: add possibility of adding WebAuthn security keys to use as 2FA
This adds a basic UI for enabling WebAuthn 2FA. We did a small refactor
of the Settings page for editing 2FA methods – it now lists the methods
available to the user (TOTP and WebAuthn), and from there they can add
or remove any of them.
Also, it's worth mentioning that enabling WebAuthn requires TOTP to be
enabled, so the first time you go to the 2FA settings page you'll be
asked to set it up.
This work was inspired by what GitHub does on their platform, and
although it could be approached in different ways, we went with this
one because we feel it gives a great UX.
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
* feat: add request for WebAuthn as second factor at login if enabled
This commit adds support for using WebAuthn as a second factor at
login when enabled.
If a user has WebAuthn enabled, a page requesting a WebAuthn credential
to log in will now appear, with a link to the old page for logging in
with a two-factor code still available.
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
* feat: add possibility of deleting WebAuthn Credentials
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
* feat: disable WebAuthn when an Admin disables 2FA for a user
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
* feat: remove ability to disable TOTP leaving only WebAuthn as 2FA
Following the example of other platforms like GitHub, we decided to
make WebAuthn 2FA secondary to 2FA with TOTP, so we removed the
possibility of disabling only TOTP, which would leave users with just
WebAuthn as 2FA. Instead, users have to click 'Disable 2FA' in order
to remove second-factor auth entirely.
The reason for WebAuthn being secondary to TOTP is that this way,
users can still log in using the code from their phone's authenticator
app if they don't have their security keys with them – or have even
lost them.
* We had to change the TOTP set-up flow a little, since it's now
possible to set TOTP up again while it's already enabled, letting
users switch authenticator apps – it's no longer possible for them to
disable TOTP and set it up again with another authenticator app.
So, basically, instead of storing the new `otp_secret` on the user, we
now keep it in the session until the set-up process is finished (see
the sketch below).
Previously, when users clicked 'Edit' on the new two-factor methods
list page and then went back without finishing the flow, their
`otp_secret` had already been changed, invalidating their previous
authenticator app and making them unable to log in with TOTP again.
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
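A minimal sketch of the session-based flow described above; the controller
shape, route helper and parameter names are illustrative, not the exact
Mastodon code:

```ruby
# Sketch of a TOTP confirmation controller using the session instead of the user record.
def new
  # Generate a fresh secret, but only stash it in the session for now.
  session[:new_otp_secret] ||= User.generate_otp_secret(32) # devise-two-factor helper
  @provision_url = ROTP::TOTP.new(session[:new_otp_secret], issuer: 'Mastodon')
                             .provisioning_uri(current_user.email)
end

def create
  totp = ROTP::TOTP.new(session[:new_otp_secret])
  if totp.verify(params[:otp_attempt], drift_behind: 15)
    # Persist the secret only once the user has proven the new app works.
    current_user.update!(otp_secret: session.delete(:new_otp_secret),
                         otp_required_for_login: true)
    redirect_to settings_two_factor_methods_path # illustrative route helper
  else
    flash.now[:alert] = 'Invalid code'
    render :new
  end
end
```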
* refactor: fix eslint errors
The PR build was failing because linting returned some errors.
This commit attempts to fix them.
* refactor: normalize i18n translations
The build was failing because the i18n translation files were not
normalized.
This commit fixes that.
* refactor: avoid having the webauthn gem locked to a specific version
* refactor: use symbols for routes without '/'
* refactor: avoid sending webauthn disabled email when 2FA is disabled
When an admin disables 2FA for a user, we were sending them two
emails: one notifying that 2FA was disabled and another notifying that
WebAuthn was disabled.
Since the second one is redundant – the first email already covers it –
we can remove it and send just one email.
* refactor: avoid creating new env variable for webauthn_origin config
* refactor: improve flash error messages for webauthn pages
Co-authored-by: Facundo Padula <facundo.padula@cedarcode.com>
- Rate limit login attempts by target account
- Rate limit password resets and e-mail re-confirmations by target account
- Rate limit sign-up/login attempts, password resets, and e-mail re-confirmations by IP like before
Also:
- Fix locks not being removed when jobs go to the dead job queue
- Add UI for managing locks to the Sidekiq dashboard
- Remove unused Sidekiq workers
Fix #13349
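For illustration, per-target-account throttling with Rack::Attack could look
roughly like this; the limits, path and key are placeholders, not the actual
configuration:

```ruby
# config/initializers/rack_attack.rb (sketch)
class Rack::Attack
  throttle('login_attempts/target_account', limit: 25, period: 1.hour) do |req|
    if req.post? && req.path == '/auth/sign_in'
      # Key on the account being targeted rather than the client IP, so attacks
      # spread over many IPs are still rate-limited.
      req.params.dig('user', 'email').to_s.downcase.presence
    end
  end
end
```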
* Add announcements
Fix #11006
* Add reactions to announcements
* Add admin UI for announcements
* Add unit tests
* Fix issues
- Add `with_dismissed` param to announcements API
- Fix end date not being formatted when time range is given
- Fix announcement delete causing reactions to send streaming updates
- Fix announcements container growing too wide and mascot too small
- Fix `all_day` being settable when no time range is given
- Change text "Update" to "Announcement"
* Fix scheduler unpublishing announcements before they are due
* Fix filter params not being passed to announcements filter
* Fix wrong grouping in Twitter valid_url regex
* Add support for xmpp URIs
Fixes #9776
The difficult part is autolinking, because Twitter-text's extractor does
some pretty ad-hoc stuff to find things that “look like” URLs, and XMPP
URIs do not really match the assumptions of that lib, so it doesn't sound
wise to try to shoehorn it into the existing regex.
This is why I used a dedicated regex (very close to, although slightly
more permissive than, the RFC) and a dedicated scan function (a
simplified version of Twitter's generalized one).
* Remove leading “xmpp:” from auto-linked text
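As a rough illustration of the shape of that regex and scan function (a
simplification, not the exact pattern from the commit):

```ruby
# Loosely based on RFC 5122's xmpp: form, and deliberately permissive.
XMPP_URI_RE = %r{
  xmpp:
  (?://[^\s/?\#]+/)?       # optional authority component
  [^\s/?\#@]+@[^\s/?\#]+   # node@domain
  (?:/[^\s?\#]*)?          # optional resource
  (?:\?[^\s\#]*)?          # optional query, e.g. ?join
}xi

# Simplified scan, analogous to the generalized URL extractor: yields each match
# with its character indices so it can be auto-linked later.
def extract_xmpp_uris(text)
  [].tap do |results|
    text.scan(XMPP_URI_RE) do
      match = Regexp.last_match
      results << { uri: match[0], indices: [match.begin(0), match.end(0)] }
    end
  end
end
```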
When authenticating via OAuth, the resource owner password grant
strategy is allowed by Mastodon, but (without this PR), it does not
attempt to authenticate against LDAP or PAM. As a result, LDAP or PAM
authenticated users cannot sign in to Mastodon with their
email/password credentials via OAuth (for instance, for native/mobile
app users).
This PR fleshes out the authentication strategy supplied to doorkeeper
in its initializer by looking up the user with LDAP and/or PAM when
devise is configured to use LDAP/PAM backends. It attempts to follow the
same logic as the Auth::SessionsController for handling email/password
credentials.
Note #1: Since this pull request affects an initializer, it's unclear
how to add test automation.
Note #2: The PAM authentication path has not been manually tested. It
was added for completeness' sake, in the hope that it can be manually
tested before merging.
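A minimal sketch of the approach, with placeholder predicates and helpers
standing in for the LDAP/PAM lookups (these names are not Mastodon's actual
API):

```ruby
# config/initializers/doorkeeper.rb (sketch)
Doorkeeper.configure do
  resource_owner_from_credentials do |_routes|
    user = User.find_by(email: params[:username])

    if user&.valid_password?(params[:password])
      user
    elsif ldap_enabled?   # placeholder predicate
      authenticate_with_ldap(params[:username], params[:password])  # placeholder helper
    elsif pam_enabled?    # placeholder predicate
      authenticate_with_pam(params[:username], params[:password])   # placeholder helper
    end
  end
end
```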
* Add backend support for bookmarks
Bookmarks behave like favourites, except they aren't shared with other
users and do not have an associated counter.
* Add spec for bookmark endpoints
* Add front-end support for bookmarks
* Introduce OAuth scopes for bookmarks
* Add bookmarks to archive takeout
* Fix migration
* Coding style fixes
* Fix rebase issue
* Update bookmarked_statuses to latest UI changes
* Update bookmark actions to properly reflect status changes in state
* Add bookmarks item to single-column layout
* Make active bookmarks red
Change the behaviour of the remotable concern. Previously, it would
skip downloading an attachment if the stored remote URL was identical
to the new one. Now the download is not skipped if the attachment is
not actually stored by Paperclip.
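In other words, the guard condition in the concern changes roughly like this
(method and attribute names are illustrative):

```ruby
# Sketch of the changed guard in the remotable concern.
def download_remote_file!(url)
  # Before: skipped whenever the stored URL matched the new one.
  # After: only skip when the URL matches AND the file is actually attached.
  return if self[url_attribute_name] == url && public_send(attachment_name).present?

  # ...fetch the file and assign it to the Paperclip attachment...
end
```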
The default limit of 10 retries with exponential backoff meant that if
the S3 server was timing out, you would be stuck with it for much,
much longer than the 5-second read timeout we expect. The upload
happens within a database transaction, which means a failing S3 server
could negatively affect database performance.
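A sketch of the kind of tightening this implies for the S3 client options that
Paperclip forwards to the AWS SDK; the values are illustrative:

```ruby
# config/initializers/paperclip.rb (sketch)
Paperclip::Attachment.default_options[:s3_options] = {
  http_open_timeout: 5,
  http_read_timeout: 5,
  retry_limit: 0, # fail fast instead of retrying with exponential backoff
}
```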
It's possible that after-commit callbacks were not firing when
exceptions occurred in the process. Also, the default Sidekiq strategy
does not push indexing jobs immediately, which is unnecessary and
could be part of the issue too.
* Add nodeinfo endpoint
* don't commit stuff from my local dev
* consistent naming since we implemented the 2.1 schema
* Add some additional node info stuff
* expanding this to include federation info
* codeclimate feedback
* CC feedback
* using ActiveModel serializers seems like a good idea...
* get rid of draft 2.1 version
* Reimplement 2.1, also fix metaData -> metadata
* Fix metaData -> metadata here too
* Fix nodeinfo 2.1 tests
* Implement cache for monthly user aggregate
* Useless
* Remove ostatus from the list of supported protocols
* Fix nodeinfo's open_registration reading obsolete setting variable
* Only serialize domain blocks with user-facing limitations
* Do not needlessly list noop severity in nodeinfo
* Only serialize domain blocks info in nodeinfo when they are set to be displayed to everyone
* Enable caching for nodeinfo endpoints
* Fix rendering nodeinfo
* CodeClimate fixes
* Please CodeClimate
* Change InstancePresenter#active_user_count_months for clarity
* Refactor NodeInfoSerializer#metadata
* Remove nodeinfo 2.1 support as the schema doesn't exist
* Clean-up
The instrumentation code was used for StatsD metrics collection
prior to the switch to the nsa gem and should have been removed
at that point as it no longer does anything at all
* Rate limit based on remote address IP, not on potential reverse proxy
* Limit rate of unauthenticated API requests further
* Rate-limit paging requests to one every 3 seconds
Deletions take a lot of resources to execute and cause a lot of
federation traffic, so it makes sense to decrease the number
someone can queue up through the API.
The new limit is 30 per 30 minutes.
I also added "public" here, as I can't think of a good reason not to add it. Perhaps it has some marginal benefit in that ISPs (or other proxies) can cache it for all users. The assets are certainly publicly available and the same for all users.
* Add REST API for creating an account
The method is available to apps with a token obtained via the client
credentials grant. It creates a user and account records, as well as
an access token for the app that initiated the request. The user is
unconfirmed, and an e-mail is sent as usual.
The method returns the access token, which the app should save for
later. The REST API is not available to users with unconfirmed
accounts, so the app must be smart enough to wait for the user to
click the link in their e-mail inbox.
The method is rate-limited by IP to 5 requests per 30 minutes.
* Redirect users back to app from confirmation if they were created with an app
* Add tests
* Return 403 on the method if registrations are not open
* Require agreement param to be true in the API when creating an account
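Putting the pieces above together, a client-side usage sketch might look like
this; the endpoint path, host and token are assumptions for illustration:

```ruby
require 'http'

# Create the account with an app token obtained via the client credentials grant.
response = HTTP.auth('Bearer <app-token>')
               .post('https://mastodon.example/api/v1/accounts', form: {
                 username: 'newuser',
                 email: 'newuser@example.com',
                 password: 'a long passphrase',
                 agreement: true # must be true, per the commit above
               })

user_token = response.parse['access_token']
# Save it for later: it only becomes usable once the user confirms their e-mail.
```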
Right now, this includes three endpoints: host-meta, webfinger, and change-password.
host-meta and webfinger are publicly available and do not use any authentication. Nothing bad can be done by accessing them in a user's browser.
change-password being CORS-enabled will only reveal the URL it redirects to (which is /auth/edit) but not anything about the actual /auth/edit page, because it does not have CORS enabled.
The documentation for hosting an instance on a different domain should also be updated to point out that Access-Control-Allow-Origin: * should be set at a minimum for the /.well-known/host-meta redirect to allow browser-based non-proxied instance discovery.
* Verify link ownership with rel="me"
* Add explanation about verification to UI
* Perform link verifications
* Add click-to-copy widget for verification HTML
* Redesign edit profile page
* Redesign forms
* Improve responsive design of settings pages
* Restore landing page sign-up form
* Fix typo
* Support <link> tags, add spec
* Fix links not being verified on first discovery and passive updates
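For reference, the verification boils down to something like this simplified
sketch (using Nokogiri and http.rb; not the exact service code):

```ruby
require 'http'
require 'nokogiri'

# A link is considered verified when the page it points to links back to the
# profile URL with rel="me", via either an <a> or a <link> tag.
def link_verified?(link_url, profile_url)
  html = HTTP.follow.get(link_url).to_s
  doc  = Nokogiri::HTML(html)
  doc.css('a[rel~="me"], link[rel~="me"]').any? { |tag| tag['href'] == profile_url }
end
```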
CSRF prevention is already implemented, but adding this doesn't hurt.
A brief introduction to Same-Site cookies (and the difference between strict and
lax) can be found at
https://blog.mozilla.org/security/2018/04/24/same-site-cookies-in-firefox-60/
TL;DR: We use lax since we still want the cookies to be sent when the user
navigates to us from an external site via a safe (top-level GET) request.
* Fix uncaching worker
* Revert to using Paperclip's filesystem backend instead of fog-local
fog-local has lots of concurrency issues, causing failure to delete files,
dangling file records, and spurious errors in UncacheMediaWorker.
Adopted from GitLab CE. Generate a new migration with:
rails g post_deployment_migration name_of_migration_here
By default they run together with db:migrate. To skip them, set the
env variable SKIP_POST_DEPLOYMENT_MIGRATIONS.
Code by Yorick Peterse <yorickpeterse@gmail.com>, see also:
83c8241160
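The underlying mechanism is essentially this (a sketch of the GitLab-style
approach the commit adapts):

```ruby
# Sketch: include db/post_migrate in the migration paths unless the operator
# explicitly asks to skip post-deployment migrations.
unless ENV['SKIP_POST_DEPLOYMENT_MIGRATIONS']
  Rails.application.config.paths['db/migrate'] << Rails.root.join('db', 'post_migrate').to_s
end
```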
* Add more granular OAuth scopes
* Add human-readable descriptions of the new scopes
* Ensure new scopes look good on the app UI
* Add tests
* Group scopes in screen and color-code dangerous ones
* Fix wrong extra scope
If Mastodon accesses the hidden service via a transparent proxy, the private-address check needs to be skipped, since `.onion` hosts resolve to private addresses.
I was previously using the `HIDDEN_SERVICE_VIA_TRANSPARENT_PROXY` variable to provide that behaviour. However, I realized that `HIDDEN_SERVICE_VIA_TRANSPARENT_PROXY` is redundant, since it is always used together with `ALLOW_ACCESS_TO_HIDDEN_SERVICE`. Therefore, I decided to fold the `HIDDEN_SERVICE_VIA_TRANSPARENT_PROXY` setting into `ALLOW_ACCESS_TO_HIDDEN_SERVICE`.
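A sketch of the resulting check; the helper shape is illustrative, though the
variable and error names follow the surrounding commit messages:

```ruby
# Skip the private-address validation for .onion hosts when hidden-service
# access is explicitly allowed; otherwise reject private addresses as before.
def check_private_address!(address, host)
  return if ENV['ALLOW_ACCESS_TO_HIDDEN_SERVICE'] == 'true' && host.end_with?('.onion')
  raise Mastodon::HostValidationError if PrivateAddressCheck.private_address?(address)
end
```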
- POST /api/v1/push/subscription
- PUT /api/v1/push/subscription
- DELETE /api/v1/push/subscription
- New OAuth scope: "push" (required for the above methods)
The previous rate limit allowed media to be posted so fast that it was
possible to fill up the disk space before an administrator even
noticed. The new rate limit is configured so that it takes 24 hours to
eat up 10 gigabytes:
10 * 1024 / 8 / (24 * 60 / 30) = 27 (rounded up to 30)
The period is set long so that it does not prevent attaching several
media files to one post, which all happens within a short time. For
example, if the period were 5 minutes, the limit would be:
10 * 1024 / 8 / (24 * 60 / 5) = 4
The longer period allows the per-period limit itself to be higher.
* No need to re-require sidekiq plugins, they are required via Gemfile
* Add derailed_benchmarks tool, no need to require TTY gems in Gemfile
* Replace ruby-oembed with FetchOEmbedService
Reduce startup by 45382 allocated objects
* Remove preloaded JSON-LD in favour of caching HTTP responses
Reduce boot RAM by about 6 MiB
* Fix tests
* Fix test suite by stubbing out JSON-LD contexts
* Add support for HTTP client proxy
* Add access control for darknet
Suppress errors when accessing the darknet via a transparent proxy
* Fix the code pointed out in review
* Lint
* Fix an omission + lint
* any? -> include?
* Change detection method to regexp to avoid test fail
* Revert "Bump version to 2.3.2rc1"
This reverts commit cdf8b92fea.
* Revert "Downgrade Dockerfile to Ruby 2.4.3 on Alpine 3.6 (#6806)"
This reverts commit 0074cad44f.
* Revert "Handle Mastodon::HostValidationError when pulling remoteable assets (#6782)"
This reverts commit 4a0a19fe54.
* Revert "Correct the reference to user's password in mastodon:add_user task (#6800)"
This reverts commit 338bff8b93.
* Revert "Upgrade Paperclip to version 6.0.0 (#6754)"
This reverts commit b88fcd53f7.