PeerTube/.github/workflows/test.yml

name: Test

on:
  push:
  pull_request:
    types: [synchronize, opened]
  schedule:
    - cron: '0 3 * * 1-5'

jobs:

  test:
    runs-on: ubuntu-latest

    services:
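      # Redis is required by PeerTube (job queue / cache)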
      redis:
        image: redis
        ports:
          - 6379:6379

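      # PostgreSQL database used by the PeerTube test servers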
      postgres:
        image: postgres:10
        ports:
          - 5432:5432
        env:
          POSTGRES_USER: peertube
          POSTGRES_HOST_AUTH_METHOD: trust

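      # OpenLDAP test server (used by the LDAP authentication tests)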
      ldap:
        image: chocobozzz/docker-test-openldap
        ports:
          - 10389:10389

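      # S3-compatible object storage emulator used by the object storage tests
      # (the S3 API is exposed on host port 9444)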
      s3ninja:
        image: chocobozzz/s3-ninja
        ports:
          - 9444:9000

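    # Each test_suite value runs as a separate matrix job; fail-fast is disabled
    # so one failing suite does not cancel the others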
    strategy:
      fail-fast: false
      matrix:
        test_suite: [ types-package, client, api-1, api-2, api-3, api-4, api-5, transcription, cli-plugin, lint, external-plugins ]

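    # PGUSER/PGHOST point the tests at the postgres service container above.
    # The Scaleway keys are repository secrets (empty on forks), presumably used by
    # object storage tests that target a real provider; the GitHub token is passed
    # to the youtube-dl/yt-dlp download, presumably to avoid API rate limits.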
    env:
      PGUSER: peertube
      PGHOST: localhost
      NODE_PENDING_JOB_WAIT: 250
      ENABLE_OBJECT_STORAGE_TESTS: true
      ENABLE_FFMPEG_THUMBNAIL_PIXEL_COMPARISON_TESTS: true
      OBJECT_STORAGE_SCALEWAY_KEY_ID: ${{ secrets.OBJECT_STORAGE_SCALEWAY_KEY_ID }}
      OBJECT_STORAGE_SCALEWAY_ACCESS_KEY: ${{ secrets.OBJECT_STORAGE_SCALEWAY_ACCESS_KEY }}
      YOUTUBE_DL_DOWNLOAD_BEARER_TOKEN: ${{ secrets.GITHUB_TOKEN }}

    steps:
      - uses: actions/checkout@v4

      - uses: './.github/actions/reusable-prepare-peertube-build'
        with:
          node-version: '18.x'

      - uses: './.github/actions/reusable-prepare-peertube-run'

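      # Cache the test fixtures (some are downloaded on demand) between runs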
      - name: Cache fixtures
        uses: actions/cache@v4
        with:
          path: |
            fixtures
          key: ${{ runner.OS }}-fixtures-${{ matrix.test_suite }}-${{ hashFiles('fixtures/*') }}
          restore-keys: |
            ${{ runner.OS }}-fixtures-${{ matrix.test_suite }}-
            ${{ runner.OS }}-fixtures-
            ${{ runner.OS }}-

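      # The pip and Hugging Face caches are mainly useful for the transcription suite,
      # which installs Python dependencies and downloads Whisper models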
      - name: Cache PeerTube pip directory
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/pip
          key: ${{ runner.OS }}-${{ matrix.test_suite }}-pip-v1

      - name: Cache Hugging Face models
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/huggingface
          key: ${{ runner.OS }}-${{ matrix.test_suite }}-hugging-face-v1

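      # HTTP import tests are only run on the scheduled (nightly) builds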
      - name: Set env test variable (schedule)
        if: github.event_name != 'schedule'
        run: |
          echo "DISABLE_HTTP_IMPORT_TESTS=true" >> $GITHUB_ENV

      - name: Run Test
        # external-plugins tests only run on schedule
        if: github.event_name == 'schedule' || matrix.test_suite != 'external-plugins'
        env:
          AKISMET_KEY: ${{ secrets.AKISMET_KEY }}
        run: npm run ci -- ${{ matrix.test_suite }}

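      # Always surface errors found in the server logs, even when the tests passed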
      - name: Display errors
        if: ${{ always() }}
        run: |
          ( \
            test -f dist/scripts/parse-log.js && \
            NODE_ENV=test node dist/scripts/parse-log.js -l error -f artifacts/*.log \
          ) || \
          echo "parse-log.js script does not exist, skipping."

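      # Keep the test logs as a downloadable artifact when the job fails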
      - name: Upload logs
        uses: actions/upload-artifact@v3
        if: failure()
        with:
          name: test-storages-${{ matrix.test_suite }}
          path: artifacts
          retention-days: 7