Compare commits

665 Commits

Author SHA1 Message Date
pukkandan
2e9a445bc3 [version] update
:ci skip all
2021-11-10 01:14:33 +00:00
pukkandan
86c1a8aae4 Release 2021.11.10 2021-11-10 06:41:44 +05:30
Lauren Liberda
ebfab36fca [tvp] Add TVPStreamIE (#1401)
Authored by: selfisekai
2021-11-10 06:16:51 +05:30
Lauren Liberda
c15de6ffe6 [tvp] Fix extractor (#1401)
Authored by: selfisekai
2021-11-10 06:16:40 +05:30
Lauren Liberda
56bb56f3cf [tvp] Fix embeds (#1401)
Authored by: selfisekai
2021-11-10 06:16:30 +05:30
Lauren Liberda
c0599d4fe4 [wppilot] Add extractors (#1401)
Authored by: selfisekai
2021-11-10 06:16:18 +05:30
Lauren Liberda
3f771f75d7 [radiokapital] Add extractors (#1401)
Authored by: selfisekai
2021-11-10 06:15:46 +05:30
Lauren Liberda
ed76230b3f [polsatgo] Add extractor (#1386)
Authored by: selfisekai, sdomi

Co-authored-by: Dominika Liberda <ja@sdomi.pl>
2021-11-10 06:13:53 +05:30
Lauren Liberda
89fcdff5d8 [polskieradio] Add extractors (#1386)
Authored by: selfisekai
2021-11-10 06:11:24 +05:30
Lauren Liberda
f98709af31 [extractor] Add _search_nextjs_data (#1386)
Authored by: selfisekai
2021-11-10 06:11:05 +05:30
pukkandan
c586f9e8de [cleanup] minor fixes 2021-11-10 04:19:54 +05:30
pukkandan
59a7a13ef9 [docs] Minor documentation improvements
Closes #1583, #1599
2021-11-10 04:19:52 +05:30
pukkandan
4476d2c764 [outtmpl] Add alternate forms for q and j 2021-11-10 04:19:42 +05:30
pukkandan
aa9369a2d8 [cleanup] Minor improvements to error and debug messages 2021-11-10 04:19:33 +05:30
stanoarn
d54c6003ab fix for e1b7c54d78
Authored by: stanoarn
2021-11-10 03:44:17 +05:30
u-spec-png
1ee316a34a [Gab] Add extractor (#1505)
Closes #1462 
Authored by: u-spec-png
2021-11-10 03:41:51 +05:30
ozburo
358247ed2a [imdb] Fix thumbnail (#1581)
Authored by: ozburo
2021-11-10 02:56:57 +05:30
nixxo
9b12e9a573 [la7] Fix extractor (#1575)
Closes #1065 
Authored by: nixxo
2021-11-10 02:37:52 +05:30
u-spec-png
a109acbf82 [ZenYandex] Fix extractor (#1558)
Closes #1545
Authored by: u-spec-png
2021-11-09 00:06:01 +05:30
pukkandan
a49891c761 Fix bug in --load-infojson of playlists
Fixes: https://github.com/yt-dlp/yt-dlp/issues/1514#issuecomment-962659529
2021-11-08 00:26:08 +05:30
pukkandan
582fad70f5 [outtmpl] Do not traverse None
Closes #1585
2021-11-08 00:26:08 +05:30
pgaig
aeec0e44e2 [VRT] Fix login (#1566)
Closes #1557 
Authored by: pgaig
2021-11-06 22:57:40 +05:30
Ryan Hendrickson
d9190e4467 [youtube] Add Invidious list for playlists/channels (#1567)
Authored by: rhendric
2021-11-06 08:37:34 +05:30
stanoarn
e1b7c54d78 [iPrima] Fix extractor (#1541)
Authored by: stanoarn
2021-11-06 07:55:18 +05:30
pukkandan
244644c02c [roosterteeth] Add series extractor 2021-11-06 07:53:58 +05:30
pukkandan
34921b4345 [utils] Add join_nonempty 2021-11-06 07:53:55 +05:30
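
A minimal sketch (not the project's exact code) of what a `join_nonempty` helper does: keep only the truthy values and join them with a delimiter, which is handy when building format notes and IDs.

```py
def join_nonempty(*values, delim='-'):
    # Keep only truthy values and join them, e.g. ('720p', None, '', 'eng') -> '720p-eng'
    return delim.join(str(v) for v in values if v)
```
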
pukkandan
a331949df3 [test/download] Fallback test to bv 2021-11-06 07:53:53 +05:30
u-spec-png
2c5e8a961e [Newgrounds] Fix description (#1562)
Authored by: u-spec-png
2021-11-06 03:42:16 +05:30
u-spec-png
b515b37cc4 [Vupload] Fix extractor (#1549)
Authored by: u-spec-png
2021-11-06 03:35:13 +05:30
pukkandan
3c4eebf772 [AmazonStore] Add extractor (#1512)
Closes #1509

Authored by: Ashish0804
2021-11-06 03:13:50 +05:30
u-spec-png
fb2d1ee6cc [Instagram] Add IOS URL support (#1560)
Authored by: u-spec-png
2021-11-06 03:01:34 +05:30
pukkandan
9cb070f9c0 [vimeo] Detect source extension
and misc cleanup

Cherry-picked from #1477
Closes #1402

Authored by: flashdagger
2021-11-06 02:33:06 +05:30
pukkandan
2a6f8475ac [vimeo] Fix ondemand videos and direct URLs with hash
Closes #1353, #1471
2021-11-06 02:33:05 +05:30
Francesco Frassinelli
73673ccff3 [RaiplayRadio] Add extractors (#780)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/21837
Authored by: frafra
2021-11-05 22:24:56 +05:30
pukkandan
aeb2a9ad27 [FormatSort] eac3 is better than ac3 2021-11-05 20:40:45 +05:30
pukkandan
df6c409d1f [piksel] Fix sorting 2021-11-05 20:39:16 +05:30
pukkandan
a9d4da606d [crunchyroll] Add extractor-args language and hardsub
Closes #1516
2021-11-05 00:12:12 +05:30
pukkandan
c18d4482b1 [youtube] Fix sorting for some videos 2021-11-05 00:12:11 +05:30
u-spec-png
0f6518938d [N1] Add support for nova.rs (#1537)
Authored by: u-spec-png
2021-11-04 20:59:59 +05:30
u-spec-png
22cd06c452 [Instagram] Improve thumbnail extraction (#1496)
Authored by: u-spec-png
2021-11-04 08:52:10 +05:30
pukkandan
a4211baff5 [cleanup] Minor cleanup 2021-11-04 03:53:15 +05:30
pukkandan
8913ef74d7 [ffmpeg] Detect libavformat version for aac_adtstoasc
and print available features in verbose head
Based on https://github.com/ytdl-org/youtube-dl/pull/29581
2021-11-04 03:13:37 +05:30
pukkandan
832e9000c7 [ffmpeg] Accurately detect presence of setts
Closes #1237
2021-11-04 02:24:12 +05:30
CrypticSignal
673c0057e8 [ExtractAudio] Use libfdk_aac if available
Closes #1502
Authored by: CrypticSignal
2021-11-04 02:23:45 +05:30
pukkandan
9af98e17bd [ffmpeg] Framework for feature detection
Related: #1502, #1237, https://github.com/ytdl-org/youtube-dl/pull/29581
2021-11-04 02:16:39 +05:30
pukkandan
31c49255bf [ExtractAudio] Rescale --audio-quality correctly
Authored by: CrypticSignal, pukkandan
2021-11-04 00:05:53 +05:30
pukkandan
bd93fd5d45 [fragment] Fix progress display in fragmented downloads
Closes #1517
2021-11-03 16:45:58 +05:30
pukkandan
d89257f398 [youtube] Remove unnecessary no-playlist warning 2021-11-03 16:35:09 +05:30
pukkandan
9bd979ca40 [utils] Parse vp09 as vp9 2021-11-03 16:35:08 +05:30
pukkandan
a1fc7ca074 [jsinterp] Handle default in switch better 2021-11-03 16:35:08 +05:30
u-spec-png
c588b602d3 [Instagram] Fix incorrect resolution (#1494)
Authored by: u-spec-png
2021-10-31 19:50:09 +05:30
kaz-us
f0ffaa1621 [vk] Fix login (#1495)
Closes #1459
Authored by: kaz-us
2021-10-31 19:46:12 +05:30
pukkandan
0930b11fda [docs,cleanup] Improve docs and minor cleanup
Closes #1387, #1404, #1408, #1485, #1415, #1450, #1492
2021-10-31 14:47:33 +05:30
pukkandan
a0bb6ce58d [youtube] refactor itag processing 2021-10-31 13:26:44 +05:30
pukkandan
da48320075 [linkedin] Don't login multiple times 2021-10-31 13:08:03 +05:30
kaz-us
5b6cb56207 [vk] Add subtitles (#1480)
Authored by: kaz-us
2021-10-31 10:43:49 +05:30
u-spec-png
b2f25dc242 [Olympics] Fix extractor (#1483)
Authored by: u-spec-png
2021-10-31 10:40:42 +05:30
Ashish Gupta
2f9e021299 [PlanetMarathi] Add extractor (#1484)
Authored by: Ashish0804
2021-10-31 10:39:26 +05:30
u-spec-png
8dcf65c92e [Instagram] Add login to playlist (#1488)
Authored by: u-spec-png
2021-10-31 10:38:04 +05:30
Marcel
92592bd305 [ceskatelevize] Fix extractor (#1489)
Authored by: flashdagger
2021-10-31 10:19:03 +05:30
pukkandan
404f611f1c [youtube] Fix throttling by decrypting n-sig (#1437) 2021-10-31 09:53:58 +05:30
u-spec-png
cd9ea4104b [instagram] Add more formats when logged in (#1487)
Authored by: u-spec-png
2021-10-31 08:24:39 +05:30
Ashish Gupta
652fb0d446 [VLive] Add upload_date and thumbnail (#1486)
Closes #1472
Authored by: Ashish0804
2021-10-30 23:26:00 +05:30
Sipherdrakon
6b301aaa34 [mtv] Fix some videos (#1453)
Partial fix for #713
Authored by: Sipherdrakon
2021-10-30 06:48:59 +05:30
pukkandan
fa0b816e37 [generic] Detect more json_ld
Closes #1475
2021-10-30 02:03:53 +05:30
pukkandan
5e7bbac305 [generic] parse jwplayer with only the json URL
Closes #1476
2021-10-30 01:54:50 +05:30
pukkandan
10beccc980 [FormatSort] Fix some fields' defaults
Closes #1479
2021-10-30 01:14:14 +05:30
nixxo
e6ff66efc0 [mediaset] Add playlist support (#1463)
Closes #1372
Authored by: nixxo
2021-10-30 01:09:55 +05:30
Luc Ritchie
aeaf3b2b92 [Coub] Fix media format identification (#1469)
Authored by: wlritchi
2021-10-29 23:47:10 +05:30
Ashish Gupta
7b5f3f7c3d [MLSSoccer] Add extractor (#1452)
Authored by: Ashish0804
Closes #1451
2021-10-28 23:48:09 +05:30
ajj8
3783b5f1d1 [itv] Add support for ITV News (#1456)
Authored by: ajj8
2021-10-28 16:27:09 +05:30
pukkandan
ab630a57b9 [viewlift] Fix typo in 5be76d1ab7 2021-10-28 02:14:33 +05:30
pukkandan
16b0d7e621 [utils] Add jwt_decode_hs256
Code from #1340
Authored by: Ashish0804
2021-10-28 02:07:41 +05:30
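
A rough sketch of what a `jwt_decode_hs256` helper typically does — assumption: like most such helpers it only decodes the payload segment of an RFC 7519 token and does not verify the HMAC-SHA256 signature.

```py
import base64
import json

def jwt_decode_hs256(jwt):
    # A JWT is three base64url segments: header.payload.signature
    header_b64, payload_b64, signature_b64 = jwt.split('.')
    # Re-add the stripped padding before decoding the payload
    payload_b64 += '=' * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```
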
pukkandan
5be76d1ab7 [viewlift] Add cookie-based login and series support
Closes #1340, #1316
Authored by: Ashish0804, pukkandan
2021-10-28 02:07:40 +05:30
ajj8
b7b186e7de [sky] Add SkyNewsStoryIE (#1443)
Authored by: ajj8
2021-10-27 21:38:48 +05:30
nyuszika7h
bd1c792327 [wakanim] Detect geo-restriction (#1429)
Authored by: nyuszika7h
2021-10-26 22:05:20 +05:30
nyuszika7h
dc88e9be03 [wakanim] Add support for MPD manifests (#1428)
Closes #1426
Authored by: nyuszika7h
2021-10-26 22:03:43 +05:30
pukkandan
673944b001 [compat] Don't create console in windows_enable_vt_mode
Closes #1420
2021-10-26 21:59:08 +05:30
Ashish Gupta
0c873df3a8 [3speak] Add extractors (#1430)
Closes #1421
Authored by: Ashish0804
2021-10-26 21:17:39 +05:30
pukkandan
c35ada3360 [twitter] Do not sort by codec
Closes #1431
2021-10-26 21:15:38 +05:30
pukkandan
0db3bae879 [extractor] Fix some errors being converted to ExtractorError 2021-10-26 20:27:09 +05:30
pukkandan
48f796874d [utils] Create DownloadCancelled exception
as super-class of ExistingVideoReached, RejectedVideoReached, MaxDownloadsReached

Third parties can also sub-class this to cancel the download queue from a hook
2021-10-26 20:27:09 +05:30
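
A minimal embedding sketch of the behaviour described above (the URL and the size threshold are placeholders): raising `DownloadCancelled`, or a subclass of it, from a hook cancels the remaining download queue.

```py
from yt_dlp import YoutubeDL
from yt_dlp.utils import DownloadCancelled

def cancel_large_downloads(progress):
    # Cancel the whole queue as soon as any file turns out to be larger than ~1 GiB
    if progress.get('status') == 'downloading' and (progress.get('total_bytes') or 0) > 1 << 30:
        raise DownloadCancelled('File too large; cancelling remaining downloads')

with YoutubeDL({'progress_hooks': [cancel_large_downloads]}) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```
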
pukkandan
abad800058 [downloader/ffmpeg] Fix vtt download with ffmpeg 2021-10-26 20:27:09 +05:30
pukkandan
08438d2ca5 [outtmpl] Add type link for internet shortcut files
and refactor related code
Closes #1405
2021-10-26 20:27:09 +05:30
pukkandan
7de837a5e3 [utils] Sanitize URL when determining protocol
Closes #1406
2021-10-26 20:26:08 +05:30
pukkandan
7e59ca440a [DiscoveryPlus] Allow language codes in URL
Closes #1425
2021-10-26 20:26:08 +05:30
u-spec-png
8e7ab2cf08 [Bilibili:comments] Fix infinite loop (#1423)
Closes #1412
Authored by: u-spec-png
2021-10-26 01:03:01 +05:30
u-spec-png
ad64a2323f [instagram] Fix bug in ab2ffab22d (#1403)
Authored by: u-spec-png
2021-10-24 22:01:33 +05:30
pukkandan
f2fe69c7b0 Approximate filesize from bitrate
Closes #1400
2021-10-24 18:02:00 +05:30
pukkandan
fccf502118 [youtube] Populate thumbnail with the best "known" thumbnail
Closes #402, Related: https://github.com/yt-dlp/yt-dlp/issues/340#issuecomment-950290624
2021-10-24 15:00:18 +05:30
pukkandan
9f1a1c36e6 Separate --check-all-formats from --check-formats
Previously, `--check-formats` tested only the selected video formats, but ALL thumbnails
2021-10-24 15:00:17 +05:30
pukkandan
96565c7e55 [cleanup] Add keyword automatically to SearchIE descriptions
and some minor cleanup of docs
2021-10-23 21:20:19 +05:30
pukkandan
ec11a9f4a2 [minicurses] Add more colors 2021-10-23 05:23:38 +05:30
Alf Marius
93c7f3398d [Nrk] See desc (#1382)
* Endpoint has changed. Currently the old one redirects to the new one, but this may change
* Descriptions use \r instead of \n. So translate it

Authored by: fractalf
2021-10-23 04:22:01 +05:30
pukkandan
1117579b94 [version] update
:ci skip all
2021-10-22 20:47:18 +00:00
pukkandan
0676afb126 Release 2021.10.22 2021-10-23 02:09:15 +05:30
pukkandan
49a57e70a9 [cleanup] misc 2021-10-23 02:09:10 +05:30
pukkandan
457f6d6866 [vlive:channel] Fix extraction
Based on https://github.com/ytdl-org/youtube-dl/pull/29866
Closes #749, #927, https://github.com/ytdl-org/youtube-dl/issues/29837
Authored by kikuyan, pukkandan
2021-10-22 23:19:38 +05:30
pukkandan
ad0090d0d2 [cookies] Local State should be opened as utf-8
Closes #1276
2021-10-22 23:19:37 +05:30
makeworld
d183af3cc1 [CBC] Support CBC Gem member content (#1294)
Authored by: makeworld-the-better-one
2021-10-22 06:28:32 +05:30
makeworld
3c239332b0 [CBC] Fix Gem livestream (#1289)
Authored by: makeworld-the-better-one
2021-10-22 06:26:29 +05:30
u-spec-png
ab2ffab22d [Instagram] Add login (#1288)
Authored by: u-spec-png
2021-10-22 06:23:45 +05:30
zenerdi0de
f656a23cb1 [patreon] Fix vimeo player regex (#1332)
Closes #1323
Authored by: zenerdi0de
2021-10-22 06:20:49 +05:30
pukkandan
58ab5cbc58 [vimeo] Fix embedded player.vimeo URL
Closes #1138, partially fixes #1323
Cherry-picked from upstream commit 3ae9c0f410b1d4f63e8bada67dd62a8d2852be32
2021-10-22 06:15:51 +05:30
Damiano Amatruda
17ec8bcfa9 [microsoftstream] Add extractor (#1201)
Based on: https://github.com/ytdl-org/youtube-dl/pull/24649
Fixes: https://github.com/ytdl-org/youtube-dl/issues/24440
Authored by: damianoamatruda, nixklai
2021-10-22 05:34:00 +05:30
u-spec-png
0f6e60bb57 [tagesschau] Fix extractor (#1227)
Closes #1124
Authored by: u-spec-png
2021-10-22 05:09:50 +05:30
pukkandan
ef58c47637 [SponsorBlock] Obey extractor-retries and sleep-requests 2021-10-22 04:42:44 +05:30
pukkandan
19b824f693 Re-implement deprecated option --id
Despite `--title`, `--literal` etc being deprecated,
`--id` is still documented in youtube-dl and so should be kept
2021-10-22 04:42:24 +05:30
jfogelman
f0ded3dad3 [AdobePass] Fix RCN MSO (#1349)
Authored by: jfogelman
2021-10-22 01:06:03 +05:30
pukkandan
733d8e8f99 [build] Refactor pyinst.py and misc cleanup
Closes #1361
2021-10-21 20:11:05 +05:30
pukkandan
386cdfdb5b [build] Release windows exe built with py2exe
Closes: #855
Related: #661, #705, #890, #1024, #1160
2021-10-21 20:11:05 +05:30
pukkandan
6e21fdd279 [build] Enable lazy-extractors in releases
Set the environment variable `YTDLP_NO_LAZY_EXTRACTORS`
to forcefully disable lazy extractor loading
2021-10-21 19:41:33 +05:30
Ricardo
0e5927eebf [build] Build standalone MacOS packages (#1221)
Closes #1075 
Authored by: smplayer-dev
2021-10-21 16:18:46 +05:30
Ashish Gupta
27f817a84b [docs] Migrate issues to use forms (#1302)
Authored by: Ashish0804
2021-10-21 15:26:36 +05:30
pukkandan
d3c93ec2b7 Don't create console for subprocesses on Windows (#1261)
Closes #1251
2021-10-20 21:49:40 +05:30
pukkandan
b4b855ebc7 [fragment] Print error message when skipping fragment 2021-10-19 22:58:26 +05:30
pukkandan
2cda6b401d Revert "[fragments] Pad fragments before decrypting (#1298)"
This reverts commit 373475f035.
2021-10-19 22:58:25 +05:30
pukkandan
aa7785f860 [utils] Standardize timestamp formatting code
Closes #1285
2021-10-19 22:58:25 +05:30
pukkandan
9fab498fbf [http] Retry on socket timeout
Closes #1222
2021-10-19 22:58:24 +05:30
Nil Admirari
e619d8a752 [ModifyChapters] Do not mutate original chapters (#1322)
Closes #1295 
Authored by: nihil-admirari
2021-10-19 14:21:05 +05:30
Zirro
1e520b5535 Add option --no-batch-file (#1335)
Authored by: Zirro
2021-10-19 00:41:07 +05:30
pukkandan
176f1866cb Add HDR information to formats 2021-10-18 18:35:02 +05:30
pukkandan
17bddf3e95 Reduce default --socket-timeout 2021-10-18 16:40:12 +05:30
pukkandan
2d9ec70423 [ModifyChapters] Allow removing sections by timestamp
Eg: --remove-chapters "*10:15-15:00".
The `*` prefix is used so as to avoid any conflicts with other valid regex
2021-10-18 16:06:51 +05:30
pukkandan
e820fbaa6f Do not verify thumbnail URLs by default
Partially reverts cca80fe611 and 0ba692acc8

Unless `--check-formats` is specified, this can cause yt-dlp to return thumbnail URLs that do not work.
See https://github.com/yt-dlp/yt-dlp/issues/340#issuecomment-877909966, #402

But the overhead of verifying them in general use is not worth it

Closes #694, #725
2021-10-18 15:44:47 +05:30
pukkandan
b11d210156 [EmbedMetadata] Allow overwriting all default metadata
with `meta_default` key
2021-10-18 10:31:56 +05:30
pukkandan
24b0a72b30 [cleanup] Remove broken youtube login code 2021-10-18 09:25:51 +05:30
coletdjnz
aae16f6ed9 [youtube:comments] Fix comment section not being extracted in new layouts (#1324)
Co-authored-by: coletdjnz, pukkandan
2021-10-18 02:58:42 +00:00
shirt
373475f035 [fragments] Pad fragments before decrypting (#1298)
Closes #197, #1297, #1007
Authored by: shirt-dev
2021-10-18 08:14:20 +05:30
Ashish Gupta
920134b2e5 [Gronkh] Add extractor (#1299)
Closes #1293
Authored by: Ashish0804
2021-10-18 08:11:31 +05:30
Ashish Gupta
72ab768719 [SkyNewsAU] Add extractor (#1308)
Closes #1287
Authored by: Ashish0804
2021-10-18 08:09:50 +05:30
LE
01b052b2b1 [tbs] Add tbs live streams (#1326)
Authored by: llacb47
2021-10-18 07:58:20 +05:30
Ákos Sülyi
019a94f7d6 [utils] Use importlib to load plugins (#1277)
Authored by: sulyi
2021-10-18 07:16:49 +05:30
nyuszika7h
e69585f8c6 [7plus] Add cookie based authentication (#1202)
Closes #1103
Authored by: nyuszika7h
2021-10-18 07:04:56 +05:30
Damiano Amatruda
693ec74401 [on24] Add extractor (#1200)
Authored by: damianoamatruda
2021-10-18 07:02:46 +05:30
pukkandan
239df02103 Make duration_string and resolution available in --match-filter
Related: #1309
2021-10-17 17:39:33 +05:30
pukkandan
18f96d129b [utils] Allow duration strings in filter
Closes #1309
2021-10-17 17:39:33 +05:30
pukkandan
ec3f6640c1 [crunchyroll] Add season to flat-playlist
Closes #1319
2021-10-17 17:39:23 +05:30
pukkandan
dd078970ba [crunchyroll] Add support for beta.crunchyroll URLs
and fix series URLs with language code
2021-10-17 17:38:57 +05:30
pukkandan
71ce444a3f Fix --restrict-filename when used with default template 2021-10-17 01:03:04 +05:30
pukkandan
580d3274e5 [youtube] Expose different formats with same itag 2021-10-16 20:28:17 +05:30
pukkandan
03b4de722a [downloader] Fix slow progress hooks
Closes #1301
2021-10-16 20:02:40 +05:30
pukkandan
48ee10ee8a Fix conflict between id and ext in format selection
Closes #1282
2021-10-16 20:02:30 +05:30
Ashish Gupta
6ff34542d2 [Hotstar] Raise appropriate error for DRM 2021-10-16 14:08:52 +05:30
gustaf
e3950399e4 [Viafree] add support for Finland (#1253)
Authored by: 18928172992817182 (gustaf)
2021-10-14 17:34:40 +05:30
Ashish Gupta
974208e151 [trovo] Support channel clips and VODs (#1246)
Closes #229
Authored by: Ashish0804
2021-10-14 17:32:48 +05:30
pukkandan
883d4b1eec [YoutubeDL] Write verbose header to logger 2021-10-14 14:44:30 +05:30
pukkandan
a0c716bb61 [instagram] Show appropriate error when login is needed
Closes #1264
2021-10-14 14:44:29 +05:30
pukkandan
d5a39f0bad [http] Show the last encountered error
Closes #1262
2021-10-14 14:44:28 +05:30
Ashish Gupta
a64907d0ac [Hotstar] Mention Dynamic Range in format id (#1265)
Authored by: Ashish0804
2021-10-14 14:44:14 +05:30
pukkandan
6993f78d1b [extractor,utils] Detect more codecs/mimetypes
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29943
2021-10-13 05:05:29 +05:30
pukkandan
993191c0d5 Fix bug in c111cefa5d 2021-10-13 04:43:26 +05:30
pukkandan
fc5c8b6492 [aria2c] Fix --skip-unavailable-fragments 2021-10-13 04:14:12 +05:30
pukkandan
b836dc94f2 [outtmpl] Fix bug in expanding environment variables 2021-10-13 04:14:11 +05:30
pukkandan
c111cefa5d [downloader/ffmpeg] Improve simultaneous download and merge 2021-10-13 04:14:11 +05:30
pukkandan
975a0d0df9 Calculate more fields for merged formats
Closes #947
2021-10-13 04:14:11 +05:30
Ákos Sülyi
a387b69a7c [devscripts/run_tests] Use markers to filter tests (#1258)
`-k` filters using a substring match on test name.
`-m` checks markers for an exact match.
Authored by: sulyi
2021-10-13 00:24:27 +05:30
pukkandan
ecdc9049c0 [YouTube] Add auto-translated subtitles
Closes #1245
2021-10-12 15:21:32 +05:30
pukkandan
7b38649845 Fix verbose head not showing custom configs 2021-10-12 15:21:31 +05:30
pukkandan
e88d44c6ee [cleanup] Cleanup bilibili code
Closes #1169
Authored by pukkandan, u-spec-png
2021-10-12 15:21:31 +05:30
pukkandan
a2160aa45f [extractor] Generalize getcomments implementation 2021-10-12 15:21:30 +05:30
pukkandan
cc16383ff3 [extractor] Simplify search extractors 2021-10-12 15:21:30 +05:30
pukkandan
a903d8285c Fix bug in storyboards
Caused by 9359f3d4f0
2021-10-11 17:27:39 +05:30
pukkandan
9dda99f2fc [Merger] Do not add aac_adtstoasc to non-hls audio 2021-10-11 17:09:28 +05:30
pukkandan
ba10757412 [extractor] Detect EXT-X-KEY Apple FairPlay 2021-10-11 17:09:21 +05:30
pukkandan
e6faf2be36 [update] Clean up error reporting
Closes #1224
2021-10-11 09:58:24 +05:30
pukkandan
ed39cac53d Load archive only after printing verbose head
If there is some issue in loading the archive, the verbose head should still be visible in the logs
2021-10-11 09:49:52 +05:30
pukkandan
a169858f24 Fix check_formats output being written to stdout when -qv
Closes #1229
2021-10-11 09:49:52 +05:30
pukkandan
0481e266f5 [tiktok] Fix typo in 943d5ab133
and update tests
Closes #1226
2021-10-11 09:49:51 +05:30
Ashish Gupta
2c4bba96ac [EUScreen] Add Extractor (#1219)
Closes #1207
Authored by: Ashish0804
2021-10-11 03:36:27 +05:30
pukkandan
e8f726a57f [hidive] Fix typo in b5ae35ee6d 2021-10-10 11:44:44 +05:30
pukkandan
8063de5109 [version] update
:ci skip all
2021-10-10 04:03:13 +00:00
pukkandan
dec0d56fa9 Release 2021.10.10 2021-10-10 09:32:01 +05:30
pukkandan
21186af70a [downloader] Fix throttledratelimit
The timer should not reset at start of each block
2021-10-10 09:32:00 +05:30
pukkandan
84999521c8 [build] Allow releasing without a changelog
so that forks can build using GHA easily
2021-10-10 09:32:00 +05:30
pukkandan
d1d5c08f29 [minicurses] Fix when printing to file
Closes #1215
2021-10-10 09:31:59 +05:30
Bojidar Qnkov
2e01ba6218 [NovaPlay] Add extractor (#1209)
Authored by: Bojidarist
2021-10-10 05:41:10 +05:30
pukkandan
c9652aa418 [docs] Remove incorrect dependency on VC++10
Closes #1163
2021-10-10 04:47:48 +05:30
pukkandan
91b6c884c9 Revert "[ffmpeg] Set max probesize to workaround AAC HLS stream issues (#1109)"
This reverts commit 250a938de8.

This is no longer necessary since 7687c8ac6e
2021-10-10 04:47:48 +05:30
Felix S
28fe35b4e3 [francetv] Update extractor (#1096)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/29996
Closes: https://github.com/yt-dlp/yt-dlp/issues/970, https://github.com/ytdl-org/youtube-dl/issues/29956, https://github.com/ytdl-org/youtube-dl/issues/29957, https://github.com/ytdl-org/youtube-dl/issues/29969, https://github.com/ytdl-org/youtube-dl/issues/29990, https://github.com/ytdl-org/youtube-dl/issues/30010

Authored by: fstirlitz, sarnoud
2021-10-10 03:20:17 +05:30
pukkandan
aa9a92fdbb [downloader/ffmpeg] Fix bug in initializing FFmpegPostProcessor
When `FFmpegFD` initializes the PP, it passes `self` as the `downloader`
But it does not have a `_postprocessor_hooks` attribute

Closes #1211
2021-10-10 02:23:50 +05:30
pukkandan
a170527e1f [version] update
:ci skip all
2021-10-09 19:11:24 +00:00
pukkandan
90d55df330 Release 2021.10.09 2021-10-10 00:40:35 +05:30
Ashish Gupta
81bcd43a03 [HotStarSeries] Fix cookies (#1187)
Authored by: Ashish0804
2021-10-09 23:57:08 +05:30
pukkandan
b5ae35ee6d [cleanup] Misc cleanup 2021-10-09 22:32:00 +05:30
pukkandan
4e3b637d5b Merge webm formats into mkv if thumbnails are to be embedded
This was originally implemented in 4d971a16b8 (#173) by @damianoamatruda
but was reverted in 3b297919e0
since it was unintentionally being triggered for `write_thumbnail` (See #500)
2021-10-09 22:19:23 +05:30
Jules-A
8cd69fc407 [Funimation] Fix for /v/ urls (#1196)
Closes #993 
Authored by: pukkandan, Jules-A
2021-10-09 20:51:41 +05:30
pukkandan
2614f64600 [utils] Let traverse_obj accept functions as keys 2021-10-09 20:49:07 +05:30
pukkandan
b922db9fe5 [http] Respect user-provided chunk size over extractor's 2021-10-09 20:49:07 +05:30
pukkandan
f2cad2e496 [Hidive] Fix subtitles broken by 705e7c2005 2021-10-09 20:49:00 +05:30
u-spec-png
d6124e191e [bilibili] Fix bug in efc947fb3e
Authored by: u-spec-png
2021-10-09 07:34:02 +05:30
timethrow
8c6f4daa4c [docs] Write embedding and contributing documentation (#528)
Authored by: pukkandan, timethrow
2021-10-09 06:38:01 +05:30
coletdjnz
ac56cf38a4 [youtube:tab] Fallback to API when webpage fails to download (#1122)
and add some extractor_args to force this mode
Authored by: coletdjnz
2021-10-09 02:49:25 +05:30
Damiano Amatruda
c08b8873ea [ciscowebex] Add extractor (#1199)
Authored by: damianoamatruda
2021-10-09 01:06:27 +05:30
pukkandan
819e05319b Improved progress reporting (See desc) (#1125)
* Separate `--console-title` and `--no-progress`
* Add option `--progress` to show progress-bar even in quiet mode
* Fix and refactor `minicurses`
* Use `minicurses` for all progress reporting
* Standardize use of terminal sequences and enable color support for windows 10
* Add option `--progress-template` to customize progress-bar and console-title
* Add postprocessor hooks and progress reporting

Closes: #906, #901, #1085, #1170
2021-10-09 00:41:59 +05:30
u-spec-png
fee3f44f5f [Streamable] Add codecs (#1189)
Authored by: u-spec-png
2021-10-07 20:02:42 +05:30
pukkandan
705e7c2005 [Hidive] Fix duplicate and incorrect formats 2021-10-06 11:23:48 +05:30
pukkandan
49e7e9c3ce [docs,build] Change all pycryptodome references to pycryptodomex 2021-10-06 06:45:45 +05:30
pukkandan
8472674399 [FixupM3u8] Do not run if merge is needed
We pass the relevant arguments to the merger, so a separate fixup is redundant
2021-10-06 05:45:19 +05:30
pukkandan
1276a43a77 [youtube] Fix non-fatal errors in fetching player 2021-10-06 05:45:19 +05:30
pukkandan
519804a92f bugfix for 80c03fa98f 2021-10-06 05:45:18 +05:30
pukkandan
1b6bb4a85a [reddit] bugfix for 8e3fd7e034 2021-10-06 05:45:18 +05:30
pukkandan
644149afec [soundcloud:playlist] Detect last page correctly
Closes #1168
2021-10-06 05:45:17 +05:30
pukkandan
4e3d1898a8 Workaround ssl errors in mingw python
Closes #1151
2021-10-06 05:45:16 +05:30
shirt
f85e6be42e [build] Use pycryptodomex for PyInstaller (#1179) 2021-10-05 13:37:58 -04:00
coletdjnz
762e509d91 [Mediaite] Relax valid url (#1158)
Closes #1131
Authored by: coletdjnz
2021-10-05 01:00:57 +05:30
i6t
d92125aeba [GoPro] Add extractor (#1167)
Fixes: https://github.com/ytdl-org/youtube-dl/issues/30044
Authored by: i6t
2021-10-05 00:53:37 +05:30
makeworld
0f0ac87be3 [CBC] Cleanup tests (#1162)
Related: #1013 
Authored by: makeworld-the-better-one
2021-10-05 00:41:00 +05:30
u-spec-png
755203fc3f [parliamentlive.tv] Fix extractor (#1153)
Closes #1139 
Authored by: u-spec-png
2021-10-05 00:39:00 +05:30
MinePlayersPE
943d5ab133 [Douyin] Rewrite extractor (#1157)
Closes #1121
Authored by: MinePlayersPE
2021-10-05 00:31:33 +05:30
u-spec-png
3001a84dca [Newgrounds] Add age_limit and fix duration (#1156)
Authored by: u-spec-png
2021-10-05 00:28:02 +05:30
u-spec-png
ebf2fb4d61 [Vupload] Add extractor (#1146)
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29877
Authored by: u-spec-png
2021-10-05 00:12:24 +05:30
u-spec-png
efc947fb3e [Bilibili] Add subtitle converter (#1144)
Closes #1015
Based on https://github.com/y2361547758/bcc2ass
Authored by: u-spec-png
2021-10-05 00:07:05 +05:30
pukkandan
b11c04a8ae Fix -f mp4 behaving differently from youtube-dl 2021-10-04 03:08:28 +05:30
pukkandan
5d535b4a55 [build] Allow building with py2exe (and misc fixes)
py2exe config is copied from youtube-dl
Closes #1160
2021-10-04 03:08:27 +05:30
pukkandan
a1c3967307 [EmbedSubtitle, SubtitlesConvertor] Fix error when subtitle file is missing
Closes #1152, #1134
Bug from 8e25d624df
2021-10-04 03:08:26 +05:30
pukkandan
e919569e67 [funimation] Sort formats according to the relevant extractor-args 2021-10-04 03:08:26 +05:30
Ákos Sülyi
ff1dec819a [aes] Improve performance slightly (#1135)
Authored by: sulyi
2021-10-03 00:20:39 +05:30
Felix S
9359f3d4f0 [extractor] Extract storyboards from SMIL manifests (#1128)
Authored by: fstirlitz
2021-10-03 00:13:42 +05:30
Aleri Kaisattera
0eaec13ba6 [Theta] Add video extractor (#1137)
Authored by: alerikaisattera
2021-10-02 00:15:15 +05:30
jfogelman
ad095c4283 [adobepass] Add RCN as MSO (#1129)
Authored by: jfogelman
2021-09-30 21:14:20 +05:30
pukkandan
e6f21b3d92 [docs,cleanup] Some minor refactoring and improve docs 2021-09-30 03:32:52 +05:30
pukkandan
d710cc6d36 [docs] Add note about our custom ffmpeg builds 2021-09-30 03:32:49 +05:30
pukkandan
3ae5e79774 [postprocessor] Add plugin support
Adds option `--use-postprocessor` to enable them
2021-09-30 03:32:46 +05:30
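
A hedged sketch of what such a plugin might look like; the file location `ytdlp_plugins/postprocessor/sample.py` and the `SamplePluginPP` name follow the plugin layout used at the time, and should be treated as illustrative.

```py
# ytdlp_plugins/postprocessor/sample.py  (illustrative location)
from yt_dlp.postprocessor.common import PostProcessor

class SamplePluginPP(PostProcessor):
    def run(self, info):
        self.to_screen('Post-processed %r' % info.get('title'))
        # Return (files to delete, possibly modified info dict)
        return [], info

# Enabled on the command line with something like: --use-postprocessor SamplePlugin
```
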
pukkandan
8e3fd7e034 [reddit] Fix 429 by generating a random reddit_session
Related: a76e2e0f88, #1014, https://github.com/ytdl-org/youtube-dl/issues/29986
Original PR: https://github.com/ytdl-org/youtube-dl/pull/30017
Authored by: AjaxGb
2021-09-30 03:32:44 +05:30
pukkandan
80c03fa98f Allow empty output template to skip a type of file
Closes #760, #1111
2021-09-30 03:32:43 +05:30
pukkandan
1f2a268bd3 [embedsubtitle] Fix error when duration is unknown 2021-09-30 03:32:41 +05:30
pukkandan
804ca01cc7 [build] Add more files to the tarball
Closes #1099
2021-09-30 03:32:38 +05:30
i6t
851876095b [Gettr] Add extractor (#1120)
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29589
Authored by: i6t
2021-09-29 15:53:56 +05:30
ajj8
2d997542ca [bbc] Extract better quality videos (#1113)
mobile-tablet-main only provides 540p25, so it shouldn't be used for the first attempt. Instead pc provides up to 720p50

Authored by: ajj8
2021-09-29 04:07:33 +05:30
pukkandan
7756277882 Workaround for bug in ssl.SSLContext.load_default_certs (#1118)
* Remove old compat code
* Load certificates only when not using nocheckcertificate
* Load each certificate individually

Closes #1060
Related bugs.python.org/issue35665, bugs.python.org/issue4531
2021-09-29 03:07:23 +05:30
shirt
7687c8ac6e [HLS] Fix decryption issues (#1117)
* Unpad HLS fragments with PKCS#7 according to datatracker.ietf.org/doc/html/rfc8216
* media_sequence should only be incremented for media fragments
* The native decryption should only be used if ffmpeg is unavailable since it is significantly slower. Closes #1086

Authored by: shirt-dev, pukkandan
2021-09-29 00:23:24 +05:30
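
For reference, the PKCS#7 unpadding mentioned in the first point amounts to reading the last byte of the decrypted data and dropping that many bytes; a minimal sketch, not the downloader's exact code.

```py
def pkcs7_unpad(data: bytes) -> bytes:
    # The last byte of a PKCS#7-padded block is the number of padding bytes (1-16 for AES)
    if not data:
        return data
    pad_len = data[-1]
    return data[:-pad_len] if 1 <= pad_len <= 16 else data
```
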
Ashish Gupta
80c360d7aa [LinkedInLearning] Fix newline bug in subtitles (#1104)
Authored by: Ashish0804
2021-09-28 16:06:31 +05:30
shirt
250a938de8 [ffmpeg] Set max probesize to workaround AAC HLS stream issues (#1109)
Fixes: #618, #998, #1039

Authored by: shirt-dev
2021-09-28 04:12:33 +05:30
Ashish Gupta
f1d42a83ab [Rumble] Add RumbleChannelIE (#1088)
Authored by: Ashish0804
2021-09-28 02:31:23 +05:30
ChillingPepper
3cf4b91dc5 [SovietsCloset] Add duration from m3u8 (#908)
Authored by: ChillingPepper
2021-09-28 02:30:41 +05:30
u-spec-png
fecb20a503 [N1] Add extractor (#1080)
Authored by: u-spec-png
2021-09-28 01:40:51 +05:30
pukkandan
360167b9fc Fix --flat-playlist when neither IE nor id is known 2021-09-27 11:29:17 +05:30
pukkandan
28234287f1 [update] Check for new version even if not updateable 2021-09-27 11:29:17 +05:30
pukkandan
91dd88b90f [outtmpl] Alternate form of format type l for \n delimited list 2021-09-27 11:29:16 +05:30
Aleri Kaisattera
d31dab7084 [vidme] Remove extractor (#1095)
Authored by: alerikaisattera
2021-09-27 07:42:44 +05:30
u-spec-png
c470901ccf [reddit] Add embedded url (#1090)
Authored by: u-spec-png
2021-09-26 18:58:22 +05:30
i6t
2333ea1029 [Veo] Add extractor (#1084)
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29445
Authored by: i6t
2021-09-26 04:09:45 +05:30
u-spec-png
9a13345439 [PolskieRadio] Fix extractors (#1082)
Closes #1033
Authored by: jakubadamw, u-spec-png
2021-09-26 04:00:22 +05:30
pukkandan
524e2e4fda [outtmpl] Format type U for unicode normalization 2021-09-26 01:41:01 +05:30
Matt Broadway
f440b14f87 [cookies] Fix keyring fallback (#1078)
The password returned by `security find-generic-password` has a newline at the end

Closes #1073
Authored by: mbway
2021-09-25 21:04:16 +05:30
Ashish Gupta
8dc831f715 [LinkedInLearning] Add subtitles (#1077)
Authored by: Ashish0804
Closes #1072
2021-09-25 16:55:33 +05:30
u-spec-png
e99b2d2771 [Newgrounds] Fix view count on songs (#1071)
Authored by: u-spec-png
2021-09-25 06:42:30 +05:30
pukkandan
1fed277349 [version] update
:ci skip all
2021-09-25 00:59:59 +00:00
pukkandan
0ef787d773 Release 2021.09.25 2021-09-25 06:28:05 +05:30
pukkandan
a5de4099cb [build] Fix brew tap 2021-09-25 06:28:05 +05:30
pukkandan
ff1c7fc9d3 Allow 0 in --playlist-items 2021-09-25 03:31:35 +05:30
pukkandan
600e900300 [zdf] Improve format sorting
Closes #910
2021-09-24 07:47:00 +05:30
f4pp3rk1ng
20b91b9b63 [SpankBang] Fix uploader (#892)
Closes #833 
Authored by: f4pp3rk1ng, coletdjnz
2021-09-24 06:36:30 +05:30
pukkandan
4c88ff87fc [build] Improve release process (#880)
* Automate more of the release process by animelover1984, pukkandan - closes #823
* Fix sha256 by nihil-admirari - closes #385
* Bring back brew taps by nao20010128nao #865
* Provide `--onedir` zip for windows by pukkandan - Closes #1024, #661, #705 and #890

Authored by: pukkandan, animelover1984, nihil-admirari, nao20010128nao
2021-09-24 06:31:43 +05:30
renalid
e27cc5d864 [Arte] Improve description extraction (#1046)
Authored by: renalid
2021-09-24 06:26:15 +05:30
Aleri Kaisattera
eb6d4ad1ca [Theta] Add extractor (#1068)
Authored by: alerikaisattera
2021-09-24 06:23:51 +05:30
coletdjnz
99e9e001de [youtube] Cleanup authentication code (#786)
Authored by: coletdjnz
2021-09-24 06:22:17 +05:30
pukkandan
51ff9ca0b0 [xattr] bugfix for b19404591a 2021-09-24 06:20:42 +05:30
pukkandan
b19404591a Separate the options --ignore-errors and --no-abort-on-error
In youtube-dl, `-i` ignores both download and post-processing error, and
treats the download as successful even if the post-processor fails.

yt-dlp used to skip the entire video on either error and there was no
option to ignore the post-processing errors like youtube-dl does.

By splitting the option into two, now either just the download errors
(--no-abort-on-error, default on CLI) or all errors (--ignore-errors)
can be ignored as per the users' needs

Closes #893
2021-09-24 06:05:35 +05:30
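
For embedding, the split maps onto the `ignoreerrors` parameter — assuming the documented values, where `'only_download'` corresponds to `--no-abort-on-error` and `True` to `--ignore-errors`.

```py
from yt_dlp import YoutubeDL

# Ignore only download errors, like --no-abort-on-error (the new CLI default)
ydl_download_errors_only = YoutubeDL({'ignoreerrors': 'only_download'})

# Ignore download *and* post-processing errors, like --ignore-errors
ydl_all_errors = YoutubeDL({'ignoreerrors': True})
```
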
pukkandan
1f8471e22c Ignore empty entries in _list_from_options_callback 2021-09-24 05:14:19 +05:30
pukkandan
77c4a9ef68 Download subtitles in order of --sub-langs
Closes #1041
2021-09-24 05:14:19 +05:30
pukkandan
8f70b0b82f [cbs] Report appropriate error for DRM
Closes #1056
2021-09-24 05:14:18 +05:30
pukkandan
be867b03f5 bugfix for bd50a52b0d 2021-09-24 05:14:16 +05:30
pukkandan
1813a6ccd4 [youtube] Fix --mark-watched with --cookies-from-browser
Closes #1019
2021-09-24 05:14:16 +05:30
pukkandan
8100c77223 [lbry] Show error message from API response 2021-09-24 05:14:15 +05:30
Ashish Gupta
9ada988bfc [Koo] Add extractor (#1044)
Authored by: Ashish0804
2021-09-23 23:45:17 +05:30
Ashish Gupta
d1a7768432 [Chingari] Add extractors (#1038)
Authored by: Ashish0804
2021-09-23 23:31:55 +05:30
NeroBurner
49fa4d9af7 [atv.at] Use jwt for API (#1012)
The jwt token is implemented according to RFC7519

Closes #988
Authored by: NeroBurner
2021-09-23 23:10:51 +05:30
The Hatsune Daishi
ee2b3563f3 [downloader/niconico] Pass custom headers (#1063)
Closes #1057
Authored by: nao20010128nao
2021-09-23 14:36:48 +05:30
Glenn Slayden
bdc196a444 [cleanup] Fix line endings for nebula.py (#1064)
:ci skip
Authored by: glenn-slayden
2021-09-23 14:35:01 +05:30
Ashish Gupta
388bc4a640 [Hotstar] Add referer for subs (#1062)
Authored by: Ashish0804
2021-09-23 14:30:49 +05:30
pukkandan
50eff38c1c bugfix for a21e0ab1a1
Closes #1061
2021-09-23 11:49:00 +05:30
nixxo
4be9dbdc24 [comedycentral] Support collection-playlist (#1058)
Authored by: nixxo
2021-09-23 11:45:54 +05:30
pukkandan
a21e0ab1a1 [ffmpeg] Add aac_adtstoasc when merging if needed
Related: #1039
2021-09-22 19:51:58 +05:30
pukkandan
a76e2e0f88 [reddit] Workaround for 429 by redirecting to old.reddit.com
Closes #1014
2021-09-22 19:51:57 +05:30
The Hatsune Daishi
bd50a52b0d Basic framework for simultaneous download of multiple formats (#1036)
Authored by: nao20010128nao
2021-09-22 19:42:04 +05:30
Sipherdrakon
c12977bdc4 [AnimalPlanet] Fix extractor (#1050)
Authored by: Sipherdrakon
2021-09-22 19:39:45 +05:30
ChillingPepper
f6d8776d34 [SovietsCloset] Fix playlists for games with only named categories
Authored by: ConquerorDopy
2021-09-22 07:40:02 +05:30
pukkandan
d806c9fd97 [docs,cleanup] Add deprecation warning in docs
for some counter-intuitive behaviour that may be removed in the future.

and fix linter
2021-09-22 05:50:11 +05:30
pukkandan
5e3f2f8fc4 [youtube] Return full URL instead of just ID 2021-09-22 05:37:41 +05:30
pukkandan
1009f67c2a [fragment,aria2c] Generalize and refactor some code 2021-09-22 05:27:07 +05:30
pukkandan
bd6f722de8 dump files should obey --trim-filename (#1043)
Authored by: sulyi
2021-09-22 05:25:17 +05:30
pukkandan
d9d8b85747 [fragment] Fix range header when using -N and media sequence (#1048)
Authored by: shirt
2021-09-22 04:19:45 +05:30
pukkandan
daf7ac2b92 [fragment] Avoid repeated request for AES key 2021-09-22 01:15:16 +05:30
pukkandan
96933fc1b6 [aria2c] Fix IV for some AES-128 streams
Authored by: shirt
2021-09-22 00:20:41 +05:30
makeworld
0d32e124c6 [CBC] Fix CBC Gem extractors (#1013)
Closes #936
Authored by: makeworld-the-better-one
2021-09-20 03:43:26 +05:30
u-spec-png
cb2ec90e91 [Peertube] Add channel extractor (#1023)
Authored by: u-spec-png
2021-09-19 23:17:41 +05:30
pukkandan
3cd786dbd7 [youtube] Warn when trying to download clips 2021-09-19 19:41:10 +05:30
pukkandan
1b629e1b4c [test/cookies] Improve logging 2021-09-19 19:41:09 +05:30
u-spec-png
8f8e8eba24 [Nuvid] Fix extractor (#1022)
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29886
Authored by: u-spec-png
2021-09-19 17:56:29 +05:30
Ákos Sülyi
09906f554d [aes] Add aes_gcm_decrypt_and_verify (#1020)
Authored by: sulyi, pukkandan
2021-09-19 17:52:31 +05:30
Yuan Chao
a63d9bd0b0 [CGTN] Add extractor (#981)
Authored by: chao813
2021-09-19 17:48:22 +05:30
pukkandan
f137e4c27c [utils] Improve extract_timezone
Code taken from: https://github.com/ytdl-org/youtube-dl/pull/29845
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29948
Authored by: dirkf
2021-09-19 17:45:49 +05:30
nyuszika7h
4762621925 [videa] Fix some extraction errors (#1028)
Authored by: nyuszika7h
2021-09-19 17:07:50 +05:30
pukkandan
57aa7b8511 [hls] Byterange + AES128 is supported by native downloader 2021-09-19 14:20:54 +05:30
pukkandan
9c1c3ec016 [Oreilly] Bugfix for 7738bd3272 2021-09-19 14:20:53 +05:30
DigitalDJ
f9cc0161e6 [extractor] Fix root-relative URLs in MPD (#1006)
Authored by: DigitalDJ
2021-09-19 14:07:57 +05:30
Nil Admirari
c6af2dd8e5 [SponsorBlock] Improve merge algorithm (#999)
Authored by: nihil-admirari
2021-09-19 08:38:50 +05:30
Mohammad Khaled AbouElSherbini
7738bd3272 [Oreilly] Handle new web url (#990)
The change in URL is most likely a server-side issue, but we can work around it with a simple substitution

Authored by: MKSherbini
2021-09-18 17:03:06 +05:30
pukkandan
7c37ff97d3 Allow alternate fields in outtmpl
Closes #899, #1004
2021-09-18 16:41:01 +05:30
The Hatsune Daishi
d47f46e17e [damtomo] Add extractor (#992)
Authored by: nao20010128nao
2021-09-18 11:25:17 +05:30
coletdjnz
298bf1d275 [itv] Prefer last matching featureset (#1001)
Bug fix for #986
Authored by: coletdjnz
2021-09-18 02:25:49 +05:30
Aleri Kaisattera
d1b39ad844 [CAM4] Add extractor (#1010)
Authored by: alerikaisattera
2021-09-18 02:24:17 +05:30
pukkandan
edf65256aa [hls,aes] Fallback to native implementation for AES-CBC
and detect `Cryptodome` in addition to `Crypto`

Closes #935
Related: #938
2021-09-18 00:55:58 +05:30
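
The detection described above boils down to an import fallback along these lines (a sketch, not the project's exact code):

```py
try:
    from Cryptodome.Cipher import AES  # provided by pycryptodomex
except ImportError:
    try:
        from Crypto.Cipher import AES  # provided by pycryptodome
    except ImportError:
        AES = None  # fall back to the pure-Python AES implementation
```
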
pukkandan
7303f84abe [options] Fix --no-config and refactor reading of config files
Closes #912, #914
2021-09-18 00:11:11 +05:30
pukkandan
f5aa5cfbff Add format type B for outtmpl to treat the value as bytes
This is useful to limit the filename to a certain number of bytes rather than characters
Closes #1003
2021-09-18 00:11:11 +05:30
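
A small usage sketch of the new format type, assuming the `%(field).<N>B` form where the precision is applied after converting the value to bytes:

```py
from yt_dlp import YoutubeDL

# Truncate the title to at most 200 *bytes* (not characters) in the output filename
ydl = YoutubeDL({'outtmpl': '%(title).200B.%(ext)s'})
```
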
Aleri Kaisattera
f1f6ca78b4 [Streamanity] Add Extractor (#984)
Authored by: alerikaisattera
2021-09-16 23:45:10 +05:30
Ashish Gupta
2fac2e9136 [Mediaite] Add Extractor (#973)
Closes #969 
Authored by: Ashish0804
2021-09-16 23:42:45 +05:30
Ashish Gupta
23dd2d9a32 [NDR] Rewrite NDRIE (#962)
Closes #959 
Authored by: Ashish0804
2021-09-16 23:41:55 +05:30
Ashish Gupta
b89378a69a [globo] Fix GloboIE (#994)
Closes #991 
Authored by: Ashish0804
2021-09-16 23:01:39 +05:30
pukkandan
0001fcb586 Add option --netrc-location
Closes #792, #963
2021-09-16 01:28:55 +05:30
pukkandan
c589c1d395 [compat] Don't ignore HOME (if set) on windows
Related: #792
2021-09-16 01:28:54 +05:30
pukkandan
f7590d4764 [vrv] Don't raise error when thumbnails are missing
Closes #983
2021-09-16 01:28:53 +05:30
pukkandan
dbf7eca917 [soundcloud] Update _CLIENT_ID
Related: #975
2021-09-16 01:28:52 +05:30
pukkandan
d21bba7853 [options] Strip spaces in list-like switches 2021-09-16 01:28:51 +05:30
Ashish Gupta
a8cb7eca61 [HiDive] Fix extractor (#958)
Closes #952, #408
Authored by: Ashish0804
2021-09-15 07:34:54 +05:30
nyuszika7h
92790da2bb [radlive] Add new extractor (#870)
Closes #312
Authored by: nyuszika7h
2021-09-15 07:15:10 +05:30
Sipherdrakon
b5a39ed43b [DIYNetwork] Support new format (#934)
Authored by: Sipherdrakon
2021-09-15 05:55:03 +05:30
LE
cc33cc4395 [VrtNU] Handle login errors (#977)
Authored by: llacb47
2021-09-15 02:28:49 +05:30
Ashish Gupta
1722099ded [Mxplayer] Use mobile API (#966)
Authored by: Ashish0804
2021-09-15 02:23:36 +05:30
Ákos Sülyi
40b18348e7 [cleanup] Improve make clean-test (#972)
Authored by: sulyi
2021-09-14 23:53:47 +05:30
u-spec-png
e9a30b181e [Peertube] Add playlist extractor (#957)
Authored by: u-spec-png
2021-09-14 09:25:26 +05:30
zenerdi0de
9c95ac677e [Fancode] Fix live streams (#961)
Authored by: zenerdi0de
2021-09-13 21:10:32 +05:30
coletdjnz
ea706726d6 [ITV] Fix extractor, add subtitles and thumbnails (#913)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/28955 (see also https://github.com/ytdl-org/youtube-dl/issues/28906#issuecomment-831008270)

Closes #861, https://github.com/ytdl-org/youtube-dl/issues/28906, https://github.com/ytdl-org/youtube-dl/issues/29337, https://github.com/ytdl-org/youtube-dl/issues/29190, https://github.com/ytdl-org/youtube-dl/issues/28939, https://github.com/ytdl-org/youtube-dl/issues/29620

Authored-by: coletdjnz, sleaux-meaux, Vangelis66
2021-09-13 02:26:19 +05:30
pukkandan
f60990ddfc [peertube] Update instances (#957)
Authored by: u-spec-png
2021-09-13 02:02:28 +05:30
pukkandan
ad226b1dc9 [funimation] Fix for locations outside US
Closes #868
Authored by: Jules-A, pukkandan
2021-09-12 21:40:37 +05:30
pukkandan
ca46b94134 [cookies] Make browser names case insensitive 2021-09-12 21:40:37 +05:30
pukkandan
67ad7759af [brightcove] Extract subtitles from manifests 2021-09-12 21:40:36 +05:30
pukkandan
d5fe04f5c7 Fix --compat-option no-direct-merge 2021-09-12 21:40:28 +05:30
dalan
03c862794f [9Now] handle episodes of series (#896)
Authored by: dalanmiller
2021-09-12 17:41:24 +05:30
MinePlayersPE
0fd6661edb [TikTokUser] Fix extractor using mobile API (#925)
and misc cleanup

Closes #859
Authored by: MinePlayersPE, llacb47
2021-09-12 11:51:59 +05:30
u-spec-png
02c7ae8104 [Newgrounds] Add NewgroundsUserIE and improve extractor (#942)
Authored by: u-spec-png
2021-09-12 11:07:44 +05:30
Ashish Gupta
16f7e6be3a [bilibili]Add BiliIntlIE and BiliIntlSeriesIE (#907)
Closes #611 
Authored by: Ashish0804
2021-09-11 18:59:48 +05:30
Ashish Gupta
ffecd3034b [MuseScore] Add Extractor (#918)
Closes #911 
Authored by: Ashish0804
2021-09-11 18:51:11 +05:30
Felix S
1c5ce74c04 [zype] Extract subtitles from the m3u8 manifest (#948)
Closes #929
Authored by: fstirlitz
2021-09-11 15:46:03 +05:30
pukkandan
81a136b80f [WebVTT] Adjust parser to accommodate PBS subtitles (#922)
Closes #921
2021-09-08 16:10:10 +05:30
coletdjnz
eab3f867e2 [nzherald] Add NZHeraldIE (#909)
Authored-by: coletdjnz

Related: https://github.com/ytdl-org/youtube-dl/issues/28267
2021-09-07 22:49:57 +00:00
coletdjnz
a7e999beec [pbs] Fix subtitle extraction (#813)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/24430, https://github.com/ytdl-org/youtube-dl/pull/17434
Closes: #836, https://github.com/ytdl-org/youtube-dl/issues/18796, https://github.com/ytdl-org/youtube-dl/issues/17273
Authored-by: coletdjnz, gesa, raphaeldore
2021-09-08 02:29:20 +05:30
Ashish
71407b3eca [Olympics] Add replay extractor (#905)
Closes #897 
Authored by: Ashish0804
2021-09-07 23:05:27 +05:30
Ashish
dc9de9cbd2 [Yandex] Add ZenYandexIE and ZenYandexChannelIE (#900)
Authored by: Ashish0804
2021-09-07 23:03:19 +05:30
Poschi
92ddaa415e [gotostage] Add extractor (#883)
Authored by: poschi3
2021-09-07 22:41:56 +05:30
coletdjnz
b6de707d13 [youtube] Improvements to JS player extraction (See desc) (#860)
* fallback player url extraction when it fails to be extracted from the webpage
* don't download js player unnecessarily for clients that don't require it
* try to extract js player url from any additional client configs
* ability to skip the js player usage/download using `player_skip=js`
* ability to skip the initial webpage download using `player_skip=webpage`

known issue:
* authentication for multi-channel accounts and multi-account cookies may not work correctly if the webpage or client configs are skipped
*  formats from the web client requiring signature decryption will be skipped if player js extraction is skipped

Authored by: coletdjnz
2021-09-06 12:56:41 +05:30
coletdjnz
bccdbd22d5 [Mediaklikk] Add Extractor (#867)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/17453, https://github.com/ytdl-org/youtube-dl/pull/25098
Fixes: https://github.com/ytdl-org/youtube-dl/issues/21431
Authored-by: tmarki, mrx23dot, coletdjnz
2021-09-06 12:22:38 +05:30
MinePlayersPE
bd9ff55bcd [tiktok] Use API to fetch higher quality video (#843)
Authored by: MinePlayersPE, llacb47
2021-09-05 11:16:27 +05:30
pukkandan
526d74ec5a [cleanup] Misc 2021-09-05 11:16:23 +05:30
pukkandan
e04a1ff92e [soundcloud] Retry playlist pages on 502 error
Closes #872
2021-09-05 10:48:40 +05:30
pukkandan
aa6c25309a [soundcloud] Make playlist extraction lazy 2021-09-05 10:28:28 +05:30
pukkandan
d98b006b85 [dw] Fix extractor
Closes #830
2021-09-05 10:28:28 +05:30
pukkandan
265a7a8ee5 [redtube] Fix exts
Closes #464
2021-09-05 06:32:11 +05:30
pukkandan
826446bd82 [plutotv] Fix extractor for URLs with /en
Closes #431
2021-09-05 06:32:10 +05:30
The Hatsune Daishi
bc79491368 [17live] Add 17.live extractor (#866)
Authored by: nao20010128nao
2021-09-05 04:07:28 +05:30
ChillingPepper
421ddcb8b4 [SovietsCloset] Add extractor (#884)
Authored by: ChillingPepper
2021-09-04 17:59:35 +05:30
coletdjnz
c0ac49bcca [youtube] Retry on 'Unknown Error' (#854)
and do not repeat unimportant alerts

Closes #839
Authored by: coletdjnz
2021-09-04 08:03:42 +05:30
coletdjnz
02def2714c [southpark] Fix SouthParkDE (#812)
This was broken by ee1e05581e
Authored by: coletdjnz
2021-09-04 08:01:47 +05:30
pukkandan
f9be9cb9fd [cookies] Print warning for cookie decoding error only once
Closes #889
2021-09-04 07:52:47 +05:30
pukkandan
4614bc22c1 Allow --force-write-archive to work with --flat-playlist
Related: #876
2021-09-04 03:07:29 +05:30
pukkandan
8e5fecc88c Handle more playlist errors with -i 2021-09-04 03:07:27 +05:30
pukkandan
165efb823b [ModifyChapters] fixes (See desc)
* [docs] Fix typo
* Do not enable `sponskrub` by default
* Fix `--force-keyframes-at-cuts`
* Don't embed subtitles if the video has been cut. Previously, running `--remove-chapters` with `--embed-subs` multiple times caused repeated cuts and out-of-sync subtitles
* Store `_real_duration` to prevent running ffprobe multiple times
2021-09-04 01:39:31 +05:30
pukkandan
dd594deb2a Fix --no-get-comments
Closes #882
2021-09-04 01:39:30 +05:30
pukkandan
409e18286e Fix extra_info being reused across runs
58adec4677 was supposed to solve this, but ended up being an incomplete fix
Closes #727
2021-09-04 01:39:29 +05:30
pukkandan
8113999995 Fix --compat-option playlist-index 2021-09-04 01:39:27 +05:30
pukkandan
8026e50152 [version] update
:ci skip all
2021-09-02 05:33:38 +05:30
pukkandan
9ee4f0bb5b Release 2021.09.02 2021-09-02 04:43:38 +05:30
pukkandan
be4d9f4cd9 Partially revert "[build] Add homebrew taps (#827)" 2021-09-02 04:43:38 +05:30
pukkandan
347182a0cd Show a more useful error in older python versions 2021-09-02 03:52:08 +05:30
pukkandan
a7429aa9fa [youtube] Fix subtitle names 2021-09-02 02:26:27 +05:30
Nil Admirari
7a340e0df3 Native SponsorBlock implementation and related improvements (#360)
SponsorBlock options:
* The fetched sponsor sections are written to infojson
* `--sponsorblock-remove` removes specified chapters from file
* `--sponsorblock-mark` marks the specified sponsor sections as chapters
* `--sponsorblock-chapter-title` to specify sponsor chapter template
* `--sponsorblock-api` to use a different API

Related improvements:
* Split `--embed-chapters` from `--embed-metadata`
* Add `--remove-chapters` to remove arbitrary chapters
* Add `--force-keyframes-at-cuts` for more accurate cuts when removing and splitting chapters

Deprecates all `--sponskrub` options

Authored by: nihil-admirari, pukkandan
2021-09-02 02:25:16 +05:30
ouwou
f0e5366335 [reddit] Fix for quarantined subreddits (#848)
Authored by: ouwou
2021-09-02 00:24:31 +05:30
nyuszika7h
49ca8db06b [mediaset] Fix extraction for more videos (#852)
Closes #851
Authored by: nyuszika7h
2021-09-02 00:23:19 +05:30
nyuszika7h
ee57a19d84 [mediaset] Fix extraction for some videos (#850)
This was broken by #564
Closes #849 
Authored by: nyuszika7h
2021-09-01 21:09:15 +05:30
octotherp
908b56eaf7 [XHamster] Extract uploader_id (#844)
Authored by: octotherp
2021-09-01 18:58:25 +05:30
u-spec-png
1461d7bef2 [Tokentube] Add extractor (#842)
Closes #800 
Authored by: u-spec-png
2021-09-01 18:40:25 +05:30
pukkandan
8a2d992389 [facebook] Fix format sorting
Closes #795
2021-09-01 09:17:52 +05:30
pukkandan
8e25d624df [EmbedSubtitle] Continue even if some files are missing 2021-09-01 08:51:22 +05:30
coletdjnz
e88dabb35e [Viafree] Fix extractor and extract subtitles (#828)
Authored by: coletdjnz
Fixes #820
2021-08-31 22:31:11 +00:00
BunnyHelp
8eb7ba82ca [iwara.tv] Extract more metadata (#829)
Authored-by: BunnyHelp
2021-09-01 00:59:30 +05:30
Luc Ritchie
b2eeee0ce0 [afreecatv] Tolerate failure to parse date string (#832)
Authored by: wlritchi
2021-08-30 21:37:34 +05:30
Luc Ritchie
875cfb8cbc [afreecatv] Fix adult VODs (#831)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/28405
Fixes https://github.com/ytdl-org/youtube-dl/issues/26622, https://github.com/ytdl-org/youtube-dl/issues/26926

Authored by: wlritchi
2021-08-30 21:05:48 +05:30
The Hatsune Daishi
b8773e63f0 [build] Add homebrew taps (#827)
https://github.com/yt-dlp/homebrew-taps
Closes: #754, #770
Authored by: nao20010128nao
2021-08-30 20:07:43 +05:30
u-spec-png
05664a2f7b [CDA] Add more formats (#805)
Fixes: #791, https://github.com/ytdl-org/youtube-dl/issues/29844
Authored by: u-spec-png
2021-08-30 19:37:03 +05:30
pukkandan
2ee6389bef [build] Fix bug in making yt-dlp.tar.gz 2021-08-30 08:28:49 +05:30
coletdjnz
62cdaaf0e2 [StarTV] Add extractor for startv.com.tr (#815)
Authored-by: mrfade, coletdjnz
Related: https://github.com/ytdl-org/youtube-dl/issues/22715
2021-08-29 22:29:42 +00:00
coletdjnz
419508eabb [Motherless] Fix extractor (#809)
Authored-by: coletdjnz
Fixes #806, https://github.com/ytdl-org/youtube-dl/issues/29626
2021-08-29 22:22:57 +00:00
Sipherdrakon
54153fb71b [VH1,TVLand] Fix extractors (#784)
Fixes #745 but not #713
Authored by: Sipherdrakon
2021-08-30 03:20:58 +05:30
zenerdi0de
1dd6d9ca9d [Patreon] Add PatreonUserIE (#573)
Authored by: zenerdi0de
2021-08-30 03:17:50 +05:30
IONECarter
356ac009d3 [peloton] Add extractor (#192)
Authored by: IONECarter, capntrips, pukkandan
2021-08-30 03:13:59 +05:30
coletdjnz
9a292a620c [ATV.at] Fix extractor for ATV.at (#816)
Authored-by: NeroBurner, coletdjnz
Fixes https://github.com/ytdl-org/youtube-dl/issues/29079
2021-08-29 21:34:39 +00:00
coletdjnz
7e55872286 [camtube] remove extractor (#810)
Co-authored-by: alerikaisattera
2021-08-29 21:11:03 +00:00
std-move
2fc14b9925 [Nova] fix extractor (#807)
Fixes: https://github.com/ytdl-org/youtube-dl/issues/27840
Authored by: std-move
2021-08-29 07:04:42 +05:30
Ashish
58f68fe703 [TV2Hu] Fix TV2HuIE and add TV2HuSeriesIE (#804)
Closes #799 
Authored by: Ashish0804
2021-08-29 06:44:22 +05:30
animelover1984
abafce59a1 [Niconico] Add Search extractors (#672)
Authored by: animelover1984, pukkandan
2021-08-28 07:07:13 +05:30
pukkandan
2e7781a93c [docs] Fix some typos
Closes #677, #774
2021-08-28 02:20:40 +05:30
Ashish
bc36bc36a1 [ShemarooMe] Fix extractor (#798)
Closes #797 
Authored by: Ashish0804
2021-08-27 20:39:13 +05:30
Paul Wrubel
d75201a873 Use os.replace where applicable (#793)
When using 
```py
os.remove(encodeFilename(filename))
os.rename(encodeFilename(temp_filename), encodeFilename(filename))
```
the `os.remove` call is not atomic with the immediately following rename, so other operations (or a failure) can occur between the two calls. It is better to use `os.replace` instead

Authored by: paulwrubel
2021-08-27 07:57:20 +05:30
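
Per the message above, the remove-and-rename pair becomes a single atomic call; a sketch using the same names as the snippet quoted in the commit:

```py
os.replace(encodeFilename(temp_filename), encodeFilename(filename))
```
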
pukkandan
691d5823d6 [aria2c] Obey --rate-limit 2021-08-27 00:59:36 +05:30
pukkandan
c311988d19 [youtube] Improve 26e8e04454
The streams of the same itag may have slightly different size/bitrate
2021-08-26 08:27:29 +05:30
pukkandan
26e8e04454 [youtube] Prefer audio stream that YouTube considers default
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29864
Related: https://github.com/clsid2/mpc-hc/issues/1268
2021-08-26 08:08:34 +05:30
pukkandan
198e3a04c9 [FormatSort] Remove priority of lang 2021-08-26 08:08:33 +05:30
Robin
61bfacb233 [facebook] Update onion URL (#788)
Authored by: Derkades
2021-08-25 20:31:43 +05:30
Ashish
85a0021fb3 [ProjectVeritas] Add extractor (#790)
https://github.com/ytdl-org/youtube-dl/issues/26749
Authored by: Ashish0804
2021-08-25 20:17:58 +05:30
Ashish
7a45a1590b [Epicon] Add extractors (#789)
Authored by: Ashish0804
2021-08-25 19:33:32 +05:30
CeruleanSky
1c36c1f320 Fix --no-prefer-free-formats (#787)
Authored by: CeruleanSky
2021-08-25 17:19:05 +05:30
pukkandan
e0493e90fc fix bug in 88acdbc269 2021-08-25 10:26:09 +05:30
The Hatsune Daishi
1931a55ee8 [radiko] Add extractors (#731)
https://github.com/ytdl-org/youtube-dl/issues/29840
Authored by: nao20010128nao
2021-08-25 10:18:27 +05:30
i6t
63b1ad0f05 [iwara] Add thumbnail (#781)
Authored by: i6t
2021-08-25 03:06:15 +05:30
coletdjnz
0bb1bc1b10 [youtube] Remove annotations and deprecate --write-annotations (#765)
Closes #692 
Authored by: coletdjnz
2021-08-24 09:22:40 +05:30
pukkandan
45842107b9 fix bug in 6251555f1c
:ci skip
2021-08-24 06:23:21 +05:30
pukkandan
6251555f1c [downloader/ffmpeg] Support for DASH manifests (experimental)
Closes #159
2021-08-24 05:52:00 +05:30
pukkandan
330690a214 [downloader/ffmpeg] Allow passing custom arguments before -i
Closes #686
2021-08-24 04:24:12 +05:30
tandy1000
91d4b32bb6 [ManotoTV] Add new extractors (#767)
Authored by: tandy1000
2021-08-24 00:15:46 +05:30
pukkandan
a181cd0c60 [facebook] Fix metadata extraction
Original PR: https://github.com/ytdl-org/youtube-dl/pull/29796
Closes #453, https://github.com/ytdl-org/youtube-dl/issues/29421, https://github.com/ytdl-org/youtube-dl/issues/23627, https://github.com/ytdl-org/youtube-dl/issues/23180, https://github.com/ytdl-org/youtube-dl/issues/14156

Authored by: kikuyan
2021-08-23 22:07:00 +05:30
Ashish
ea81966e64 [TV2] Fix extractor (#766)
Closes #764 
Authored by: Ashish0804
2021-08-23 21:32:33 +05:30
Ashish
2acf2ce5cb [GabTV] Add extractor (#768)
Closes #499
Authored by: Ashish0804
2021-08-23 21:30:39 +05:30
Ashish
f7f18f905c [tiktok] Add TikTokUserIE (#756)
Authored-by: Ashish0804, pukkandan
2021-08-23 20:12:23 +05:30
pukkandan
4f8b70b593 [TikTok] Fix metadata extraction 2021-08-23 19:31:28 +05:30
MinePlayersPE
e43e9f3c2c [aljazeera] Fix extractor (#763)
Closes #762, https://github.com/ytdl-org/youtube-dl/issues/29517
Authored by: MinePlayersPE
2021-08-23 15:24:15 +05:30
pukkandan
71dd5d4a00 [peertube] handle new video URL format
Closes #722, https://github.com/ytdl-org/youtube-dl/issues/29782
Original PR: https://github.com/ytdl-org/youtube-dl/pull/29475
Authored by: Chocobozzz
2021-08-23 06:26:35 +05:30
nyuszika7h
52a2f994c9 [adobepass] Fix Verizon SAML login (#743)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/19136 from 64bddfe15c

Authored-by: nyuszika7h, ParadoxGBB <paradoxgbb@yahoo.com>
2021-08-23 06:08:32 +05:30
pukkandan
8b7491c8d1 Fix add_info_extractor when used via API
Bug from: 251ae04e6a
2021-08-23 05:31:55 +05:30
pukkandan
251ae04e6a [lazy_extractor] Create instance only after pre-checking archive 2021-08-23 05:06:39 +05:30
pukkandan
5bc4a65eea [lazy_extractor] Import actual class if an attribute is accessed
Now all core tests pass with lazy extraction enabled
2021-08-23 04:02:06 +05:30
pukkandan
1151c4079a [extractor] Show video id in error messages if possible 2021-08-23 02:49:07 +05:30
pukkandan
88acdbc269 [extractor] Better error message for DRM (#729)
Closes #636
2021-08-23 01:38:38 +05:30
Tom-Oliver Heidel
9b5fa9ee7c [youtube] Add av01 itags to known formats list (#747)
Authored by: blackjack4494
2021-08-23 01:29:43 +05:30
mahanstreamer
aca5774e68 [bitchute] Fix test (#758)
Authored by: mahanstreamer
2021-08-23 01:28:23 +05:30
pukkandan
3fb4e21b38 [lazy_extractors] Fix suitable and add flake8 test 2021-08-23 01:04:29 +05:30
pukkandan
4dfbf8696b [utils] Add parse_qs 2021-08-23 00:50:43 +05:30
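
A sketch of what such a `parse_qs` convenience helper amounts to: wrapping the standard-library parser so that it accepts a full URL.

```py
from urllib.parse import parse_qs as _parse_qs, urlparse

def parse_qs(url):
    # Query parameters of a URL as a dict of lists, e.g. {'v': ['abc'], 't': ['42']}
    return _parse_qs(urlparse(url).query)
```
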
pukkandan
8fc54b1230 [youtube] Add shorts to _VALID_URL
Normally the generic extractor will redirect the URL,
but the cookies consent screen may sometimes appear instead

Closes #752
2021-08-23 00:50:42 +05:30
pukkandan
da33e35b05 Don't try to merge with final extension
The formats may not be directly mergeable into the final extension
2021-08-23 00:50:41 +05:30
pukkandan
5ad28e7ffd [extractor] Common function _match_valid_url 2021-08-23 00:50:40 +05:30
Jérôme Duval
f79ec47d71 [tv5mondeplus] Fix extractor (#739)
Authored by: korli
2021-08-21 02:04:51 +05:30
Ashish
45b0596290 [HearThisAtIE] Fix extractor (#742)
Closes: #740 
Authored by: Ashish0804
2021-08-21 01:09:59 +05:30
Ashish
96c23f3be8 [Zee5] Fix extractor and add subtitles (#733)
Closes #728
Authored by Ashish0804
2021-08-21 00:43:12 +05:30
CHJ85
6e7dfe4959 [BannedVideo] Add Extractor (#717)
Closes: #669
Original PR: https://github.com/ytdl-org/youtube-dl/pull/24572
Authored by: smege1001, blackjack4494, pukkandan
2021-08-21 00:15:00 +05:30
animelover1984
c34f505b04 [bilibili] Add category extractor (#695)
Authored by: animelover1984
2021-08-20 23:57:40 +05:30
Ashish
14183d1f80 [Hungama] Fix HungamaSongIE and add HungamaAlbumPlaylistIE (#744)
Authored by: Ashish0804
2021-08-20 23:46:59 +05:30
pukkandan
58adec4677 Fix extra_info being reused across runs
Closes #727
2021-08-19 03:10:58 +05:30
pukkandan
9e598870dd Fix playlist_index not obeying playlist_start
and add tests
Closes #720
2021-08-17 19:06:10 +05:30
pukkandan
8f18aca871 Let --match-filter reject entries early
Makes redundant: `--match-title`, `--reject-title`, `--min-views`, `--max-views`
2021-08-17 04:29:56 +05:30
pukkandan
3ad56b4236 Fix -J when there are failed videos 2021-08-17 04:29:55 +05:30
Glenn Slayden
5d62709bc7 [cleanup] Replace improper use of tab in trovo (#719)
:ci skip

Authored by: glenn-slayden
2021-08-17 04:19:31 +05:30
zootedb0t
7581d2467a [docs] fix typo (#715)
Authored by: zootedb0t
2021-08-16 21:59:40 +05:30
shirt
5fa206fb54 [ParamountPlus] Fix geo verification (#711)
Closes #681 
Authored by: shirt
2021-08-16 12:13:24 +05:30
mzbaulhaque
df2a5633da [pornhub] Separate and fix playlist extractor (#700)
Closes #680
Authored by: mzbaulhaque
2021-08-15 23:02:48 +05:30
Felix S
7a6742b5f9 [webvtt] Fix timestamp overflow adjustment (#698)
In some streams, empty segments may appear with a bogus, non-monotone MPEG timestamp.
This should not be considered as an overflow

Authored by: fstirlitz
2021-08-15 21:03:06 +05:30
The Hatsune Daishi
e040bb0a41 [voicy] Add extractor (#667)
Authored by: nao20010128nao
2021-08-15 20:49:54 +05:30
pukkandan
f8fabc9930 [kakao] Fix extractor
Closes #699
2021-08-15 14:31:27 +05:30
jhwgh1968
d967c68e4c [eroprofile] Fix page skipping in albums (#701)
Bug from #658 
Authored by: jhwgh1968
2021-08-15 11:32:11 +05:30
SsSsS
3dd39c5f9a [instagram] Add referrer to prevent throttling (#676)
Code from: https://github.com/ytdl-org/youtube-dl/pull/29751
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29736

Authored by: u-spec-png, kikuyan
2021-08-15 00:45:01 +05:30
mzbaulhaque
be44eefd5e [filmmodu] Add extractor (#690)
Closes #288
Authored by: mzbaulhaque
2021-08-15 00:40:56 +05:30
pukkandan
f775c83110 Fix --force-overwrites when using -k
For formats that need merge, the `.fxxx` files are not removed before
downloading the corresponding `.part` files. This causes the rename to fail
2021-08-15 00:28:49 +05:30
pukkandan
b714b41f81 [soundcloud] Refetch client_id on 403
Closes #673
2021-08-15 00:28:49 +05:30
pukkandan
31654882e9 [options] Add _set_from_options_callback 2021-08-15 00:26:34 +05:30
pukkandan
86c66b2d3e Fix -F for extractors that directly return url
Related: #693
2021-08-15 00:26:34 +05:30
pukkandan
37242e56f2 Fix bug during subtitle conversion 2021-08-15 00:26:33 +05:30
pukkandan
6c7274ecd2 Fix resuming of single formats when using --no-part
Closes #576
2021-08-15 00:26:32 +05:30
Kid
5c333d7496 [lazy_extractor] Bugfix for when plugin directory doesn't exist (#691)
Bug introduced by: 0b2e9d2c30

Authored by: kidonng
2021-08-13 20:54:17 +05:30
coletdjnz
641ad5d813 [youtube] Extract error messages from HTTPError response (#644)
Authored by: coletdjnz
2021-08-13 11:48:26 +05:30
Felix S
0715f7e19b Revert erroneous use of the Content-Length header (#637)
This reverts commit 6c907eb33f

The use of the Content-Length value here is erroneous and may lead
to truncated downloads if a compression scheme is specified in the
Content-Encoding header, as the Content-Length header refers to the
size of encoded data, not of the raw bytestream. This has been noticed
in the wild with WebVTT subtitle segments.

Authored by: fstirlitz
2021-08-11 21:09:17 +05:30
pukkandan
a8731fcc1d minor bugfixes
bugs due to be2fc5b212, e9f4ccd19e
2021-08-11 20:27:30 +05:30
pukkandan
5a64127f94 [docs] Fix credits of 246fb276e0
It is authored by mzbaulhaque - The commit message is wrong

:ci skip all
2021-08-10 22:32:23 +05:30
pukkandan
ade6dc5e9e [version] update
:ci skip all
2021-08-10 20:51:47 +05:30
pukkandan
418964fa91 Release 2021.08.10 2021-08-10 20:10:39 +05:30
jhwgh1968
c196640ff1 [eroprofile] Add album downloader (#658)
Authored by: jhwgh1968
2021-08-10 19:21:12 +05:30
SsSsS
60c8fc73c6 [instagram] Fix comments extraction (#660)
Authored-by: u-spec-png <miloradkalabasdt@gmail.com>
2021-08-10 18:45:32 +05:30
Ashish
bc8745480e [BandCamp] Add BandcampMusicIE (#668)
Authored by Ashish0804
2021-08-10 18:42:11 +05:30
The Hatsune Daishi
ff5e16f2f6 [mirrativ] Add extractors (#657)
Authored by: nao20010128nao
2021-08-10 08:54:58 +05:30
pukkandan
be2fc5b212 [extractor] Detect sttp as subtitles in MPD
Closes #656
Solution by: fstirlitz
2021-08-10 04:46:48 +05:30
pukkandan
7be9ccff0b [utils] Fix InAdvancePagedList.__getitem__
Since it didn't have any cache, the page was re-fetched for each video.
* Also generalized the cache code
2021-08-10 04:45:25 +05:30
funniray
245d43cacf [crunchyroll] Fix thumbnail (#650)
Authored by: funniray
2021-08-10 03:09:20 +05:30
mzbaulhaque
246fb276e0 [blackboardcollaborate] Add new extractor (#646)
Authored by: Ashish0804
2021-08-10 02:03:12 +05:30
shirt
6e6e0d95b3 [paramountplus] Separate extractor and fix some titles (#652)
Co-authored-by: shirt, pukkandan
2021-08-10 01:54:50 +05:30
Felix S
25a3f4f5d6 [webvtt] Merge daisy-chained duplicate cues (#638)
Fixes: https://github.com/yt-dlp/yt-dlp/issues/631#issuecomment-893338552

Previous deduplication algorithm only removed duplicate cues with
identical text, styles and timestamps.  This change also merges
cues that come in ‘daisy chains’, where sequences of cues with
identical text and styles appear in which the ending timestamp of
one equals the starting timestamp of the next.

This deduplication algorithm has the somewhat unfortunate side effect
that NOTE blocks between cues, if found, will be emitted in a different
order relative to their original cues.  This may be unwanted if perfect
fidelity is desired, but then so is daisy-chain deduplication itself.
NOTE blocks ought to be ignored by WebVTT players in any case.

Authored by: fstirlitz
2021-08-10 01:52:30 +05:30
pukkandan
ad3dc496bb Misc fixes - See desc
* Remove unnecessary uses of _list_from_options_callback
* Fix download tests - Bug from 6e84b21559
* Rename ExecAfterDownloadPP to ExecPP and refactor its tests
* Ensure _write_ytdl_file closes file handle on error - Potential fix for #517
2021-08-10 01:22:55 +05:30
pukkandan
2831b4686c Show libraries present in verbose head 2021-08-10 01:22:55 +05:30
pukkandan
8c0ae192a4 [ffmpeg] Fix --ffmpeg-location when directory is given
Bug introduced in 89efdc15dd
Closes #654
2021-08-10 01:22:55 +05:30
pukkandan
e9f4ccd19e Add option --replace-in-metadata 2021-08-10 01:22:55 +05:30
pukkandan
a38bd1defa [viki] Print error message from API request
Closes #651
2021-08-10 01:21:22 +05:30
shirt
476febeb3a [build] Use custom build of pyinstaller (#663)
Related: #25 

Authored-by: shirt
2021-08-10 01:21:02 +05:30
Ashish
b6a35ad83b [HotStar] Use API for metadata and extract subtitles (#640)
The API is not rate-limited unlike the webpage

Authored by: Ashish0804
2021-08-08 09:45:06 +05:30
SsSsS
bfd56b74b9 [peertube] Fix videos without description (#639)
Authored by: u-spec-png
2021-08-08 09:26:44 +05:30
PSlava
858a65ecc1 [youtube] Improve signature function detection (#641)
Authored by: PSlava (Slava <slash@i-slash.com>)
2021-08-08 09:24:37 +05:30
Wes
3b34e38813 [aenetworks] Update _THEPLATFORM_KEY and _THEPLATFORM_SECRET (#643)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/29749
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29300

Authored by: wesnm
2021-08-08 09:22:31 +05:30
pukkandan
3448870205 [docs] Fix some mistakes and improve doc 2021-08-07 21:41:48 +05:30
pukkandan
b868936cd6 [cleanup] Misc 2021-08-07 21:17:07 +05:30
pukkandan
c681cb5d93 Allow multiple --exec and --exec-before-download 2021-08-07 21:17:07 +05:30
pukkandan
379e44ed3c [youtube] Raise appropriate error when API pages can't be downloaded 2021-08-07 21:17:06 +05:30
pukkandan
243c57cfe8 [tests:download] Add batch testing for extractors
Use `test_YourExtractor_all` to invoke them
2021-08-07 21:17:06 +05:30
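As a rough illustration of the batch-test invocation described above (`test_YourExtractor_all` is the placeholder name from the commit; the pytest node-id path is an assumption):
```
# Hedged sketch: run every download test defined for a single extractor
python -m pytest "test/test_download.py::TestDownload::test_YourExtractor_all"
```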
pukkandan
28f436bad0 [extractor] Reset non-repeating warnings per video 2021-08-07 21:17:05 +05:30
pukkandan
2b8a2973bd Allow entire infodict to be printed using %()s
Makes `--dump-json` redundant
2021-08-07 21:17:04 +05:30
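A minimal sketch of what this enables, assuming the empty field name can be combined with the `j` (JSON) conversion added for output templates:
```
# Hedged example: emit the entire info-dict as JSON through the output-template machinery
yt-dlp --print "%()j" "https://www.youtube.com/watch?v=BaW_jenozKc"
```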
pukkandan
b7b04c782e Add option --no-simulate to not simulate even when --print or --list... are used
* Deprecates `--print-json`
* Some listings like `--list-extractors` are handled by `yt_dlp` and so are not affected by this. These have been documented as such

Addresses: https://github.com/ytdl-org/youtube-dl/issues/29675, https://github.com/ytdl-org/youtube-dl/issues/29580#issuecomment-882046305
2021-08-07 21:17:03 +05:30
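A minimal usage sketch (behaviour assumed from the description above; the URL is the sample one used in the issue templates):
```
# --print normally implies simulation; --no-simulate prints the title and still downloads
yt-dlp --print "%(title)s" --no-simulate "https://www.youtube.com/watch?v=BaW_jenozKc"
```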
pukkandan
6e84b21559 Fix bugs related to sanitize_info
Related: 8012d892bd (r54555230)
2021-08-07 21:16:55 +05:30
pukkandan
575e17a1b9 [utils] Fix traverse_obj depth when is_user_input 2021-08-07 20:08:22 +05:30
pukkandan
57015a4a3f [youtube] extractor-arg to show live dash formats
If replay is enabled, these formats can be used to download the last 4 hours
2021-08-07 12:47:54 +05:30
pukkandan
9cc1a3130a Fix resuming when using --no-part
Closes #576
2021-08-06 00:55:04 +05:30
pukkandan
b51d2ae3ca Add compat-option no-keep-subs
Closes #630
2021-08-06 00:55:04 +05:30
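Assuming the option is exposed through the existing `--compat-options` switch, usage would look roughly like:
```
# Hedged sketch: revert to the old behaviour of discarding subtitle files after embedding them
yt-dlp --compat-options no-keep-subs --embed-subs "https://www.youtube.com/watch?v=BaW_jenozKc"
```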
Jesse
fee5f0c909 [adobepass] Add MSO Cablevision (#635)
Authored by: Jessecar96
2021-08-06 00:53:37 +05:30
funniray
7bb6434767 [vrv] Fix thumbnail extraction (#634)
Authored by: funniray
2021-08-05 21:49:28 +05:30
pukkandan
124bc071ee Fix wrong extension for intermediate files
Closes #632
2021-08-05 19:51:14 +05:30
pukkandan
a047eeb6d2 Add regex to --match-filter
This does not fully deprecate `--match-title`/`--reject-title`
since `--match-filter` is only checked after the extraction is complete,
while `--match-title` can often be checked from the flat playlist.

Fixes: https://github.com/ytdl-org/youtube-dl/issues/9092, https://github.com/ytdl-org/youtube-dl/issues/23035
2021-08-05 04:10:26 +05:30
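A sketch of the regex form (the `~=` operator spelling is assumed from yt-dlp's filter syntax):
```
# Hedged example: only keep entries whose title matches a case-insensitive regex
yt-dlp --match-filter "title ~= (?i)interview" "https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc"
```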
Max Teegen
77b87f0519 Add all format filtering operators also to --match-filter
PR: https://github.com/ytdl-org/youtube-dl/pull/27361

Authored by: max-te
2021-08-05 03:37:20 +05:30
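Combined with the comparison operators borrowed from format filtering, a filter might look roughly like this (operator spellings assumed; the `?` suffix lets entries with missing fields pass):
```
# Hedged example: skip very short videos and anything with a known view count below 1000
yt-dlp --match-filter "duration > 60 & view_count >? 1000" "https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc"
```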
pukkandan
678da2f21b [twitch:clips] Extract display_id
PR: https://github.com/ytdl-org/youtube-dl/pull/29684
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29666

Authored by: dirkf
2021-08-05 03:37:20 +05:30
pukkandan
cc3fa8d39d Handle BrokenPipeError
PR: https://github.com/ytdl-org/youtube-dl/pull/29505
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29082

Authored by: kikuyan
2021-08-05 03:37:20 +05:30
pukkandan
89efdc15dd [ffmpeg] Allow --ffmpeg-location to be a file with a different name 2021-08-05 03:37:18 +05:30
pukkandan
8012d892bd Ensure sanitization of infodict before printing to stdout
* `filter_requested_info` is renamed to a more appropriate name `sanitize_info`
2021-08-05 03:37:16 +05:30
Stavros Ntentos
9d65e7bd6d Fix --compat-options filename (#629)
The correct default filename is `%(title)s-%(id)s.%(ext)s`

Authored by: stdedos
2021-08-04 23:31:37 +05:30
SsSsS
36576d7c4c [Newgrounds] Improve extractor and fix playlist (#627)
Authored by: u-spec-png
2021-08-04 21:18:54 +05:30
nikhil
bb36a55c41 [nbcolympics:stream] Fix extractor
PR: https://github.com/ytdl-org/youtube-dl/pull/29688
Closes: #617, https://github.com/ytdl-org/youtube-dl/issues/29665

* Livestreams are untested
* If using ffmpeg as downloader, v4.3+ is needed since `-http_seekable` option is necessary
* Instead of making a separate key for each arg that needs to be passed to ffmpeg, I made `_ffmpeg_args`
* This deprecates `_seekable`, but the option is kept for compatibility

Authored by: nchilada, pukkandan
2021-08-04 20:41:59 +05:30
MinePlayersPE
3dbb2a9dcb [RCTIPlus] Support events and TV (#625)
Authored by: MinePlayersPE
2021-08-04 18:42:15 +05:30
The Hatsune Daishi
9997eee4af [openrec] Add extractors (#624)
Authored by: nao20010128nao
2021-08-04 14:44:37 +05:30
Wes
3e376d183e [nbcolympics] Update extractor for 2020 olympics (#621)
Fixes: https://github.com/yt-dlp/yt-dlp/issues/617#issuecomment-891834323

Authored by: wesnm
2021-08-04 09:49:44 +05:30
Sam
888299e6ca [VrtNU] Fix XSRF token (#588)
PR: https://github.com/ytdl-org/youtube-dl/pull/29614
Authored-by: pgaig
2021-08-04 00:11:26 +05:30
pukkandan
c31be5b009 [docs] Document which fields --add-metadata adds to the file
:ci skip all
2021-08-03 01:34:28 +05:30
pukkandan
e5611e8eda [ffmpeg] Fix streaming mp4 to stdout 2021-08-03 00:05:16 +05:30
SsSsS
8e6cc12c80 [Vine] Remove invalid formats (#614)
Authored by: u-spec-png
2021-08-02 23:37:59 +05:30
pukkandan
e980017ac8 [doc] Fix banner URL 2021-08-02 10:45:02 +05:30
pukkandan
e9d9efc0f2 [version] update
:ci skip all
2021-08-02 10:41:58 +05:30
pukkandan
6ccf351a87 Release 2021.08.02 2021-08-02 10:37:10 +05:30
pukkandan
28dff70b51 Add donate links 2021-08-02 08:51:23 +05:30
pukkandan
1aebc0f79e Add logo and banner 2021-08-02 08:51:22 +05:30
pukkandan
cf87314d4e [youtube] Extract SAPISID only once 2021-08-02 08:00:08 +05:30
pukkandan
1bd3639f69 [tenplay] Add MA15+ age limit (#606)
Authored by: pento
2021-08-02 07:52:11 +05:30
LE
68f5867cf0 [CBS] Add fallback (#579)
Related: https://github.com/ytdl-org/youtube-dl/issues/29564
Authored-by: llacb47, pukkandan
2021-08-02 07:46:12 +05:30
Ashish
605cad0be7 [Vimeo] Better extraction of original file (#599)
Authored by: Ashish0804
2021-08-02 07:23:12 +05:30
pukkandan
0855702f3f [test:download] Support testing with ignore_no_formats_error 2021-08-02 03:47:31 +05:30
Ashish
e8384376c0 [CBS] Add ParamountPlusSeriesIE (#603)
Authored by: Ashish0804
2021-08-02 02:58:47 +05:30
David
e7e94f2a5c [youtube] Add age-gate bypass for unverified accounts (#600)
Adds `_creator` variants for each client

Authored by: zerodytrash, colethedj, pukkandan
2021-08-02 02:43:46 +05:30
pukkandan
a46a815b05 [cleanup] Fix linter in 96fccc101f 2021-08-01 12:52:09 +05:30
pukkandan
96fccc101f [downloader] Allow streaming unmerged formats to stdout using ffmpeg
For this to work:
1. The downloader must be ffmpeg
2. The selected formats must have the same protocol
3. The formats must be downloadable by ffmpeg to stdout

Partial solution for: https://github.com/ytdl-org/youtube-dl/issues/28146, https://github.com/ytdl-org/youtube-dl/issues/27265
2021-08-01 12:38:06 +05:30
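Given the three conditions listed above, a piping sketch could look like this (the format selection and the player are illustrative assumptions):
```
# Hedged example: have ffmpeg stream the selected video+audio pair to stdout and pipe it into a player
yt-dlp -f "bv+ba" --downloader ffmpeg -o - "https://www.youtube.com/watch?v=BaW_jenozKc" | mpv -
```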
pukkandan
dbf5416a20 [cleanup] Refactor some code 2021-08-01 12:38:05 +05:30
pukkandan
d74a58a186 Set home: as the default key for -P 2021-08-01 12:13:40 +05:30
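A rough sketch of the `-P` keys involved (the `temp:` key name is an assumption; a bare path is taken to mean `home:` per the commit):
```
# Hedged example: final files go to the default 'home:' path, intermediate files to 'temp:'
yt-dlp -P ~/Videos -P "temp:/tmp/yt-dlp" "https://www.youtube.com/watch?v=BaW_jenozKc"
```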
pukkandan
f5510afef0 [FormatSort] Fix bug for audio with unknown codec 2021-08-01 12:13:40 +05:30
pukkandan
e4f0275711 Add compat-option no-clean-infojson 2021-08-01 12:13:40 +05:30
pukkandan
e0f2b4b47d [utils] Fix slicing of reversed LazyList
Closes #589
2021-08-01 12:13:40 +05:30
coletdjnz
eca330cb88 [youtube] Fix default global API key
bug introduced in 000c15a4ca
2021-08-01 06:12:26 +00:00
Wes
d24734daea [adobepass] Add MSO Sling TV (#596)
Original PR: ytdl-org/youtube-dl#29686
Closes: #300, ytdl-org/youtube-dl#18132

Authored by: wesnm
2021-07-31 03:35:56 +05:30
MinePlayersPE
d9e6e9481e [RCTIPlus] Remove PhantomJS dependency (#595)
Authored by: MinePlayersPE
2021-07-31 03:22:52 +05:30
pukkandan
3619f78d2c [youtube] Misc cleanup (#577)
Authored by: pukkandan, colethedj
2021-07-31 03:01:49 +05:30
pukkandan
65c2fde23f [youtube] Add thirdParty to agegate clients (#577)
* This allows more videos like `tf2U5Vyj0oU` to become embeddable
    See https://github.com/yt-dlp/yt-dlp/pull/575#issuecomment-888837000
* Also added tests for all types of age-gate

Closes #581
2021-07-31 02:20:21 +05:30
pukkandan
000c15a4ca [youtube] simplify and de-duplicate client definitions (#577) 2021-07-31 02:14:15 +05:30
colethedj
9275f62cf8 [youtube] Improve age-gate detection (#577)
Authored by: colethedj
2021-07-31 02:13:55 +05:30
coletdjnz
6552469433 [youtube] Force hl=en for comments (#594)
Closes #532
2021-07-31 01:06:00 +05:30
MinePlayersPE
11cc45718c [vidio] Fix login error detection (#582)
Authored by: MinePlayersPE
2021-07-29 10:11:05 +05:30
Ashish
fe07e2c69f [Hotstar] Support cookies (#584)
Closes #583 
Authored by: Ashish0804
2021-07-29 10:06:38 +05:30
Ashish
89ce723edd [Mxplayer] Add h265 formats (#572)
Authored by: Ashish0804
2021-07-29 09:57:09 +05:30
Sipherdrakon
45d1f15725 [dplay] Add ScienceChannelIE (#567)
Authored by: Sipherdrakon
2021-07-29 09:55:00 +05:30
rigstot
a318f59d14 [generic] Support KVS player (#549)
* Replaces the extractor for thisvid

Fixes: https://github.com/ytdl-org/youtube-dl/issues/2077
Authored-by: rigstot
2021-07-29 09:33:01 +05:30
pukkandan
7d1eb38af1 Add format types j, l, q for outtmpl
Closes #345
2021-07-29 08:47:25 +05:30
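A hedged sketch of the three new conversions (their meanings are assumed here: `j` = JSON, `l` = comma-separated list, `q` = shell-quoted):
```
yt-dlp --print "%(formats)j" --print "%(tags)l" --print "%(title)q" \
       "https://www.youtube.com/watch?v=BaW_jenozKc"
```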
pukkandan
901130bbcf Expand and escape environment variables correctly in outtmpl
Fixes: https://www.reddit.com/r/youtubedl/comments/otfmq3/ytdlp_same_parameters_different_results
2021-07-29 08:38:18 +05:30
MinePlayersPE
c0bc527bca [YouTube] Age-gate bypass implementation (#575)
* Calling the API with `clientScreen=EMBED` allows access to most age-gated videos - discovered by @ccdffddfddfdsfedeee (https://github.com/yt-dlp/yt-dlp/issues/574#issuecomment-887171136)
* Adds clients: (web/android/ios)_(embedded/agegate), mweb_embedded
* Renamed mobile_web to mweb

Closes #574

Authored by pukkandan, MinePlayersPE
2021-07-27 15:10:44 +05:30
pukkandan
2a9c6dcd22 [youtube] Fix format sorting when using alternate clients 2021-07-26 03:50:13 +05:30
coletdjnz
5a1fc62b41 [youtube] Add mobile_web client (#557)
Authored by: colethedj
2021-07-26 03:48:36 +05:30
pukkandan
b4c055bac2 [youtube] Add player_client=all 2021-07-26 03:38:18 +05:30
pukkandan
ea05b3020d Remove asr appearing twice in -F 2021-07-26 03:38:15 +05:30
pukkandan
9536bc072d [bilibili] Improve _VALID_URL 2021-07-26 03:38:10 +05:30
Ashish
8242bf220d [HotStarSeriesIE] Fix regex (#569)
Authored by: Ashish0804
2021-07-25 22:43:43 +05:30
Ashish
4bfa401d40 [UtreonIE] Add extractor (#562)
Authored by: Ashish0804
2021-07-25 22:41:45 +05:30
nixxo
0222620725 [mediaset] Fix extraction (#564)
Closes #365
Authored by: nixxo
2021-07-24 20:06:55 +05:30
pukkandan
1fe3c4c27e [version] update
:ci skip all
2021-07-24 20:02:12 +05:30
pukkandan
f703a88055 Release 2021.07.24 2021-07-24 07:03:14 +05:30
pukkandan
a353beba83 [youtube:tab] Extract video duration early
Based on: https://github.com/ytdl-org/youtube-dl/pull/29487 by glenn-slayden
2021-07-24 06:59:20 +05:30
pukkandan
052e135029 [youtube] Simplify _get_text early 2021-07-24 06:59:20 +05:30
xtkoba
cb89cfc14b [test] Add Python 3.10 (#480)
Authored-by: pukkandan, xtkoba
2021-07-23 20:32:48 +05:30
pukkandan
060ac76257 [test] Use pytest instead of nosetests (#482)
`nosetests` is no longer being maintained : https://github.com/nose-devs/nose/issues/1099
and will stop working in py 3.10 as can be seen in #480
2021-07-23 20:18:15 +05:30
pukkandan
063c409dfb [cookies] Handle errors when importing keyring
Workaround for #551
2021-07-23 19:58:27 +05:30
Matt Broadway
767b02a99b [cookies] Handle sqlite ImportError gracefully (#554)
Closes #544
Authored by: mbway
2021-07-23 19:56:19 +05:30
pukkandan
f45e6c1126 [downloader] Pass same status object to all progress_hooks 2021-07-23 09:46:55 +05:30
pukkandan
3944e7af92 [youtube] Fix subtitles only being extracted from the first client
Closes #547
2021-07-23 09:46:55 +05:30
pukkandan
ad34b2951e Try all clients even if age-gated
Reverts: 892e31ce7c

If some API calls have any issue, saving the state will cause unnecessary errors
2021-07-23 09:46:54 +05:30
pukkandan
c8fa48fd94 [youtube] Disable get_video_info age-gate workaround
This now seems to be completely dead
Closes: #553
2021-07-23 09:46:52 +05:30
coletdjnz
2fd226f6a7 [youtube] Fix age-gated videos for API clients when cookies are supplied (#545)
Fixes #543
Authored by: colethedj
2021-07-22 08:11:04 +00:00
pukkandan
3ba7740dd8 [downloader] Pass info_dict to progress_hooks 2021-07-22 04:30:11 +05:30
pukkandan
29b208f6f9 [cookies] bugfix
Fixes: https://github.com/yt-dlp/yt-dlp/pull/488#discussion_r674352059
2021-07-22 03:00:21 +05:30
pukkandan
e4d666d27b [version] update
:ci skip all
2021-07-22 02:37:51 +05:30
pukkandan
245524e6a3 Release 2021.07.21
and fix some typos
Closes #538
2021-07-22 02:33:28 +05:30
pukkandan
9c0d7f4951 [youtube] Make --extractor-retries work for more errors
Closes #507
2021-07-22 02:32:20 +05:30
pukkandan
e37d0efbd9 Fix bug where original_url was not propagated when _type=url 2021-07-22 02:32:19 +05:30
coletdjnz
c926c9541f [youtube] Add debug message for SAPISID cookie extraction (#540)
Authored by: colethedj
2021-07-21 20:45:05 +00:00
Matt Broadway
982ee69a74 Add option --cookies-from-browser to load cookies from a browser (#488)
* also adds `--no-cookies-from-browser`

Original PR: https://github.com/ytdl-org/youtube-dl/pull/29201
Authored by: mbway
2021-07-22 02:02:49 +05:30
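A minimal usage sketch (the browser keyword is an assumption; profile selection is not shown):
```
# Hedged example: reuse cookies from a local Firefox profile instead of a cookies.txt file
yt-dlp --cookies-from-browser firefox "https://www.youtube.com/watch?v=BaW_jenozKc"
```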
pukkandan
7ea6541124 [youtube] Improve extraction of livestream metadata
Modified from and closes #441
Authored by: pukkandan, krichbanana
2021-07-21 20:50:59 +05:30
pukkandan
ae30b84072 Add field live_status 2021-07-21 20:50:58 +05:30
pukkandan
cc9d1493c6 bugfix for 50fed816dd 2021-07-21 20:50:49 +05:30
Philip Xu
f6755419d1 [douyin] Add extractor (#513)
Authored-by: pukkandan, pyx
2021-07-21 20:49:27 +05:30
Henrik Heimbuerger
145bd631c5 [nebula] Authentication via tokens from cookie jar (#537)
Closes #496
Co-authored-by: hheimbuerger, TpmKranz
2021-07-21 18:12:43 +05:30
pukkandan
b35496d825 Add only_once param for write_debug 2021-07-21 18:06:34 +05:30
pukkandan
352d63fdb5 [utils] Improve traverse_obj 2021-07-21 11:30:06 +05:30
pukkandan
11f9be0912 [youtube] Extract data from multiple clients (#536)
* `player_client` accepts multiple clients
* default `player_client` = `android,web`
* music clients can be specifically requested
* Add IOS `player_client`
* Hide live dash since they can't be downloaded

Closes #501

Authored-by: pukkandan, colethedj
2021-07-21 09:22:34 +05:30
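A sketch of selecting clients explicitly (the argument name and values are taken from the commit description):
```
# Hedged example: request both the android and web player clients, matching the stated default
yt-dlp --extractor-args "youtube:player_client=android,web" "https://www.youtube.com/watch?v=BaW_jenozKc"
```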
pukkandan
c84aeac6b5 Add only_once param for report_warning
Related: https://github.com/yt-dlp/yt-dlp/pull/488#discussion_r667527297
2021-07-21 01:39:58 +05:30
pukkandan
50fed816dd Errors in playlist extraction should obey --ignore-errors
Related: https://github.com/yt-dlp/yt-dlp/issues/535#issuecomment-883277272, https://github.com/yt-dlp/yt-dlp/issues/518#issuecomment-881794754
2021-07-21 01:04:53 +05:30
coletdjnz
a1a7907bc0 [youtube] Fix controversial videos when requested via API (#533)
Closes: https://github.com/yt-dlp/yt-dlp/issues/511#issuecomment-883024350
Authored by: colethedj
2021-07-20 23:31:28 +05:30
pukkandan
d61fc64618 [youtube:tab] Fix channels tab 2021-07-20 23:22:34 +05:30
pukkandan
6586bca9b9 [utils] Fix LazyList for Falsey values 2021-07-20 23:22:26 +05:30
pukkandan
da503b7a52 [youtube] Make parse_time_text and _extract_chapters non-fatal
Related: #532, 7c365c2109
2021-07-20 07:22:26 +05:30
pukkandan
7c365c2109 [youtube] Sanity check chapters (and refactor related code)
Closes #520
2021-07-20 05:39:02 +05:30
pukkandan
3f698246b2 Rename NOTE in -F to MORE INFO
since it's often confused with `format_note`
2021-07-20 05:30:28 +05:30
pukkandan
cca80fe611 [youtube] Extract even more thumbnails and reduce testing
* Also fix bug where `_test_url` was being ignored

Ref: https://stackoverflow.com/a/20542029
Related: #340
2021-07-20 03:46:06 +05:30
pukkandan
c634ad2a3c [compat] Remove unnecessary code 2021-07-20 03:46:05 +05:30
pukkandan
8f3343809e [utils] Improve traverse_obj
* Allow skipping a level: `traverse_obj([{k:v1}, {k:v2}], (None, k))` => `[v1, v2]`
* Make keys variadic: `traverse_obj(obj, k1: str, k2: str)` => `traverse_obj(obj, (k1,), (k2,))`
* Fetch from multiple keys: `traverse_obj([{k1:[1], k2:[2], k3:[3]}], (0, (k1, k2), 0))` => `[1, 2]`

TODO: Add tests
2021-07-20 02:42:11 +05:30
pukkandan
0ba692acc8 [youtube] Extract more thumbnails
* The thumbnail URLs are hard-coded and their actual existence is tested lazily
* Added option `--no-check-formats` to not test them

Closes #340, Related: #402, #337, https://github.com/ytdl-org/youtube-dl/issues/29049
2021-07-20 02:42:11 +05:30
pukkandan
d9488f69c1 [crunchyroll:playlist] Force http
Closes #495
2021-07-20 02:42:11 +05:30
pukkandan
dce8743677 [docs] fix default of multistreams 2021-07-19 23:47:57 +05:30
pukkandan
5520aa2dc9 Add option --exec-before-download
Closes #530
2021-07-19 23:47:45 +05:30
mzbaulhaque
8d9b902243 [pornflip] Add new extractor (#523)
Authored-by: mzbaulhaque
2021-07-19 23:46:21 +05:30
coletdjnz
fe93e2c4cf [youtube] misc cleanup and bug fixes (#505)
* Update some `_extract_response` calls to keep them consistent
* Cleanup continuation extraction related code using new API format
* Improve `_extract_account_syncid` to support multiple parameters
* Generalize `get_text` and related functions into one
* Update `INNERTUBE_CONTEXT_CLIENT_NAME` with integer values

Authored by: colethedj
2021-07-19 10:25:07 +05:30
coletdjnz
314ee30548 [youtube] Fix session index extraction and headers for non-web player clients (#526)
Fixes #522
2021-07-18 06:23:32 +00:00
coletdjnz
34917076ad [youtube] Fix authentication when using multiple accounts
`SESSION_INDEX` in `ytcfg` is the index of the active account and should be sent as `X-Goog-AuthUser` header

Closes #518
Authored by @colethedj
2021-07-17 11:50:05 +05:30
The Hatsune Daishi
ccc7795ca3 [yahoo:gyao:player] Relax _VALID_URL (#503)
Authored by: nao20010128nao
2021-07-16 20:06:53 +05:30
Felix S
da1c94ee45 [generic] Extract previously missed subtitles (#515)
* [generic] Extract subtitles in cases missed previously
* [common] Detect discarded subtitles in SMIL manifests
* [generic] Extract everything in the SMIL manifest

Authored by: fstirlitz
2021-07-16 19:52:56 +05:30
pukkandan
3b297919e0 Revert "Merge webm formats into mkv if thumbnails are to be embedded (#173)"
This reverts commit 4d971a16b8 by @damianoamatruda
Closes #500

This was wrongly checking for `write_thumbnail`
2021-07-15 23:34:52 +05:30
coletdjnz
47193e0298 [youtube:tab] Extract playlist availability (#504)
Authored by: colethedj
2021-07-15 02:42:30 +00:00
coletdjnz
49bd8c66d3 [youtube:comments] Improve comment vote count parsing (fixes #506) (#508)
Authored by: colethedj
2021-07-14 23:24:42 +00:00
Felix S
182b6ae8a6 [RTP] Fix extraction and add subtitles (#497)
Authored by: fstirlitz
2021-07-14 05:06:18 +05:30
felix
c843e68588 [utils] Improve js_to_json comment regex
Capture the newline character as part of a single-line comment

From #497, Authored by: fstirlitz
2021-07-14 05:02:43 +05:30
felix
198f7ea89e [extractor] Allow extracting multiple groups in _search_regex
From #497, Authored by: fstirlitz
2021-07-14 05:02:42 +05:30
coletdjnz
c888ffb95a [youtube] Use android client as default and add age-gate bypass for it (#492)
Authored by: colethedj
2021-07-14 03:58:51 +05:30
coletdjnz
9752433221 [youtube:comments] Fix is_favorited (#491)
Authored by colethedj
2021-07-12 06:50:03 +05:30
pukkandan
f0ff9979c6 [vlive] Extract thumbnail directly in addition to the one from Naver
Closes #477
2021-07-12 06:07:23 +05:30
pukkandan
501dd1ad55 [metadatafromfield] Do not detect numbers as field names
Related: https://github.com/yt-dlp/yt-dlp/issues/486#issuecomment-877820394
2021-07-12 05:20:12 +05:30
pukkandan
75722b037d [webvtt] Fix timestamps
Closes #474
2021-07-12 05:20:12 +05:30
coletdjnz
2d6659b9ea [youtube:comments] Move comment extraction to new API (#466)
Closes #438, #481, #485 

Authored by: colethedj
2021-07-12 04:48:40 +05:30
Kevin O'Connor
c5370857b3 [BravoTV] Improve metadata extraction (#483)
Authored by: kevinoconnor7
2021-07-11 16:36:26 +05:30
pukkandan
00034c146a [embedthumbnail] Fix _get_thumbnail_resolution 2021-07-11 04:46:53 +05:30
pukkandan
325ebc1703 Improve traverse_obj 2021-07-11 04:46:53 +05:30
pukkandan
7dde84f3c9 [FFmpegMetadata] Add language of each stream
and some refactoring
2021-07-11 04:46:52 +05:30
pukkandan
6606817a86 [utils] Add variadic 2021-07-11 04:46:51 +05:30
zackmark29
73d829c144 [VIKI] Rewrite extractors (#475)
Closes #462
Also added extractor-arg `video_types` to `vikichannel`

Co-authored-by: zackmark29, pukkandan
2021-07-10 02:08:09 +05:30
pukkandan
60bdb7bd9e [youtube] Fix sorting of 3gp format 2021-07-08 22:33:33 +05:30
pukkandan
4bb6b02f93 Improve extractor_args parsing 2021-07-08 21:22:35 +05:30
pukkandan
b5ac45b197 Fix selectors all, mergeall and add tests
Bug from: 981052c9c6
2021-07-07 21:10:43 +05:30
pukkandan
38a40c9e16 [version] update
:ci skip all
2021-07-07 05:43:58 +05:30
pukkandan
a8bf9b4dc1 Release 2021.07.07 2021-07-07 05:35:20 +05:30
pukkandan
51f8a31d65 Update to ytdl-commit-a803582
[peertube] only call description endpoint if necessary
a803582717
2021-07-07 05:17:11 +05:30
Tom-Oliver Heidel
be05d5cff1 [soundcloud] Allow login using oauth token (#469)
Authored by: blackjack4494
2021-07-07 04:21:13 +05:30
zenerdi0de
30d569d2ac [fancode] Fix extraction, support live and allow login with refresh token (#471)
Authored-by: zenerdi0de
2021-07-07 04:02:56 +05:30
OhMyBahGosh
08625e4125 [AdobePass] Add Spectrum MSO (#470)
From: https://github.com/ytdl-org/youtube-dl/pull/26792

Co-authored by: kevinoconnor7, ohmybahgosh
2021-07-07 03:26:51 +05:30
pukkandan
3acf6d3856 [Funimation] Rewrite extractor (See desc) (#444)
* Support direct `/player/` URL
* Treat the different versions of an episode as different formats of a single video. So `experience_id` can no longer be used as the video `id` and the `episode_id` is used instead. This means that all existing archives will break
* Extractor options `language` and `version` to pre-select them
* Compat option `seperate-video-versions` to fall back to old behavior (including using the old video IDs)

Closes #428
2021-07-07 02:51:29 +05:30
pukkandan
46890374f7 [extractor] Minor improvements (See desc)
1. Allow removal of login hint - extractors can set their own login hint as part of `msg`
2. Cleanup `_merge_subtitles` signature
2021-07-07 02:27:53 +05:30
pukkandan
60755938b3 [extractor] Prevent unnecessary download of hls manifests
and refactor `hls_split_discontinuity` code
2021-07-07 02:24:58 +05:30
pukkandan
723d44b92b [fragment] Handle errors in threads correctly 2021-07-07 01:55:54 +05:30
pukkandan
bc97cdae67 [cleanup] Fix linter and some typos
Related: https://github.com/ytdl-org/youtube-dl/pull/29398
2021-07-04 03:04:25 +05:30
nyuszika7h
e010672ab5 [videa] Fix extraction (#463)
Authored by: nyuszika7h
2021-07-03 21:38:08 +05:30
pukkandan
169dbde946 Fixes for --list options (See desc)
1. Fix `--list-formats-old`
2. Allow listing with `--quiet`
3. Allow various listings to work together
4. Allow `--print` to work with listing
2021-07-03 01:16:19 +05:30
MinePlayersPE
17f0eb66b8 [RCTIPlus] Add extractor (#443)
Authored by: MinePlayersPE
2021-07-02 19:54:41 +05:30
pukkandan
981052c9c6 Some minor fixes and refactoring (see desc)
* [utils] Fix issues with reversal
* check_formats should catch `DownloadError`, not `ExtractorError`
* Simplify format selectors with `LazyList` and `yield from`
2021-07-02 08:17:37 +05:30
pukkandan
b1e60d1806 [facebook] Extract description and fix title
Partially fixes: #453
2021-07-02 08:17:37 +05:30
pukkandan
6b6c16ca6c [downloader/ffmpeg] Fix --ppa when using simultaneous download 2021-07-02 08:17:30 +05:30
krichbanana
f6745c4980 [Youtube] Choose correct Live chat API for upcoming streams (#460)
Authored by: krichbanana
2021-07-02 05:59:29 +05:30
coletdjnz
109dd3b237 [youtube] Use new API for additional video extraction requests (#328)
Co-authored-by: colethedj, pukkandan
Closes https://github.com/yt-dlp/yt-dlp/issues/427
Workarounds for https://github.com/ytdl-org/youtube-dl/issues/29326, https://github.com/yt-dlp/yt-dlp/issues/319, https://github.com/ytdl-org/youtube-dl/issues/29086
2021-06-29 22:07:49 +00:00
siikamiika
c2603313b1 [youtube_live_chat] use clickTrackingParams (#449)
Authored by: siikamiika
2021-06-27 04:52:32 +05:30
LE
1e79316e20 [TBS] Support livestreams (#448)
Authored by: llacb47
2021-06-26 17:14:43 +05:30
coletdjnz
45261e063b [youtube:comments] Fix error handling and add itct to params (#446)
Should close #439 (untested)

Authored by: colethedj
2021-06-25 23:31:10 +05:30
pukkandan
49c258e18d [youtube] Fix subtitle names for age-gated videos
Related: https://github.com/iv-org/invidious/pull/2205#issuecomment-868680486
2021-06-25 23:10:31 +05:30
pukkandan
d3f62c1967 Fix --throttled-rate when using --load-info-json 2021-06-25 22:57:17 +05:30
pukkandan
5d3a0e794b Add --extractor-args to pass extractor-specific arguments 2021-06-25 20:10:28 +05:30
Mevious
125728b038 [funimation] Add FunimationShowIE (#442)
Closes #436

Authored by: Mevious
2021-06-25 05:45:23 +05:30
pukkandan
15a4fd53d3 [thumbnailsconvertor] Treat jpeg as jpg 2021-06-25 05:36:35 +05:30
Adrik
4513a41a72 Process videos when using --ignore-no-formats-error (#441)
Authored by: krichbanana
2021-06-24 22:23:34 +05:30
pukkandan
6033d9808d Fix --flat-playlist when entry has no ie_key 2021-06-24 22:23:34 +05:30
pukkandan
bd4d1ea398 [cleanup] Minor refactoring of fragment 2021-06-24 22:23:33 +05:30
pukkandan
8e897ed283 [fragment] Return status of download correctly 2021-06-24 22:04:23 +05:30
LE
412cce82b0 [yahoo] Fix extraction (#435)
Fixes: https://github.com/ytdl-org/youtube-dl/issues/28290

Co-authored-by: llacb47, pukkandan
2021-06-24 21:27:48 +05:30
siikamiika
d534c4520b [youtube_live_chat] Fix download with cookies (#437)
Closes #417 

Authored by: siikamiika
2021-06-24 21:26:32 +05:30
pukkandan
2b18a8c590 [plutotv] Improve _VALID_URL
Closes #431
2021-06-23 07:49:09 +05:30
pukkandan
dac8b87b0c [version] update :ci skip all 2021-06-23 07:37:07 +05:30
570 changed files with 27142 additions and 13711 deletions

.github/FUNDING.yml (new file, +13 lines)

@@ -0,0 +1,13 @@
# These are supported funding model platforms
github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: ['https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators']


@@ -1,70 +0,0 @@
---
name: Broken site support
about: Report broken or misfunctioning site
title: "[Broken]"
labels: Broken
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.06.09. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a broken site support
- [ ] I've verified that I'm running yt-dlp version **2021.06.09**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version 2021.06.09
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,63 @@
name: Broken site support
description: Report broken or misfunctioning site
labels: [triage, extractor-bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a broken site
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to your command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.11.10 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.11.10)
<more lines>
render: shell
validations:
required: true


@@ -1,56 +0,0 @@
---
name: Site support request
about: Request support for a new site
title: "[Site Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.06.09. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://github.com/yt-dlp/yt-dlp. yt-dlp does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running yt-dlp version **2021.06.09**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of provided URLs violate any copyrights
- [ ] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,74 @@
name: Site support request
description: Request support for a new site
labels: [triage, site-request]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a new site support request
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that none of provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Provide all kinds of example URLs for which support should be added
value: |
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide any additional information
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output **using one of the example URLs provided above**.
Add the `-Uv` flag to your command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.11.10 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.11.10)
<more lines>
render: shell
validations:
required: true


@@ -1,40 +0,0 @@
---
name: Site feature request
about: Request a new functionality for a site
title: "[Site Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.06.09. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar site feature requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a site feature request
- [ ] I've verified that I'm running yt-dlp version **2021.06.09**
- [ ] I've searched the bugtracker for similar site feature requests including closed ones
## Description
<!--
Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,49 @@
name: Site feature request
description: Request a new functionality for a site
labels: [triage, site-enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a site feature request
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Example URLs that can be used to demonstrate the requested feature
value: |
https://www.youtube.com/watch?v=BaW_jenozKc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your site feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true


@@ -1,72 +0,0 @@
---
name: Bug report
about: Report a bug unrelated to any particular site or extractor
title: ''
labels: ''
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.06.09. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Read bugs section in FAQ: https://github.com/yt-dlp/yt-dlp
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a broken site support issue
- [ ] I've verified that I'm running yt-dlp version **2021.06.09**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
- [ ] I've read bugs section in FAQ
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version 2021.06.09
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

.github/ISSUE_TEMPLATE/4_bug_report.yml (new file, +57 lines)

@@ -0,0 +1,57 @@
name: Bug report
description: Report a bug unrelated to any particular site or extractor
labels: [triage,bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a bug unrelated to a specific site
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to **your** command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.11.10 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.11.10)
<more lines>
render: shell
validations:
required: true


@@ -1,40 +0,0 @@
---
name: Feature request
about: Request a new functionality unrelated to any particular site or extractor
title: "[Feature Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.06.09. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a feature request
- [ ] I've verified that I'm running yt-dlp version **2021.06.09**
- [ ] I've searched the bugtracker for similar feature requests including closed ones
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE

View File

@@ -0,0 +1,30 @@
name: Feature request
description: Request a new functionality unrelated to any particular site or extractor
labels: [triage, enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a feature request
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true

View File

@@ -1,40 +0,0 @@
---
name: Ask question
about: Ask youtube-dl related question
title: "[Question]"
labels: question
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- Look through the README (https://github.com/yt-dlp/yt-dlp) and FAQ (https://github.com/yt-dlp/yt-dlp) for similar questions
- Search the bugtracker for similar questions: https://github.com/yt-dlp/yt-dlp
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm asking a question
- [ ] I've looked through the README and FAQ for similar questions
- [ ] I've searched the bugtracker for similar questions including closed ones
## Question
<!--
Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/yt-dlp/yt-dlp.
-->
WRITE QUESTION HERE

.github/ISSUE_TEMPLATE/6_question.yml
View File

@@ -0,0 +1,30 @@
name: Ask question
description: Ask yt-dlp related question
labels: [question]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm asking a question and not reporting a bug/feature request
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions including closed ones
required: true
- type: textarea
id: question
attributes:
label: Question
description: |
Ask your question in an arbitrary form.
Please make sure it's worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information and as much context and examples as possible
placeholder: WRITE QUESTION HERE
validations:
required: true

.github/ISSUE_TEMPLATE/config.yml
View File

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Get help from the community on Discord
url: https://discord.gg/H5MNcFW63r
about: Join the yt-dlp Discord for community-powered support!

View File

@@ -1,70 +0,0 @@
---
name: Broken site support
about: Report broken or misfunctioning site
title: "[Broken]"
labels: Broken
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a broken site support
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version %(version)s
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

View File

@@ -0,0 +1,63 @@
name: Broken site support
description: Report broken or misfunctioning site
labels: [triage, extractor-bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a broken site
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to your command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true
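
The `%(version)s` fields in the form above are Python %-style placeholders: the published issue forms are generated from these template files with the current release version filled in (the build workflow later in this diff runs `make issuetemplates` as part of the version bump). A minimal sketch of that substitution, assuming a hypothetical helper and template/output paths rather than the project's actual script:

```python
# Hypothetical sketch: substitute the %(version)s placeholder in an issue-form
# template with the current release version. The paths, helper name, and the
# way the version is obtained are assumptions for illustration only.
from pathlib import Path

VERSION = '2021.11.10'  # in practice this would come from the package itself

def render_template(src: Path, dst: Path) -> None:
    text = src.read_text(encoding='utf-8')
    # The field uses Python %-format syntax; a plain replace keeps any other
    # literal '%' characters in the YAML untouched.
    dst.write_text(text.replace('%(version)s', VERSION), encoding='utf-8')

for tmpl in Path('.github/ISSUE_TEMPLATE_tmpl').glob('*.yml'):  # assumed layout
    render_template(tmpl, Path('.github/ISSUE_TEMPLATE') / tmpl.name)
```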

View File

@@ -1,56 +0,0 @@
---
name: Site support request
about: Request support for a new site
title: "[Site Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://github.com/yt-dlp/yt-dlp. yt-dlp does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of provided URLs violate any copyrights
- [ ] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

View File

@@ -0,0 +1,74 @@
name: Site support request
description: Request support for a new site
labels: [triage, site-request]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a new site support request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that none of provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Provide all kinds of example URLs for which support should be added
value: |
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide any additional information
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output **using one of the example URLs provided above**.
Add the `-Uv` flag to your command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true

View File

@@ -1,40 +0,0 @@
---
name: Site feature request
about: Request a new functionality for a site
title: "[Site Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar site feature requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a site feature request
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've searched the bugtracker for similar site feature requests including closed ones
## Description
<!--
Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE

View File

@@ -0,0 +1,49 @@
name: Site feature request
description: Request a new functionality for a site
labels: [triage, site-enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a site feature request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Example URLs that can be used to demonstrate the requested feature
value: |
https://www.youtube.com/watch?v=BaW_jenozKc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your site feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true

View File

@@ -1,72 +0,0 @@
---
name: Bug report
about: Report a bug unrelated to any particular site or extractor
title: ''
labels: ''
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Read bugs section in FAQ: https://github.com/yt-dlp/yt-dlp
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a broken site support issue
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
- [ ] I've read bugs section in FAQ
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version %(version)s
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

View File

@@ -0,0 +1,57 @@
name: Bug report
description: Report a bug unrelated to any particular site or extractor
labels: [triage, bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a bug unrelated to a specific site
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to **your** command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true

View File

@@ -1,40 +0,0 @@
---
name: Feature request
about: Request a new functionality unrelated to any particular site or extractor
title: "[Feature Request]"
labels: Request
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a feature request
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've searched the bugtracker for similar feature requests including closed ones
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE

View File

@@ -0,0 +1,30 @@
name: Feature request
description: Request a new functionality unrelated to any particular site or extractor
labels: [triage, enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a feature request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true

View File

@@ -0,0 +1,30 @@
name: Ask question
description: Ask yt-dlp related question
labels: [question]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm asking a question and not reporting a bug/feature request
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions including closed ones
required: true
- type: textarea
id: question
attributes:
label: Question
description: |
Ask your question in an arbitrary form.
Please make sure it's worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information and as much context and examples as possible
placeholder: WRITE QUESTION HERE
validations:
required: true

View File

@@ -7,11 +7,11 @@
--- ---
### Before submitting a *pull request* make sure you have: ### Before submitting a *pull request* make sure you have:
- [ ] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections - [ ] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) - [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)

.github/banner.svg

File diff suppressed because one or more lines are too long


View File

@@ -8,15 +8,18 @@ on:
jobs: jobs:
build_unix: build_unix:
runs-on: ubuntu-latest runs-on: ubuntu-latest
outputs: outputs:
ytdlp_version: ${{ steps.bump_version.outputs.ytdlp_version }} ytdlp_version: ${{ steps.bump_version.outputs.ytdlp_version }}
upload_url: ${{ steps.create_release.outputs.upload_url }} upload_url: ${{ steps.create_release.outputs.upload_url }}
sha256_unix: ${{ steps.sha256_file.outputs.sha256_unix }} sha256_bin: ${{ steps.sha256_bin.outputs.sha256_bin }}
sha512_unix: ${{ steps.sha512_file.outputs.sha512_unix }} sha512_bin: ${{ steps.sha512_bin.outputs.sha512_bin }}
sha256_tar: ${{ steps.sha256_tar.outputs.sha256_tar }}
sha512_tar: ${{ steps.sha512_tar.outputs.sha512_tar }}
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Python - name: Set up Python
uses: actions/setup-python@v2 uses: actions/setup-python@v2
with: with:
@@ -25,11 +28,83 @@ jobs:
run: sudo apt-get -y install zip pandoc man run: sudo apt-get -y install zip pandoc man
- name: Bump version - name: Bump version
id: bump_version id: bump_version
run: python devscripts/update-version.py run: |
python devscripts/update-version.py
make issuetemplates
- name: Print version - name: Print version
run: echo "${{ steps.bump_version.outputs.ytdlp_version }}" run: echo "${{ steps.bump_version.outputs.ytdlp_version }}"
- name: Update master
id: push_update
run: |
git config --global user.email "${{ github.event.pusher.email }}"
git config --global user.name "${{ github.event.pusher.name }}"
git add -u
git commit -m "[version] update" -m ":ci skip all"
git pull --rebase origin ${{ github.event.repository.master_branch }}
git push origin ${{ github.event.ref }}:${{ github.event.repository.master_branch }}
echo ::set-output name=head_sha::$(git rev-parse HEAD)
- name: Get Changelog
id: get_changelog
run: |
changelog=$(cat Changelog.md | grep -oPz '(?s)(?<=### ${{ steps.bump_version.outputs.ytdlp_version }}\n{2}).+?(?=\n{2,3}###)') || true
echo "changelog<<EOF" >> $GITHUB_ENV
echo "$changelog" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
- name: Build lazy extractors
id: lazy_extractors
run: python devscripts/make_lazy_extractors.py
- name: Run Make - name: Run Make
run: make all tar run: make all tar
- name: Get SHA2-256SUMS for yt-dlp
id: sha256_bin
run: echo "::set-output name=sha256_bin::$(sha256sum yt-dlp | awk '{print $1}')"
- name: Get SHA2-256SUMS for yt-dlp.tar.gz
id: sha256_tar
run: echo "::set-output name=sha256_tar::$(sha256sum yt-dlp.tar.gz | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp
id: sha512_bin
run: echo "::set-output name=sha512_bin::$(sha512sum yt-dlp | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp.tar.gz
id: sha512_tar
run: echo "::set-output name=sha512_tar::$(sha512sum yt-dlp.tar.gz | awk '{print $1}')"
- name: Install dependencies for pypi
env:
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
if: "env.PYPI_TOKEN != ''"
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build and publish on pypi
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
if: "env.TWINE_PASSWORD != ''"
run: |
rm -rf dist/*
python setup.py sdist bdist_wheel
twine upload dist/*
- name: Install SSH private key
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
if: "env.BREW_TOKEN != ''"
uses: webfactory/ssh-agent@v0.5.3
with:
ssh-private-key: ${{ env.BREW_TOKEN }}
- name: Update Homebrew Formulae
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
if: "env.BREW_TOKEN != ''"
run: |
git clone git@github.com:yt-dlp/homebrew-taps taps/
python3 devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ steps.bump_version.outputs.ytdlp_version }}"
git -C taps/ config user.name github-actions
git -C taps/ config user.email github-actions@example.com
git -C taps/ commit -am 'yt-dlp: ${{ steps.bump_version.outputs.ytdlp_version }}'
git -C taps/ push
- name: Create Release - name: Create Release
id: create_release id: create_release
uses: actions/create-release@v1 uses: actions/create-release@v1
@@ -38,9 +113,14 @@ jobs:
with: with:
tag_name: ${{ steps.bump_version.outputs.ytdlp_version }} tag_name: ${{ steps.bump_version.outputs.ytdlp_version }}
release_name: yt-dlp ${{ steps.bump_version.outputs.ytdlp_version }} release_name: yt-dlp ${{ steps.bump_version.outputs.ytdlp_version }}
commitish: ${{ steps.push_update.outputs.head_sha }}
body: | body: |
Changelog: #### [A description of the various files]((https://github.com/yt-dlp/yt-dlp#release-files)) are in the README
PLACEHOLDER
---
### Changelog:
${{ env.changelog }}
draft: false draft: false
prerelease: false prerelease: false
- name: Upload yt-dlp Unix binary - name: Upload yt-dlp Unix binary
@@ -62,36 +142,81 @@ jobs:
asset_path: ./yt-dlp.tar.gz asset_path: ./yt-dlp.tar.gz
asset_name: yt-dlp.tar.gz asset_name: yt-dlp.tar.gz
asset_content_type: application/gzip asset_content_type: application/gzip
- name: Get SHA2-256SUMS for yt-dlp
id: sha256_file build_macos:
run: echo "::set-output name=sha256_unix::$(sha256sum yt-dlp | awk '{print $1}')" runs-on: macos-11
- name: Get SHA2-512SUMS for yt-dlp needs: build_unix
id: sha512_file outputs:
run: echo "::set-output name=sha512_unix::$(sha512sum yt-dlp | awk '{print $1}')" sha256_macos: ${{ steps.sha256_macos.outputs.sha256_macos }}
- name: Install dependencies for pypi sha512_macos: ${{ steps.sha512_macos.outputs.sha512_macos }}
env: sha256_macos_zip: ${{ steps.sha256_macos_zip.outputs.sha256_macos_zip }}
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }} sha512_macos_zip: ${{ steps.sha512_macos_zip.outputs.sha512_macos_zip }}
if: "env.PYPI_TOKEN != ''"
steps:
- uses: actions/checkout@v2
# In order to create a universal2 application, the version of python3 in /usr/bin has to be used
- name: Install Requirements
run: | run: |
python -m pip install --upgrade pip brew install coreutils
pip install setuptools wheel twine /usr/bin/python3 -m pip install -U --user pip Pyinstaller mutagen pycryptodomex websockets
- name: Build and publish on pypi - name: Bump version
id: bump_version
run: /usr/bin/python3 devscripts/update-version.py
- name: Build lazy extractors
id: lazy_extractors
run: /usr/bin/python3 devscripts/make_lazy_extractors.py
- name: Run PyInstaller Script
run: /usr/bin/python3 pyinst.py --target-architecture universal2 --onefile
- name: Upload yt-dlp MacOS binary
id: upload-release-macos
uses: actions/upload-release-asset@v1
env: env:
TWINE_USERNAME: __token__ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }} with:
if: "env.TWINE_PASSWORD != ''" upload_url: ${{ needs.build_unix.outputs.upload_url }}
run: | asset_path: ./dist/yt-dlp_macos
rm -rf dist/* asset_name: yt-dlp_macos
python setup.py sdist bdist_wheel asset_content_type: application/octet-stream
twine upload dist/* - name: Get SHA2-256SUMS for yt-dlp_macos
id: sha256_macos
run: echo "::set-output name=sha256_macos::$(sha256sum dist/yt-dlp_macos | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp_macos
id: sha512_macos
run: echo "::set-output name=sha512_macos::$(sha512sum dist/yt-dlp_macos | awk '{print $1}')"
- name: Run PyInstaller Script with --onedir
run: /usr/bin/python3 pyinst.py --target-architecture universal2 --onedir
- uses: papeloto/action-zip@v1
with:
files: ./dist/yt-dlp_macos
dest: ./dist/yt-dlp_macos.zip
- name: Upload yt-dlp MacOS onedir
id: upload-release-macos-zip
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp_macos.zip
asset_name: yt-dlp_macos.zip
asset_content_type: application/zip
- name: Get SHA2-256SUMS for yt-dlp_macos.zip
id: sha256_macos_zip
run: echo "::set-output name=sha256_macos_zip::$(sha256sum dist/yt-dlp_macos.zip | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp_macos
id: sha512_macos_zip
run: echo "::set-output name=sha512_macos_zip::$(sha512sum dist/yt-dlp_macos.zip | awk '{print $1}')"
build_windows: build_windows:
runs-on: windows-latest runs-on: windows-latest
needs: build_unix needs: build_unix
outputs: outputs:
sha256_windows: ${{ steps.sha256_file_win.outputs.sha256_windows }} sha256_win: ${{ steps.sha256_win.outputs.sha256_win }}
sha512_windows: ${{ steps.sha512_file_win.outputs.sha512_windows }} sha512_win: ${{ steps.sha512_win.outputs.sha512_win }}
sha256_py2exe: ${{ steps.sha256_py2exe.outputs.sha256_py2exe }}
sha512_py2exe: ${{ steps.sha512_py2exe.outputs.sha512_py2exe }}
sha256_win_zip: ${{ steps.sha256_win_zip.outputs.sha256_win_zip }}
sha512_win_zip: ${{ steps.sha512_win_zip.outputs.sha512_win_zip }}
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v2
@@ -100,17 +225,19 @@ jobs:
uses: actions/setup-python@v2 uses: actions/setup-python@v2
with: with:
python-version: '3.8' python-version: '3.8'
- name: Upgrade pip and enable wheel support
run: python -m pip install --upgrade pip setuptools wheel
- name: Install Requirements - name: Install Requirements
run: pip install pyinstaller mutagen pycryptodome websockets # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
run: |
python -m pip install --upgrade pip setuptools wheel py2exe
pip install "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodomex websockets
- name: Bump version - name: Bump version
id: bump_version id: bump_version
run: python devscripts/update-version.py run: python devscripts/update-version.py
- name: Print version - name: Build lazy extractors
run: echo "${{ steps.bump_version.outputs.ytdlp_version }}" id: lazy_extractors
run: python devscripts/make_lazy_extractors.py
- name: Run PyInstaller Script - name: Run PyInstaller Script
run: python pyinst.py 64 run: python pyinst.py
- name: Upload yt-dlp.exe Windows binary - name: Upload yt-dlp.exe Windows binary
id: upload-release-windows id: upload-release-windows
uses: actions/upload-release-asset@v1 uses: actions/upload-release-asset@v1
@@ -122,19 +249,61 @@ jobs:
asset_name: yt-dlp.exe asset_name: yt-dlp.exe
asset_content_type: application/vnd.microsoft.portable-executable asset_content_type: application/vnd.microsoft.portable-executable
- name: Get SHA2-256SUMS for yt-dlp.exe - name: Get SHA2-256SUMS for yt-dlp.exe
id: sha256_file_win id: sha256_win
run: echo "::set-output name=sha256_windows::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA256).Hash.ToLower())" run: echo "::set-output name=sha256_win::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp.exe - name: Get SHA2-512SUMS for yt-dlp.exe
id: sha512_file_win id: sha512_win
run: echo "::set-output name=sha512_windows::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA512).Hash.ToLower())" run: echo "::set-output name=sha512_win::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA512).Hash.ToLower())"
- name: Run PyInstaller Script with --onedir
run: python pyinst.py --onedir
- uses: papeloto/action-zip@v1
with:
files: ./dist/yt-dlp
dest: ./dist/yt-dlp_win.zip
- name: Upload yt-dlp Windows onedir
id: upload-release-windows-zip
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp_win.zip
asset_name: yt-dlp_win.zip
asset_content_type: application/zip
- name: Get SHA2-256SUMS for yt-dlp_win.zip
id: sha256_win_zip
run: echo "::set-output name=sha256_win_zip::$((Get-FileHash dist\yt-dlp_win.zip -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp_win.zip
id: sha512_win_zip
run: echo "::set-output name=sha512_win_zip::$((Get-FileHash dist\yt-dlp_win.zip -Algorithm SHA512).Hash.ToLower())"
- name: Run py2exe Script
run: python setup.py py2exe
- name: Upload yt-dlp_min.exe Windows binary
id: upload-release-windows-py2exe
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp.exe
asset_name: yt-dlp_min.exe
asset_content_type: application/vnd.microsoft.portable-executable
- name: Get SHA2-256SUMS for yt-dlp_min.exe
id: sha256_py2exe
run: echo "::set-output name=sha256_py2exe::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp_min.exe
id: sha512_py2exe
run: echo "::set-output name=sha512_py2exe::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA512).Hash.ToLower())"
build_windows32: build_windows32:
runs-on: windows-latest runs-on: windows-latest
needs: [build_unix, build_windows] needs: build_unix
outputs: outputs:
sha256_windows32: ${{ steps.sha256_file_win32.outputs.sha256_windows32 }} sha256_win32: ${{ steps.sha256_win32.outputs.sha256_win32 }}
sha512_windows32: ${{ steps.sha512_file_win32.outputs.sha512_windows32 }} sha512_win32: ${{ steps.sha512_win32.outputs.sha512_win32 }}
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v2
@@ -144,17 +313,18 @@ jobs:
with: with:
python-version: '3.7' python-version: '3.7'
architecture: 'x86' architecture: 'x86'
- name: Upgrade pip and enable wheel support
run: python -m pip install --upgrade pip setuptools wheel
- name: Install Requirements - name: Install Requirements
run: pip install pyinstaller mutagen pycryptodome websockets run: |
python -m pip install --upgrade pip setuptools wheel
pip install "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodomex websockets
- name: Bump version - name: Bump version
id: bump_version id: bump_version
run: python devscripts/update-version.py run: python devscripts/update-version.py
- name: Print version - name: Build lazy extractors
run: echo "${{ steps.bump_version.outputs.ytdlp_version }}" id: lazy_extractors
run: python devscripts/make_lazy_extractors.py
- name: Run PyInstaller Script for 32 Bit - name: Run PyInstaller Script for 32 Bit
run: python pyinst.py 32 run: python pyinst.py
- name: Upload Executable yt-dlp_x86.exe - name: Upload Executable yt-dlp_x86.exe
id: upload-release-windows32 id: upload-release-windows32
uses: actions/upload-release-asset@v1 uses: actions/upload-release-asset@v1
@@ -166,28 +336,36 @@ jobs:
asset_name: yt-dlp_x86.exe asset_name: yt-dlp_x86.exe
asset_content_type: application/vnd.microsoft.portable-executable asset_content_type: application/vnd.microsoft.portable-executable
- name: Get SHA2-256SUMS for yt-dlp_x86.exe - name: Get SHA2-256SUMS for yt-dlp_x86.exe
id: sha256_file_win32 id: sha256_win32
run: echo "::set-output name=sha256_windows32::$((Get-FileHash dist\yt-dlp_x86.exe -Algorithm SHA256).Hash.ToLower())" run: echo "::set-output name=sha256_win32::$((Get-FileHash dist\yt-dlp_x86.exe -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp_x86.exe - name: Get SHA2-512SUMS for yt-dlp_x86.exe
id: sha512_file_win32 id: sha512_win32
run: echo "::set-output name=sha512_windows32::$((Get-FileHash dist\yt-dlp_x86.exe -Algorithm SHA512).Hash.ToLower())" run: echo "::set-output name=sha512_win32::$((Get-FileHash dist\yt-dlp_x86.exe -Algorithm SHA512).Hash.ToLower())"
finish: finish:
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: [build_unix, build_windows, build_windows32] needs: [build_unix, build_windows, build_windows32, build_macos]
steps: steps:
- name: Make SHA2-256SUMS file - name: Make SHA2-256SUMS file
env: env:
SHA256_WINDOWS: ${{ needs.build_windows.outputs.sha256_windows }} SHA256_BIN: ${{ needs.build_unix.outputs.sha256_bin }}
SHA256_WINDOWS32: ${{ needs.build_windows32.outputs.sha256_windows32 }} SHA256_TAR: ${{ needs.build_unix.outputs.sha256_tar }}
SHA256_UNIX: ${{ needs.build_unix.outputs.sha256_unix }} SHA256_WIN: ${{ needs.build_windows.outputs.sha256_win }}
YTDLP_VERSION: ${{ needs.build_unix.outputs.ytdlp_version }} SHA256_PY2EXE: ${{ needs.build_windows.outputs.sha256_py2exe }}
SHA256_WIN_ZIP: ${{ needs.build_windows.outputs.sha256_win_zip }}
SHA256_WIN32: ${{ needs.build_windows32.outputs.sha256_win32 }}
SHA256_MACOS: ${{ needs.build_macos.outputs.sha256_macos }}
SHA256_MACOS_ZIP: ${{ needs.build_macos.outputs.sha256_macos_zip }}
run: | run: |
echo "version:${{ env.YTDLP_VERSION }}" >> SHA2-256SUMS echo "${{ env.SHA256_BIN }} yt-dlp" >> SHA2-256SUMS
echo "yt-dlp.exe:${{ env.SHA256_WINDOWS }}" >> SHA2-256SUMS echo "${{ env.SHA256_TAR }} yt-dlp.tar.gz" >> SHA2-256SUMS
echo "yt-dlp_x86.exe:${{ env.SHA256_WINDOWS32 }}" >> SHA2-256SUMS echo "${{ env.SHA256_WIN }} yt-dlp.exe" >> SHA2-256SUMS
echo "yt-dlp:${{ env.SHA256_UNIX }}" >> SHA2-256SUMS echo "${{ env.SHA256_PY2EXE }} yt-dlp_min.exe" >> SHA2-256SUMS
echo "${{ env.SHA256_WIN32 }} yt-dlp_x86.exe" >> SHA2-256SUMS
echo "${{ env.SHA256_WIN_ZIP }} yt-dlp_win.zip" >> SHA2-256SUMS
echo "${{ env.SHA256_MACOS }} yt-dlp_macos" >> SHA2-256SUMS
echo "${{ env.SHA256_MACOS_ZIP }} yt-dlp_macos.zip" >> SHA2-256SUMS
- name: Upload 256SUMS file - name: Upload 256SUMS file
id: upload-sums id: upload-sums
uses: actions/upload-release-asset@v1 uses: actions/upload-release-asset@v1
@@ -200,13 +378,23 @@ jobs:
asset_content_type: text/plain asset_content_type: text/plain
- name: Make SHA2-512SUMS file - name: Make SHA2-512SUMS file
env: env:
SHA512_WINDOWS: ${{ needs.build_windows.outputs.sha512_windows }} SHA512_BIN: ${{ needs.build_unix.outputs.sha512_bin }}
SHA512_WINDOWS32: ${{ needs.build_windows32.outputs.sha512_windows32 }} SHA512_TAR: ${{ needs.build_unix.outputs.sha512_tar }}
SHA512_UNIX: ${{ needs.build_unix.outputs.sha512_unix }} SHA512_WIN: ${{ needs.build_windows.outputs.sha512_win }}
SHA512_PY2EXE: ${{ needs.build_windows.outputs.sha512_py2exe }}
SHA512_WIN_ZIP: ${{ needs.build_windows.outputs.sha512_win_zip }}
SHA512_WIN32: ${{ needs.build_windows32.outputs.sha512_win32 }}
SHA512_MACOS: ${{ needs.build_macos.outputs.sha512_macos }}
SHA512_MACOS_ZIP: ${{ needs.build_macos.outputs.sha512_macos_zip }}
run: | run: |
echo "${{ env.SHA512_WINDOWS }} yt-dlp.exe" >> SHA2-512SUMS echo "${{ env.SHA512_BIN }} yt-dlp" >> SHA2-512SUMS
echo "${{ env.SHA512_WINDOWS32 }} yt-dlp_x86.exe" >> SHA2-512SUMS echo "${{ env.SHA512_TAR }} yt-dlp.tar.gz" >> SHA2-512SUMS
echo "${{ env.SHA512_UNIX }} yt-dlp" >> SHA2-512SUMS echo "${{ env.SHA512_WIN }} yt-dlp.exe" >> SHA2-512SUMS
echo "${{ env.SHA512_WIN_ZIP }} yt-dlp_win.zip" >> SHA2-512SUMS
echo "${{ env.SHA512_PY2EXE }} yt-dlp_min.exe" >> SHA2-512SUMS
echo "${{ env.SHA512_WIN32 }} yt-dlp_x86.exe" >> SHA2-512SUMS
echo "${{ env.SHA512_MACOS }} yt-dlp_macos" >> SHA2-512SUMS
echo "${{ env.SHA512_MACOS_ZIP }} yt-dlp_macos.zip" >> SHA2-512SUMS
- name: Upload 512SUMS file - name: Upload 512SUMS file
id: upload-512sums id: upload-512sums
uses: actions/upload-release-asset@v1 uses: actions/upload-release-asset@v1
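
The `finish` job above writes one `<hash> <filename>` line per release asset into `SHA2-256SUMS` (and likewise into `SHA2-512SUMS`). A minimal sketch of checking a downloaded asset against that file; the verification script itself is illustrative and not part of the repository:

```python
# Illustrative only: verify a downloaded release asset against the
# SHA2-256SUMS file that the 'finish' job assembles above.
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open('rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def verify(sums_file: Path, asset: Path) -> bool:
    for line in sums_file.read_text().splitlines():
        expected, _, name = line.partition(' ')  # '<hash> <filename>' per line
        if name.strip() == asset.name:
            return expected.lower() == sha256_of(asset)
    raise KeyError(f'{asset.name} not listed in {sums_file}')

if __name__ == '__main__':
    asset = Path(sys.argv[1])
    ok = verify(Path('SHA2-256SUMS'), asset)
    print(f'{asset.name}: {"OK" if ok else "MISMATCH"}')
    sys.exit(0 if ok else 1)
```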

View File

@@ -10,7 +10,7 @@ jobs:
matrix: matrix:
os: [ubuntu-18.04] os: [ubuntu-18.04]
# py3.9 is in quick-test # py3.9 is in quick-test
python-version: [3.7, 3.8, pypy-3.6, pypy-3.7] python-version: [3.7, 3.8, 3.10-dev, pypy-3.6, pypy-3.7]
run-tests-ext: [sh] run-tests-ext: [sh]
include: include:
# atleast one of the tests must be in windows # atleast one of the tests must be in windows
@@ -23,11 +23,9 @@ jobs:
uses: actions/setup-python@v2 uses: actions/setup-python@v2
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
- name: Install nose - name: Install pytest
run: pip install nose run: pip install pytest
- name: Run tests - name: Run tests
continue-on-error: False continue-on-error: False
env: run: ./devscripts/run_tests.${{ matrix.run-tests-ext }} core
YTDL_TEST_SET: core
run: ./devscripts/run_tests.${{ matrix.run-tests-ext }}
# Linter is in quick-test # Linter is in quick-test

View File

@@ -9,7 +9,7 @@ jobs:
fail-fast: true fail-fast: true
matrix: matrix:
os: [ubuntu-18.04] os: [ubuntu-18.04]
python-version: [3.7, 3.8, 3.9, pypy-3.6, pypy-3.7] python-version: [3.7, 3.8, 3.9, 3.10-dev, pypy-3.6, pypy-3.7]
run-tests-ext: [sh] run-tests-ext: [sh]
include: include:
- os: windows-latest - os: windows-latest
@@ -21,10 +21,8 @@ jobs:
uses: actions/setup-python@v2 uses: actions/setup-python@v2
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
- name: Install nose - name: Install pytest
run: pip install nose run: pip install pytest
- name: Run tests - name: Run tests
continue-on-error: true continue-on-error: true
env: run: ./devscripts/run_tests.${{ matrix.run-tests-ext }} download
YTDL_TEST_SET: download
run: ./devscripts/run_tests.${{ matrix.run-tests-ext }}

View File

@@ -11,12 +11,10 @@ jobs:
uses: actions/setup-python@v2 uses: actions/setup-python@v2
with: with:
python-version: 3.9 python-version: 3.9
- name: Install nose - name: Install test requirements
run: pip install nose run: pip install pytest pycryptodomex
- name: Run tests - name: Run tests
env: run: ./devscripts/run_tests.sh core
YTDL_TEST_SET: core
run: ./devscripts/run_tests.sh
flake8: flake8:
name: Linter name: Linter
if: "!contains(github.event.head_commit.message, 'ci skip all')" if: "!contains(github.event.head_commit.message, 'ci skip all')"
@@ -29,5 +27,7 @@ jobs:
python-version: 3.9 python-version: 3.9
- name: Install flake8 - name: Install flake8
run: pip install flake8 run: pip install flake8
- name: Make lazy extractors
run: python devscripts/make_lazy_extractors.py
- name: Run flake8 - name: Run flake8
run: flake8 . run: flake8 .

.gitignore
View File

@@ -2,7 +2,8 @@
*.conf *.conf
*.spec *.spec
cookies cookies
cookies.txt *cookies.txt
.netrc
# Downloaded # Downloaded
*.srt *.srt
@@ -19,6 +20,8 @@ cookies.txt
*.wav *.wav
*.ape *.ape
*.mkv *.mkv
*.flac
*.avi
*.swf *.swf
*.part *.part
*.part-* *.part-*
@@ -33,17 +36,20 @@ cookies.txt
*.info.json *.info.json
*.live_chat.json *.live_chat.json
*.jpg *.jpg
*.jpeg
*.png *.png
*.webp *.webp
*.annotations.xml *.annotations.xml
*.description *.description
.cache/
# Allow config/media files in testdata # Allow config/media files in testdata
!test/testdata/** !test/**
# Python # Python
*.pyc *.pyc
*.pyo *.pyo
.pytest_cache
wine-py2exe/ wine-py2exe/
py2exe.log py2exe.log
build/ build/
@@ -78,6 +84,7 @@ README.txt
*.tar.gz *.tar.gz
*.zsh *.zsh
*.spec *.spec
test/testdata/player-*.js
# Binary # Binary
/youtube-dl /youtube-dl

View File

@@ -1,26 +1,59 @@
**Please include the full output of youtube-dl when run with `-v`**, i.e. **add** `-v` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this: # CONTRIBUTING TO YT-DLP
- [OPENING AN ISSUE](#opening-an-issue)
- [Is the description of the issue itself sufficient?](#is-the-description-of-the-issue-itself-sufficient)
- [Are you using the latest version?](#are-you-using-the-latest-version)
- [Is the issue already documented?](#is-the-issue-already-documented)
- [Why are existing options not enough?](#why-are-existing-options-not-enough)
- [Have you read and understood the changes, between youtube-dl and yt-dlp](#have-you-read-and-understood-the-changes-between-youtube-dl-and-yt-dlp)
- [Is there enough context in your bug report?](#is-there-enough-context-in-your-bug-report)
- [Does the issue involve one problem, and one problem only?](#does-the-issue-involve-one-problem-and-one-problem-only)
- [Is anyone going to need the feature?](#is-anyone-going-to-need-the-feature)
- [Is your question about yt-dlp?](#is-your-question-about-yt-dlp)
- [DEVELOPER INSTRUCTIONS](#developer-instructions)
- [Adding new feature or making overarching changes](#adding-new-feature-or-making-overarching-changes)
- [Adding support for a new site](#adding-support-for-a-new-site)
- [yt-dlp coding conventions](#yt-dlp-coding-conventions)
- [Mandatory and optional metafields](#mandatory-and-optional-metafields)
- [Provide fallbacks](#provide-fallbacks)
- [Regular expressions](#regular-expressions)
- [Long lines policy](#long-lines-policy)
- [Inline values](#inline-values)
- [Collapse fallbacks](#collapse-fallbacks)
- [Trailing parentheses](#trailing-parentheses)
- [Use convenience conversion and parsing functions](#use-convenience-conversion-and-parsing-functions)
- [EMBEDDING YT-DLP](README.md#embedding-yt-dlp)
# OPENING AN ISSUE
Bugs and suggestions should be reported at: [yt-dlp/yt-dlp/issues](https://github.com/yt-dlp/yt-dlp/issues). Unless you were prompted to or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in our [discord server](https://discord.gg/H5MNcFW63r).
**Please include the full output of yt-dlp when run with `-Uv`**, i.e. **add** `-Uv` flag to **your command line**, copy the **whole** output and post it in the issue body wrapped in \`\`\` for better formatting. It should look similar to this:
``` ```
$ youtube-dl -v <your command line> $ yt-dlp -Uv <your command line>
[debug] System config: [] [debug] Command-line config: ['-v', 'demo.com']
[debug] User config: [] [debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] Command-line args: [u'-v', u'https://www.youtube.com/watch?v=BaW_jenozKcj'] [debug] yt-dlp version 2021.09.25 (zip)
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251 [debug] Python version 3.8.10 (CPython 64bit) - Linux-5.4.0-74-generic-x86_64-with-glibc2.29
[debug] youtube-dl version 2015.12.06 [debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4
[debug] Git HEAD: 135392e
[debug] Python version 2.6.6 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {} [debug] Proxy map: {}
Current Build Hash 25cc412d1d3c0725a1f2f5b7e4682f6fb40e6d15f7024e96f7afd572e9919535
yt-dlp is up to date (2021.09.25)
... ...
``` ```
**Do not post screenshots of verbose logs; only plain text is acceptable.**

The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore will be closed as `incomplete`.

The templates provided for the issues should be completed and **not removed**; this helps aid the resolution of the issue.

Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):

### Is the description of the issue itself sufficient?

We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources.
So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious

- What the problem is
- How it could be fixed
- What your proposed solution would look like

If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. We often get frustrated by these issues, since the only possible way for us to move forward on them is to ask for clarification over and over.
For bug reports, this means that your report should contain the **complete** output of yt-dlp when called with the `-Uv` flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.

If the error is `ERROR: Unable to extract ...` and you cannot reproduce it from multiple countries, add `--write-pages` and upload the `.dump` files you get [somewhere](https://gist.github.com).

**Site support requests must contain an example URL**. An example URL is a URL you might want to download, like `https://www.youtube.com/watch?v=BaW_jenozKc`. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. `https://www.youtube.com/`) is *not* an example URL.

### Are you using the latest version?

Before reporting any issue, type `yt-dlp -U`. This should report that you're up-to-date. This goes for feature requests as well.

### Is the issue already documented?

Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2021.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.

Additionally, it is helpful to see if the issue has already been documented in the [youtube-dl issue tracker](https://github.com/ytdl-org/youtube-dl/issues). If similar issues have already been reported in youtube-dl (but not in our issue tracker), links to them can be included in your issue report here.

### Why are existing options not enough?

Before requesting a new feature, please have a quick peek at [the list of supported options](README.md#usage-and-options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do *not* solve your problem.

### Have you read and understood the changes between youtube-dl and yt-dlp?

There are many changes between youtube-dl and yt-dlp [(changes to default behavior)](README.md#differences-in-default-behavior), and some of the options available have a different behaviour in yt-dlp or have been removed altogether [(list of changes to options)](README.md#deprecated-options). Make sure you have read and understood the differences in the options and how this may impact your downloads before opening an issue.
### Is there enough context in your bug report?

### Does the issue involve one problem, and one problem only?

Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.

In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White House podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of yt-dlp that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.

### Is anyone going to need the feature?

Only post features that you (or an incapacitated friend you can personally talk to) require. Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.

### Is your question about yt-dlp?
Some bug reports are completely unrelated to yt-dlp and relate to a different, or even the reporter's own, application. Please make sure that you are actually using yt-dlp. If you are using a UI for yt-dlp, report the bug to the maintainer of the actual application providing the UI. In general, if you are unable to provide the verbose log, you should not be opening the issue here.
If the issue is with `youtube-dl` (the project yt-dlp is forked from) and not with yt-dlp, the issue should be raised in the youtube-dl project.
### Are you willing to share account details if needed?
The maintainers and potential contributors of the project often do not have an account for the website you are asking support for. So any developer interested in solving your issue may ask you for account details. It is your personal discretion whether you are willing to share the account in order for the developer to try and solve your issue. However, if you are unwilling or unable to provide details, they obviously cannot work on the issue and it cannot be solved unless some developer who both has an account and is willing/able to contribute decides to solve it.
By sharing an account with anyone, you agree to bear all risks associated with it. The maintainers and yt-dlp can't be held responsible for any misuse of the credentials.
While these steps won't necessarily ensure that no misuse of the account takes place, these are still some good practices to follow.
- Look for people with `Member` (maintainers of the project) or `Contributor` (people who have previously contributed code) tag on their messages.
- Change the password before sharing the account to something random (use [this](https://passwordsgenerator.net/) if you don't have a random password generator).
- Change the password after receiving the account back.
# DEVELOPER INSTRUCTIONS

Most users do not need to build yt-dlp and can [download the builds](https://github.com/yt-dlp/yt-dlp/releases) or get them via [the other installation methods](README.md#installation).

To run yt-dlp as a developer, you don't need to build anything either. Simply execute

    python -m yt_dlp

To run the test, simply invoke your favorite test runner, or execute a test file directly; any of the following work:

    python -m unittest discover
    python test/test_download.py
    nosetests
    pytest

See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases.

If you want to create a build of yt-dlp yourself, you can follow the instructions [here](README.md#compile).
## Adding new feature or making overarching changes

Before you start writing code for implementing a new feature, open an issue explaining your feature request and at least one use case. This allows the maintainers to decide whether such a feature is desired for the project in the first place, and will provide an avenue to discuss some implementation details. If you open a pull request for a new feature without discussing with us first, do not be surprised when we ask for large changes to the code, or even reject it outright.

The same applies to changes to the documentation, code style, or overarching changes to the architecture.
## Adding support for a new site
If you want to add support for a new site, first of all **make sure** this site is **not dedicated to [copyright infringement](https://www.github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. yt-dlp does **not support** such sites thus pull requests adding support for them **will be rejected**.
After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called `yourextractor`):

1. [Fork this repository](https://github.com/yt-dlp/yt-dlp/fork)
1. Check out the source code with:

        git clone git@github.com:YOUR_GITHUB_USERNAME/yt-dlp.git

1. Start a new git branch with

        cd yt-dlp
        git checkout -b yourextractor

1. Start with this simple template and save it to `yt_dlp/extractor/yourextractor.py`:
    ```python
    # coding: utf-8
    from .common import InfoExtractor


    class YourExtractorIE(InfoExtractor):
        _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
        _TESTS = [{
            'url': 'https://yourextractor.com/watch/42',
            'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
            'info_dict': {
                'id': '42',
                'ext': 'mp4',
                'title': 'Video title goes here',
                'thumbnail': r're:^https?://.*\.jpg$',
                # TODO more properties, either as:
                # * A value
                # * MD5 checksum; start the string with md5:
                # * A regular expression; start the string with re:
                # * Any Python type (for example int or float)
            }
        }]

        def _real_extract(self, url):
            video_id = self._match_id(url)
            webpage = self._download_webpage(url, video_id)

            # TODO more code goes here, for example ...
            title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')

            return {
                'id': video_id,
                'title': title,
                'description': self._og_search_description(webpage),
                'uploader': self._search_regex(r'<div[^>]+id="uploader"[^>]*>([^<]+)<', webpage, 'uploader', fatal=False),
                # TODO more properties (see yt_dlp/extractor/common.py)
            }
    ```
1. Add an import in [`yt_dlp/extractor/extractors.py`](yt_dlp/extractor/extractors.py).
1. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, the tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test's dict are not counted in. You can also run all the tests in one go with `TestDownload.test_YourExtractor_all`.
1. Make sure you have at least one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the particular test is disabled from running (see the sketch after this list).
1. Have a look at [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](yt_dlp/extractor/common.py#L91-L426). Add tests and code for as many as you want.
1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):

        $ flake8 yt_dlp/extractor/yourextractor.py
1. Make sure your code works under all [Python](https://www.python.org/) versions supported by yt-dlp, namely CPython and PyPy for Python 3.6 and above. Backward compatibility is not required for even older versions of Python.
1. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files, [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:

        $ git add yt_dlp/extractor/extractors.py
        $ git add yt_dlp/extractor/yourextractor.py
        $ git commit -m '[yourextractor] Add extractor'
        $ git push origin yourextractor

1. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
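As a rough sketch of the `skip` parameter mentioned in the testing steps above (the URL, fields and reason are made up for illustration), such a test entry might look like:

```python
_TESTS = [{
    'url': 'https://yourextractor.com/watch/42',
    'info_dict': {
        'id': '42',
        'ext': 'mp4',
        'title': 'Video title goes here',
    },
    # Explain why the test cannot be run automatically
    'skip': 'Requires a logged-in account',
}]
```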
In any case, thank you very much for your contributions!

## yt-dlp coding conventions

This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.

Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters out of your control and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes thus keeping old yt-dlp versions working. Even though this breakage issue may be easily fixed by a new version of yt-dlp, this could take some time, during which the extractor will remain broken.

### Mandatory and optional metafields

For extraction to work yt-dlp relies on metadata your extractor extracts and provides to yt-dlp expressed by an [information dictionary](yt_dlp/extractor/common.py#L91-L426) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by yt-dlp:
- `id` (media identifier)
- `title` (media title)
- `url` (media download URL) or `formats`

The aforementioned metafields are the critical data without which the extraction does not make any sense; if any of them fail to be extracted, the extractor is considered completely broken. While, in fact, only `id` is technically mandatory, yt-dlp also treats `title` as mandatory for compatibility reasons. The extractor is allowed to return the info dict without `url` or `formats` in some special cases, if it allows the user to extract useful information with `--ignore-no-formats-error`, e.g. when the video is a live stream that has not started yet.
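For instance, here is a rough sketch of how a not-yet-started live stream might be handled under that rule (the page layout and the `live_status` value are assumptions made purely for illustration):

```python
def _real_extract(self, url):
    video_id = self._match_id(url)
    webpage = self._download_webpage(url, video_id)

    # No 'url' or 'formats' can be extracted before the stream starts;
    # with --ignore-no-formats-error the user can still see this metadata
    return {
        'id': video_id,
        'title': self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title'),
        'live_status': 'is_upcoming',
    }
```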
[Any field](yt_dlp/extractor/common.py#L219-L426) apart from the aforementioned ones is considered **optional**. That means that extraction should be **tolerant** to situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** in order not to break the extraction of general purpose mandatory fields.
#### Example

Say you have some source dictionary `meta` that you've fetched as JSON with an HTTP request and it has a key `summary`. Assume at this point `meta`'s layout is:
```python
{
    ...
    "summary": "some fancy summary text",
    "user": {
        "name": "uploader name"
    },
    ...
}
```
Assume you want to extract `summary` and put it into the resulting info dict as `description`. Since `description` is an optional meta field, you should be ready for this key to be missing from the `meta` dict, so you should extract it like `description = meta.get('summary')` and **not** like `description = meta['summary']`.

The latter will break the extraction process with `KeyError` if `summary` disappears from `meta` at some later time, but with the former approach extraction will just go ahead with `description` set to `None`, which is perfectly fine (remember `None` is equivalent to the absence of data).

If the data is nested, do not use `.get` chains, but instead make use of the utility functions `try_get` or `traverse_obj`.

Considering the above `meta` again, assume you want to extract `["user"]["name"]` and put it in the resulting info dict as `uploader`:
```python
uploader = try_get(meta, lambda x: x['user']['name']) # correct
```
or
```python
uploader = traverse_obj(meta, ('user', 'name')) # correct
```
and not like:
```python
uploader = meta['user']['name'] # incorrect
```
or
```python
uploader = meta.get('user', {}).get('name') # incorrect
```
Similarly, you should pass `fatal=False` when extracting optional data from a webpage with `_search_regex`, `_html_search_regex` or similar methods, for instance:

```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', fatal=False)
```

With `fatal` set to `False`, a failure to extract `description` will only emit a warning and the extraction will continue. You can additionally pass `default=None` to silently continue with `description` set to `None`, which is perfectly fine for metafields that may or may not be present.
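For completeness, a sketch of the `default` variant just described (the regex is the same illustrative one as above):

```python
description = self._search_regex(
    r'<span[^>]+id="title"[^>]*>([^<]+)<',
    webpage, 'description', default=None)
```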
Another thing to remember is not to try to iterate over `None`.

Say you extracted a list of thumbnails into `thumbnail_data` using `try_get` and now want to iterate over them:
```python
thumbnail_data = try_get(...)
thumbnails = [{
'url': item['url']
} for item in thumbnail_data or []] # correct
```
and not like:
```python
thumbnail_data = try_get(...)
thumbnails = [{
'url': item['url']
} for item in thumbnail_data] # incorrect
```
In the latter case, `thumbnail_data` will be `None` if the field was not found and this will cause the loop `for item in thumbnail_data` to raise a fatal error. Using `for item in thumbnail_data or []` avoids this error and results in `thumbnails` being set to an empty list instead.
### Provide fallbacks

When extracting metadata try to do so from multiple sources. For example if `title` is present in several places, try extracting from at least some of them. This makes it more future-proof in case some of the sources become unavailable.

#### Example

Say `meta` from the previous example has a `title` and you are about to extract it. Since `title` is a mandatory meta field you should end up with something like:

```python
title = meta.get('title') or self._og_search_title(webpage)
```

This code will try to extract from `meta` first and if it fails it will try extracting `og:title` from a `webpage`.

### Regular expressions

#### Don't capture groups you don't use
Capturing group must be an indication that it's used somewhere in the code. Any group that is not used must be non capturing; for example, don't capture the `id` attribute name here, since you can't use it for anything anyway. Write `r'(?:id|ID)=(?P<id>\d+)'` rather than:

```python
r'(id|ID)=(?P<id>\d+)'
```
#### Make regular expressions relaxed and flexible

When using regular expressions try to write them fuzzy, relaxed and flexible, skipping insignificant parts that are more likely to change, allowing both single and double quotes for quoted values and so on.

##### Example

Say you need to extract `title` from the following HTML code:

```html
<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">some fancy title</span>
```
The code for that task should look similar to:

```python
title = self._search_regex(  # correct
    r'<span[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')
```

Or even better:

```python
title = self._search_regex(  # correct
    r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)',
    webpage, 'title', group='title')
```
Note how you tolerate potential changes in the `style` attribute's value or switch from `class` attribute to `id`.
The code definitely should not look like:

```python
title = self._search_regex(  # incorrect
    r'<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">(.*?)</span>',
    webpage, 'title', group='title')
```
or even
```python
title = self._search_regex( # incorrect
r'<span style=".*?" class="title">(.*?)</span>',
webpage, 'title', group='title')
```
Here the presence or absence of other attributes including `style` is irrelevant for the data we need, and so the regex must not depend on it.
### Long lines policy

There is a soft limit to keep lines of code under 100 characters long. This means it should be respected if possible and if it does not make readability and code maintenance worse. Sometimes, it may be reasonable to go up to 120 characters and sometimes even 80 can be unreadable. Keep in mind that this is not a hard limit and is just one of many tools to make the code more readable.

For example, you should **never** split long string literals like URLs or some other often copied entities over multiple lines to fit this limit.

### Inline values

Extracting variables is acceptable for reducing code duplication and improving readability of complex expressions. However, you should avoid extracting variables used only once and moving them to opposite parts of the extractor file, which makes the code difficult to follow.

#### Example

Correct:

```python
title = self._html_search_regex(r'<title>([^<]+)</title>', webpage, 'title')
```

Incorrect:

```python
TITLE_RE = r'<title>([^<]+)</title>'
# ...some lines of code...
title = self._html_search_regex(TITLE_RE, webpage, 'title')
```
### Collapse fallbacks

Multiple fallback values can quickly become unwieldy. Collapse multiple fallback values into a single expression via a list of patterns.

Methods supporting list of patterns are: `_search_regex`, `_html_search_regex`, `_og_search_property`, `_html_search_meta`.
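For instance, a sketch of collapsing several fallbacks into a single call by passing a list of patterns (the meta names here are illustrative):

```python
description = self._html_search_meta(
    ['og:description', 'description', 'twitter:description'],
    webpage, 'description', default=None)
```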
### Trailing parentheses

Always move trailing parentheses after the last argument.

Note that this *does not* apply to braces `}` or square brackets `]`, both of which should be closed on a new line.

#### Example

Correct:

```python
    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list)
```

Incorrect:

```python
    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list,
)
```
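And a small sketch of the brace/bracket note above (the field names are illustrative):

```python
return {
    'id': video_id,
    'title': title,
    'thumbnails': [{
        'url': thumbnail_url,
    }],  # braces and brackets close on a new line, unlike the parentheses above
}
```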
### Use convenience conversion and parsing functions

Wrap all extracted numeric data into safe functions from [`yt_dlp/utils.py`](yt_dlp/utils.py): `int_or_none`, `float_or_none`. Use them for string to number conversions as well.

Use `url_or_none` for safe URL processing.

Use `try_get`, `dict_get` and `traverse_obj` for safe metadata extraction from parsed JSON.

Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta fields extraction, `parse_resolution`, `parse_duration` for `duration` extraction, `parse_age_limit` for `age_limit` extraction.
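A rough sketch of a few of these helpers in use (the `meta` keys and the exact values are made up for illustration):

```python
from yt_dlp.utils import (
    parse_age_limit,
    parse_count,
    parse_duration,
    unified_strdate,
    unified_timestamp,
)

# 'meta' is some parsed JSON dict; the keys are illustrative
upload_date = unified_strdate(meta.get('publishedAt'))  # e.g. '20211110'
timestamp = unified_timestamp(meta.get('publishedAt'))  # Unix timestamp or None
duration = parse_duration(meta.get('length'))           # e.g. '1:23' -> 83.0
view_count = parse_count(meta.get('views'))             # e.g. '1.2M' -> 1200000
age_limit = parse_age_limit(meta.get('rating'))         # e.g. 'PG-13' -> 13
```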
Explore [`yt_dlp/utils.py`](yt_dlp/utils.py) for more useful convenience functions.

#### More examples

##### Safely extract optional description from parsed JSON

```python
description = traverse_obj(response, ('result', 'video', 'summary'), expected_type=str)
```

##### Safely extract more optional metadata

```python
video = traverse_obj(response, ('result', 'video', 0), default={}, expected_type=dict)
description = video.get('summary')
duration = float_or_none(video.get('durationMs'), scale=1000)
view_count = int_or_none(video.get('views'))
```
# EMBEDDING YT-DLP
See [README.md#embedding-yt-dlp](README.md#embedding-yt-dlp) for instructions on how to embed yt-dlp in another Python program.
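As a rough sketch (see the README section for the full range of options and details), embedding boils down to something like:

```python
from yt_dlp import YoutubeDL

ydl_opts = {}  # options go here; see the README for what is available
with YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```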
CONTRIBUTORS
pukkandan (owner)
shirt-dev (collaborator)
coletdjnz/colethedj (collaborator)
Ashish0804 (collaborator)
h-h-h-h
pauldubois98
Zocker1999NET
nao20010128nao
kurumigi
bbepis
animelover1984/horahoradev
Pccode66
RobinD42
hseg
hhirtz
louie-github
MinePlayersPE
olifre
rhsmachine/zenerdi0de
nihil-admirari
krichbanana
ohmybahgosh
nyuszika7h
blackjack4494
pyx
TpmKranz
mzbaulhaque
zackmark29
mbway
zerodytrash
wesnm
pento
rigstot
dirkf
funniray
Jessecar96
jhwgh1968
kikuyan
max-te
nchilada
pgaig
PSlava
stdedos
u-spec-png
Sipherdrakon
kidonng
smege1001
tandy1000
IONECarter
capntrips
mrfade
ParadoxGBB
wlritchi
NeroBurner
mahanstreamer
alerikaisattera
Derkades
BunnyHelp
i6t
std-move
Chocobozzz
ouwou
korli
octotherp
CeruleanSky
zootedb0t
chao813
ChillingPepper
ConquerorDopy
dalanmiller
DigitalDJ
f4pp3rk1ng
gesa
Jules-A
makeworld-the-better-one
MKSherbini
mrx23dot
poschi3
raphaeldore
renalid
sleaux-meaux
sulyi
tmarki
Vangelis66
AjaxGb
ajj8
jakubadamw
jfogelman
timethrow
sarnoud
Bojidarist
18928172992817182/gustaf
nixklai
smplayer-dev
Zirro
CrypticSignal
flashdagger
fractalf
frafra
kaz-us
ozburo
rhendric
sdomi
selfisekai
stanoarn
Changelog.md
* Update Changelog.md and CONTRIBUTORS
* Change "Merged with ytdl" version in Readme.md if needed
* Add new/fixed extractors in "new features" section of Readme.md
* Commit as `Release <version>`
* Push to origin/release using `git push origin master:release`
    build task will now run
* Update version.py using `devscripts\update-version.py`
* Run `make issuetemplates`
* Commit to master as `[version] update :ci skip all`
* Push to origin/master
* Update changelog in /releases
-->
### 2021.11.10
* [youtube] **Fix throttling by decrypting n-sig**
* Merging extractors from [haruhi-dl](https://git.sakamoto.pl/laudom/haruhi-dl) by [selfisekai](https://github.com/selfisekai)
* [extractor] Add `_search_nextjs_data`
* [tvp] Fix extractors
* [tvp] Add TVPStreamIE
* [wppilot] Add extractors
* [polskieradio] Add extractors
* [radiokapital] Add extractors
* [polsatgo] Add extractor by [selfisekai](https://github.com/selfisekai), [sdomi](https://github.com/sdomi)
* Separate `--check-all-formats` from `--check-formats`
* Approximate filesize from bitrate
* Don't create console in `windows_enable_vt_mode`
* Fix bug in `--load-infojson` of playlists
* [minicurses] Add colors to `-F` and standardize color-printing code
* [outtmpl] Add type `link` for internet shortcut files
* [outtmpl] Add alternate forms for `q` and `j`
* [outtmpl] Do not traverse `None`
* [fragment] Fix progress display in fragmented downloads
* [downloader/ffmpeg] Fix vtt download with ffmpeg
* [ffmpeg] Detect presence of setts and libavformat version
* [ExtractAudio] Rescale --audio-quality correctly by [CrypticSignal](https://github.com/CrypticSignal), [pukkandan](https://github.com/pukkandan)
* [ExtractAudio] Use `libfdk_aac` if available by [CrypticSignal](https://github.com/CrypticSignal)
* [FormatSort] `eac3` is better than `ac3`
* [FormatSort] Fix some fields' defaults
* [generic] Detect more json_ld
* [generic] parse jwplayer with only the json URL
* [extractor] Add keyword automatically to SearchIE descriptions
* [extractor] Fix some errors being converted to `ExtractorError`
* [utils] Add `join_nonempty`
* [utils] Add `jwt_decode_hs256` by [Ashish0804](https://github.com/Ashish0804)
* [utils] Create `DownloadCancelled` exception
* [utils] Parse `vp09` as vp9
* [utils] Sanitize URL when determining protocol
* [test/download] Fallback test to `bv`
* [docs] Minor documentation improvements
* [cleanup] Improvements to error and debug messages
* [cleanup] Minor fixes and cleanup
* [3speak] Add extractors by [Ashish0804](https://github.com/Ashish0804)
* [AmazonStore] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [Gab] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [mediaset] Add playlist support by [nixxo](https://github.com/nixxo)
* [MLSSoccer] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [N1] Add support for nova.rs by [u-spec-png](https://github.com/u-spec-png)
* [PlanetMarathi] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [RaiplayRadio] Add extractors by [frafra](https://github.com/frafra)
* [roosterteeth] Add series extractor
* [sky] Add `SkyNewsStoryIE` by [ajj8](https://github.com/ajj8)
* [youtube] Fix sorting for some videos
* [youtube] Populate `thumbnail` with the best "known" thumbnail
* [youtube] Refactor itag processing
* [youtube] Remove unnecessary no-playlist warning
* [youtube:tab] Add Invidious list for playlists/channels by [rhendric](https://github.com/rhendric)
* [Bilibili:comments] Fix infinite loop by [u-spec-png](https://github.com/u-spec-png)
* [ceskatelevize] Fix extractor by [flashdagger](https://github.com/flashdagger)
* [Coub] Fix media format identification by [wlritchi](https://github.com/wlritchi)
* [crunchyroll] Add extractor-args `language` and `hardsub`
* [DiscoveryPlus] Allow language codes in URL
* [imdb] Fix thumbnail by [ozburo](https://github.com/ozburo)
* [instagram] Add IOS URL support by [u-spec-png](https://github.com/u-spec-png)
* [instagram] Improve login code by [u-spec-png](https://github.com/u-spec-png)
* [Instagram] Improve metadata extraction by [u-spec-png](https://github.com/u-spec-png)
* [iPrima] Fix extractor by [stanoarn](https://github.com/stanoarn)
* [itv] Add support for ITV News by [ajj8](https://github.com/ajj8)
* [la7] Fix extractor by [nixxo](https://github.com/nixxo)
* [linkedin] Don't login multiple times
* [mtv] Fix some videos by [Sipherdrakon](https://github.com/Sipherdrakon)
* [Newgrounds] Fix description by [u-spec-png](https://github.com/u-spec-png)
* [Nrk] Minor fixes by [fractalf](https://github.com/fractalf)
* [Olympics] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [piksel] Fix sorting
* [twitter] Do not sort by codec
* [viewlift] Add cookie-based login and series support by [Ashish0804](https://github.com/Ashish0804), [pukkandan](https://github.com/pukkandan)
* [vimeo] Detect source extension and misc cleanup by [flashdagger](https://github.com/flashdagger)
* [vimeo] Fix ondemand videos and direct URLs with hash
* [vk] Fix login and add subtitles by [kaz-us](https://github.com/kaz-us)
* [VLive] Add upload_date and thumbnail by [Ashish0804](https://github.com/Ashish0804)
* [VRT] Fix login by [pgaig](https://github.com/pgaig)
* [Vupload] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [wakanim] Add support for MPD manifests by [nyuszika7h](https://github.com/nyuszika7h)
* [wakanim] Detect geo-restriction by [nyuszika7h](https://github.com/nyuszika7h)
* [ZenYandex] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
### 2021.10.22
* [build] Improvements
* Build standalone MacOS packages by [smplayer-dev](https://github.com/smplayer-dev)
* Release windows exe built with `py2exe`
* Enable lazy-extractors in releases.
* Set env var `YTDLP_NO_LAZY_EXTRACTORS` to forcefully disable this (experimental)
* Clean up error reporting in update
* Refactor `pyinst.py`, misc cleanup and improve docs
* [docs] Migrate issues to use forms by [Ashish0804](https://github.com/Ashish0804)
* [downloader] **Fix slow progress hooks**
* This was causing HLS/DASH downloads to be extremely slow in some situations
* [downloader/ffmpeg] Improve simultaneous download and merge
* [EmbedMetadata] Allow overwriting all default metadata with `meta_default` key
* [ModifyChapters] Add ability for `--remove-chapters` to remove sections by timestamp
* [utils] Allow duration strings in `--match-filter`
* Add HDR information to formats
* Add negative option `--no-batch-file` by [Zirro](https://github.com/Zirro)
* Calculate more fields for merged formats
* Do not verify thumbnail URLs unless `--check-formats` is specified
* Don't create console for subprocesses on Windows
* Fix `--restrict-filename` when used with default template
* Fix `check_formats` output being written to stdout when `-qv`
* Fix bug in storyboards
* Fix conflict b/w id and ext in format selection
* Fix verbose head not showing custom configs
* Load archive only after printing verbose head
* Make `duration_string` and `resolution` available in --match-filter
* Re-implement deprecated option `--id`
* Reduce default `--socket-timeout`
* Write verbose header to logger
* [outtmpl] Fix bug in expanding environment variables
* [cookies] Local State should be opened as utf-8
* [extractor,utils] Detect more codecs/mimetypes
* [extractor] Detect `EXT-X-KEY` Apple FairPlay
* [utils] Use `importlib` to load plugins by [sulyi](https://github.com/sulyi)
* [http] Retry on socket timeout and show the last encountered error
* [fragment] Print error message when skipping fragment
* [aria2c] Fix `--skip-unavailable-fragment`
* [SponsorBlock] Obey `extractor-retries` and `sleep-requests`
* [Merger] Do not add `aac_adtstoasc` to non-hls audio
* [ModifyChapters] Do not mutate original chapters by [nihil-admirari](https://github.com/nihil-admirari)
* [devscripts/run_tests] Use markers to filter tests by [sulyi](https://github.com/sulyi)
* [7plus] Add cookie based authentication by [nyuszika7h](https://github.com/nyuszika7h)
* [AdobePass] Fix RCN MSO by [jfogelman](https://github.com/jfogelman)
* [CBC] Fix Gem livestream by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [CBC] Support CBC Gem member content by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [crunchyroll] Add season to flat-playlist
* [crunchyroll] Add support for `beta.crunchyroll` URLs and fix series URLs with language code
* [EUScreen] Add Extractor by [Ashish0804](https://github.com/Ashish0804)
* [Gronkh] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [hidive] Fix typo
* [Hotstar] Mention Dynamic Range in `format_id` by [Ashish0804](https://github.com/Ashish0804)
* [Hotstar] Raise appropriate error for DRM
* [instagram] Add login by [u-spec-png](https://github.com/u-spec-png)
* [instagram] Show appropriate error when login is needed
* [microsoftstream] Add extractor by [damianoamatruda](https://github.com/damianoamatruda), [nixklai](https://github.com/nixklai)
* [on24] Add extractor by [damianoamatruda](https://github.com/damianoamatruda)
* [patreon] Fix vimeo player regex by [zenerdi0de](https://github.com/zenerdi0de)
* [SkyNewsAU] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [tagesschau] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [tbs] Add tbs live streams by [llacb47](https://github.com/llacb47)
* [tiktok] Fix typo and update tests
* [trovo] Support channel clips and VODs by [Ashish0804](https://github.com/Ashish0804)
* [Viafree] Add support for Finland by [18928172992817182](https://github.com/18928172992817182)
* [vimeo] Fix embedded `player.vimeo`
* [vlive:channel] Fix extraction by [kikuyan](https://github.com/kikuyan), [pukkandan](https://github.com/pukkandan)
* [youtube] Add auto-translated subtitles
* [youtube] Expose different formats with same itag
* [youtube:comments] Fix for new layout by [coletdjnz](https://github.com/coletdjnz)
* [cleanup] Cleanup bilibili code by [pukkandan](https://github.com/pukkandan), [u-spec-png](https://github.com/u-spec-png)
* [cleanup] Remove broken youtube login code
* [cleanup] Standardize timestamp formatting code
* [cleanup] Generalize `getcomments` implementation for extractors
* [cleanup] Simplify search extractors code
* [cleanup] misc
### 2021.10.10
* [downloader/ffmpeg] Fix bug in initializing `FFmpegPostProcessor`
* [minicurses] Fix when printing to file
* [downloader] Fix throttledratelimit
* [francetv] Fix extractor by [fstirlitz](https://github.com/fstirlitz), [sarnoud](https://github.com/sarnoud)
* [NovaPlay] Add extractor by [Bojidarist](https://github.com/Bojidarist)
* [ffmpeg] Revert "Set max probesize" - No longer needed
* [docs] Remove incorrect dependency on VC++10
* [build] Allow to release without changelog
### 2021.10.09
* Improved progress reporting
* Separate `--console-title` and `--no-progress`
* Add option `--progress` to show progress-bar even in quiet mode
* Fix and refactor `minicurses` and use it for all progress reporting
* Standardize use of terminal sequences and enable color support for windows 10
* Add option `--progress-template` to customize progress-bar and console-title
* Add postprocessor hooks and progress reporting
* [postprocessor] Add plugin support with option `--use-postprocessor`
* [extractor] Extract storyboards from SMIL manifests by [fstirlitz](https://github.com/fstirlitz)
* [outtmpl] Alternate form of format type `l` for `\n` delimited list
* [outtmpl] Format type `U` for unicode normalization
* [outtmpl] Allow empty output template to skip a type of file
* Merge webm formats into mkv if thumbnails are to be embedded
* [adobepass] Add RCN as MSO by [jfogelman](https://github.com/jfogelman)
* [ciscowebex] Add extractor by [damianoamatruda](https://github.com/damianoamatruda)
* [Gettr] Add extractor by [i6t](https://github.com/i6t)
* [GoPro] Add extractor by [i6t](https://github.com/i6t)
* [N1] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [Theta] Add video extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [Veo] Add extractor by [i6t](https://github.com/i6t)
* [Vupload] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [bbc] Extract better quality videos by [ajj8](https://github.com/ajj8)
* [Bilibili] Add subtitle converter by [u-spec-png](https://github.com/u-spec-png)
* [CBC] Cleanup tests by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [Douyin] Rewrite extractor by [MinePlayersPE](https://github.com/MinePlayersPE)
* [Funimation] Fix for /v/ urls by [pukkandan](https://github.com/pukkandan), [Jules-A](https://github.com/Jules-A)
* [Funimation] Sort formats according to the relevant extractor-args
* [Hidive] Fix duplicate and incorrect formats
* [HotStarSeries] Fix cookies by [Ashish0804](https://github.com/Ashish0804)
* [LinkedInLearning] Add subtitles by [Ashish0804](https://github.com/Ashish0804)
* [Mediaite] Relax valid url by [coletdjnz](https://github.com/coletdjnz)
* [Newgrounds] Add age_limit and fix duration by [u-spec-png](https://github.com/u-spec-png)
* [Newgrounds] Fix view count on songs by [u-spec-png](https://github.com/u-spec-png)
* [parliamentlive.tv] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [PolskieRadio] Fix extractors by [jakubadamw](https://github.com/jakubadamw), [u-spec-png](https://github.com/u-spec-png)
* [reddit] Add embedded url by [u-spec-png](https://github.com/u-spec-png)
* [reddit] Fix 429 by generating a random `reddit_session` by [AjaxGb](https://github.com/AjaxGb)
* [Rumble] Add RumbleChannelIE by [Ashish0804](https://github.com/Ashish0804)
* [soundcloud:playlist] Detect last page correctly
* [SovietsCloset] Add duration from m3u8 by [ChillingPepper](https://github.com/ChillingPepper)
* [Streamable] Add codecs by [u-spec-png](https://github.com/u-spec-png)
* [vidme] Remove extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [youtube:tab] Fallback to API when webpage fails to download by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Fix non-fatal errors in fetching player
* Fix `--flat-playlist` when neither IE nor id is known
* Fix `-f mp4` behaving differently from youtube-dl
* Workaround for bug in `ssl.SSLContext.load_default_certs`
* [aes] Improve performance slightly by [sulyi](https://github.com/sulyi)
* [cookies] Fix keyring fallback by [mbway](https://github.com/mbway)
* [embedsubtitle] Fix error when duration is unknown
* [ffmpeg] Fix error when subtitle file is missing
* [ffmpeg] Set max probesize to workaround AAC HLS stream issues by [shirt](https://github.com/shirt-dev)
* [FixupM3u8] Remove redundant run if merged is needed
* [hls] Fix decryption issues by [shirt](https://github.com/shirt-dev), [pukkandan](https://github.com/pukkandan)
* [http] Respect user-provided chunk size over extractor's
* [utils] Let traverse_obj accept functions as keys
* [docs] Add note about our custom ffmpeg builds
* [docs] Write embedding and contributing documentation by [pukkandan](https://github.com/pukkandan), [timethrow](https://github.com/timethrow)
* [update] Check for new version even if not updateable
* [build] Add more files to the tarball
* [build] Allow building with py2exe (and misc fixes)
* [build] Use pycryptodomex by [shirt](https://github.com/shirt-dev), [pukkandan](https://github.com/pukkandan)
* [cleanup] Some minor refactoring, improve docs and misc cleanup
### 2021.09.25
* Add new option `--netrc-location`
* [outtmpl] Allow alternate fields using `,`
* [outtmpl] Add format type `B` to treat the value as bytes (eg: to limit the filename to a certain number of bytes)
* Separate the options `--ignore-errors` and `--no-abort-on-error`
* Basic framework for simultaneous download of multiple formats by [nao20010128nao](https://github.com/nao20010128nao)
* [17live] Add 17.live extractor by [nao20010128nao](https://github.com/nao20010128nao)
* [bilibili] Add BiliIntlIE and BiliIntlSeriesIE by [Ashish0804](https://github.com/Ashish0804)
* [CAM4] Add extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [Chingari] Add extractors by [Ashish0804](https://github.com/Ashish0804)
* [CGTN] Add extractor by [chao813](https://github.com/chao813)
* [damtomo] Add extractor by [nao20010128nao](https://github.com/nao20010128nao)
* [gotostage] Add extractor by [poschi3](https://github.com/poschi3)
* [Koo] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [Mediaite] Add Extractor by [Ashish0804](https://github.com/Ashish0804)
* [Mediaklikk] Add Extractor by [tmarki](https://github.com/tmarki), [mrx23dot](https://github.com/mrx23dot), [coletdjnz](https://github.com/coletdjnz)
* [MuseScore] Add Extractor by [Ashish0804](https://github.com/Ashish0804)
* [Newgrounds] Add NewgroundsUserIE and improve extractor by [u-spec-png](https://github.com/u-spec-png)
* [nzherald] Add NZHeraldIE by [coletdjnz](https://github.com/coletdjnz)
* [Olympics] Add replay extractor by [Ashish0804](https://github.com/Ashish0804)
* [Peertube] Add channel and playlist extractors by [u-spec-png](https://github.com/u-spec-png)
* [radlive] Add extractor by [nyuszika7h](https://github.com/nyuszika7h)
* [SovietsCloset] Add extractor by [ChillingPepper](https://github.com/ChillingPepper)
* [Streamanity] Add Extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [Theta] Add extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [Yandex] Add ZenYandexIE and ZenYandexChannelIE by [Ashish0804](https://github.com/Ashish0804)
* [9Now] handle episodes of series by [dalanmiller](https://github.com/dalanmiller)
* [AnimalPlanet] Fix extractor by [Sipherdrakon](https://github.com/Sipherdrakon)
* [Arte] Improve description extraction by [renalid](https://github.com/renalid)
* [atv.at] Use jwt for API by [NeroBurner](https://github.com/NeroBurner)
* [brightcove] Extract subtitles from manifests
* [CBC] Fix CBC Gem extractors by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [cbs] Report appropriate error for DRM
* [comedycentral] Support `collection-playlist` by [nixxo](https://github.com/nixxo)
* [DIYNetwork] Support new format by [Sipherdrakon](https://github.com/Sipherdrakon)
* [downloader/niconico] Pass custom headers by [nao20010128nao](https://github.com/nao20010128nao)
* [dw] Fix extractor
* [Fancode] Fix live streams by [zenerdi0de](https://github.com/zenerdi0de)
* [funimation] Fix for locations outside US by [Jules-A](https://github.com/Jules-A), [pukkandan](https://github.com/pukkandan)
* [globo] Fix GloboIE by [Ashish0804](https://github.com/Ashish0804)
* [HiDive] Fix extractor by [Ashish0804](https://github.com/Ashish0804)
* [Hotstar] Add referer for subs by [Ashish0804](https://github.com/Ashish0804)
* [itv] Fix extractor, add subtitles and thumbnails by [coletdjnz](https://github.com/coletdjnz), [sleaux-meaux](https://github.com/sleaux-meaux), [Vangelis66](https://github.com/Vangelis66)
* [lbry] Show error message from API response
* [Mxplayer] Use mobile API by [Ashish0804](https://github.com/Ashish0804)
* [NDR] Rewrite NDRIE by [Ashish0804](https://github.com/Ashish0804)
* [Nuvid] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [Oreilly] Handle new web url by [MKSherbini](https://github.com/MKSherbini)
* [pbs] Fix subtitle extraction by [coletdjnz](https://github.com/coletdjnz), [gesa](https://github.com/gesa), [raphaeldore](https://github.com/raphaeldore)
* [peertube] Update instances by [u-spec-png](https://github.com/u-spec-png)
* [plutotv] Fix extractor for URLs with `/en`
* [reddit] Workaround for 429 by redirecting to old.reddit.com
* [redtube] Fix exts
* [soundcloud] Make playlist extraction lazy
* [soundcloud] Retry playlist pages on `502` error and update `_CLIENT_ID`
* [southpark] Fix SouthParkDE by [coletdjnz](https://github.com/coletdjnz)
* [SovietsCloset] Fix playlists for games with only named categories by [ConquerorDopy](https://github.com/ConquerorDopy)
* [SpankBang] Fix uploader by [f4pp3rk1ng](https://github.com/f4pp3rk1ng), [coletdjnz](https://github.com/coletdjnz)
* [tiktok] Use API to fetch higher quality video by [MinePlayersPE](https://github.com/MinePlayersPE), [llacb47](https://github.com/llacb47)
* [TikTokUser] Fix extractor using mobile API by [MinePlayersPE](https://github.com/MinePlayersPE), [llacb47](https://github.com/llacb47)
* [videa] Fix some extraction errors by [nyuszika7h](https://github.com/nyuszika7h)
* [VrtNU] Handle login errors by [llacb47](https://github.com/llacb47)
* [vrv] Don't raise error when thumbnails are missing
* [youtube] Cleanup authentication code by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Fix `--mark-watched` with `--cookies-from-browser`
* [youtube] Improvements to JS player extraction and add extractor-args to skip it by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Retry on 'Unknown Error' by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Return full URL instead of just ID
* [youtube] Warn when trying to download clips
* [zdf] Improve format sorting
* [zype] Extract subtitles from the m3u8 manifest by [fstirlitz](https://github.com/fstirlitz)
* Allow `--force-write-archive` to work with `--flat-playlist`
* Download subtitles in order of `--sub-langs`
* Allow `0` in `--playlist-items`
* Handle more playlist errors with `-i`
* Fix `--no-get-comments`
* Fix `extra_info` being reused across runs
* Fix compat options `no-direct-merge` and `playlist-index`
* Dump files should obey `--trim-filename` by [sulyi](https://github.com/sulyi)
* [aes] Add `aes_gcm_decrypt_and_verify` by [sulyi](https://github.com/sulyi), [pukkandan](https://github.com/pukkandan)
* [aria2c] Fix IV for some AES-128 streams by [shirt](https://github.com/shirt-dev)
* [compat] Don't ignore `HOME` (if set) on windows
* [cookies] Make browser names case insensitive
* [cookies] Print warning for cookie decoding error only once
* [extractor] Fix root-relative URLs in MPD by [DigitalDJ](https://github.com/DigitalDJ)
* [ffmpeg] Add `aac_adtstoasc` when merging if needed
* [fragment,aria2c] Generalize and refactor some code
* [fragment] Avoid repeated request for AES key
* [fragment] Fix range header when using `-N` and media sequence by [shirt](https://github.com/shirt-dev)
* [hls,aes] Fallback to native implementation for AES-CBC and detect `Cryptodome` in addition to `Crypto`
* [hls] Byterange + AES128 is supported by native downloader
* [ModifyChapters] Improve sponsor chapter merge algorithm by [nihil-admirari](https://github.com/nihil-admirari)
* [ModifyChapters] Minor fixes
* [WebVTT] Adjust parser to accommodate PBS subtitles
* [utils] Improve `extract_timezone` by [dirkf](https://github.com/dirkf)
* [options] Fix `--no-config` and refactor reading of config files
* [options] Strip spaces and ignore empty entries in list-like switches
* [test/cookies] Improve logging
* [build] Automate more of the release process by [animelover1984](https://github.com/animelover1984), [pukkandan](https://github.com/pukkandan)
* [build] Fix sha256 by [nihil-admirari](https://github.com/nihil-admirari)
* [build] Bring back brew taps by [nao20010128nao](https://github.com/nao20010128nao)
* [build] Provide `--onedir` zip for windows by [pukkandan](https://github.com/pukkandan)
* [cleanup,docs] Add deprecation warning in docs for some counter-intuitive behaviour
* [cleanup] Fix line endings for `nebula.py` by [glenn-slayden](https://github.com/glenn-slayden)
* [cleanup] Improve `make clean-test` by [sulyi](https://github.com/sulyi)
* [cleanup] Misc
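
The two output-template additions above (alternate fields with `,` and the byte-limiting `B` type) can be tried without downloading anything. This is a minimal sketch and not part of the release notes; the info-dict fields and template are made up for illustration and assume a yt-dlp version that includes this release:

```python
# Illustrative only: exercise the new outtmpl features via prepare_filename,
# using a hand-made info dict instead of a real extraction.
from yt_dlp import YoutubeDL

info = {
    'id': 'abc123',
    'ext': 'mkv',
    'title': 'A fairly long title with ümlauts and spaces ' * 5,
    'uploader': 'some-channel',
    # note: no 'artist' key, so the alternate below falls through to 'uploader'
}

# %(artist,uploader)s -> first available field, tried left to right
# %(title).80B        -> title truncated to at most 80 *bytes* (not characters)
template = '%(artist,uploader)s - %(title).80B [%(id)s].%(ext)s'

with YoutubeDL({'outtmpl': template}) as ydl:
    print(ydl.prepare_filename(info))
```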
### 2021.09.02
* **Native SponsorBlock** implementation by [nihil-admirari](https://github.com/nihil-admirari), [pukkandan](https://github.com/pukkandan)
* `--sponsorblock-remove CATS` removes specified chapters from file
* `--sponsorblock-mark CATS` marks the specified sponsor sections as chapters
* `--sponsorblock-chapter-title TMPL` to specify sponsor chapter template
* `--sponsorblock-api URL` to use a different API
* No re-encoding is done unless `--force-keyframes-at-cuts` is used
* The fetched sponsor sections are written to the infojson
* Deprecates: `--sponskrub`, `--no-sponskrub`, `--sponskrub-cut`, `--no-sponskrub-cut`, `--sponskrub-force`, `--no-sponskrub-force`, `--sponskrub-location`, `--sponskrub-args`
* Split `--embed-chapters` from `--embed-metadata` (it still implies the former by default)
* Add option `--remove-chapters` to remove arbitrary chapters by [nihil-admirari](https://github.com/nihil-admirari), [pukkandan](https://github.com/pukkandan)
* Add option `--force-keyframes-at-cuts` for more accurate cuts when removing and splitting chapters by [nihil-admirari](https://github.com/nihil-admirari)
* Let `--match-filter` reject entries early (a usage sketch follows this list)
* Makes redundant: `--match-title`, `--reject-title`, `--min-views`, `--max-views`
* [lazy_extractor] Improvements (It now passes all tests)
* Bugfix for when plugin directory doesn't exist by [kidonng](https://github.com/kidonng)
* Create instance only after pre-checking archive
* Import actual class if an attribute is accessed
* Fix `suitable` and add flake8 test
* [downloader/ffmpeg] Experimental support for DASH manifests (including live)
* Your ffmpeg must have [this patch](https://github.com/FFmpeg/FFmpeg/commit/3249c757aed678780e22e99a1a49f4672851bca9) applied for YouTube DASH to work
* [downloader/ffmpeg] Allow passing custom arguments before `-i`
* [BannedVideo] Add extractor by [smege1001](https://github.com/smege1001), [blackjack4494](https://github.com/blackjack4494), [pukkandan](https://github.com/pukkandan)
* [bilibili] Add category extractor by [animelover1984](https://github.com/animelover1984)
* [Epicon] Add extractors by [Ashish0804](https://github.com/Ashish0804)
* [filmmodu] Add extractor by [mzbaulhaque](https://github.com/mzbaulhaque)
* [GabTV] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [Hungama] Fix `HungamaSongIE` and add `HungamaAlbumPlaylistIE` by [Ashish0804](https://github.com/Ashish0804)
* [ManotoTV] Add new extractors by [tandy1000](https://github.com/tandy1000)
* [Niconico] Add Search extractors by [animelover1984](https://github.com/animelover1984), [pukkandan](https://github.com/pukkandan)
* [Patreon] Add `PatreonUserIE` by [zenerdi0de](https://github.com/zenerdi0de)
* [peloton] Add extractor by [IONECarter](https://github.com/IONECarter), [capntrips](https://github.com/capntrips), [pukkandan](https://github.com/pukkandan)
* [ProjectVeritas] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [radiko] Add extractors by [nao20010128nao](https://github.com/nao20010128nao)
* [StarTV] Add extractor for `startv.com.tr` by [mrfade](https://github.com/mrfade), [coletdjnz](https://github.com/coletdjnz)
* [tiktok] Add `TikTokUserIE` by [Ashish0804](https://github.com/Ashish0804), [pukkandan](https://github.com/pukkandan)
* [Tokentube] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [TV2Hu] Fix `TV2HuIE` and add `TV2HuSeriesIE` by [Ashish0804](https://github.com/Ashish0804)
* [voicy] Add extractor by [nao20010128nao](https://github.com/nao20010128nao)
* [adobepass] Fix Verizon SAML login by [nyuszika7h](https://github.com/nyuszika7h), [ParadoxGBB](https://github.com/ParadoxGBB)
* [afreecatv] Fix adult VODs by [wlritchi](https://github.com/wlritchi)
* [afreecatv] Tolerate failure to parse date string by [wlritchi](https://github.com/wlritchi)
* [aljazeera] Fix extractor by [MinePlayersPE](https://github.com/MinePlayersPE)
* [ATV.at] Fix extractor for ATV.at by [NeroBurner](https://github.com/NeroBurner), [coletdjnz](https://github.com/coletdjnz)
* [bitchute] Fix test by [mahanstreamer](https://github.com/mahanstreamer)
* [camtube] Remove obsolete extractor by [alerikaisattera](https://github.com/alerikaisattera)
* [CDA] Add more formats by [u-spec-png](https://github.com/u-spec-png)
* [eroprofile] Fix page skipping in albums by [jhwgh1968](https://github.com/jhwgh1968)
* [facebook] Fix format sorting
* [facebook] Fix metadata extraction by [kikuyan](https://github.com/kikuyan)
* [facebook] Update onion URL by [Derkades](https://github.com/Derkades)
* [HearThisAtIE] Fix extractor by [Ashish0804](https://github.com/Ashish0804)
* [instagram] Add referrer to prevent throttling by [u-spec-png](https://github.com/u-spec-png), [kikuyan](https://github.com/kikuyan)
* [iwara.tv] Extract more metadata by [BunnyHelp](https://github.com/BunnyHelp)
* [iwara] Add thumbnail by [i6t](https://github.com/i6t)
* [kakao] Fix extractor
* [mediaset] Fix extraction for some videos by [nyuszika7h](https://github.com/nyuszika7h)
* [Motherless] Fix extractor by [coletdjnz](https://github.com/coletdjnz)
* [Nova] fix extractor by [std-move](https://github.com/std-move)
* [ParamountPlus] Fix geo verification by [shirt](https://github.com/shirt-dev)
* [peertube] handle new video URL format by [Chocobozzz](https://github.com/Chocobozzz)
* [pornhub] Separate and fix playlist extractor by [mzbaulhaque](https://github.com/mzbaulhaque)
* [reddit] Fix for quarantined subreddits by [ouwou](https://github.com/ouwou)
* [ShemarooMe] Fix extractor by [Ashish0804](https://github.com/Ashish0804)
* [soundcloud] Refetch `client_id` on 403
* [tiktok] Fix metadata extraction
* [TV2] Fix extractor by [Ashish0804](https://github.com/Ashish0804)
* [tv5mondeplus] Fix extractor by [korli](https://github.com/korli)
* [VH1,TVLand] Fix extractors by [Sipherdrakon](https://github.com/Sipherdrakon)
* [Viafree] Fix extractor and extract subtitles by [coletdjnz](https://github.com/coletdjnz)
* [XHamster] Extract `uploader_id` by [octotherp](https://github.com/octotherp)
* [youtube] Add `shorts` to `_VALID_URL`
* [youtube] Add av01 itags to known formats list by [blackjack4494](https://github.com/blackjack4494)
* [youtube] Extract error messages from HTTPError response by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Fix subtitle names
* [youtube] Prefer audio stream that YouTube considers default
* [youtube] Remove annotations and deprecate `--write-annotations` by [coletdjnz](https://github.com/coletdjnz)
* [Zee5] Fix extractor and add subtitles by [Ashish0804](https://github.com/Ashish0804)
* [aria2c] Obey `--rate-limit`
* [EmbedSubtitle] Continue even if some files are missing
* [extractor] Better error message for DRM
* [extractor] Common function `_match_valid_url`
* [extractor] Show video id in error messages if possible
* [FormatSort] Remove priority of `lang`
* [options] Add `_set_from_options_callback`
* [SubtitleConvertor] Fix bug during subtitle conversion
* [utils] Add `parse_qs`
* [webvtt] Fix timestamp overflow adjustment by [fstirlitz](https://github.com/fstirlitz)
* Bugfix for `--replace-in-metadata`
* Don't try to merge with final extension
* Fix `--force-overwrites` when using `-k`
* Fix `--no-prefer-free-formats` by [CeruleanSky](https://github.com/CeruleanSky)
* Fix `-F` for extractors that directly return url
* Fix `-J` when there are failed videos
* Fix `extra_info` being reused across runs
* Fix `playlist_index` not obeying `playlist_start` and add tests
* Fix resuming of single formats when using `--no-part`
* Revert erroneous use of the `Content-Length` header by [fstirlitz](https://github.com/fstirlitz)
* Use `os.replace` where applicable by [paulwrubel](https://github.com/paulwrubel)
* [build] Add homebrew taps `yt-dlp/taps/yt-dlp` by [nao20010128nao](https://github.com/nao20010128nao)
* [build] Fix bug in making `yt-dlp.tar.gz`
* [docs] Fix some typos by [pukkandan](https://github.com/pukkandan), [zootedb0t](https://github.com/zootedb0t)
* [cleanup] Replace improper use of tab in trovo by [glenn-slayden](https://github.com/glenn-slayden)
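
The early rejection by `--match-filter` noted above maps onto the embedding API roughly as follows. This is a sketch only, assuming the `match_filter_func` helper in `yt_dlp.utils` (inherited from youtube-dl) and an illustrative filter string; the playlist URL is a placeholder:

```python
# Rough embedding equivalent of `--match-filter "duration < 600 & !is_live"`:
# entries failing the filter are rejected (now early, during playlist extraction).
from yt_dlp import YoutubeDL
from yt_dlp.utils import match_filter_func

ydl_opts = {
    'match_filter': match_filter_func('duration < 600 & !is_live'),
    'ignoreerrors': True,  # continue with the rest of the playlist on per-entry errors
}

with YoutubeDL(ydl_opts) as ydl:
    pass  # ydl.download(['<playlist URL>']) would apply the filter to each entry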
### 2021.08.10
* Add option `--replace-in-metadata`
* Add option `--no-simulate` to not simulate even when `--print` or `--list...` are used - Deprecates `--print-json`
* Allow entire infodict to be printed using `%()s` - makes `--dump-json` redundant
* Allow multiple `--exec` and `--exec-before-download`
* Add regex to `--match-filter`
* Add all format filtering operators also to `--match-filter` by [max-te](https://github.com/max-te)
* Add compat-option `no-keep-subs`
* [adobepass] Add MSO Cablevision by [Jessecar96](https://github.com/Jessecar96)
* [BandCamp] Add BandcampMusicIE by [Ashish0804](https://github.com/Ashish0804)
* [blackboardcollaborate] Add new extractor by [mzbaulhaque](https://github.com/mzbaulhaque)
* [eroprofile] Add album downloader by [jhwgh1968](https://github.com/jhwgh1968)
* [mirrativ] Add extractors by [nao20010128nao](https://github.com/nao20010128nao)
* [openrec] Add extractors by [nao20010128nao](https://github.com/nao20010128nao)
* [nbcolympics:stream] Fix extractor by [nchilada](https://github.com/nchilada), [pukkandan](https://github.com/pukkandan)
* [nbcolympics] Update extractor for 2020 olympics by [wesnm](https://github.com/wesnm)
* [paramountplus] Separate extractor and fix some titles by [shirt](https://github.com/shirt-dev), [pukkandan](https://github.com/pukkandan)
* [RCTIPlus] Support events and TV by [MinePlayersPE](https://github.com/MinePlayersPE)
* [Newgrounds] Improve extractor and fix playlist by [u-spec-png](https://github.com/u-spec-png)
* [aenetworks] Update `_THEPLATFORM_KEY` and `_THEPLATFORM_SECRET` by [wesnm](https://github.com/wesnm)
* [crunchyroll] Fix thumbnail by [funniray](https://github.com/funniray)
* [HotStar] Use API for metadata and extract subtitles by [Ashish0804](https://github.com/Ashish0804)
* [instagram] Fix comments extraction by [u-spec-png](https://github.com/u-spec-png)
* [peertube] Fix videos without description by [u-spec-png](https://github.com/u-spec-png)
* [twitch:clips] Extract `display_id` by [dirkf](https://github.com/dirkf)
* [viki] Print error message from API request
* [Vine] Remove invalid formats by [u-spec-png](https://github.com/u-spec-png)
* [VrtNU] Fix XSRF token by [pgaig](https://github.com/pgaig)
* [vrv] Fix thumbnail extraction by [funniray](https://github.com/funniray)
* [youtube] Add extractor-arg `include-live-dash` to show live dash formats
* [youtube] Improve signature function detection by [PSlava](https://github.com/PSlava)
* [youtube] Raise appropriate error when API pages can't be downloaded
* Ensure `_write_ytdl_file` closes file handle on error
* Fix `--compat-options filename` by [stdedos](https://github.com/stdedos)
* Fix issues with infodict sanitization
* Fix resuming when using `--no-part`
* Fix wrong extension for intermediate files
* Handle `BrokenPipeError` by [kikuyan](https://github.com/kikuyan)
* Show libraries present in the verbose output header
* [extractor] Detect `sttp` as subtitles in MPD by [fstirlitz](https://github.com/fstirlitz)
* [extractor] Reset non-repeating warnings per video
* [ffmpeg] Fix streaming `mp4` to `stdout`
* [ffmpeg] Allow `--ffmpeg-location` to be a file with different name
* [utils] Fix `InAdvancePagedList.__getitem__`
* [utils] Fix `traverse_obj` depth when `is_user_input`
* [webvtt] Merge daisy-chained duplicate cues by [fstirlitz](https://github.com/fstirlitz)
* [build] Use custom build of `pyinstaller` by [shirt](https://github.com/shirt-dev)
* [tests:download] Add batch testing for extractors (`test_YourExtractor_all`)
* [docs] Document which fields `--add-metadata` adds to the file
* [docs] Fix some mistakes and improve doc
* [cleanup] Misc code cleanup
### 2021.08.02
* Add logo, banner and donate links
* [outtmpl] Expand and escape environment variables
* [outtmpl] Add format types `j` (json), `l` (comma delimited list), `q` (quoted for terminal)
* [downloader] Allow streaming some unmerged formats to stdout using ffmpeg
* [youtube] **Age-gate bypass**
* Add `agegate` clients by [pukkandan](https://github.com/pukkandan), [MinePlayersPE](https://github.com/MinePlayersPE)
* Add `thirdParty` to agegate clients to bypass more videos
* Simplify client definitions, expose `embedded` clients
* Improve age-gate detection by [coletdjnz](https://github.com/coletdjnz)
* Fix default global API key by [coletdjnz](https://github.com/coletdjnz)
* Add `creator` clients for age-gate bypass using unverified accounts by [zerodytrash](https://github.com/zerodytrash), [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
* [adobepass] Add MSO Sling TV by [wesnm](https://github.com/wesnm)
* [CBS] Add ParamountPlusSeriesIE by [Ashish0804](https://github.com/Ashish0804)
* [dplay] Add `ScienceChannelIE` by [Sipherdrakon](https://github.com/Sipherdrakon)
* [UtreonIE] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [youtube] Add `mweb` client by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Add `player_client=all`
* [youtube] Force `hl=en` for comments by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Fix format sorting when using alternate clients
* [youtube] Misc cleanup by [pukkandan](https://github.com/pukkandan), [coletdjnz](https://github.com/coletdjnz)
* [youtube] Extract SAPISID only once
* [CBS] Add fallback by [llacb47](https://github.com/llacb47), [pukkandan](https://github.com/pukkandan)
* [Hotstar] Support cookies by [Ashish0804](https://github.com/Ashish0804)
* [HotStarSeriesIE] Fix regex by [Ashish0804](https://github.com/Ashish0804)
* [bilibili] Improve `_VALID_URL`
* [mediaset] Fix extraction by [nixxo](https://github.com/nixxo)
* [Mxplayer] Add h265 formats by [Ashish0804](https://github.com/Ashish0804)
* [RCTIPlus] Remove PhantomJS dependency by [MinePlayersPE](https://github.com/MinePlayersPE)
* [tenplay] Add MA15+ age limit by [pento](https://github.com/pento)
* [vidio] Fix login error detection by [MinePlayersPE](https://github.com/MinePlayersPE)
* [vimeo] Better extraction of original file by [Ashish0804](https://github.com/Ashish0804)
* [generic] Support KVS player (replaces ThisVidIE) by [rigstot](https://github.com/rigstot)
* Add compat-option `no-clean-infojson`
* Remove `asr` appearing twice in `-F`
* Set `home:` as the default key for `-P`
* [utils] Fix slicing of reversed `LazyList`
* [FormatSort] Fix bug for audio with unknown codec
* [test:download] Support testing with `ignore_no_formats_error`
* [cleanup] Refactor some code
### 2021.07.24
* [youtube:tab] Extract video duration early
* [downloader] Pass `info_dict` to `progress_hook`s
* [youtube] Fix age-gated videos for API clients when cookies are supplied by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Disable `get_video_info` age-gate workaround - This endpoint seems to be completely dead
* [youtube] Try all clients even if age-gated
* [youtube] Fix subtitles only being extracted from the first client
* [youtube] Simplify `_get_text`
* [cookies] bugfix for microsoft edge on macOS
* [cookies] Handle `sqlite` `ImportError` gracefully by [mbway](https://github.com/mbway)
* [cookies] Handle errors when importing `keyring`
### 2021.07.21
* **Add option `--cookies-from-browser`** to load cookies from a browser by [mbway](https://github.com/mbway)
* Usage: `--cookies-from-browser BROWSER[:PROFILE_NAME_OR_PATH]`
* Also added `--no-cookies-from-browser`
* To decrypt chromium cookies, `keyring` is needed for UNIX and `pycryptodome` for Windows
* Add option `--exec-before-download`
* Add field `live_status`
* [FFmpegMetadata] Add language of each stream and some refactoring
* [douyin] Add extractor by [pukkandan](https://github.com/pukkandan), [pyx](https://github.com/pyx)
* [pornflip] Add extractor by [mzbaulhaque](https://github.com/mzbaulhaque)
* **[youtube] Extract data from multiple clients** by [pukkandan](https://github.com/pukkandan), [coletdjnz](https://github.com/coletdjnz)
* `player_client` now accepts multiple clients
* Default `player_client` = `android,web`
* This uses twice as many requests, but avoids throttling for most videos while also not losing any formats
* Music clients can be specifically requested and are enabled by default for `music.youtube.com` URLs
* Added `player_client=ios` (Known issue: formats from ios are not sorted correctly)
* Add age-gate bypass for android and ios clients
* [youtube] Extract more thumbnails
* The thumbnail URLs are hard-coded and their actual existence is tested lazily
* Added option `--no-check-formats` to not test them
* [youtube] Misc fixes
* Improve extraction of livestream metadata by [pukkandan](https://github.com/pukkandan), [krichbanana](https://github.com/krichbanana)
* Hide live dash formats since they can't be downloaded anyway
* Fix authentication when using multiple accounts by [coletdjnz](https://github.com/coletdjnz)
* Fix controversial videos when requested via API by [coletdjnz](https://github.com/coletdjnz)
* Fix session index extraction and headers for non-web player clients by [coletdjnz](https://github.com/coletdjnz)
* Make `--extractor-retries` work for more errors
* Fix sorting of 3gp format
* Sanity check `chapters` (and refactor related code)
* Make `parse_time_text` and `_extract_chapters` non-fatal
* Misc cleanup and bug fixes by [coletdjnz](https://github.com/coletdjnz)
* [youtube:tab] Fix channels tab
* [youtube:tab] Extract playlist availability by [coletdjnz](https://github.com/coletdjnz)
* **[youtube:comments] Move comment extraction to new API** by [coletdjnz](https://github.com/coletdjnz)
* Adds extractor-args `comment_sort` (`top`/`new`), `max_comments`, `max_comment_depth`
* [youtube:comments] Fix `is_favorited`, improve `like_count` parsing by [coletdjnz](https://github.com/coletdjnz)
* [BravoTV] Improve metadata extraction by [kevinoconnor7](https://github.com/kevinoconnor7)
* [crunchyroll:playlist] Force http
* [yahoo:gyao:player] Relax `_VALID_URL` by [nao20010128nao](https://github.com/nao20010128nao)
* [nebula] Authentication via tokens from cookie jar by [hheimbuerger](https://github.com/hheimbuerger), [TpmKranz](https://github.com/TpmKranz)
* [RTP] Fix extraction and add subtitles by [fstirlitz](https://github.com/fstirlitz)
* [viki] Rewrite extractors and add extractor-arg `video_types` to `vikichannel` by [zackmark29](https://github.com/zackmark29), [pukkandan](https://github.com/pukkandan)
* [vlive] Extract thumbnail directly in addition to the one from Naver
* [generic] Extract previously missed subtitles by [fstirlitz](https://github.com/fstirlitz)
* [generic] Extract everything in the SMIL manifest and detect discarded subtitles by [fstirlitz](https://github.com/fstirlitz)
* [embedthumbnail] Fix `_get_thumbnail_resolution`
* [metadatafromfield] Do not detect numbers as field names
* Fix selectors `all`, `mergeall` and add tests
* Errors in playlist extraction should obey `--ignore-errors`
* Fix bug where `original_url` was not propagated when `_type`=`url`
* Revert "Merge webm formats into mkv if thumbnails are to be embedded (#173)"
* This was wrongly checking for `write_thumbnail`
* Improve `extractor_args` parsing
* Rename `NOTE` in `-F` to `MORE INFO` since it is often confused with `format_note`
* Add `only_once` param for `write_debug` and `report_warning`
* [extractor] Allow extracting multiple groups in `_search_regex` by [fstirlitz](https://github.com/fstirlitz)
* [utils] Improve `traverse_obj`
* [utils] Add `variadic`
* [utils] Improve `js_to_json` comment regex by [fstirlitz](https://github.com/fstirlitz)
* [webvtt] Fix timestamps
* [compat] Remove unnecessary code
* [docs] fix default of multistreams
### 2021.07.07
* Merge youtube-dl: Upto [commit/a803582](https://github.com/ytdl-org/youtube-dl/commit/a8035827177d6b59aca03bd717acb6a9bdd75ada)
* Add `--extractor-args` to pass some extractor-specific arguments. See [readme](https://github.com/yt-dlp/yt-dlp#extractor-arguments) (a usage sketch follows this list)
* Add extractor option `skip` for `youtube`. Eg: `--extractor-args youtube:skip=hls,dash`
* Deprecates `--youtube-skip-dash-manifest`, `--youtube-skip-hls-manifest`, `--youtube-include-dash-manifest`, `--youtube-include-hls-manifest`
* Allow `--list...` options to work with `--print`, `--quiet` and other `--list...` options
* [youtube] Use `player` API for additional video extraction requests by [coletdjnz](https://github.com/coletdjnz)
* **Fixes youtube premium music** (format 141) extraction
* Adds extractor option `player_client` = `web`/`android`
* **`--extractor-args youtube:player_client=android` works around the throttling** for the time-being
* Adds extractor option `player_skip=config`
* Adds age-gate fallback using embedded client
* [youtube] Choose correct Live chat API for upcoming streams by [krichbanana](https://github.com/krichbanana)
* [youtube] Fix subtitle names for age-gated videos
* [youtube:comments] Fix error handling and add `itct` to params by [coletdjnz](https://github.com/coletdjnz)
* [youtube_live_chat] Fix download with cookies by [siikamiika](https://github.com/siikamiika)
* [youtube_live_chat] use `clickTrackingParams` by [siikamiika](https://github.com/siikamiika)
* [Funimation] Rewrite extractor
* Add `FunimationShowIE` by [Mevious](https://github.com/Mevious)
* **Treat the different versions of an episode as different formats of a single video**
* This changes the video `id` and will break existing archives
* Compat option `seperate-video-versions` to fall back to old behavior including using the old video ids
* Support direct `/player/` URL
* Extractor options `language` and `version` to pre-select them during extraction
* These options may be removed in the future if we can extract all formats without additional network requests
* Do not rely on these for format selection and use `-f` filters instead
* [AdobePass] Add Spectrum MSO by [kevinoconnor7](https://github.com/kevinoconnor7), [ohmybahgosh](https://github.com/ohmybahgosh)
* [facebook] Extract description and fix title
* [fancode] Fix extraction, support live and allow login with refresh token by [zenerdi0de](https://github.com/zenerdi0de)
* [plutotv] Improve `_VALID_URL`
* [RCTIPlus] Add extractor by [MinePlayersPE](https://github.com/MinePlayersPE)
* [Soundcloud] Allow login using oauth token by [blackjack4494](https://github.com/blackjack4494)
* [TBS] Support livestreams by [llacb47](https://github.com/llacb47)
* [videa] Fix extraction by [nyuszika7h](https://github.com/nyuszika7h)
* [yahoo] Fix extraction by [llacb47](https://github.com/llacb47), [pukkandan](https://github.com/pukkandan)
* Process videos when using `--ignore-no-formats-error` by [krichbanana](https://github.com/krichbanana)
* Fix `--throttled-rate` when using `--load-info-json`
* Fix `--flat-playlist` when entry has no `ie_key`
* Fix `check_formats` catching `ExtractorError` instead of `DownloadError`
* Fix deprecated option `--list-formats-old`
* [downloader/ffmpeg] Fix `--ppa` when using simultaneous download
* [extractor] Prevent unnecessary download of hls manifests and refactor `hls_split_discontinuity`
* [fragment] Handle status of download and errors in threads correctly; and minor refactoring
* [thumbnailsconvertor] Treat `jpeg` as `jpg`
* [utils] Fix issues with `LazyList` reversal
* [extractor] Allow extractors to set their own login hint
* [cleanup] Simplify format selector code with `LazyList` and `yield from`
* [cleanup] Clean `extractor.common._merge_subtitles` signature
* [cleanup] Fix some typos
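
For reference, the new `--extractor-args` option maps onto the embedding API as a nested dict of lists. A minimal sketch, assuming the `extractor_args` parameter mirrors the CLI syntax; the URL in the comment is a placeholder:

```python
# Rough embedding equivalent of
#   --extractor-args "youtube:player_client=android;skip=hls,dash"
from yt_dlp import YoutubeDL

ydl_opts = {
    'extractor_args': {
        'youtube': {
            'player_client': ['android'],  # work around web-client throttling
            'skip': ['hls', 'dash'],       # skip extracting these manifests
        },
    },
}

with YoutubeDL(ydl_opts) as ydl:
    pass  # ydl.extract_info('<video URL>', download=False) would use these settings
```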
### 2021.06.23
* Merge youtube-dl: Upto [commit/379f52a](https://github.com/ytdl-org/youtube-dl/commit/379f52a4954013767219d25099cce9e0f9401961)
* **Add option `--throttled-rate`** below which video data is re-extracted
* [fragment] **Merge during download for `-N`**, and refactor `hls`/`dash`
* [websockets] Add `WebSocketFragmentFD` by [nao20010128nao](https://github.com/nao20010128nao), [pukkandan](https://github.com/pukkandan)
* Allow `images` formats in addition to video/audio
* [downloader/mhtml] Add new downloader for slideshows/storyboards by [fstirlitz](https://github.com/fstirlitz)
* [youtube] Temporary **fix for age-gate**
* [youtube] Support ongoing live chat by [siikamiika](https://github.com/siikamiika)
* [youtube] Improve SAPISID cookie handling by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Login is not needed for `:ytrec`
* [youtube] Non-fatal alert reporting for unavailable videos page by [coletdjnz](https://github.com/coletdjnz)
* [twitcasting] Websocket support by [nao20010128nao](https://github.com/nao20010128nao)
* [mediasite] Extract slides by [fstirlitz](https://github.com/fstirlitz)
* [funimation] Extract subtitles
@@ -55,7 +733,7 @@
### 2021.06.09
* Fix bug where `%(field)d` in filename template throws error
* [outtmpl] Improve offset parsing
* [test] More rigorous tests for `prepare_filename`
### 2021.06.08
@@ -89,7 +767,7 @@
* Merge youtube-dl: Upto [commit/d495292](https://github.com/ytdl-org/youtube-dl/commit/d495292852b6c2f1bd58bc2141ff2b0265c952cf)
* Pre-check archive and filters during playlist extraction
* Handle Basic Auth `user:pass` in URLs by [hhirtz](https://github.com/hhirtz) and [pukkandan](https://github.com/pukkandan)
* [archiveorg] Add YoutubeWebArchiveIE by [coletdjnz](https://github.com/coletdjnz) and [alex-gedeon](https://github.com/alex-gedeon)
* [fancode] Add extractor by [rhsmachine](https://github.com/rhsmachine)
* [patreon] Support vimeo embeds by [rhsmachine](https://github.com/rhsmachine)
* [Saitosan] Add new extractor by [llacb47](https://github.com/llacb47)
@@ -132,7 +810,7 @@
* **Youtube improvements**:
* Support youtube music `MP`, `VL` and `browse` pages
* Extract more formats for youtube music by [craftingmod](https://github.com/craftingmod), [coletdjnz](https://github.com/coletdjnz) and [pukkandan](https://github.com/pukkandan)
* Extract multiple subtitles in same language by [pukkandan](https://github.com/pukkandan) and [tpikonen](https://github.com/tpikonen)
* Redirect channels that don't have a `videos` tab to their `UU` playlists
* Support in-channel search
@@ -141,10 +819,10 @@
* Extract audio language
* Add subtitle language names by [nixxo](https://github.com/nixxo) and [tpikonen](https://github.com/tpikonen)
* Show alerts only from the final webpage
* Add `html5=1` param to `get_video_info` page requests by [coletdjnz](https://github.com/coletdjnz)
* Better message when login required
* **Add option `--print`**: to print any field/template
* Makes redundant: `--get-description`, `--get-duration`, `--get-filename`, `--get-format`, `--get-id`, `--get-thumbnail`, `--get-title`, `--get-url`
* Field `additional_urls` to download additional videos from metadata using [`--parse-metadata`](https://github.com/yt-dlp/yt-dlp#modifying-metadata)
* Merge youtube-dl: Upto [commit/dfbbe29](https://github.com/ytdl-org/youtube-dl/commit/dfbbe2902fc67f0f93ee47a8077c148055c67a9b)
* Write thumbnail of playlist and add `pl_thumbnail` outtmpl key
@@ -238,11 +916,11 @@
* [TubiTv] Add TubiTvShowIE by [Ashish0804](https://github.com/Ashish0804)
* [twitcasting] Fix extractor
* [viu:ott] Fix extractor and support series by [lkho](https://github.com/lkho) and [pukkandan](https://github.com/pukkandan)
* [youtube:tab] Show unavailable videos in playlists by [coletdjnz](https://github.com/coletdjnz)
* [youtube:tab] Reload with unavailable videos for all playlists
* [youtube] Ignore invalid stretch ratio
* [youtube] Improve channel syncid extraction to support ytcfg by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Standardize API calls for tabs, mixes and search by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Bugfix in `_extract_ytcfg`
* [mildom:user:vod] Download only necessary amount of pages
* [mildom] Remove proxy completely by [fstirlitz](https://github.com/fstirlitz)
@@ -254,8 +932,8 @@
* Improve the yt-dlp.sh script by [fstirlitz](https://github.com/fstirlitz)
* [lazy_extractor] Do not load plugins
* [ci] Disable fail-fast
* [docs] Clarify which deprecated options still work
* [docs] Fix typos
### 2021.04.11
@@ -272,17 +950,17 @@
* [nitter] Fix extraction of reply tweets and update instance list by [B0pol](https://github.com/B0pol)
* [nitter] Fix thumbnails by [B0pol](https://github.com/B0pol)
* [youtube] Fix thumbnail URL
* [youtube] Parse API parameters from initial webpage by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Extract comments' approximate timestamp by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Fix alert extraction
* [bilibili] Fix uploader
* [utils] Add `datetime_from_str` and `datetime_add_months` by [coletdjnz](https://github.com/coletdjnz)
* Run some `postprocessors` before actual download
* Improve argument parsing for `-P`, `-o`, `-S`
* Fix some `m3u8` not obeying `--allow-unplayable-formats`
* Fix default of `dynamic_mpd`
* Deprecate `--all-formats`, `--include-ads`, `--hls-prefer-native`, `--hls-prefer-ffmpeg`
* [docs] Improvements
### 2021.04.03
* Merge youtube-dl: Upto [commit/654b4f4](https://github.com/ytdl-org/youtube-dl/commit/654b4f4ff2718f38b3182c1188c5d569c14cc70a)
@@ -293,10 +971,10 @@
* [mildom] Update extractor with current proxy by [nao20010128nao](https://github.com/nao20010128nao)
* [ard:mediathek] Fix video id extraction
* [generic] Detect Invidious' link element
* [youtube] Show premium state in `availability` by [coletdjnz](https://github.com/coletdjnz)
* [viewsource] Add extractor to handle `view-source:`
* [sponskrub] Run before embedding thumbnail
* [docs] Improve `--parse-metadata` documentation
### 2021.03.24.1
@@ -328,8 +1006,8 @@
* Use headers and cookies when downloading subtitles by [damianoamatruda](https://github.com/damianoamatruda)
* Parse resolution in info dictionary by [damianoamatruda](https://github.com/damianoamatruda)
* More consistent warning messages by [damianoamatruda](https://github.com/damianoamatruda) and [pukkandan](https://github.com/pukkandan)
* [docs] Add deprecated options and aliases in readme
* [docs] Fix some minor mistakes
* [niconico] Partial fix adapted from [animelover1984/youtube-dl@b5eff52](https://github.com/animelover1984/youtube-dl/commit/b5eff52dd9ed5565672ea1694b38c9296db3fade) (login and smile formats still don't work)
* [niconico] Add user extractor by [animelover1984](https://github.com/animelover1984)
@@ -338,7 +1016,7 @@
* [stitcher] Merge from youtube-dl by [nixxo](https://github.com/nixxo)
* [rcs] Improved extraction by [nixxo](https://github.com/nixxo)
* [linuxacadamy] Improve regex
* [youtube] Show if video is `private`, `unlisted` etc in info (`availability`) by [coletdjnz](https://github.com/coletdjnz) and [pukkandan](https://github.com/pukkandan)
* [youtube] bugfix for channel playlist extraction
* [nbc] Improve metadata extraction by [2ShedsJackson](https://github.com/2ShedsJackson)
@@ -355,15 +1033,15 @@
* [wimtv] Add extractor by [nixxo](https://github.com/nixxo)
* [mtv] Add mtv.it and extract series metadata by [nixxo](https://github.com/nixxo)
* [pluto.tv] Add extractor by [kevinoconnor7](https://github.com/kevinoconnor7)
* [youtube] Rewrite comment extraction by [coletdjnz](https://github.com/coletdjnz)
* [embedthumbnail] Set mtime correctly
* Refactor some postprocessor/downloader code by [pukkandan](https://github.com/pukkandan) and [shirt](https://github.com/shirt-dev)
### 2021.03.07
* [youtube] Fix history, mixes, community pages and trending by [pukkandan](https://github.com/pukkandan) and [coletdjnz](https://github.com/coletdjnz)
* [youtube] Fix private feeds/playlists on multi-channel accounts by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Extract alerts from continuation by [coletdjnz](https://github.com/coletdjnz)
* [cbs] Add support for ParamountPlus by [shirt](https://github.com/shirt-dev)
* [mxplayer] Rewrite extractor with show support by [pukkandan](https://github.com/pukkandan) and [Ashish0804](https://github.com/Ashish0804)
* [gedi] Improvements from youtube-dl by [nixxo](https://github.com/nixxo)
@@ -375,7 +1053,7 @@
* [downloader] Fix bug for `ffmpeg`/`httpie`
* [update] Fix updater removing the executable bit on some UNIX distros
* [update] Fix current build hash for UNIX
* [docs] Include wget/curl/aria2c install instructions for Unix by [Ashish0804](https://github.com/Ashish0804)
* Fix some videos downloading with `m3u8` extension
* Remove "fixup is ignored" warning when fixup wasn't passed by user
@@ -384,7 +1062,7 @@
* [build] Fix bug
### 2021.03.03
* [youtube] Use new browse API for continuation page extraction by [coletdjnz](https://github.com/coletdjnz) and [pukkandan](https://github.com/pukkandan)
* Fix HLS playlist downloading by [shirt](https://github.com/shirt-dev)
* Merge youtube-dl: Upto [2021.03.03](https://github.com/ytdl-org/youtube-dl/releases/tag/2021.03.03)
* [mtv] Fix extractor
@@ -432,7 +1110,7 @@
* [ffmpeg] Allow passing custom arguments before -i using `--ppa "ffmpeg_i1:ARGS"` syntax
* Fix `--windows-filenames` removing `/` from UNIX paths
* [hls] Show warning if pycryptodome is not found
* [docs] Improvements
* Fix documentation of `Extractor Options`
* Document `all` in format selection
* Document `playable_in_embed` in output templates
@@ -460,7 +1138,7 @@
* Exclude `vcruntime140.dll` from UPX by [jbruchon](https://github.com/jbruchon)
* Set version number based on UTC time, not local time
* Publish on PyPi only if token is set
* [docs] Better document `--prefer-free-formats` and add `--no-prefer-free-format`
### 2021.02.15
@@ -503,7 +1181,7 @@
* [movefiles] Fix compatibility with python2
* [remuxvideo] Fix validation of conditional remux
* [sponskrub] Don't raise error when the video does not exist
* [docs] Crypto is an optional dependency
### 2021.02.04
@@ -564,10 +1242,10 @@
* Merge youtube-dl: Upto [2021.01.24](https://github.com/ytdl-org/youtube-dl/releases/tag/2021.01.16)
* Plugin support ([documentation](https://github.com/yt-dlp/yt-dlp#plugins))
* **Multiple paths**: New option `-P`/`--paths` to give different paths for different types of files
* The syntax is `-P "type:path" -P "type:path"`
* Valid types are: home, temp, description, annotation, subtitle, infojson, thumbnail
* Additionally, configuration file is taken from home directory or current directory
* Allow passing different arguments to different external downloaders
* [mildom] Add extractor by [nao20010128nao](https://github.com/nao20010128nao)
* Warn when using old style `--external-downloader-args` and `--post-processor-args`
* Fix `--no-overwrite` when using `--write-link`
@@ -602,9 +1280,9 @@
* [roosterteeth.com] Fix for bonus episodes by [Zocker1999NET](https://github.com/Zocker1999NET)
* [tiktok] Fix for when share_info is empty
* [EmbedThumbnail] Fix bug due to incorrect function name
* [docs] Changed sponskrub links to point to [yt-dlp/SponSkrub](https://github.com/yt-dlp/SponSkrub) since I am now providing both linux and windows releases
* [docs] Change all links to correctly point to new fork URL
* [docs] Fixes typos
### 2021.01.12
@@ -700,7 +1378,7 @@
* Redirect channel home to /video
* Print youtube's warning message
* Handle Multiple pages for feeds better
* [youtube] Fix ytsearch not returning results sometimes due to promoted content by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Temporary fix for automatic captions - disable json3 by [blackjack4494](https://github.com/blackjack4494)
* Add --break-on-existing by [gergesh](https://github.com/gergesh)
* Pre-check video IDs in the archive before downloading by [pukkandan](https://github.com/pukkandan)

Collaborators.md (new file)

@@ -0,0 +1,39 @@
# Collaborators
This is a list of the collaborators of the project and their major contributions. See the [Changelog](Changelog.md) for more details.
You can also find lists of all [contributors of yt-dlp](CONTRIBUTORS) and [authors of youtube-dl](https://github.com/ytdl-org/youtube-dl/blob/master/AUTHORS)
## [pukkandan](https://github.com/pukkandan)
[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/pukkandan)
* Owner of the fork
## [shirt](https://github.com/shirt-dev)
[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/shirt)
* Multithreading (`-N`) and aria2c support for fragment downloads
* Support for media initialization and discontinuity in HLS
* The self-updater (`-U`)
## [coletdjnz](https://github.com/coletdjnz)
[![gh-sponsor](https://img.shields.io/badge/_-Sponsor-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz)
* YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements
## [Ashish0804](https://github.com/Ashish0804)
[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/ashish0804)
* Added support for new websites Zee5, MXPlayer, DiscoveryPlusIndia, ShemarooMe, Utreon etc
* Added playlist/series downloads for TubiTv, SonyLIV, Voot, HotStar etc

Makefile

@@ -1,4 +1,4 @@
all: lazy-extractors yt-dlp doc pypi-files
clean: clean-test clean-dist clean-cache
completions: completion-bash completion-fish completion-zsh
doc: README.md CONTRIBUTING.md issuetemplates supportedsites
@@ -13,7 +13,9 @@ pypi-files: AUTHORS Changelog.md LICENSE README.md README.txt supportedsites com
.PHONY: all clean install test tar pypi-files completions ot offlinetest codetest supportedsites
clean-test:
	rm -rf *.3gp *.annotations.xml *.ape *.avi *.description *.dump *.flac *.flv *.frag *.frag.aria2 *.frag.urls \
		*.info.json *.jpeg *.jpg *.live_chat.json *.m4a *.m4v *.mkv *.mp3 *.mp4 *.ogg *.opus *.part* *.png *.sbv *.srt \
		*.swf *.swp *.ttml *.vtt *.wav *.webm *.webp *.ytdl test/testdata/player-*.js
clean-dist:
	rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS .mailmap
clean-cache:
@@ -38,9 +40,9 @@ SYSCONFDIR = $(shell if [ $(PREFIX) = /usr -o $(PREFIX) = /usr/local ]; then ech
# set markdown input format to "markdown-smart" for pandoc version 2 and to "markdown" for pandoc prior to version 2 # set markdown input format to "markdown-smart" for pandoc version 2 and to "markdown" for pandoc prior to version 2
MARKDOWN = $(shell if [ `pandoc -v | head -n1 | cut -d" " -f2 | head -c1` = "2" ]; then echo markdown-smart; else echo markdown; fi) MARKDOWN = $(shell if [ `pandoc -v | head -n1 | cut -d" " -f2 | head -c1` = "2" ]; then echo markdown-smart; else echo markdown; fi)
install: yt-dlp yt-dlp.1 completions install: lazy-extractors yt-dlp yt-dlp.1 completions
install -Dm755 yt-dlp $(DESTDIR)$(BINDIR) install -Dm755 yt-dlp $(DESTDIR)$(BINDIR)/yt-dlp
install -Dm644 yt-dlp.1 $(DESTDIR)$(MANDIR)/man1 install -Dm644 yt-dlp.1 $(DESTDIR)$(MANDIR)/man1/yt-dlp.1
install -Dm644 completions/bash/yt-dlp $(DESTDIR)$(SHAREDIR)/bash-completion/completions/yt-dlp install -Dm644 completions/bash/yt-dlp $(DESTDIR)$(SHAREDIR)/bash-completion/completions/yt-dlp
install -Dm644 completions/zsh/_yt-dlp $(DESTDIR)$(SHAREDIR)/zsh/site-functions/_yt-dlp install -Dm644 completions/zsh/_yt-dlp $(DESTDIR)$(SHAREDIR)/zsh/site-functions/_yt-dlp
install -Dm644 completions/fish/yt-dlp.fish $(DESTDIR)$(SHAREDIR)/fish/vendor_completions.d/yt-dlp.fish install -Dm644 completions/fish/yt-dlp.fish $(DESTDIR)$(SHAREDIR)/fish/vendor_completions.d/yt-dlp.fish
@@ -49,23 +51,11 @@ codetest:
flake8 . flake8 .
test: test:
#nosetests --with-coverage --cover-package=yt_dlp --cover-html --verbose --processes 4 test $(PYTHON) -m pytest
nosetests --verbose test
$(MAKE) codetest $(MAKE) codetest
# Keep this list in sync with devscripts/run_tests.sh
offlinetest: codetest offlinetest: codetest
$(PYTHON) -m nose --verbose test \ $(PYTHON) -m pytest -k "not download"
--exclude test_age_restriction.py \
--exclude test_download.py \
--exclude test_iqiyi_sdk_interpreter.py \
--exclude test_overwrites.py \
--exclude test_socks.py \
--exclude test_subtitles.py \
--exclude test_write_annotations.py \
--exclude test_youtube_lists.py \
--exclude test_youtube_signature.py \
--exclude test_post_hooks.py
yt-dlp: yt_dlp/*.py yt_dlp/*/*.py yt-dlp: yt_dlp/*.py yt_dlp/*/*.py
mkdir -p zip mkdir -p zip
@@ -88,12 +78,13 @@ README.md: yt_dlp/*.py yt_dlp/*/*.py
CONTRIBUTING.md: README.md CONTRIBUTING.md: README.md
$(PYTHON) devscripts/make_contributing.py README.md CONTRIBUTING.md $(PYTHON) devscripts/make_contributing.py README.md CONTRIBUTING.md
issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md yt_dlp/version.py issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.yml .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.yml .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml .github/ISSUE_TEMPLATE_tmpl/4_bug_report.yml .github/ISSUE_TEMPLATE_tmpl/5_feature_request.yml yt_dlp/version.py
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE/1_broken_site.md $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.yml .github/ISSUE_TEMPLATE/1_broken_site.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE/2_site_support_request.md $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.yml .github/ISSUE_TEMPLATE/2_site_support_request.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE/3_site_feature_request.md $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml .github/ISSUE_TEMPLATE/3_site_feature_request.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE/4_bug_report.md $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/4_bug_report.yml .github/ISSUE_TEMPLATE/4_bug_report.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md .github/ISSUE_TEMPLATE/5_feature_request.md $(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/5_feature_request.yml .github/ISSUE_TEMPLATE/5_feature_request.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/6_question.yml .github/ISSUE_TEMPLATE/6_question.yml
supportedsites: supportedsites:
$(PYTHON) devscripts/make_supportedsites.py supportedsites.md $(PYTHON) devscripts/make_supportedsites.py supportedsites.md
@@ -122,7 +113,7 @@ _EXTRACTOR_FILES = $(shell find yt_dlp/extractor -iname '*.py' -and -not -iname
yt_dlp/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES) yt_dlp/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES)
$(PYTHON) devscripts/make_lazy_extractors.py $@ $(PYTHON) devscripts/make_lazy_extractors.py $@
yt-dlp.tar.gz: README.md yt-dlp.1 completions Changelog.md AUTHORS yt-dlp.tar.gz: all
@tar -czf $(DESTDIR)/yt-dlp.tar.gz --transform "s|^|yt-dlp/|" --owner 0 --group 0 \ @tar -czf $(DESTDIR)/yt-dlp.tar.gz --transform "s|^|yt-dlp/|" --owner 0 --group 0 \
--exclude '*.DS_Store' \ --exclude '*.DS_Store' \
--exclude '*.kate-swp' \ --exclude '*.kate-swp' \
@@ -131,12 +122,12 @@ yt-dlp.tar.gz: README.md yt-dlp.1 completions Changelog.md AUTHORS
--exclude '*~' \ --exclude '*~' \
--exclude '__pycache__' \ --exclude '__pycache__' \
--exclude '.git' \ --exclude '.git' \
--exclude 'docs/_build' \
-- \ -- \
devscripts test \ README.md supportedsites.md Changelog.md LICENSE \
Changelog.md AUTHORS LICENSE README.md supportedsites.md \ CONTRIBUTING.md Collaborators.md CONTRIBUTORS AUTHORS \
Makefile MANIFEST.in yt-dlp.1 completions \ Makefile MANIFEST.in yt-dlp.1 README.txt completions \
setup.py setup.cfg yt-dlp setup.py setup.cfg yt-dlp yt_dlp requirements.txt \
devscripts test tox.ini pytest.ini
AUTHORS: .mailmap AUTHORS: .mailmap
git shortlog -s -n | cut -f2 | sort > AUTHORS git shortlog -s -n | cut -f2 | sort > AUTHORS
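With these Makefile changes, lazy extractors become a prerequisite of the default and install targets and the test targets go through pytest; a plausible local workflow, assuming GNU make:

```
make lazy-extractors yt-dlp   # lazy-extractors is now built before the binary
make offlinetest              # now runs: python3 -m pytest -k "not download"
```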

854
README.md

File diff suppressed because it is too large


Binary file not shown.

Before  |  Size: 4.2 KiB

View File

@@ -1,20 +1,31 @@
#!/usr/bin/env python3
# coding: utf-8 # coding: utf-8
from __future__ import unicode_literals
import re import re
from ..utils import bug_reports_message, write_string
class LazyLoadExtractor(object):
class LazyLoadMetaClass(type):
def __getattr__(cls, name):
if '_real_class' not in cls.__dict__:
write_string(
f'WARNING: Falling back to normal extractor since lazy extractor '
f'{cls.__name__} does not have attribute {name}{bug_reports_message()}')
return getattr(cls._get_real_class(), name)
class LazyLoadExtractor(metaclass=LazyLoadMetaClass):
_module = None _module = None
_WORKING = True
@classmethod @classmethod
def ie_key(cls): def _get_real_class(cls):
return cls.__name__[:-2] if '_real_class' not in cls.__dict__:
mod = __import__(cls._module, fromlist=(cls.__name__,))
cls._real_class = getattr(mod, cls.__name__)
return cls._real_class
def __new__(cls, *args, **kwargs): def __new__(cls, *args, **kwargs):
mod = __import__(cls._module, fromlist=(cls.__name__,)) real_cls = cls._get_real_class()
real_cls = getattr(mod, cls.__name__)
instance = real_cls.__new__(real_cls) instance = real_cls.__new__(real_cls)
instance.__init__(*args, **kwargs) instance.__init__(*args, **kwargs)
return instance return instance

BIN
devscripts/logo.ico Normal file

Binary file not shown.

After  |  Size: 40 KiB

View File

@@ -1,33 +1,34 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
from __future__ import unicode_literals from __future__ import unicode_literals
# import io import io
import optparse import optparse
# import re import re
def main(): def main():
return # This is unused in yt-dlp
parser = optparse.OptionParser(usage='%prog INFILE OUTFILE') parser = optparse.OptionParser(usage='%prog INFILE OUTFILE')
options, args = parser.parse_args() options, args = parser.parse_args()
if len(args) != 2: if len(args) != 2:
parser.error('Expected an input and an output filename') parser.error('Expected an input and an output filename')
infile, outfile = args
""" infile, outfile = args
with io.open(infile, encoding='utf-8') as inf: with io.open(infile, encoding='utf-8') as inf:
readme = inf.read() readme = inf.read()
bug_text = re.search( """ bug_text = re.search(
# r'(?s)#\s*BUGS\s*[^\n]*\s*(.*?)#\s*COPYRIGHT', readme).group(1) r'(?s)#\s*BUGS\s*[^\n]*\s*(.*?)#\s*COPYRIGHT', readme).group(1)
# dev_text = re.search( dev_text = re.search(
# r'(?s)(#\s*DEVELOPER INSTRUCTIONS.*?)#\s*EMBEDDING yt-dlp', r'(?s)(#\s*DEVELOPER INSTRUCTIONS.*?)#\s*EMBEDDING yt-dlp', readme).group(1)
""" readme).group(1)
out = bug_text + dev_text out = bug_text + dev_text
with io.open(outfile, 'w', encoding='utf-8') as outf: with io.open(outfile, 'w', encoding='utf-8') as outf:
outf.write(out) """ outf.write(out)
if __name__ == '__main__': if __name__ == '__main__':
main() main()

View File

@@ -7,32 +7,35 @@ import os
from os.path import dirname as dirn from os.path import dirname as dirn
import sys import sys
print('WARNING: Lazy loading extractors is an experimental feature that may not always work', file=sys.stderr)
sys.path.insert(0, dirn(dirn((os.path.abspath(__file__))))) sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
lazy_extractors_filename = sys.argv[1] lazy_extractors_filename = sys.argv[1] if len(sys.argv) > 1 else 'yt_dlp/extractor/lazy_extractors.py'
if os.path.exists(lazy_extractors_filename): if os.path.exists(lazy_extractors_filename):
os.remove(lazy_extractors_filename) os.remove(lazy_extractors_filename)
# Block plugins from loading # Block plugins from loading
os.rename('ytdlp_plugins', 'ytdlp_plugins_blocked') plugins_dirname = 'ytdlp_plugins'
plugins_blocked_dirname = 'ytdlp_plugins_blocked'
if os.path.exists(plugins_dirname):
os.rename(plugins_dirname, plugins_blocked_dirname)
from yt_dlp.extractor import _ALL_CLASSES from yt_dlp.extractor import _ALL_CLASSES
from yt_dlp.extractor.common import InfoExtractor, SearchInfoExtractor from yt_dlp.extractor.common import InfoExtractor, SearchInfoExtractor
os.rename('ytdlp_plugins_blocked', 'ytdlp_plugins') if os.path.exists(plugins_blocked_dirname):
os.rename(plugins_blocked_dirname, plugins_dirname)
with open('devscripts/lazy_load_template.py', 'rt') as f: with open('devscripts/lazy_load_template.py', 'rt') as f:
module_template = f.read() module_template = f.read()
CLASS_PROPERTIES = ['ie_key', 'working', '_match_valid_url', 'suitable', '_match_id', 'get_temp_id']
module_contents = [ module_contents = [
module_template + '\n' + getsource(InfoExtractor.suitable) + '\n', module_template,
'class LazyLoadSearchExtractor(LazyLoadExtractor):\n pass\n'] *[getsource(getattr(InfoExtractor, k)) for k in CLASS_PROPERTIES],
'\nclass LazyLoadSearchExtractor(LazyLoadExtractor):\n pass\n']
ie_template = ''' ie_template = '''
class {name}({bases}): class {name}({bases}):
_VALID_URL = {valid_url!r}
_module = '{module}' _module = '{module}'
''' '''
@@ -53,14 +56,17 @@ def get_base_name(base):
def build_lazy_ie(ie, name): def build_lazy_ie(ie, name):
valid_url = getattr(ie, '_VALID_URL', None)
s = ie_template.format( s = ie_template.format(
name=name, name=name,
bases=', '.join(map(get_base_name, ie.__bases__)), bases=', '.join(map(get_base_name, ie.__bases__)),
valid_url=valid_url,
module=ie.__module__) module=ie.__module__)
valid_url = getattr(ie, '_VALID_URL', None)
if valid_url:
s += f' _VALID_URL = {valid_url!r}\n'
if not ie._WORKING:
s += ' _WORKING = False\n'
if ie.suitable.__func__ is not InfoExtractor.suitable.__func__: if ie.suitable.__func__ is not InfoExtractor.suitable.__func__:
s += '\n' + getsource(ie.suitable) s += f'\n{getsource(ie.suitable)}'
if hasattr(ie, '_make_valid_url'): if hasattr(ie, '_make_valid_url'):
# search extractors # search extractors
s += make_valid_template.format(valid_url=ie._make_valid_url()) s += make_valid_template.format(valid_url=ie._make_valid_url())
@@ -98,7 +104,7 @@ for ie in ordered_cls:
names.append(name) names.append(name)
module_contents.append( module_contents.append(
'_ALL_CLASSES = [{0}]'.format(', '.join(names))) '\n_ALL_CLASSES = [{0}]'.format(', '.join(names)))
module_src = '\n'.join(module_contents) + '\n' module_src = '\n'.join(module_contents) + '\n'
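A sketch of regenerating the stub module with the script above; the output path argument is now optional and falls back to the default shown in the diff:

```
# Rebuild yt_dlp/extractor/lazy_extractors.py from the current extractor sources
python3 devscripts/make_lazy_extractors.py yt_dlp/extractor/lazy_extractors.py
```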

View File

@@ -29,6 +29,9 @@ def main():
continue continue
if ie_desc is not None: if ie_desc is not None:
ie_md += ': {0}'.format(ie.IE_DESC) ie_md += ': {0}'.format(ie.IE_DESC)
search_key = getattr(ie, 'SEARCH_KEY', None)
if search_key is not None:
ie_md += f'; "{ie.SEARCH_KEY}:" prefix'
if not ie.working(): if not ie.working():
ie_md += ' (Currently broken)' ie_md += ' (Currently broken)'
yield ie_md yield ie_md

View File

@@ -1,17 +1,16 @@
@setlocal
@echo off @echo off
cd /d %~dp0..
rem Keep this list in sync with the `offlinetest` target in Makefile if ["%~1"]==[""] (
set DOWNLOAD_TESTS="age_restriction^|download^|iqiyi_sdk_interpreter^|socks^|subtitles^|write_annotations^|youtube_lists^|youtube_signature^|post_hooks" set "test_set="test""
) else if ["%~1"]==["core"] (
if "%YTDL_TEST_SET%" == "core" ( set "test_set="-m not download""
set test_set="-I test_("%DOWNLOAD_TESTS%")\.py" ) else if ["%~1"]==["download"] (
set multiprocess_args="" set "test_set="-m "download""
) else if "%YTDL_TEST_SET%" == "download" (
set test_set="-I test_(?!"%DOWNLOAD_TESTS%").+\.py"
set multiprocess_args="--processes=4 --process-timeout=540"
) else ( ) else (
echo YTDL_TEST_SET is not set or invalid echo.Invalid test type "%~1". Use "core" ^| "download"
exit /b 1 exit /b 1
) )
nosetests test --verbose %test_set:"=% %multiprocess_args:"=% pytest %test_set%

View File

@@ -1,22 +1,14 @@
#!/bin/bash #!/bin/sh
# Keep this list in sync with the `offlinetest` target in Makefile if [ -z $1 ]; then
DOWNLOAD_TESTS="age_restriction|download|iqiyi_sdk_interpreter|overwrites|socks|subtitles|write_annotations|youtube_lists|youtube_signature|post_hooks" test_set='test'
elif [ $1 = 'core' ]; then
test_set="-m not download"
elif [ $1 = 'download' ]; then
test_set="-m download"
else
echo 'Invalid test type "'$1'". Use "core" | "download"'
exit 1
fi
test_set="" python3 -m pytest "$test_set"
multiprocess_args=""
case "$YTDL_TEST_SET" in
core)
test_set="-I test_($DOWNLOAD_TESTS)\.py"
;;
download)
test_set="-I test_(?!$DOWNLOAD_TESTS).+\.py"
multiprocess_args="--processes=4 --process-timeout=540"
;;
*)
break
;;
esac
nosetests test --verbose $test_set $multiprocess_args
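Both rewritten test runners take a single positional argument instead of the old YTDL_TEST_SET environment variable; the intended usage, per the scripts above:

```
./devscripts/run_tests.sh core        # offline tests only (-m "not download")
./devscripts/run_tests.sh download    # download tests only (-m download)
# the Windows batch file accepts the same argument:
#   devscripts\run_tests.bat core
```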

View File

@@ -0,0 +1,37 @@
#!/usr/bin/env python3
from __future__ import unicode_literals
import json
import os
import re
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from yt_dlp.compat import compat_urllib_request
# usage: python3 ./devscripts/update-formulae.py <path-to-formulae-rb> <version>
# version can be either 0-aligned (yt-dlp version) or normalized (PyPI version)
filename, version = sys.argv[1:]
normalized_version = '.'.join(str(int(x)) for x in version.split('.'))
pypi_release = json.loads(compat_urllib_request.urlopen(
'https://pypi.org/pypi/yt-dlp/%s/json' % normalized_version
).read().decode('utf-8'))
tarball_file = next(x for x in pypi_release['urls'] if x['filename'].endswith('.tar.gz'))
sha256sum = tarball_file['digests']['sha256']
url = tarball_file['url']
with open(filename, 'r') as r:
formulae_text = r.read()
formulae_text = re.sub(r'sha256 "[0-9a-f]*?"', 'sha256 "%s"' % sha256sum, formulae_text)
formulae_text = re.sub(r'url "[^"]*?"', 'url "%s"' % url, formulae_text)
with open(filename, 'w') as w:
w.write(formulae_text)
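A hypothetical invocation of the new helper (the formula path and version below are placeholders):

```
# Rewrites the sha256 and url fields of a Homebrew formula to match the given PyPI release
python3 devscripts/update-formulae.py Formula/yt-dlp.rb 2021.11.10
```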

5
docs/Collaborators.md Normal file
View File

@@ -0,0 +1,5 @@
---
orphan: true
---
```{include} ../Collaborators.md
```

189
pyinst.py
View File

@@ -1,82 +1,135 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# coding: utf-8 # coding: utf-8
import os
from __future__ import unicode_literals
import sys
# import os
import platform import platform
import sys
from PyInstaller.utils.hooks import collect_submodules from PyInstaller.utils.hooks import collect_submodules
from PyInstaller.utils.win32.versioninfo import (
VarStruct, VarFileInfo, StringStruct, StringTable,
StringFileInfo, FixedFileInfo, VSVersionInfo, SetVersion,
)
import PyInstaller.__main__
arch = sys.argv[1] if len(sys.argv) > 1 else platform.architecture()[0][:2]
assert arch in ('32', '64')
print('Building %sbit version' % arch)
_x86 = '_x86' if arch == '32' else ''
FILE_DESCRIPTION = 'yt-dlp%s' % (' (32 Bit)' if _x86 else '') OS_NAME = platform.system()
if OS_NAME == 'Windows':
from PyInstaller.utils.win32.versioninfo import (
VarStruct, VarFileInfo, StringStruct, StringTable,
StringFileInfo, FixedFileInfo, VSVersionInfo, SetVersion,
)
elif OS_NAME == 'Darwin':
pass
else:
raise Exception('{OS_NAME} is not supported')
# root_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..')) ARCH = platform.architecture()[0][:2]
# print('Changing working directory to %s' % root_dir)
# os.chdir(root_dir)
exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
VERSION = locals()['__version__']
VERSION_LIST = VERSION.split('.') def main():
VERSION_LIST = list(map(int, VERSION_LIST)) + [0] * (4 - len(VERSION_LIST)) opts = parse_options()
version = read_version()
print('Version: %s%s' % (VERSION, _x86)) suffix = '_macos' if OS_NAME == 'Darwin' else '_x86' if ARCH == '32' else ''
print('Remember to update the version using devscipts\\update-version.py') final_file = 'dist/%syt-dlp%s%s' % (
'yt-dlp/' if '--onedir' in opts else '', suffix, '.exe' if OS_NAME == 'Windows' else '')
VERSION_FILE = VSVersionInfo( print(f'Building yt-dlp v{version} {ARCH}bit for {OS_NAME} with options {opts}')
ffi=FixedFileInfo( print('Remember to update the version using "devscripts/update-version.py"')
filevers=VERSION_LIST, if not os.path.isfile('yt_dlp/extractor/lazy_extractors.py'):
prodvers=VERSION_LIST, print('WARNING: Building without lazy_extractors. Run '
mask=0x3F, '"devscripts/make_lazy_extractors.py" to build lazy extractors', file=sys.stderr)
flags=0x0, print(f'Destination: {final_file}\n')
OS=0x4,
fileType=0x1, opts = [
subtype=0x0, f'--name=yt-dlp{suffix}',
date=(0, 0), '--icon=devscripts/logo.ico',
), '--upx-exclude=vcruntime140.dll',
kids=[ '--noconfirm',
StringFileInfo([ *dependancy_options(),
StringTable( *opts,
'040904B0', [ 'yt_dlp/__main__.py',
StringStruct('Comments', 'yt-dlp%s Command Line Interface.' % _x86),
StringStruct('CompanyName', 'https://github.com/yt-dlp'),
StringStruct('FileDescription', FILE_DESCRIPTION),
StringStruct('FileVersion', VERSION),
StringStruct('InternalName', 'yt-dlp%s' % _x86),
StringStruct(
'LegalCopyright',
'pukkandan.ytdlp@gmail.com | UNLICENSE',
),
StringStruct('OriginalFilename', 'yt-dlp%s.exe' % _x86),
StringStruct('ProductName', 'yt-dlp%s' % _x86),
StringStruct(
'ProductVersion',
'%s%s on Python %s' % (VERSION, _x86, platform.python_version())),
])]),
VarFileInfo([VarStruct('Translation', [0, 1200])])
] ]
) print(f'Running PyInstaller with {opts}')
dependancies = ['Crypto', 'mutagen'] + collect_submodules('websockets') import PyInstaller.__main__
excluded_modules = ['test', 'ytdlp_plugins', 'youtube-dl', 'youtube-dlc']
PyInstaller.__main__.run([ PyInstaller.__main__.run(opts)
'--name=yt-dlp%s' % _x86,
'--onefile', set_version_info(final_file, version)
'--icon=devscripts/cloud.ico',
*[f'--exclude-module={module}' for module in excluded_modules],
*[f'--hidden-import={module}' for module in dependancies], def parse_options():
'--upx-exclude=vcruntime140.dll', # Compatability with older arguments
'yt_dlp/__main__.py', opts = sys.argv[1:]
]) if opts[0:1] in (['32'], ['64']):
SetVersion('dist/yt-dlp%s.exe' % _x86, VERSION_FILE) if ARCH != opts[0]:
raise Exception(f'{opts[0]}bit executable cannot be built on a {ARCH}bit system')
opts = opts[1:]
return opts or ['--onefile']
def read_version():
exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
return locals()['__version__']
def version_to_list(version):
version_list = version.split('.')
return list(map(int, version_list)) + [0] * (4 - len(version_list))
def dependancy_options():
dependancies = [pycryptodome_module(), 'mutagen'] + collect_submodules('websockets')
excluded_modules = ['test', 'ytdlp_plugins', 'youtube-dl', 'youtube-dlc']
yield from (f'--hidden-import={module}' for module in dependancies)
yield from (f'--exclude-module={module}' for module in excluded_modules)
def pycryptodome_module():
try:
import Cryptodome # noqa: F401
except ImportError:
try:
import Crypto # noqa: F401
print('WARNING: Using Crypto since Cryptodome is not available. '
'Install with: pip install pycryptodomex', file=sys.stderr)
return 'Crypto'
except ImportError:
pass
return 'Cryptodome'
def set_version_info(exe, version):
if OS_NAME == 'Windows':
windows_set_version(exe, version)
def windows_set_version(exe, version):
version_list = version_to_list(version)
suffix = '_x86' if ARCH == '32' else ''
SetVersion(exe, VSVersionInfo(
ffi=FixedFileInfo(
filevers=version_list,
prodvers=version_list,
mask=0x3F,
flags=0x0,
OS=0x4,
fileType=0x1,
subtype=0x0,
date=(0, 0),
),
kids=[
StringFileInfo([StringTable('040904B0', [
StringStruct('Comments', 'yt-dlp%s Command Line Interface.' % suffix),
StringStruct('CompanyName', 'https://github.com/yt-dlp'),
StringStruct('FileDescription', 'yt-dlp%s' % (' (32 Bit)' if ARCH == '32' else '')),
StringStruct('FileVersion', version),
StringStruct('InternalName', f'yt-dlp{suffix}'),
StringStruct('LegalCopyright', 'pukkandan.ytdlp@gmail.com | UNLICENSE'),
StringStruct('OriginalFilename', f'yt-dlp{suffix}.exe'),
StringStruct('ProductName', f'yt-dlp{suffix}'),
StringStruct(
'ProductVersion', f'{version}{suffix} on Python {platform.python_version()}'),
])]), VarFileInfo([VarStruct('Translation', [0, 1200])])
]
))
if __name__ == '__main__':
main()
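With the rewritten pyinst.py, extra arguments are passed straight through to PyInstaller and a one-file build is the default; a sketch of typical invocations:

```
python3 pyinst.py            # defaults to --onefile
python3 pyinst.py --onedir   # any other PyInstaller options pass through
```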

4
pytest.ini Normal file
View File

@@ -0,0 +1,4 @@
[pytest]
addopts = -ra -v --strict-markers
markers =
download
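The new pytest.ini registers a `download` marker, so the suite can be split without the old nose exclude lists, for example:

```
python3 -m pytest -m "not download"   # offline/core tests
python3 -m pytest -m download         # network download tests
```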

View File

@@ -1,3 +1,3 @@
mutagen mutagen
pycryptodome pycryptodomex
websockets websockets
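The dependency moves from pycryptodome to pycryptodomex; an existing environment can be brought in line with something like:

```
python3 -m pip install -U pycryptodomex mutagen websockets
```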

View File

@@ -1,52 +1,86 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# coding: utf-8 # coding: utf-8
from setuptools import setup, Command, find_packages
import os.path import os.path
import warnings import warnings
import sys import sys
from distutils.spawn import spawn
try:
from setuptools import setup, Command, find_packages
setuptools_available = True
except ImportError:
from distutils.core import setup, Command
setuptools_available = False
from distutils.spawn import spawn
# Get the version from yt_dlp/version.py without importing the package # Get the version from yt_dlp/version.py without importing the package
exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec')) exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
DESCRIPTION = 'Command-line program to download videos from YouTube.com and many other other video platforms.' DESCRIPTION = 'A youtube-dl fork with additional features and patches'
LONG_DESCRIPTION = '\n\n'.join(( LONG_DESCRIPTION = '\n\n'.join((
'Official repository: <https://github.com/yt-dlp/yt-dlp>', 'Official repository: <https://github.com/yt-dlp/yt-dlp>',
'**PS**: Some links in this document will not work since this is a copy of the README.md from Github', '**PS**: Some links in this document will not work since this is a copy of the README.md from Github',
open('README.md', 'r', encoding='utf-8').read())) open('README.md', 'r', encoding='utf-8').read()))
REQUIREMENTS = ['mutagen', 'pycryptodome', 'websockets'] REQUIREMENTS = ['mutagen', 'pycryptodomex', 'websockets']
if sys.argv[1:2] == ['py2exe']: if sys.argv[1:2] == ['py2exe']:
raise NotImplementedError('py2exe is not currently supported; instead, use "pyinst.py" to build with pyinstaller') import py2exe
warnings.warn(
'py2exe builds do not support pycryptodomex and needs VC++14 to run. '
'The recommended way is to use "pyinst.py" to build using pyinstaller')
params = {
'console': [{
'script': './yt_dlp/__main__.py',
'dest_base': 'yt-dlp',
'version': __version__,
'description': DESCRIPTION,
'comments': LONG_DESCRIPTION.split('\n')[0],
'product_name': 'yt-dlp',
'product_version': __version__,
}],
'options': {
'py2exe': {
'bundle_files': 0,
'compressed': 1,
'optimize': 2,
'dist_dir': './dist',
'excludes': ['Crypto', 'Cryptodome'], # py2exe cannot import Crypto
'dll_excludes': ['w9xpopen.exe', 'crypt32.dll'],
}
},
'zipfile': None
}
else:
files_spec = [
('share/bash-completion/completions', ['completions/bash/yt-dlp']),
('share/zsh/site-functions', ['completions/zsh/_yt-dlp']),
('share/fish/vendor_completions.d', ['completions/fish/yt-dlp.fish']),
('share/doc/yt_dlp', ['README.txt']),
('share/man/man1', ['yt-dlp.1'])
]
root = os.path.dirname(os.path.abspath(__file__))
data_files = []
for dirname, files in files_spec:
resfiles = []
for fn in files:
if not os.path.exists(fn):
warnings.warn('Skipping file %s since it is not present. Try running `make pypi-files` first' % fn)
else:
resfiles.append(fn)
data_files.append((dirname, resfiles))
files_spec = [ params = {
('share/bash-completion/completions', ['completions/bash/yt-dlp']), 'data_files': data_files,
('share/zsh/site-functions', ['completions/zsh/_yt-dlp']), }
('share/fish/vendor_completions.d', ['completions/fish/yt-dlp.fish']),
('share/doc/yt_dlp', ['README.txt']),
('share/man/man1', ['yt-dlp.1'])
]
root = os.path.dirname(os.path.abspath(__file__))
data_files = []
for dirname, files in files_spec:
resfiles = []
for fn in files:
if not os.path.exists(fn):
warnings.warn('Skipping file %s since it is not present. Try running `make pypi-files` first' % fn)
else:
resfiles.append(fn)
data_files.append((dirname, resfiles))
params = { if setuptools_available:
'data_files': data_files, params['entry_points'] = {'console_scripts': ['yt-dlp = yt_dlp:main']}
} else:
params['entry_points'] = {'console_scripts': ['yt-dlp = yt_dlp:main']} params['scripts'] = ['yt-dlp']
class build_lazy_extractors(Command): class build_lazy_extractors(Command):
@@ -64,7 +98,11 @@ class build_lazy_extractors(Command):
dry_run=self.dry_run) dry_run=self.dry_run)
packages = find_packages(exclude=('youtube_dl', 'test', 'ytdlp_plugins')) if setuptools_available:
packages = find_packages(exclude=('youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins'))
else:
packages = ['yt_dlp', 'yt_dlp.downloader', 'yt_dlp.extractor', 'yt_dlp.postprocessor']
setup( setup(
name='yt-dlp', name='yt-dlp',
@@ -81,7 +119,7 @@ setup(
'Documentation': 'https://yt-dlp.readthedocs.io', 'Documentation': 'https://yt-dlp.readthedocs.io',
'Source': 'https://github.com/yt-dlp/yt-dlp', 'Source': 'https://github.com/yt-dlp/yt-dlp',
'Tracker': 'https://github.com/yt-dlp/yt-dlp/issues', 'Tracker': 'https://github.com/yt-dlp/yt-dlp/issues',
#'Funding': 'https://donate.pypi.org', 'Funding': 'https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators',
}, },
classifiers=[ classifiers=[
'Topic :: Multimedia :: Video', 'Topic :: Multimedia :: Video',
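The reworked setup.py restores a py2exe path (with the pycryptodomex caveat warned about above) alongside the PyInstaller route; a hypothetical Windows build:

```
python3 setup.py py2exe
```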

View File

@@ -1,4 +1,6 @@
# Supported sites # Supported sites
- **17live**
- **17live:clip**
- **1tv**: Первый канал - **1tv**: Первый канал
- **20min** - **20min**
- **220.ro** - **220.ro**
@@ -46,10 +48,12 @@
- **Alura** - **Alura**
- **AluraCourse** - **AluraCourse**
- **Amara** - **Amara**
- **AmazonStore**
- **AMCNetworks** - **AMCNetworks**
- **AmericasTestKitchen** - **AmericasTestKitchen**
- **AmericasTestKitchenSeason** - **AmericasTestKitchenSeason**
- **anderetijden**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl - **anderetijden**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
- **AnimalPlanet**
- **AnimeLab** - **AnimeLab**
- **AnimeLabShows** - **AnimeLabShows**
- **AnimeOnDemand** - **AnimeOnDemand**
@@ -95,7 +99,9 @@
- **Bandcamp** - **Bandcamp**
- **Bandcamp:album** - **Bandcamp:album**
- **Bandcamp:weekly** - **Bandcamp:weekly**
- **BandcampMusic**
- **bangumi.bilibili.com**: BiliBili番剧 - **bangumi.bilibili.com**: BiliBili番剧
- **BannedVideo**
- **bbc**: BBC - **bbc**: BBC
- **bbc.co.uk**: BBC iPlayer - **bbc.co.uk**: BBC iPlayer
- **bbc.co.uk:article**: BBC articles - **bbc.co.uk:article**: BBC articles
@@ -117,11 +123,14 @@
- **Bigflix** - **Bigflix**
- **Bild**: Bild.de - **Bild**: Bild.de
- **BiliBili** - **BiliBili**
- **Bilibili category extractor**
- **BilibiliAudio** - **BilibiliAudio**
- **BilibiliAudioAlbum** - **BilibiliAudioAlbum**
- **BilibiliChannel** - **BilibiliChannel**
- **BiliBiliPlayer** - **BiliBiliPlayer**
- **BiliBiliSearch**: Bilibili video search, "bilisearch" keyword - **BiliBiliSearch**: Bilibili video search; "bilisearch:" prefix
- **BiliIntl**
- **BiliIntlSeries**
- **BioBioChileTV** - **BioBioChileTV**
- **Biography** - **Biography**
- **BIQLE** - **BIQLE**
@@ -129,6 +138,7 @@
- **BitChuteChannel** - **BitChuteChannel**
- **bitwave:replay** - **bitwave:replay**
- **bitwave:stream** - **bitwave:stream**
- **BlackboardCollaborate**
- **BleacherReport** - **BleacherReport**
- **BleacherReportCMS** - **BleacherReportCMS**
- **Bloomberg** - **Bloomberg**
@@ -148,10 +158,10 @@
- **BusinessInsider** - **BusinessInsider**
- **BuzzFeed** - **BuzzFeed**
- **BYUtv** - **BYUtv**
- **CAM4**
- **Camdemy** - **Camdemy**
- **CamdemyFolder** - **CamdemyFolder**
- **CamModels** - **CamModels**
- **CamTube**
- **CamWithHer** - **CamWithHer**
- **canalc2.tv** - **canalc2.tv**
- **Canalplus**: mycanal.fr and piwiplus.fr - **Canalplus**: mycanal.fr and piwiplus.fr
@@ -161,10 +171,7 @@
- **CarambaTVPage** - **CarambaTVPage**
- **CartoonNetwork** - **CartoonNetwork**
- **cbc.ca** - **cbc.ca**
- **cbc.ca:olympics**
- **cbc.ca:player** - **cbc.ca:player**
- **cbc.ca:watch**
- **cbc.ca:watch:video**
- **CBS** - **CBS**
- **CBSInteractive** - **CBSInteractive**
- **CBSLocal** - **CBSLocal**
@@ -178,11 +185,13 @@
- **CCTV**: 央视网 - **CCTV**: 央视网
- **CDA** - **CDA**
- **CeskaTelevize** - **CeskaTelevize**
- **CeskaTelevizePorady** - **CGTN**
- **channel9**: Channel 9 - **channel9**: Channel 9
- **CharlieRose** - **CharlieRose**
- **Chaturbate** - **Chaturbate**
- **Chilloutzone** - **Chilloutzone**
- **Chingari**
- **ChingariUser**
- **chirbit** - **chirbit**
- **chirbit:profile** - **chirbit:profile**
- **cielotv.it** - **cielotv.it**
@@ -190,6 +199,7 @@
- **Cinemax** - **Cinemax**
- **CiscoLiveSearch** - **CiscoLiveSearch**
- **CiscoLiveSession** - **CiscoLiveSession**
- **ciscowebex**: Cisco Webex
- **CJSW** - **CJSW**
- **cliphunter** - **cliphunter**
- **Clippit** - **Clippit**
@@ -216,13 +226,14 @@
- **Crackle** - **Crackle**
- **CrooksAndLiars** - **CrooksAndLiars**
- **crunchyroll** - **crunchyroll**
- **crunchyroll:beta**
- **crunchyroll:playlist** - **crunchyroll:playlist**
- **crunchyroll:playlist:beta**
- **CSpan**: C-SPAN - **CSpan**: C-SPAN
- **CtsNews**: 華視新聞 - **CtsNews**: 華視新聞
- **CTV** - **CTV**
- **CTVNews** - **CTVNews**
- **cu.ntv.co.jp**: Nippon Television Network - **cu.ntv.co.jp**: Nippon Television Network
- **Culturebox**
- **CultureUnplugged** - **CultureUnplugged**
- **curiositystream** - **curiositystream**
- **curiositystream:collection** - **curiositystream:collection**
@@ -232,6 +243,8 @@
- **dailymotion** - **dailymotion**
- **dailymotion:playlist** - **dailymotion:playlist**
- **dailymotion:user** - **dailymotion:user**
- **damtomo:record**
- **damtomo:video**
- **daum.net** - **daum.net**
- **daum.net:clip** - **daum.net:clip**
- **daum.net:playlist** - **daum.net:playlist**
@@ -255,10 +268,12 @@
- **DiscoveryPlusIndiaShow** - **DiscoveryPlusIndiaShow**
- **DiscoveryVR** - **DiscoveryVR**
- **Disney** - **Disney**
- **DIYNetwork**
- **dlive:stream** - **dlive:stream**
- **dlive:vod** - **dlive:vod**
- **DoodStream** - **DoodStream**
- **Dotsub** - **Dotsub**
- **Douyin**
- **DouyuShow** - **DouyuShow**
- **DouyuTV**: 斗鱼 - **DouyuTV**: 斗鱼
- **DPlay** - **DPlay**
@@ -292,13 +307,17 @@
- **Embedly** - **Embedly**
- **EMPFlix** - **EMPFlix**
- **Engadget** - **Engadget**
- **Epicon**
- **EpiconSeries**
- **Eporner** - **Eporner**
- **EroProfile** - **EroProfile**
- **EroProfile:album**
- **Escapist** - **Escapist**
- **ESPN** - **ESPN**
- **ESPNArticle** - **ESPNArticle**
- **EsriVideo** - **EsriVideo**
- **Europa** - **Europa**
- **EUScreen**
- **EWETV** - **EWETV**
- **ExpoTV** - **ExpoTV**
- **Expressen** - **Expressen**
@@ -306,11 +325,13 @@
- **EyedoTV** - **EyedoTV**
- **facebook** - **facebook**
- **FacebookPluginsVideo** - **FacebookPluginsVideo**
- **fancode:live**
- **fancode:vod** - **fancode:vod**
- **faz.net** - **faz.net**
- **fc2** - **fc2**
- **fc2:embed** - **fc2:embed**
- **Fczenit** - **Fczenit**
- **Filmmodu**
- **filmon** - **filmon**
- **filmon:channel** - **filmon:channel**
- **Filmweb** - **Filmweb**
@@ -327,13 +348,10 @@
- **foxnews**: Fox News and Fox Business Video - **foxnews**: Fox News and Fox Business Video
- **foxnews:article** - **foxnews:article**
- **FoxSports** - **FoxSports**
- **france2.fr:generation-what**
- **FranceCulture** - **FranceCulture**
- **FranceInter** - **FranceInter**
- **FranceTV** - **FranceTV**
- **FranceTVEmbed**
- **francetvinfo.fr** - **francetvinfo.fr**
- **FranceTVJeunesse**
- **FranceTVSite** - **FranceTVSite**
- **Freesound** - **Freesound**
- **freespeech.org** - **freespeech.org**
@@ -343,9 +361,13 @@
- **FrontendMastersLesson** - **FrontendMastersLesson**
- **FujiTVFODPlus7** - **FujiTVFODPlus7**
- **Funimation** - **Funimation**
- **funimation:page**
- **funimation:show**
- **Funk** - **Funk**
- **Fusion** - **Fusion**
- **Fux** - **Fux**
- **Gab**
- **GabTV**
- **Gaia** - **Gaia**
- **GameInformer** - **GameInformer**
- **GameSpot** - **GameSpot**
@@ -354,7 +376,11 @@
- **Gazeta** - **Gazeta**
- **GDCVault** - **GDCVault**
- **GediDigital** - **GediDigital**
- **gem.cbc.ca**
- **gem.cbc.ca:live**
- **gem.cbc.ca:playlist**
- **generic**: Generic downloader that works on some sites - **generic**: Generic downloader that works on some sites
- **Gettr**
- **Gfycat** - **Gfycat**
- **GiantBomb** - **GiantBomb**
- **Giga** - **Giga**
@@ -368,8 +394,11 @@
- **google:podcasts** - **google:podcasts**
- **google:podcasts:feed** - **google:podcasts:feed**
- **GoogleDrive** - **GoogleDrive**
- **GoPro**
- **Goshgay** - **Goshgay**
- **GoToStage**
- **GPUTechConf** - **GPUTechConf**
- **Gronkh**
- **Groupon** - **Groupon**
- **hbo** - **hbo**
- **HearThisAt** - **HearThisAt**
@@ -401,6 +430,7 @@
- **Huajiao**: 花椒直播 - **Huajiao**: 花椒直播
- **HuffPost**: Huffington Post - **HuffPost**: Huffington Post
- **Hungama** - **Hungama**
- **HungamaAlbumPlaylist**
- **HungamaSong** - **HungamaSong**
- **Hypem** - **Hypem**
- **ign.com** - **ign.com**
@@ -420,9 +450,11 @@
- **Instagram** - **Instagram**
- **instagram:tag**: Instagram hashtag search - **instagram:tag**: Instagram hashtag search
- **instagram:user**: Instagram user profile - **instagram:user**: Instagram user profile
- **InstagramIOS**: IOS instagram:// URL
- **Internazionale** - **Internazionale**
- **InternetVideoArchive** - **InternetVideoArchive**
- **IPrima** - **IPrima**
- **IPrimaCNN**
- **iqiyi**: 爱奇艺 - **iqiyi**: 爱奇艺
- **Ir90Tv** - **Ir90Tv**
- **ITTF** - **ITTF**
@@ -453,6 +485,7 @@
- **KinjaEmbed** - **KinjaEmbed**
- **KinoPoisk** - **KinoPoisk**
- **KonserthusetPlay** - **KonserthusetPlay**
- **Koo**
- **KrasView**: Красвью - **KrasView**: Красвью
- **Ku6** - **Ku6**
- **KUSI** - **KUSI**
@@ -513,6 +546,9 @@
- **MallTV** - **MallTV**
- **mangomolo:live** - **mangomolo:live**
- **mangomolo:video** - **mangomolo:video**
- **ManotoTV**: Manoto TV (Episode)
- **ManotoTVLive**: Manoto TV (Live)
- **ManotoTVShow**: Manoto TV (Show)
- **ManyVids** - **ManyVids**
- **MaoriTV** - **MaoriTV**
- **Markiza** - **Markiza**
@@ -523,8 +559,11 @@
- **MedalTV** - **MedalTV**
- **media.ccc.de** - **media.ccc.de**
- **media.ccc.de:lists** - **media.ccc.de:lists**
- **Mediaite**
- **MediaKlikk**
- **Medialaan** - **Medialaan**
- **Mediaset** - **Mediaset**
- **MediasetShow**
- **Mediasite** - **Mediasite**
- **MediasiteCatalog** - **MediasiteCatalog**
- **MediasiteNamedCatalog** - **MediasiteNamedCatalog**
@@ -539,6 +578,7 @@
- **Mgoon** - **Mgoon**
- **MGTV**: 芒果TV - **MGTV**: 芒果TV
- **MiaoPai** - **MiaoPai**
- **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom - **mildom**: Record ongoing live by specific user in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom - **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: Download a VOD in Mildom - **mildom:vod**: Download a VOD in Mildom
@@ -548,12 +588,15 @@
- **MinistryGrid** - **MinistryGrid**
- **Minoto** - **Minoto**
- **miomio.tv** - **miomio.tv**
- **mirrativ**
- **mirrativ:user**
- **MiTele**: mitele.es - **MiTele**: mitele.es
- **mixcloud** - **mixcloud**
- **mixcloud:playlist** - **mixcloud:playlist**
- **mixcloud:user** - **mixcloud:user**
- **MLB** - **MLB**
- **MLBVideo** - **MLBVideo**
- **MLSSoccer**
- **Mnet** - **Mnet**
- **MNetTV** - **MNetTV**
- **MoeVideo**: LetitBit video services: moevideo.net, playreplay.net and videochart.net - **MoeVideo**: LetitBit video services: moevideo.net, playreplay.net and videochart.net
@@ -579,6 +622,7 @@
- **mtvservices:embedded** - **mtvservices:embedded**
- **MTVUutisetArticle** - **MTVUutisetArticle**
- **MuenchenTV**: münchen.tv - **MuenchenTV**: münchen.tv
- **MuseScore**
- **mva**: Microsoft Virtual Academy videos - **mva**: Microsoft Virtual Academy videos
- **mva:course**: Microsoft Virtual Academy courses - **mva:course**: Microsoft Virtual Academy courses
- **Mwave** - **Mwave**
@@ -595,6 +639,8 @@
- **MyviEmbed** - **MyviEmbed**
- **MyVisionTV** - **MyVisionTV**
- **n-tv.de** - **n-tv.de**
- **N1Info:article**
- **N1InfoAsset**
- **natgeo:video** - **natgeo:video**
- **NationalGeographicTV** - **NationalGeographicTV**
- **Naver** - **Naver**
@@ -628,7 +674,8 @@
- **NetPlus** - **NetPlus**
- **Netzkino** - **Netzkino**
- **Newgrounds** - **Newgrounds**
- **NewgroundsPlaylist** - **Newgrounds:playlist**
- **Newgrounds:user**
- **Newstube** - **Newstube**
- **NextMedia**: 蘋果日報 - **NextMedia**: 蘋果日報
- **NextMediaActionNews**: 蘋果日報 - 動新聞 - **NextMediaActionNews**: 蘋果日報 - 動新聞
@@ -649,6 +696,9 @@
- **niconico**: ニコニコ動画 - **niconico**: ニコニコ動画
- **NiconicoPlaylist** - **NiconicoPlaylist**
- **NiconicoUser** - **NiconicoUser**
- **nicovideo:search**: Nico video searches; "nicosearch:" prefix
- **nicovideo:search:date**: Nico video searches, newest first; "nicosearchdate:" prefix
- **nicovideo:search_url**: Nico video search URLs
- **Nintendo** - **Nintendo**
- **Nitter** - **Nitter**
- **njoy**: N-JOY - **njoy**: N-JOY
@@ -661,6 +711,7 @@
- **NosVideo** - **NosVideo**
- **Nova**: TN.cz, Prásk.tv, Nova.cz, Novaplus.cz, FANDA.tv, Krásná.cz and Doma.cz - **Nova**: TN.cz, Prásk.tv, Nova.cz, Novaplus.cz, FANDA.tv, Krásná.cz and Doma.cz
- **NovaEmbed** - **NovaEmbed**
- **NovaPlay**
- **nowness** - **nowness**
- **nowness:playlist** - **nowness:playlist**
- **nowness:series** - **nowness:series**
@@ -686,11 +737,14 @@
- **NYTimes** - **NYTimes**
- **NYTimesArticle** - **NYTimesArticle**
- **NYTimesCooking** - **NYTimesCooking**
- **nzherald**
- **NZZ** - **NZZ**
- **ocw.mit.edu** - **ocw.mit.edu**
- **OdaTV** - **OdaTV**
- **Odnoklassniki** - **Odnoklassniki**
- **OktoberfestTV** - **OktoberfestTV**
- **OlympicsReplay**
- **on24**: ON24
- **OnDemandKorea** - **OnDemandKorea**
- **onet.pl** - **onet.pl**
- **onet.tv** - **onet.tv**
@@ -699,6 +753,8 @@
- **OnionStudios** - **OnionStudios**
- **Ooyala** - **Ooyala**
- **OoyalaExternal** - **OoyalaExternal**
- **openrec**
- **openrec:capture**
- **OraTV** - **OraTV**
- **orf:burgenland**: Radio Burgenland - **orf:burgenland**: Radio Burgenland
- **orf:fm4**: radio FM4 - **orf:fm4**: radio FM4
@@ -724,12 +780,18 @@
- **PalcoMP3:video** - **PalcoMP3:video**
- **pandora.tv**: 판도라TV - **pandora.tv**: 판도라TV
- **ParamountNetwork** - **ParamountNetwork**
- **ParamountPlus**
- **ParamountPlusSeries**
- **parliamentlive.tv**: UK parliament videos - **parliamentlive.tv**: UK parliament videos
- **Parlview** - **Parlview**
- **Patreon** - **Patreon**
- **PatreonUser**
- **pbs**: Public Broadcasting Service (PBS) and member stations: PBS: Public Broadcasting Service, APT - Alabama Public Television (WBIQ), GPB/Georgia Public Broadcasting (WGTV), Mississippi Public Broadcasting (WMPN), Nashville Public Television (WNPT), WFSU-TV (WFSU), WSRE (WSRE), WTCI (WTCI), WPBA/Channel 30 (WPBA), Alaska Public Media (KAKM), Arizona PBS (KAET), KNME-TV/Channel 5 (KNME), Vegas PBS (KLVX), AETN/ARKANSAS ETV NETWORK (KETS), KET (WKLE), WKNO/Channel 10 (WKNO), LPB/LOUISIANA PUBLIC BROADCASTING (WLPB), OETA (KETA), Ozarks Public Television (KOZK), WSIU Public Broadcasting (WSIU), KEET TV (KEET), KIXE/Channel 9 (KIXE), KPBS San Diego (KPBS), KQED (KQED), KVIE Public Television (KVIE), PBS SoCal/KOCE (KOCE), ValleyPBS (KVPT), CONNECTICUT PUBLIC TELEVISION (WEDH), KNPB Channel 5 (KNPB), SOPTV (KSYS), Rocky Mountain PBS (KRMA), KENW-TV3 (KENW), KUED Channel 7 (KUED), Wyoming PBS (KCWC), Colorado Public Television / KBDI 12 (KBDI), KBYU-TV (KBYU), Thirteen/WNET New York (WNET), WGBH/Channel 2 (WGBH), WGBY (WGBY), NJTV Public Media NJ (WNJT), WLIW21 (WLIW), mpt/Maryland Public Television (WMPB), WETA Television and Radio (WETA), WHYY (WHYY), PBS 39 (WLVT), WVPT - Your Source for PBS and More! (WVPT), Howard University Television (WHUT), WEDU PBS (WEDU), WGCU Public Media (WGCU), WPBT2 (WPBT), WUCF TV (WUCF), WUFT/Channel 5 (WUFT), WXEL/Channel 42 (WXEL), WLRN/Channel 17 (WLRN), WUSF Public Broadcasting (WUSF), ETV (WRLK), UNC-TV (WUNC), PBS Hawaii - Oceanic Cable Channel 10 (KHET), Idaho Public Television (KAID), KSPS (KSPS), OPB (KOPB), KWSU/Channel 10 & KTNW/Channel 31 (KWSU), WILL-TV (WILL), Network Knowledge - WSEC/Springfield (WSEC), WTTW11 (WTTW), Iowa Public Television/IPTV (KDIN), Nine Network (KETC), PBS39 Fort Wayne (WFWA), WFYI Indianapolis (WFYI), Milwaukee Public Television (WMVS), WNIN (WNIN), WNIT Public Television (WNIT), WPT (WPNE), WVUT/Channel 22 (WVUT), WEIU/Channel 51 (WEIU), WQPT-TV (WQPT), WYCC PBS Chicago (WYCC), WIPB-TV (WIPB), WTIU (WTIU), CET (WCET), ThinkTVNetwork (WPTD), WBGU-TV (WBGU), WGVU TV (WGVU), NET1 (KUON), Pioneer Public Television (KWCM), SDPB Television (KUSD), TPT (KTCA), KSMQ (KSMQ), KPTS/Channel 8 (KPTS), KTWU/Channel 11 (KTWU), East Tennessee PBS (WSJK), WCTE-TV (WCTE), WLJT, Channel 11 (WLJT), WOSU TV (WOSU), WOUB/WOUC (WOUB), WVPB (WVPB), WKYU-PBS (WKYU), KERA 13 (KERA), MPBN (WCBB), Mountain Lake PBS (WCFE), NHPTV (WENH), Vermont PBS (WETK), witf (WITF), WQED Multimedia (WQED), WMHT Educational Telecommunications (WMHT), Q-TV (WDCQ), WTVS Detroit Public TV (WTVS), CMU Public Television (WCMU), WKAR-TV (WKAR), WNMU-TV Public TV 13 (WNMU), WDSE - WRPT (WDSE), WGTE TV (WGTE), Lakeland Public Television (KAWE), KMOS-TV - Channels 6.1, 6.2 and 6.3 (KMOS), MontanaPBS (KUSM), KRWG/Channel 22 (KRWG), KACV (KACV), KCOS/Channel 13 (KCOS), WCNY/Channel 24 (WCNY), WNED (WNED), WPBS (WPBS), WSKG Public TV (WSKG), WXXI (WXXI), WPSU (WPSU), WVIA Public Media Studios (WVIA), WTVI (WTVI), Western Reserve PBS (WNEO), WVIZ/PBS ideastream (WVIZ), KCTS 9 (KCTS), Basin PBS (KPBT), KUHT / Channel 8 (KUHT), KLRN (KLRN), KLRU (KLRU), WTJX Channel 12 (WTJX), WCVE PBS (WCVE), KBTC Public Television (KBTC) - **pbs**: Public Broadcasting Service (PBS) and member stations: PBS: Public Broadcasting Service, APT - Alabama Public Television (WBIQ), GPB/Georgia Public Broadcasting (WGTV), Mississippi Public Broadcasting (WMPN), Nashville Public Television (WNPT), WFSU-TV (WFSU), WSRE (WSRE), WTCI (WTCI), WPBA/Channel 30 (WPBA), Alaska Public Media (KAKM), Arizona 
PBS (KAET), KNME-TV/Channel 5 (KNME), Vegas PBS (KLVX), AETN/ARKANSAS ETV NETWORK (KETS), KET (WKLE), WKNO/Channel 10 (WKNO), LPB/LOUISIANA PUBLIC BROADCASTING (WLPB), OETA (KETA), Ozarks Public Television (KOZK), WSIU Public Broadcasting (WSIU), KEET TV (KEET), KIXE/Channel 9 (KIXE), KPBS San Diego (KPBS), KQED (KQED), KVIE Public Television (KVIE), PBS SoCal/KOCE (KOCE), ValleyPBS (KVPT), CONNECTICUT PUBLIC TELEVISION (WEDH), KNPB Channel 5 (KNPB), SOPTV (KSYS), Rocky Mountain PBS (KRMA), KENW-TV3 (KENW), KUED Channel 7 (KUED), Wyoming PBS (KCWC), Colorado Public Television / KBDI 12 (KBDI), KBYU-TV (KBYU), Thirteen/WNET New York (WNET), WGBH/Channel 2 (WGBH), WGBY (WGBY), NJTV Public Media NJ (WNJT), WLIW21 (WLIW), mpt/Maryland Public Television (WMPB), WETA Television and Radio (WETA), WHYY (WHYY), PBS 39 (WLVT), WVPT - Your Source for PBS and More! (WVPT), Howard University Television (WHUT), WEDU PBS (WEDU), WGCU Public Media (WGCU), WPBT2 (WPBT), WUCF TV (WUCF), WUFT/Channel 5 (WUFT), WXEL/Channel 42 (WXEL), WLRN/Channel 17 (WLRN), WUSF Public Broadcasting (WUSF), ETV (WRLK), UNC-TV (WUNC), PBS Hawaii - Oceanic Cable Channel 10 (KHET), Idaho Public Television (KAID), KSPS (KSPS), OPB (KOPB), KWSU/Channel 10 & KTNW/Channel 31 (KWSU), WILL-TV (WILL), Network Knowledge - WSEC/Springfield (WSEC), WTTW11 (WTTW), Iowa Public Television/IPTV (KDIN), Nine Network (KETC), PBS39 Fort Wayne (WFWA), WFYI Indianapolis (WFYI), Milwaukee Public Television (WMVS), WNIN (WNIN), WNIT Public Television (WNIT), WPT (WPNE), WVUT/Channel 22 (WVUT), WEIU/Channel 51 (WEIU), WQPT-TV (WQPT), WYCC PBS Chicago (WYCC), WIPB-TV (WIPB), WTIU (WTIU), CET (WCET), ThinkTVNetwork (WPTD), WBGU-TV (WBGU), WGVU TV (WGVU), NET1 (KUON), Pioneer Public Television (KWCM), SDPB Television (KUSD), TPT (KTCA), KSMQ (KSMQ), KPTS/Channel 8 (KPTS), KTWU/Channel 11 (KTWU), East Tennessee PBS (WSJK), WCTE-TV (WCTE), WLJT, Channel 11 (WLJT), WOSU TV (WOSU), WOUB/WOUC (WOUB), WVPB (WVPB), WKYU-PBS (WKYU), KERA 13 (KERA), MPBN (WCBB), Mountain Lake PBS (WCFE), NHPTV (WENH), Vermont PBS (WETK), witf (WITF), WQED Multimedia (WQED), WMHT Educational Telecommunications (WMHT), Q-TV (WDCQ), WTVS Detroit Public TV (WTVS), CMU Public Television (WCMU), WKAR-TV (WKAR), WNMU-TV Public TV 13 (WNMU), WDSE - WRPT (WDSE), WGTE TV (WGTE), Lakeland Public Television (KAWE), KMOS-TV - Channels 6.1, 6.2 and 6.3 (KMOS), MontanaPBS (KUSM), KRWG/Channel 22 (KRWG), KACV (KACV), KCOS/Channel 13 (KCOS), WCNY/Channel 24 (WCNY), WNED (WNED), WPBS (WPBS), WSKG Public TV (WSKG), WXXI (WXXI), WPSU (WPSU), WVIA Public Media Studios (WVIA), WTVI (WTVI), Western Reserve PBS (WNEO), WVIZ/PBS ideastream (WVIZ), KCTS 9 (KCTS), Basin PBS (KPBT), KUHT / Channel 8 (KUHT), KLRN (KLRN), KLRU (KLRU), WTJX Channel 12 (WTJX), WCVE PBS (WCVE), KBTC Public Television (KBTC)
- **PearVideo** - **PearVideo**
- **PeerTube** - **PeerTube**
- **PeerTube:Playlist**
- **peloton**
- **peloton:live**: Peloton Live
- **People** - **People**
- **PerformGroup** - **PerformGroup**
- **periscope**: Periscope - **periscope**: Periscope
@@ -744,6 +806,7 @@
- **Pinterest** - **Pinterest**
- **PinterestCollection** - **PinterestCollection**
- **Pladform** - **Pladform**
- **PlanetMarathi**
- **Platzi** - **Platzi**
- **PlatziCourse** - **PlatziCourse**
- **play.fm** - **play.fm**
@@ -760,15 +823,22 @@
- **podomatic** - **podomatic**
- **Pokemon** - **Pokemon**
- **PokemonWatch** - **PokemonWatch**
- **PolsatGo**
- **PolskieRadio** - **PolskieRadio**
- **polskieradio:kierowcow**
- **polskieradio:player**
- **polskieradio:podcast**
- **polskieradio:podcast:list**
- **PolskieRadioCategory** - **PolskieRadioCategory**
- **Popcorntimes** - **Popcorntimes**
- **PopcornTV** - **PopcornTV**
- **PornCom** - **PornCom**
- **PornerBros** - **PornerBros**
- **PornFlip**
- **PornHd** - **PornHd**
- **PornHub**: PornHub and Thumbzilla - **PornHub**: PornHub and Thumbzilla
- **PornHubPagedVideoList** - **PornHubPagedVideoList**
- **PornHubPlaylist**
- **PornHubUser** - **PornHubUser**
- **PornHubUserVideosUpload** - **PornHubUserVideosUpload**
- **Pornotube** - **Pornotube**
@@ -776,6 +846,7 @@
- **PornoXO** - **PornoXO**
- **PornTube** - **PornTube**
- **PressTV** - **PressTV**
- **ProjectVeritas**
- **prosiebensat1**: ProSiebenSat.1 Digital - **prosiebensat1**: ProSiebenSat.1 Digital
- **puhutv** - **puhutv**
- **puhutv:serie** - **puhutv:serie**
@@ -792,22 +863,34 @@
- **QuicklineLive** - **QuicklineLive**
- **R7** - **R7**
- **R7Article** - **R7Article**
- **Radiko**
- **RadikoRadio**
- **radio.de** - **radio.de**
- **radiobremen** - **radiobremen**
- **radiocanada** - **radiocanada**
- **radiocanada:audiovideo** - **radiocanada:audiovideo**
- **radiofrance** - **radiofrance**
- **RadioJavan** - **RadioJavan**
- **radiokapital**
- **radiokapital:show**
- **radlive**
- **radlive:channel**
- **radlive:season**
- **Rai** - **Rai**
- **RaiPlay** - **RaiPlay**
- **RaiPlayLive** - **RaiPlayLive**
- **RaiPlayPlaylist** - **RaiPlayPlaylist**
- **RaiPlayRadio**
- **RaiPlayRadioPlaylist**
- **RayWenderlich** - **RayWenderlich**
- **RayWenderlichCourse** - **RayWenderlichCourse**
- **RBMARadio** - **RBMARadio**
- **RCS** - **RCS**
- **RCSEmbeds** - **RCSEmbeds**
- **RCSVarious** - **RCSVarious**
- **RCTIPlus**
- **RCTIPlusSeries**
- **RCTIPlusTV**
- **RDS**: RDS.ca - **RDS**: RDS.ca
- **RedBull** - **RedBull**
- **RedBullEmbed** - **RedBullEmbed**
@@ -826,6 +909,7 @@
- **RMCDecouverte** - **RMCDecouverte**
- **RockstarGames** - **RockstarGames**
- **RoosterTeeth** - **RoosterTeeth**
- **RoosterTeethSeries**
- **RottenTomatoes** - **RottenTomatoes**
- **Roxwel** - **Roxwel**
- **Rozhlas** - **Rozhlas**
@@ -845,6 +929,7 @@
- **RTVNH** - **RTVNH**
- **RTVS** - **RTVS**
- **RUHD** - **RUHD**
- **RumbleChannel**
- **RumbleEmbed** - **RumbleEmbed**
- **rutube**: Rutube videos - **rutube**: Rutube videos
- **rutube:channel**: Rutube channels - **rutube:channel**: Rutube channels
@@ -866,7 +951,8 @@
- **savefrom.net** - **savefrom.net**
- **SBS**: sbs.com.au - **SBS**: sbs.com.au
- **schooltv** - **schooltv**
- **screen.yahoo:search**: Yahoo screen search - **ScienceChannel**
- **screen.yahoo:search**: Yahoo screen search; "yvsearch:" prefix
- **Screencast** - **Screencast**
- **ScreencastOMatic** - **ScreencastOMatic**
- **ScrippsNetworks** - **ScrippsNetworks**
@@ -891,12 +977,14 @@
- **Sina** - **Sina**
- **sky.it** - **sky.it**
- **sky:news** - **sky:news**
- **sky:news:story**
- **sky:sports** - **sky:sports**
- **sky:sports:news** - **sky:sports:news**
- **skyacademy.it** - **skyacademy.it**
- **SkylineWebcams** - **SkylineWebcams**
- **skynewsarabia:article** - **skynewsarabia:article**
- **skynewsarabia:video** - **skynewsarabia:video**
- **SkyNewsAU**
- **Slideshare** - **Slideshare**
- **SlidesLive** - **SlidesLive**
- **Slutload** - **Slutload**
@@ -906,7 +994,7 @@
- **SonyLIVSeries** - **SonyLIVSeries**
- **soundcloud** - **soundcloud**
- **soundcloud:playlist** - **soundcloud:playlist**
- **soundcloud:search**: Soundcloud search - **soundcloud:search**: Soundcloud search; "scsearch:" prefix
- **soundcloud:set** - **soundcloud:set**
- **soundcloud:trackstation** - **soundcloud:trackstation**
- **soundcloud:user** - **soundcloud:user**
@@ -918,11 +1006,12 @@
- **southpark.de** - **southpark.de**
- **southpark.nl** - **southpark.nl**
- **southparkstudios.dk** - **southparkstudios.dk**
- **SovietsCloset**
- **SovietsClosetPlaylist**
- **SpankBang** - **SpankBang**
- **SpankBangPlaylist** - **SpankBangPlaylist**
- **Spankwire** - **Spankwire**
- **Spiegel** - **Spiegel**
- **sport.francetvinfo.fr**
- **Sport5** - **Sport5**
- **SportBox** - **SportBox**
- **SportDeutschland** - **SportDeutschland**
@@ -938,6 +1027,7 @@
- **SRGSSR** - **SRGSSR**
- **SRGSSRPlay**: srf.ch, rts.ch, rsi.ch, rtr.ch and swissinfo.ch play sites - **SRGSSRPlay**: srf.ch, rts.ch, rsi.ch, rtr.ch and swissinfo.ch play sites
- **stanfordoc**: Stanford Open ClassRoom - **stanfordoc**: Stanford Open ClassRoom
- **startv**
- **Steam** - **Steam**
- **Stitcher** - **Stitcher**
- **StitcherShow** - **StitcherShow**
@@ -945,6 +1035,7 @@
- **StoryFireSeries** - **StoryFireSeries**
- **StoryFireUser** - **StoryFireUser**
- **Streamable** - **Streamable**
- **Streamanity**
- **streamcloud.eu** - **streamcloud.eu**
- **StreamCZ** - **StreamCZ**
- **StreetVoice** - **StreetVoice**
@@ -962,7 +1053,6 @@
- **SztvHu** - **SztvHu**
- **t-online.de** - **t-online.de**
- **Tagesschau** - **Tagesschau**
- **tagesschau:player**
- **Tass** - **Tass**
- **TBS** - **TBS**
- **TDSLifeway** - **TDSLifeway**
@@ -1000,17 +1090,23 @@
- **TheScene** - **TheScene**
- **TheStar** - **TheStar**
- **TheSun** - **TheSun**
- **ThetaStream**
- **ThetaVideo**
- **TheWeatherChannel** - **TheWeatherChannel**
- **ThisAmericanLife** - **ThisAmericanLife**
- **ThisAV** - **ThisAV**
- **ThisOldHouse** - **ThisOldHouse**
- **ThisVid** - **ThreeSpeak**
- **ThreeSpeakUser**
- **TikTok** - **TikTok**
- **tiktok:user**
- **tinypic**: tinypic.com videos - **tinypic**: tinypic.com videos
- **TMZ** - **TMZ**
- **TNAFlix** - **TNAFlix**
- **TNAFlixNetworkEmbed** - **TNAFlixNetworkEmbed**
- **toggle** - **toggle**
- **Tokentube**
- **Tokentube:channel**
- **ToonGoggles** - **ToonGoggles**
- **tou.tv** - **tou.tv**
- **Toypics**: Toypics video - **Toypics**: Toypics video
@@ -1018,6 +1114,8 @@
- **TrailerAddict** (Currently broken) - **TrailerAddict** (Currently broken)
- **Trilulilu** - **Trilulilu**
- **Trovo** - **Trovo**
- **TrovoChannelClip**: All Clips of a trovo.live channel; "trovoclip:" prefix
- **TrovoChannelVod**: All VODs of a trovo.live channel; "trovovod:" prefix
- **TrovoVod** - **TrovoVod**
- **TruNews** - **TruNews**
- **TruTV** - **TruTV**
@@ -1033,10 +1131,11 @@
- **Turbo** - **Turbo**
- **tv.dfb.de** - **tv.dfb.de**
- **TV2** - **TV2**
- **tv2.hu**
- **TV2Article** - **TV2Article**
- **TV2DK** - **TV2DK**
- **TV2DKBornholmPlay** - **TV2DKBornholmPlay**
- **tv2play.hu**
- **tv2playseries.hu**
- **TV4**: tv4.se and tv4play.se - **TV4**: tv4.se and tv4play.se
- **TV5MondePlus**: TV5MONDE+ - **TV5MondePlus**: TV5MONDE+
- **tv5unis** - **tv5unis**
@@ -1062,6 +1161,7 @@
- **tvp**: Telewizja Polska - **tvp**: Telewizja Polska
- **tvp:embed**: Telewizja Polska - **tvp:embed**: Telewizja Polska
- **tvp:series** - **tvp:series**
- **tvp:stream**
- **TVPlayer** - **TVPlayer**
- **TVPlayHome** - **TVPlayHome**
- **Tweakers** - **Tweakers**
@@ -1101,9 +1201,11 @@
- **ustream:channel** - **ustream:channel**
- **ustudio** - **ustudio**
- **ustudio:embed** - **ustudio:embed**
- **Utreon**
- **Varzesh3** - **Varzesh3**
- **Vbox7** - **Vbox7**
- **VeeHD** - **VeeHD**
- **Veo**
- **Veoh** - **Veoh**
- **Vesti**: Вести.Ru - **Vesti**: Вести.Ru
- **Vevo** - **Vevo**
@@ -1119,7 +1221,7 @@
- **Viddler** - **Viddler**
- **Videa** - **Videa**
- **video.arnes.si**: Arnes Video - **video.arnes.si**: Arnes Video
- **video.google:search**: Google Video search - **video.google:search**: Google Video search; "gvsearch:" prefix (Currently broken)
- **video.sky.it** - **video.sky.it**
- **video.sky.it:live** - **video.sky.it:live**
- **VideoDetective** - **VideoDetective**
@@ -1132,9 +1234,6 @@
- **VidioLive** - **VidioLive**
- **VidioPremier** - **VidioPremier**
- **VidLii** - **VidLii**
- **vidme**
- **vidme:user**
- **vidme:user:likes**
- **vier**: vier.be and vijf.be - **vier**: vier.be and vijf.be
- **vier:videos** - **vier:videos**
- **viewlift** - **viewlift**
@@ -1169,6 +1268,8 @@
- **VODPl** - **VODPl**
- **VODPlatform** - **VODPlatform**
- **VoiceRepublic** - **VoiceRepublic**
- **voicy**
- **voicy:channel**
- **Voot** - **Voot**
- **VootSeries** - **VootSeries**
- **VoxMedia** - **VoxMedia**
@@ -1184,6 +1285,7 @@
- **VTXTV** - **VTXTV**
- **vube**: Vube.com - **vube**: Vube.com
- **VuClip** - **VuClip**
- **Vupload**
- **VVVVID** - **VVVVID**
- **VVVVIDShow** - **VVVVIDShow**
- **VyboryMos** - **VyboryMos**
@@ -1214,6 +1316,8 @@
- **WistiaPlaylist**
- **wnl**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
- **WorldStarHipHop**
- **wppilot**
- **wppilot:channels**
- **WSJ**: Wall Street Journal
- **WSJArticle**
- **WWE**
@@ -1261,19 +1365,19 @@
- **YouPorn**
- **YourPorn**
- **YourUpload**
- **youtube**: YouTube
- **youtube:favorites**: YouTube liked videos; ":ytfav" keyword (requires cookies)
- **youtube:history**: Youtube watch history; ":ythis" keyword (requires cookies)
- **youtube:playlist**: YouTube playlists
- **youtube:recommended**: YouTube recommended videos; ":ytrec" keyword
- **youtube:search**: YouTube searches; "ytsearch:" prefix
- **youtube:search:date**: YouTube searches, newest videos first; "ytsearchdate:" prefix
- **youtube:search_url**: YouTube search URLs with sorting and filter support
- **youtube:subscriptions**: YouTube subscriptions feed; ":ytsubs" keyword (requires cookies)
- **youtube:tab**: YouTube Tabs
- **youtube:watchlater**: Youtube watch later list; ":ytwatchlater" keyword (requires cookies)
- **YoutubeYtBe**: youtu.be
- **YoutubeYtUser**: YouTube user videos; "ytuser:" prefix
- **Zapiks**
- **Zattoo**
- **ZattooLive**
@@ -1281,6 +1385,8 @@
- **ZDFChannel**
- **Zee5**
- **zee5:series**
- **ZenYandex**
- **ZenYandexChannel**
- **Zhihu**
- **zingmp3**: mp3.zing.vn
- **zingmp3:album**
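
Several of the entries above are driven by prefix keywords (for example "ytsearch:", "ytsearchdate:", "gvsearch:", "trovovod:") rather than ordinary URLs. As a rough illustration only, such a prefix can be passed anywhere a URL is accepted, e.g. through the Python API (the query string below is just a placeholder):

```python
from yt_dlp import YoutubeDL

# 'ytsearch2:' asks the youtube:search extractor for the first two results;
# download=False returns metadata only instead of fetching the media.
with YoutubeDL({'quiet': True}) as ydl:
    result = ydl.extract_info('ytsearch2:open source video downloader', download=False)
    for entry in result['entries']:
        print(entry['id'], entry.get('title'))
```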

View File

@@ -22,11 +22,19 @@ from yt_dlp.utils import (
)
if 'pytest' in sys.modules:
import pytest
is_download_test = pytest.mark.download
else:
def is_download_test(testClass):
return testClass
def get_params(override=None):
PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
'parameters.json')
LOCAL_PARAMETERS_FILE = os.path.join(os.path.dirname(os.path.abspath(__file__)),
'local_parameters.json')
with io.open(PARAMETERS_FILE, encoding='utf-8') as pf:
parameters = json.load(pf)
if os.path.exists(LOCAL_PARAMETERS_FILE):
@@ -190,7 +198,10 @@ def expect_info_dict(self, got_dict, expected_dict):
expect_dict(self, got_dict, expected_dict)
# Check for the presence of mandatory fields
if got_dict.get('_type') not in ('playlist', 'multi_video'):
mandatory_fields = ['id', 'title']
if expected_dict.get('ext'):
mandatory_fields.extend(('url', 'ext'))
for key in mandatory_fields:
self.assertTrue(got_dict.get(key), 'Missing mandatory field %s' % key)
# Check for mandatory fields that are automatically set by YoutubeDL
for key in ['webpage_url', 'extractor', 'extractor_key']:

View File

@@ -1,4 +1,5 @@
{
"check_formats": false,
"consoletitle": false,
"continuedl": true,
"forcedescription": false,
@@ -8,7 +9,7 @@
"forcetitle": false,
"forceurl": false,
"force_write_download_archive": false,
"format": "b/bv",
"ignoreerrors": false,
"listformats": null,
"logtostderr": false,
@@ -43,6 +44,5 @@
"writesubtitles": false,
"allsubtitles": false,
"listsubtitles": false,
"fixup": "never"
}

View File

@@ -35,13 +35,13 @@ class InfoExtractorTestRequestHandler(compat_http_server.BaseHTTPRequestHandler)
assert False
class DummyIE(InfoExtractor):
pass
class TestInfoExtractor(unittest.TestCase):
def setUp(self):
self.ie = DummyIE(FakeYDL())
def test_ie_key(self):
self.assertEqual(get_info_extractor(YoutubeIE.ie_key()), YoutubeIE)

View File

@@ -10,14 +10,15 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import copy
import json
from test.helper import FakeYDL, assertRegexpMatches
from yt_dlp import YoutubeDL
from yt_dlp.compat import compat_os_name, compat_setenv, compat_str, compat_urllib_error
from yt_dlp.extractor import YoutubeIE
from yt_dlp.extractor.common import InfoExtractor
from yt_dlp.postprocessor.common import PostProcessor
from yt_dlp.utils import ExtractorError, int_or_none, match_filter_func, LazyList
TEST_URL = 'http://localhost/sample.mp4'
@@ -35,6 +36,9 @@ class YDL(FakeYDL):
def to_screen(self, msg):
self.msgs.append(msg)
def dl(self, *args, **kwargs):
assert False, 'Downloader must not be invoked for test_YoutubeDL'
def _make_result(formats, **kwargs):
res = {
@@ -117,35 +121,24 @@ class TestFormatSelection(unittest.TestCase):
]
info_dict = _make_result(formats)
def test(inp, *expected, multi=False):
ydl = YDL({
'format': inp,
'allow_multiple_video_streams': multi,
'allow_multiple_audio_streams': multi,
})
ydl.process_ie_result(info_dict.copy())
downloaded = map(lambda x: x['format_id'], ydl.downloaded_info_dicts)
self.assertEqual(list(downloaded), list(expected))
test('20/47', '47')
test('20/71/worst', '35')
test(None, '2')
test('webm/mp4', '47')
test('3gp/40/mp4', '35')
test('example-with-dashes', 'example-with-dashes')
test('all', '35', 'example-with-dashes', '45', '47', '2') # Order doesn't actually matter for this
test('mergeall', '2+47+45+example-with-dashes+35', multi=True)
def test_format_selection_audio(self):
formats = [
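
As a side note to the selector strings exercised by `test()` above, here is a minimal sketch of the corresponding public options (illustrative values only, not part of the test suite):

```python
from yt_dlp import YoutubeDL

# 'webm/mp4' tries each alternative left to right; 'all' selects every format and
# 'mergeall' merges them, which is only meaningful together with the
# allow_multiple_*_streams switches used in the test above.
ydl = YoutubeDL({
    'format': 'webm/mp4',
    'allow_multiple_video_streams': True,
    'allow_multiple_audio_streams': True,
})
```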
@@ -655,12 +648,15 @@ class TestYoutubeDL(unittest.TestCase):
'title1': '$PATH',
'title2': '%PATH%',
'title3': 'foo/bar\\test',
'title4': 'foo "bar" test',
'title5': 'áéí 𝐀',
'timestamp': 1618488000,
'duration': 100000,
'playlist_index': 1,
'playlist_autonumber': 2,
'_last_playlist_index': 100,
'n_entries': 10,
'formats': [{'id': 'id 1'}, {'id': 'id 2'}, {'id': 'id 3'}]
}
def test_prepare_outtmpl_and_filename(self):
@@ -670,32 +666,45 @@ class TestYoutubeDL(unittest.TestCase):
ydl._num_downloads = 1
self.assertEqual(ydl.validate_outtmpl(tmpl), None)
out = ydl.evaluate_outtmpl(tmpl, info or self.outtmpl_info)
fname = ydl.prepare_filename(info or self.outtmpl_info)
if not isinstance(expected, (list, tuple)):
expected = (expected, expected)
for (name, got), expect in zip((('outtmpl', out), ('filename', fname)), expected):
if callable(expect):
self.assertTrue(expect(got), f'Wrong {name} from {tmpl}')
else:
self.assertEqual(got, expect, f'Wrong {name} from {tmpl}')
# Side-effects
original_infodict = dict(self.outtmpl_info)
test('foo.bar', 'foo.bar')
original_infodict['epoch'] = self.outtmpl_info.get('epoch')
self.assertTrue(isinstance(original_infodict['epoch'], int))
test('%(epoch)d', int_or_none)
self.assertEqual(original_infodict, self.outtmpl_info)
# Auto-generated fields
test('%(id)s.%(ext)s', '1234.mp4')
test('%(duration_string)s', ('27:46:40', '27-46-40'))
test('%(resolution)s', '1080p')
test('%(playlist_index)s', '001')
test('%(playlist_autonumber)s', '02')
test('%(autonumber)s', '00001')
test('%(autonumber+2)03d', '005', autonumber_start=3)
test('%(autonumber)s', '001', autonumber_size=3)
# Escaping %
test('%', '%')
test('%%', '%')
test('%%%%', '%%')
test('%s', '%s')
test('%%%s', '%%s')
test('%d', '%d')
test('%abc%', '%abc%')
test('%%(width)06d.%(ext)s', '%(width)06d.mp4')
test('%%%(height)s', '%1080')
test('%(width)06d.%(ext)s', 'NA.mp4')
test('%(width)06d.%%(ext)s', 'NA.%(ext)s')
test('%%(width)06d.%(ext)s', '%(width)06d.mp4')
@@ -710,18 +719,25 @@ class TestYoutubeDL(unittest.TestCase):
test('%(id)s', ('ab:cd', 'ab -cd'), info={'id': 'ab:cd'})
# Invalid templates
self.assertTrue(isinstance(YoutubeDL.validate_outtmpl('%'), ValueError))
self.assertTrue(isinstance(YoutubeDL.validate_outtmpl('%(title)'), ValueError))
test('%(invalid@tmpl|def)s', 'none', outtmpl_na_placeholder='none')
test('%(..)s', 'NA')
# Entire info_dict
def expect_same_infodict(out):
got_dict = json.loads(out)
for info_field, expected in self.outtmpl_info.items():
self.assertEqual(got_dict.get(info_field), expected, info_field)
return True
test('%()j', (expect_same_infodict, str))
# NA placeholder
NA_TEST_OUTTMPL = '%(uploader_date)s-%(width)d-%(x|def)s-%(id)s.%(ext)s'
test(NA_TEST_OUTTMPL, 'NA-NA-def-1234.mp4')
test(NA_TEST_OUTTMPL, 'none-none-def-1234.mp4', outtmpl_na_placeholder='none')
test(NA_TEST_OUTTMPL, '--def-1234.mp4', outtmpl_na_placeholder='')
test('%(non_existent.0)s', 'NA')
# String formatting
FMT_TEST_OUTTMPL = '%%(height)%s.%%(ext)s'
@@ -746,13 +762,37 @@ class TestYoutubeDL(unittest.TestCase):
test('%(width|0)04d', '0000')
test('a%(width|)d', 'a', outtmpl_na_placeholder='none')
FORMATS = self.outtmpl_info['formats']
sanitize = lambda x: x.replace(':', ' -').replace('"', "'").replace('\n', ' ')
# Custom type casting
test('%(formats.:.id)l', 'id 1, id 2, id 3')
test('%(formats.:.id)#l', ('id 1\nid 2\nid 3', 'id 1 id 2 id 3'))
test('%(ext)l', 'mp4')
test('%(formats.:.id) 18l', ' id 1, id 2, id 3')
test('%(formats)j', (json.dumps(FORMATS), sanitize(json.dumps(FORMATS))))
test('%(formats)#j', (json.dumps(FORMATS, indent=4), sanitize(json.dumps(FORMATS, indent=4))))
test('%(title5).3B', 'á')
test('%(title5)U', 'áéí 𝐀')
test('%(title5)#U', 'a\u0301e\u0301i\u0301 𝐀')
test('%(title5)+U', 'áéí A')
test('%(title5)+#U', 'a\u0301e\u0301i\u0301 A')
if compat_os_name == 'nt':
test('%(title4)q', ('"foo \\"bar\\" test"', "'foo _'bar_' test'"))
test('%(formats.:.id)#q', ('"id 1" "id 2" "id 3"', "'id 1' 'id 2' 'id 3'"))
test('%(formats.0.id)#q', ('"id 1"', "'id 1'"))
else:
test('%(title4)q', ('\'foo "bar" test\'', "'foo 'bar' test'"))
test('%(formats.:.id)#q', "'id 1' 'id 2' 'id 3'")
test('%(formats.0.id)#q', "'id 1'")
# Internal formatting
test('%(timestamp-1000>%H-%M-%S)s', '11-43-20')
test('%(title|%)s %(title|%%)s', '% %%')
test('%(id+1-height+3)05d', '00158')
test('%(width+100)05d', 'NA')
test('%(formats.0) 15s', ('% 15s' % FORMATS[0], '% 15s' % sanitize(str(FORMATS[0]))))
test('%(formats.0)r', (repr(FORMATS[0]), sanitize(repr(FORMATS[0]))))
test('%(height.0)03d', '001')
test('%(-height.0)04d', '-001')
test('%(formats.-1.id)s', FORMATS[-1]['id'])
@@ -762,11 +802,34 @@ class TestYoutubeDL(unittest.TestCase):
test('%(formats.0.id.-1+id)f', '1235.000000')
test('%(formats.0.id.-1+formats.1.id.-1)d', '3')
# Alternates
test('%(title,id)s', '1234')
test('%(width-100,height+20|def)d', '1100')
test('%(width-100,height+width|def)s', 'def')
test('%(timestamp-x>%H\\,%M\\,%S,timestamp>%H\\,%M\\,%S)s', '12,00,00')
# Laziness
def gen():
yield from range(5)
raise self.assertTrue(False, 'LazyList should not be evaluated till here')
test('%(key.4)s', '4', info={'key': LazyList(gen())})
# Empty filename
test('%(foo|)s-%(bar|)s.%(ext)s', '-.mp4')
# test('%(foo|)s.%(ext)s', ('.mp4', '_.mp4')) # fixme
# test('%(foo|)s', ('', '_')) # fixme
# Environment variable expansion for prepare_filename
compat_setenv('__yt_dlp_var', 'expanded')
envvar = '%__yt_dlp_var%' if compat_os_name == 'nt' else '$__yt_dlp_var'
test(envvar, (envvar, 'expanded'))
if compat_os_name == 'nt':
test('%s%', ('%s%', '%s%'))
compat_setenv('s', 'expanded')
test('%s%', ('%s%', 'expanded')) # %s% should be expanded before escaping %s
compat_setenv('(test)s', 'expanded')
test('%(test)s%', ('NA%', 'expanded')) # Environment should take priority over template
# Path expansion and escaping
test('Hello %(title1)s', 'Hello $PATH')
test('Hello %(title2)s', 'Hello %PATH%')
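
For orientation, the template features tested above can also be exercised directly; a small sketch under assumed, illustrative field values (not taken from the test fixtures):

```python
from yt_dlp import YoutubeDL

ydl = YoutubeDL({'outtmpl': '%(title)s [%(id)s].%(ext)s'})
info = {'id': '1234', 'title': 'demo', 'ext': 'mp4'}

print(ydl.prepare_filename(info))  # demo [1234].mp4
# evaluate_outtmpl applies the same template language without building a filename
print(ydl.evaluate_outtmpl('%(formats.:.id)l', {'formats': [{'id': 'a'}, {'id': 'b'}]}))  # a, b
```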
@@ -941,54 +1004,32 @@ class TestYoutubeDL(unittest.TestCase):
ydl.process_ie_result(copy.deepcopy(playlist))
return ydl.downloaded_info_dicts
def test_selection(params, expected_ids):
results = [
(v['playlist_autonumber'] - 1, (int(v['id']), v['playlist_index']))
for v in get_downloaded_info_dicts(params)]
self.assertEqual(results, list(enumerate(zip(expected_ids, expected_ids))))
test_selection({}, [1, 2, 3, 4])
test_selection({'playlistend': 10}, [1, 2, 3, 4])
test_selection({'playlistend': 2}, [1, 2])
test_selection({'playliststart': 10}, [])
test_selection({'playliststart': 2}, [2, 3, 4])
test_selection({'playlist_items': '2-4'}, [2, 3, 4])
test_selection({'playlist_items': '2,4'}, [2, 4])
test_selection({'playlist_items': '10'}, [])
test_selection({'playlist_items': '0'}, [])
# Tests for https://github.com/ytdl-org/youtube-dl/issues/10591
test_selection({'playlist_items': '2-4,3-4,3'}, [2, 3, 4])
test_selection({'playlist_items': '4,2'}, [4, 2])
# Tests for https://github.com/yt-dlp/yt-dlp/issues/720
# https://github.com/yt-dlp/yt-dlp/issues/302
test_selection({'playlistreverse': True}, [4, 3, 2, 1])
test_selection({'playliststart': 2, 'playlistreverse': True}, [4, 3, 2])
test_selection({'playlist_items': '2,4', 'playlistreverse': True}, [4, 2])
test_selection({'playlist_items': '4,2'}, [4, 2])
def test_urlopen_no_file_protocol(self):
# see https://github.com/ytdl-org/youtube-dl/issues/8227
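
The selection behaviour exercised by `test_selection` maps directly onto user-facing parameters; a hedged sketch (the playlist URL is a hypothetical placeholder, not something from the tests):

```python
from yt_dlp import YoutubeDL

ydl = YoutubeDL({
    'playlist_items': '2-4,3',  # overlapping ranges are downloaded once each, in order
    'playlistreverse': True,    # reverse the order of the selected entries
})
# info = ydl.extract_info(playlist_url, download=False)  # playlist_url: any supported playlist
```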

View File

@@ -7,7 +7,19 @@ import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from yt_dlp.aes import (
aes_decrypt,
aes_encrypt,
aes_cbc_decrypt,
aes_cbc_decrypt_bytes,
aes_cbc_encrypt,
aes_ctr_decrypt,
aes_ctr_encrypt,
aes_gcm_decrypt_and_verify,
aes_gcm_decrypt_and_verify_bytes,
aes_decrypt_text
)
from yt_dlp.compat import compat_pycrypto_AES
from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes
import base64
@@ -27,18 +39,43 @@ class TestAES(unittest.TestCase):
self.assertEqual(decrypted, msg)
def test_cbc_decrypt(self):
data = b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\x27\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd'
decrypted = intlist_to_bytes(aes_cbc_decrypt(bytes_to_intlist(data), self.key, self.iv))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
if compat_pycrypto_AES:
decrypted = aes_cbc_decrypt_bytes(data, intlist_to_bytes(self.key), intlist_to_bytes(self.iv))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
def test_cbc_encrypt(self):
data = bytes_to_intlist(self.secret_msg)
encrypted = intlist_to_bytes(aes_cbc_encrypt(data, self.key, self.iv))
self.assertEqual(
encrypted,
b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd')
def test_ctr_decrypt(self):
data = bytes_to_intlist(b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')
decrypted = intlist_to_bytes(aes_ctr_decrypt(data, self.key, self.iv))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
def test_ctr_encrypt(self):
data = bytes_to_intlist(self.secret_msg)
encrypted = intlist_to_bytes(aes_ctr_encrypt(data, self.key, self.iv))
self.assertEqual(
encrypted,
b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')
def test_gcm_decrypt(self):
data = b'\x159Y\xcf5eud\x90\x9c\x85&]\x14\x1d\x0f.\x08\xb4T\xe4/\x17\xbd'
authentication_tag = b'\xe8&I\x80rI\x07\x9d}YWuU@:e'
decrypted = intlist_to_bytes(aes_gcm_decrypt_and_verify(
bytes_to_intlist(data), self.key, bytes_to_intlist(authentication_tag), self.iv[:12]))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
if compat_pycrypto_AES:
decrypted = aes_gcm_decrypt_and_verify_bytes(
data, intlist_to_bytes(self.key), authentication_tag, intlist_to_bytes(self.iv[:12]))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
def test_decrypt_text(self):
password = intlist_to_bytes(self.key).decode('utf-8')
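
A small round-trip sketch of the CTR helpers exercised above; the key and IV are arbitrary demo values, not the fixtures defined on the test class:

```python
from yt_dlp.aes import aes_ctr_decrypt, aes_ctr_encrypt
from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes

key = iv = bytes_to_intlist(b'\x00' * 16)  # arbitrary 16-byte demo key and IV
message = b'Secret message goes here'

encrypted = aes_ctr_encrypt(bytes_to_intlist(message), key, iv)
decrypted = intlist_to_bytes(aes_ctr_decrypt(encrypted, key, iv))
assert decrypted == message
```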

View File

@@ -7,8 +7,7 @@ import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import try_rm, is_download_test
from yt_dlp import YoutubeDL
@@ -32,6 +31,7 @@ def _download_restricted(url, filename, age):
return res
@is_download_test
class TestAgeRestriction(unittest.TestCase):
def _assert_restricted(self, url, filename, age, old_age=None):
self.assertTrue(_download_restricted(url, filename, old_age))

test/test_cookies.py Normal file
View File

@@ -0,0 +1,107 @@
import unittest
from datetime import datetime, timezone
from yt_dlp import cookies
from yt_dlp.cookies import (
LinuxChromeCookieDecryptor,
MacChromeCookieDecryptor,
WindowsChromeCookieDecryptor,
parse_safari_cookies,
pbkdf2_sha1,
)
class Logger:
def debug(self, message):
print(f'[verbose] {message}')
def info(self, message):
print(message)
def warning(self, message, only_once=False):
self.error(message)
def error(self, message):
raise Exception(message)
class MonkeyPatch:
def __init__(self, module, temporary_values):
self._module = module
self._temporary_values = temporary_values
self._backup_values = {}
def __enter__(self):
for name, temp_value in self._temporary_values.items():
self._backup_values[name] = getattr(self._module, name)
setattr(self._module, name, temp_value)
def __exit__(self, exc_type, exc_val, exc_tb):
for name, backup_value in self._backup_values.items():
setattr(self._module, name, backup_value)
class TestCookies(unittest.TestCase):
def test_chrome_cookie_decryptor_linux_derive_key(self):
key = LinuxChromeCookieDecryptor.derive_key(b'abc')
self.assertEqual(key, b'7\xa1\xec\xd4m\xfcA\xc7\xb19Z\xd0\x19\xdcM\x17')
def test_chrome_cookie_decryptor_mac_derive_key(self):
key = MacChromeCookieDecryptor.derive_key(b'abc')
self.assertEqual(key, b'Y\xe2\xc0\xd0P\xf6\xf4\xe1l\xc1\x8cQ\xcb|\xcdY')
def test_chrome_cookie_decryptor_linux_v10(self):
with MonkeyPatch(cookies, {'_get_linux_keyring_password': lambda *args, **kwargs: b''}):
encrypted_value = b'v10\xccW%\xcd\xe6\xe6\x9fM" \xa7\xb0\xca\xe4\x07\xd6'
value = 'USD'
decryptor = LinuxChromeCookieDecryptor('Chrome', Logger())
self.assertEqual(decryptor.decrypt(encrypted_value), value)
def test_chrome_cookie_decryptor_linux_v11(self):
with MonkeyPatch(cookies, {'_get_linux_keyring_password': lambda *args, **kwargs: b'',
'KEYRING_AVAILABLE': True}):
encrypted_value = b'v11#\x81\x10>`w\x8f)\xc0\xb2\xc1\r\xf4\x1al\xdd\x93\xfd\xf8\xf8N\xf2\xa9\x83\xf1\xe9o\x0elVQd'
value = 'tz=Europe.London'
decryptor = LinuxChromeCookieDecryptor('Chrome', Logger())
self.assertEqual(decryptor.decrypt(encrypted_value), value)
def test_chrome_cookie_decryptor_windows_v10(self):
with MonkeyPatch(cookies, {
'_get_windows_v10_key': lambda *args, **kwargs: b'Y\xef\xad\xad\xeerp\xf0Y\xe6\x9b\x12\xc2<z\x16]\n\xbb\xb8\xcb\xd7\x9bA\xc3\x14e\x99{\xd6\xf4&'
}):
encrypted_value = b'v10T\xb8\xf3\xb8\x01\xa7TtcV\xfc\x88\xb8\xb8\xef\x05\xb5\xfd\x18\xc90\x009\xab\xb1\x893\x85)\x87\xe1\xa9-\xa3\xad='
value = '32101439'
decryptor = WindowsChromeCookieDecryptor('', Logger())
self.assertEqual(decryptor.decrypt(encrypted_value), value)
def test_chrome_cookie_decryptor_mac_v10(self):
with MonkeyPatch(cookies, {'_get_mac_keyring_password': lambda *args, **kwargs: b'6eIDUdtKAacvlHwBVwvg/Q=='}):
encrypted_value = b'v10\xb3\xbe\xad\xa1[\x9fC\xa1\x98\xe0\x9a\x01\xd9\xcf\xbfc'
value = '2021-06-01-22'
decryptor = MacChromeCookieDecryptor('', Logger())
self.assertEqual(decryptor.decrypt(encrypted_value), value)
def test_safari_cookie_parsing(self):
cookies = \
b'cook\x00\x00\x00\x01\x00\x00\x00i\x00\x00\x01\x00\x01\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00Y' \
b'\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x008\x00\x00\x00B\x00\x00\x00F\x00\x00\x00H' \
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x03\xa5>\xc3A\x00\x00\x80\xc3\x07:\xc3A' \
b'localhost\x00foo\x00/\x00test%20%3Bcookie\x00\x00\x00\x054\x07\x17 \x05\x00\x00\x00Kbplist00\xd1\x01' \
b'\x02_\x10\x18NSHTTPCookieAcceptPolicy\x10\x02\x08\x0b&\x00\x00\x00\x00\x00\x00\x01\x01\x00\x00\x00' \
b'\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00('
jar = parse_safari_cookies(cookies)
self.assertEqual(len(jar), 1)
cookie = list(jar)[0]
self.assertEqual(cookie.domain, 'localhost')
self.assertEqual(cookie.port, None)
self.assertEqual(cookie.path, '/')
self.assertEqual(cookie.name, 'foo')
self.assertEqual(cookie.value, 'test%20%3Bcookie')
self.assertFalse(cookie.secure)
expected_expiration = datetime(2021, 6, 18, 21, 39, 19, tzinfo=timezone.utc)
self.assertEqual(cookie.expires, int(expected_expiration.timestamp()))
def test_pbkdf2_sha1(self):
key = pbkdf2_sha1(b'peanuts', b' ' * 16, 1, 16)
self.assertEqual(key, b'g\xe1\x8e\x0fQ\x1c\x9b\xf3\xc9`!\xaa\x90\xd9\xd34')

test/test_download.py Normal file → Executable file
View File

@@ -10,12 +10,13 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import (
assertGreaterEqual,
expect_info_dict,
expect_warnings,
get_params,
gettestcases,
is_download_test,
report_warning,
try_rm,
)
@@ -64,6 +65,7 @@ def _file_md5(fn):
defs = gettestcases()
@is_download_test
class TestDownload(unittest.TestCase):
# Parallel testing in nosetests. See
# http://nose.readthedocs.org/en/latest/doc_tests/test_multiprocess/multiprocess.html
@@ -71,6 +73,8 @@ class TestDownload(unittest.TestCase):
maxDiff = None
COMPLETED_TESTS = {}
def __str__(self):
"""Identify each test with the `add_ie` attribute, if available."""
@@ -92,6 +96,9 @@ class TestDownload(unittest.TestCase):
def generator(test_case, tname):
def test_template(self):
if self.COMPLETED_TESTS.get(tname):
return
self.COMPLETED_TESTS[tname] = True
ie = yt_dlp.extractor.get_info_extractor(test_case['name'])()
other_ies = [get_info_extractor(ie_key)() for ie_key in test_case.get('add_ie', [])]
is_playlist = any(k.startswith('playlist') for k in test_case)
@@ -106,8 +113,13 @@ def generator(test_case, tname):
for tc in test_cases:
info_dict = tc.get('info_dict', {})
params = tc.get('params', {})
if not info_dict.get('id'):
raise Exception('Test definition incorrect. \'id\' key is not present')
elif not info_dict.get('ext'):
if params.get('skip_download') and params.get('ignore_no_formats_error'):
continue
raise Exception('Test definition incorrect. The output file cannot be known. \'ext\' key is not present')
if 'skip' in test_case:
print_skipping(test_case['skip'])
@@ -135,7 +147,7 @@ def generator(test_case, tname):
expect_warnings(ydl, test_case.get('expected_warnings', []))
def get_tc_filename(tc):
return ydl.prepare_filename(dict(tc.get('info_dict', {})))
res_dict = None
@@ -248,12 +260,12 @@ def generator(test_case, tname):
# And add them to TestDownload
tests_counter = {}
for test_case in defs:
name = test_case['name']
i = tests_counter.get(name, 0)
tests_counter[name] = i + 1
tname = f'test_{name}_{i}' if i else f'test_{name}'
test_method = generator(test_case, tname)
test_method.__name__ = str(tname)
ie_list = test_case.get('add_ie')
@@ -262,5 +274,22 @@ for n, test_case in enumerate(defs):
del test_method
def batch_generator(name, num_tests):
def test_template(self):
for i in range(num_tests):
getattr(self, f'test_{name}_{i}' if i else f'test_{name}')()
return test_template
for name, num_tests in tests_counter.items():
test_method = batch_generator(name, num_tests)
test_method.__name__ = f'test_{name}_all'
test_method.add_ie = ''
setattr(TestDownload, test_method.__name__, test_method)
del test_method
if __name__ == '__main__':
unittest.main()

View File

@@ -8,7 +8,7 @@ import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import FakeYDL, is_download_test
from yt_dlp.extractor import IqiyiIE
@@ -31,6 +31,7 @@ class WarningLogger(object):
pass
@is_download_test
class TestIqiyiSDKInterpreter(unittest.TestCase):
def test_iqiyi_sdk_interpreter(self):
'''

View File

@@ -112,6 +112,71 @@ class TestJSInterpreter(unittest.TestCase):
''')
self.assertEqual(jsi.call_function('z'), 5)
def test_for_loop(self):
jsi = JSInterpreter('''
function x() { a=0; for (i=0; i-10; i++) {a++} a }
''')
self.assertEqual(jsi.call_function('x'), 10)
def test_switch(self):
jsi = JSInterpreter('''
function x(f) { switch(f){
case 1:f+=1;
case 2:f+=2;
case 3:f+=3;break;
case 4:f+=4;
default:f=0;
} return f }
''')
self.assertEqual(jsi.call_function('x', 1), 7)
self.assertEqual(jsi.call_function('x', 3), 6)
self.assertEqual(jsi.call_function('x', 5), 0)
def test_switch_default(self):
jsi = JSInterpreter('''
function x(f) { switch(f){
case 2: f+=2;
default: f-=1;
case 5:
case 6: f+=6;
case 0: break;
case 1: f+=1;
} return f }
''')
self.assertEqual(jsi.call_function('x', 1), 2)
self.assertEqual(jsi.call_function('x', 5), 11)
self.assertEqual(jsi.call_function('x', 9), 14)
def test_try(self):
jsi = JSInterpreter('''
function x() { try{return 10} catch(e){return 5} }
''')
self.assertEqual(jsi.call_function('x'), 10)
def test_for_loop_continue(self):
jsi = JSInterpreter('''
function x() { a=0; for (i=0; i-10; i++) { continue; a++ } a }
''')
self.assertEqual(jsi.call_function('x'), 0)
def test_for_loop_break(self):
jsi = JSInterpreter('''
function x() { a=0; for (i=0; i-10; i++) { break; a++ } a }
''')
self.assertEqual(jsi.call_function('x'), 0)
def test_literal_list(self):
jsi = JSInterpreter('''
function x() { [1, 2, "asdf", [5, 6, 7]][3] }
''')
self.assertEqual(jsi.call_function('x'), [5, 6, 7])
def test_comma(self):
jsi = JSInterpreter('''
function x() { a=5; a -= 1, a+=3; return a }
''')
self.assertEqual(jsi.call_function('x'), 7)
if __name__ == '__main__':
unittest.main()
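
For reference, the interpreter under test can be driven directly; a minimal sketch using only basic constructs:

```python
from yt_dlp.jsinterp import JSInterpreter

jsi = JSInterpreter('function f(a, b) { return a + b }')
print(jsi.call_function('f', 3, 4))  # 7
```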

View File

@@ -8,13 +8,14 @@ import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import is_download_test, try_rm
root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
download_file = join(root_dir, 'test.webm')
@is_download_test
class TestOverwrites(unittest.TestCase):
def setUp(self):
# create an empty file

View File

@@ -7,7 +7,7 @@ import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import get_params, try_rm, is_download_test
import yt_dlp.YoutubeDL
from yt_dlp.utils import DownloadError
@@ -22,6 +22,7 @@ TEST_ID = 'gr51aVj-mLg'
EXPECTED_NAME = 'gr51aVj-mLg'
@is_download_test
class TestPostHooks(unittest.TestCase):
def setUp(self):
self.stored_name_1 = None

View File

@@ -6,37 +6,38 @@ from __future__ import unicode_literals
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from yt_dlp import YoutubeDL
from yt_dlp.compat import compat_shlex_quote
from yt_dlp.postprocessor import (
ExecPP,
FFmpegThumbnailsConvertorPP,
MetadataFromFieldPP,
MetadataParserPP,
ModifyChaptersPP
)
class TestMetadataFromField(unittest.TestCase):
def test_format_to_regex(self):
self.assertEqual(
MetadataParserPP.format_to_regex('%(title)s - %(artist)s'),
r'(?P<title>.+)\ \-\ (?P<artist>.+)')
self.assertEqual(MetadataParserPP.format_to_regex(r'(?P<x>.+)'), r'(?P<x>.+)')
def test_field_to_template(self):
self.assertEqual(MetadataParserPP.field_to_template('title'), '%(title)s')
self.assertEqual(MetadataParserPP.field_to_template('1'), '1')
self.assertEqual(MetadataParserPP.field_to_template('foo bar'), 'foo bar')
self.assertEqual(MetadataParserPP.field_to_template(' literal'), ' literal')
def test_metadatafromfield(self):
self.assertEqual(
MetadataFromFieldPP.to_action('%(title)s \\: %(artist)s:%(title)s : %(artist)s'),
(MetadataParserPP.Actions.INTERPRET, '%(title)s : %(artist)s', '%(title)s : %(artist)s'))
class TestConvertThumbnail(unittest.TestCase):
@@ -60,12 +61,502 @@ class TestConvertThumbnail(unittest.TestCase):
os.remove(file.format(out))
class TestExec(unittest.TestCase):
def test_parse_cmd(self):
pp = ExecPP(YoutubeDL(), '')
info = {'filepath': 'file name'}
cmd = 'echo %s' % compat_shlex_quote(info['filepath'])
self.assertEqual(pp.parse_cmd('echo', info), cmd)
self.assertEqual(pp.parse_cmd('echo {}', info), cmd)
self.assertEqual(pp.parse_cmd('echo %(filepath)q', info), cmd)
class TestModifyChaptersPP(unittest.TestCase):
def setUp(self):
self._pp = ModifyChaptersPP(YoutubeDL())
@staticmethod
def _sponsor_chapter(start, end, cat, remove=False):
c = {'start_time': start, 'end_time': end, '_categories': [(cat, start, end)]}
if remove:
c['remove'] = True
return c
@staticmethod
def _chapter(start, end, title=None, remove=False):
c = {'start_time': start, 'end_time': end}
if title is not None:
c['title'] = title
if remove:
c['remove'] = True
return c
def _chapters(self, ends, titles):
self.assertEqual(len(ends), len(titles))
start = 0
chapters = []
for e, t in zip(ends, titles):
chapters.append(self._chapter(start, e, t))
start = e
return chapters
def _remove_marked_arrange_sponsors_test_impl(
self, chapters, expected_chapters, expected_removed):
actual_chapters, actual_removed = (
self._pp._remove_marked_arrange_sponsors(chapters))
for c in actual_removed:
c.pop('title', None)
c.pop('_categories', None)
actual_chapters = [{
'start_time': c['start_time'],
'end_time': c['end_time'],
'title': c['title'],
} for c in actual_chapters]
self.assertSequenceEqual(expected_chapters, actual_chapters)
self.assertSequenceEqual(expected_removed, actual_removed)
def test_remove_marked_arrange_sponsors_CanGetThroughUnaltered(self):
chapters = self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, chapters, [])
def test_remove_marked_arrange_sponsors_ChapterWithSponsors(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(30, 40, 'preview'),
self._sponsor_chapter(50, 60, 'sponsor')]
expected = self._chapters(
[10, 20, 30, 40, 50, 60, 70],
['c', '[SponsorBlock]: Sponsor', 'c', '[SponsorBlock]: Preview/Recap',
'c', '[SponsorBlock]: Sponsor', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_UniqueNamesForOverlappingSponsors(self):
chapters = self._chapters([120], ['c']) + [
self._sponsor_chapter(10, 45, 'sponsor'), self._sponsor_chapter(20, 40, 'selfpromo'),
self._sponsor_chapter(50, 70, 'sponsor'), self._sponsor_chapter(60, 85, 'selfpromo'),
self._sponsor_chapter(90, 120, 'selfpromo'), self._sponsor_chapter(100, 110, 'sponsor')]
expected = self._chapters(
[10, 20, 40, 45, 50, 60, 70, 85, 90, 100, 110, 120],
['c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
'[SponsorBlock]: Sponsor',
'c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
'[SponsorBlock]: Unpaid/Self Promotion',
'c', '[SponsorBlock]: Unpaid/Self Promotion', '[SponsorBlock]: Unpaid/Self Promotion, Sponsor',
'[SponsorBlock]: Unpaid/Self Promotion'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithCuts(self):
cuts = [self._chapter(10, 20, remove=True),
self._sponsor_chapter(30, 40, 'sponsor', remove=True),
self._chapter(50, 60, remove=True)]
chapters = self._chapters([70], ['c']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([40], ['c']), cuts)
def test_remove_marked_arrange_sponsors_ChapterWithSponsorsAndCuts(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(30, 40, 'selfpromo', remove=True),
self._sponsor_chapter(50, 60, 'interaction')]
expected = self._chapters([10, 20, 40, 50, 60],
['c', '[SponsorBlock]: Sponsor', 'c',
'[SponsorBlock]: Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(30, 40, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithSponsorCutInTheMiddle(self):
cuts = [self._sponsor_chapter(20, 30, 'selfpromo', remove=True),
self._chapter(40, 50, remove=True)]
chapters = self._chapters([70], ['c']) + [self._sponsor_chapter(10, 60, 'sponsor')] + cuts
expected = self._chapters(
[10, 40, 50], ['c', '[SponsorBlock]: Sponsor', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChapterWithCutHidingSponsor(self):
cuts = [self._sponsor_chapter(20, 50, 'selpromo', remove=True)]
chapters = self._chapters([60], ['c']) + [
self._sponsor_chapter(10, 20, 'intro'),
self._sponsor_chapter(30, 40, 'sponsor'),
self._sponsor_chapter(50, 60, 'outro'),
] + cuts
expected = self._chapters(
[10, 20, 30], ['c', '[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChapterWithAdjacentSponsors(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(20, 30, 'selfpromo'),
self._sponsor_chapter(30, 40, 'interaction')]
expected = self._chapters(
[10, 20, 30, 40, 70],
['c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion',
'[SponsorBlock]: Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithAdjacentCuts(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(20, 30, 'interaction', remove=True),
self._chapter(30, 40, remove=True),
self._sponsor_chapter(40, 50, 'selpromo', remove=True),
self._sponsor_chapter(50, 60, 'interaction')]
expected = self._chapters([10, 20, 30, 40],
['c', '[SponsorBlock]: Sponsor',
'[SponsorBlock]: Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(20, 50, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithOverlappingSponsors(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 30, 'sponsor'),
self._sponsor_chapter(20, 50, 'selfpromo'),
self._sponsor_chapter(40, 60, 'interaction')]
expected = self._chapters(
[10, 20, 30, 40, 50, 60, 70],
['c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
'[SponsorBlock]: Unpaid/Self Promotion', '[SponsorBlock]: Unpaid/Self Promotion, Interaction Reminder',
'[SponsorBlock]: Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithOverlappingCuts(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 30, 'sponsor', remove=True),
self._sponsor_chapter(20, 50, 'selfpromo', remove=True),
self._sponsor_chapter(40, 60, 'interaction', remove=True)]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([20], ['c']), [self._chapter(10, 60, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsors(self):
chapters = self._chapters([170], ['c']) + [
self._sponsor_chapter(0, 30, 'intro'),
self._sponsor_chapter(20, 50, 'sponsor'),
self._sponsor_chapter(40, 60, 'selfpromo'),
self._sponsor_chapter(70, 90, 'sponsor'),
self._sponsor_chapter(80, 100, 'sponsor'),
self._sponsor_chapter(90, 110, 'sponsor'),
self._sponsor_chapter(120, 140, 'selfpromo'),
self._sponsor_chapter(130, 160, 'interaction'),
self._sponsor_chapter(150, 170, 'outro')]
expected = self._chapters(
[20, 30, 40, 50, 60, 70, 110, 120, 130, 140, 150, 160, 170],
['[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Intermission/Intro Animation, Sponsor', '[SponsorBlock]: Sponsor',
'[SponsorBlock]: Sponsor, Unpaid/Self Promotion', '[SponsorBlock]: Unpaid/Self Promotion', 'c',
'[SponsorBlock]: Sponsor', 'c', '[SponsorBlock]: Unpaid/Self Promotion',
'[SponsorBlock]: Unpaid/Self Promotion, Interaction Reminder',
'[SponsorBlock]: Interaction Reminder',
'[SponsorBlock]: Interaction Reminder, Endcards/Credits', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingCuts(self):
chapters = self._chapters([170], ['c']) + [
self._chapter(0, 30, remove=True),
self._sponsor_chapter(20, 50, 'sponsor', remove=True),
self._chapter(40, 60, remove=True),
self._sponsor_chapter(70, 90, 'sponsor', remove=True),
self._chapter(80, 100, remove=True),
self._chapter(90, 110, remove=True),
self._sponsor_chapter(120, 140, 'sponsor', remove=True),
self._sponsor_chapter(130, 160, 'selfpromo', remove=True),
self._chapter(150, 170, remove=True)]
expected_cuts = [self._chapter(0, 60, remove=True),
self._chapter(70, 110, remove=True),
self._chapter(120, 170, remove=True)]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([20], ['c']), expected_cuts)
def test_remove_marked_arrange_sponsors_OverlappingSponsorsDifferentTitlesAfterCut(self):
chapters = self._chapters([60], ['c']) + [
self._sponsor_chapter(10, 60, 'sponsor'),
self._sponsor_chapter(10, 40, 'intro'),
self._sponsor_chapter(30, 50, 'interaction'),
self._sponsor_chapter(30, 50, 'selfpromo', remove=True),
self._sponsor_chapter(40, 50, 'interaction'),
self._sponsor_chapter(50, 60, 'outro')]
expected = self._chapters(
[10, 30, 40], ['c', '[SponsorBlock]: Sponsor, Intermission/Intro Animation', '[SponsorBlock]: Sponsor, Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_SponsorsNoLongerOverlapAfterCut(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 30, 'sponsor'),
self._sponsor_chapter(20, 50, 'interaction'),
self._sponsor_chapter(30, 50, 'selpromo', remove=True),
self._sponsor_chapter(40, 60, 'sponsor'),
self._sponsor_chapter(50, 60, 'interaction')]
expected = self._chapters(
[10, 20, 40, 50], ['c', '[SponsorBlock]: Sponsor',
'[SponsorBlock]: Sponsor, Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_SponsorsStillOverlapAfterCut(self):
chapters = self._chapters([70], ['c']) + [
self._sponsor_chapter(10, 60, 'sponsor'),
self._sponsor_chapter(20, 60, 'interaction'),
self._sponsor_chapter(30, 50, 'selfpromo', remove=True)]
expected = self._chapters(
[10, 20, 40, 50], ['c', '[SponsorBlock]: Sponsor',
'[SponsorBlock]: Sponsor, Interaction Reminder', 'c'])
self._remove_marked_arrange_sponsors_test_impl(
chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsorsAndCuts(self):
chapters = self._chapters([200], ['c']) + [
self._sponsor_chapter(10, 40, 'sponsor'),
self._sponsor_chapter(10, 30, 'intro'),
self._chapter(20, 30, remove=True),
self._sponsor_chapter(30, 40, 'selfpromo'),
self._sponsor_chapter(50, 70, 'sponsor'),
self._sponsor_chapter(60, 80, 'interaction'),
self._chapter(70, 80, remove=True),
self._sponsor_chapter(70, 90, 'sponsor'),
self._sponsor_chapter(80, 100, 'interaction'),
self._sponsor_chapter(120, 170, 'selfpromo'),
self._sponsor_chapter(130, 180, 'outro'),
self._chapter(140, 150, remove=True),
self._chapter(150, 160, remove=True)]
expected = self._chapters(
[10, 20, 30, 40, 50, 70, 80, 100, 110, 130, 140, 160],
['c', '[SponsorBlock]: Sponsor, Intermission/Intro Animation', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
'c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Interaction Reminder',
'[SponsorBlock]: Interaction Reminder', 'c', '[SponsorBlock]: Unpaid/Self Promotion',
'[SponsorBlock]: Unpaid/Self Promotion, Endcards/Credits', '[SponsorBlock]: Endcards/Credits', 'c'])
expected_cuts = [self._chapter(20, 30, remove=True),
self._chapter(70, 80, remove=True),
self._chapter(140, 160, remove=True)]
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, expected_cuts)
def test_remove_marked_arrange_sponsors_SponsorOverlapsMultipleChapters(self):
chapters = (self._chapters([20, 40, 60, 80, 100], ['c1', 'c2', 'c3', 'c4', 'c5'])
+ [self._sponsor_chapter(10, 90, 'sponsor')])
expected = self._chapters([10, 90, 100], ['c1', '[SponsorBlock]: Sponsor', 'c5'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutOverlapsMultipleChapters(self):
cuts = [self._chapter(10, 90, remove=True)]
chapters = self._chapters([20, 40, 60, 80, 100], ['c1', 'c2', 'c3', 'c4', 'c5']) + cuts
expected = self._chapters([10, 20], ['c1', 'c5'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsWithinSomeChaptersAndOverlappingOthers(self):
chapters = (self._chapters([10, 40, 60, 80], ['c1', 'c2', 'c3', 'c4'])
+ [self._sponsor_chapter(20, 30, 'sponsor'),
self._sponsor_chapter(50, 70, 'selfpromo')])
expected = self._chapters([10, 20, 30, 40, 50, 70, 80],
['c1', 'c2', '[SponsorBlock]: Sponsor', 'c2', 'c3',
'[SponsorBlock]: Unpaid/Self Promotion', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutsWithinSomeChaptersAndOverlappingOthers(self):
cuts = [self._chapter(20, 30, remove=True), self._chapter(50, 70, remove=True)]
chapters = self._chapters([10, 40, 60, 80], ['c1', 'c2', 'c3', 'c4']) + cuts
expected = self._chapters([10, 30, 40, 50], ['c1', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChaptersAfterLastSponsor(self):
chapters = (self._chapters([20, 40, 50, 60], ['c1', 'c2', 'c3', 'c4'])
+ [self._sponsor_chapter(10, 30, 'music_offtopic')])
expected = self._chapters(
[10, 30, 40, 50, 60],
['c1', '[SponsorBlock]: Non-Music Section', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChaptersAfterLastCut(self):
cuts = [self._chapter(10, 30, remove=True)]
chapters = self._chapters([20, 40, 50, 60], ['c1', 'c2', 'c3', 'c4']) + cuts
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorStartsAtChapterStart(self):
chapters = (self._chapters([10, 20, 40], ['c1', 'c2', 'c3'])
+ [self._sponsor_chapter(20, 30, 'sponsor')])
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutStartsAtChapterStart(self):
cuts = [self._chapter(20, 30, remove=True)]
chapters = self._chapters([10, 20, 40], ['c1', 'c2', 'c3']) + cuts
expected = self._chapters([10, 20, 30], ['c1', 'c2', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorEndsAtChapterEnd(self):
chapters = (self._chapters([10, 30, 40], ['c1', 'c2', 'c3'])
+ [self._sponsor_chapter(20, 30, 'sponsor')])
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutEndsAtChapterEnd(self):
cuts = [self._chapter(20, 30, remove=True)]
chapters = self._chapters([10, 30, 40], ['c1', 'c2', 'c3']) + cuts
expected = self._chapters([10, 20, 30], ['c1', 'c2', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorCoincidesWithChapters(self):
chapters = (self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
+ [self._sponsor_chapter(10, 30, 'sponsor')])
expected = self._chapters([10, 30, 40], ['c1', '[SponsorBlock]: Sponsor', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutCoincidesWithChapters(self):
cuts = [self._chapter(10, 30, remove=True)]
chapters = self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4']) + cuts
expected = self._chapters([10, 20], ['c1', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsAtVideoBoundaries(self):
chapters = (self._chapters([20, 40, 60], ['c1', 'c2', 'c3'])
+ [self._sponsor_chapter(0, 10, 'intro'), self._sponsor_chapter(50, 60, 'outro')])
expected = self._chapters(
[10, 20, 40, 50, 60], ['[SponsorBlock]: Intermission/Intro Animation', 'c1', 'c2', 'c3', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutsAtVideoBoundaries(self):
cuts = [self._chapter(0, 10, remove=True), self._chapter(50, 60, remove=True)]
chapters = self._chapters([20, 40, 60], ['c1', 'c2', 'c3']) + cuts
expected = self._chapters([10, 30, 40], ['c1', 'c2', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsOverlapChaptersAtVideoBoundaries(self):
chapters = (self._chapters([10, 40, 50], ['c1', 'c2', 'c3'])
+ [self._sponsor_chapter(0, 20, 'intro'), self._sponsor_chapter(30, 50, 'outro')])
expected = self._chapters(
[20, 30, 50], ['[SponsorBlock]: Intermission/Intro Animation', 'c2', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_CutsOverlapChaptersAtVideoBoundaries(self):
cuts = [self._chapter(0, 20, remove=True), self._chapter(30, 50, remove=True)]
chapters = self._chapters([10, 40, 50], ['c1', 'c2', 'c3']) + cuts
expected = self._chapters([10], ['c2'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_EverythingSponsored(self):
chapters = (self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
+ [self._sponsor_chapter(0, 20, 'intro'), self._sponsor_chapter(20, 40, 'outro')])
expected = self._chapters([20, 40], ['[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_EverythingCut(self):
cuts = [self._chapter(0, 20, remove=True), self._chapter(20, 40, remove=True)]
chapters = self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, [], [self._chapter(0, 40, remove=True)])
def test_remove_marked_arrange_sponsors_TinyChaptersInTheOriginalArePreserved(self):
chapters = self._chapters([0.1, 0.2, 0.3, 0.4], ['c1', 'c2', 'c3', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, chapters, [])
def test_remove_marked_arrange_sponsors_TinySponsorsAreIgnored(self):
chapters = [self._sponsor_chapter(0, 0.1, 'intro'), self._chapter(0.1, 0.2, 'c1'),
self._sponsor_chapter(0.2, 0.3, 'sponsor'), self._chapter(0.3, 0.4, 'c2'),
self._sponsor_chapter(0.4, 0.5, 'outro')]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([0.3, 0.5], ['c1', 'c2']), [])
def test_remove_marked_arrange_sponsors_TinyChaptersResultingFromCutsAreIgnored(self):
cuts = [self._chapter(1.5, 2.5, remove=True)]
chapters = self._chapters([2, 3, 3.5], ['c1', 'c2', 'c3']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([2, 2.5], ['c1', 'c3']), cuts)
def test_remove_marked_arrange_sponsors_SingleTinyChapterIsPreserved(self):
cuts = [self._chapter(0.5, 2, remove=True)]
chapters = self._chapters([2], ['c']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([0.5], ['c']), cuts)
def test_remove_marked_arrange_sponsors_TinyChapterAtTheStartPrependedToTheNext(self):
cuts = [self._chapter(0.5, 2, remove=True)]
chapters = self._chapters([2, 4], ['c1', 'c2']) + cuts
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([2.5], ['c2']), cuts)
def test_remove_marked_arrange_sponsors_TinyChaptersResultingFromSponsorOverlapAreIgnored(self):
chapters = self._chapters([1, 3, 4], ['c1', 'c2', 'c3']) + [
self._sponsor_chapter(1.5, 2.5, 'sponsor')]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1.5, 2.5, 4], ['c1', '[SponsorBlock]: Sponsor', 'c3']), [])
def test_remove_marked_arrange_sponsors_TinySponsorsOverlapsAreIgnored(self):
chapters = self._chapters([2, 3, 5], ['c1', 'c2', 'c3']) + [
self._sponsor_chapter(1, 3, 'sponsor'),
self._sponsor_chapter(2.5, 4, 'selfpromo')
]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1, 3, 4, 5], [
'c1', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion', 'c3']), [])
def test_remove_marked_arrange_sponsors_TinySponsorsPrependedToTheNextSponsor(self):
chapters = self._chapters([4], ['c']) + [
self._sponsor_chapter(1.5, 2, 'sponsor'),
self._sponsor_chapter(2, 4, 'selfpromo')
]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1.5, 4], ['c', '[SponsorBlock]: Unpaid/Self Promotion']), [])
def test_remove_marked_arrange_sponsors_SmallestSponsorInTheOverlapGetsNamed(self):
self._pp._sponsorblock_chapter_title = '[SponsorBlock]: %(name)s'
chapters = self._chapters([10], ['c']) + [
self._sponsor_chapter(2, 8, 'sponsor'),
self._sponsor_chapter(4, 6, 'selfpromo')
]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([2, 4, 6, 8, 10], [
'c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion',
'[SponsorBlock]: Sponsor', 'c'
]), [])
def test_make_concat_opts_CommonCase(self):
sponsor_chapters = [self._chapter(1, 2, 's1'), self._chapter(10, 20, 's2')]
expected = '''ffconcat version 1.0
file 'file:test'
outpoint 1.000000
file 'file:test'
inpoint 2.000000
outpoint 10.000000
file 'file:test'
inpoint 20.000000
'''
opts = self._pp._make_concat_opts(sponsor_chapters, 30)
self.assertEqual(expected, ''.join(self._pp._concat_spec(['test'] * len(opts), opts)))
def test_make_concat_opts_NoZeroDurationChunkAtVideoStart(self):
sponsor_chapters = [self._chapter(0, 1, 's1'), self._chapter(10, 20, 's2')]
expected = '''ffconcat version 1.0
file 'file:test'
inpoint 1.000000
outpoint 10.000000
file 'file:test'
inpoint 20.000000
'''
opts = self._pp._make_concat_opts(sponsor_chapters, 30)
self.assertEqual(expected, ''.join(self._pp._concat_spec(['test'] * len(opts), opts)))
def test_make_concat_opts_NoZeroDurationChunkAtVideoEnd(self):
sponsor_chapters = [self._chapter(1, 2, 's1'), self._chapter(10, 20, 's2')]
expected = '''ffconcat version 1.0
file 'file:test'
outpoint 1.000000
file 'file:test'
inpoint 2.000000
outpoint 10.000000
'''
opts = self._pp._make_concat_opts(sponsor_chapters, 20)
self.assertEqual(expected, ''.join(self._pp._concat_spec(['test'] * len(opts), opts)))
def test_quote_for_concat_RunsOfQuotes(self):
self.assertEqual(
r"'special '\'' '\'\''characters'\'\'\''galore'",
self._pp._quote_for_ffmpeg("special ' ''characters'''galore"))
def test_quote_for_concat_QuotesAtStart(self):
self.assertEqual(
r"\'\'\''special '\'' characters '\'' galore'",
self._pp._quote_for_ffmpeg("'''special ' characters ' galore"))
def test_quote_for_concat_QuotesAtEnd(self):
self.assertEqual(
r"'special '\'' characters '\'' galore'\'\'\'",
self._pp._quote_for_ffmpeg("special ' characters ' galore'''"))
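
The expected strings in the test_make_concat_opts_* cases above spell out the ffconcat listing that the cutting post-processor hands to ffmpeg: one "file" entry per kept chunk, trimmed with "inpoint"/"outpoint", with zero-length chunks at the very start or end omitted. A rough standalone sketch of that mapping (an illustration of what the tests encode, not the actual ModifyChapters/concat implementation):

    # Standalone illustration only -- mirrors the expected output of the tests above.
    def concat_spec_for_cuts(path, cuts, duration):
        """Yield ffconcat lines that keep everything outside the given (start, end) cuts."""
        yield 'ffconcat version 1.0\n'
        pos = 0
        for start, end in sorted(cuts):
            if start > pos:  # keep the chunk before this cut; skip a zero-length chunk at t=0
                yield f"file 'file:{path}'\n"
                if pos > 0:
                    yield f'inpoint {pos:.6f}\n'
                yield f'outpoint {start:.6f}\n'
            pos = end
        if pos < duration:  # keep the tail unless the last cut runs to the end of the file
            yield f"file 'file:{path}'\n"
            yield f'inpoint {pos:.6f}\n'

    # Reproduces the CommonCase expectation: cut 1-2s and 10-20s out of a 30s file
    print(''.join(concat_spec_for_cuts('test', [(1, 2), (10, 20)], 30)))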


@@ -14,6 +14,7 @@ import subprocess
from test.helper import ( from test.helper import (
FakeYDL, FakeYDL,
get_params, get_params,
is_download_test,
) )
from yt_dlp.compat import ( from yt_dlp.compat import (
compat_str, compat_str,
@@ -21,6 +22,7 @@ from yt_dlp.compat import (
) )
@is_download_test
class TestMultipleSocks(unittest.TestCase): class TestMultipleSocks(unittest.TestCase):
@staticmethod @staticmethod
def _check_params(attrs): def _check_params(attrs):
@@ -76,6 +78,7 @@ class TestMultipleSocks(unittest.TestCase):
params['secondary_server_ip']) params['secondary_server_ip'])
@is_download_test
class TestSocks(unittest.TestCase): class TestSocks(unittest.TestCase):
_SKIP_SOCKS_TEST = True _SKIP_SOCKS_TEST = True


@@ -7,7 +7,7 @@ import sys
import unittest import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import FakeYDL, md5 from test.helper import FakeYDL, md5, is_download_test
from yt_dlp.extractor import ( from yt_dlp.extractor import (
@@ -19,6 +19,7 @@ from yt_dlp.extractor import (
CeskaTelevizeIE, CeskaTelevizeIE,
LyndaIE, LyndaIE,
NPOIE, NPOIE,
PBSIE,
ComedyCentralIE, ComedyCentralIE,
NRKTVIE, NRKTVIE,
RaiPlayIE, RaiPlayIE,
@@ -30,6 +31,7 @@ from yt_dlp.extractor import (
) )
@is_download_test
class BaseTestSubtitles(unittest.TestCase): class BaseTestSubtitles(unittest.TestCase):
url = None url = None
IE = None IE = None
@@ -55,6 +57,7 @@ class BaseTestSubtitles(unittest.TestCase):
return dict((l, sub_info['data']) for l, sub_info in subtitles.items()) return dict((l, sub_info['data']) for l, sub_info in subtitles.items())
@is_download_test
class TestYoutubeSubtitles(BaseTestSubtitles): class TestYoutubeSubtitles(BaseTestSubtitles):
url = 'QRS8MkLhQmM' url = 'QRS8MkLhQmM'
IE = YoutubeIE IE = YoutubeIE
@@ -111,6 +114,7 @@ class TestYoutubeSubtitles(BaseTestSubtitles):
self.assertFalse(subtitles) self.assertFalse(subtitles)
@is_download_test
class TestDailymotionSubtitles(BaseTestSubtitles): class TestDailymotionSubtitles(BaseTestSubtitles):
url = 'http://www.dailymotion.com/video/xczg00' url = 'http://www.dailymotion.com/video/xczg00'
IE = DailymotionIE IE = DailymotionIE
@@ -134,6 +138,7 @@ class TestDailymotionSubtitles(BaseTestSubtitles):
self.assertFalse(subtitles) self.assertFalse(subtitles)
@is_download_test
class TestTedSubtitles(BaseTestSubtitles): class TestTedSubtitles(BaseTestSubtitles):
url = 'http://www.ted.com/talks/dan_dennett_on_our_consciousness.html' url = 'http://www.ted.com/talks/dan_dennett_on_our_consciousness.html'
IE = TEDIE IE = TEDIE
@@ -149,6 +154,7 @@ class TestTedSubtitles(BaseTestSubtitles):
self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang) self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
@is_download_test
class TestVimeoSubtitles(BaseTestSubtitles): class TestVimeoSubtitles(BaseTestSubtitles):
url = 'http://vimeo.com/76979871' url = 'http://vimeo.com/76979871'
IE = VimeoIE IE = VimeoIE
@@ -170,6 +176,7 @@ class TestVimeoSubtitles(BaseTestSubtitles):
self.assertFalse(subtitles) self.assertFalse(subtitles)
@is_download_test
class TestWallaSubtitles(BaseTestSubtitles): class TestWallaSubtitles(BaseTestSubtitles):
url = 'http://vod.walla.co.il/movie/2705958/the-yes-men' url = 'http://vod.walla.co.il/movie/2705958/the-yes-men'
IE = WallaIE IE = WallaIE
@@ -191,6 +198,7 @@ class TestWallaSubtitles(BaseTestSubtitles):
self.assertFalse(subtitles) self.assertFalse(subtitles)
@is_download_test
class TestCeskaTelevizeSubtitles(BaseTestSubtitles): class TestCeskaTelevizeSubtitles(BaseTestSubtitles):
url = 'http://www.ceskatelevize.cz/ivysilani/10600540290-u6-uzasny-svet-techniky' url = 'http://www.ceskatelevize.cz/ivysilani/10600540290-u6-uzasny-svet-techniky'
IE = CeskaTelevizeIE IE = CeskaTelevizeIE
@@ -212,6 +220,7 @@ class TestCeskaTelevizeSubtitles(BaseTestSubtitles):
self.assertFalse(subtitles) self.assertFalse(subtitles)
@is_download_test
class TestLyndaSubtitles(BaseTestSubtitles): class TestLyndaSubtitles(BaseTestSubtitles):
url = 'http://www.lynda.com/Bootstrap-tutorials/Using-exercise-files/110885/114408-4.html' url = 'http://www.lynda.com/Bootstrap-tutorials/Using-exercise-files/110885/114408-4.html'
IE = LyndaIE IE = LyndaIE
@@ -224,6 +233,7 @@ class TestLyndaSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), '09bbe67222259bed60deaa26997d73a7') self.assertEqual(md5(subtitles['en']), '09bbe67222259bed60deaa26997d73a7')
@is_download_test
class TestNPOSubtitles(BaseTestSubtitles): class TestNPOSubtitles(BaseTestSubtitles):
url = 'http://www.npo.nl/nos-journaal/28-08-2014/POW_00722860' url = 'http://www.npo.nl/nos-journaal/28-08-2014/POW_00722860'
IE = NPOIE IE = NPOIE
@@ -236,6 +246,7 @@ class TestNPOSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['nl']), 'fc6435027572b63fb4ab143abd5ad3f4') self.assertEqual(md5(subtitles['nl']), 'fc6435027572b63fb4ab143abd5ad3f4')
@is_download_test
class TestMTVSubtitles(BaseTestSubtitles): class TestMTVSubtitles(BaseTestSubtitles):
url = 'http://www.cc.com/video-clips/p63lk0/adam-devine-s-house-party-chasing-white-swans' url = 'http://www.cc.com/video-clips/p63lk0/adam-devine-s-house-party-chasing-white-swans'
IE = ComedyCentralIE IE = ComedyCentralIE
@@ -251,6 +262,7 @@ class TestMTVSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), '78206b8d8a0cfa9da64dc026eea48961') self.assertEqual(md5(subtitles['en']), '78206b8d8a0cfa9da64dc026eea48961')
@is_download_test
class TestNRKSubtitles(BaseTestSubtitles): class TestNRKSubtitles(BaseTestSubtitles):
url = 'http://tv.nrk.no/serie/ikke-gjoer-dette-hjemme/DMPV73000411/sesong-2/episode-1' url = 'http://tv.nrk.no/serie/ikke-gjoer-dette-hjemme/DMPV73000411/sesong-2/episode-1'
IE = NRKTVIE IE = NRKTVIE
@@ -263,6 +275,7 @@ class TestNRKSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['no']), '544fa917d3197fcbee64634559221cc2') self.assertEqual(md5(subtitles['no']), '544fa917d3197fcbee64634559221cc2')
@is_download_test
class TestRaiPlaySubtitles(BaseTestSubtitles): class TestRaiPlaySubtitles(BaseTestSubtitles):
IE = RaiPlayIE IE = RaiPlayIE
@@ -283,6 +296,7 @@ class TestRaiPlaySubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['it']), '4b3264186fbb103508abe5311cfcb9cd') self.assertEqual(md5(subtitles['it']), '4b3264186fbb103508abe5311cfcb9cd')
@is_download_test
class TestVikiSubtitles(BaseTestSubtitles): class TestVikiSubtitles(BaseTestSubtitles):
url = 'http://www.viki.com/videos/1060846v-punch-episode-18' url = 'http://www.viki.com/videos/1060846v-punch-episode-18'
IE = VikiIE IE = VikiIE
@@ -295,6 +309,7 @@ class TestVikiSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), '53cb083a5914b2d84ef1ab67b880d18a') self.assertEqual(md5(subtitles['en']), '53cb083a5914b2d84ef1ab67b880d18a')
@is_download_test
class TestThePlatformSubtitles(BaseTestSubtitles): class TestThePlatformSubtitles(BaseTestSubtitles):
# from http://www.3playmedia.com/services-features/tools/integrations/theplatform/ # from http://www.3playmedia.com/services-features/tools/integrations/theplatform/
# (see http://theplatform.com/about/partners/type/subtitles-closed-captioning/) # (see http://theplatform.com/about/partners/type/subtitles-closed-captioning/)
@@ -309,6 +324,7 @@ class TestThePlatformSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), '97e7670cbae3c4d26ae8bcc7fdd78d4b') self.assertEqual(md5(subtitles['en']), '97e7670cbae3c4d26ae8bcc7fdd78d4b')
@is_download_test
class TestThePlatformFeedSubtitles(BaseTestSubtitles): class TestThePlatformFeedSubtitles(BaseTestSubtitles):
url = 'http://feed.theplatform.com/f/7wvmTC/msnbc_video-p-test?form=json&pretty=true&range=-40&byGuid=n_hardball_5biden_140207' url = 'http://feed.theplatform.com/f/7wvmTC/msnbc_video-p-test?form=json&pretty=true&range=-40&byGuid=n_hardball_5biden_140207'
IE = ThePlatformFeedIE IE = ThePlatformFeedIE
@@ -321,6 +337,7 @@ class TestThePlatformFeedSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), '48649a22e82b2da21c9a67a395eedade') self.assertEqual(md5(subtitles['en']), '48649a22e82b2da21c9a67a395eedade')
@is_download_test
class TestRtveSubtitles(BaseTestSubtitles): class TestRtveSubtitles(BaseTestSubtitles):
url = 'http://www.rtve.es/alacarta/videos/los-misterios-de-laura/misterios-laura-capitulo-32-misterio-del-numero-17-2-parte/2428621/' url = 'http://www.rtve.es/alacarta/videos/los-misterios-de-laura/misterios-laura-capitulo-32-misterio-del-numero-17-2-parte/2428621/'
IE = RTVEALaCartaIE IE = RTVEALaCartaIE
@@ -335,6 +352,7 @@ class TestRtveSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['es']), '69e70cae2d40574fb7316f31d6eb7fca') self.assertEqual(md5(subtitles['es']), '69e70cae2d40574fb7316f31d6eb7fca')
@is_download_test
class TestDemocracynowSubtitles(BaseTestSubtitles): class TestDemocracynowSubtitles(BaseTestSubtitles):
url = 'http://www.democracynow.org/shows/2015/7/3' url = 'http://www.democracynow.org/shows/2015/7/3'
IE = DemocracynowIE IE = DemocracynowIE
@@ -355,5 +373,42 @@ class TestDemocracynowSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), 'acaca989e24a9e45a6719c9b3d60815c') self.assertEqual(md5(subtitles['en']), 'acaca989e24a9e45a6719c9b3d60815c')
@is_download_test
class TestPBSSubtitles(BaseTestSubtitles):
url = 'https://www.pbs.org/video/how-fantasy-reflects-our-world-picecq/'
IE = PBSIE
def test_allsubtitles(self):
self.DL.params['writesubtitles'] = True
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertEqual(set(subtitles.keys()), set(['en']))
def test_subtitles_dfxp_format(self):
self.DL.params['writesubtitles'] = True
self.DL.params['subtitlesformat'] = 'dfxp'
subtitles = self.getSubtitles()
self.assertIn(md5(subtitles['en']), ['643b034254cdc3768ff1e750b6b5873b'])
def test_subtitles_vtt_format(self):
self.DL.params['writesubtitles'] = True
self.DL.params['subtitlesformat'] = 'vtt'
subtitles = self.getSubtitles()
self.assertIn(
md5(subtitles['en']), ['937a05711555b165d4c55a9667017045', 'f49ea998d6824d94959c8152a368ff73'])
def test_subtitles_srt_format(self):
self.DL.params['writesubtitles'] = True
self.DL.params['subtitlesformat'] = 'srt'
subtitles = self.getSubtitles()
self.assertIn(md5(subtitles['en']), ['2082c21b43759d9bf172931b2f2ca371'])
def test_subtitles_sami_format(self):
self.DL.params['writesubtitles'] = True
self.DL.params['subtitlesformat'] = 'sami'
subtitles = self.getSubtitles()
self.assertIn(md5(subtitles['en']), ['4256b16ac7da6a6780fafd04294e85cd'])
if __name__ == '__main__': if __name__ == '__main__':
unittest.main() unittest.main()
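
The @is_download_test marker added throughout this file lives in test/helper.py and its body is not part of this diff. Purely as a sketch of the idea (tagging test classes that hit the network so they can be skipped in offline runs), it could look roughly like the following; the environment-variable name is an invented placeholder, not the project's actual switch:

    import os
    import unittest

    def is_download_test(test_class):
        # Hypothetical sketch -- the real helper in test/helper.py may differ.
        if os.environ.get('ENABLE_DOWNLOAD_TESTS'):  # placeholder name
            return test_class
        return unittest.skip('download tests are disabled in this run')(test_class)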


@@ -62,6 +62,7 @@ from yt_dlp.utils import (
parse_iso8601, parse_iso8601,
parse_resolution, parse_resolution,
parse_bitrate, parse_bitrate,
parse_qs,
pkcs1pad, pkcs1pad,
read_batch_urls, read_batch_urls,
sanitize_filename, sanitize_filename,
@@ -117,8 +118,6 @@ from yt_dlp.compat import (
compat_getenv, compat_getenv,
compat_os_name, compat_os_name,
compat_setenv, compat_setenv,
compat_urlparse,
compat_parse_qs,
) )
@@ -688,38 +687,36 @@ class TestUtil(unittest.TestCase):
self.assertTrue(isinstance(data, bytes)) self.assertTrue(isinstance(data, bytes))
def test_update_url_query(self): def test_update_url_query(self):
def query_dict(url): self.assertEqual(parse_qs(update_url_query(
return compat_parse_qs(compat_urlparse.urlparse(url).query)
self.assertEqual(query_dict(update_url_query(
'http://example.com/path', {'quality': ['HD'], 'format': ['mp4']})), 'http://example.com/path', {'quality': ['HD'], 'format': ['mp4']})),
query_dict('http://example.com/path?quality=HD&format=mp4')) parse_qs('http://example.com/path?quality=HD&format=mp4'))
self.assertEqual(query_dict(update_url_query( self.assertEqual(parse_qs(update_url_query(
'http://example.com/path', {'system': ['LINUX', 'WINDOWS']})), 'http://example.com/path', {'system': ['LINUX', 'WINDOWS']})),
query_dict('http://example.com/path?system=LINUX&system=WINDOWS')) parse_qs('http://example.com/path?system=LINUX&system=WINDOWS'))
self.assertEqual(query_dict(update_url_query( self.assertEqual(parse_qs(update_url_query(
'http://example.com/path', {'fields': 'id,formats,subtitles'})), 'http://example.com/path', {'fields': 'id,formats,subtitles'})),
query_dict('http://example.com/path?fields=id,formats,subtitles')) parse_qs('http://example.com/path?fields=id,formats,subtitles'))
self.assertEqual(query_dict(update_url_query( self.assertEqual(parse_qs(update_url_query(
'http://example.com/path', {'fields': ('id,formats,subtitles', 'thumbnails')})), 'http://example.com/path', {'fields': ('id,formats,subtitles', 'thumbnails')})),
query_dict('http://example.com/path?fields=id,formats,subtitles&fields=thumbnails')) parse_qs('http://example.com/path?fields=id,formats,subtitles&fields=thumbnails'))
self.assertEqual(query_dict(update_url_query( self.assertEqual(parse_qs(update_url_query(
'http://example.com/path?manifest=f4m', {'manifest': []})), 'http://example.com/path?manifest=f4m', {'manifest': []})),
query_dict('http://example.com/path')) parse_qs('http://example.com/path'))
self.assertEqual(query_dict(update_url_query( self.assertEqual(parse_qs(update_url_query(
'http://example.com/path?system=LINUX&system=WINDOWS', {'system': 'LINUX'})), 'http://example.com/path?system=LINUX&system=WINDOWS', {'system': 'LINUX'})),
query_dict('http://example.com/path?system=LINUX')) parse_qs('http://example.com/path?system=LINUX'))
self.assertEqual(query_dict(update_url_query( self.assertEqual(parse_qs(update_url_query(
'http://example.com/path', {'fields': b'id,formats,subtitles'})), 'http://example.com/path', {'fields': b'id,formats,subtitles'})),
query_dict('http://example.com/path?fields=id,formats,subtitles')) parse_qs('http://example.com/path?fields=id,formats,subtitles'))
self.assertEqual(query_dict(update_url_query( self.assertEqual(parse_qs(update_url_query(
'http://example.com/path', {'width': 1080, 'height': 720})), 'http://example.com/path', {'width': 1080, 'height': 720})),
query_dict('http://example.com/path?width=1080&height=720')) parse_qs('http://example.com/path?width=1080&height=720'))
self.assertEqual(query_dict(update_url_query( self.assertEqual(parse_qs(update_url_query(
'http://example.com/path', {'bitrate': 5020.43})), 'http://example.com/path', {'bitrate': 5020.43})),
query_dict('http://example.com/path?bitrate=5020.43')) parse_qs('http://example.com/path?bitrate=5020.43'))
self.assertEqual(query_dict(update_url_query( self.assertEqual(parse_qs(update_url_query(
'http://example.com/path', {'test': '第二行тест'})), 'http://example.com/path', {'test': '第二行тест'})),
query_dict('http://example.com/path?test=%E7%AC%AC%E4%BA%8C%E8%A1%8C%D1%82%D0%B5%D1%81%D1%82')) parse_qs('http://example.com/path?test=%E7%AC%AC%E4%BA%8C%E8%A1%8C%D1%82%D0%B5%D1%81%D1%82'))
def test_multipart_encode(self): def test_multipart_encode(self):
self.assertEqual( self.assertEqual(
@@ -851,30 +848,52 @@ class TestUtil(unittest.TestCase):
self.assertEqual(parse_codecs('avc1.77.30, mp4a.40.2'), { self.assertEqual(parse_codecs('avc1.77.30, mp4a.40.2'), {
'vcodec': 'avc1.77.30', 'vcodec': 'avc1.77.30',
'acodec': 'mp4a.40.2', 'acodec': 'mp4a.40.2',
'dynamic_range': None,
}) })
self.assertEqual(parse_codecs('mp4a.40.2'), { self.assertEqual(parse_codecs('mp4a.40.2'), {
'vcodec': 'none', 'vcodec': 'none',
'acodec': 'mp4a.40.2', 'acodec': 'mp4a.40.2',
'dynamic_range': None,
}) })
self.assertEqual(parse_codecs('mp4a.40.5,avc1.42001e'), { self.assertEqual(parse_codecs('mp4a.40.5,avc1.42001e'), {
'vcodec': 'avc1.42001e', 'vcodec': 'avc1.42001e',
'acodec': 'mp4a.40.5', 'acodec': 'mp4a.40.5',
'dynamic_range': None,
}) })
self.assertEqual(parse_codecs('avc3.640028'), { self.assertEqual(parse_codecs('avc3.640028'), {
'vcodec': 'avc3.640028', 'vcodec': 'avc3.640028',
'acodec': 'none', 'acodec': 'none',
'dynamic_range': None,
}) })
self.assertEqual(parse_codecs(', h264,,newcodec,aac'), { self.assertEqual(parse_codecs(', h264,,newcodec,aac'), {
'vcodec': 'h264', 'vcodec': 'h264',
'acodec': 'aac', 'acodec': 'aac',
'dynamic_range': None,
}) })
self.assertEqual(parse_codecs('av01.0.05M.08'), { self.assertEqual(parse_codecs('av01.0.05M.08'), {
'vcodec': 'av01.0.05M.08', 'vcodec': 'av01.0.05M.08',
'acodec': 'none', 'acodec': 'none',
'dynamic_range': None,
})
self.assertEqual(parse_codecs('vp9.2'), {
'vcodec': 'vp9.2',
'acodec': 'none',
'dynamic_range': 'HDR10',
})
self.assertEqual(parse_codecs('av01.0.12M.10.0.110.09.16.09.0'), {
'vcodec': 'av01.0.12M.10',
'acodec': 'none',
'dynamic_range': 'HDR10',
})
self.assertEqual(parse_codecs('dvhe'), {
'vcodec': 'dvhe',
'acodec': 'none',
'dynamic_range': 'DV',
}) })
self.assertEqual(parse_codecs('theora, vorbis'), { self.assertEqual(parse_codecs('theora, vorbis'), {
'vcodec': 'theora', 'vcodec': 'theora',
'acodec': 'vorbis', 'acodec': 'vorbis',
'dynamic_range': None,
}) })
self.assertEqual(parse_codecs('unknownvcodec, unknownacodec'), { self.assertEqual(parse_codecs('unknownvcodec, unknownacodec'), {
'vcodec': 'unknownvcodec', 'vcodec': 'unknownvcodec',
@@ -1054,6 +1073,9 @@ class TestUtil(unittest.TestCase):
on = js_to_json('{ "040": "040" }') on = js_to_json('{ "040": "040" }')
self.assertEqual(json.loads(on), {'040': '040'}) self.assertEqual(json.loads(on), {'040': '040'})
on = js_to_json('[1,//{},\n2]')
self.assertEqual(json.loads(on), [1, 2])
def test_js_to_json_malformed(self): def test_js_to_json_malformed(self):
self.assertEqual(js_to_json('42a1'), '42"a1"') self.assertEqual(js_to_json('42a1'), '42"a1"')
self.assertEqual(js_to_json('42a-1'), '42"a"-1') self.assertEqual(js_to_json('42a-1'), '42"a"-1')
@@ -1141,12 +1163,15 @@ class TestUtil(unittest.TestCase):
def test_parse_resolution(self): def test_parse_resolution(self):
self.assertEqual(parse_resolution(None), {}) self.assertEqual(parse_resolution(None), {})
self.assertEqual(parse_resolution(''), {}) self.assertEqual(parse_resolution(''), {})
self.assertEqual(parse_resolution('1920x1080'), {'width': 1920, 'height': 1080}) self.assertEqual(parse_resolution(' 1920x1080'), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution('1920×1080'), {'width': 1920, 'height': 1080}) self.assertEqual(parse_resolution('1920×1080 '), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution('1920 x 1080'), {'width': 1920, 'height': 1080}) self.assertEqual(parse_resolution('1920 x 1080'), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution('720p'), {'height': 720}) self.assertEqual(parse_resolution('720p'), {'height': 720})
self.assertEqual(parse_resolution('4k'), {'height': 2160}) self.assertEqual(parse_resolution('4k'), {'height': 2160})
self.assertEqual(parse_resolution('8K'), {'height': 4320}) self.assertEqual(parse_resolution('8K'), {'height': 4320})
self.assertEqual(parse_resolution('pre_1920x1080_post'), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution('ep1x2'), {})
self.assertEqual(parse_resolution('1920, 1080'), {'width': 1920, 'height': 1080})
def test_parse_bitrate(self): def test_parse_bitrate(self):
self.assertEqual(parse_bitrate(None), None) self.assertEqual(parse_bitrate(None), None)
@@ -1204,35 +1229,12 @@ ffmpeg version 2.4.4 Copyright (c) 2000-2014 the FFmpeg ...'''), '2.4.4')
'9999 51') '9999 51')
def test_match_str(self): def test_match_str(self):
self.assertRaises(ValueError, match_str, 'xy>foobar', {}) # Unary
self.assertFalse(match_str('xy', {'x': 1200})) self.assertFalse(match_str('xy', {'x': 1200}))
self.assertTrue(match_str('!xy', {'x': 1200})) self.assertTrue(match_str('!xy', {'x': 1200}))
self.assertTrue(match_str('x', {'x': 1200})) self.assertTrue(match_str('x', {'x': 1200}))
self.assertFalse(match_str('!x', {'x': 1200})) self.assertFalse(match_str('!x', {'x': 1200}))
self.assertTrue(match_str('x', {'x': 0})) self.assertTrue(match_str('x', {'x': 0}))
self.assertFalse(match_str('x>0', {'x': 0}))
self.assertFalse(match_str('x>0', {}))
self.assertTrue(match_str('x>?0', {}))
self.assertTrue(match_str('x>1K', {'x': 1200}))
self.assertFalse(match_str('x>2K', {'x': 1200}))
self.assertTrue(match_str('x>=1200 & x < 1300', {'x': 1200}))
self.assertFalse(match_str('x>=1100 & x < 1200', {'x': 1200}))
self.assertFalse(match_str('y=a212', {'y': 'foobar42'}))
self.assertTrue(match_str('y=foobar42', {'y': 'foobar42'}))
self.assertFalse(match_str('y!=foobar42', {'y': 'foobar42'}))
self.assertTrue(match_str('y!=foobar2', {'y': 'foobar42'}))
self.assertFalse(match_str(
'like_count > 100 & dislike_count <? 50 & description',
{'like_count': 90, 'description': 'foo'}))
self.assertTrue(match_str(
'like_count > 100 & dislike_count <? 50 & description',
{'like_count': 190, 'description': 'foo'}))
self.assertFalse(match_str(
'like_count > 100 & dislike_count <? 50 & description',
{'like_count': 190, 'dislike_count': 60, 'description': 'foo'}))
self.assertFalse(match_str(
'like_count > 100 & dislike_count <? 50 & description',
{'like_count': 190, 'dislike_count': 10}))
self.assertTrue(match_str('is_live', {'is_live': True})) self.assertTrue(match_str('is_live', {'is_live': True}))
self.assertFalse(match_str('is_live', {'is_live': False})) self.assertFalse(match_str('is_live', {'is_live': False}))
self.assertFalse(match_str('is_live', {'is_live': None})) self.assertFalse(match_str('is_live', {'is_live': None}))
@@ -1246,6 +1248,76 @@ ffmpeg version 2.4.4 Copyright (c) 2000-2014 the FFmpeg ...'''), '2.4.4')
self.assertFalse(match_str('!title', {'title': 'abc'})) self.assertFalse(match_str('!title', {'title': 'abc'}))
self.assertFalse(match_str('!title', {'title': ''})) self.assertFalse(match_str('!title', {'title': ''}))
# Numeric
self.assertFalse(match_str('x>0', {'x': 0}))
self.assertFalse(match_str('x>0', {}))
self.assertTrue(match_str('x>?0', {}))
self.assertTrue(match_str('x>1K', {'x': 1200}))
self.assertFalse(match_str('x>2K', {'x': 1200}))
self.assertTrue(match_str('x>=1200 & x < 1300', {'x': 1200}))
self.assertFalse(match_str('x>=1100 & x < 1200', {'x': 1200}))
self.assertTrue(match_str('x > 1:0:0', {'x': 3700}))
# String
self.assertFalse(match_str('y=a212', {'y': 'foobar42'}))
self.assertTrue(match_str('y=foobar42', {'y': 'foobar42'}))
self.assertFalse(match_str('y!=foobar42', {'y': 'foobar42'}))
self.assertTrue(match_str('y!=foobar2', {'y': 'foobar42'}))
self.assertTrue(match_str('y^=foo', {'y': 'foobar42'}))
self.assertFalse(match_str('y!^=foo', {'y': 'foobar42'}))
self.assertFalse(match_str('y^=bar', {'y': 'foobar42'}))
self.assertTrue(match_str('y!^=bar', {'y': 'foobar42'}))
self.assertRaises(ValueError, match_str, 'x^=42', {'x': 42})
self.assertTrue(match_str('y*=bar', {'y': 'foobar42'}))
self.assertFalse(match_str('y!*=bar', {'y': 'foobar42'}))
self.assertFalse(match_str('y*=baz', {'y': 'foobar42'}))
self.assertTrue(match_str('y!*=baz', {'y': 'foobar42'}))
self.assertTrue(match_str('y$=42', {'y': 'foobar42'}))
self.assertFalse(match_str('y$=43', {'y': 'foobar42'}))
# And
self.assertFalse(match_str(
'like_count > 100 & dislike_count <? 50 & description',
{'like_count': 90, 'description': 'foo'}))
self.assertTrue(match_str(
'like_count > 100 & dislike_count <? 50 & description',
{'like_count': 190, 'description': 'foo'}))
self.assertFalse(match_str(
'like_count > 100 & dislike_count <? 50 & description',
{'like_count': 190, 'dislike_count': 60, 'description': 'foo'}))
self.assertFalse(match_str(
'like_count > 100 & dislike_count <? 50 & description',
{'like_count': 190, 'dislike_count': 10}))
# Regex
self.assertTrue(match_str(r'x~=\bbar', {'x': 'foo bar'}))
self.assertFalse(match_str(r'x~=\bbar.+', {'x': 'foo bar'}))
self.assertFalse(match_str(r'x~=^FOO', {'x': 'foo bar'}))
self.assertTrue(match_str(r'x~=(?i)^FOO', {'x': 'foo bar'}))
# Quotes
self.assertTrue(match_str(r'x^="foo"', {'x': 'foo "bar"'}))
self.assertFalse(match_str(r'x^="foo "', {'x': 'foo "bar"'}))
self.assertFalse(match_str(r'x$="bar"', {'x': 'foo "bar"'}))
self.assertTrue(match_str(r'x$=" \"bar\""', {'x': 'foo "bar"'}))
# Escaping &
self.assertFalse(match_str(r'x=foo & bar', {'x': 'foo & bar'}))
self.assertTrue(match_str(r'x=foo \& bar', {'x': 'foo & bar'}))
self.assertTrue(match_str(r'x=foo \& bar & x^=foo', {'x': 'foo & bar'}))
self.assertTrue(match_str(r'x="foo \& bar" & x^=foo', {'x': 'foo & bar'}))
# Example from docs
self.assertTrue(match_str(
r"!is_live & like_count>?100 & description~='(?i)\bcats \& dogs\b'",
{'description': 'Raining Cats & Dogs'}))
# Incomplete
self.assertFalse(match_str('id!=foo', {'id': 'foo'}, True))
self.assertTrue(match_str('x', {'id': 'foo'}, True))
self.assertTrue(match_str('!x', {'id': 'foo'}, True))
self.assertFalse(match_str('x', {'id': 'foo'}, False))
def test_parse_dfxp_time_expr(self): def test_parse_dfxp_time_expr(self):
self.assertEqual(parse_dfxp_time_expr(None), None) self.assertEqual(parse_dfxp_time_expr(None), None)
self.assertEqual(parse_dfxp_time_expr(''), None) self.assertEqual(parse_dfxp_time_expr(''), None)
@@ -1321,21 +1393,21 @@ The first line
</body> </body>
</tt>'''.encode('utf-8') </tt>'''.encode('utf-8')
srt_data = '''1 srt_data = '''1
00:00:02,080 --> 00:00:05,839 00:00:02,080 --> 00:00:05,840
<font color="white" face="sansSerif" size="16">default style<font color="red">custom style</font></font> <font color="white" face="sansSerif" size="16">default style<font color="red">custom style</font></font>
2 2
00:00:02,080 --> 00:00:05,839 00:00:02,080 --> 00:00:05,840
<b><font color="cyan" face="sansSerif" size="16"><font color="lime">part 1 <b><font color="cyan" face="sansSerif" size="16"><font color="lime">part 1
</font>part 2</font></b> </font>part 2</font></b>
3 3
00:00:05,839 --> 00:00:09,560 00:00:05,840 --> 00:00:09,560
<u><font color="lime">line 3 <u><font color="lime">line 3
part 3</font></u> part 3</font></u>
4 4
00:00:09,560 --> 00:00:12,359 00:00:09,560 --> 00:00:12,360
<i><u><font color="yellow"><font color="lime">inner <i><u><font color="yellow"><font color="lime">inner
</font>style</font></u></i> </font>style</font></u></i>
@@ -1534,8 +1606,11 @@ Line 1
self.assertEqual(LazyList(it).exhaust(), it) self.assertEqual(LazyList(it).exhaust(), it)
self.assertEqual(LazyList(it)[5], it[5]) self.assertEqual(LazyList(it)[5], it[5])
self.assertEqual(LazyList(it)[5:], it[5:])
self.assertEqual(LazyList(it)[:5], it[:5])
self.assertEqual(LazyList(it)[::2], it[::2]) self.assertEqual(LazyList(it)[::2], it[::2])
self.assertEqual(LazyList(it)[1::2], it[1::2]) self.assertEqual(LazyList(it)[1::2], it[1::2])
self.assertEqual(LazyList(it)[5::-1], it[5::-1])
self.assertEqual(LazyList(it)[6:2:-2], it[6:2:-2]) self.assertEqual(LazyList(it)[6:2:-2], it[6:2:-2])
self.assertEqual(LazyList(it)[::-1], it[::-1]) self.assertEqual(LazyList(it)[::-1], it[::-1])
@@ -1545,8 +1620,9 @@ Line 1
self.assertEqual(repr(LazyList(it)), repr(it)) self.assertEqual(repr(LazyList(it)), repr(it))
self.assertEqual(str(LazyList(it)), str(it)) self.assertEqual(str(LazyList(it)), str(it))
self.assertEqual(list(reversed(LazyList(it))), it[::-1]) self.assertEqual(list(LazyList(it).reverse()), it[::-1])
self.assertEqual(list(reversed(LazyList(it))[1:3:7]), it[::-1][1:3:7]) self.assertEqual(list(LazyList(it).reverse()[1:3:7]), it[::-1][1:3:7])
self.assertEqual(list(LazyList(it).reverse()[::-1]), it)
def test_LazyList_laziness(self): def test_LazyList_laziness(self):
@@ -1559,13 +1635,13 @@ Line 1
test(ll, 5, 5, range(6)) test(ll, 5, 5, range(6))
test(ll, -3, 7, range(10)) test(ll, -3, 7, range(10))
ll = reversed(LazyList(range(10))) ll = LazyList(range(10)).reverse()
test(ll, -1, 0, range(1)) test(ll, -1, 0, range(1))
test(ll, 3, 6, range(10)) test(ll, 3, 6, range(10))
ll = LazyList(itertools.count()) ll = LazyList(itertools.count())
test(ll, 10, 10, range(11)) test(ll, 10, 10, range(11))
reversed(ll) ll.reverse()
test(ll, -15, 14, range(15)) test(ll, -15, 14, range(15))
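
The new match_str cases above double as a reference for the filter syntax behind --match-filter: ^=, $= and *= for prefix, suffix and substring matches, ~= for regexes, >? style optional comparisons, and \& for a literal ampersand. Assuming yt_dlp is importable, calling the helper directly looks like this (the info values are made up for illustration):

    from yt_dlp.utils import match_str

    info = {'description': 'Raining Cats & Dogs', 'like_count': 150}

    # the "Example from docs" case above, against a complete info dict
    assert match_str(r"!is_live & like_count>?100 & description~='(?i)\bcats \& dogs\b'", info)

    # the operators introduced in this change
    assert match_str('description^=Raining', info)   # starts with
    assert match_str('description$=Dogs', info)      # ends with
    assert match_str('description*=Cats', info)      # contains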


@@ -8,7 +8,7 @@ import sys
import unittest import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import get_params, try_rm from test.helper import get_params, try_rm, is_download_test
import io import io
@@ -38,6 +38,7 @@ ANNOTATIONS_FILE = TEST_ID + '.annotations.xml'
EXPECTED_ANNOTATIONS = ['Speech bubble', 'Note', 'Title', 'Spotlight', 'Label'] EXPECTED_ANNOTATIONS = ['Speech bubble', 'Note', 'Title', 'Spotlight', 'Label']
@is_download_test
class TestAnnotations(unittest.TestCase): class TestAnnotations(unittest.TestCase):
def setUp(self): def setUp(self):
# Clear old files # Clear old files


@@ -7,7 +7,7 @@ import sys
import unittest import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import FakeYDL from test.helper import FakeYDL, is_download_test
from yt_dlp.extractor import ( from yt_dlp.extractor import (
@@ -17,6 +17,7 @@ from yt_dlp.extractor import (
) )
@is_download_test
class TestYoutubeLists(unittest.TestCase): class TestYoutubeLists(unittest.TestCase):
def assertIsPlaylist(self, info): def assertIsPlaylist(self, info):
"""Make sure the info has '_type' set to 'playlist'""" """Make sure the info has '_type' set to 'playlist'"""


@@ -12,11 +12,12 @@ import io
import re import re
import string import string
from test.helper import FakeYDL from test.helper import FakeYDL, is_download_test
from yt_dlp.extractor import YoutubeIE from yt_dlp.extractor import YoutubeIE
from yt_dlp.jsinterp import JSInterpreter
from yt_dlp.compat import compat_str, compat_urlretrieve from yt_dlp.compat import compat_str, compat_urlretrieve
_TESTS = [ _SIG_TESTS = [
( (
'https://s.ytimg.com/yts/jsbin/html5player-vflHOr_nV.js', 'https://s.ytimg.com/yts/jsbin/html5player-vflHOr_nV.js',
86, 86,
@@ -64,7 +65,19 @@ _TESTS = [
) )
] ]
_NSIG_TESTS = [
(
'https://www.youtube.com/s/player/9216d1f7/player_ias.vflset/en_US/base.js',
'SLp9F5bwjAdhE9F-', 'gWnb9IK2DJ8Q1w',
),
(
'https://www.youtube.com/s/player/f8cb7a3b/player_ias.vflset/en_US/base.js',
'oBo2h5euWy6osrUt', 'ivXHpm7qJjJN',
),
]
@is_download_test
class TestPlayerInfo(unittest.TestCase): class TestPlayerInfo(unittest.TestCase):
def test_youtube_extract_player_info(self): def test_youtube_extract_player_info(self):
PLAYER_URLS = ( PLAYER_URLS = (
@@ -87,6 +100,7 @@ class TestPlayerInfo(unittest.TestCase):
self.assertEqual(player_id, expected_player_id) self.assertEqual(player_id, expected_player_id)
@is_download_test
class TestSignature(unittest.TestCase): class TestSignature(unittest.TestCase):
def setUp(self): def setUp(self):
TEST_DIR = os.path.dirname(os.path.abspath(__file__)) TEST_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -95,35 +109,49 @@ class TestSignature(unittest.TestCase):
os.mkdir(self.TESTDATA_DIR) os.mkdir(self.TESTDATA_DIR)
def make_tfunc(url, sig_input, expected_sig): def t_factory(name, sig_func, url_pattern):
m = re.match(r'.*-([a-zA-Z0-9_-]+)(?:/watch_as3|/html5player)?\.[a-z]+$', url) def make_tfunc(url, sig_input, expected_sig):
assert m, '%r should follow URL format' % url m = url_pattern.match(url)
test_id = m.group(1) assert m, '%r should follow URL format' % url
test_id = m.group('id')
def test_func(self): def test_func(self):
basename = 'player-%s.js' % test_id basename = f'player-{name}-{test_id}.js'
fn = os.path.join(self.TESTDATA_DIR, basename) fn = os.path.join(self.TESTDATA_DIR, basename)
if not os.path.exists(fn): if not os.path.exists(fn):
compat_urlretrieve(url, fn) compat_urlretrieve(url, fn)
with io.open(fn, encoding='utf-8') as testf:
jscode = testf.read()
self.assertEqual(sig_func(jscode, sig_input), expected_sig)
ydl = FakeYDL() test_func.__name__ = f'test_{name}_js_{test_id}'
ie = YoutubeIE(ydl) setattr(TestSignature, test_func.__name__, test_func)
with io.open(fn, encoding='utf-8') as testf: return make_tfunc
jscode = testf.read()
func = ie._parse_sig_js(jscode)
src_sig = (
compat_str(string.printable[:sig_input])
if isinstance(sig_input, int) else sig_input)
got_sig = func(src_sig)
self.assertEqual(got_sig, expected_sig)
test_func.__name__ = str('test_signature_js_' + test_id)
setattr(TestSignature, test_func.__name__, test_func)
for test_spec in _TESTS: def signature(jscode, sig_input):
make_tfunc(*test_spec) func = YoutubeIE(FakeYDL())._parse_sig_js(jscode)
src_sig = (
compat_str(string.printable[:sig_input])
if isinstance(sig_input, int) else sig_input)
return func(src_sig)
def n_sig(jscode, sig_input):
funcname = YoutubeIE(FakeYDL())._extract_n_function_name(jscode)
return JSInterpreter(jscode).call_function(funcname, sig_input)
make_sig_test = t_factory(
'signature', signature, re.compile(r'.*-(?P<id>[a-zA-Z0-9_-]+)(?:/watch_as3|/html5player)?\.[a-z]+$'))
for test_spec in _SIG_TESTS:
make_sig_test(*test_spec)
make_nsig_test = t_factory(
'nsig', n_sig, re.compile(r'.+/player/(?P<id>[a-zA-Z0-9_-]+)/.+.js$'))
for test_spec in _NSIG_TESTS:
make_nsig_test(*test_spec)
if __name__ == '__main__': if __name__ == '__main__':
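
Stripped of the YouTube specifics, the t_factory/make_tfunc rewrite above is the usual "generate unittest methods from a table via setattr" pattern. A minimal toy version of the same shape (all names here are illustrative only):

    import unittest

    class _ToyTests(unittest.TestCase):
        pass

    def t_factory(name, func):
        def make_tfunc(arg, expected):
            def test_func(self):
                self.assertEqual(func(arg), expected)
            test_func.__name__ = f'test_{name}_{arg}'
            setattr(_ToyTests, test_func.__name__, test_func)
        return make_tfunc

    make_double_test = t_factory('double', lambda x: x * 2)
    for test_spec in [(2, 4), (5, 10)]:
        make_double_test(*test_spec)   # adds test_double_2 and test_double_5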


@@ -1,5 +1,7 @@
[tox] [tox]
envlist = py26,py27,py33,py34,py35 envlist = py26,py27,py33,py34,py35
# Needed?
[testenv] [testenv]
deps = deps =
nose nose

File diff suppressed because it is too large


@@ -1,25 +1,27 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# coding: utf-8 # coding: utf-8
from __future__ import unicode_literals f'You are using an unsupported version of Python. Only Python versions 3.6 and above are supported by yt-dlp' # noqa: F541
__license__ = 'Public Domain' __license__ = 'Public Domain'
import codecs import codecs
import io import io
import itertools
import os import os
import random import random
import re import re
import sys import sys
from .options import ( from .options import (
parseOpts, parseOpts,
) )
from .compat import ( from .compat import (
compat_getpass, compat_getpass,
compat_shlex_quote,
workaround_optparse_bug9161, workaround_optparse_bug9161,
) )
from .cookies import SUPPORTED_BROWSERS
from .utils import ( from .utils import (
DateRange, DateRange,
decodeOption, decodeOption,
@@ -27,8 +29,11 @@ from .utils import (
error_to_compat_str, error_to_compat_str,
ExistingVideoReached, ExistingVideoReached,
expand_path, expand_path,
float_or_none,
int_or_none,
match_filter_func, match_filter_func,
MaxDownloadsReached, MaxDownloadsReached,
parse_duration,
preferredencoding, preferredencoding,
read_batch_urls, read_batch_urls,
RejectedVideoReached, RejectedVideoReached,
@@ -45,14 +50,15 @@ from .downloader import (
from .extractor import gen_extractors, list_extractors from .extractor import gen_extractors, list_extractors
from .extractor.common import InfoExtractor from .extractor.common import InfoExtractor
from .extractor.adobepass import MSO_INFO from .extractor.adobepass import MSO_INFO
from .postprocessor.ffmpeg import ( from .postprocessor import (
FFmpegExtractAudioPP, FFmpegExtractAudioPP,
FFmpegSubtitlesConvertorPP, FFmpegSubtitlesConvertorPP,
FFmpegThumbnailsConvertorPP, FFmpegThumbnailsConvertorPP,
FFmpegVideoConvertorPP, FFmpegVideoConvertorPP,
FFmpegVideoRemuxerPP, FFmpegVideoRemuxerPP,
MetadataFromFieldPP,
MetadataParserPP,
) )
from .postprocessor.metadatafromfield import MetadataFromFieldPP
from .YoutubeDL import YoutubeDL from .YoutubeDL import YoutubeDL
@@ -106,22 +112,22 @@ def _real_main(argv=None):
if opts.list_extractors: if opts.list_extractors:
for ie in list_extractors(opts.age_limit): for ie in list_extractors(opts.age_limit):
write_string(ie.IE_NAME + (' (CURRENTLY BROKEN)' if not ie._WORKING else '') + '\n', out=sys.stdout) write_string(ie.IE_NAME + (' (CURRENTLY BROKEN)' if not ie.working() else '') + '\n', out=sys.stdout)
matchedUrls = [url for url in all_urls if ie.suitable(url)] matchedUrls = [url for url in all_urls if ie.suitable(url)]
for mu in matchedUrls: for mu in matchedUrls:
write_string(' ' + mu + '\n', out=sys.stdout) write_string(' ' + mu + '\n', out=sys.stdout)
sys.exit(0) sys.exit(0)
if opts.list_extractor_descriptions: if opts.list_extractor_descriptions:
for ie in list_extractors(opts.age_limit): for ie in list_extractors(opts.age_limit):
if not ie._WORKING: if not ie.working():
continue continue
desc = getattr(ie, 'IE_DESC', ie.IE_NAME) desc = getattr(ie, 'IE_DESC', ie.IE_NAME)
if desc is False: if desc is False:
continue continue
if hasattr(ie, 'SEARCH_KEY'): if getattr(ie, 'SEARCH_KEY', None) is not None:
_SEARCHES = ('cute kittens', 'slithering pythons', 'falling cat', 'angry poodle', 'purple fish', 'running tortoise', 'sleeping bunny', 'burping cow') _SEARCHES = ('cute kittens', 'slithering pythons', 'falling cat', 'angry poodle', 'purple fish', 'running tortoise', 'sleeping bunny', 'burping cow')
_COUNTS = ('', '5', '10', 'all') _COUNTS = ('', '5', '10', 'all')
desc += ' (Example: "%s%s:%s" )' % (ie.SEARCH_KEY, random.choice(_COUNTS), random.choice(_SEARCHES)) desc += f'; "{ie.SEARCH_KEY}:" prefix (Example: "{ie.SEARCH_KEY}{random.choice(_COUNTS)}:{random.choice(_SEARCHES)}")'
write_string(desc + '\n', out=sys.stdout) write_string(desc + '\n', out=sys.stdout)
sys.exit(0) sys.exit(0)
if opts.ap_list_mso: if opts.ap_list_mso:
@@ -221,11 +227,13 @@ def _real_main(argv=None):
if opts.playlistend not in (-1, None) and opts.playlistend < opts.playliststart: if opts.playlistend not in (-1, None) and opts.playlistend < opts.playliststart:
raise ValueError('Playlist end must be greater than playlist start') raise ValueError('Playlist end must be greater than playlist start')
if opts.extractaudio: if opts.extractaudio:
opts.audioformat = opts.audioformat.lower()
if opts.audioformat not in ['best'] + list(FFmpegExtractAudioPP.SUPPORTED_EXTS): if opts.audioformat not in ['best'] + list(FFmpegExtractAudioPP.SUPPORTED_EXTS):
parser.error('invalid audio format specified') parser.error('invalid audio format specified')
if opts.audioquality: if opts.audioquality:
opts.audioquality = opts.audioquality.strip('k').strip('K') opts.audioquality = opts.audioquality.strip('k').strip('K')
if not opts.audioquality.isdigit(): audioquality = int_or_none(float_or_none(opts.audioquality)) # int_or_none prevents inf, nan
if audioquality is None or audioquality < 0:
parser.error('invalid audio quality specified') parser.error('invalid audio quality specified')
if opts.recodevideo is not None: if opts.recodevideo is not None:
opts.recodevideo = opts.recodevideo.replace(' ', '') opts.recodevideo = opts.recodevideo.replace(' ', '')
@@ -242,40 +250,21 @@ def _real_main(argv=None):
if opts.convertthumbnails not in FFmpegThumbnailsConvertorPP.SUPPORTED_EXTS: if opts.convertthumbnails not in FFmpegThumbnailsConvertorPP.SUPPORTED_EXTS:
parser.error('invalid thumbnail format specified') parser.error('invalid thumbnail format specified')
if opts.cookiesfrombrowser is not None:
opts.cookiesfrombrowser = [
part.strip() or None for part in opts.cookiesfrombrowser.split(':', 1)]
if opts.cookiesfrombrowser[0].lower() not in SUPPORTED_BROWSERS:
parser.error('unsupported browser specified for cookies')
if opts.date is not None: if opts.date is not None:
date = DateRange.day(opts.date) date = DateRange.day(opts.date)
else: else:
date = DateRange(opts.dateafter, opts.datebefore) date = DateRange(opts.dateafter, opts.datebefore)
def parse_compat_opts(): compat_opts = opts.compat_opts
parsed_compat_opts, compat_opts = set(), opts.compat_opts[::-1]
while compat_opts:
actual_opt = opt = compat_opts.pop().lower()
if opt == 'youtube-dl':
compat_opts.extend(['-multistreams', 'all'])
elif opt == 'youtube-dlc':
compat_opts.extend(['-no-youtube-channel-redirect', '-no-live-chat', 'all'])
elif opt == 'all':
parsed_compat_opts.update(all_compat_opts)
elif opt == '-all':
parsed_compat_opts = set()
else:
if opt[0] == '-':
opt = opt[1:]
parsed_compat_opts.discard(opt)
else:
parsed_compat_opts.update([opt])
if opt not in all_compat_opts:
parser.error('Invalid compatibility option %s' % actual_opt)
return parsed_compat_opts
all_compat_opts = [ def report_conflict(arg1, arg2):
'filename', 'format-sort', 'abort-on-error', 'format-spec', 'no-playlist-metafiles', warnings.append(f'{arg2} is ignored since {arg1} was given')
'multistreams', 'no-live-chat', 'playlist-index', 'list-formats', 'no-direct-merge',
'no-youtube-channel-redirect', 'no-youtube-unavailable-videos', 'no-attach-info-json',
'embed-thumbnail-atomicparsley',
]
compat_opts = parse_compat_opts()
def _unused_compat_opt(name): def _unused_compat_opt(name):
if name not in compat_opts: if name not in compat_opts:
@@ -284,7 +273,7 @@ def _real_main(argv=None):
compat_opts.update(['*%s' % name]) compat_opts.update(['*%s' % name])
return True return True
def set_default_compat(compat_name, opt_name, default=True, remove_compat=False): def set_default_compat(compat_name, opt_name, default=True, remove_compat=True):
attr = getattr(opts, opt_name) attr = getattr(opts, opt_name)
if compat_name in compat_opts: if compat_name in compat_opts:
if attr is None: if attr is None:
@@ -298,8 +287,9 @@ def _real_main(argv=None):
setattr(opts, opt_name, default) setattr(opts, opt_name, default)
return None return None
set_default_compat('abort-on-error', 'ignoreerrors') set_default_compat('abort-on-error', 'ignoreerrors', 'only_download')
set_default_compat('no-playlist-metafiles', 'allow_playlist_files') set_default_compat('no-playlist-metafiles', 'allow_playlist_files')
set_default_compat('no-clean-infojson', 'clean_infojson')
if 'format-sort' in compat_opts: if 'format-sort' in compat_opts:
opts.format_sort.extend(InfoExtractor.FormatSort.ytdl_default) opts.format_sort.extend(InfoExtractor.FormatSort.ytdl_default)
_video_multistreams_set = set_default_compat('multistreams', 'allow_multiple_video_streams', False, remove_compat=False) _video_multistreams_set = set_default_compat('multistreams', 'allow_multiple_video_streams', False, remove_compat=False)
@@ -307,10 +297,14 @@ def _real_main(argv=None):
if _video_multistreams_set is False and _audio_multistreams_set is False: if _video_multistreams_set is False and _audio_multistreams_set is False:
_unused_compat_opt('multistreams') _unused_compat_opt('multistreams')
outtmpl_default = opts.outtmpl.get('default') outtmpl_default = opts.outtmpl.get('default')
if opts.useid:
if outtmpl_default is None:
outtmpl_default = opts.outtmpl['default'] = '%(id)s.%(ext)s'
else:
report_conflict('--output', '--id')
if 'filename' in compat_opts: if 'filename' in compat_opts:
if outtmpl_default is None: if outtmpl_default is None:
outtmpl_default = '%(title)s.%(id)s.%(ext)s' outtmpl_default = opts.outtmpl['default'] = '%(title)s-%(id)s.%(ext)s'
opts.outtmpl.update({'default': outtmpl_default})
else: else:
_unused_compat_opt('filename') _unused_compat_opt('filename')
@@ -320,9 +314,14 @@ def _real_main(argv=None):
parser.error('invalid %s %r: %s' % (msg, tmpl, error_to_compat_str(err))) parser.error('invalid %s %r: %s' % (msg, tmpl, error_to_compat_str(err)))
for k, tmpl in opts.outtmpl.items(): for k, tmpl in opts.outtmpl.items():
validate_outtmpl(tmpl, '%s output template' % k) validate_outtmpl(tmpl, f'{k} output template')
for tmpl in opts.forceprint: opts.forceprint = opts.forceprint or []
for tmpl in opts.forceprint or []:
validate_outtmpl(tmpl, 'print template') validate_outtmpl(tmpl, 'print template')
validate_outtmpl(opts.sponsorblock_chapter_title, 'SponsorBlock chapter title')
for k, tmpl in opts.progress_template.items():
k = f'{k[:-6]} console title' if '-title' in k else f'{k} progress'
validate_outtmpl(tmpl, f'{k} template')
if opts.extractaudio and not opts.keepvideo and opts.format is None: if opts.extractaudio and not opts.keepvideo and opts.format is None:
opts.format = 'bestaudio/best' opts.format = 'bestaudio/best'
@@ -336,13 +335,29 @@ def _real_main(argv=None):
if re.match(InfoExtractor.FormatSort.regex, f) is None: if re.match(InfoExtractor.FormatSort.regex, f) is None:
parser.error('invalid format sort string "%s" specified' % f) parser.error('invalid format sort string "%s" specified' % f)
if opts.metafromfield is None: def metadataparser_actions(f):
opts.metafromfield = [] if isinstance(f, str):
cmd = '--parse-metadata %s' % compat_shlex_quote(f)
try:
actions = [MetadataFromFieldPP.to_action(f)]
except Exception as err:
parser.error(f'{cmd} is invalid; {err}')
else:
cmd = '--replace-in-metadata %s' % ' '.join(map(compat_shlex_quote, f))
actions = ((MetadataParserPP.Actions.REPLACE, x, *f[1:]) for x in f[0].split(','))
for action in actions:
try:
MetadataParserPP.validate_action(*action)
except Exception as err:
parser.error(f'{cmd} is invalid; {err}')
yield action
if opts.parse_metadata is None:
opts.parse_metadata = []
if opts.metafromtitle is not None: if opts.metafromtitle is not None:
opts.metafromfield.append('title:%s' % opts.metafromtitle) opts.parse_metadata.append('title:%s' % opts.metafromtitle)
for f in opts.metafromfield: opts.parse_metadata = list(itertools.chain(*map(metadataparser_actions, opts.parse_metadata)))
if re.match(MetadataFromFieldPP.regex, f) is None:
parser.error('invalid format string "%s" specified for --parse-metadata' % f)
any_getting = opts.forceprint or opts.geturl or opts.gettitle or opts.getid or opts.getthumbnail or opts.getdescription or opts.getfilename or opts.getformat or opts.getduration or opts.dumpjson or opts.dump_single_json any_getting = opts.forceprint or opts.geturl or opts.gettitle or opts.getid or opts.getthumbnail or opts.getdescription or opts.getfilename or opts.getformat or opts.getduration or opts.dumpjson or opts.dump_single_json
any_printing = opts.print_json any_printing = opts.print_json
@@ -353,15 +368,31 @@ def _real_main(argv=None):
if opts.getcomments and not printing_json: if opts.getcomments and not printing_json:
opts.writeinfojson = True opts.writeinfojson = True
def report_conflict(arg1, arg2): if opts.no_sponsorblock:
warnings.append('%s is ignored since %s was given' % (arg2, arg1)) opts.sponsorblock_mark = set()
opts.sponsorblock_remove = set()
sponsorblock_query = opts.sponsorblock_mark | opts.sponsorblock_remove
if (opts.addmetadata or opts.sponsorblock_mark) and opts.addchapters is None:
opts.addchapters = True
opts.remove_chapters = opts.remove_chapters or []
if (opts.remove_chapters or sponsorblock_query) and opts.sponskrub is not False:
if opts.sponskrub:
if opts.remove_chapters:
report_conflict('--remove-chapters', '--sponskrub')
if opts.sponsorblock_mark:
report_conflict('--sponsorblock-mark', '--sponskrub')
if opts.sponsorblock_remove:
report_conflict('--sponsorblock-remove', '--sponskrub')
opts.sponskrub = False
if opts.sponskrub_cut and opts.split_chapters and opts.sponskrub is not False:
report_conflict('--split-chapter', '--sponskrub-cut')
opts.sponskrub_cut = False
if opts.remuxvideo and opts.recodevideo: if opts.remuxvideo and opts.recodevideo:
report_conflict('--recode-video', '--remux-video') report_conflict('--recode-video', '--remux-video')
opts.remuxvideo = False opts.remuxvideo = False
if opts.sponskrub_cut and opts.split_chapters and opts.sponskrub is not False:
report_conflict('--split-chapter', '--sponskrub-cut')
opts.sponskrub_cut = False
if opts.allow_unplayable_formats: if opts.allow_unplayable_formats:
if opts.extractaudio: if opts.extractaudio:
@@ -388,16 +419,30 @@ def _real_main(argv=None):
if opts.fixup and opts.fixup.lower() not in ('never', 'ignore'): if opts.fixup and opts.fixup.lower() not in ('never', 'ignore'):
report_conflict('--allow-unplayable-formats', '--fixup') report_conflict('--allow-unplayable-formats', '--fixup')
opts.fixup = 'never' opts.fixup = 'never'
if opts.remove_chapters:
report_conflict('--allow-unplayable-formats', '--remove-chapters')
opts.remove_chapters = []
if opts.sponsorblock_remove:
report_conflict('--allow-unplayable-formats', '--sponsorblock-remove')
opts.sponsorblock_remove = set()
if opts.sponskrub: if opts.sponskrub:
report_conflict('--allow-unplayable-formats', '--sponskrub') report_conflict('--allow-unplayable-formats', '--sponskrub')
opts.sponskrub = False opts.sponskrub = False
# PostProcessors # PostProcessors
postprocessors = [] postprocessors = list(opts.add_postprocessors)
if opts.metafromfield: if sponsorblock_query:
postprocessors.append({ postprocessors.append({
'key': 'MetadataFromField', 'key': 'SponsorBlock',
'formats': opts.metafromfield, 'categories': sponsorblock_query,
'api': opts.sponsorblock_api,
# Run this immediately after extraction is complete
'when': 'pre_process'
})
if opts.parse_metadata:
postprocessors.append({
'key': 'MetadataParser',
'actions': opts.parse_metadata,
# Run this immediately after extraction is complete # Run this immediately after extraction is complete
'when': 'pre_process' 'when': 'pre_process'
}) })
@@ -415,6 +460,13 @@ def _real_main(argv=None):
# Run this before the actual video download # Run this before the actual video download
'when': 'before_dl' 'when': 'before_dl'
}) })
# Must be after all other before_dl
if opts.exec_before_dl_cmd:
postprocessors.append({
'key': 'Exec',
'exec_cmd': opts.exec_before_dl_cmd,
'when': 'before_dl'
})
if opts.extractaudio: if opts.extractaudio:
postprocessors.append({ postprocessors.append({
'key': 'FFmpegExtractAudio', 'key': 'FFmpegExtractAudio',
@@ -432,29 +484,55 @@ def _real_main(argv=None):
'key': 'FFmpegVideoConvertor',
'preferedformat': opts.recodevideo,
})
# FFmpegMetadataPP should be run after FFmpegVideoConvertorPP and
# FFmpegExtractAudioPP as containers before conversion may not support
# metadata (3gp, webm, etc.)
# And this post-processor should be placed before other metadata
# manipulating post-processors (FFmpegEmbedSubtitle) to prevent loss of
# extra metadata. By default ffmpeg preserves metadata applicable for both
# source and target containers. From this point the container won't change,
# so metadata can be added here.
if opts.addmetadata:
postprocessors.append({'key': 'FFmpegMetadata'})
# If ModifyChapters is going to remove chapters, subtitles must already be in the container.
if opts.embedsubtitles:
already_have_subtitle = opts.writesubtitles
already_have_subtitle = opts.writesubtitles and 'no-keep-subs' not in compat_opts
postprocessors.append({
'key': 'FFmpegEmbedSubtitle',
# already_have_subtitle = True prevents the file from being deleted after embedding
'already_have_subtitle': already_have_subtitle
})
if not already_have_subtitle:
if not opts.writeautomaticsub and 'no-keep-subs' not in compat_opts:
opts.writesubtitles = True
# --all-sub automatically sets --write-sub if --write-auto-sub is not given
# this was the old behaviour if only --all-sub was given.
if opts.allsubtitles and not opts.writeautomaticsub:
opts.writesubtitles = True
# ModifyChapters must run before FFmpegMetadataPP
remove_chapters_patterns, remove_ranges = [], []
for regex in opts.remove_chapters:
if regex.startswith('*'):
dur = list(map(parse_duration, regex[1:].split('-')))
if len(dur) == 2 and all(t is not None for t in dur):
remove_ranges.append(tuple(dur))
continue
parser.error(f'invalid --remove-chapters time range {regex!r}. Must be of the form ?start-end')
try:
remove_chapters_patterns.append(re.compile(regex))
except re.error as err:
parser.error(f'invalid --remove-chapters regex {regex!r} - {err}')
if opts.remove_chapters or sponsorblock_query:
postprocessors.append({
'key': 'ModifyChapters',
'remove_chapters_patterns': remove_chapters_patterns,
'remove_sponsor_segments': opts.sponsorblock_remove,
'remove_ranges': remove_ranges,
'sponsorblock_chapter_title': opts.sponsorblock_chapter_title,
'force_keyframes': opts.force_keyframes_at_cuts
})
# FFmpegMetadataPP should be run after FFmpegVideoConvertorPP and
# FFmpegExtractAudioPP as containers before conversion may not support
# metadata (3gp, webm, etc.)
# By default ffmpeg preserves metadata applicable for both
# source and target containers. From this point the container won't change,
# so metadata can be added here.
if opts.addmetadata or opts.addchapters:
postprocessors.append({
'key': 'FFmpegMetadata',
'add_chapters': opts.addchapters,
'add_metadata': opts.addmetadata,
})
# Note: Deprecated
# This should be above EmbedThumbnail since sponskrub removes the thumbnail attachment
# but must be below EmbedSubtitle and FFmpegMetadata
# See https://github.com/yt-dlp/yt-dlp/issues/204 , https://github.com/faissaloo/SponSkrub/issues/29
@@ -477,15 +555,19 @@ def _real_main(argv=None):
})
if not already_have_thumbnail:
opts.writethumbnail = True
opts.outtmpl['pl_thumbnail'] = ''
if opts.split_chapters:
postprocessors.append({'key': 'FFmpegSplitChapters'})
postprocessors.append({
'key': 'FFmpegSplitChapters',
'force_keyframes': opts.force_keyframes_at_cuts,
})
# XAttrMetadataPP should be run after post-processors that may change file contents
if opts.xattrs:
postprocessors.append({'key': 'XAttrMetadata'})
# ExecAfterDownload must be the last PP
# Exec must be the last PP
if opts.exec_cmd:
postprocessors.append({
'key': 'ExecAfterDownload',
'key': 'Exec',
'exec_cmd': opts.exec_cmd,
# Run this only after the files have been moved to their final locations
'when': 'after_move'
@@ -514,6 +596,7 @@ def _real_main(argv=None):
ydl_opts = {
'usenetrc': opts.usenetrc,
'netrc_location': opts.netrc_location,
'username': opts.username,
'password': opts.password,
'twofactor': opts.twofactor,
@@ -535,7 +618,7 @@ def _real_main(argv=None):
'forcejson': opts.dumpjson or opts.print_json,
'dump_single_json': opts.dump_single_json,
'force_write_download_archive': opts.force_write_download_archive,
'simulate': opts.simulate or any_getting,
'simulate': (any_getting or None) if opts.simulate is None else opts.simulate,
'skip_download': opts.skip_download,
'format': opts.format,
'allow_unplayable_formats': opts.allow_unplayable_formats,
@@ -569,8 +652,9 @@ def _real_main(argv=None):
'noresizebuffer': opts.noresizebuffer,
'http_chunk_size': opts.http_chunk_size,
'continuedl': opts.continue_dl,
'noprogress': opts.noprogress,
'noprogress': opts.quiet if opts.noprogress is None else opts.noprogress,
'progress_with_newline': opts.progress_with_newline,
'progress_template': opts.progress_template,
'playliststart': opts.playliststart,
'playlistend': opts.playlistend,
'playlistreverse': opts.playlist_reverse,
@@ -621,6 +705,7 @@ def _real_main(argv=None):
'break_on_reject': opts.break_on_reject,
'skip_playlist_after_errors': opts.skip_playlist_after_errors,
'cookiefile': opts.cookiefile,
'cookiesfrombrowser': opts.cookiesfrombrowser,
'nocheckcertificate': opts.no_check_certificate,
'prefer_insecure': opts.prefer_insecure,
'proxy': opts.proxy,
@@ -631,6 +716,7 @@ def _real_main(argv=None):
'include_ads': opts.include_ads,
'default_search': opts.default_search,
'dynamic_mpd': opts.dynamic_mpd,
'extractor_args': opts.extractor_args,
'youtube_include_dash_manifest': opts.youtube_include_dash_manifest,
'youtube_include_hls_manifest': opts.youtube_include_hls_manifest,
'encoding': opts.encoding,
@@ -663,12 +749,8 @@ def _real_main(argv=None):
'geo_bypass': opts.geo_bypass,
'geo_bypass_country': opts.geo_bypass_country,
'geo_bypass_ip_block': opts.geo_bypass_ip_block,
'warnings': warnings,
'_warnings': warnings,
'compat_opts': compat_opts,
# just for deprecation check
'autonumber': opts.autonumber or None,
'usetitle': opts.usetitle or None,
'useid': opts.useid or None,
}
with YoutubeDL(ydl_opts) as ydl:
@@ -713,10 +795,15 @@ def main(argv=None):
_real_main(argv)
except DownloadError:
sys.exit(1)
except SameFileError:
except SameFileError as e:
sys.exit('ERROR: fixed output name but more than one file to download')
sys.exit(f'ERROR: {e}')
except KeyboardInterrupt:
sys.exit('\nERROR: Interrupted by user')
except BrokenPipeError as e:
# https://docs.python.org/3/library/signal.html#note-on-sigpipe
devnull = os.open(os.devnull, os.O_WRONLY)
os.dup2(devnull, sys.stdout.fileno())
sys.exit(f'\nERROR: {e}')
__all__ = ['main', 'YoutubeDL', 'gen_extractors', 'list_extractors']
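The postprocessor chain assembled above can also be driven through the embedding API. A minimal sketch, assuming a current yt-dlp install (the option values are illustrative, not taken from this diff):

import yt_dlp

ydl_opts = {
    'postprocessors': [
        # mirrors the 'pre_process' SponsorBlock entry built in _real_main
        {'key': 'SponsorBlock', 'categories': ['sponsor'], 'when': 'pre_process'},
        # ModifyChapters cuts the marked segments and runs before FFmpegMetadata
        {'key': 'ModifyChapters', 'remove_sponsor_segments': ['sponsor']},
        {'key': 'FFmpegMetadata', 'add_chapters': True, 'add_metadata': True},
    ],
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])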

View File

@@ -2,36 +2,68 @@ from __future__ import unicode_literals
from math import ceil
from .compat import compat_b64decode
from .compat import compat_b64decode, compat_pycrypto_AES
from .utils import bytes_to_intlist, intlist_to_bytes
if compat_pycrypto_AES:
def aes_cbc_decrypt_bytes(data, key, iv):
""" Decrypt bytes with AES-CBC using pycryptodome """
return compat_pycrypto_AES.new(key, compat_pycrypto_AES.MODE_CBC, iv).decrypt(data)
def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
""" Decrypt bytes with AES-GCM using pycryptodome """
return compat_pycrypto_AES.new(key, compat_pycrypto_AES.MODE_GCM, nonce).decrypt_and_verify(data, tag)
else:
def aes_cbc_decrypt_bytes(data, key, iv):
""" Decrypt bytes with AES-CBC using native implementation since pycryptodome is unavailable """
return intlist_to_bytes(aes_cbc_decrypt(*map(bytes_to_intlist, (data, key, iv))))
def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
""" Decrypt bytes with AES-GCM using native implementation since pycryptodome is unavailable """
return intlist_to_bytes(aes_gcm_decrypt_and_verify(*map(bytes_to_intlist, (data, key, tag, nonce))))
BLOCK_SIZE_BYTES = 16
def aes_ctr_decrypt(data, key, counter):
def aes_ctr_decrypt(data, key, iv):
"""
Decrypt with aes in counter mode
@param {int[]} data cipher
@param {int[]} key 16/24/32-Byte cipher key
@param {instance} counter Instance whose next_value function (@returns {int[]} 16-Byte block)
returns the next counter block
@param {int[]} iv 16-Byte initialization vector
@returns {int[]} decrypted data
"""
return aes_ctr_encrypt(data, key, iv)
def aes_ctr_encrypt(data, key, iv):
"""
Encrypt with aes in counter mode
@param {int[]} data cleartext
@param {int[]} key 16/24/32-Byte cipher key
@param {int[]} iv 16-Byte initialization vector
@returns {int[]} encrypted data
"""
expanded_key = key_expansion(key)
block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
counter = iter_vector(iv)
decrypted_data = []
encrypted_data = []
for i in range(block_count):
counter_block = counter.next_value()
counter_block = next(counter)
block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
block += [0] * (BLOCK_SIZE_BYTES - len(block))
cipher_counter_block = aes_encrypt(counter_block, expanded_key)
decrypted_data += xor(block, cipher_counter_block)
encrypted_data += xor(block, cipher_counter_block)
decrypted_data = decrypted_data[:len(data)]
encrypted_data = encrypted_data[:len(data)]
return decrypted_data
return encrypted_data
def aes_cbc_decrypt(data, key, iv):
@@ -88,39 +120,47 @@ def aes_cbc_encrypt(data, key, iv):
return encrypted_data
def aes_gcm_decrypt_and_verify(data, key, tag, nonce):
"""
Decrypt with aes in GBM mode and checks authenticity using tag
@param {int[]} data cipher
@param {int[]} key 16-Byte cipher key
@param {int[]} tag authentication tag
@param {int[]} nonce IV (recommended 12-Byte)
@returns {int[]} decrypted data
"""
# XXX: check aes, gcm param
hash_subkey = aes_encrypt([0] * BLOCK_SIZE_BYTES, key_expansion(key))
if len(nonce) == 12:
j0 = nonce + [0, 0, 0, 1]
else:
fill = (BLOCK_SIZE_BYTES - (len(nonce) % BLOCK_SIZE_BYTES)) % BLOCK_SIZE_BYTES + 8
ghash_in = nonce + [0] * fill + bytes_to_intlist((8 * len(nonce)).to_bytes(8, 'big'))
j0 = ghash(hash_subkey, ghash_in)
# TODO: add nonce support to aes_ctr_decrypt
# nonce_ctr = j0[:12]
iv_ctr = inc(j0)
decrypted_data = aes_ctr_decrypt(data, key, iv_ctr + [0] * (BLOCK_SIZE_BYTES - len(iv_ctr)))
pad_len = len(data) // 16 * 16
s_tag = ghash(
hash_subkey,
data
+ [0] * (BLOCK_SIZE_BYTES - len(data) + pad_len) # pad
+ bytes_to_intlist((0 * 8).to_bytes(8, 'big') # length of associated data
+ ((len(data) * 8).to_bytes(8, 'big'))) # length of data
)
if tag != aes_ctr_encrypt(s_tag, key, j0):
raise ValueError("Mismatching authentication tag")
return decrypted_data
def aes_encrypt(data, expanded_key):
@@ -138,7 +178,7 @@ def aes_encrypt(data, expanded_key):
data = sub_bytes(data)
data = shift_rows(data)
if i != rounds:
data = mix_columns(data)
data = list(iter_mix_columns(data, MIX_COLUMN_MATRIX))
data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES])
return data
@@ -157,7 +197,7 @@ def aes_decrypt(data, expanded_key):
for i in range(rounds, 0, -1):
data = xor(data, expanded_key[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES])
if i != rounds:
data = mix_columns_inv(data)
data = list(iter_mix_columns(data, MIX_COLUMN_MATRIX_INV))
data = shift_rows_inv(data)
data = sub_bytes_inv(data)
data = xor(data, expanded_key[:BLOCK_SIZE_BYTES])
@@ -189,15 +229,7 @@ def aes_decrypt_text(data, password, key_size_bytes):
nonce = data[:NONCE_LENGTH_BYTES]
cipher = data[NONCE_LENGTH_BYTES:]
class Counter(object):
__value = nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES)
def next_value(self):
temp = self.__value
self.__value = inc(self.__value)
return temp
decrypted_data = aes_ctr_decrypt(cipher, key, Counter())
decrypted_data = aes_ctr_decrypt(cipher, key, nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES))
plaintext = intlist_to_bytes(decrypted_data)
return plaintext
@@ -278,6 +310,47 @@ RIJNDAEL_LOG_TABLE = (0x00, 0x00, 0x19, 0x01, 0x32, 0x02, 0x1a, 0xc6, 0x4b, 0xc7
0x67, 0x4a, 0xed, 0xde, 0xc5, 0x31, 0xfe, 0x18, 0x0d, 0x63, 0x8c, 0x80, 0xc0, 0xf7, 0x70, 0x07)
def key_expansion(data):
"""
Generate key schedule
@param {int[]} data 16/24/32-Byte cipher key
@returns {int[]} 176/208/240-Byte expanded key
"""
data = data[:] # copy
rcon_iteration = 1
key_size_bytes = len(data)
expanded_key_size_bytes = (key_size_bytes // 4 + 7) * BLOCK_SIZE_BYTES
while len(data) < expanded_key_size_bytes:
temp = data[-4:]
temp = key_schedule_core(temp, rcon_iteration)
rcon_iteration += 1
data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
for _ in range(3):
temp = data[-4:]
data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
if key_size_bytes == 32:
temp = data[-4:]
temp = sub_bytes(temp)
data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
for _ in range(3 if key_size_bytes == 32 else 2 if key_size_bytes == 24 else 0):
temp = data[-4:]
data += xor(temp, data[-key_size_bytes: 4 - key_size_bytes])
data = data[:expanded_key_size_bytes]
return data
def iter_vector(iv):
while True:
yield iv
iv = inc(iv)
def sub_bytes(data):
return [SBOX[x] for x in data]
@@ -302,48 +375,36 @@ def xor(data1, data2):
return [x ^ y for x, y in zip(data1, data2)]
def rijndael_mul(a, b):
if(a == 0 or b == 0):
return 0
return RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[a] + RIJNDAEL_LOG_TABLE[b]) % 0xFF]
def mix_column(data, matrix):
data_mixed = []
for row in range(4):
mixed = 0
for column in range(4):
# xor is (+) and (-)
mixed ^= rijndael_mul(data[column], matrix[row][column])
data_mixed.append(mixed)
return data_mixed
def mix_columns(data, matrix=MIX_COLUMN_MATRIX):
data_mixed = []
for i in range(4):
column = data[i * 4: (i + 1) * 4]
data_mixed += mix_column(column, matrix)
return data_mixed
def mix_columns_inv(data):
return mix_columns(data, MIX_COLUMN_MATRIX_INV)
def shift_rows(data):
data_shifted = []
for column in range(4):
for row in range(4):
data_shifted.append(data[((column + row) & 0b11) * 4 + row])
return data_shifted
def shift_rows_inv(data):
data_shifted = []
for column in range(4):
for row in range(4):
data_shifted.append(data[((column - row) & 0b11) * 4 + row])
return data_shifted
def iter_mix_columns(data, matrix):
for i in (0, 4, 8, 12):
for row in matrix:
mixed = 0
for j in range(4):
# xor is (+) and (-)
mixed ^= (0 if data[i:i + 4][j] == 0 or row[j] == 0 else
RIJNDAEL_EXP_TABLE[(RIJNDAEL_LOG_TABLE[data[i + j]] + RIJNDAEL_LOG_TABLE[row[j]]) % 0xFF])
yield mixed
def shift_rows(data):
return [data[((column + row) & 0b11) * 4 + row] for column in range(4) for row in range(4)]
def shift_rows_inv(data):
return [data[((column - row) & 0b11) * 4 + row] for column in range(4) for row in range(4)]
def shift_block(data):
data_shifted = []
bit = 0
for n in data:
if bit:
n |= 0x100
bit = n & 1
n >>= 1
data_shifted.append(n)
return data_shifted
@@ -358,4 +419,50 @@ def inc(data):
return data
__all__ = ['aes_encrypt', 'key_expansion', 'aes_ctr_decrypt', 'aes_cbc_decrypt', 'aes_decrypt_text']
def block_product(block_x, block_y):
# NIST SP 800-38D, Algorithm 1
if len(block_x) != BLOCK_SIZE_BYTES or len(block_y) != BLOCK_SIZE_BYTES:
raise ValueError("Length of blocks need to be %d bytes" % BLOCK_SIZE_BYTES)
block_r = [0xE1] + [0] * (BLOCK_SIZE_BYTES - 1)
block_v = block_y[:]
block_z = [0] * BLOCK_SIZE_BYTES
for i in block_x:
for bit in range(7, -1, -1):
if i & (1 << bit):
block_z = xor(block_z, block_v)
do_xor = block_v[-1] & 1
block_v = shift_block(block_v)
if do_xor:
block_v = xor(block_v, block_r)
return block_z
def ghash(subkey, data):
# NIST SP 800-38D, Algorithm 2
if len(data) % BLOCK_SIZE_BYTES:
raise ValueError("Length of data should be %d bytes" % BLOCK_SIZE_BYTES)
last_y = [0] * BLOCK_SIZE_BYTES
for i in range(0, len(data), BLOCK_SIZE_BYTES):
block = data[i : i + BLOCK_SIZE_BYTES] # noqa: E203
last_y = block_product(xor(last_y, block), subkey)
return last_y
__all__ = [
'aes_ctr_decrypt',
'aes_cbc_decrypt',
'aes_cbc_decrypt_bytes',
'aes_decrypt_text',
'aes_encrypt',
'aes_gcm_decrypt_and_verify',
'aes_gcm_decrypt_and_verify_bytes',
'key_expansion'
]
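A quick sanity check one could run against the refactored helpers (a sketch, not part of the commit). CTR decryption simply re-applies the keystream, so a round trip reproduces the plaintext:

from yt_dlp.aes import aes_ctr_decrypt, aes_ctr_encrypt
from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes

key = bytes_to_intlist(b'0123456789abcdef')   # 16-byte AES key
iv = bytes_to_intlist(b'\x00' * 16)           # initial 16-byte counter block
plaintext = b'any length of data works here'

ciphertext = aes_ctr_encrypt(bytes_to_intlist(plaintext), key, iv)
assert intlist_to_bytes(aes_ctr_decrypt(ciphertext, key, iv)) == plaintext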

View File

@@ -50,6 +50,7 @@ class Cache(object):
except OSError as ose:
if ose.errno != errno.EEXIST:
raise
self._ydl.write_debug(f'Saving {section}.{key} to cache')
write_json_file(data, fn)
except Exception:
tb = traceback.format_exc()
@@ -66,6 +67,7 @@ class Cache(object):
try:
try:
with io.open(cache_fn, 'r', encoding='utf-8') as cachef:
self._ydl.write_debug(f'Loading {section}.{key} from cache')
return json.load(cachef)
except ValueError:
try:
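For context, a minimal sketch of the Cache API these new write_debug calls instrument (section/key names are illustrative; the messages only appear with verbose output):

from yt_dlp import YoutubeDL

ydl = YoutubeDL({'cachedir': '/tmp/yt-dlp-cache', 'verbose': True})
ydl.cache.store('youtube-sigfuncs', 'example', {'n': 1})           # "Saving youtube-sigfuncs.example to cache"
assert ydl.cache.load('youtube-sigfuncs', 'example') == {'n': 1}   # "Loading youtube-sigfuncs.example from cache"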

File diff suppressed because it is too large

745
yt_dlp/cookies.py Normal file
View File

@@ -0,0 +1,745 @@
import ctypes
import json
import os
import shutil
import struct
import subprocess
import sys
import tempfile
from datetime import datetime, timedelta, timezone
from hashlib import pbkdf2_hmac
from .aes import aes_cbc_decrypt_bytes, aes_gcm_decrypt_and_verify_bytes
from .compat import (
compat_b64decode,
compat_cookiejar_Cookie,
)
from .utils import (
bug_reports_message,
expand_path,
Popen,
YoutubeDLCookieJar,
)
try:
import sqlite3
SQLITE_AVAILABLE = True
except ImportError:
# although sqlite3 is part of the standard library, it is possible to compile python without
# sqlite support. See: https://github.com/yt-dlp/yt-dlp/issues/544
SQLITE_AVAILABLE = False
try:
import keyring
KEYRING_AVAILABLE = True
KEYRING_UNAVAILABLE_REASON = f'due to unknown reasons{bug_reports_message()}'
except ImportError:
KEYRING_AVAILABLE = False
KEYRING_UNAVAILABLE_REASON = (
'as the `keyring` module is not installed. '
'Please install by running `python3 -m pip install keyring`. '
'Depending on your platform, additional packages may be required '
'to access the keyring; see https://pypi.org/project/keyring')
except Exception as _err:
KEYRING_AVAILABLE = False
KEYRING_UNAVAILABLE_REASON = 'as the `keyring` module could not be initialized: %s' % _err
CHROMIUM_BASED_BROWSERS = {'brave', 'chrome', 'chromium', 'edge', 'opera', 'vivaldi'}
SUPPORTED_BROWSERS = CHROMIUM_BASED_BROWSERS | {'firefox', 'safari'}
class YDLLogger:
def __init__(self, ydl=None):
self._ydl = ydl
def debug(self, message):
if self._ydl:
self._ydl.write_debug(message)
def info(self, message):
if self._ydl:
self._ydl.to_screen(f'[Cookies] {message}')
def warning(self, message, only_once=False):
if self._ydl:
self._ydl.report_warning(message, only_once)
def error(self, message):
if self._ydl:
self._ydl.report_error(message)
def load_cookies(cookie_file, browser_specification, ydl):
cookie_jars = []
if browser_specification is not None:
browser_name, profile = _parse_browser_specification(*browser_specification)
cookie_jars.append(extract_cookies_from_browser(browser_name, profile, YDLLogger(ydl)))
if cookie_file is not None:
cookie_file = expand_path(cookie_file)
jar = YoutubeDLCookieJar(cookie_file)
if os.access(cookie_file, os.R_OK):
jar.load(ignore_discard=True, ignore_expires=True)
cookie_jars.append(jar)
return _merge_cookie_jars(cookie_jars)
def extract_cookies_from_browser(browser_name, profile=None, logger=YDLLogger()):
if browser_name == 'firefox':
return _extract_firefox_cookies(profile, logger)
elif browser_name == 'safari':
return _extract_safari_cookies(profile, logger)
elif browser_name in CHROMIUM_BASED_BROWSERS:
return _extract_chrome_cookies(browser_name, profile, logger)
else:
raise ValueError('unknown browser: {}'.format(browser_name))
def _extract_firefox_cookies(profile, logger):
logger.info('Extracting cookies from firefox')
if not SQLITE_AVAILABLE:
logger.warning('Cannot extract cookies from firefox without sqlite3 support. '
'Please use a python interpreter compiled with sqlite3 support')
return YoutubeDLCookieJar()
if profile is None:
search_root = _firefox_browser_dir()
elif _is_path(profile):
search_root = profile
else:
search_root = os.path.join(_firefox_browser_dir(), profile)
cookie_database_path = _find_most_recently_used_file(search_root, 'cookies.sqlite')
if cookie_database_path is None:
raise FileNotFoundError('could not find firefox cookies database in {}'.format(search_root))
logger.debug('Extracting cookies from: "{}"'.format(cookie_database_path))
with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir:
cursor = None
try:
cursor = _open_database_copy(cookie_database_path, tmpdir)
cursor.execute('SELECT host, name, value, path, expiry, isSecure FROM moz_cookies')
jar = YoutubeDLCookieJar()
for host, name, value, path, expiry, is_secure in cursor.fetchall():
cookie = compat_cookiejar_Cookie(
version=0, name=name, value=value, port=None, port_specified=False,
domain=host, domain_specified=bool(host), domain_initial_dot=host.startswith('.'),
path=path, path_specified=bool(path), secure=is_secure, expires=expiry, discard=False,
comment=None, comment_url=None, rest={})
jar.set_cookie(cookie)
logger.info('Extracted {} cookies from firefox'.format(len(jar)))
return jar
finally:
if cursor is not None:
cursor.connection.close()
def _firefox_browser_dir():
if sys.platform in ('linux', 'linux2'):
return os.path.expanduser('~/.mozilla/firefox')
elif sys.platform == 'win32':
return os.path.expandvars(r'%APPDATA%\Mozilla\Firefox\Profiles')
elif sys.platform == 'darwin':
return os.path.expanduser('~/Library/Application Support/Firefox')
else:
raise ValueError('unsupported platform: {}'.format(sys.platform))
def _get_chromium_based_browser_settings(browser_name):
# https://chromium.googlesource.com/chromium/src/+/HEAD/docs/user_data_dir.md
if sys.platform in ('linux', 'linux2'):
config = _config_home()
browser_dir = {
'brave': os.path.join(config, 'BraveSoftware/Brave-Browser'),
'chrome': os.path.join(config, 'google-chrome'),
'chromium': os.path.join(config, 'chromium'),
'edge': os.path.join(config, 'microsoft-edge'),
'opera': os.path.join(config, 'opera'),
'vivaldi': os.path.join(config, 'vivaldi'),
}[browser_name]
elif sys.platform == 'win32':
appdata_local = os.path.expandvars('%LOCALAPPDATA%')
appdata_roaming = os.path.expandvars('%APPDATA%')
browser_dir = {
'brave': os.path.join(appdata_local, r'BraveSoftware\Brave-Browser\User Data'),
'chrome': os.path.join(appdata_local, r'Google\Chrome\User Data'),
'chromium': os.path.join(appdata_local, r'Chromium\User Data'),
'edge': os.path.join(appdata_local, r'Microsoft\Edge\User Data'),
'opera': os.path.join(appdata_roaming, r'Opera Software\Opera Stable'),
'vivaldi': os.path.join(appdata_local, r'Vivaldi\User Data'),
}[browser_name]
elif sys.platform == 'darwin':
appdata = os.path.expanduser('~/Library/Application Support')
browser_dir = {
'brave': os.path.join(appdata, 'BraveSoftware/Brave-Browser'),
'chrome': os.path.join(appdata, 'Google/Chrome'),
'chromium': os.path.join(appdata, 'Chromium'),
'edge': os.path.join(appdata, 'Microsoft Edge'),
'opera': os.path.join(appdata, 'com.operasoftware.Opera'),
'vivaldi': os.path.join(appdata, 'Vivaldi'),
}[browser_name]
else:
raise ValueError('unsupported platform: {}'.format(sys.platform))
# Linux keyring names can be determined by snooping on dbus while opening the browser in KDE:
# dbus-monitor "interface='org.kde.KWallet'" "type=method_return"
keyring_name = {
'brave': 'Brave',
'chrome': 'Chrome',
'chromium': 'Chromium',
'edge': 'Microsoft Edge' if sys.platform == 'darwin' else 'Chromium',
'opera': 'Opera' if sys.platform == 'darwin' else 'Chromium',
'vivaldi': 'Vivaldi' if sys.platform == 'darwin' else 'Chrome',
}[browser_name]
browsers_without_profiles = {'opera'}
return {
'browser_dir': browser_dir,
'keyring_name': keyring_name,
'supports_profiles': browser_name not in browsers_without_profiles
}
def _extract_chrome_cookies(browser_name, profile, logger):
logger.info('Extracting cookies from {}'.format(browser_name))
if not SQLITE_AVAILABLE:
logger.warning(('Cannot extract cookies from {} without sqlite3 support. '
'Please use a python interpreter compiled with sqlite3 support').format(browser_name))
return YoutubeDLCookieJar()
config = _get_chromium_based_browser_settings(browser_name)
if profile is None:
search_root = config['browser_dir']
elif _is_path(profile):
search_root = profile
config['browser_dir'] = os.path.dirname(profile) if config['supports_profiles'] else profile
else:
if config['supports_profiles']:
search_root = os.path.join(config['browser_dir'], profile)
else:
logger.error('{} does not support profiles'.format(browser_name))
search_root = config['browser_dir']
cookie_database_path = _find_most_recently_used_file(search_root, 'Cookies')
if cookie_database_path is None:
raise FileNotFoundError('could not find {} cookies database in "{}"'.format(browser_name, search_root))
logger.debug('Extracting cookies from: "{}"'.format(cookie_database_path))
decryptor = get_cookie_decryptor(config['browser_dir'], config['keyring_name'], logger)
with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir:
cursor = None
try:
cursor = _open_database_copy(cookie_database_path, tmpdir)
cursor.connection.text_factory = bytes
column_names = _get_column_names(cursor, 'cookies')
secure_column = 'is_secure' if 'is_secure' in column_names else 'secure'
cursor.execute('SELECT host_key, name, value, encrypted_value, path, '
'expires_utc, {} FROM cookies'.format(secure_column))
jar = YoutubeDLCookieJar()
failed_cookies = 0
for host_key, name, value, encrypted_value, path, expires_utc, is_secure in cursor.fetchall():
host_key = host_key.decode('utf-8')
name = name.decode('utf-8')
value = value.decode('utf-8')
path = path.decode('utf-8')
if not value and encrypted_value:
value = decryptor.decrypt(encrypted_value)
if value is None:
failed_cookies += 1
continue
cookie = compat_cookiejar_Cookie(
version=0, name=name, value=value, port=None, port_specified=False,
domain=host_key, domain_specified=bool(host_key), domain_initial_dot=host_key.startswith('.'),
path=path, path_specified=bool(path), secure=is_secure, expires=expires_utc, discard=False,
comment=None, comment_url=None, rest={})
jar.set_cookie(cookie)
if failed_cookies > 0:
failed_message = ' ({} could not be decrypted)'.format(failed_cookies)
else:
failed_message = ''
logger.info('Extracted {} cookies from {}{}'.format(len(jar), browser_name, failed_message))
return jar
finally:
if cursor is not None:
cursor.connection.close()
class ChromeCookieDecryptor:
"""
Overview:
Linux:
- cookies are either v10 or v11
- v10: AES-CBC encrypted with a fixed key
- v11: AES-CBC encrypted with an OS protected key (keyring)
- v11 keys can be stored in various places depending on the activate desktop environment [2]
Mac:
- cookies are either v10 or not v10
- v10: AES-CBC encrypted with an OS protected key (keyring) and more key derivation iterations than linux
- not v10: 'old data' stored as plaintext
Windows:
- cookies are either v10 or not v10
- v10: AES-GCM encrypted with a key which is encrypted with DPAPI
- not v10: encrypted with DPAPI
Sources:
- [1] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/
- [2] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/key_storage_linux.cc
- KeyStorageLinux::CreateService
"""
def decrypt(self, encrypted_value):
raise NotImplementedError
def get_cookie_decryptor(browser_root, browser_keyring_name, logger):
if sys.platform in ('linux', 'linux2'):
return LinuxChromeCookieDecryptor(browser_keyring_name, logger)
elif sys.platform == 'darwin':
return MacChromeCookieDecryptor(browser_keyring_name, logger)
elif sys.platform == 'win32':
return WindowsChromeCookieDecryptor(browser_root, logger)
else:
raise NotImplementedError('Chrome cookie decryption is not supported '
'on this platform: {}'.format(sys.platform))
class LinuxChromeCookieDecryptor(ChromeCookieDecryptor):
def __init__(self, browser_keyring_name, logger):
self._logger = logger
self._v10_key = self.derive_key(b'peanuts')
if KEYRING_AVAILABLE:
self._v11_key = self.derive_key(_get_linux_keyring_password(browser_keyring_name))
else:
self._v11_key = None
@staticmethod
def derive_key(password):
# values from
# https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_linux.cc
return pbkdf2_sha1(password, salt=b'saltysalt', iterations=1, key_length=16)
def decrypt(self, encrypted_value):
version = encrypted_value[:3]
ciphertext = encrypted_value[3:]
if version == b'v10':
return _decrypt_aes_cbc(ciphertext, self._v10_key, self._logger)
elif version == b'v11':
if self._v11_key is None:
self._logger.warning(f'cannot decrypt cookie {KEYRING_UNAVAILABLE_REASON}', only_once=True)
return None
return _decrypt_aes_cbc(ciphertext, self._v11_key, self._logger)
else:
return None
class MacChromeCookieDecryptor(ChromeCookieDecryptor):
def __init__(self, browser_keyring_name, logger):
self._logger = logger
password = _get_mac_keyring_password(browser_keyring_name, logger)
self._v10_key = None if password is None else self.derive_key(password)
@staticmethod
def derive_key(password):
# values from
# https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_mac.mm
return pbkdf2_sha1(password, salt=b'saltysalt', iterations=1003, key_length=16)
def decrypt(self, encrypted_value):
version = encrypted_value[:3]
ciphertext = encrypted_value[3:]
if version == b'v10':
if self._v10_key is None:
self._logger.warning('cannot decrypt v10 cookies: no key found', only_once=True)
return None
return _decrypt_aes_cbc(ciphertext, self._v10_key, self._logger)
else:
# other prefixes are considered 'old data' which were stored as plaintext
# https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_mac.mm
return encrypted_value
class WindowsChromeCookieDecryptor(ChromeCookieDecryptor):
def __init__(self, browser_root, logger):
self._logger = logger
self._v10_key = _get_windows_v10_key(browser_root, logger)
def decrypt(self, encrypted_value):
version = encrypted_value[:3]
ciphertext = encrypted_value[3:]
if version == b'v10':
if self._v10_key is None:
self._logger.warning('cannot decrypt v10 cookies: no key found', only_once=True)
return None
# https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_win.cc
# kNonceLength
nonce_length = 96 // 8
# boringssl
# EVP_AEAD_AES_GCM_TAG_LEN
authentication_tag_length = 16
raw_ciphertext = ciphertext
nonce = raw_ciphertext[:nonce_length]
ciphertext = raw_ciphertext[nonce_length:-authentication_tag_length]
authentication_tag = raw_ciphertext[-authentication_tag_length:]
return _decrypt_aes_gcm(ciphertext, self._v10_key, nonce, authentication_tag, self._logger)
else:
# any other prefix means the data is DPAPI encrypted
# https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/os_crypt/os_crypt_win.cc
return _decrypt_windows_dpapi(encrypted_value, self._logger).decode('utf-8')
def _extract_safari_cookies(profile, logger):
if profile is not None:
logger.error('safari does not support profiles')
if sys.platform != 'darwin':
raise ValueError('unsupported platform: {}'.format(sys.platform))
cookies_path = os.path.expanduser('~/Library/Cookies/Cookies.binarycookies')
if not os.path.isfile(cookies_path):
raise FileNotFoundError('could not find safari cookies database')
with open(cookies_path, 'rb') as f:
cookies_data = f.read()
jar = parse_safari_cookies(cookies_data, logger=logger)
logger.info('Extracted {} cookies from safari'.format(len(jar)))
return jar
class ParserError(Exception):
pass
class DataParser:
def __init__(self, data, logger):
self._data = data
self.cursor = 0
self._logger = logger
def read_bytes(self, num_bytes):
if num_bytes < 0:
raise ParserError('invalid read of {} bytes'.format(num_bytes))
end = self.cursor + num_bytes
if end > len(self._data):
raise ParserError('reached end of input')
data = self._data[self.cursor:end]
self.cursor = end
return data
def expect_bytes(self, expected_value, message):
value = self.read_bytes(len(expected_value))
if value != expected_value:
raise ParserError('unexpected value: {} != {} ({})'.format(value, expected_value, message))
def read_uint(self, big_endian=False):
data_format = '>I' if big_endian else '<I'
return struct.unpack(data_format, self.read_bytes(4))[0]
def read_double(self, big_endian=False):
data_format = '>d' if big_endian else '<d'
return struct.unpack(data_format, self.read_bytes(8))[0]
def read_cstring(self):
buffer = []
while True:
c = self.read_bytes(1)
if c == b'\x00':
return b''.join(buffer).decode('utf-8')
else:
buffer.append(c)
def skip(self, num_bytes, description='unknown'):
if num_bytes > 0:
self._logger.debug('skipping {} bytes ({}): {}'.format(
num_bytes, description, self.read_bytes(num_bytes)))
elif num_bytes < 0:
raise ParserError('invalid skip of {} bytes'.format(num_bytes))
def skip_to(self, offset, description='unknown'):
self.skip(offset - self.cursor, description)
def skip_to_end(self, description='unknown'):
self.skip_to(len(self._data), description)
def _mac_absolute_time_to_posix(timestamp):
return int((datetime(2001, 1, 1, 0, 0, tzinfo=timezone.utc) + timedelta(seconds=timestamp)).timestamp())
def _parse_safari_cookies_header(data, logger):
p = DataParser(data, logger)
p.expect_bytes(b'cook', 'database signature')
number_of_pages = p.read_uint(big_endian=True)
page_sizes = [p.read_uint(big_endian=True) for _ in range(number_of_pages)]
return page_sizes, p.cursor
def _parse_safari_cookies_page(data, jar, logger):
p = DataParser(data, logger)
p.expect_bytes(b'\x00\x00\x01\x00', 'page signature')
number_of_cookies = p.read_uint()
record_offsets = [p.read_uint() for _ in range(number_of_cookies)]
if number_of_cookies == 0:
logger.debug('a cookies page of size {} has no cookies'.format(len(data)))
return
p.skip_to(record_offsets[0], 'unknown page header field')
for record_offset in record_offsets:
p.skip_to(record_offset, 'space between records')
record_length = _parse_safari_cookies_record(data[record_offset:], jar, logger)
p.read_bytes(record_length)
p.skip_to_end('space in between pages')
def _parse_safari_cookies_record(data, jar, logger):
p = DataParser(data, logger)
record_size = p.read_uint()
p.skip(4, 'unknown record field 1')
flags = p.read_uint()
is_secure = bool(flags & 0x0001)
p.skip(4, 'unknown record field 2')
domain_offset = p.read_uint()
name_offset = p.read_uint()
path_offset = p.read_uint()
value_offset = p.read_uint()
p.skip(8, 'unknown record field 3')
expiration_date = _mac_absolute_time_to_posix(p.read_double())
_creation_date = _mac_absolute_time_to_posix(p.read_double()) # noqa: F841
try:
p.skip_to(domain_offset)
domain = p.read_cstring()
p.skip_to(name_offset)
name = p.read_cstring()
p.skip_to(path_offset)
path = p.read_cstring()
p.skip_to(value_offset)
value = p.read_cstring()
except UnicodeDecodeError:
logger.warning('failed to parse Safari cookie because UTF-8 decoding failed', only_once=True)
return record_size
p.skip_to(record_size, 'space at the end of the record')
cookie = compat_cookiejar_Cookie(
version=0, name=name, value=value, port=None, port_specified=False,
domain=domain, domain_specified=bool(domain), domain_initial_dot=domain.startswith('.'),
path=path, path_specified=bool(path), secure=is_secure, expires=expiration_date, discard=False,
comment=None, comment_url=None, rest={})
jar.set_cookie(cookie)
return record_size
def parse_safari_cookies(data, jar=None, logger=YDLLogger()):
"""
References:
- https://github.com/libyal/dtformats/blob/main/documentation/Safari%20Cookies.asciidoc
- this data appears to be out of date but the important parts of the database structure is the same
- there are a few bytes here and there which are skipped during parsing
"""
if jar is None:
jar = YoutubeDLCookieJar()
page_sizes, body_start = _parse_safari_cookies_header(data, logger)
p = DataParser(data[body_start:], logger)
for page_size in page_sizes:
_parse_safari_cookies_page(p.read_bytes(page_size), jar, logger)
p.skip_to_end('footer')
return jar
def _get_linux_keyring_password(browser_keyring_name):
password = keyring.get_password('{} Keys'.format(browser_keyring_name),
'{} Safe Storage'.format(browser_keyring_name))
if password is None:
# this sometimes occurs in KDE because chrome does not check hasEntry and instead
# just tries to read the value (which kwallet returns "") whereas keyring checks hasEntry
# to verify this:
# dbus-monitor "interface='org.kde.KWallet'" "type=method_return"
# while starting chrome.
# this may be a bug as the intended behaviour is to generate a random password and store
# it, but that doesn't matter here.
password = ''
return password.encode('utf-8')
def _get_mac_keyring_password(browser_keyring_name, logger):
if KEYRING_AVAILABLE:
logger.debug('using keyring to obtain password')
password = keyring.get_password('{} Safe Storage'.format(browser_keyring_name), browser_keyring_name)
return password.encode('utf-8')
else:
logger.debug('using find-generic-password to obtain password')
proc = Popen(
['security', 'find-generic-password',
'-w', # write password to stdout
'-a', browser_keyring_name, # match 'account'
'-s', '{} Safe Storage'.format(browser_keyring_name)], # match 'service'
stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
try:
stdout, stderr = proc.communicate_or_kill()
if stdout[-1:] == b'\n':
stdout = stdout[:-1]
return stdout
except BaseException as e:
logger.warning(f'exception running find-generic-password: {type(e).__name__}({e})')
return None
def _get_windows_v10_key(browser_root, logger):
path = _find_most_recently_used_file(browser_root, 'Local State')
if path is None:
logger.error('could not find local state file')
return None
with open(path, 'r', encoding='utf8') as f:
data = json.load(f)
try:
base64_key = data['os_crypt']['encrypted_key']
except KeyError:
logger.error('no encrypted key in Local State')
return None
encrypted_key = compat_b64decode(base64_key)
prefix = b'DPAPI'
if not encrypted_key.startswith(prefix):
logger.error('invalid key')
return None
return _decrypt_windows_dpapi(encrypted_key[len(prefix):], logger)
def pbkdf2_sha1(password, salt, iterations, key_length):
return pbkdf2_hmac('sha1', password, salt, iterations, key_length)
def _decrypt_aes_cbc(ciphertext, key, logger, initialization_vector=b' ' * 16):
plaintext = aes_cbc_decrypt_bytes(ciphertext, key, initialization_vector)
padding_length = plaintext[-1]
try:
return plaintext[:-padding_length].decode('utf-8')
except UnicodeDecodeError:
logger.warning('failed to decrypt cookie (AES-CBC) because UTF-8 decoding failed. Possibly the key is wrong?', only_once=True)
return None
def _decrypt_aes_gcm(ciphertext, key, nonce, authentication_tag, logger):
try:
plaintext = aes_gcm_decrypt_and_verify_bytes(ciphertext, key, authentication_tag, nonce)
except ValueError:
logger.warning('failed to decrypt cookie (AES-GCM) because the MAC check failed. Possibly the key is wrong?', only_once=True)
return None
try:
return plaintext.decode('utf-8')
except UnicodeDecodeError:
logger.warning('failed to decrypt cookie (AES-GCM) because UTF-8 decoding failed. Possibly the key is wrong?', only_once=True)
return None
def _decrypt_windows_dpapi(ciphertext, logger):
"""
References:
- https://docs.microsoft.com/en-us/windows/win32/api/dpapi/nf-dpapi-cryptunprotectdata
"""
from ctypes.wintypes import DWORD
class DATA_BLOB(ctypes.Structure):
_fields_ = [('cbData', DWORD),
('pbData', ctypes.POINTER(ctypes.c_char))]
buffer = ctypes.create_string_buffer(ciphertext)
blob_in = DATA_BLOB(ctypes.sizeof(buffer), buffer)
blob_out = DATA_BLOB()
ret = ctypes.windll.crypt32.CryptUnprotectData(
ctypes.byref(blob_in), # pDataIn
None, # ppszDataDescr: human readable description of pDataIn
None, # pOptionalEntropy: salt?
None, # pvReserved: must be NULL
None, # pPromptStruct: information about prompts to display
0, # dwFlags
ctypes.byref(blob_out) # pDataOut
)
if not ret:
logger.warning('failed to decrypt with DPAPI', only_once=True)
return None
result = ctypes.string_at(blob_out.pbData, blob_out.cbData)
ctypes.windll.kernel32.LocalFree(blob_out.pbData)
return result
def _config_home():
return os.environ.get('XDG_CONFIG_HOME', os.path.expanduser('~/.config'))
def _open_database_copy(database_path, tmpdir):
# cannot open sqlite databases if they are already in use (e.g. by the browser)
database_copy_path = os.path.join(tmpdir, 'temporary.sqlite')
shutil.copy(database_path, database_copy_path)
conn = sqlite3.connect(database_copy_path)
return conn.cursor()
def _get_column_names(cursor, table_name):
table_info = cursor.execute('PRAGMA table_info({})'.format(table_name)).fetchall()
return [row[1].decode('utf-8') for row in table_info]
def _find_most_recently_used_file(root, filename):
# if there are multiple browser profiles, take the most recently used one
paths = []
for root, dirs, files in os.walk(root):
for file in files:
if file == filename:
paths.append(os.path.join(root, file))
return None if not paths else max(paths, key=lambda path: os.lstat(path).st_mtime)
def _merge_cookie_jars(jars):
output_jar = YoutubeDLCookieJar()
for jar in jars:
for cookie in jar:
output_jar.set_cookie(cookie)
if jar.filename is not None:
output_jar.filename = jar.filename
return output_jar
def _is_path(value):
return os.path.sep in value
def _parse_browser_specification(browser_name, profile=None):
browser_name = browser_name.lower()
if browser_name not in SUPPORTED_BROWSERS:
raise ValueError(f'unsupported browser: "{browser_name}"')
if profile is not None and _is_path(profile):
profile = os.path.expanduser(profile)
return browser_name, profile
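Hedged usage sketch for the new module: on the CLI this is wired to --cookies-from-browser BROWSER[:PROFILE], and the same extraction can be called directly (the browser choice is a placeholder):

from yt_dlp.cookies import extract_cookies_from_browser

jar = extract_cookies_from_browser('firefox')   # returns a YoutubeDLCookieJar
print(f'{len(jar)} cookies extracted')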

View File

@@ -3,17 +3,25 @@ from __future__ import unicode_literals
from ..compat import compat_str
from ..utils import (
determine_protocol,
NO_DEFAULT
)
def _get_real_downloader(info_dict, protocol=None, *args, **kwargs):
info_copy = info_dict.copy()
if protocol:
info_copy['protocol'] = protocol
return get_suitable_downloader(info_copy, *args, **kwargs)
def get_suitable_downloader(info_dict, params={}, default=NO_DEFAULT, protocol=None, to_stdout=False):
info_dict['protocol'] = determine_protocol(info_dict)
info_copy = info_dict.copy()
info_copy['to_stdout'] = to_stdout
downloaders = [_get_suitable_downloader(info_copy, proto, params, default)
for proto in (protocol or info_copy['protocol']).split('+')]
if set(downloaders) == {FFmpegFD} and FFmpegFD.can_merge_formats(info_copy, params):
return FFmpegFD
elif len(downloaders) == 1:
return downloaders[0]
return None
# Some of these require _get_real_downloader
# Some of these require get_suitable_downloader
from .common import FileDownloader
from .dash import DashSegmentsFD
from .f4m import F4mFD
@@ -69,32 +77,39 @@ def shorten_protocol_name(proto, simplify=False):
return short_protocol_names.get(proto, proto)
def get_suitable_downloader(info_dict, params={}, default=HttpFD):
def _get_suitable_downloader(info_dict, protocol, params, default):
"""Get the downloader class that can handle the info dict."""
protocol = determine_protocol(info_dict)
info_dict['protocol'] = protocol
if default is NO_DEFAULT:
default = HttpFD
# if (info_dict.get('start_time') or info_dict.get('end_time')) and not info_dict.get('requested_formats') and FFmpegFD.can_download(info_dict):
# return FFmpegFD
info_dict['protocol'] = protocol
downloaders = params.get('external_downloader')
external_downloader = (
downloaders if isinstance(downloaders, compat_str) or downloaders is None
else downloaders.get(shorten_protocol_name(protocol, True), downloaders.get('default')))
if external_downloader and external_downloader.lower() == 'native':
external_downloader = 'native'
if external_downloader not in (None, 'native'):
if external_downloader is None:
if info_dict['to_stdout'] and FFmpegFD.can_merge_formats(info_dict, params):
return FFmpegFD
elif external_downloader.lower() != 'native':
ed = get_external_downloader(external_downloader)
if ed.can_download(info_dict, external_downloader):
return ed
if protocol == 'http_dash_segments':
if info_dict.get('is_live') and (external_downloader or '').lower() != 'native':
return FFmpegFD
if protocol in ('m3u8', 'm3u8_native'):
if info_dict.get('is_live'):
return FFmpegFD
elif external_downloader == 'native':
elif (external_downloader or '').lower() == 'native':
return HlsFD
elif _get_real_downloader(info_dict, 'm3u8_frag_urls', params, None):
elif get_suitable_downloader(
info_dict, params, None, protocol='m3u8_frag_urls', to_stdout=info_dict['to_stdout']):
return HlsFD
elif params.get('hls_prefer_native') is True:
return HlsFD
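A sketch of how the reworked selection behaves in a simple case (the format dict is illustrative). For a format such as 'm3u8_native+https', each protocol is now resolved separately, and FFmpegFD is chosen only when every part resolves to it:

from yt_dlp.downloader import get_suitable_downloader

fd = get_suitable_downloader(
    {'url': 'https://example.com/master.m3u8', 'protocol': 'm3u8_native'}, params={})
print(fd.__name__)   # expected: HlsFD for a plain, non-live HLS format with no external downloader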

View File

@@ -2,11 +2,9 @@ from __future__ import division, unicode_literals
import os
import re
import sys
import time
import random
from ..compat import compat_os_name
from ..utils import (
decodeArgument,
encodeFilename,
@@ -14,6 +12,13 @@ from ..utils import (
format_bytes,
shell_quote,
timeconvert,
timetuple_from_msec,
)
from ..minicurses import (
MultilineLogger,
MultilinePrinter,
QuietMultilinePrinter,
BreaklineStatusPrinter
)
@@ -38,20 +43,22 @@ class FileDownloader(object):
noresizebuffer: Do not automatically resize the download buffer.
continuedl: Try to continue downloads if possible.
noprogress: Do not print the progress bar.
logtostderr: Log messages to stderr instead of stdout.
consoletitle: Display progress in console window's titlebar.
nopart: Do not use temporary .part files.
updatetime: Use the Last-modified header to set output file timestamps.
test: Download only first bytes to test the downloader.
min_filesize: Skip files smaller than this size
max_filesize: Skip files larger than this size
xattr_set_filesize: Set ytdl.filesize user xattribute with expected size.
external_downloader_args: A list of additional command-line arguments for the
external downloader.
external_downloader_args: A dictionary of downloader keys (in lower case)
and a list of additional command-line arguments for the
executable. Use 'default' as the name for arguments to be
passed to all downloaders. For compatibility with youtube-dl,
a single list of args can also be used
hls_use_mpegts: Use the mpegts container for HLS videos.
http_chunk_size: Size of a chunk for chunk-based HTTP downloading. May be
useful for bypassing bandwidth throttling imposed by
a webserver (experimental)
progress_template: See YoutubeDL.py
Subclasses of this one must re-define the real_download method.
"""
@@ -64,18 +71,17 @@ class FileDownloader(object):
self.ydl = ydl
self._progress_hooks = []
self.params = params
self._prepare_multiline_status()
self.add_progress_hook(self.report_progress)
@staticmethod
def format_seconds(seconds):
(mins, secs) = divmod(seconds, 60)
(hours, mins) = divmod(mins, 60)
if hours > 99:
return '--:--:--'
if hours == 0:
return '%02d:%02d' % (mins, secs)
else:
return '%02d:%02d:%02d' % (hours, mins, secs)
time = timetuple_from_msec(seconds * 1000)
if time.hours > 99:
return '--:--:--'
if not time.hours:
return '%02d:%02d' % time[1:-1]
return '%02d:%02d:%02d' % time[:-1]
@staticmethod
def calc_percent(byte_counter, data_len):
@@ -200,12 +206,12 @@ class FileDownloader(object):
return filename + '.ytdl'
def try_rename(self, old_filename, new_filename):
if old_filename == new_filename:
return
try:
if old_filename == new_filename:
return
os.rename(encodeFilename(old_filename), encodeFilename(new_filename))
os.replace(old_filename, new_filename)
except (IOError, OSError) as err:
self.report_error('unable to rename file: %s' % error_to_compat_str(err))
self.report_error(f'unable to rename file: {err}')
def try_utime(self, filename, last_modified_hdr):
"""Try to set the last-modified time of the given file."""
@@ -232,39 +238,46 @@ class FileDownloader(object):
"""Report destination filename.""" """Report destination filename."""
self.to_screen('[download] Destination: ' + filename) self.to_screen('[download] Destination: ' + filename)
def _report_progress_status(self, msg, is_last_line=False): def _prepare_multiline_status(self, lines=1):
fullmsg = '[download] ' + msg if self.params.get('noprogress'):
if self.params.get('progress_with_newline', False): self._multiline = QuietMultilinePrinter()
self.to_screen(fullmsg) elif self.ydl.params.get('logger'):
self._multiline = MultilineLogger(self.ydl.params['logger'], lines)
elif self.params.get('progress_with_newline'):
self._multiline = BreaklineStatusPrinter(self.ydl._screen_file, lines)
else: else:
if compat_os_name == 'nt': self._multiline = MultilinePrinter(self.ydl._screen_file, lines, not self.params.get('quiet'))
prev_len = getattr(self, '_report_progress_prev_line_length',
0) def _finish_multiline_status(self):
if prev_len > len(fullmsg): self._multiline.end()
fullmsg += ' ' * (prev_len - len(fullmsg))
self._report_progress_prev_line_length = len(fullmsg) def _report_progress_status(self, s):
clear_line = '\r' progress_dict = s.copy()
else: progress_dict.pop('info_dict')
clear_line = ('\r\x1b[K' if sys.stderr.isatty() else '\r') progress_dict = {'info': s['info_dict'], 'progress': progress_dict}
self.to_screen(clear_line + fullmsg, skip_eol=not is_last_line)
self.to_console_title('yt-dlp ' + msg) progress_template = self.params.get('progress_template', {})
self._multiline.print_at_line(self.ydl.evaluate_outtmpl(
progress_template.get('download') or '[download] %(progress._default_template)s',
progress_dict), s.get('progress_idx') or 0)
self.to_console_title(self.ydl.evaluate_outtmpl(
progress_template.get('download-title') or 'yt-dlp %(progress._default_template)s',
progress_dict))
def report_progress(self, s): def report_progress(self, s):
if s['status'] == 'finished': if s['status'] == 'finished':
if self.params.get('noprogress', False): if self.params.get('noprogress'):
self.to_screen('[download] Download completed') self.to_screen('[download] Download completed')
else: msg_template = '100%%'
msg_template = '100%%' if s.get('total_bytes') is not None:
if s.get('total_bytes') is not None: s['_total_bytes_str'] = format_bytes(s['total_bytes'])
s['_total_bytes_str'] = format_bytes(s['total_bytes']) msg_template += ' of %(_total_bytes_str)s'
msg_template += ' of %(_total_bytes_str)s' if s.get('elapsed') is not None:
if s.get('elapsed') is not None: s['_elapsed_str'] = self.format_seconds(s['elapsed'])
s['_elapsed_str'] = self.format_seconds(s['elapsed']) msg_template += ' in %(_elapsed_str)s'
msg_template += ' in %(_elapsed_str)s' s['_percent_str'] = self.format_percent(100)
self._report_progress_status( s['_default_template'] = msg_template % s
msg_template % s, is_last_line=True) self._report_progress_status(s)
if self.params.get('noprogress'):
return return
if s['status'] != 'downloading':
@@ -306,8 +319,12 @@ class FileDownloader(object):
msg_template = '%(_downloaded_bytes_str)s at %(_speed_str)s'
else:
msg_template = '%(_percent_str)s % at %(_speed_str)s ETA %(_eta_str)s'
self._report_progress_status(msg_template % s)
if s.get('fragment_index') and s.get('fragment_count'):
msg_template += ' (frag %(fragment_index)s/%(fragment_count)s)'
elif s.get('fragment_index'):
msg_template += ' (frag %(fragment_index)s)'
s['_default_template'] = msg_template % s
self._report_progress_status(s)
    def report_resuming_byte(self, resume_len):
        """Report attempt to resume at given byte."""
@@ -319,12 +336,9 @@ class FileDownloader(object):
            '[download] Got server HTTP error: %s. Retrying (attempt %d of %s) ...'
            % (error_to_compat_str(err), count, self.format_retries(retries)))

-    def report_file_already_downloaded(self, file_name):
+    def report_file_already_downloaded(self, *args, **kwargs):
        """Report file has already been fully downloaded."""
-        try:
-            self.to_screen('[download] %s has already been downloaded' % file_name)
-        except UnicodeEncodeError:
-            self.to_screen('[download] The file has already been downloaded')
+        return self.ydl.report_file_already_downloaded(*args, **kwargs)

    def report_unable_to_resume(self):
        """Report it was impossible to resume download."""
@@ -342,7 +356,7 @@ class FileDownloader(object):
        """
        nooverwrites_and_exists = (
-            not self.params.get('overwrites', subtitle)
+            not self.params.get('overwrites', True)
            and os.path.exists(encodeFilename(filename))
        )
@@ -360,7 +374,7 @@ class FileDownloader(object):
                'filename': filename,
                'status': 'finished',
                'total_bytes': os.path.getsize(encodeFilename(filename)),
-            })
+            }, info_dict)
            return True, False

        if subtitle is False:
@@ -382,13 +396,21 @@ class FileDownloader(object):
                    '[download] Sleeping %s seconds ...' % (
                        sleep_interval_sub))
                time.sleep(sleep_interval_sub)
-        return self.real_download(filename, info_dict), True
+        ret = self.real_download(filename, info_dict)
+        self._finish_multiline_status()
+        return ret, True

    def real_download(self, filename, info_dict):
        """Real download process. Redefine in subclasses."""
        raise NotImplementedError('This method must be implemented by subclasses')

-    def _hook_progress(self, status):
+    def _hook_progress(self, status, info_dict):
+        if not self._progress_hooks:
+            return
+        status['info_dict'] = info_dict
+        # youtube-dl passes the same status object to all the hooks.
+        # Some third party scripts seems to be relying on this.
+        # So keep this behavior if possible
        for ph in self._progress_hooks:
            ph(status)
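With the change above, every progress hook now receives the video's info_dict inside the status dict, and the displayed line is rendered from a user-supplied template. A minimal sketch of consuming both from the embedding API (the progress_hooks/progress_template option names follow yt-dlp's documented parameters; the URL and template strings are only examples):

import yt_dlp

def progress_hook(status):
    # 'info_dict' is attached by _hook_progress() above; the pre-formatted
    # helper fields (_percent_str, _speed_str, ...) feed _default_template
    if status['status'] == 'downloading':
        print(status['info_dict'].get('id'), status.get('_percent_str'), status.get('_speed_str'))
    elif status['status'] == 'finished':
        print('finished:', status.get('filename'))

ydl_opts = {
    'progress_hooks': [progress_hook],
    'progress_template': {
        # rendered by evaluate_outtmpl() with {'info': ..., 'progress': ...}
        'download': '[download] %(info.id)s: %(progress._default_template)s',
        'download-title': 'yt-dlp %(progress._percent_str)s',
    },
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])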

View File

@@ -1,6 +1,6 @@
from __future__ import unicode_literals

-from ..downloader import _get_real_downloader
+from ..downloader import get_suitable_downloader
from .fragment import FragmentFD

from ..utils import urljoin
@@ -15,11 +15,15 @@ class DashSegmentsFD(FragmentFD):
    FD_NAME = 'dashsegments'

    def real_download(self, filename, info_dict):
+        if info_dict.get('is_live'):
+            self.report_error('Live DASH videos are not supported')
+
        fragment_base_url = info_dict.get('fragment_base_url')
        fragments = info_dict['fragments'][:1] if self.params.get(
            'test', False) else info_dict['fragments']

-        real_downloader = _get_real_downloader(info_dict, 'dash_frag_urls', self.params, None)
+        real_downloader = get_suitable_downloader(
+            info_dict, self.params, None, protocol='dash_frag_urls', to_stdout=(filename == '-'))
        ctx = {
            'filename': filename,
@@ -29,7 +33,7 @@ class DashSegmentsFD(FragmentFD):
        if real_downloader:
            self._prepare_external_frag_download(ctx)
        else:
-            self._prepare_and_start_frag_download(ctx)
+            self._prepare_and_start_frag_download(ctx, info_dict)

        fragments_to_download = []
        frag_index = 0
@@ -51,15 +55,8 @@ class DashSegmentsFD(FragmentFD):
        if real_downloader:
            self.to_screen(
                '[%s] Fragment downloads will be delegated to %s' % (self.FD_NAME, real_downloader.get_basename()))
-            info_copy = info_dict.copy()
-            info_copy['fragments'] = fragments_to_download
+            info_dict['fragments'] = fragments_to_download
            fd = real_downloader(self.ydl, self.params)
-            # TODO: Make progress updates work without hooking twice
-            # for ph in self._progress_hooks:
-            #     fd.add_progress_hook(ph)
-            success = fd.real_download(filename, info_copy)
-            if not success:
-                return False
-        else:
-            self.download_and_append_fragments(ctx, fragments_to_download, info_dict)
-        return True
+            return fd.real_download(filename, info_dict)

+        return self.download_and_append_fragments(ctx, fragments_to_download, info_dict)
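For context, a stand-alone sketch of how a DASH fragment list is resolved against fragment_base_url before it is handed to download_and_append_fragments (the fragment dicts below are invented for illustration):

from yt_dlp.utils import urljoin

info_dict = {
    'fragment_base_url': 'https://example.com/dash/',
    'fragments': [{'path': 'seg-1.m4s'}, {'path': 'seg-2.m4s'}, {'url': 'https://cdn.example.com/seg-3.m4s'}],
}

fragments_to_download = []
for i, fragment in enumerate(info_dict['fragments'], start=1):
    # an absolute 'url' wins; otherwise the relative 'path' is joined to the base
    url = fragment.get('url') or urljoin(info_dict['fragment_base_url'], fragment['path'])
    fragments_to_download.append({'frag_index': i, 'url': url})

print(fragments_to_download)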

View File

@@ -6,13 +6,7 @@ import subprocess
import sys
import time

-try:
-    from Crypto.Cipher import AES
-    can_decrypt_frag = True
-except ImportError:
-    can_decrypt_frag = False
-
-from .common import FileDownloader
+from .fragment import FragmentFD
from ..compat import (
    compat_setenv,
    compat_str,
@@ -22,20 +16,19 @@ from ..utils import (
    cli_option,
    cli_valueless_option,
    cli_bool_option,
-    cli_configuration_args,
+    _configuration_args,
    encodeFilename,
    encodeArgument,
    handle_youtubedl_headers,
    check_executable,
-    is_outdated_version,
-    process_communicate_or_kill,
-    sanitized_Request,
+    Popen,
    sanitize_open,
)


-class ExternalFD(FileDownloader):
+class ExternalFD(FragmentFD):
    SUPPORTED_PROTOCOLS = ('http', 'https', 'ftp', 'ftps')
+    can_download_to_stdout = False
    def real_download(self, filename, info_dict):
        self.report_destination(filename)
@@ -67,7 +60,7 @@ class ExternalFD(FileDownloader):
                'downloaded_bytes': fsize,
                'total_bytes': fsize,
            })
-            self._hook_progress(status)
+            self._hook_progress(status, info_dict)
            return True
        else:
            self.to_stderr('\n')
@@ -93,7 +86,9 @@ class ExternalFD(FileDownloader):
    @classmethod
    def supports(cls, info_dict):
-        return info_dict['protocol'] in cls.SUPPORTED_PROTOCOLS
+        return (
+            (cls.can_download_to_stdout or not info_dict.get('to_stdout'))
+            and info_dict['protocol'] in cls.SUPPORTED_PROTOCOLS)

    @classmethod
    def can_download(cls, info_dict, path=None):
@@ -108,11 +103,10 @@ class ExternalFD(FileDownloader):
    def _valueless_option(self, command_option, param, expected_value=True):
        return cli_valueless_option(self.params, command_option, param, expected_value)

-    def _configuration_args(self, *args, **kwargs):
-        return cli_configuration_args(
-            self.params.get('external_downloader_args'), self.get_basename(),
-            [self.get_basename(), 'default'],
-            *args, **kwargs)
+    def _configuration_args(self, keys=None, *args, **kwargs):
+        return _configuration_args(
+            self.params.get('external_downloader_args'), self.get_basename(),
+            keys, *args, **kwargs)
def _call_downloader(self, tmpfilename, info_dict): def _call_downloader(self, tmpfilename, info_dict):
""" Either overwrite this or implement _make_cmd """ """ Either overwrite this or implement _make_cmd """
@@ -120,73 +114,54 @@ class ExternalFD(FileDownloader):
self._debug_cmd(cmd) self._debug_cmd(cmd)
-        if 'fragments' in info_dict:
-            fragment_retries = self.params.get('fragment_retries', 0)
-            skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)
+        if 'fragments' not in info_dict:
+            p = Popen(cmd, stderr=subprocess.PIPE)
+            _, stderr = p.communicate_or_kill()
count = 0
while count <= fragment_retries:
p = subprocess.Popen(
cmd, stderr=subprocess.PIPE)
_, stderr = process_communicate_or_kill(p)
if p.returncode == 0:
break
# TODO: Decide whether to retry based on error code
# https://aria2.github.io/manual/en/html/aria2c.html#exit-status
self.to_stderr(stderr.decode('utf-8', 'replace'))
count += 1
if count <= fragment_retries:
self.to_screen(
'[%s] Got error. Retrying fragments (attempt %d of %s)...'
% (self.get_basename(), count, self.format_retries(fragment_retries)))
if count > fragment_retries:
if not skip_unavailable_fragments:
self.report_error('Giving up after %s fragment retries' % fragment_retries)
return -1
dest, _ = sanitize_open(tmpfilename, 'wb')
for frag_index, fragment in enumerate(info_dict['fragments']):
fragment_filename = '%s-Frag%d' % (tmpfilename, frag_index)
try:
src, _ = sanitize_open(fragment_filename, 'rb')
except IOError:
if skip_unavailable_fragments and frag_index > 1:
self.to_screen('[%s] Skipping fragment %d ...' % (self.get_basename(), frag_index))
continue
self.report_error('Unable to open fragment %d' % frag_index)
return -1
decrypt_info = fragment.get('decrypt_info')
if decrypt_info:
if decrypt_info['METHOD'] == 'AES-128':
iv = decrypt_info.get('IV')
decrypt_info['KEY'] = decrypt_info.get('KEY') or self.ydl.urlopen(
self._prepare_url(info_dict, info_dict.get('_decryption_key_url') or decrypt_info['URI'])).read()
encrypted_data = src.read()
decrypted_data = AES.new(
decrypt_info['KEY'], AES.MODE_CBC, iv).decrypt(encrypted_data)
dest.write(decrypted_data)
else:
fragment_data = src.read()
dest.write(fragment_data)
else:
fragment_data = src.read()
dest.write(fragment_data)
src.close()
if not self.params.get('keep_fragments', False):
os.remove(encodeFilename(fragment_filename))
dest.close()
os.remove(encodeFilename('%s.frag.urls' % tmpfilename))
else:
p = subprocess.Popen(
cmd, stderr=subprocess.PIPE)
_, stderr = process_communicate_or_kill(p)
            if p.returncode != 0:
                self.to_stderr(stderr.decode('utf-8', 'replace'))
            return p.returncode

-    def _prepare_url(self, info_dict, url):
-        headers = info_dict.get('http_headers')
-        return sanitized_Request(url, None, headers) if headers else url
+        fragment_retries = self.params.get('fragment_retries', 0)
+        skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)
count = 0
while count <= fragment_retries:
p = Popen(cmd, stderr=subprocess.PIPE)
_, stderr = p.communicate_or_kill()
if p.returncode == 0:
break
# TODO: Decide whether to retry based on error code
# https://aria2.github.io/manual/en/html/aria2c.html#exit-status
self.to_stderr(stderr.decode('utf-8', 'replace'))
count += 1
if count <= fragment_retries:
self.to_screen(
'[%s] Got error. Retrying fragments (attempt %d of %s)...'
% (self.get_basename(), count, self.format_retries(fragment_retries)))
if count > fragment_retries:
if not skip_unavailable_fragments:
self.report_error('Giving up after %s fragment retries' % fragment_retries)
return -1
decrypt_fragment = self.decrypter(info_dict)
dest, _ = sanitize_open(tmpfilename, 'wb')
for frag_index, fragment in enumerate(info_dict['fragments']):
fragment_filename = '%s-Frag%d' % (tmpfilename, frag_index)
try:
src, _ = sanitize_open(fragment_filename, 'rb')
except IOError as err:
if skip_unavailable_fragments and frag_index > 1:
self.report_skip_fragment(frag_index, err)
continue
self.report_error(f'Unable to open fragment {frag_index}; {err}')
return -1
dest.write(decrypt_fragment(fragment, src.read()))
src.close()
if not self.params.get('keep_fragments', False):
os.remove(encodeFilename(fragment_filename))
dest.close()
os.remove(encodeFilename('%s.frag.urls' % tmpfilename))
return 0
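The loop above replaces the old per-call copies with a single stitching pass: fragments written by the external downloader are decrypted if necessary and appended in order. A simplified stand-alone version of the same pattern (the '%s-Frag%d' naming mirrors the code above; the decrypt callable is a stub):

import os

def stitch_fragments(tmpfilename, fragment_count, decrypt=lambda frag_index, data: data, keep_fragments=False):
    with open(tmpfilename, 'wb') as dest:
        for frag_index in range(fragment_count):
            fragment_filename = '%s-Frag%d' % (tmpfilename, frag_index)
            try:
                with open(fragment_filename, 'rb') as src:
                    # decrypt() stands in for decrypt_fragment(fragment, src.read())
                    dest.write(decrypt(frag_index, src.read()))
            except IOError:
                if frag_index == 0:
                    raise  # never skip the first fragment
                continue
            if not keep_fragments:
                os.remove(fragment_filename)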
class CurlFD(ExternalFD):
@@ -221,8 +196,8 @@ class CurlFD(ExternalFD):
        self._debug_cmd(cmd)

        # curl writes the progress to stderr so don't capture it.
-        p = subprocess.Popen(cmd)
-        process_communicate_or_kill(p)
+        p = Popen(cmd)
+        p.communicate_or_kill()
        return p.returncode
@@ -286,6 +261,7 @@ class Aria2cFD(ExternalFD):
        if info_dict.get('http_headers') is not None:
            for key, val in info_dict['http_headers'].items():
                cmd += ['--header', '%s: %s' % (key, val)]
+        cmd += self._option('--max-overall-download-limit', 'ratelimit')
        cmd += self._option('--interface', 'source_address')
        cmd += self._option('--all-proxy', 'proxy')
        cmd += self._bool_option('--check-certificate', 'nocheckcertificate', 'false', 'true', '=')
@@ -340,17 +316,32 @@ class HttpieFD(ExternalFD):
class FFmpegFD(ExternalFD):
-    SUPPORTED_PROTOCOLS = ('http', 'https', 'ftp', 'ftps', 'm3u8', 'm3u8_native', 'rtsp', 'rtmp', 'rtmp_ffmpeg', 'mms')
+    SUPPORTED_PROTOCOLS = ('http', 'https', 'ftp', 'ftps', 'm3u8', 'm3u8_native', 'rtsp', 'rtmp', 'rtmp_ffmpeg', 'mms', 'http_dash_segments')
+    can_download_to_stdout = True

    @classmethod
    def available(cls, path=None):
        # TODO: Fix path for ffmpeg
+        # Fixme: This may be wrong when --ffmpeg-location is used
        return FFmpegPostProcessor().available

+    @classmethod
+    def supports(cls, info_dict):
+        return all(proto in cls.SUPPORTED_PROTOCOLS for proto in info_dict['protocol'].split('+'))
+
    def on_process_started(self, proc, stdin):
        """ Override this in subclasses """
        pass

+    @classmethod
+    def can_merge_formats(cls, info_dict, params):
+        return (
+            info_dict.get('requested_formats')
+            and info_dict.get('protocol')
+            and not params.get('allow_unplayable_formats')
+            and 'no-direct-merge' not in params.get('compat_opts', [])
+            and cls.can_download(info_dict))
+
    def _call_downloader(self, tmpfilename, info_dict):
        urls = [f['url'] for f in info_dict.get('requested_formats', [])] or [info_dict['url']]
        ffpp = FFmpegPostProcessor(downloader=self)
@@ -368,6 +359,9 @@ class FFmpegFD(ExternalFD):
        if not self.params.get('verbose'):
            args += ['-hide_banner']

+        args += info_dict.get('_ffmpeg_args', [])
+
+        # This option exists only for compatibility. Extractors should use `_ffmpeg_args` instead
        seekable = info_dict.get('_seekable')
        if seekable is not None:
            # setting -seekable prevents ffmpeg from guessing if the server
@@ -377,8 +371,6 @@ class FFmpegFD(ExternalFD):
            # http://trac.ffmpeg.org/ticket/6125#comment:10
            args += ['-seekable', '1' if seekable else '0']

-        args += self._configuration_args()
-
        # start_time = info_dict.get('start_time') or 0
        # if start_time:
        #     args += ['-ss', compat_str(start_time)]
@@ -444,19 +436,20 @@ class FFmpegFD(ExternalFD):
            elif isinstance(conn, compat_str):
                args += ['-rtmp_conn', conn]

-        for url in urls:
-            args += ['-i', url]
+        for i, url in enumerate(urls):
+            args += self._configuration_args((f'_i{i + 1}', '_i')) + ['-i', url]
        args += ['-c', 'copy']

-        if info_dict.get('requested_formats'):
-            for (i, fmt) in enumerate(info_dict['requested_formats']):
-                if fmt.get('acodec') != 'none':
-                    args.extend(['-map', '%d:a:0' % i])
-                if fmt.get('vcodec') != 'none':
-                    args.extend(['-map', '%d:v:0' % i])
+        if info_dict.get('requested_formats') or protocol == 'http_dash_segments':
+            for (i, fmt) in enumerate(info_dict.get('requested_formats') or [info_dict]):
+                stream_number = fmt.get('manifest_stream_number', 0)
+                a_or_v = 'a' if fmt.get('acodec') != 'none' else 'v'
+                args.extend(['-map', f'{i}:{a_or_v}:{stream_number}'])

        if self.params.get('test', False):
            args += ['-fs', compat_str(self._TEST_FILE_SIZE)]

+        ext = info_dict['ext']
        if protocol in ('m3u8', 'm3u8_native'):
            use_mpegts = (tmpfilename == '-') or self.params.get('hls_use_mpegts')
            if use_mpegts is None:
@@ -465,19 +458,22 @@ class FFmpegFD(ExternalFD):
                args += ['-f', 'mpegts']
            else:
                args += ['-f', 'mp4']
-                if (ffpp.basename == 'ffmpeg' and is_outdated_version(ffpp._versions['ffmpeg'], '3.2', False)) and (not info_dict.get('acodec') or info_dict['acodec'].split('.')[0] in ('aac', 'mp4a')):
+                if (ffpp.basename == 'ffmpeg' and ffpp._features.get('needs_adtstoasc')) and (not info_dict.get('acodec') or info_dict['acodec'].split('.')[0] in ('aac', 'mp4a')):
                    args += ['-bsf:a', 'aac_adtstoasc']
        elif protocol == 'rtmp':
            args += ['-f', 'flv']
+        elif ext == 'mp4' and tmpfilename == '-':
+            args += ['-f', 'mpegts']
        else:
-            args += ['-f', EXT_TO_OUT_FORMATS.get(info_dict['ext'], info_dict['ext'])]
+            args += ['-f', EXT_TO_OUT_FORMATS.get(ext, ext)]
+        args += self._configuration_args(('_o1', '_o', ''))

        args = [encodeArgument(opt) for opt in args]
        args.append(encodeFilename(ffpp._ffmpeg_filename_argument(tmpfilename), True))
        self._debug_cmd(args)

-        proc = subprocess.Popen(args, stdin=subprocess.PIPE, env=env)
+        proc = Popen(args, stdin=subprocess.PIPE, env=env)
        if url in ('-', 'pipe:'):
            self.on_process_started(proc, proc.stdin)
        try:
@@ -489,7 +485,7 @@ class FFmpegFD(ExternalFD):
            # streams). Note that Windows is not affected and produces playable
            # files (see https://github.com/ytdl-org/youtube-dl/issues/8300).
            if isinstance(e, KeyboardInterrupt) and sys.platform != 'win32' and url not in ('-', 'pipe:'):
-                process_communicate_or_kill(proc, b'q')
+                proc.communicate_or_kill(b'q')
            else:
                proc.kill()
                proc.wait()
@@ -504,7 +500,7 @@ class AVconvFD(FFmpegFD):
_BY_NAME = dict(
    (klass.get_basename(), klass)
    for name, klass in globals().items()
-    if name.endswith('FD') and name != 'ExternalFD'
+    if name.endswith('FD') and name not in ('ExternalFD', 'FragmentFD')
)
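To make the new stream-mapping logic above concrete, here is the same selection run on two invented format dicts: each requested format contributes one -map argument, choosing audio or video from its codec fields and honouring manifest_stream_number:

requested_formats = [
    {'acodec': 'none', 'vcodec': 'avc1', 'manifest_stream_number': 0},
    {'acodec': 'mp4a', 'vcodec': 'none', 'manifest_stream_number': 2},
]

args = []
for i, fmt in enumerate(requested_formats):
    stream_number = fmt.get('manifest_stream_number', 0)
    a_or_v = 'a' if fmt.get('acodec') != 'none' else 'v'
    args.extend(['-map', f'{i}:{a_or_v}:{stream_number}'])

print(args)  # ['-map', '0:v:0', '-map', '1:a:2']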

View File

@@ -380,7 +380,7 @@ class F4mFD(FragmentFD):
        base_url_parsed = compat_urllib_parse_urlparse(base_url)

-        self._start_frag_download(ctx)
+        self._start_frag_download(ctx, info_dict)

        frag_index = 0
        while fragments_list:
@@ -434,6 +434,6 @@ class F4mFD(FragmentFD):
                    msg = 'Missed %d fragments' % (fragments_list[0][1] - (frag_i + 1))
                    self.report_warning(msg)

-        self._finish_frag_download(ctx)
+        self._finish_frag_download(ctx, info_dict)

        return True

View File

@@ -3,12 +3,7 @@ from __future__ import division, unicode_literals
import os
import time
import json
+from math import ceil

-try:
-    from Crypto.Cipher import AES
-    can_decrypt_frag = True
-except ImportError:
-    can_decrypt_frag = False

try:
    import concurrent.futures
@@ -18,6 +13,7 @@ except ImportError:
from .common import FileDownloader
from .http import HttpFD
+from ..aes import aes_cbc_decrypt_bytes
from ..compat import (
    compat_urllib_error,
    compat_struct_pack,
@@ -35,6 +31,10 @@ class HttpQuietDownloader(HttpFD):
    def to_screen(self, *args, **kargs):
        pass

+    def report_retry(self, err, count, retries):
+        super().to_screen(
+            f'[download] Got server HTTP error: {err}. Retrying (attempt {count} of {self.format_retries(retries)}) ...')


class FragmentFD(FileDownloader):
    """
@@ -48,6 +48,7 @@ class FragmentFD(FileDownloader):
                        Skip unavailable fragments (DASH and hlsnative only)
    keep_fragments:     Keep downloaded fragments on disk after downloading is
                        finished
+    concurrent_fragment_downloads:  The number of threads to use for native hls and dash downloads
    _no_ytdl_file:      Don't use .ytdl file

    For each incomplete fragment download yt-dlp keeps on disk a special
@@ -76,16 +77,17 @@ class FragmentFD(FileDownloader):
            '\r[download] Got server HTTP error: %s. Retrying fragment %d (attempt %d of %s) ...'
            % (error_to_compat_str(err), frag_index, count, self.format_retries(retries)))

-    def report_skip_fragment(self, frag_index):
-        self.to_screen('[download] Skipping fragment %d ...' % frag_index)
+    def report_skip_fragment(self, frag_index, err=None):
+        err = f' {err};' if err else ''
+        self.to_screen(f'[download]{err} Skipping fragment {frag_index:d} ...')

    def _prepare_url(self, info_dict, url):
        headers = info_dict.get('http_headers')
        return sanitized_Request(url, None, headers) if headers else url

-    def _prepare_and_start_frag_download(self, ctx):
+    def _prepare_and_start_frag_download(self, ctx, info_dict):
        self._prepare_frag_download(ctx)
-        self._start_frag_download(ctx)
+        self._start_frag_download(ctx, info_dict)

    def __do_ytdl_file(self, ctx):
        return not ctx['live'] and not ctx['tmpfilename'] == '-' and not self.params.get('_no_ytdl_file')
@@ -105,17 +107,19 @@ class FragmentFD(FileDownloader):
    def _write_ytdl_file(self, ctx):
        frag_index_stream, _ = sanitize_open(self.ytdl_filename(ctx['filename']), 'w')
-        downloader = {
-            'current_fragment': {
-                'index': ctx['fragment_index'],
-            },
-        }
-        if 'extra_state' in ctx:
-            downloader['extra_state'] = ctx['extra_state']
-        if ctx.get('fragment_count') is not None:
-            downloader['fragment_count'] = ctx['fragment_count']
-        frag_index_stream.write(json.dumps({'downloader': downloader}))
-        frag_index_stream.close()
+        try:
+            downloader = {
+                'current_fragment': {
+                    'index': ctx['fragment_index'],
+                },
+            }
+            if 'extra_state' in ctx:
+                downloader['extra_state'] = ctx['extra_state']
+            if ctx.get('fragment_count') is not None:
+                downloader['fragment_count'] = ctx['fragment_count']
+            frag_index_stream.write(json.dumps({'downloader': downloader}))
+        finally:
+            frag_index_stream.close()
    def _download_fragment(self, ctx, frag_url, info_dict, headers=None, request_data=None):
        fragment_filename = '%s-Frag%d' % (ctx['tmpfilename'], ctx['fragment_index'])
@@ -123,6 +127,7 @@ class FragmentFD(FileDownloader):
            'url': frag_url,
            'http_headers': headers or info_dict.get('http_headers'),
            'request_data': request_data,
+            'ctx_id': ctx.get('ctx_id'),
        }
        success = ctx['dl'].download(fragment_filename, fragment_info_dict)
        if not success:
@@ -167,7 +172,7 @@ class FragmentFD(FileDownloader):
            self.ydl,
            {
                'continuedl': True,
-                'quiet': True,
+                'quiet': self.params.get('quiet'),
                'noprogress': True,
                'ratelimit': self.params.get('ratelimit'),
                'retries': self.params.get('retries', 0),
@@ -219,9 +224,10 @@ class FragmentFD(FileDownloader):
            'complete_frags_downloaded_bytes': resume_len,
        })

-    def _start_frag_download(self, ctx):
+    def _start_frag_download(self, ctx, info_dict):
        resume_len = ctx['complete_frags_downloaded_bytes']
        total_frags = ctx['total_frags']
+        ctx_id = ctx.get('ctx_id')
        # This dict stores the download progress, it's updated by the progress
        # hook
        state = {
@@ -236,6 +242,7 @@ class FragmentFD(FileDownloader):
        start = time.time()
        ctx.update({
            'started': start,
+            'fragment_started': start,
            # Amount of fragment's bytes downloaded by the time of the previous
            # frag progress hook invocation
            'prev_frag_downloaded_bytes': 0,
@@ -245,9 +252,16 @@ class FragmentFD(FileDownloader):
            if s['status'] not in ('downloading', 'finished'):
                return

+            if ctx_id is not None and s.get('ctx_id') != ctx_id:
+                return
+
+            state['max_progress'] = ctx.get('max_progress')
+            state['progress_idx'] = ctx.get('progress_idx')
+
            time_now = time.time()
            state['elapsed'] = time_now - start
            frag_total_bytes = s.get('total_bytes') or 0
+            s['fragment_info_dict'] = s.pop('info_dict', {})
            if not ctx['live']:
                estimated_size = (
                    (ctx['complete_frags_downloaded_bytes'] + frag_total_bytes)
@@ -259,6 +273,9 @@ class FragmentFD(FileDownloader):
                ctx['fragment_index'] = state['fragment_index']
                state['downloaded_bytes'] += frag_total_bytes - ctx['prev_frag_downloaded_bytes']
                ctx['complete_frags_downloaded_bytes'] = state['downloaded_bytes']
+                ctx['speed'] = state['speed'] = self.calc_speed(
+                    ctx['fragment_started'], time_now, frag_total_bytes)
+                ctx['fragment_started'] = time.time()
                ctx['prev_frag_downloaded_bytes'] = 0
            else:
                frag_downloaded_bytes = s['downloaded_bytes']
@@ -267,16 +284,16 @@ class FragmentFD(FileDownloader):
                state['eta'] = self.calc_eta(
                    start, time_now, estimated_size - resume_len,
                    state['downloaded_bytes'] - resume_len)
-                state['speed'] = s.get('speed') or ctx.get('speed')
-                ctx['speed'] = state['speed']
+                ctx['speed'] = state['speed'] = self.calc_speed(
+                    ctx['fragment_started'], time_now, frag_downloaded_bytes)
                ctx['prev_frag_downloaded_bytes'] = frag_downloaded_bytes
-            self._hook_progress(state)
+            self._hook_progress(state, info_dict)
        ctx['dl'].add_progress_hook(frag_progress_hook)

        return start

-    def _finish_frag_download(self, ctx):
+    def _finish_frag_download(self, ctx, info_dict):
        ctx['dest_stream'].close()
        if self.__do_ytdl_file(ctx):
            ytdl_filename = encodeFilename(self.ytdl_filename(ctx['filename']))
@@ -303,7 +320,10 @@ class FragmentFD(FileDownloader):
            'filename': ctx['filename'],
            'status': 'finished',
            'elapsed': elapsed,
-        })
+            'ctx_id': ctx.get('ctx_id'),
+            'max_progress': ctx.get('max_progress'),
+            'progress_idx': ctx.get('progress_idx'),
+        }, info_dict)
    def _prepare_external_frag_download(self, ctx):
        if 'live' not in ctx:
@@ -326,22 +346,81 @@ class FragmentFD(FileDownloader):
            'fragment_index': 0,
        })

-    def download_and_append_fragments(self, ctx, fragments, info_dict, pack_func=None):
+    def decrypter(self, info_dict):
_key_cache = {}
def _get_key(url):
if url not in _key_cache:
_key_cache[url] = self.ydl.urlopen(self._prepare_url(info_dict, url)).read()
return _key_cache[url]
def decrypt_fragment(fragment, frag_content):
decrypt_info = fragment.get('decrypt_info')
if not decrypt_info or decrypt_info['METHOD'] != 'AES-128':
return frag_content
iv = decrypt_info.get('IV') or compat_struct_pack('>8xq', fragment['media_sequence'])
decrypt_info['KEY'] = decrypt_info.get('KEY') or _get_key(info_dict.get('_decryption_key_url') or decrypt_info['URI'])
# Don't decrypt the content in tests since the data is explicitly truncated and it's not to a valid block
# size (see https://github.com/ytdl-org/youtube-dl/pull/27660). Tests only care that the correct data downloaded,
# not what it decrypts to.
if self.params.get('test', False):
return frag_content
decrypted_data = aes_cbc_decrypt_bytes(frag_content, decrypt_info['KEY'], iv)
return decrypted_data[:-decrypted_data[-1]]
return decrypt_fragment
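Two details of decrypt_fragment() above are easy to miss: when a segment carries no explicit IV, HLS (RFC 8216) derives it from the media sequence number, and the decrypted payload ends with PKCS#7 padding that has to be stripped. A self-contained illustration of both:

import struct

def default_iv(media_sequence):
    # 16-byte IV: 8 zero bytes followed by the sequence number, big-endian
    # (same layout as the compat_struct_pack('>8xq', ...) call above)
    return struct.pack('>8xq', media_sequence)

def strip_pkcs7(decrypted):
    # the last byte says how many padding bytes to drop
    return decrypted[:-decrypted[-1]]

assert default_iv(5) == b'\x00' * 15 + b'\x05'
assert strip_pkcs7(b'hello world\x05\x05\x05\x05\x05') == b'hello world'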
def download_and_append_fragments_multiple(self, *args, pack_func=None, finish_func=None):
'''
@params (ctx1, fragments1, info_dict1), (ctx2, fragments2, info_dict2), ...
all args must be either tuple or list
'''
max_progress = len(args)
if max_progress == 1:
return self.download_and_append_fragments(*args[0], pack_func=pack_func, finish_func=finish_func)
max_workers = self.params.get('concurrent_fragment_downloads', max_progress)
if max_progress > 1:
self._prepare_multiline_status(max_progress)
def thread_func(idx, ctx, fragments, info_dict, tpe):
ctx['max_progress'] = max_progress
ctx['progress_idx'] = idx
return self.download_and_append_fragments(ctx, fragments, info_dict, pack_func=pack_func, finish_func=finish_func, tpe=tpe)
class FTPE(concurrent.futures.ThreadPoolExecutor):
# has to stop this or it's going to wait on the worker thread itself
def __exit__(self, exc_type, exc_val, exc_tb):
pass
spins = []
for idx, (ctx, fragments, info_dict) in enumerate(args):
tpe = FTPE(ceil(max_workers / max_progress))
job = tpe.submit(thread_func, idx, ctx, fragments, info_dict, tpe)
spins.append((tpe, job))
result = True
for tpe, job in spins:
try:
result = result and job.result()
finally:
tpe.shutdown(wait=True)
return result
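A stripped-down sketch of the scheduling above: each (ctx, fragments, info_dict) job gets its own pool sized so the combined worker count stays near concurrent_fragment_downloads, all jobs are submitted before any result is awaited, and every pool is shut down even if a job fails. The job callables here are dummies:

import concurrent.futures
from math import ceil

def run_jobs(jobs, max_workers=4):
    max_progress = len(jobs)
    pools = []
    for idx, job in enumerate(jobs):
        tpe = concurrent.futures.ThreadPoolExecutor(ceil(max_workers / max_progress))
        pools.append((tpe, tpe.submit(job, idx)))  # idx plays the role of progress_idx
    result = True
    for tpe, future in pools:
        try:
            result = result and future.result()
        finally:
            tpe.shutdown(wait=True)
    return result

print(run_jobs([lambda idx: True, lambda idx: True]))  # True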
def download_and_append_fragments(self, ctx, fragments, info_dict, *, pack_func=None, finish_func=None, tpe=None):
        fragment_retries = self.params.get('fragment_retries', 0)
-        skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)
-        test = self.params.get('test', False)
+        is_fatal = (lambda idx: idx == 0) if self.params.get('skip_unavailable_fragments', True) else (lambda _: True)

        if not pack_func:
            pack_func = lambda frag_content, _: frag_content

        def download_fragment(fragment, ctx):
            frag_index = ctx['fragment_index'] = fragment['frag_index']
-            headers = info_dict.get('http_headers', {})
+            headers = info_dict.get('http_headers', {}).copy()
            byte_range = fragment.get('byte_range')
            if byte_range:
                headers['Range'] = 'bytes=%d-%d' % (byte_range['start'], byte_range['end'] - 1)

            # Never skip the first fragment
-            fatal = (fragment.get('index') or frag_index) == 0 or not skip_unavailable_fragments
+            fatal = is_fatal(fragment.get('index') or (frag_index - 1))
            count, frag_content = 0, None
            while count <= fragment_retries:
                try:
@@ -372,25 +451,10 @@ class FragmentFD(FileDownloader):
                    return False, frag_index
            return frag_content, frag_index
def decrypt_fragment(fragment, frag_content):
decrypt_info = fragment.get('decrypt_info')
if not decrypt_info or decrypt_info['METHOD'] != 'AES-128':
return frag_content
iv = decrypt_info.get('IV') or compat_struct_pack('>8xq', fragment['media_sequence'])
decrypt_info['KEY'] = decrypt_info.get('KEY') or self.ydl.urlopen(
self._prepare_url(info_dict, info_dict.get('_decryption_key_url') or decrypt_info['URI'])).read()
# Don't decrypt the content in tests since the data is explicitly truncated and it's not to a valid block
# size (see https://github.com/ytdl-org/youtube-dl/pull/27660). Tests only care that the correct data downloaded,
# not what it decrypts to.
if test:
return frag_content
return AES.new(decrypt_info['KEY'], AES.MODE_CBC, iv).decrypt(frag_content)
        def append_fragment(frag_content, frag_index, ctx):
            if not frag_content:
-                fatal = frag_index == 1 or not skip_unavailable_fragments
-                if not fatal:
-                    self.report_skip_fragment(frag_index)
+                if not is_fatal(frag_index - 1):
+                    self.report_skip_fragment(frag_index, 'fragment not found')
                    return True
                else:
                    ctx['dest_stream'].close()
@@ -400,20 +464,18 @@ class FragmentFD(FileDownloader):
            self._append_fragment(ctx, pack_func(frag_content, frag_index))
            return True

+        decrypt_fragment = self.decrypter(info_dict)
+
        max_workers = self.params.get('concurrent_fragment_downloads', 1)
        if can_threaded_download and max_workers > 1:

            def _download_fragment(fragment):
-                try:
-                    ctx_copy = ctx.copy()
-                    frag_content, frag_index = download_fragment(fragment, ctx_copy)
-                    return fragment, frag_content, frag_index, ctx_copy.get('fragment_filename_sanitized')
-                except Exception:
-                    # Return immediately on exception so that it is raised in the main thread
-                    return
+                ctx_copy = ctx.copy()
+                frag_content, frag_index = download_fragment(fragment, ctx_copy)
+                return fragment, frag_content, frag_index, ctx_copy.get('fragment_filename_sanitized')

            self.report_warning('The download speed shown is only of one thread. This is a known issue and patches are welcome')
-            with concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
+            with tpe or concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
                for fragment, frag_content, frag_index, frag_filename in pool.map(_download_fragment, fragments):
                    ctx['fragment_filename_sanitized'] = frag_filename
                    ctx['fragment_index'] = frag_index
@@ -427,4 +489,8 @@ class FragmentFD(FileDownloader):
            if not result:
                return False

-        self._finish_frag_download(ctx)
+        if finish_func is not None:
+            ctx['dest_stream'].write(finish_func())
+            ctx['dest_stream'].flush()
+        self._finish_frag_download(ctx, info_dict)
        return True
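The pack_func/finish_func contract used above is small: pack_func may rewrite each fragment before it is appended, and finish_func can emit trailing bytes once every fragment is in. A toy stand-alone version writing to a local file (the names are illustrative):

def append_all(dest_path, fragments, pack_func=None, finish_func=None):
    pack_func = pack_func or (lambda frag_content, frag_index: frag_content)
    with open(dest_path, 'wb') as dest:
        for frag_index, frag_content in enumerate(fragments, start=1):
            dest.write(pack_func(frag_content, frag_index))
        if finish_func is not None:
            dest.write(finish_func())  # e.g. cues still held in a dedup window

append_all('out.bin', [b'a', b'b'], pack_func=lambda c, i: c.upper(), finish_func=lambda: b'\n')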

View File

@@ -4,11 +4,12 @@ import re
import io
import binascii

-from ..downloader import _get_real_downloader
-from .fragment import FragmentFD, can_decrypt_frag
+from ..downloader import get_suitable_downloader
+from .fragment import FragmentFD
from .external import FFmpegFD

from ..compat import (
+    compat_pycrypto_AES,
    compat_urlparse,
)
from ..utils import ( from ..utils import (
@@ -29,7 +30,7 @@ class HlsFD(FragmentFD):
    FD_NAME = 'hlsnative'

    @staticmethod
-    def can_download(manifest, info_dict, allow_unplayable_formats=False, with_crypto=can_decrypt_frag):
+    def can_download(manifest, info_dict, allow_unplayable_formats=False):
        UNSUPPORTED_FEATURES = [
            # r'#EXT-X-BYTERANGE',  # playlists composed of byte ranges of media files [2]
@@ -56,9 +57,6 @@ class HlsFD(FragmentFD):
        def check_results():
            yield not info_dict.get('is_live')
-            is_aes128_enc = '#EXT-X-KEY:METHOD=AES-128' in manifest
-            yield with_crypto or not is_aes128_enc
-            yield not (is_aes128_enc and r'#EXT-X-BYTERANGE' in manifest)
            for feature in UNSUPPORTED_FEATURES:
                yield not re.search(feature, manifest)
        return all(check_results())
@@ -71,25 +69,27 @@ class HlsFD(FragmentFD):
        man_url = urlh.geturl()
        s = urlh.read().decode('utf-8', 'ignore')

-        if not self.can_download(s, info_dict, self.params.get('allow_unplayable_formats')):
-            if info_dict.get('extra_param_to_segment_url') or info_dict.get('_decryption_key_url'):
-                self.report_error('pycryptodome not found. Please install')
-                return False
-            if self.can_download(s, info_dict, with_crypto=True):
-                self.report_warning('pycryptodome is needed to download this file natively')
+        can_download, message = self.can_download(s, info_dict, self.params.get('allow_unplayable_formats')), None
+        if can_download and not compat_pycrypto_AES and '#EXT-X-KEY:METHOD=AES-128' in s:
+            if FFmpegFD.available():
+                can_download, message = False, 'The stream has AES-128 encryption and pycryptodomex is not available'
+            else:
+                message = ('The stream has AES-128 encryption and neither ffmpeg nor pycryptodomex are available; '
+                           'Decryption will be performed natively, but will be extremely slow')
+        if not can_download:
+            message = message or 'Unsupported features have been detected'
            fd = FFmpegFD(self.ydl, self.params)
-            self.report_warning(
-                '%s detected unsupported features; extraction will be delegated to %s' % (self.FD_NAME, fd.get_basename()))
-            # TODO: Make progress updates work without hooking twice
-            # for ph in self._progress_hooks:
-            #     fd.add_progress_hook(ph)
+            self.report_warning(f'{message}; extraction will be delegated to {fd.get_basename()}')
            return fd.real_download(filename, info_dict)
+        elif message:
+            self.report_warning(message)

        is_webvtt = info_dict['ext'] == 'vtt'
        if is_webvtt:
            real_downloader = None  # Packing the fragments is not currently supported for external downloader
        else:
-            real_downloader = _get_real_downloader(info_dict, 'm3u8_frag_urls', self.params, None)
+            real_downloader = get_suitable_downloader(
+                info_dict, self.params, None, protocol='m3u8_frag_urls', to_stdout=(filename == '-'))
        if real_downloader and not real_downloader.supports_manifest(s):
            real_downloader = None
        if real_downloader:
@@ -133,7 +133,7 @@ class HlsFD(FragmentFD):
        if real_downloader:
            self._prepare_external_frag_download(ctx)
        else:
-            self._prepare_and_start_frag_download(ctx)
+            self._prepare_and_start_frag_download(ctx, info_dict)

        extra_state = ctx.setdefault('extra_state', {})
@@ -174,6 +174,7 @@ class HlsFD(FragmentFD):
                        'byte_range': byte_range,
                        'media_sequence': media_sequence,
                    })
+                    media_sequence += 1

            elif line.startswith('#EXT-X-MAP'):
                if format_index and discontinuity_count != format_index:
@@ -198,6 +199,7 @@ class HlsFD(FragmentFD):
                        'byte_range': byte_range,
                        'media_sequence': media_sequence
                    })
+                    media_sequence += 1

                if map_info.get('BYTERANGE'):
                    splitted_byte_range = map_info.get('BYTERANGE').split('@')
@@ -237,91 +239,111 @@ class HlsFD(FragmentFD):
            elif line.startswith('#EXT-X-DISCONTINUITY'):
                discontinuity_count += 1
            i += 1
-            media_sequence += 1

        # We only download the first fragment during the test
        if self.params.get('test', False):
            fragments = [fragments[0] if fragments else None]

        if real_downloader:
-            info_copy = info_dict.copy()
-            info_copy['fragments'] = fragments
+            info_dict['fragments'] = fragments
            fd = real_downloader(self.ydl, self.params)
            # TODO: Make progress updates work without hooking twice
            # for ph in self._progress_hooks:
            #     fd.add_progress_hook(ph)
-            success = fd.real_download(filename, info_copy)
-            if not success:
-                return False
+            return fd.real_download(filename, info_dict)

+        if is_webvtt:
def pack_fragment(frag_content, frag_index):
output = io.StringIO()
adjust = 0
overflow = False
mpegts_last = None
for block in webvtt.parse_fragment(frag_content):
if isinstance(block, webvtt.CueBlock):
extra_state['webvtt_mpegts_last'] = mpegts_last
if overflow:
extra_state['webvtt_mpegts_adjust'] += 1
overflow = False
block.start += adjust
block.end += adjust
dedup_window = extra_state.setdefault('webvtt_dedup_window', [])
ready = []
i = 0
is_new = True
while i < len(dedup_window):
wcue = dedup_window[i]
wblock = webvtt.CueBlock.from_json(wcue)
i += 1
if wblock.hinges(block):
wcue['end'] = block.end
is_new = False
continue
if wblock == block:
is_new = False
continue
if wblock.end > block.start:
continue
ready.append(wblock)
i -= 1
del dedup_window[i]
if is_new:
dedup_window.append(block.as_json)
for block in ready:
block.write_into(output)
# we only emit cues once they fall out of the duplicate window
continue
elif isinstance(block, webvtt.Magic):
# take care of MPEG PES timestamp overflow
if block.mpegts is None:
block.mpegts = 0
extra_state.setdefault('webvtt_mpegts_adjust', 0)
block.mpegts += extra_state['webvtt_mpegts_adjust'] << 33
if block.mpegts < extra_state.get('webvtt_mpegts_last', 0):
overflow = True
block.mpegts += 1 << 33
mpegts_last = block.mpegts
if frag_index == 1:
extra_state['webvtt_mpegts'] = block.mpegts or 0
extra_state['webvtt_local'] = block.local or 0
# XXX: block.local = block.mpegts = None ?
else:
if block.mpegts is not None and block.local is not None:
adjust = (
(block.mpegts - extra_state.get('webvtt_mpegts', 0))
- (block.local - extra_state.get('webvtt_local', 0))
)
continue
elif isinstance(block, webvtt.HeaderBlock):
if frag_index != 1:
# XXX: this should probably be silent as well
# or verify that all segments contain the same data
self.report_warning(bug_reports_message(
'Discarding a %s block found in the middle of the stream; '
'if the subtitles display incorrectly,'
% (type(block).__name__)))
continue
block.write_into(output)
return output.getvalue().encode('utf-8')
def fin_fragments():
dedup_window = extra_state.get('webvtt_dedup_window')
if not dedup_window:
return b''
output = io.StringIO()
for cue in dedup_window:
webvtt.CueBlock.from_json(cue).write_into(output)
return output.getvalue().encode('utf-8')
self.download_and_append_fragments(
ctx, fragments, info_dict, pack_func=pack_fragment, finish_func=fin_fragments)
        else:
+            return self.download_and_append_fragments(ctx, fragments, info_dict)

-            if is_webvtt:
def pack_fragment(frag_content, frag_index):
output = io.StringIO()
adjust = 0
for block in webvtt.parse_fragment(frag_content):
if isinstance(block, webvtt.CueBlock):
block.start += adjust
block.end += adjust
dedup_window = extra_state.setdefault('webvtt_dedup_window', [])
cue = block.as_json
# skip the cue if an identical one appears
# in the window of potential duplicates
# and prune the window of unviable candidates
i = 0
skip = True
while i < len(dedup_window):
window_cue = dedup_window[i]
if window_cue == cue:
break
if window_cue['end'] >= cue['start']:
i += 1
continue
del dedup_window[i]
else:
skip = False
if skip:
continue
# add the cue to the window
dedup_window.append(cue)
elif isinstance(block, webvtt.Magic):
# take care of MPEG PES timestamp overflow
if block.mpegts is None:
block.mpegts = 0
extra_state.setdefault('webvtt_mpegts_adjust', 0)
block.mpegts += extra_state['webvtt_mpegts_adjust'] << 33
if block.mpegts < extra_state.get('webvtt_mpegts_last', 0):
extra_state['webvtt_mpegts_adjust'] += 1
block.mpegts += 1 << 33
extra_state['webvtt_mpegts_last'] = block.mpegts
if frag_index == 1:
extra_state['webvtt_mpegts'] = block.mpegts or 0
extra_state['webvtt_local'] = block.local or 0
# XXX: block.local = block.mpegts = None ?
else:
if block.mpegts is not None and block.local is not None:
adjust = (
(block.mpegts - extra_state.get('webvtt_mpegts', 0))
- (block.local - extra_state.get('webvtt_local', 0))
)
continue
elif isinstance(block, webvtt.HeaderBlock):
if frag_index != 1:
# XXX: this should probably be silent as well
# or verify that all segments contain the same data
self.report_warning(bug_reports_message(
'Discarding a %s block found in the middle of the stream; '
'if the subtitles display incorrectly,'
% (type(block).__name__)))
continue
block.write_into(output)
return output.getvalue().encode('utf-8')
else:
pack_fragment = None
self.download_and_append_fragments(ctx, fragments, info_dict, pack_fragment)
return True
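The WebVTT repacking above has to undo MPEG-TS timestamp wrap-around: the PES clock is 33 bits wide and overflows roughly every 26.5 hours. A distilled version of that bookkeeping (an illustration, not the downloader's exact state machine):

def unwrap_mpegts(timestamps):
    adjust = 0      # number of wrap-arounds seen so far
    last = 0
    unwrapped = []
    for ts in timestamps:
        ts += adjust << 33
        if ts < last:           # clock went backwards -> the 33-bit counter wrapped
            adjust += 1
            ts += 1 << 33
        last = ts
        unwrapped.append(ts)
    return unwrapped

print(unwrap_mpegts([8589934000, 500]))  # [8589934000, 8589935092]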

View File

@@ -48,8 +48,9 @@ class HttpFD(FileDownloader):
        is_test = self.params.get('test', False)
        chunk_size = self._TEST_FILE_SIZE if is_test else (
-            info_dict.get('downloader_options', {}).get('http_chunk_size')
-            or self.params.get('http_chunk_size') or 0)
+            self.params.get('http_chunk_size')
+            or info_dict.get('downloader_options', {}).get('http_chunk_size')
+            or 0)

        ctx.open_mode = 'wb'
        ctx.resume_len = 0
@@ -57,6 +58,7 @@ class HttpFD(FileDownloader):
        ctx.block_size = self.params.get('buffersize', 1024)
        ctx.start_time = time.time()
        ctx.chunk_size = None
+        throttle_start = None

        if self.params.get('continuedl', True):
            # Establish possible resume length
@@ -177,7 +179,7 @@ class HttpFD(FileDownloader):
                        'status': 'finished',
                        'downloaded_bytes': ctx.resume_len,
                        'total_bytes': ctx.resume_len,
-                    })
+                    }, info_dict)
                    raise SucceedDownload()
                else:
                    # The length does not match, we start the download over
@@ -189,13 +191,16 @@ class HttpFD(FileDownloader):
                    # Unexpected HTTP error
                    raise
                raise RetryDownload(err)
-            except socket.error as err:
-                if err.errno != errno.ECONNRESET:
-                    # Connection reset is no problem, just retry
-                    raise
+            except socket.timeout as err:
                raise RetryDownload(err)
+            except socket.error as err:
+                if err.errno in (errno.ECONNRESET, errno.ETIMEDOUT):
+                    # Connection reset is no problem, just retry
+                    raise RetryDownload(err)
+                raise

        def download():
+            nonlocal throttle_start
            data_len = ctx.data.info().get('Content-length', None)

            # Range HTTP header may be ignored/unsupported by a webserver
@@ -224,7 +229,6 @@ class HttpFD(FileDownloader):
            # measure time over whole while-loop, so slow_down() and best_block_size() work together properly
            now = None  # needed for slow_down() in the first loop run
            before = start  # start measuring
-            throttle_start = None

            def retry(e):
                to_stdout = ctx.tmpfilename == '-'
@@ -238,7 +242,7 @@ class HttpFD(FileDownloader):
            while True:
                try:
                    # Download and write
-                    data_block = ctx.data.read(block_size if data_len is None else min(block_size, data_len - byte_counter))
+                    data_block = ctx.data.read(block_size if not is_test else min(block_size, data_len - byte_counter))
                # socket.timeout is a subclass of socket.error but may not have
                # errno set
                except socket.timeout as e:
@@ -310,7 +314,8 @@ class HttpFD(FileDownloader):
                    'eta': eta,
                    'speed': speed,
                    'elapsed': now - ctx.start_time,
-                })
+                    'ctx_id': info_dict.get('ctx_id'),
+                }, info_dict)

                if data_len is not None and byte_counter == data_len:
                    break
@@ -324,7 +329,7 @@ class HttpFD(FileDownloader):
                    if ctx.stream is not None and ctx.tmpfilename != '-':
                        ctx.stream.close()
                    raise ThrottledDownload()
-                else:
+                elif speed:
                    throttle_start = None

            if not is_test and ctx.chunk_size and ctx.data_len is not None and byte_counter < ctx.data_len:
@@ -357,7 +362,8 @@ class HttpFD(FileDownloader):
            'filename': ctx.filename,
            'status': 'finished',
            'elapsed': time.time() - ctx.start_time,
-        })
+            'ctx_id': info_dict.get('ctx_id'),
+        }, info_dict)

        return True
@@ -369,6 +375,8 @@ class HttpFD(FileDownloader):
                count += 1
                if count <= retries:
                    self.report_retry(e.source_error, count, retries)
+                else:
+                    self.to_screen(f'[download] Got server HTTP error: {e.source_error}')
                continue
            except NextFragment:
                continue
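throttle_start, now shared via nonlocal above, tracks how long the measured speed has stayed below --throttled-rate before the download is aborted and retried. A distilled version of that check (the 3-second window is an assumption used only for illustration):

import time

class ThrottledDownload(Exception):
    pass

def check_throttle(state, speed, throttled_rate, now=None, window=3):
    """Call once per progress tick; `state` holds 'throttle_start' between calls."""
    now = now if now is not None else time.time()
    if throttled_rate and speed is not None and speed < throttled_rate:
        state.setdefault('throttle_start', now)
        if now - state['throttle_start'] > window:
            raise ThrottledDownload()
    elif speed:
        # mirrors the 'elif speed:' fix above: only clear the marker once a
        # real (non-zero) speed has been measured again
        state.pop('throttle_start', None)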

View File

@@ -246,7 +246,7 @@ class IsmFD(FragmentFD):
            'total_frags': len(segments),
        }

-        self._prepare_and_start_frag_download(ctx)
+        self._prepare_and_start_frag_download(ctx, info_dict)

        extra_state = ctx.setdefault('extra_state', {
            'ism_track_written': False,
@@ -284,6 +284,6 @@ class IsmFD(FragmentFD):
                    self.report_error('giving up after %s fragment retries' % fragment_retries)
                    return False

-        self._finish_frag_download(ctx)
+        self._finish_frag_download(ctx, info_dict)

        return True

View File

@@ -122,7 +122,7 @@ body > figure > img {
            'total_frags': len(fragments),
        }

-        self._prepare_and_start_frag_download(ctx)
+        self._prepare_and_start_frag_download(ctx, info_dict)

        extra_state = ctx.setdefault('extra_state', {
            'header_written': False,
@@ -198,5 +198,5 @@ body > figure > img {
        ctx['dest_stream'].write(
            b'--%b--\r\n\r\n' % frag_boundary.encode('us-ascii'))

-        self._finish_frag_download(ctx)
+        self._finish_frag_download(ctx, info_dict)

        return True

View File

@@ -4,9 +4,9 @@ from __future__ import unicode_literals
import threading

from .common import FileDownloader
-from ..downloader import _get_real_downloader
+from ..downloader import get_suitable_downloader
from ..extractor.niconico import NiconicoIE
-from ..compat import compat_urllib_request
+from ..utils import sanitized_Request


class NiconicoDmcFD(FileDownloader):
@@ -20,7 +20,7 @@ class NiconicoDmcFD(FileDownloader):
        ie = NiconicoIE(self.ydl)
        info_dict, heartbeat_info_dict = ie._get_heartbeat_info(info_dict)

-        fd = _get_real_downloader(info_dict, params=self.params)(self.ydl, self.params)
+        fd = get_suitable_downloader(info_dict, params=self.params)(self.ydl, self.params)

        success = download_complete = False
        timer = [None]
@@ -29,9 +29,11 @@ class NiconicoDmcFD(FileDownloader):
        heartbeat_data = heartbeat_info_dict['data'].encode()
        heartbeat_interval = heartbeat_info_dict.get('interval', 30)

+        request = sanitized_Request(heartbeat_url, heartbeat_data)
+
        def heartbeat():
            try:
-                compat_urllib_request.urlopen(url=heartbeat_url, data=heartbeat_data)
+                self.ydl.urlopen(request).read()
            except Exception:
                self.to_screen('[%s] Heartbeat failed' % self.FD_NAME)
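The heartbeat above keeps the DMC session alive while the real downloader runs. A generic sketch of the same keep-alive pattern using only the standard library (URL, payload and interval are placeholders):

import threading
import urllib.request

def start_heartbeat(url, data, interval=30):
    stop = threading.Event()

    def beat():
        if stop.is_set():
            return
        try:
            urllib.request.urlopen(urllib.request.Request(url, data=data)).read()
        except Exception:
            print('[heartbeat] failed')
        timer = threading.Timer(interval, beat)
        timer.daemon = True
        timer.start()

    beat()
    return stop  # call stop.set() once the download has finished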

View File

@@ -12,6 +12,7 @@ from ..utils import (
    encodeFilename,
    encodeArgument,
    get_exe_version,
+    Popen,
)
@@ -26,7 +27,7 @@ class RtmpFD(FileDownloader):
        start = time.time()
        resume_percent = None
        resume_downloaded_data_len = None
-        proc = subprocess.Popen(args, stderr=subprocess.PIPE)
+        proc = Popen(args, stderr=subprocess.PIPE)
        cursor_in_new_line = True
        proc_stderr_closed = False
        try:
@@ -66,7 +67,7 @@ class RtmpFD(FileDownloader):
                        'eta': eta,
                        'elapsed': time_now - start,
                        'speed': speed,
-                    })
+                    }, info_dict)
                    cursor_in_new_line = False
                else:
                    # no percent for live streams
@@ -82,7 +83,7 @@ class RtmpFD(FileDownloader):
                        'status': 'downloading',
                        'elapsed': time_now - start,
                        'speed': speed,
-                    })
+                    }, info_dict)
                    cursor_in_new_line = False
            elif self.params.get('verbose', False):
                if not cursor_in_new_line:
@@ -208,7 +209,7 @@ class RtmpFD(FileDownloader):
                'filename': filename,
                'status': 'finished',
                'elapsed': time.time() - started,
-            })
+            }, info_dict)
            return True
        else:
            self.to_stderr('\n')
else: else:
self.to_stderr('\n') self.to_stderr('\n')
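`subprocess.Popen` is swapped for the project's `Popen` wrapper from `yt_dlp.utils`. Purely as an illustration of why such a wrapper is useful (this is an assumption, not the actual yt-dlp implementation), a thin subclass can centralise platform quirks and cleanup in one place:

```python
import subprocess
import sys

class Popen(subprocess.Popen):
    """Illustrative sketch only: apply shared defaults to every child process."""

    def __init__(self, *args, **kwargs):
        if sys.platform == 'win32':
            # e.g. keep console windows hidden in frozen/GUI builds
            si = subprocess.STARTUPINFO()
            si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
            kwargs.setdefault('startupinfo', si)
        super().__init__(*args, **kwargs)

    def communicate_or_kill(self, *args, **kwargs):
        # guarantee the child does not outlive an interrupted communicate()
        try:
            return self.communicate(*args, **kwargs)
        except BaseException:
            self.kill()
            self.wait()
            raise
```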

View File

@@ -39,7 +39,7 @@ class RtspFD(FileDownloader):
                 'total_bytes': fsize,
                 'filename': filename,
                 'status': 'finished',
-            })
+            }, info_dict)
            return True
        else:
            self.to_stderr('\n')

View File

@@ -44,7 +44,7 @@ class YoutubeLiveChatFD(FragmentFD):
             return self._download_fragment(ctx, url, info_dict, http_headers, data)
         def parse_actions_replay(live_chat_continuation):
-            offset = continuation_id = None
+            offset = continuation_id = click_tracking_params = None
             processed_fragment = bytearray()
             for action in live_chat_continuation.get('actions', []):
                 if 'replayChatItemAction' in action:
@@ -53,17 +53,34 @@ class YoutubeLiveChatFD(FragmentFD):
                 processed_fragment.extend(
                     json.dumps(action, ensure_ascii=False).encode('utf-8') + b'\n')
             if offset is not None:
-                continuation_id = try_get(
+                continuation = try_get(
                     live_chat_continuation,
-                    lambda x: x['continuations'][0]['liveChatReplayContinuationData']['continuation'])
+                    lambda x: x['continuations'][0]['liveChatReplayContinuationData'], dict)
+                if continuation:
+                    continuation_id = continuation.get('continuation')
+                    click_tracking_params = continuation.get('clickTrackingParams')
             self._append_fragment(ctx, processed_fragment)
-            return continuation_id, offset
+            return continuation_id, offset, click_tracking_params
+        def try_refresh_replay_beginning(live_chat_continuation):
+            # choose the second option that contains the unfiltered live chat replay
+            refresh_continuation = try_get(
+                live_chat_continuation,
+                lambda x: x['header']['liveChatHeaderRenderer']['viewSelector']['sortFilterSubMenuRenderer']['subMenuItems'][1]['continuation']['reloadContinuationData'], dict)
+            if refresh_continuation:
+                # no data yet but required to call _append_fragment
+                self._append_fragment(ctx, b'')
+                refresh_continuation_id = refresh_continuation.get('continuation')
+                offset = 0
+                click_tracking_params = refresh_continuation.get('trackingParams')
+                return refresh_continuation_id, offset, click_tracking_params
+            return parse_actions_replay(live_chat_continuation)
         live_offset = 0
         def parse_actions_live(live_chat_continuation):
             nonlocal live_offset
-            continuation_id = None
+            continuation_id = click_tracking_params = None
             processed_fragment = bytearray()
             for action in live_chat_continuation.get('actions', []):
                 timestamp = self.parse_live_timestamp(action)
@@ -84,45 +101,52 @@ class YoutubeLiveChatFD(FragmentFD):
             continuation_data = try_get(live_chat_continuation, continuation_data_getters, dict)
             if continuation_data:
                 continuation_id = continuation_data.get('continuation')
+                click_tracking_params = continuation_data.get('clickTrackingParams')
                 timeout_ms = int_or_none(continuation_data.get('timeoutMs'))
                 if timeout_ms is not None:
                     time.sleep(timeout_ms / 1000)
             self._append_fragment(ctx, processed_fragment)
-            return continuation_id, live_offset
+            return continuation_id, live_offset, click_tracking_params
-        if info_dict['protocol'] == 'youtube_live_chat_replay':
-            parse_actions = parse_actions_replay
-        elif info_dict['protocol'] == 'youtube_live_chat':
-            parse_actions = parse_actions_live
-        def download_and_parse_fragment(url, frag_index, request_data, headers):
+        def download_and_parse_fragment(url, frag_index, request_data=None, headers=None):
             count = 0
             while count <= fragment_retries:
                 try:
                     success, raw_fragment = dl_fragment(url, request_data, headers)
                     if not success:
-                        return False, None, None
+                        return False, None, None, None
-                    data = json.loads(raw_fragment)
+                    try:
+                        data = ie.extract_yt_initial_data(video_id, raw_fragment.decode('utf-8', 'replace'))
+                    except RegexNotFoundError:
+                        data = None
+                    if not data:
+                        data = json.loads(raw_fragment)
                     live_chat_continuation = try_get(
                         data,
                         lambda x: x['continuationContents']['liveChatContinuation'], dict) or {}
-                    continuation_id, offset = parse_actions(live_chat_continuation)
-                    return True, continuation_id, offset
+                    if info_dict['protocol'] == 'youtube_live_chat_replay':
+                        if frag_index == 1:
+                            continuation_id, offset, click_tracking_params = try_refresh_replay_beginning(live_chat_continuation)
+                        else:
+                            continuation_id, offset, click_tracking_params = parse_actions_replay(live_chat_continuation)
+                    elif info_dict['protocol'] == 'youtube_live_chat':
+                        continuation_id, offset, click_tracking_params = parse_actions_live(live_chat_continuation)
+                    return True, continuation_id, offset, click_tracking_params
                 except compat_urllib_error.HTTPError as err:
                     count += 1
                     if count <= fragment_retries:
                         self.report_retry_fragment(err, frag_index, count, fragment_retries)
             if count > fragment_retries:
                 self.report_error('giving up after %s fragment retries' % fragment_retries)
-                return False, None, None
+                return False, None, None, None
-        self._prepare_and_start_frag_download(ctx)
+        self._prepare_and_start_frag_download(ctx, info_dict)
         success, raw_fragment = dl_fragment(info_dict['url'])
         if not success:
             return False
         try:
-            data = ie._extract_yt_initial_data(video_id, raw_fragment.decode('utf-8', 'replace'))
+            data = ie.extract_yt_initial_data(video_id, raw_fragment.decode('utf-8', 'replace'))
         except RegexNotFoundError:
             return False
         continuation_id = try_get(
@@ -131,7 +155,7 @@ class YoutubeLiveChatFD(FragmentFD):
             # no data yet but required to call _append_fragment
             self._append_fragment(ctx, b'')
-        ytcfg = ie._extract_ytcfg(video_id, raw_fragment.decode('utf-8', 'replace'))
+        ytcfg = ie.extract_ytcfg(video_id, raw_fragment.decode('utf-8', 'replace'))
         if not ytcfg:
             return False
@@ -142,10 +166,13 @@ class YoutubeLiveChatFD(FragmentFD):
         visitor_data = try_get(innertube_context, lambda x: x['client']['visitorData'], str)
         if info_dict['protocol'] == 'youtube_live_chat_replay':
             url = 'https://www.youtube.com/youtubei/v1/live_chat/get_live_chat_replay?key=' + api_key
+            chat_page_url = 'https://www.youtube.com/live_chat_replay?continuation=' + continuation_id
         elif info_dict['protocol'] == 'youtube_live_chat':
             url = 'https://www.youtube.com/youtubei/v1/live_chat/get_live_chat?key=' + api_key
+            chat_page_url = 'https://www.youtube.com/live_chat?continuation=' + continuation_id
         frag_index = offset = 0
+        click_tracking_params = None
         while continuation_id is not None:
             frag_index += 1
             request_data = {
@@ -154,17 +181,22 @@ class YoutubeLiveChatFD(FragmentFD):
             }
             if frag_index > 1:
                 request_data['currentPlayerState'] = {'playerOffsetMs': str(max(offset - 5000, 0))}
-            headers = ie._generate_api_headers(ytcfg, visitor_data=visitor_data)
-            headers.update({'content-type': 'application/json'})
-            fragment_request_data = json.dumps(request_data, ensure_ascii=False).encode('utf-8') + b'\n'
-            success, continuation_id, offset = download_and_parse_fragment(
-                url, frag_index, fragment_request_data, headers)
+                if click_tracking_params:
+                    request_data['context']['clickTracking'] = {'clickTrackingParams': click_tracking_params}
+                headers = ie.generate_api_headers(ytcfg=ytcfg, visitor_data=visitor_data)
+                headers.update({'content-type': 'application/json'})
+                fragment_request_data = json.dumps(request_data, ensure_ascii=False).encode('utf-8') + b'\n'
+                success, continuation_id, offset, click_tracking_params = download_and_parse_fragment(
+                    url, frag_index, fragment_request_data, headers)
+            else:
+                success, continuation_id, offset, click_tracking_params = download_and_parse_fragment(
+                    chat_page_url, frag_index)
             if not success:
                 return False
             if test:
                 break
-        self._finish_frag_download(ctx)
+        self._finish_frag_download(ctx, info_dict)
         return True
     @staticmethod
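Beyond the wider return tuples, the replay path now pulls the very first fragment from the chat page itself (`chat_page_url`) and echoes `clickTrackingParams` back on every subsequent request. The underlying shape is a continuation-token loop; a simplified standalone sketch with placeholder endpoint and field names:

```python
import json
import urllib.request

API_URL = 'https://example.com/chat/get'  # placeholder continuation endpoint

def fetch_fragment(continuation, click_tracking=None):
    """Fetch one fragment; return (actions, next_continuation, next_click_tracking)."""
    payload = {'continuation': continuation}
    if click_tracking:
        # echo the tracking token back, as the updated downloader now does
        payload['context'] = {'clickTracking': {'clickTrackingParams': click_tracking}}
    req = urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(),
        headers={'content-type': 'application/json'})
    data = json.load(urllib.request.urlopen(req))
    cont = (data.get('continuations') or [{}])[0]
    return data.get('actions', []), cont.get('continuation'), cont.get('clickTrackingParams')

def download_chat(first_continuation):
    continuation, click_tracking = first_continuation, None
    while continuation:
        actions, continuation, click_tracking = fetch_fragment(continuation, click_tracking)
        for action in actions:
            print(json.dumps(action, ensure_ascii=False))
```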

View File

@@ -1,14 +1,15 @@
-from __future__ import unicode_literals
+import os
 from ..utils import load_plugins
-try:
-    from .lazy_extractors import *
-    from .lazy_extractors import _ALL_CLASSES
-    _LAZY_LOADER = True
-    _PLUGIN_CLASSES = []
-except ImportError:
-    _LAZY_LOADER = False
+_LAZY_LOADER = False
+if not os.environ.get('YTDLP_NO_LAZY_EXTRACTORS'):
+    try:
+        from .lazy_extractors import *
+        from .lazy_extractors import _ALL_CLASSES
+        _LAZY_LOADER = True
+    except ImportError:
+        pass
 if not _LAZY_LOADER:
     from .extractors import *
@@ -19,8 +20,8 @@ if not _LAZY_LOADER:
     ]
     _ALL_CLASSES.append(GenericIE)
 _PLUGIN_CLASSES = load_plugins('extractor', 'IE', globals())
-_ALL_CLASSES = _PLUGIN_CLASSES + _ALL_CLASSES
+_ALL_CLASSES = list(_PLUGIN_CLASSES.values()) + _ALL_CLASSES
 def gen_extractor_classes():
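Lazy extractor loading is now skipped whenever the `YTDLP_NO_LAZY_EXTRACTORS` environment variable is set, and `load_plugins` returns a dict whose values are prepended to `_ALL_CLASSES`. To force the full (non-lazy) import path, e.g. while debugging an extractor, the variable just has to be set before the package is imported:

```python
import os

# Must be set before yt_dlp (or yt_dlp.extractor) is imported
os.environ['YTDLP_NO_LAZY_EXTRACTORS'] = '1'

import yt_dlp

# Extractor classes now come straight from yt_dlp.extractor.extractors
print(len(list(yt_dlp.extractor.gen_extractor_classes())))
```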

View File

@@ -1,7 +1,6 @@
 # coding: utf-8
 from __future__ import unicode_literals
-import re
 from .amp import AMPIE
 from .common import InfoExtractor
@@ -59,7 +58,7 @@ class AbcNewsVideoIE(AMPIE):
     }]
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
+        mobj = self._match_valid_url(url)
         display_id = mobj.group('display_id')
         video_id = mobj.group('id')
         info_dict = self._extract_feed_info(

View File

@@ -1,7 +1,6 @@
 # coding: utf-8
 from __future__ import unicode_literals
-import re
 from .common import InfoExtractor
 from ..compat import compat_str
@@ -55,7 +54,7 @@ class ABCOTVSIE(InfoExtractor):
     }
     def _real_extract(self, url):
-        site, display_id, video_id = re.match(self._VALID_URL, url).groups()
+        site, display_id, video_id = self._match_valid_url(url).groups()
         display_id = display_id or video_id
         station = self._SITE_MAP[site]

View File

@@ -1,7 +1,6 @@
 # coding: utf-8
 from __future__ import unicode_literals
-import re
 from .common import InfoExtractor
 from ..utils import (
@@ -80,7 +79,7 @@ class ACastIE(ACastBaseIE):
     }]
     def _real_extract(self, url):
-        channel, display_id = re.match(self._VALID_URL, url).groups()
+        channel, display_id = self._match_valid_url(url).groups()
         episode = self._call_api(
             '%s/episodes/%s' % (channel, display_id),
             display_id, {'showInfo': 'true'})
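This file, like abcnews.py and abcotvs.py above, drops its module-level `import re` in favour of `self._match_valid_url(url)`, which matches against the class's own `_VALID_URL`. A hypothetical extractor using the same pattern (names and URLs are made up for illustration):

```python
from yt_dlp.extractor.common import InfoExtractor

class ExampleIE(InfoExtractor):
    # Hypothetical extractor, not part of yt-dlp
    _VALID_URL = r'https?://(?:www\.)?example\.com/(?P<show>[^/]+)/(?P<id>\d+)'

    def _real_extract(self, url):
        mobj = self._match_valid_url(url)   # replaces re.match(self._VALID_URL, url)
        show, video_id = mobj.group('show'), mobj.group('id')
        return {
            'id': video_id,
            'title': '%s %s' % (show, video_id),
            'url': 'https://example.com/media/%s.mp4' % video_id,
        }
```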

View File

@@ -15,6 +15,7 @@ from ..compat import (
     compat_ord,
 )
 from ..utils import (
+    ass_subtitles_timecode,
     bytes_to_intlist,
     bytes_to_long,
     ExtractorError,
@@ -68,10 +69,6 @@ class ADNIE(InfoExtractor):
         'end': 4,
     }
-    @staticmethod
-    def _ass_subtitles_timecode(seconds):
-        return '%01d:%02d:%02d.%02d' % (seconds / 3600, (seconds % 3600) / 60, seconds % 60, (seconds % 1) * 100)
     def _get_subtitles(self, sub_url, video_id):
         if not sub_url:
             return None
@@ -117,8 +114,8 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
                 continue
             alignment = self._POS_ALIGN_MAP.get(position_align, 2) + self._LINE_ALIGN_MAP.get(line_align, 0)
             ssa += os.linesep + 'Dialogue: Marked=0,%s,%s,Default,,0,0,0,,%s%s' % (
-                self._ass_subtitles_timecode(start),
-                self._ass_subtitles_timecode(end),
+                ass_subtitles_timecode(start),
+                ass_subtitles_timecode(end),
                 '{\\a%d}' % alignment if alignment != 2 else '',
                 text.replace('\n', '\\N').replace('<i>', '{\\i1}').replace('</i>', '{\\i0}'))
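The private `_ass_subtitles_timecode` staticmethod becomes `ass_subtitles_timecode` in `yt_dlp.utils`; the formula itself is unchanged. Reproducing it locally to show the H:MM:SS.CC (centisecond) output it produces:

```python
def ass_subtitles_timecode(seconds):
    # hours, minutes, seconds, centiseconds - the timestamp format used by ASS/SSA
    return '%01d:%02d:%02d.%02d' % (
        seconds / 3600, (seconds % 3600) / 60, seconds % 60, (seconds % 1) * 100)

assert ass_subtitles_timecode(0) == '0:00:00.00'
assert ass_subtitles_timecode(3661.25) == '1:01:01.25'   # 1 h 1 min 1.25 s
```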

View File

@@ -1,6 +1,7 @@
 # coding: utf-8
 from __future__ import unicode_literals
+import json
 import re
 import time
 import xml.etree.ElementTree as etree
@@ -36,6 +37,11 @@ MSO_INFO = {
         'username_field': 'email',
         'password_field': 'loginpassword',
     },
+    'RCN': {
+        'name': 'RCN',
+        'username_field': 'username',
+        'password_field': 'password',
+    },
     'Rogers': {
         'name': 'Rogers',
         'username_field': 'UserName',
@@ -61,6 +67,11 @@ MSO_INFO = {
         'username_field': 'IDToken1',
         'password_field': 'IDToken2',
     },
+    'Spectrum': {
+        'name': 'Spectrum',
+        'username_field': 'IDToken1',
+        'password_field': 'IDToken2',
+    },
     'Philo': {
         'name': 'Philo',
         'username_field': 'ident'
@@ -70,6 +81,11 @@ MSO_INFO = {
         'username_field': 'IDToken1',
         'password_field': 'IDToken2',
     },
+    'Cablevision': {
+        'name': 'Optimum/Cablevision',
+        'username_field': 'j_username',
+        'password_field': 'j_password',
+    },
     'thr030': {
         'name': '3 Rivers Communications'
     },
@@ -1324,6 +1340,11 @@ MSO_INFO = {
     'cou060': {
         'name': 'Zito Media'
     },
+    'slingtv': {
+        'name': 'Sling TV',
+        'username_field': 'username',
+        'password_field': 'password',
+    },
 }
@@ -1492,7 +1513,8 @@ class AdobePassIE(InfoExtractor):
                     # In general, if you're connecting from a Verizon-assigned IP,
                     # you will not actually pass your credentials.
                     provider_redirect_page, urlh = provider_redirect_page_res
-                    if 'Please wait ...' in provider_redirect_page:
+                    # From non-Verizon IP, still gave 'Please wait', but noticed N==Y; will need to try on Verizon IP
+                    if 'Please wait ...' in provider_redirect_page and '\'N\'== "Y"' not in provider_redirect_page:
                         saml_redirect_url = self._html_search_regex(
                             r'self\.parent\.location=(["\'])(?P<url>.+?)\1',
                             provider_redirect_page,
@@ -1500,7 +1522,8 @@ class AdobePassIE(InfoExtractor):
                         saml_login_page = self._download_webpage(
                             saml_redirect_url, video_id,
                             'Downloading SAML Login Page')
-                    else:
+                    elif 'Verizon FiOS - sign in' in provider_redirect_page:
+                        # FXNetworks from non-Verizon IP
                         saml_login_page_res = post_form(
                             provider_redirect_page_res, 'Logging in', {
                                 mso_info['username_field']: username,
@@ -1510,6 +1533,26 @@ class AdobePassIE(InfoExtractor):
                         if 'Please try again.' in saml_login_page:
                             raise ExtractorError(
                                 'We\'re sorry, but either the User ID or Password entered is not correct.')
+                    else:
+                        # ABC from non-Verizon IP
+                        saml_redirect_url = self._html_search_regex(
+                            r'var\surl\s*=\s*(["\'])(?P<url>.+?)\1',
+                            provider_redirect_page,
+                            'SAML Redirect URL', group='url')
+                        saml_redirect_url = saml_redirect_url.replace(r'\/', '/')
+                        saml_redirect_url = saml_redirect_url.replace(r'\-', '-')
+                        saml_redirect_url = saml_redirect_url.replace(r'\x26', '&')
+                        saml_login_page = self._download_webpage(
+                            saml_redirect_url, video_id,
+                            'Downloading SAML Login Page')
+                        saml_login_page, urlh = post_form(
+                            [saml_login_page, saml_redirect_url], 'Logging in', {
+                                mso_info['username_field']: username,
+                                mso_info['password_field']: password,
+                            })
+                        if 'Please try again.' in saml_login_page:
+                            raise ExtractorError(
+                                'Failed to login, incorrect User ID or Password.')
                     saml_login_url = self._search_regex(
                         r'xmlHttp\.open\("POST"\s*,\s*(["\'])(?P<url>.+?)\1',
                         saml_login_page, 'SAML Login URL', group='url')
@@ -1524,6 +1567,75 @@ class AdobePassIE(InfoExtractor):
                         }), headers={
                             'Content-Type': 'application/x-www-form-urlencoded'
                         })
+                elif mso_id == 'Spectrum':
+                    # Spectrum's login for is dynamically loaded via JS so we need to hardcode the flow
+                    # as a one-off implementation.
+                    provider_redirect_page, urlh = provider_redirect_page_res
+                    provider_login_page_res = post_form(
+                        provider_redirect_page_res, self._DOWNLOADING_LOGIN_PAGE)
+                    saml_login_page, urlh = provider_login_page_res
+                    relay_state = self._search_regex(
+                        r'RelayState\s*=\s*"(?P<relay>.+?)";',
+                        saml_login_page, 'RelayState', group='relay')
+                    saml_request = self._search_regex(
+                        r'SAMLRequest\s*=\s*"(?P<saml_request>.+?)";',
+                        saml_login_page, 'SAMLRequest', group='saml_request')
+                    login_json = {
+                        mso_info['username_field']: username,
+                        mso_info['password_field']: password,
+                        'RelayState': relay_state,
+                        'SAMLRequest': saml_request,
+                    }
+                    saml_response_json = self._download_json(
+                        'https://tveauthn.spectrum.net/tveauthentication/api/v1/manualAuth', video_id,
+                        'Downloading SAML Response',
+                        data=json.dumps(login_json).encode(),
+                        headers={
+                            'Content-Type': 'application/json',
+                            'Accept': 'application/json',
+                        })
+                    self._download_webpage(
+                        saml_response_json['SAMLRedirectUri'], video_id,
+                        'Confirming Login', data=urlencode_postdata({
+                            'SAMLResponse': saml_response_json['SAMLResponse'],
+                            'RelayState': relay_state,
+                        }), headers={
+                            'Content-Type': 'application/x-www-form-urlencoded'
+                        })
+                elif mso_id == 'slingtv':
+                    # SlingTV has a meta-refresh based authentication, but also
+                    # looks at the tab history to count the number of times the
+                    # browser has been on a page
+                    first_bookend_page, urlh = provider_redirect_page_res
+                    hidden_data = self._hidden_inputs(first_bookend_page)
+                    hidden_data['history'] = 1
+                    provider_login_page_res = self._download_webpage_handle(
+                        urlh.geturl(), video_id, 'Sending first bookend',
+                        query=hidden_data)
+                    provider_association_redirect, urlh = post_form(
+                        provider_login_page_res, 'Logging in', {
+                            mso_info['username_field']: username,
+                            mso_info['password_field']: password
+                        })
+                    provider_refresh_redirect_url = extract_redirect_url(
+                        provider_association_redirect, url=urlh.geturl())
+                    last_bookend_page, urlh = self._download_webpage_handle(
+                        provider_refresh_redirect_url, video_id,
+                        'Downloading Auth Association Redirect Page')
+                    hidden_data = self._hidden_inputs(last_bookend_page)
+                    hidden_data['history'] = 3
+                    mvpd_confirm_page_res = self._download_webpage_handle(
+                        urlh.geturl(), video_id, 'Sending final bookend',
+                        query=hidden_data)
+                    post_form(mvpd_confirm_page_res, 'Confirming Login')
                 else:
                     # Some providers (e.g. DIRECTV NOW) have another meta refresh
                     # based redirect that should be followed.
@@ -1536,10 +1648,13 @@ class AdobePassIE(InfoExtractor):
                         'Downloading Provider Redirect Page (meta refresh)')
                     provider_login_page_res = post_form(
                         provider_redirect_page_res, self._DOWNLOADING_LOGIN_PAGE)
-                    mvpd_confirm_page_res = post_form(provider_login_page_res, 'Logging in', {
+                    form_data = {
                         mso_info.get('username_field', 'username'): username,
-                        mso_info.get('password_field', 'password'): password,
-                    })
+                        mso_info.get('password_field', 'password'): password
+                    }
+                    if mso_id == 'Cablevision':
+                        form_data['_eventId_proceed'] = ''
+                    mvpd_confirm_page_res = post_form(provider_login_page_res, 'Logging in', form_data)
                     if mso_id != 'Rogers':
                         post_form(mvpd_confirm_page_res, 'Confirming Login')
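The new Sling TV branch replays the provider's hidden-input "bookend" forms with a history counter, and Cablevision only needs the extra `_eventId_proceed` field on the ordinary form post. The common building block in these flows is "read the hidden inputs out of a page, add credentials, post them back"; a rough standalone sketch with placeholder URLs and field names (not the AdobePass code itself):

```python
import re
import urllib.parse
import urllib.request

def hidden_inputs(html):
    """Collect name/value pairs from <input type="hidden" ...> tags."""
    inputs = {}
    for attrs in re.findall(r'<input([^>]+)>', html):
        if 'hidden' not in attrs:
            continue
        name = re.search(r'name=["\']([^"\']+)', attrs)
        value = re.search(r'value=["\']([^"\']*)', attrs)
        if name:
            inputs[name.group(1)] = value.group(1) if value else ''
    return inputs

def post_login(form_url, login_html, username, password):
    data = hidden_inputs(login_html)   # keep whatever the provider pre-filled
    data.update({'username': username, 'password': password})
    req = urllib.request.Request(
        form_url, data=urllib.parse.urlencode(data).encode())
    return urllib.request.urlopen(req).read().decode()

# Placeholder usage:
# page = urllib.request.urlopen('https://example.com/login').read().decode()
# post_login('https://example.com/login', page, 'user', 'pass')
```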

View File

@@ -9,6 +9,7 @@ from ..utils import (
     float_or_none,
     int_or_none,
     ISO639Utils,
+    join_nonempty,
     OnDemandPagedList,
     parse_duration,
     str_or_none,
@@ -132,7 +133,7 @@ class AdobeTVIE(AdobeTVBaseIE):
     }
     def _real_extract(self, url):
-        language, show_urlname, urlname = re.match(self._VALID_URL, url).groups()
+        language, show_urlname, urlname = self._match_valid_url(url).groups()
         if not language:
             language = 'en'
@@ -178,7 +179,7 @@ class AdobeTVShowIE(AdobeTVPlaylistBaseIE):
     _process_data = AdobeTVBaseIE._parse_video_data
     def _real_extract(self, url):
-        language, show_urlname = re.match(self._VALID_URL, url).groups()
+        language, show_urlname = self._match_valid_url(url).groups()
         if not language:
             language = 'en'
         query = {
@@ -215,7 +216,7 @@ class AdobeTVChannelIE(AdobeTVPlaylistBaseIE):
             show_data['url'], 'AdobeTVShow', str_or_none(show_data.get('id')))
     def _real_extract(self, url):
-        language, channel_urlname, category_urlname = re.match(self._VALID_URL, url).groups()
+        language, channel_urlname, category_urlname = self._match_valid_url(url).groups()
         if not language:
             language = 'en'
         query = {
@@ -263,7 +264,7 @@ class AdobeTVVideoIE(AdobeTVBaseIE):
                 continue
             formats.append({
                 'filesize': int_or_none(source.get('kilobytes') or None, invscale=1000),
-                'format_id': '-'.join(filter(None, [source.get('format'), source.get('label')])),
+                'format_id': join_nonempty(source.get('format'), source.get('label')),
                 'height': int_or_none(source.get('height') or None),
                 'tbr': int_or_none(source.get('bitrate') or None),
                 'width': int_or_none(source.get('width') or None),
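`join_nonempty` replaces the `'-'.join(filter(None, [...]))` idiom for building `format_id`. Assuming its default delimiter is `-`, the behaviour is equivalent to this small sketch:

```python
def join_nonempty(*values, delim='-'):
    # sketch of the new yt_dlp.utils helper: drop falsy parts, join the rest
    return delim.join(str(v) for v in values if v)

assert join_nonempty('http', '1080p') == 'http-1080p'
assert join_nonempty(None, 'original') == 'original'
assert join_nonempty('dash', None, 720, delim='_') == 'dash_720'
```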

Some files were not shown because too many files have changed in this diff.