OK, just to let you know: I used that script for a while, but in my opinion it had quite a lot of problems.
- the main script got stuck from time to time (I had to write a "check.sh" to analyze the logs and restart it if nothing had been output for a few minutes; something like the sketch a bit further down)
- the main script threw weird internal Python errors about variables no longer being recognized (?!)
- you have to "follow" the models (which is really a pain, as it leads to login trouble afterwards)
- it is very difficult to make any changes on my side (as it's written in Python, which I'm not a fan of)
- it is not very robust (checking the 14th or 15th column of a "ps" output, for example)
As fixing all this would have been quite a hassle for me (again, I don't know Python and I'm really not interested in that language), I've rewritten the whole thing, and I now have a solution that (for me at least) works much better.
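For the record, that "check.sh" was nothing clever. A minimal sketch of that kind of watchdog (log path, script name and restart command are placeholders, and it watches the log file's modification time rather than parsing its content) is basically:
Quote:
#!/bin/sh
# Watchdog sketch: restart the capture script if its log has been silent
# for more than 5 minutes. Names and paths are placeholders.
LOG=/var/log/capture.log
MAX_SILENCE=300   # seconds

while [ true ]; do
    now=$(date +%s)
    last=$(stat -c %Y "$LOG" 2>/dev/null || echo 0)
    if [ $((now - last)) -gt "$MAX_SILENCE" ]; then
        pkill -f capture.py                         # placeholder script name
        nohup python capture.py >> "$LOG" 2>&1 &
    fi
    sleep 60
done
|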
I have two Docker containers.
The first one "rips" the videos.
The second one re-encodes them from .flv to .mp4 (only the audio is re-encoded, see below).
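Just to give an idea of the plumbing (image names and host paths below are placeholders, not my actual setup): both containers simply mount the same host folder, so the ripper's output folders are directly visible to the converter.
Quote:
# Hypothetical example: the two containers share one host folder tree.
docker run -d --name cb-ripper \
    -v /data/cb:/downloads/cb \
    my/cb-ripper:latest

docker run -d --name cb-converter \
    -v /data/cb:/downloads/cb \
    my/cb-converter:latest
|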
The "ripper" Docker container is a java process which allows :
- to publish (very basic, but enough for me) web pages to administrate the wishlist online (subscribe / unsubscribe), check what is and what has been recorded, cancel a recording, ... (wishlist.txt is still editable on the disk but it's easier to just use the basic web pages)
- to have a robust internal behavior ("0 file size" (which occurs really less now) are checked in the background for example)
- to check several URLs and on several sub-pages (no need to follom models) (checking 10 pages takes less than 1 sec. on my core i3)
- to move (if configured) the downloaded files to another folder
- to have an easy-to-modify file pattern (destination, filename structure, date pattern)
- to work either as a daemon or on demand
- ...
Quote:
09:48:31.683 [THREAD-MAIN] INFO ********.rip.plugins.cb.CBRipper - Configuration : CBRipperConfiguration : connectionTimeout [5000ms], daemonMode [true], downloadedPath [downloaded], downloadingPath [downloading], filenameDateTimePattern [YYYY-MM-dd_HH-mm], filenamePattern [${model}_${date}.flv], onlineModelsURL [https://chaturbate.com/female-cams/,.../couple-cams/], onlineModelsURLPageCount [5], password [******], processedPath [processed], processingPath [processing], rtmpBinary [/opt/rtmpdump-ksv/rtmpdump], token [*****], username [*****], waitTimeout [45000ms], wishlistFileName [/downloads/cb/wishlist.txt]
|
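To give a rough idea of the "scan the listing pages instead of following models" logic, here is a shell sketch of roughly the same check (the real thing is done in Java, and the page URL format here is only indicative):
Quote:
#!/bin/sh
# Rough shell equivalent of the online check (the real ripper does this in
# Java; the page URL format is only indicative).
WISHLIST=/downloads/cb/wishlist.txt
URLS="https://chaturbate.com/female-cams/ https://chaturbate.com/couple-cams/"
PAGES=5

for url in $URLS; do
    for p in $(seq 1 $PAGES); do
        curl -s "${url}?page=${p}"
    done
done > /tmp/online.html

# a model is considered online if her room link shows up on one of the pages
while read -r model; do
    grep -q "/${model}/" /tmp/online.html && echo "${model} is online"
done < "$WISHLIST"
|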
The "converter" Docker container is just a wrapper around ffmpeg (built with libspleex) (real ffmpeg, and not avconv) with the following conversion in a "while [ true ]" shell loop :
Quote:
< /dev/null ffmpeg -threads 2 -i "$INPUT" -c copy -acodec mp3 "$OUTPUT" > ffmpeg.log 2>&1
|
(it may seem quite simple, but it took me a while to get this running; before Dockerizing this part I had a very old avconv binary)
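For those wondering, the "built with libspeex" part is just a standard ffmpeg source build inside the converter image, something along these lines (simplified; package names and options are indicative only):
Quote:
# Rough outline of building an ffmpeg with Speex support inside the
# converter image (simplified; package names are indicative).
apt-get update && apt-get install -y build-essential git yasm pkg-config libspeex-dev
git clone https://git.ffmpeg.org/ffmpeg.git /tmp/ffmpeg
cd /tmp/ffmpeg
./configure --enable-libspeex --disable-doc
make -j2 && make install
|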
The folder organization is as follows:
"ripper" container
- downloads files under "downloading/"
- moves them, once finished, to "downloaded/"
"converter" container
- move files from "downloaded/" to "converting/"
- mode them once finished under "converted/"
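To make that concrete, the converter loop is basically the following kind of thing (a simplified sketch, not the exact script; cleanup of the source .flv and error handling are left out):
Quote:
#!/bin/sh
# Simplified sketch of the converter loop using the folders above.
while [ true ]; do
    for f in downloaded/*.flv; do
        [ -e "$f" ] || continue                     # nothing to convert yet
        mv "$f" converting/                         # claim the file
        INPUT="converting/$(basename "$f")"
        OUTPUT="converting/$(basename "$f" .flv).mp4"
        < /dev/null ffmpeg -threads 2 -i "$INPUT" -c copy -acodec mp3 "$OUTPUT" > ffmpeg.log 2>&1 \
            && mv "$OUTPUT" converted/              # only finished files end up here
    done
    sleep 30
done
|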
Additionnally, a "syncing" operation transfers everything from my server to my NAS :
- move files from "converted/" to "syncing/"
- deletes them if transfer has been successful
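The sync itself is just something like this (rsync is only an example here; host and paths are placeholders):
Quote:
#!/bin/sh
# Sketch of the sync step. --remove-source-files deletes a file only once
# it has been transferred successfully.
mv converted/*.mp4 syncing/ 2>/dev/null
rsync -av --remove-source-files syncing/ nas:/volume1/videos/
|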
That way I never have any "synchronisation" problems between files and folders (every step has its own input and output folder).