Useful script to collect downloads from several sites


For university, I have to stay up to date with lecture documents. Since my university doesn't offer RSS feeds, I wrote a little script that collects files from web pages.

You want this if you have several web pages offering downloads that you don't want to check manually. Just register each URL together with a CSS selector for its download links in the attached script and run it: it will fetch all your files and store them in a single place, or sort them into their respective directories.
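The attached script itself is not reproduced here, so as a rough idea of the approach, here is a minimal sketch in Python (the original may well differ), assuming the third-party packages requests and beautifulsoup4. The site names, URLs, selectors and directories are made up for illustration.

```python
#!/usr/bin/env python3
# Minimal sketch of such a download collector (not the attached script):
# each site is registered with a URL, a CSS selector for its download links,
# and a target directory. Requires "requests" and "beautifulsoup4".
import os
from getpass import getpass
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

# Hypothetical configuration -- this is the "header" you would edit.
SITES = [
    {
        "name": "ComSys",
        "url": "https://example.edu/comsys/exercises/",
        "selector": "a[href$='.pdf']",   # CSS selector matching the download links
        "target": os.path.expanduser("~/Downloads/ComSys"),
        "auth": False,                   # set True for password-protected pages
    },
]
USER = "deborah"

def collect(site):
    print(f"# {site['name']}:")
    auth = None
    if site["auth"]:
        password = getpass(f"Please enter the password for '{USER}' on '{site['name']}': ")
        auth = (USER, password)
    page = requests.get(site["url"], auth=auth)
    page.raise_for_status()
    os.makedirs(site["target"], exist_ok=True)
    for link in BeautifulSoup(page.text, "html.parser").select(site["selector"]):
        file_url = urljoin(site["url"], link["href"])
        filename = file_url.rsplit("/", 1)[-1]
        path = os.path.join(site["target"], filename)
        if os.path.exists(path):          # skip files that were fetched earlier
            print(f"- already loaded: {filename}")
            continue
        print(f"- downloading: {path} ...", end=" ")
        with open(path, "wb") as f:
            f.write(requests.get(file_url, auth=auth).content)
        print("done.")

if __name__ == "__main__":
    for site in SITES:
        collect(site)
```

In this sketch a file counts as "already loaded" simply when a file of the same name already exists in the target directory, and password-protected pages are fetched with HTTP Basic authentication after prompting once per site.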

Edit the header of the file (providing your own data), save it to /usr/local/bin, make it executable with chmod +x skript, and enjoy. Running it looks like this:

deborah:~ $> skript
# ComSys:
- already loaded: blatt01.pdf
- already loaded: blatt02.pdf
- already loaded: blatt03.pdf
- already loaded: blatt04.pdf
- already loaded: blatt05.pdf
- already loaded: blatt06.pdf
- already loaded: blatt07.pdf
- already loaded: blatt08.pdf
- already loaded: blatt09.pdf

Please enter the password for 'deborah' on 'Computer Science': 
# Computer Science:
- downloading: /Users/deborah/Desktop/WUM-WS1112-Fallstudie1.pdf ... done.
- already loaded: WUM-WS1112-Fallstudie2.pdf
- already loaded: WUM-Klausur-SS11.pdf
- already loaded: WUM-Skript-09-WS1112.pdf
- already loaded: WUM-WS1112-Uebung01.pdf
- already loaded: WUM-WS1112-Uebung02.pdf
- already loaded: WUM-WS1112-Uebung03.pdf
- already loaded: WUM-WS1112-Uebung04.pdf
- already loaded: WUM-WS1112-Uebung05.pdf
- already loaded: WUM-WS1112-Uebung06.pdf
- already loaded: WUM-WS1112-Uebung07.pdf
Source code in this card is licensed under the MIT License.
Posted by Dominik Schöler to makandra dev (2011-12-06 14:44)