Some sites require a user to log in or authenticate before a file can be downloaded. To do so with wget, make use of the --post-data and --save-cookies arguments as described here. We first have to view the source of the HTML login page, determine the names of the form variables to post, and pass this information on…
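A minimal sketch of the approach, assuming a hypothetical login URL and form field names (user, pass) — the real names must be read from the site's own login form source:

```shell
# Log in once, posting the form fields, and save the session cookies.
# URL and field names here are placeholders, not from any real site.
wget --save-cookies cookies.txt \
     --keep-session-cookies \
     --post-data 'user=myname&pass=mypassword' \
     https://example.com/login

# Reuse the saved cookies to fetch the protected file.
wget --load-cookies cookies.txt https://example.com/protected/file.pdf
```

The --keep-session-cookies flag matters because many sites use session (non-persistent) cookies, which wget would otherwise discard when saving.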
This Lifehacker post shows some pretty useful capabilities of wget.
I do more and more downloading from the command line these days, mainly using wget and cURL. This is a good comparison of the two. Both are great at downloading. cURL supports more protocols (beyond HTTP, HTTPS, and FTP) and is bi-directional, while wget can download files recursively (links on a webpage, and links that appear…
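The contrast can be sketched in two commands — the URLs and credentials below are placeholders:

```shell
# wget: recursive download, following links up to 2 levels deep,
# without ascending to the parent directory.
wget -r -l 2 -np https://example.com/docs/

# cURL: bi-directional, e.g. uploading a file over FTP with -T.
curl -T report.pdf ftp://example.com/uploads/ --user name:password
```

Roughly: reach for wget when mirroring or crawling, and for cURL when the transfer goes both ways or uses a less common protocol.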
Having an electronic version of a book is great. I can skim and search through it very easily. Although I do find hardcopies useful at times, I prefer softcopies 99% of the time due to their accessibility and searchability. Most universities have deals with publishers whereby students can access the electronic version of a book…
Sometimes I want to download all the files on a page. The FlashGot plugin works, but it involves clicking, which can be a pain if you have a lot of pages to download. Recently I've been wanting to download PDFs off a page. Found out that I can do so with wget on the command line:…
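One way to do this with wget, assuming a hypothetical page URL:

```shell
# Grab every PDF linked from the page.
# -r: recurse, -l 1: only one level deep (just the links on this page),
# -nd: don't recreate the site's directory tree locally,
# -np: never ascend to the parent directory,
# -A pdf: accept (keep) only files ending in .pdf.
wget -r -l 1 -nd -np -A pdf https://example.com/papers/
```

wget still fetches the HTML page itself to find the links, but with -A pdf it deletes anything that doesn't match after downloading.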