I had a site that contained a load of PDFs I wanted to download. To save myself from clicking on each one, I did some googling and found a command that downloads every file ending in .pdf: cat index.html | grep -o -e 'http://[^[:space:]"]*\.pdf' | xargs wget. For an even better approach, you can wrap this in a little bash script that takes the URL as a parameter.
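The bash-script version might look something like this (the script name, and fetching the page with wget -qO- instead of a saved index.html, are my own choices):

```shell
#!/bin/bash
# Hypothetical wrapper script: fetch the page at the given URL, pull out
# every link ending in .pdf, and hand the list to wget for downloading.
# Usage: ./getpdfs.sh http://example.com/index.html
wget -qO- "$1" \
  | grep -o -e 'http://[^[:space:]"]*\.pdf' \
  | xargs -r wget
```

The -r flag tells xargs to do nothing if the page contains no PDF links at all, rather than running wget with no arguments.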
With help from Google and a Linux forum, I found a cool command that deletes everything in the current directory except the specified file or folder. Behold the command of destiny! rm -r `ls | grep -v 'snapshots'` Make sure you put backticks around the piped ls command and single quotes around the name of the file/folder you want to keep. Computers away!
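One caveat with the backtick version: the shell splits the output of ls on whitespace, so any filename containing a space gets mangled. A slightly safer sketch of the same idea uses find instead (the directory names here are just for illustration):

```shell
# Delete everything in the current directory except "snapshots".
# -mindepth 1 skips "." itself; -maxdepth 1 stays out of subdirectories;
# -exec ... {} + passes the matches to rm -r without word-splitting them.
find . -mindepth 1 -maxdepth 1 ! -name 'snapshots' -exec rm -r {} +
```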