BaseSeed
John
dingo at coco2.arach.net.au
Thu Sep 9 07:39:20 CDT 2004
James Gregory wrote:
>On Wed, 2004-09-08 at 14:04 -0700, Matt Zimmerman wrote:
>
>
>>On Thu, Sep 09, 2004 at 04:54:34AM +0800, John wrote:
>>
>>>How else do you do this?
>>>summer@Dolphin:~$ time lynx -dump http://www.x.com/ | tail
>>> 30. http://www.ebay.com/
>>> 31. http://www.paypal.com/cgi-bin/webscr
>>> 32. http://www.paypal.com/cgi-bin/webscr?cmd=p/gen/fdic-outside
>>> 33. http://www.paypal.com/cgi-bin/webscr?cmd=p/gen/privacy-outside
>>> 34. http://www.bbbonline.org/cks.asp?id=20111061155818568
>>>
>>>
>>>I regularly want a list of URLs for some reason, often to get a list of
>>>files to download with wget or (sometimes) with curl.
>>>
>>>
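(To spell out what I mean by "scriptable": I usually end up with something
roughly like the below -- untested as typed here, and the sed pattern is
only a guess at lynx's numbered-reference format, so treat it as a sketch.)

    #!/bin/sh
    # Dump the page with lynx, strip the "  NN. " prefix from the
    # reference list, and hand the resulting URLs to wget.
    lynx -dump http://www.x.com/ \
        | sed -n 's/^ *[0-9][0-9]*\. //p' \
        > urls.txt
    wget -i urls.txt

The nice part is that lynx resolves relative links for me, so the list is
ready for wget as-is.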
>>You don't need a browser at all if you only want to extract URLs.
>>
>>wget -O- http://www.x.com/ | urlview
>>
>>
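urlview wants a terminal, though, so it doesn't help inside a script. If I
drop it and use a grep instead, the pipeline might look something like this
(the regex is only a rough illustration, and it catches absolute URLs only,
since it works on the raw HTML rather than lynx's resolved links):

    wget -qO- http://www.x.com/ \
        | grep -Eo 'https?://[^" ]+' \
        | sort -u

That still misses relative links, which is why I keep going back to
lynx -dump.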
>
>You can also open the page in Mozilla and click 'Page Info'. There's a Links
>tab there with all the links on the page. But if you want to download
>everything on a page, wget -r will work.
>
>
>
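(As for wget -r: I assume that means something along the lines of the
command below, but that fetches the files themselves rather than giving me
the list of URLs I'm after.)

    wget -r -l 1 -np http://www.x.com/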
I can't do either of those in a script. You're missing the point.