A few weeks ago, I wrote about using SearchAroo as a spider to index
a site with DotLucene. I've written a new WebCrawler using SearchAroo as a base and turned
it into a library that can be reused for other applications.
Download Web Crawler (zip file with WebCrawler engine and sample web and forms apps)
Here are
the improvements I've made:
- Gets text from the following HTML tag attributes: alt, title, summary, longdesc
- Better ability to determine relative URLs
- The WebDocument object keeps a record of all the files it links out to, including internal and external links as well as images. This is useful for determining if your site has missing images or broken outgoing links.
- Compiled into a reusable library (the author of SearchAroo didn't want to have a dll, but I feel it's much more usable this way), which means it can be plugged into any indexing framework or used for other purposes such as simple link checking.
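As a quick illustration of the link-checking use case, here's a rough sketch. It uses the property names listed in the handler comments further down (Uri, ExternalLinks, ImageSrcs); treating those collections as enumerable strings is my assumption, so adjust to the real types:

```csharp
// Sketch only: CrawlerEngine, DocumentHandler, and WebDocumentBase come from
// the WebCrawler library; the collection element type (string) is assumed.
CrawlerEngine crawler = new CrawlerEngine();
crawler.OnDocumentLoaded += new DocumentHandler(crawler_CheckLinks);
crawler.Crawl("http://example.com/");

void crawler_CheckLinks(WebDocumentBase webDocument, int level)
{
    // Each document records its outgoing links and image sources,
    // so a simple link checker just walks those collections.
    foreach (string link in webDocument.ExternalLinks)
        Console.WriteLine("{0} -> external: {1}", webDocument.Uri, link);
    foreach (string img in webDocument.ImageSrcs)
        Console.WriteLine("{0} -> image: {1}", webDocument.Uri, img);
}
```

From there you could issue a HEAD request per URL and report any that return 404.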
Here is the basic code to get it running:
CrawlerEngine crawler = new CrawlerEngine();
crawler.OnDocumentLoaded += new DocumentHandler(crawler_OnDocumentLoaded);
crawler.Crawl(baseUrl);
void crawler_OnDocumentLoaded(WebDocumentBase webDocument, int level)
{
    // do indexing here
    // WebDocumentBase is the base class for all documents that are downloaded and spidered;
    // it has the following properties: Uri, ContentType, MimeType, Encoding, Length,
    // TextData, InternalLinks, ExternalLinks, ImageSrcs.
    // If the file is an HTML file, it can be cast to an HtmlDocument, which adds
    // the following properties: Title, Description, Keywords, Html.
    // Future additions will hopefully include plugins for PdfDocument and WordDocument.
}
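For HTML pages specifically, the handler can test the runtime type and cast to reach the extra metadata. A hedged sketch follows; HtmlDocument and its properties come from the comments above, but using an `as` cast for the type check is my guess at the intended pattern:

```csharp
// Sketch only: assumes HtmlDocument derives from WebDocumentBase as described.
void crawler_OnDocumentLoaded(WebDocumentBase webDocument, int level)
{
    HtmlDocument htmlDocument = webDocument as HtmlDocument;
    if (htmlDocument != null)
    {
        // HTML-specific metadata becomes available after the cast
        Console.WriteLine("Title: {0}", htmlDocument.Title);
        Console.WriteLine("Description: {0}", htmlDocument.Description);
        Console.WriteLine("Keywords: {0}", htmlDocument.Keywords);
    }
}
```

Non-HTML documents simply fall through, so the same handler can feed both an indexer and a metadata report.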
Future things I'd like to add:
- Other document types (PDF, Word, other Office formats) for indexing, like DotLucene's own indexer supports.
- More events to help steer the crawling
- Weight to heading tags (h1, h2, etc.)
Please note, the namespace "Refresh.Web" is for a future business endeavor. The code is released under a Creative Commons Attribution license. If you're interested in using it, please leave a comment with any additional features you'd like to see.
This post is a bit dated, but if you ever get the notion to investigate crawling any further, see http://arachnode.net – an open source site crawler written in C# using SQL Server 2005.
Your sample is too simple, even if it is a beginning.
You don’t handle Proxy & Credentials; I work behind a proxy that requires authentication, and your code doesn’t work. If I find a solution, I will send it to you.
let me see
Hey –
I just promoted arachnode.net to release/stable status, for those that are interested!
-an
I realize it pales in comparison to Arachnode, but I also wrote a small crawler. It’s at http://www.CuteCrawler.com . ^_^
The download link is inactive.