hmol / LinkCrawler
Find broken links in a webpage.
License: MIT License
When crawling http://www.the-website-to-crawl.com and the application gets to a URL for a file, e.g. http://www.the-website-to-crawl.com/reports/report.pdf, it throws an exception. This is because it then tries to fetch HTML markup from the PDF file. So, when getting to a URL for a file, don't fetch markup, just continue.
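A minimal sketch of one way to guard against this before fetching markup (the class name and extension list are invented here, not taken from the repo; checking the `Content-Type` response header would be more robust than looking at the extension):

```csharp
using System;
using System.Linq;

public static class UrlFilter
{
    // Hypothetical list of extensions that are not HTML; extend as needed.
    static readonly string[] NonHtmlExtensions =
        { ".pdf", ".jpg", ".png", ".gif", ".zip", ".doc", ".docx" };

    // Returns true when the URL most likely points to a file
    // that should not be parsed as HTML markup.
    public static bool PointsToFile(string url)
    {
        if (!Uri.TryCreate(url, UriKind.Absolute, out var uri))
            return false;
        var path = uri.AbsolutePath;
        return NonHtmlExtensions.Any(ext =>
            path.EndsWith(ext, StringComparison.OrdinalIgnoreCase));
    }
}
```

The crawler could call this right before fetching and, when it returns true, record the status code for the link but skip markup parsing.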
A URL that returns a 301 (moved permanently) doesn't get crawled afterwards.
I seem to be hitting a bunch of those, where URLs not ending in a slash return a 301 to the same page with a trailing slash.
I'm guessing this is not intended.
If it isn't, let me know: I have some code ready that fixes this, which needs some cleanup and perhaps some unit tests.
(tracking in duplicate: #42)
I've just stumbled across this and love it... but I'd like to add support for tweeting out broken links automatically. Similar to the current Slack option; just another platform for it, I guess.
I've got experience working with Twitter's API and the associated .NET libraries for it, so I can't see it being particularly tricky; I just wanted to check whether it would be a welcome addition from your point of view before I go off and fork etc.
If the addition would be welcome, let me know and I'll provide an outline of how I'd plan on doing it.
When the program has finished running, write the elapsed time to the console output.
Any input on whether we should port this over to .NET Core?
Right now, all requests that are not 1xx or 2xx are treated as failed requests. It could be useful to create a filter for this. Maybe you don't want HTTP status 302 (temporary redirect) to be reported as an error.
This is something that could be configurable in app.config:
<add key="SuccessHttpStatusCodes" value="1xx,2xx,302,303" />
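A sketch of how that setting could be interpreted, supporting both exact codes and `1xx`-style class wildcards (the class and method names are invented for illustration):

```csharp
using System;
using System.Linq;

public static class StatusCodeFilter
{
    // Checks a status code against a config value like "1xx,2xx,302,303".
    // Entries ending in "xx" match a whole class (e.g. "2xx" matches 200-299);
    // other entries must match the code exactly.
    public static bool IsSuccess(string configValue, int statusCode)
    {
        return configValue.Split(',')
            .Select(part => part.Trim())
            .Any(part => part.EndsWith("xx", StringComparison.OrdinalIgnoreCase)
                ? statusCode / 100 == int.Parse(part.Substring(0, 1))
                : part == statusCode.ToString());
    }
}
```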
Now that we've got the option to output the elapsed time at the end of processing (PR #21), it might be good to offer a summary table too. This could list the different statuses alongside the counts of links for each. E.g.:
| Status | Links |
|---|---|
| 200 | 113 |
| 301 | 12 |
| 302 | 6 |
| 404 | 3 |
| 418 | 1 |
To achieve this we could turn the two lists of strings into a list of objects that contain the link URL, a bool for whether or not we've processed the response yet, and a field for the status code.
Proposed subtasks:
Thoughts very welcome.
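The per-status counts could then fall out of a simple grouping over that list of objects. A sketch, where `CrawledLink` is a hypothetical name for the proposed object:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class CrawledLink
{
    public string Url { get; set; }
    public bool Processed { get; set; }   // have we seen the response yet?
    public int StatusCode { get; set; }
}

public static class Summary
{
    // Groups processed links by status code and returns (status, count)
    // rows, ready to be printed as the summary table.
    public static List<(int Status, int Count)> ByStatus(IEnumerable<CrawledLink> links) =>
        links.Where(l => l.Processed)
             .GroupBy(l => l.StatusCode)
             .OrderBy(g => g.Key)
             .Select(g => (g.Key, g.Count()))
             .ToList();
}
```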
As an extra feature, I think it would be beneficial if this supported sites that block the first page with an age gate, or that require a login (where most of the content is only available after login).
Just came across this when I enabled outputting to CSV on a website.
```
System.IndexOutOfRangeException was unhandled by user code
HResult=-2146233080
Message=Probable I/O race condition detected while copying memory. The I/O package is not thread safe by default. In multithreaded applications, a stream must be accessed in a thread-safe way, such as a thread-safe wrapper returned by TextReader's or TextWriter's Synchronized methods. This also applies to classes like StreamWriter and StreamReader.
Source=mscorlib
StackTrace:
  at System.Buffer.InternalBlockCopy(Array src, Int32 srcOffsetBytes, Array dst, Int32 dstOffsetBytes, Int32 byteCount)
  at System.IO.StreamWriter.Write(Char[] buffer, Int32 index, Int32 count)
  at System.IO.TextWriter.WriteLine(String value)
  at System.IO.TextWriter.WriteLine(String format, Object[] arg)
  at LinkCrawler.Utils.Outputs.CsvOutput.Write(IResponseModel responseModel) in C:\Users\Chris\Downloads\LinkCrawler-develop\LinkCrawler\LinkCrawler\Utils\Outputs\CsvOutput.cs:line 43
  at LinkCrawler.Utils.Outputs.CsvOutput.WriteInfo(IResponseModel responseModel) in C:\Users\Chris\Downloads\LinkCrawler-develop\LinkCrawler\LinkCrawler\Utils\Outputs\CsvOutput.cs:line 38
  at LinkCrawler.LinkCrawler.WriteOutput(IResponseModel responseModel) in C:\Users\Chris\Downloads\LinkCrawler-develop\LinkCrawler\LinkCrawler\LinkCrawler.cs:line 92
  at LinkCrawler.LinkCrawler.ProcessResponse(IResponseModel responseModel) in C:\Users\Chris\Downloads\LinkCrawler-develop\LinkCrawler\LinkCrawler\LinkCrawler.cs:line 59
  at LinkCrawler.LinkCrawler.<>c__DisplayClass31_0.<SendRequest>b__0(IRestResponse response) in C:\Users\Chris\Downloads\LinkCrawler-develop\LinkCrawler\LinkCrawler\LinkCrawler.cs:line 53
  at RestSharp.RestClientExtensions.<>c__DisplayClass1.<ExecuteAsync>b__0(IRestResponse response, RestRequestAsyncHandle handle)
  at RestSharp.RestClient.ProcessResponse(IRestRequest request, HttpResponse httpResponse, RestRequestAsyncHandle asyncHandle, Action`2 callback)
  at RestSharp.RestClient.<>c__DisplayClass3.<ExecuteAsync>b__0(HttpResponse r)
  at RestSharp.Http.ExecuteCallback(HttpResponse response, Action`1 callback)
  at RestSharp.Http.<>c__DisplayClass15.<ResponseCallback>b__13(HttpWebResponse webResponse)
  at RestSharp.Http.GetRawResponseAsync(IAsyncResult result, Action`1 callback)
  at RestSharp.Http.ResponseCallback(IAsyncResult result, Action`1 callback)
InnerException:
```
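As the exception message itself suggests, one fix is to route all CSV writes through the thread-safe wrapper returned by `TextWriter.Synchronized` (or to lock around writes), since RestSharp's async callbacks can hit the writer concurrently. A sketch of the idea, assuming the CSV output owns a `TextWriter`:

```csharp
using System;
using System.IO;

public class SafeCsvWriter
{
    private readonly TextWriter _writer;

    public SafeCsvWriter(TextWriter inner)
    {
        // TextWriter.Synchronized returns a thread-safe wrapper, so
        // concurrent response callbacks can't corrupt the underlying stream.
        _writer = TextWriter.Synchronized(inner);
    }

    public void WriteLine(string line) => _writer.WriteLine(line);
}
```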
Hello!
I like this project and I would like to collaborate. But I just crawled a website with LinkCrawler and it only shows links with http; it doesn't look at local links like a href="html_images.html" or similar.
Is that intended, or will it be supported in the future? And what happens with deep linking?
Regards
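Relative links like `html_images.html` can be resolved against the URL of the page they were found on before being queued; `System.Uri` does the heavy lifting. A sketch (the class name is invented here):

```csharp
using System;

public static class LinkResolver
{
    // Resolves a possibly-relative href against the URL of the page
    // it was found on, returning an absolute URL the crawler can queue.
    public static string Resolve(string pageUrl, string href)
    {
        var baseUri = new Uri(pageUrl);
        return new Uri(baseUri, href).AbsoluteUri;
    }
}
```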
Don't know if this is a bug or a feature, but should local links be shown with status 0? E.g. links that are missing the base URL (/page.html)? I assume they're also not crawled afterwards.
After building and running,
I only see this on the console:
0 0 https://github.com
Referer:
I've been waiting for half an hour and nothing else is shown.
I tried both the develop and master branches.
How do I use this tool?
Add a unit test that checks the pattern used to validate URLs against a valid URL.
This solution needs more unit tests :)
Unless I'm misunderstanding something, it seems like all IOutput implementations are always used for each 'error' response. Currently the program will try to output to ALL implementations of IOutput; it might be better for that to be configurable (e.g. in app.config).
Something like:
So, if you keep adding output options (as in issue #6), you don't have to keep writing to all of them. Does that sound like a useful feature?
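A hypothetical app.config shape for that (the key name and values are invented for illustration, not taken from the repo):

```xml
<add key="EnabledOutputs" value="Console,Csv,Slack" />
```

Each IOutput implementation could then be registered only when its name appears in the list.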
RestSharp provides proxy support, so can this be surfaced in LinkCrawler too, ideally as a setting through app.config?
I'm happy to make the change and roll it up into the work I'll be doing with #12 (with separate pull requests of course)
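A sketch of reading a proxy setting and turning it into an `IWebProxy` (the setting name and `host:port` format are invented here):

```csharp
using System;
using System.Net;

public static class ProxyConfig
{
    // Builds an IWebProxy from a "host:port" app.config value;
    // returns null when no proxy is configured.
    public static IWebProxy FromSetting(string setting)
    {
        if (string.IsNullOrWhiteSpace(setting))
            return null;
        var parts = setting.Split(':');
        return new WebProxy(parts[0], int.Parse(parts[1]));
    }
}
```

The resulting proxy could then be handed to the RestSharp client (legacy RestSharp versions expose a `Proxy` property for this).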
How about being able to run the app like this: `LinkCrawler www.mycoolsite.com`, as an alternative option?
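A sketch of how the base URL could be resolved, with the command-line argument taking priority over app.config and a scheme prepended when missing, so `LinkCrawler www.mycoolsite.com` works (method and parameter names are invented here):

```csharp
using System;

public static class BaseUrlResolver
{
    // Takes the base URL from the first command-line argument when present,
    // otherwise falls back to the configured URL; prepends "http://" when
    // the user omitted the scheme.
    public static string Resolve(string[] args, string configuredUrl)
    {
        var url = args.Length > 0 ? args[0] : configuredUrl;
        return url.StartsWith("http", StringComparison.OrdinalIgnoreCase)
            ? url
            : "http://" + url;
    }
}
```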
If the server responds with a redirect (301 or 302), the crawler should follow the redirect (like curl can, for example); otherwise it misses a bunch of crawlable content.
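One way to do this without relying on the HTTP client's auto-redirect behaviour is to resolve the `Location` header against the requested URL and queue it as a new crawl target. A sketch (the class and method names are invented here):

```csharp
using System;

public static class RedirectHandler
{
    // When the response is a 301/302, resolves the Location header
    // (which may be relative) against the requested URL and returns
    // the absolute URL to queue next; returns null otherwise.
    public static string NextUrl(int statusCode, string requestedUrl, string locationHeader)
    {
        if ((statusCode == 301 || statusCode == 302) && !string.IsNullOrEmpty(locationHeader))
            return new Uri(new Uri(requestedUrl), locationHeader).AbsoluteUri;
        return null;
    }
}
```

This would also cover the trailing-slash 301s reported above, since the redirect target gets crawled instead of being dropped.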
It looks like the example links on the readme are broken.
Things like "Example run with console output" go to a page that says "Cannot proxy the given URL".
Thank you :)
Add support for emailing an aggregated report using SMTP. See #12 for discussion around specifics of what would be included etc., but general gist is:
simples
(I'll pick this up along with others)
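A minimal sketch using the framework's `System.Net.Mail` types (host, port, addresses, and the subject line are placeholders, not values from the repo's configuration):

```csharp
using System.Net.Mail;

public static class EmailReport
{
    // Builds the aggregated-report message; subject text is a placeholder.
    public static MailMessage Build(string reportBody, string from, string to) =>
        new MailMessage(from, to, "LinkCrawler report", reportBody);

    // Sends the message via the configured SMTP server.
    public static void Send(MailMessage message, string host, int port)
    {
        using (var client = new SmtpClient(host, port))
        {
            client.Send(message);
        }
    }
}
```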