maurosoria / dirsearch
Web path scanner
Hey,
Oftentimes you'll get a series of dirnames that return garbage. Sometimes they'll look like:
/offers-orlando/
/offers-mexico/
/offers-newyork/
etc..
What would be the best way to exclude these sorts of directories? I'm opening an issue because I'd like other thoughts. I'm not sure if there should be an exclude-by-regex option or one that simply checks for a substring in the path. Thoughts?
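To compare the two approaches discussed above, here is a minimal sketch of both exclusion styles. The function names and the filtering loop are illustrative, not dirsearch's actual API.

```python
import re

def excluded_by_substring(path, substrings):
    """True if the path contains any of the given substrings."""
    return any(s in path for s in substrings)

def excluded_by_regex(path, patterns):
    """True if the path matches any of the given regular expressions."""
    return any(re.search(p, path) for p in patterns)

paths = ["/offers-orlando/", "/offers-mexico/", "/admin/"]
kept = [p for p in paths if not excluded_by_substring(p, ["offers-"])]
# kept == ["/admin/"]
```

The substring check is cheaper and easier to type on the command line; the regex variant handles patterns like `/offers-[a-z]+/` in one rule.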
The project is growing so fast. It would make sense to introduce PEP8 guidelines as a coding standard for the project so that everything is uniform.
What are your thoughts on this?
Hi!
Why does the redirect path appear in the console output:
301 - 0B - /.CSV -> http://website/csv-parser/
but not in the JSON report:
{ "content-length": 0, "path": "/.CSV", "status": 301 }
Can you update this?
Error:
Traceback (most recent call last):
File "dirsearch.py", line 38, in <module>
main = Program()
File "dirsearch.py", line 34, in __init__
self.controller = Controller(self.script_path, self.arguments, self.output)
File "dirsearch/lib/controller/Controller.py", line 135, in __init__
self.wait()
File "dirsearch/lib/controller/Controller.py", line 334, in wait
self.fuzzer.start()
File "dirsearch/lib/core/Fuzzer.py", line 80, in start
self.setupScanners()
File "dirsearch/lib/core/Fuzzer.py", line 56, in setupScanners
self.defaultScanner = Scanner(self.requester, self.testFailPath, "")
File "dirsearch/lib/core/Scanner.py", line 44, in __init__
self.setup()
File "dirsearch/lib/core/Scanner.py", line 61, in setup
self.dynamicParser = DynamicContentParser(self.requester, firstPath, firstResponse.body, secondResponse.body)
File "dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 16, in __init__
self.generateDynamicMarks(firstPage, secondPage)
File "dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 33, in generateDynamicMarks
self.cleanPage = self.removeDynamicContent(firstPage, self.dynamicMarks)
File "dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 96, in removeDynamicContent
page = re.sub(r'(?s){0}.+$'.format(re.escape(prefix)), prefix.replace('\\', r'\\'), page)
TypeError: a bytes-like object is required, not 'str'
Hi,
In Requester.py, requests is passed the URL, so specifying an IP address with --ip has no effect. The best way to cover both cases (IP supplied and IP resolved) is to call requests with the IP (either supplied or resolved) and put the host part of the URL in the Host header.
I will send in a PR to fix this shortly.
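The approach described above can be sketched as follows. This is an illustration of the technique (request by IP, hostname in the Host header), not the actual PR; `build_request` and its signature are hypothetical.

```python
import socket
from urllib.parse import urlsplit

def build_request(url, ip=None):
    """Rewrite `url` to target an IP (supplied via --ip, or resolved here)
    while keeping the original hostname for the Host header."""
    parts = urlsplit(url)
    host = parts.hostname
    port = parts.port or (443 if parts.scheme == "https" else 80)
    addr = ip or socket.gethostbyname(host)
    ip_url = "{0}://{1}:{2}{3}".format(parts.scheme, addr, port, parts.path or "/")
    return ip_url, {"Host": host}

# Usage: pass the rewritten URL and headers to requests.get(ip_url, headers=headers)
ip_url, headers = build_request("http://example.com/admin", ip="203.0.113.7")
```

This way the TCP connection goes to the chosen address while virtual hosting still routes by hostname.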
Hello there, could this be an option (argument) instead of a hardcoded default?
dirsearch/lib/connection/Requester.py
Line 46 in a5ac52b
Thanks!
For some reason, dirsearch begins all JSON report files with a bunch of NULL characters.
This confuses some JSON decoders, like the one included in Python.
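As a workaround for consumers of the affected reports, the leading NUL padding can be stripped before decoding. This is a hypothetical helper for reading such files, not a fix for the writer side.

```python
import json

def load_json_report(raw):
    """Decode a report, tolerating leading NUL characters like those
    some dirsearch JSON report files start with."""
    return json.loads(raw.lstrip("\x00"))

data = load_json_report("\x00\x00" + '{"path": "/.CSV", "status": 301}')
# data["status"] == 301
```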
Scanning the admin directory, it crashed:
[09:47:26] Starting: /admin/
Traceback (most recent call last):
File "dirsearch.py", line 38, in <module>
main = Program()
File "dirsearch.py", line 34, in __init__
self.controller = Controller(self.script_path, self.arguments, self.output)
File "/opt/dirsearch/lib/controller/Controller.py", line 134, in __init__
self.wait()
File "/opt/dirsearch/lib/controller/Controller.py", line 330, in wait
self.fuzzer.start()
File "/opt/dirsearch/lib/core/Fuzzer.py", line 80, in start
self.setupScanners()
File "/opt/dirsearch/lib/core/Fuzzer.py", line 56, in setupScanners
self.defaultScanner = Scanner(self.requester, self.testFailPath, "")
File "/opt/dirsearch/lib/core/Scanner.py", line 44, in __init__
self.setup()
File "/opt/dirsearch/lib/core/Scanner.py", line 48, in setup
firstResponse = self.requester.request(firstPath)
File "/opt/dirsearch/lib/connection/Requester.py", line 152, in request
{'message': 'CONNECTION TIMEOUT: There was a problem in the request to: {0}'.format(path)}
lib.connection.RequestException.RequestException: {'message': 'CONNECTION TIMEOUT: There was a problem in the request to: tpbSkYC2y9ec'}
but the site is still fine when I restart the scan.
Hi,
Today I tried to scan a server with two different WebServer instances, the first on the standard port 80 and the second on port 8880.
If I specify the port that interests me after the URL (http://123.456.789.000:8880), port 80 is still scanned instead of 8880.
Can you fix this? If you want an IP to test privately, please contact me in private.
Thanks
Andrea
How do I enter a username and password for HTTP Basic authentication? The auth script dirsearch/thirdparty/requests/auth.py is in the folders, but there is no option for it in the run command.
When dirs3arch is used to scan for PHP files, a bug appears. Here's what I've done:
[all3g@core dirs3arch]$ python3 dirs3arch.py -u http://192.168.1.101/ -e "php" -w ~/sectools/fuzzdb/discovery/wordlists.txt -t 20
This is my wordlist file:
123
phpinfo
core
main
...
dirs3arch fails to build the right URL path: during scanning the erroneous URL is /123?, when what we need is /123.php.
The HTTP request and response are as follows:
GET /123? HTTP/1.1
Connection: keep-alive
Accept-Encoding: identity
User-agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1468.0 Safari/537.36
Host: 192.168.1.101
Cache-Control: max-age=0
Accept-Language: en-us
Keep-Alive: 300
HTTP/1.1 404 Not Found
Date: Sun, 22 Mar 2015 10:05:49 GMT
Server: Apache/2.4.10 (Win32) OpenSSL/1.0.1h PHP/5.4.31
Vary: accept-language,accept-charset
Accept-Ranges: bytes
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Content-Language: en
Hello! When I scan multiple sites at the same time, the tool throws an error:
Traceback (most recent call last):
File "dirsearch.py", line 40, in <module>
main = Program()
File "dirsearch.py", line 34, in __init__
self.controller = Controller(self.script_path, self.arguments, self.output)
File "/root/tools/dirsearch/lib/controller/Controller.py", line 134, in __init__
self.wait()
File "/root/tools/dirsearch/lib/controller/Controller.py", line 330, in wait
self.fuzzer.start()
File "/root/tools/dirsearch/lib/core/Fuzzer.py", line 80, in start
self.setupScanners()
File "/root/tools/dirsearch/lib/core/Fuzzer.py", line 59, in setupScanners
self.scanners[extension] = Scanner(self.requester, self.testFailPath, "." + extension)
File "/root/tools/dirsearch/lib/core/Scanner.py", line 44, in __init__
self.setup()
File "/root/tools/dirsearch/lib/core/Scanner.py", line 48, in setup
firstResponse = self.requester.request(firstPath)
File "/root/tools/dirsearch/lib/connection/Requester.py", line 151, in request
{'message': 'CONNECTION TIMEOUT: There was a problem in the request to: {0}'.format(path)}
lib.connection.RequestException.RequestException: {'message': 'CONNECTION TIMEOUT: There was a problem in the request to: qsYkJ4xZvPdh.aspx'}
Hi Mauro,
I'm a developer of BackBox Linux. Remember?! :D
The software now generates logs automatically. To build a correct Debian package, it would be appropriate for the logs to be generated in the user's home folder; otherwise the script must be executed with sudo permissions, or it raises the error "Permission denied: '/usr/share/DirSearch/logs'".
Can you change the log save path? In Python you can use: os.path.expanduser("~")
Thanks
Andrea
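The suggestion above can be sketched in a few lines. The helper name and directory layout (`~/.dirsearch/logs`) are illustrative assumptions, not dirsearch's actual convention.

```python
import os

def user_log_dir(app_name="dirsearch"):
    """Return a per-user log directory under the home folder,
    so no sudo is needed to write logs (per the suggestion above)."""
    path = os.path.join(os.path.expanduser("~"), "." + app_name, "logs")
    os.makedirs(path, exist_ok=True)
    return path
```

Writing under `os.path.expanduser("~")` also keeps a packaged install's `/usr/share` tree read-only, as Debian policy expects.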
I get some of these:
14.74% - Last request to: pete
Unexpected error:
There was a problem in the request to: led
It would be useful to know more about the nature of the unexpected error. Was it a connection timeout, or something else?
Brilliant tool! It would be a great convenience for macOS users if you can publish it to Homebrew.
Error:
Traceback (most recent call last):
File "./dirsearch.py", line 38, in <module>
main = Program()
File "./dirsearch.py", line 34, in __init__
self.controller = Controller(self.script_path, self.arguments, self.output)
File "/root/dirsearch/lib/controller/Controller.py", line 135, in __init__
self.wait()
File "/root/dirsearch/lib/controller/Controller.py", line 334, in wait
self.fuzzer.start()
File "/root/dirsearch/lib/core/Fuzzer.py", line 80, in start
self.setupScanners()
File "/root/dirsearch/lib/core/Fuzzer.py", line 59, in setupScanners
self.scanners[extension] = Scanner(self.requester, self.testFailPath, "." + extension)
File "/root/dirsearch/lib/core/Scanner.py", line 44, in __init__
self.setup()
File "/root/dirsearch/lib/core/Scanner.py", line 61, in setup
self.dynamicParser = DynamicContentParser(self.requester, firstPath, firstResponse.body, secondResponse.body)
File "/root/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 14, in __init__
self.generateDynamicMarks(firstPage, secondPage)
File "/root/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 31, in generateDynamicMarks
self.cleanPage = self.removeDynamicContent(firstPage, self.dynamicMarks)
File "/root/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 94, in removeDynamicContent
page = re.sub(r'(?s){0}.+{1}'.format(re.escape(prefix), re.escape(suffix)), "{0}{1}".format(prefix.replace('\\', r'\\'), suffix.replace('\\', r'\\')), page)
TypeError: 'str' does not support the buffer interface
I'm not sure exactly what the recursive switch is intended to do, but I thought it should start searching in the directories it finds in the first pass. However, it just says "Task Completed" even though a directory is found. The admin directory contains several other files and directories I would like to search automatically. Here's a minimal example I made on my own box.
root@kali:~/tools/dirsearch# python3 /root/tools/dirsearch/dirsearch.py -u localhost -w /usr/share/wordlists/dirb/common.txt -e php -t 10 -r
_|. _ _ _ _ _ _|_ v0.3.7
(_||| _) (/_(_|| (_| )
Extensions: php | Threads: 10 | Wordlist size: 4614
Error Log: /root/tools/dirsearch/logs/errors-17-08-03_10-59-21.log
Target: localhost
[10:59:21] Starting:
[10:59:21] 200 - 504B - /
[10:59:22] 301 - 0B - /admin -> /admin/
[10:59:25] 200 - 2KB - /file
[10:59:28] 200 - 4KB - /packages
Task Completed
Trying with a complex URL to brute-force an app function:
http://sub.sub.sub.domain.net/func/service.dll?fid=415&f=/myfunc/loginverify&checkif=
And the traceback:
Error Log: C:\Users\murray\Downloads\dirsearch-master\dirsearch-master\logs\errors-17-01-17_16-16-37.log
Target: http://sub.sub.sub.domain.net/func/service.dll?fid=415&f=/myfunc/loginverify&checkif=
Traceback (most recent call last):
File "dirsearch.py", line 40, in
main = Program()
File "dirsearch.py", line 34, in init
self.controller = Controller(self.script_path, self.arguments, self.output)
File "C:\Users\murray\Downloads\dirsearch-master\dirsearch-master\lib\controller\Controller.py", line 123, in init
self.setupReports(self.requester)
File "C:\Users\murray\Downloads\dirsearch-master\dirsearch-master\lib\controller\Controller.py", line 232, in setupReports
outputFile)
File "C:\Users\murray\Downloads\dirsearch-master\dirsearch-master\lib\reports\BaseReport.py", line 32, in init
self.open()
File "C:\Users\murray\Downloads\dirsearch-master\dirsearch-master\lib\reports\BaseReport.py", line 43, in open
self.file = open(self.output, 'w+')
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\murray\Downloads\dirsearch-master\dirsearch-master\reports\sub.sub.sub.domain.net\func/service._17-01-17_16-16-38'
I get this:
98.57% - Last request to: dating-southport
Unexpected error:
There was a problem in the request to: san telmo museoa
98.59% - Last request to: development-play
Unexpected error:
There was a problem in the request to: palacio goikoa
98.86% - Last request to: health-skin-tone
Unexpected error:
There was a problem in the request to: konporta ke
I would expect it to try, for example, konporta%20ke instead of throwing an exception.
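The standard library already does the encoding suggested above; a minimal sketch (the wrapper name is illustrative):

```python
from urllib.parse import quote

def safe_path(word):
    """Percent-encode characters that are not valid in a raw URL path,
    so wordlist entries with spaces become requestable paths."""
    return quote(word)

safe_path("konporta ke")  # -> 'konporta%20ke'
```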
Could a flag make the script run multiple processes simultaneously?
Example:
-p PROCESS Num of processes running concurrently, 30 by default
(as in https://github.com/lijiejie/BBScan)
There is also a problem connecting to the target: if the target is dead, the script takes a long time to move on to the next target in the file.
Not sure this is caused by using the cookie flag, but that's when it happened to crash. This is the error I got:
Traceback (most recent call last):
File "dirs3arch.py", line 36, in
main = Program()
File "dirs3arch.py", line 32, in init
self.controller = Controller(self.script_path, self.arguments, self.output)
File "/root/Desktop/dirs3arch/lib/controller/Controller.py", line 59, in init
self.wait()
File "/root/Desktop/dirs3arch/lib/controller/Controller.py", line 199, in wait
self.fuzzer.start()
File "/root/Desktop/dirs3arch/lib/core/Fuzzer.py", line 75, in start
self.testersSetup()
File "/root/Desktop/dirs3arch/lib/core/Fuzzer.py", line 56, in testersSetup
self.testers[extension] = NotFoundTester(self.requester, '{0}.{1}'.format(self.testFailPath, extension))
File "/root/Desktop/dirs3arch/lib/core/NotFoundTester.py", line 34, in init
self.tester = ContentTester(self.getNotFoundDynamicContentParser())
File "/root/Desktop/dirs3arch/lib/core/NotFoundTester.py", line 41, in getNotFoundDynamicContentParser
return DynamicContentParser(self.requester, self.notFoundPath)
File "/root/Desktop/dirs3arch/thirdparty/sqlmap/DynamicContentParser.py", line 23, in init
self.generateDynamicMarks(firstPage, secondPage)
File "/root/Desktop/dirs3arch/thirdparty/sqlmap/DynamicContentParser.py", line 37, in generateDynamicMarks
self.dynamicMarks += findDynamicContent(firstPage, secondPage)
NameError: global name 'findDynamicContent' is not defined
I see dirsearch getting "Killed" when passing a 35 MB wordlist.
What could be the reasons for the "Killed" status?
I'm trying to run dirs3arch on Windows 8.1 (64-bit) with Python 3.4.3 and I'm getting the following error when trying to run a scan with valid arguments:
Is 3.4.3 not supported? By "3.X support", I assumed that included 3.4.3 but dirs3arch.py:19 tells me otherwise.
Hi, can you please help me fix this error? Thank you.
Traceback (most recent call last):
File "./dirsearch.py", line 38, in
main = Program()
File "./dirsearch.py", line 34, in init
self.controller = Controller(self.script_path, self.arguments, self.output)
File "/root/dirsearch/lib/controller/Controller.py", line 134, in init
self.wait()
File "/root/dirsearch/lib/controller/Controller.py", line 330, in wait
self.fuzzer.start()
File "/root/dirsearch/lib/core/Fuzzer.py", line 80, in start
self.setupScanners()
File "/root/dirsearch/lib/core/Fuzzer.py", line 56, in setupScanners
self.defaultScanner = Scanner(self.requester, self.testFailPath, "")
File "/root/dirsearch/lib/core/Scanner.py", line 44, in init
self.setup()
File "/root/dirsearch/lib/core/Scanner.py", line 61, in setup
self.dynamicParser = DynamicContentParser(self.requester, firstPath, firstResponse.body, secondResponse.body)
File "/root/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 14, in init
self.generateDynamicMarks(firstPage, secondPage)
File "/root/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 31, in generateDynamicMarks
self.cleanPage = self.removeDynamicContent(firstPage, self.dynamicMarks)
File "/root/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 94, in removeDynamicContent
page = re.sub(r'(?s)%s.+%s' % (prefix, suffix), '%s%s' % (prefix, suffix), str(page))
File "/usr/lib/python3.5/re.py", line 182, in sub
return _compile(pattern, flags).sub(repl, string, count)
File "/usr/lib/python3.5/re.py", line 293, in _compile
p = sre_compile.compile(pattern, flags)
File "/usr/lib/python3.5/sre_compile.py", line 536, in compile
p = sre_parse.parse(p, flags)
File "/usr/lib/python3.5/sre_parse.py", line 834, in parse
raise source.error("unbalanced parenthesis")
sre_constants.error: unbalanced parenthesis at position 14
Hi Mauro,
the last release you made was in 2016 (v0.3.7), but since then many improvements have been made, so many commits.
Could you kindly release a new version? Then I can put it in BackBox.
Thank you
Andrea
Hi. One "Feature Enhancement" would be to cache the reporting output to the file as the directory brute-forcing is taking place. Maybe this could be done after a certain period of time has passed or when a new directory is found?
I noticed that if the application crashes during brute-forcing, you actually lose ALL data within the report files. Although the files are created they are simply empty.
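One way to implement the enhancement described above is a writer that flushes on a timer, so a crash loses at most a few seconds of results. The class and its names are a sketch, not dirsearch's reporting API.

```python
import io
import time

class FlushingReport:
    """Write report lines, flushing at most every `interval` seconds
    so results survive a mid-scan crash."""
    def __init__(self, fh, interval=5.0):
        self.fh = fh
        self.interval = interval
        self._last = 0.0  # forces an immediate first flush

    def write(self, line):
        self.fh.write(line + "\n")
        now = time.monotonic()
        if now - self._last >= self.interval:
            self.fh.flush()
            self._last = now

buf = io.StringIO()
report = FlushingReport(buf, interval=0.0)
report.write("200 - 504B - /admin/")
```

Flushing on every new directory found (the other idea above) would be the same `flush()` call triggered by the result type instead of a clock.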
This would make a great addition to PyPI! Please add a setup.py file and publish this project so that it can be installed through pip!
I get this error message on some target hosts, on Kali x86:
Linux XXX 3.12-kali1-686-pae #1 SMP Debian 3.12.6-2kali1 (2014-01-06) i686 GNU/Linux
Python 3 libs are installed and it is run as sudo.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "dirs3arch.py", line 36, in
main = Program()
File "dirs3arch.py", line 32, in init
self.controller = Controller(self.script_path, self.arguments, self.output)
File "/root/dirs3arch/dirs3arch/lib/controller/Controller.py", line 59, in init
self.wait()
File "/root/dirs3arch/dirs3arch/lib/controller/Controller.py", line 199, in wait
self.fuzzer.start()
File "/root/dirs3arch/dirs3arch/lib/core/Fuzzer.py", line 75, in start
self.testersSetup()
File "/root/dirs3arch/dirs3arch/lib/core/Fuzzer.py", line 54, in testersSetup
self.testers['/'] = NotFoundTester(self.requester, '{0}/'.format(self.testFailPath))
File "/root/dirs3arch/dirs3arch/lib/core/NotFoundTester.py", line 31, in init
if self.testNotFoundStatus():
File "/root/dirs3arch/dirs3arch/lib/core/NotFoundTester.py", line 37, in testNotFoundStatus
response = self.requester.request(self.notFoundPath)
File "/root/dirs3arch/dirs3arch/lib/connection/Requester.py", line 114, in request
assert_same_host=False)
File "/root/dirs3arch/dirs3arch/thirdparty/urllib3/connectionpool.py", line 502, in urlopen
raise SSLError(e)
thirdparty.urllib3.exceptions.SSLError: [Errno 1] _ssl.c:392: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
Hi. I had a suggestion for dirs3arch that would hopefully make it more intuitive and flexible to use.
A problem right now is that dirs3arch requires custom wordlists, due to the %EXT% being included to replace the extension. This is problematic because it increases the size of the wordlists by double, and you can't just quickly download a wordlist to use but have to determine how precisely to use the extension with %EXT%.
Might I suggest using a simple wordlist and leaving it up to the user to append the extension, for example:
./dirs3arch.py -u 'foo' -w bar -e /,.aspx
This would append / and then .aspx to each word. It also makes the default behaviour NOT to add an extension, which is nice for REST-style URLs. The potential confusion is the explicit inclusion of the . before aspx. In my fork I've made some modifications for this (albeit against an older version) that allow this usage, and it should be quite simple to implement.
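The proposed expansion can be sketched as below; `expand_wordlist` is a hypothetical name illustrating the suggestion, not current dirsearch behavior.

```python
def expand_wordlist(words, extensions):
    """Append each user-supplied suffix verbatim (the user writes the dot
    or slash, e.g. -e /,.aspx) and keep the bare word for REST-style URLs."""
    out = []
    for word in words:
        out.append(word)
        for ext in extensions:
            out.append(word + ext)
    return out

expand_wordlist(["admin"], ["/", ".aspx"])
# -> ['admin', 'admin/', 'admin.aspx']
```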
Anyways, thanks for the tool!
Sam
Error since the latest pull:
[06:07:59] Starting:
Traceback (most recent call last):
File "/root/tools/dirsearch/dirsearch.py", line 38, in
main = Program()
File "/root/tools/dirsearch/dirsearch.py", line 34, in init
self.controller = Controller(self.script_path, self.arguments, self.output)
File "/root/tools/dirsearch/lib/controller/Controller.py", line 135, in init
self.wait()
File "/root/tools/dirsearch/lib/controller/Controller.py", line 334, in wait
self.fuzzer.start()
File "/root/tools/dirsearch/lib/core/Fuzzer.py", line 80, in start
self.setupScanners()
File "/root/tools/dirsearch/lib/core/Fuzzer.py", line 56, in setupScanners
self.defaultScanner = Scanner(self.requester, self.testFailPath, "")
File "/root/tools/dirsearch/lib/core/Scanner.py", line 44, in init
self.setup()
File "/root/tools/dirsearch/lib/core/Scanner.py", line 61, in setup
self.dynamicParser = DynamicContentParser(self.requester, firstPath, firstResponse.body, secondResponse.body)
File "/root/tools/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 14, in init
self.generateDynamicMarks(firstPage, secondPage)
File "/root/tools/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 31, in generateDynamicMarks
self.cleanPage = self.removeDynamicContent(firstPage, self.dynamicMarks)
File "/root/tools/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 101, in removeDynamicContent
page = re.sub(r'(?s){0}.+{1}'.format(re.escape(prefix), re.escape(suffix)), "{0}{1}".format(prefix.replace('\\', r'\\'), suffix.replace('\\', r'\\')), page)
TypeError: a bytes-like object is required, not 'str'
On a very congested network, adding a delay would be very handy to avoid network errors (502, etc).
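A fixed inter-request pause (a hypothetical --delay option) is enough for this; a minimal sketch:

```python
import time

def throttled(paths, delay=0.5):
    """Yield wordlist entries with a fixed pause between them,
    easing pressure on congested networks (sketch of a --delay option)."""
    for i, path in enumerate(paths):
        if i:
            time.sleep(delay)
        yield path

list(throttled(["a", "b", "c"], delay=0.0))
```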
Hello,
this morning I tried scanning 3 domains that use CloudFlare's CDN and SSL certificate. The scan always reports the same error:
"CONNECTION TIMEOUT: There was a problem in the request to:
Task Completed "
If you want, you can try using my domain to test: https://www.andreadraghetti.it
I think the problem is that dirsearch makes the request to the domain's IP, with the domain name included in the header. But since it's behind a CDN, the IP does not serve the site's data.
I tried to modify the Requester.py code as follows:
if self.requestByHostname:
url = "{0}://{1}:{2}".format(self.protocol, self.host, self.port)
else:
url = "{0}://{1}:{2}".format(self.protocol, self.host, self.port)
and now it works properly.
I do not know why it was decided to scan by IP and not by the URL directly, so I avoided opening a pull request with the code modification.
I'll leave the job to Mauro; he will know how to do it better than me.
Thank you
Hi there.
It would be nice to trap for keyboard interrupt, so that an in-process scan can be gracefully aborted.
Great tool.
Thanks.
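Trapping the interrupt around the scan loop is enough for a graceful abort; a sketch with hypothetical names (`run_scan`, `request`):

```python
def run_scan(paths, request):
    """Issue requests for each path, aborting cleanly on Ctrl+C
    instead of dying with a traceback."""
    completed = []
    try:
        for path in paths:
            request(path)
            completed.append(path)
    except KeyboardInterrupt:
        print("Canceled by the user after {0} requests".format(len(completed)))
    return completed
```

The handler could also flush any pending report output before returning, so a canceled scan still leaves usable results.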
[10:50:52] 301 - 0B - /admin2/index.php -> https://bo0om.ru/admin2/
[10:51:04] 301 - 0B - /admin_area/index.php -> https://bo0om.ru/admin_area/
22.89% - Last request to: admin_setup.phpException in thread Thread-4:
Traceback (most recent call last):
File "/home/bo0om/soft/dirsearch/thirdparty/requests/packages/urllib3/response.py", line 397, in _update_chunk_length
self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
self.run()
File "/usr/lib/python3.4/threading.py", line 868, in run
self._target(*self._args, **self._kwargs)
File "/home/bo0om/soft/dirsearch/lib/core/Fuzzer.py", line 151, in thread_proc
status, response = self.scan(path)
File "/home/bo0om/soft/dirsearch/lib/core/Fuzzer.py", line 110, in scan
response = self.requester.request(path)
File "/home/bo0om/soft/dirsearch/lib/connection/Requester.py", line 108, in request
response = requests.get(url, proxies=proxy, verify=False, allow_redirects=self.redirect, headers=headers, timeout=self.timeout)
File "/home/bo0om/soft/dirsearch/thirdparty/requests/api.py", line 69, in get
return request('get', url, params=params, **kwargs)
File "/home/bo0om/soft/dirsearch/thirdparty/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "/home/bo0om/soft/dirsearch/thirdparty/requests/sessions.py", line 470, in request
resp = self.send(prep, **send_kwargs)
File "/home/bo0om/soft/dirsearch/thirdparty/requests/sessions.py", line 610, in send
r.content
File "/home/bo0om/soft/dirsearch/thirdparty/requests/models.py", line 734, in content
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
File "/home/bo0om/soft/dirsearch/thirdparty/requests/models.py", line 657, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/home/bo0om/soft/dirsearch/thirdparty/requests/packages/urllib3/response.py", line 303, in stream
for line in self.read_chunked(amt, decode_content=decode_content):
File "/home/bo0om/soft/dirsearch/thirdparty/requests/packages/urllib3/response.py", line 447, in read_chunked
self._update_chunk_length()
File "/home/bo0om/soft/dirsearch/thirdparty/requests/packages/urllib3/response.py", line 401, in _update_chunk_length
raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)
[10:51:43] 301 - 0B - /adminarea/index.php -> https://bo0om.ru/adminarea/
I collect potentially dangerous files:
https://github.com/Bo0oM/fuzz.txt
Maybe it would make sense to merge them from time to time.
If you use -L sitelist.txt:
cat sitelist.txt
site1
site2
site3
site4
ls -Rl reports/
./site1:
-rw-r--r-- 154 15-11-13_10-17-36.txt
./site2:
-rw-r--r-- 0 15-11-13_10-17-02.txt
./site3:
-rw-r--r-- 0 15-11-13_10-11-03.txt
./site4:
-rw-r--r-- 0 15-11-13_10-15-12.txt
Only the site1 report has content; the rest are empty.
Hi,
Thanks for this new app; I have a feature request.
Would it be possible in the future to add URL-list loading, i.e. the possibility to load a list of URLs in place of a single URL address, like:
python dirs3arch.py -L=myUrlList.txt -e php (where "L" is the option for loading the list)
Thank you in advance,
Regards,
Don
When the program finishes scanning, the report file has a big size (for example 15 KB, but the report itself is actually only 10-15 lines) and I can't open it with any graphical editor; but when I read the report file using cat, it prints normally.
OS - GNU/Linux
First, thank you for your work.
Please add FreeBSD to the list of operating systems under which the _get_terminal_size_linux() function is used:
diff lib/utils/TerminalSize.py ~/dirsearch.git/trunk/lib/utils/TerminalSize.py
40c40
< if current_os in ['Linux', 'Darwin', 'FreeBSD'] or current_os.startswith('CYGWIN'):
---
> if current_os in ['Linux', 'Darwin'] or current_os.startswith('CYGWIN'):
42a43
> print("default")
Also print("default") on line 43 seems to be a forgotten debug message.
If you set the extensions option to a string the program doesn't recognise, such as '' or foo, it defaults to running searches for all of the extensions in its list. Is this the intended behaviour, or should it fail when given invalid extensions? I'd be happy to implement this fix myself once I know what the intended behaviour is.
Hello. Testing this in Kali with Python v2.7.3, and it appears that the extensions are not being appended. I've just pulled down the latest version as well. The wordlist of "list.txt" contains:
# cat list.txt
foo
login
admin
This is the command I'm using to run.
# ./dirs3arch.py -u http://127.0.0.1/ -w list.txt -t 1 -e aspx --http-proxy 127.0.0.1:8080
_ _ _____ _
__| (_)_ __ ___ |___ / __ _ _ __ ___| |__
/ _` | | '__/ __| |_ \ / _` | '__/ __| '_ \
| (_| | | | \__ \ ___) | | (_| | | | (__| | | |
\__,_|_|_| |___/ |____/ \__,_|_| \___|_| |_|
version 0.2.6
- Searching in: http://127.0.0.1/
- For extensions: aspx
- Number of Threads: 1
- Wordlist size: 3
Scanning directory:
Task Completed
But when we look at the Apache logs, only the first "you cannot be here" resource gets the .aspx extension appended:
[Wed Nov 05 13:17:04 2014] [error] [client 127.0.0.1] File does not exist: /var/www/youCannotBeHere7331
[Wed Nov 05 13:17:04 2014] [error] [client 127.0.0.1] File does not exist: /var/www/youCannotBeHere7331.aspx
[Wed Nov 05 13:17:04 2014] [error] [client 127.0.0.1] File does not exist: /var/www/foo
[Wed Nov 05 13:17:04 2014] [error] [client 127.0.0.1] File does not exist: /var/www/login
[Wed Nov 05 13:17:04 2014] [error] [client 127.0.0.1] File does not exist: /var/www/admin
Hello and thank you for this tool!
I was wondering if you could implement a system that would allow us to use dictionaries specific to a language. For example, dirseach could by default work on a common dictionary (with strings like .svn, .git, backup, etc) as well as an English dictionary (with strings like schedule, gallery...), and we could also add an option in the command line (for example, --language ISOCODE) to test for language-specific strings from dictionaries located in the db/ folder (for example, db/dicc_ISOCODE.txt).
Thank you for your answer.
Hello,
It would be great to have probabilistic brute-force ! (ex: search for most used directories, then search for most used files in these specifics directories).
Regards,
Is the db/dirbuster/ and other files in the db-directory been removed?
If no delay time is set, the WAF will ban it!
Please document this somewhere. It took me half a day to realize I need to add %EXT% at line endings in the dictionary file to check extensions. Please...
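For readers hitting the same problem, the %EXT% substitution described above works roughly like this sketch (a simplified illustration, not dirsearch's exact code):

```python
def apply_ext(line, extensions):
    """Expand one dictionary line: %EXT% is replaced by each extension,
    and lines without the tag are used verbatim."""
    if "%EXT%" in line:
        return [line.replace("%EXT%", ext) for ext in extensions]
    return [line]

apply_ext("index.%EXT%", ["php", "aspx"])
# -> ['index.php', 'index.aspx']
```

So a wordlist line must contain `%EXT%` for the -e extensions to be applied to it.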
When configured to use multiple threads, it sometimes doesn't find resources.
It's easy to check: just launch the program against a web server with known resources and a medium-size dictionary file.
Try it with "-t 1" and with "-t 20" several times. You'll see that the "-t 20" option sometimes returns fewer hits.
Hi,
scanning the http://tt-karaj.ir site I get this error:
Target: http://tt-karaj.ir
[09:59:45] Starting:
Traceback (most recent call last):
File "dirsearch.py", line 40, in <module>
main = Program()
File "dirsearch.py", line 34, in __init__
self.controller = Controller(self.script_path, self.arguments, self.output)
File "/home/drego85/Scrivania/Tools/dirsearch/lib/controller/Controller.py", line 130, in __init__
self.wait()
File "/home/drego85/Scrivania/Tools/dirsearch/lib/controller/Controller.py", line 323, in wait
self.fuzzer.start()
File "/home/drego85/Scrivania/Tools/dirsearch/lib/core/Fuzzer.py", line 80, in start
self.setupScanners()
File "/home/drego85/Scrivania/Tools/dirsearch/lib/core/Fuzzer.py", line 56, in setupScanners
self.defaultScanner = Scanner(self.requester, self.testFailPath, "")
File "/home/drego85/Scrivania/Tools/dirsearch/lib/core/Scanner.py", line 44, in __init__
self.setup()
File "/home/drego85/Scrivania/Tools/dirsearch/lib/core/Scanner.py", line 61, in setup
self.dynamicParser = DynamicContentParser(self.requester, firstPath, firstResponse.body, secondResponse.body)
File "/home/drego85/Scrivania/Tools/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 14, in __init__
self.generateDynamicMarks(firstPage, secondPage)
File "/home/drego85/Scrivania/Tools/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 31, in generateDynamicMarks
self.cleanPage = self.removeDynamicContent(firstPage, self.dynamicMarks)
File "/home/drego85/Scrivania/Tools/dirsearch/thirdparty/sqlmap/DynamicContentParser.py", line 92, in removeDynamicContent
page = re.sub(r'(?s)%s.+$' % prefix, prefix, str(page))
File "/usr/lib/python3.4/re.py", line 179, in sub
return _compile(pattern, flags).sub(repl, string, count)
File "/usr/lib/python3.4/re.py", line 294, in _compile
p = sre_compile.compile(pattern, flags)
File "/usr/lib/python3.4/sre_compile.py", line 568, in compile
p = sre_parse.parse(p, flags)
File "/usr/lib/python3.4/sre_parse.py", line 765, in parse
raise error("unbalanced parenthesis")
sre_constants.error: unbalanced parenthesis
Hey,
Let's say you want to try aspx,asp and no extension.
When you try this: -e asp,aspx,,
it tries (say the current word is service):
service.asp
service.aspx
service.
service/
You want to look for:
service.asp
service.aspx
service
service/
So these two lines do the trick:
In the file dirsearch/lib/connection/Requester.py:
if url.endswith('.'):
url = url[:-1]
I added it before the request: lines 130-131
;)