
meg's People

Contributors: cmbuckley, edoverflow, jack-dds, leesoh, realytcracker, tomnomnom


meg's Issues

Random user agent

Hi,

It would be nice to have an option to add a random user agent. The one currently sent is easy to identify and block (via robots.txt or .htaccess).

Thanks!
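One way the request above could be satisfied is to pick a user-agent string per request from a small pool. This is only a sketch; the pool entries and the `randomAgent` helper are illustrative, not code from meg:

```go
package main

import (
	"fmt"
	"math/rand"
)

// A small pool of common browser user-agent strings; the entries
// here are illustrative examples, not taken from meg itself.
var agents = []string{
	"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
	"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
	"Mozilla/5.0 (X11; Linux x86_64; rv:120.0) Gecko/20100101 Firefox/120.0",
}

// randomAgent picks one entry at random for each request.
func randomAgent() string {
	return agents[rand.Intn(len(agents))]
}

func main() {
	fmt.Println(randomAgent())
}
```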

It's not possible to send POST data

It's not currently possible to send data with POST, PUT, and similar requests.

It would be nice to use -d like curl, but it's already taken by the request delay.

Support for single host

Hi @tomnomnom,

Thank you for actively developing this project. I'd like to request a minor improvement.

Currently, meg accepts both a prefix (host) list and a suffix (path) list and processes them accordingly. For quick tests it would be nice and helpful to be able to feed a single host alongside the existing options.

Example:

meg suffix http://example.com

Thank you.

meg can't follow redirects

At the moment meg will just save the response for HTTP redirects. It would be nice to have an option to follow them and save the resulting responses too.

curl has -L / --location for following redirects so that seems like a reasonable candidate.

This will be much easier for the Go HTTP client than the rawhttp client as it's the default behaviour of the former.
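With the Go client, the choice could be exposed by toggling `CheckRedirect`, since following redirects is its default behaviour. A minimal sketch (the `newClient` helper and the flag semantics are assumptions, not meg's actual implementation):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// newClient returns an HTTP client that either follows redirects
// (the Go default) or stops at the first response, depending on a
// hypothetical -L style flag.
func newClient(followRedirects bool) *http.Client {
	c := &http.Client{}
	if !followRedirects {
		// http.ErrUseLastResponse tells the client to return the
		// redirect response itself instead of following it.
		c.CheckRedirect = func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		}
	}
	return c
}

func main() {
	// A local test server: / redirects to /final.
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		http.Redirect(w, r, "/final", http.StatusMovedPermanently)
	})
	mux.HandleFunc("/final", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	srv := httptest.NewServer(mux)
	defer srv.Close()

	resp, err := newClient(false).Get(srv.URL)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.StatusCode) // 301: redirect saved, not followed
	resp.Body.Close()

	resp, err = newClient(true).Get(srv.URL)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.StatusCode) // 200: redirect followed
	resp.Body.Close()
}
```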

path issue

I got the meg binary and moved it to /usr/bin, but I'm getting ./paths not found.

What's the issue?

No way to detect Wildcard

Hi Tom,

I think there should be some way to detect wildcard URLs and, based on that, stop sending requests to those endpoints.

This doesn't impact the results, but it would save time and resources on endpoints that return 200 OK for everything.

Personally I'm not very familiar with Go, but this is what I tried for detection:

  1. Create a counter variable and an empty map
  2. Check whether the random string is in the URL and the server responds with 200 OK
  3. Increase the counter by 1 and add the hostname to the map, with the hostname as key and the counter as value
  4. If the count is greater than 3, check whether the map also has the hostname with a value greater than 3
  5. Find the index of the hostname in the hosts slice and remove it

Code compared to original repo

It does work at detecting the domain and removing it from the hosts slice, but since the request worker runs in the background and only reads the hosts list once at startup, the removal has no effect.


Even if you don't wish to implement this feature in meg, it would still be of great help if you could give your input on this and on my logic for solving the issue.

Regards,
Bugbaba

(feature request) Present failed connections when finished

First of all thanks a lot for so many useful tools!
I was testing an application and was sure meg was missing some requests, since testing manually with curl I was getting 200s. Reducing the number of threads solved the problem, but I have no idea how many requests meg missed in the past without me noticing. If meg reported the total number of failed connections, it would help identify this issue quickly.

Getting request failed: unsupported protocol scheme error.

Hi,

I am getting a request failed: unsupported protocol scheme error for all hosts, even though they are alive and resolve when accessed through a browser.

Error :

request failed: Get sadsa.test.com*: unsupported protocol scheme ""

Can you please advise?

Thanks

Lines in the hosts file must specify the protocol

At the moment you must specify hosts like this:

http://example.com
https://example.org
http://example.net

It's pretty common for tools to accept raw domains as input (and therefore pretty common for people to have them stored in that format), but at the moment meg does not allow that.

There's a couple of options to handle bare domains:

  1. Blindly add https:// and/or http://
  2. Do a request for both HTTP and HTTPS and only add those that respond

The former is definitely easier, but the second will avoid the potential for lots of timeouts (timeouts tie up a worker goroutine for the entire duration of the timeout) at the expense of increased complexity and a couple of extra requests to the hosts.
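The simpler option 1 could look like the sketch below; the `addSchemes` helper name is hypothetical, and option 2 would replace the blind prepend with a probe of each candidate:

```go
package main

import (
	"fmt"
	"strings"
)

// addSchemes implements option 1 above: blindly prepend https:// and
// http:// to bare domains, leaving lines that already carry a scheme
// untouched.
func addSchemes(line string) []string {
	if strings.Contains(line, "://") {
		return []string{line}
	}
	return []string{"https://" + line, "http://" + line}
}

func main() {
	for _, u := range addSchemes("example.com") {
		fmt.Println(u)
	}
	// https://example.com
	// http://example.com
}
```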

There's no way to make non-compliant requests

One of the things I've looked at in the past is how web applications react to non-compliant requests, such as those with invalid URL encoding (e.g. %%0a0a), or those where the request path doesn't start with a slash (e.g. GET @example.com HTTP/1.1).

Go's http library does not provide a way to do that (the parser in the url package chokes on invalid URLs), which is great for 99% of use cases, but not this one.

I've started working on a package that addresses this issue (https://github.com/tomnomnom/rawhttp), so it would be nice to have an option to use it.

rawhttp still has a few issues (e.g. it doesn't yet support chunked encoding, which Cloudflare seems awfully keen on), but it works well enough to be included - perhaps with an 'experimental' warning against it.

The main issue with sending non-compliant requests will likely be that the request type is currently:

type request struct {
    method  string
    url     *url.URL
    headers []string
}

But it's not possible to make a *url.URL type for a malformed URL.

The best way around this is probably to split the url property into prefix and suffix components:

type request struct {
    method  string
    prefix  string
    suffix  string
    headers []string
}

And then attach a few methods to request that do things like parse out the hostname that are currently provided by *url.URL. The prefix should pretty much always be parseable by the url package, so it might be a good idea to use that under the hood for getting the hostname etc.
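A minimal sketch of that idea, assuming the split struct above; the `Hostname` method shown here is an illustration of attaching such a method, not meg's actual code:

```go
package main

import (
	"fmt"
	"net/url"
)

// request with the URL split into prefix and suffix, as proposed
// above, so the suffix can be deliberately malformed.
type request struct {
	method  string
	prefix  string
	suffix  string
	headers []string
}

// Hostname parses only the prefix, which should almost always be a
// valid URL even when the suffix would choke the url package.
func (r request) Hostname() string {
	u, err := url.Parse(r.prefix)
	if err != nil {
		return ""
	}
	return u.Hostname()
}

func main() {
	r := request{method: "GET", prefix: "https://example.com", suffix: "/%%0a0a"}
	fmt.Println(r.Hostname()) // example.com
}
```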

Response code wrong when redirecting

I see that in #18 , meg gained the ability to follow redirects which is an excellent addition. Unfortunately, in the index file, these redirects are being saved with 200 response codes.

e.g. This is what I am currently getting

out/example.com/{hash} https://example.com/this-url-redirects (200 OK)
out/example.com/{hash} https://example.com/redirected/ (200 OK)

What I think it should show is this:

out/example.com/{hash} https://example.com/this-url-redirects (301 Moved Permanently)
out/example.com/{hash} https://example.com/redirected/ (200 OK)

Feature Feedback: Would you be interested in being able to use files of full urls?

Hi, I created a fork (https://github.com/3lpsy/megurl) that ingests a file of full URLs instead of a hosts file plus a paths file. The fork completely removes the ability to do the hosts + paths approach, but if I made a PR that maintained backwards compatibility and also added the ability to simply pass a pregenerated list of full URLs, would you be interested? I imagine it'd look like this:

meg paths hosts outputdir
meg -urls-only urls.txt outputdir

If this is not something you're interested in, no worries.

Error while installing

Hello,

I have a problem like this

root@xxx:~/sf/megplus# go get github.com/tomnomnom/meg
# github.com/tomnomnom/rawhttp
/root/go/src/github.com/tomnomnom/rawhttp/request.go:102: u.Hostname undefined (type *url.URL has no field or method Hostname)
/root/go/src/github.com/tomnomnom/rawhttp/request.go:103: u.Port undefined (type *url.URL has no field or method Port)
/root/go/src/github.com/tomnomnom/rawhttp/request.go:259: undefined: x509.SystemCertPool

how to solve this? Thanks.

Support multiple values for an option & negative match if possible

I am not quite sure what the appropriate pattern for this might be: either a comma-separated list:
-s 200,403
or being specified multiple times:
-s 200 -s 403

If possible, also add a negative-match parameter:
-e 404,500 (save all except 404/500)

I don't know if it makes sense, but if the user explicitly uses the -s or -e parameters, those filters should apply to the output (terminal/files).
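Parsing and applying such flag values could be sketched like this; the `parseCodes` and `shouldSave` helpers are hypothetical names, and the -s/-e semantics follow the issue text:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCodes turns a comma-separated flag value like "200,403" into
// a set of status codes.
func parseCodes(raw string) (map[int]bool, error) {
	codes := make(map[int]bool)
	for _, part := range strings.Split(raw, ",") {
		n, err := strconv.Atoi(strings.TrimSpace(part))
		if err != nil {
			return nil, err
		}
		codes[n] = true
	}
	return codes, nil
}

// shouldSave applies -s (save only these codes) or, with exclude set,
// -e (save everything except these codes).
func shouldSave(status int, codes map[int]bool, exclude bool) bool {
	if exclude {
		return !codes[status]
	}
	return codes[status]
}

func main() {
	save, _ := parseCodes("200,403")
	fmt.Println(shouldSave(200, save, false)) // true
	fmt.Println(shouldSave(404, save, false)) // false
	fmt.Println(shouldSave(404, save, true))  // true: negative match
}
```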

There's not enough entropy in the output filenames

At the moment only the request URL is used as an input to the filename hash.

It should really be a hash of the entire output to avoid overwriting files from matching URLs where the output might have changed.
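Hashing the URL together with the full response would avoid those collisions. A sketch, with a hypothetical `outputHash` helper (meg's actual hashing may differ):

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// outputHash names the output file from a hash of the URL plus the
// entire response, so the same URL with changed output gets a new
// file instead of overwriting the old one.
func outputHash(url string, response []byte) string {
	h := sha1.New()
	h.Write([]byte(url))
	h.Write(response)
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	a := outputHash("https://example.com/x", []byte("old body"))
	b := outputHash("https://example.com/x", []byte("new body"))
	fmt.Println(a != b) // true: changed output, different filename
}
```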

There's no ability to use multiple suffixes

It'd be really useful to be able to provide multiple suffixes and have them all fetched.

To avoid hammering any one site too much it should do something like:

for _, suffix := range suffixes {
    for _, prefix := range prefixes {
        fetch(prefix + suffix)
    }
    time.Sleep(...)
}

There should be a configurable delay between checking each suffix.

Add support to report progress for a running request

Add some kind of basic progress status, updating a percentage value based on the hosts × paths requests remaining...

 [14:40:12] Starting:
[14:40:14] 302 - http://xxxxxxx.com/a/ output/xxxxxxx.com/02179d82731b29aead42ca2035fbb29c69a3eacd
[14:40:15] 404 - http://xxxxxxx.com/b/ output/xxxxxxx.com/02179d82731b29aead42ca2035fbb29c69a3eace
[14:40:18] 200 - http://xxxxxxx.com/c/ output/xxxxxxx.com/02179d82731b29aead42ca2035fbb29c69a3eacf
**10.00%** - Last request to: http://xxxxxxx.com/c
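The percentage itself is a small calculation over the hosts × paths total; a sketch with a hypothetical `progress` helper:

```go
package main

import "fmt"

// progress reports percent complete, where the total number of
// requests is hosts × paths.
func progress(done, hosts, paths int) float64 {
	total := hosts * paths
	if total == 0 {
		return 0
	}
	return 100 * float64(done) / float64(total)
}

func main() {
	fmt.Printf("%.2f%% complete\n", progress(3, 10, 3)) // 10.00% complete
}
```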

Argument processing takes up nearly half of main()

As the number of arguments and options increases, more and more of main() is taken up with bookkeeping instead of what the program is actually doing.

It'd be good to move the argument processing to a function that returns some kind of config struct.

Feature Request/Question: Accept URL's from Stdin

Hey! I'm not sure if I'm being an idiot, but it seems like there isn't any way to accept URLs from stdin?

I was about to write my own tool when I discovered meg. Would it be possible to do something like:
crobat -s dyson.com | httprobe | meg paths - out

Is there support for this with some bash-fu, or is this something I could PR?

Thanks!

Keep up the great work :)

Deterministic output file names

Would you be interested in a command line parameter that makes meg output deterministic file names? By that I mean that running meg on http://domain.tld/file.ext any number of times would always write the response to the same file.

My use case is that I run my recon in a git repo and I want to be able to diff the responses.

Basically instead of naming the file with a hash of the content, I'd use a hash of the path.

I'm going to implement this anyway in a fork, just asking this to know if you'll be interested in a PR afterwards. :)
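The deterministic variant is a one-liner over the URL alone; the `pathHash` name is from the fork's idea, not meg's code:

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// pathHash derives the output filename from the request URL alone,
// so every run writes the response for a given URL to the same file,
// which makes the output directory diff-friendly in a git repo.
func pathHash(rawurl string) string {
	return fmt.Sprintf("%x", sha1.Sum([]byte(rawurl)))
}

func main() {
	fmt.Println(pathHash("http://domain.tld/file.ext"))
}
```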

There's no user agent

Meg should have its own user agent so that anyone seeing requests in their logs can see what's making them.

Ideally it should be a Mozilla-alike user agent, for sites that do UA detection etc.

Rate limiting is very basic

It'd be a really good idea to rate limit per domain (or maybe per IP) to prevent hammering hosts when there aren't many prefixes.

Meg Slash issue

By default meg sends a trailing / after each word in the paths wordlist. Is there any way to stop meg appending the trailing / to words in path.txt?

I have a question

In the hosts file, do I really need to put http or https? Can't I just put example.com? Subdomain tools such as subfinder, amass, and Sublist3r don't output http:// or https://example.com. Sorry for my English; I hope you can understand me.

Support for storing response header only

Hi @tomnomnom,

Right now meg stores the response body for every request, but sometimes we only need the response headers for header inspection, and there is no option to store just the headers or exclude the body. I'd suggest an optional flag for this, as it could improve speed when we are only looking at response headers.

Timeout isn't controllable

The HTTP timeout is currently hard coded to 10 seconds.

The user should be able to decide what that value is.

The prefix/suffix terminology is non-standard

The use of 'prefix' and 'suffix', while technically accurate, is a bit confusing to people trying to understand what the tool really does.

It should be changed to host(s) and path(s) instead.

Multiple Headers

Hi, I wanted to test Host header injection with multiple headers, but I found that only one header can be sent.
I suggest adding the ability to send multiple headers.

path error while running first time

Hi, I am facing this error while running meg for the first time (sorry, I am new to this):

root@abc:~# meg
failed to open paths file: file ./paths not found

I am using:
root@abc:~# go version
go version go1.12.7 linux/amd64

go env Output
root@abc:~# go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/root/work"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build257246342=/tmp/go-build -gno-record-gcc-switches"

Display Human-readable size of the http response

Could you show the length of the response in the output?
Some sites don't care about best practices and respond with a 200 status code for their custom 404 pages ("sorry, not found"), making it hard to detect positive findings.
With the size in the output we could easily filter out those findings.

Proxy Settings on Cygwin

Hi,

Thanks for an amazing tool!
I'm in a Cygwin environment behind a firewall and proxy.
The variables below don't seem to work for using meg with a proxy:

export HTTP_PROXY="http://myproxy.com:8080";
export HTTPS_PROXY="http://myproxy.com:8080";

I assumed meg would load the proxy settings from the OS variables above, but it doesn't work.
Do you know how I can use meg in the situation described above?

Rawhttp is flaky

rawhttp still doesn't support chunked responses etc., so it'd be nice to have switchable HTTP engines.

Using the Go HTTP engine by default would be best, only switching to rawhttp if the request is 'weird'.

Modem Unresponsive

Hello @tomnomnom ,
Whenever I scan a list of 100+ hosts with meg, my wifi hangs / becomes unresponsive; I have to restart the wifi multiple times whenever I scan with meg.
