
efetch's People

Contributors

atilaromero, maurermj08, michael-dolosdev, robersor, syrkadian


efetch's Issues

Hide plugins that require Elasticsearch when they cannot connect

Currently, plugins that require Elasticsearch are always visible, even when Efetch cannot connect to Elasticsearch. Viewing these plugins results in a 500 response and a ConnectionError.

Add a check to every plugin that requires Elasticsearch so that it is hidden when Efetch fails to connect to Elasticsearch.
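
A possible shape for that check, sketched with the elasticsearch-py client (the helper name and default host are placeholders, not part of Efetch):

```python
from elasticsearch import Elasticsearch


def elasticsearch_available(host='localhost:9200'):
    """Return True only when the Elasticsearch cluster answers a ping."""
    try:
        return Elasticsearch([host]).ping()
    except Exception:
        return False

# Plugins that need Elasticsearch could call this before registering, and the
# navigation bar could skip any plugin for which it returns False.
```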

Docker image

Hi,

Great product! Is it possible for you to provide either a ready-made Docker image or a Dockerfile for this project?

Thanks.

Rison Error when changing color in Kibana

When changing the color of a visualization in Kibana, the field is updated with "colors:(file:#FF0000)", which breaks the rison library and produces an "invalid character: '#'" error.

ERROR:root:Failed to parse rison: (filters:!(),options:(darkTheme:!f),panels:!((col:10,id:xpquick-action,panelIndex:1,row:1,size_x:3,size_y:4,type:visualization),(col:7,id:xpquick-ext,panelIndex:2,row:1,size_x:3,size_y:4,type:visualization),(col:1,id:xpquick-histogram,panelIndex:3,row:1,size_x:6,size_y:4,type:visualization),(col:1,columns:!(pathspec,path,type),id:xpquick,panelIndex:5,row:5,size_x:9,size_y:4,sort:!(datetime_atime,desc),type:search),(col:10,id:xpquick-type,panelIndex:6,row:5,size_x:3,size_y:2,type:visualization),(col:10,id:xpquick-count,panelIndex:7,row:7,size_x:3,size_y:2,type:visualization)),query:(query_string:(analyze_wildcard:!t,query:'*')),title:'xpquick one',uiState:(P-2:(vis:(legendOpen:!t)),P-6:(vis:(colors:(file:#2F575E)))))
Traceback (most recent call last):
  File "/home/user/GIT/efetch/efetch_server/utils/db_util.py", line 110, in get_theme
    a_parsed = rison.loads(a_parameter)
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 225, in loads
    return Parser().parse(s, format=format)
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 34, in parse
    value = self.read_value()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 45, in read_value
    return self.parse_open_paren()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 120, in parse_open_paren
    v = self.read_value()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 45, in read_value
    return self.parse_open_paren()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 120, in parse_open_paren
    v = self.read_value()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 45, in read_value
    return self.parse_open_paren()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 120, in parse_open_paren
    v = self.read_value()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 45, in read_value
    return self.parse_open_paren()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 120, in parse_open_paren
    v = self.read_value()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 45, in read_value
    return self.parse_open_paren()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 120, in parse_open_paren
    v = self.read_value()
  File "build/bdist.linux-x86_64/egg/rison/decoder.py", line 62, in read_value
    raise ParserException("invalid character: '" + c + "'")
ParserException: invalid character: '#'
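
One possible workaround, sketched below, is to quote bare hex colors before handing the string to the parser; `quote_hex_colors` and `safe_rison_loads` are hypothetical helpers, not part of Efetch:

```python
import re

import rison  # the same module used by db_util.get_theme


def quote_hex_colors(rison_text):
    """Wrap bare #RRGGBB values in single quotes so the rison parser accepts them."""
    return re.sub(r":(#[0-9A-Fa-f]{3,8})(?=[,)])", r":'\1'", rison_text)


def safe_rison_loads(rison_text):
    try:
        return rison.loads(rison_text)
    except Exception:
        # Retry with the Kibana color values quoted, e.g. colors:(file:'#2F575E')
        return rison.loads(quote_hex_colors(rison_text))
```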

0.5.0 Release

The following items should be completed for the 0.5.0 release:

  • Replace Bottle with Flask
  • Update directory to Jinja2 and AJAX
  • Update analyze to Jinja2
  • Update overview to Jinja2
  • Add small loading icon to Strings, Hex Dump, and Directory during AJAX calls
  • Fix (re-enable) polling of Plugins YAML file
  • Update thumbnail to not require PIL (No PIL = icons only)
  • Update strings to be memory only and AJAX
  • Update hexdump to be memory only and AJAX
  • Memory-only hash plugin (SHA1, SHA256, MD5); see the hashing sketch below
  • Update all plugins to final names (there should be no more name changes after this release)
  • Timezone flag (-z)
  • Better Elasticsearch detection
  • Bitmaps (BMP) do not show in browser, remove from preview and create bitmap preview plugin
  • Separate partition and shadow volume page (or at least label shadow volumes by partition)
  • Clean and document Efetch Helper, Pathspec Helper, and DB Util
  • Add tests
  • Significant testing

Possibly this release:

  • Encryption password prompt
  • Pseudo pathspec
  • Pure python default
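
For the memory-only hash plugin item above, a minimal sketch of computing MD5, SHA1, and SHA256 from an in-memory buffer (the function name is illustrative, not Efetch's actual plugin interface):

```python
import hashlib


def hash_buffer(data):
    """Return MD5, SHA1, and SHA256 digests for a bytes buffer without touching disk."""
    return {
        'md5': hashlib.md5(data).hexdigest(),
        'sha1': hashlib.sha1(data).hexdigest(),
        'sha256': hashlib.sha256(data).hexdigest(),
    }
```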

Update/Clean DB Util

The DB Util currently has many unused methods and is missing many important methods that plugins must currently implement themselves.

  • Move all custom Elasticsearch calls from core plugins into DB Util
  • Remove all unused methods from DB Util

Run plugin on multiple files

Add the ability to run a single plugin against a set number of files returned by an Elasticsearch query. The Action plugin currently has basic support, but further work is required.
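
A rough sketch of the idea, assuming the elasticsearch-py client; `plugin.run(pathspec)` is a placeholder standing in for whatever interface the Action plugin actually exposes:

```python
from elasticsearch import Elasticsearch


def run_plugin_on_query(plugin, query_string, index='efetch*',
                        host='localhost:9200', max_files=100):
    """Run one plugin against every file matched by an Elasticsearch query."""
    es = Elasticsearch([host])
    results = es.search(index=index, size=max_files,
                        body={'query': {'query_string': {'query': query_string}}})
    for hit in results['hits']['hits']:
        pathspec = hit['_source'].get('pathspec')
        if pathspec:
            plugin.run(pathspec)  # placeholder for the real plugin interface
```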

efetch won't start up

Hi,
I installed efetch on Ubuntu 16.04 with the sift/stable and gift/stable repositories.
Then, when I went to start it, I got the following error:

ImportError: No module named methods.wsgi

The complete error message is attached as efetch_error.txt.

Expanding evidence items in ZIP files

There is an issue when trying to expand any evidence item within a ZIP file.

Trying to list the sub-items of the following pathspec returns a list containing only the same pathspec, resulting in a loop in efetch and a blank directory listing:
'{"type_indicator": "ZIP", "type": "PathSpec", "location": "/EfetchTestCase.E01", "parent": {"type_indicator": "OS", "type": "PathSpec", "location": "/media/sf_J_DRIVE/EfetchTestCase.zip"}}'

The issue appears to be in dfvfs_util.get_base_from_pathspec(), though it may stem from other issues with the ZIP pathspec.
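
To help isolate whether the loop comes from dfVFS or from Efetch, a small debugging sketch that lists the root of the ZIP container directly with dfVFS, to confirm dfVFS itself can see the E01 inside (paths are the ones from the report and may need adjusting; this is not part of Efetch):

```python
from dfvfs.lib import definitions
from dfvfs.path import factory
from dfvfs.resolver import resolver

# Build OS -> ZIP pathspecs matching the ones in the report.
os_spec = factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_OS, location='/media/sf_J_DRIVE/EfetchTestCase.zip')
zip_spec = factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_ZIP, location='/', parent=os_spec)

# Listing the ZIP root directly shows whether dfVFS can enumerate the E01 inside.
file_entry = resolver.Resolver.OpenFileEntry(zip_spec)
for sub_entry in file_entry.sub_file_entries:
    print(sub_entry.name)
```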

Post your YAML plugins here!

If you created a YAML plugin and are willing to share it, please post it here in the comments! I will add them to the YAML Plugins collection. A simple plain-text post with a description and any dependencies is fine.

Example:
CLAMSCAN: Runs ClamAV against a specific file and displays the results.
Requires: clamav

clamscan:
  name: Clam Scan
  icon: fa-bug
  command: "clamscan '{{ file_cache_path }}'"

Ideas for efetch

Plugins:

Misc:

  • dfVFS Remote Wrapper: A wrapper for dfVFS that indicates a file exists on a specific remote system
  • Docker: An efetch docker file and image
  • VM: A pre-built efetch virtual box VM or OVA

Paging Past 10000

When paging past 10,000 items, Elasticsearch throws the following exception:
" QueryPhaseExecutionException[Result window is too large, from + size must be less than or equal to: [10000] but was [28950]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter.]"

The following excerpt from Elasticsearch's documentation explains why:

"Deep Paging in Distributed Systems

To understand why deep paging is problematic, let’s imagine that we are searching within a single index with five primary shards. When we request the first page of results (results 1 to 10), each shard produces its own top 10 results and returns them to the coordinating node, which then sorts all 50 results in order to select the overall top 10.

Now imagine that we ask for page 1,000—results 10,001 to 10,010. Everything works in the same way except that each shard has to produce its top 10,010 results. The coordinating node then sorts through all 50,050 results and discards 50,040 of them!

You can see that, in a distributed system, the cost of sorting results grows exponentially the deeper we page. There is a good reason that web search engines don’t return more than 1,000 results for any query."

https://www.elastic.co/guide/en/elasticsearch/guide/current/pagination.html

The current workaround is to raise the index's max_result_window setting:

curl -XPUT "http://localhost:9200/efetch*/_settings" -d '{ "index" : { "max_result_window" : 500000 } }'
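
Instead of raising max_result_window, the scroll API mentioned in the exception can stream deep result sets; a minimal sketch using elasticsearch-py's scan helper (the index pattern matches the workaround above, and the print call is a placeholder for whatever paging code needs each hit):

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch(['localhost:9200'])

# scan() wraps the scroll API, so documents past the 10,000th can be
# streamed without tripping the max_result_window limit.
for hit in scan(es, index='efetch*', query={'query': {'match_all': {}}}, size=1000):
    print(hit['_source'].get('path'))  # placeholder for the caller's handling of each hit
```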

Better Non-TSK Pathspec Support

The following fields are only pulled when using TSK:

  • size
  • mtime
  • atime
  • ctime
  • crtime
  • uuid
  • gid

In addition, the meta_type field is limited to File, Directory, and Unknown when not using TSK. Further testing is required for non-TSK pathspecs, and additional fields should be added.

Cannot handle multi part EWF files

I get the following error when uploading an image using the EWF format with more than one file:

[2019-02-10 10:41:10,318] ERROR in app: Exception on / [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/Flask-1.0.2-py2.7.egg/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/Flask-1.0.2-py2.7.egg/flask/app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/Flask-1.0.2-py2.7.egg/flask/app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/dist-packages/Flask-1.0.2-py2.7.egg/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/Flask-1.0.2-py2.7.egg/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/charlie/install/efetch/efetch-master/efetch_server/efetch_app.py", line 50, in home
    pathspec = _helper.pathspec_helper.get_encoded_pathspec(upload_cache_path)
  File "/home/charlie/install/efetch/efetch-master/efetch_server/utils/pathspec_helper.py", line 768, in get_encoded_pathspec
    return JsonPathSpecSerializer.WriteSerialized(PathspecHelper.get_pathspec(pathspec_or_source))
  File "/home/charlie/install/efetch/efetch-master/efetch_server/utils/pathspec_helper.py", line 754, in get_pathspec
    dfvfs_util = DfvfsUtil(pathspec_or_source)
  File "/home/charlie/install/efetch/efetch-master/efetch_server/utils/dfvfs_util.py", line 54, in __init__
    self.base_path_specs = self.get_base_pathspecs(source, interactive)
  File "/home/charlie/install/efetch/efetch-master/efetch_server/utils/dfvfs_util.py", line 855, in get_base_pathspecs
    u'Unable to scan source with error: {0:s}.'.format(exception))
RuntimeError: Unable to scan source with error: Unable to process source path specification with error: 'pyewf_handle_read_buffer: unable to read data. libewf_chunk_data_initialize: invalid chunk data. libewf_read_io_handle_read_chunk_data: unable to create chunk data. libewf_handle_read_buffer: unable to read chunk data: 148660.'.

The image I used was here: https://www.cfreds.nist.gov/Hacking_Case.html, in particular, https://www.cfreds.nist.gov/images/4Dell%20Latitude%20CPi.E01 and https://www.cfreds.nist.gov/images/4Dell%20Latitude%20CPi.E02

I am using Ubuntu 18.04; I downloaded efetchmaster.zip and installed it using the install.sh script.
It works fine when the EWF image consists of one file.

Issue with port being in use when trying to run with docker

I have tried to build a Docker image using your quick installer on an ubuntu:14.04 base image. It seems to work when I just type "efetch" without any parameters on the command line, but then I get a "connection refused" from the web server. When I try to specify the address as seen below and open up the port in Docker, I get an error, also shown below. Any ideas?

It does not matter which port I specify; I get the same error message.

Run command

docker run -d --name=efetch -p 8080:8080 -ti efetch:0.4 efetch --address 192.168.99.104 --elastic 192.168.99.101:9200

Error

$ docker logs -f efetch
INFO:root:Plugin YAML file updated
ERROR:Rocket.Errors.Port8080:Socket 192.168.99.104:8080 in use by other process and it won't share.
INFO:Rocket:Starting Rocket 1.2.4
INFO:Rocket:Listening on sockets: 192.168.99.104:8080
WARNING:Rocket.Errors.Port8080:Listener started when not ready.

NB: Nothing else should be using that port, so the error is strange. Could it be something with the web server used by efetch?

Prepare for update 0.4.0

List of tasks required before creating 0.4.0 Beta

  • Create dpkg [CANCELED]
  • Add evidence recursion (raw, E01, VMDK, etc.)
  • Clean up Pathspec helper
  • Hide plugins that require Elasticsearch when it is not available (Possibly a flag when running efetch?) [Moved to 0.5.0]
  • Add timezone flag -z [Moved to 0.5.0]
  • Update UI
  • Run tests on the updated caching
  • Separate the hash plugins and update them to use the buffer instead of the cache [Moved to 0.5.0]
  • Remove/Hide/Comment out experimental plugins

ImportError: No module named methods.wsgi

Hey, I got this error; how can I solve it? (I used SIFT.)

INFO:root:Plugin YAML file updated
Traceback (most recent call last):
  File "/usr/local/bin/efetch", line 4, in <module>
    __import__('pkg_resources').run_script('efetch==0.4b0', u'efetch')
  File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 654, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1434, in run_script
    exec(code, namespace, namespace)
  File "/usr/local/lib/python2.7/dist-packages/efetch-0.4b0-py2.7.egg/EGG-INFO/scripts/efetch", line 62, in <module>
    efetch.start()
  File "/usr/local/lib/python2.7/dist-packages/efetch-0.4b0-py2.7.egg/efetch_server/__init__.py", line 71, in start
    rocket = Rocket((self._address, self._port), 'wsgi', {'wsgi_app': self._app})
  File "build/bdist.linux-x86_64/egg/rocket/main.py", line 74, in __init__
  File "build/bdist.linux-x86_64/egg/rocket/worker.py", line 400, in get_method
ImportError: No module named methods.wsgi

Storing values for YAML plugins

Currently there is no way to save the output of YAML plugins to Elasticsearch. There should be an option to store a plugin's results in Elasticsearch, matching on pathspec.

For example, being able to run ClamAV against all EXEs and store whether each passed or failed with its timeline entry, so that entries could then be filtered on the results.
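
A rough sketch of such an option, assuming the elasticsearch-py client; the index name, document type, and field layout are illustrative rather than Efetch's actual schema:

```python
import hashlib

from elasticsearch import Elasticsearch


def store_plugin_result(pathspec, plugin_name, output,
                        host='localhost:9200', index='efetch_plugin_results'):
    """Upsert a YAML plugin's output keyed on the pathspec so entries can be filtered on it."""
    es = Elasticsearch([host])
    doc_id = hashlib.sha256(pathspec.encode('utf-8')).hexdigest()
    es.update(index=index, doc_type='plugin_result', id=doc_id,
              body={'doc': {'pathspec': pathspec, plugin_name: output},
                    'doc_as_upsert': True})
```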

Broken Dependencies - Python

The install breaks when running this command from the Dockerfile:
RUN apt-get -y install python-plaso python-dev python-setuptools unoconv libpff libpff-python zlib1g-dev libjpeg-dev libtiff5-dev python-pip

The package list seems to need updating.
