kevoreilly / capev2

Malware Configuration And Payload Extraction

Home Page: https://capesandbox.com/analysis/

License: Other

Python 91.80% HTML 4.78% JavaScript 0.54% Makefile 0.01% C 0.01% YARA 0.79% Shell 0.94% Mako 0.01% CSS 1.00% Batchfile 0.08% PowerShell 0.05%
configs debugging-tools malware malware-analysis malware-research reverse-engineering sandbox unpacking cape

capev2's Introduction

CAPE: Malware Configuration And Payload Extraction - Documentation

CAPE is a malware sandbox.

A sandbox is used to execute malicious files in an isolated environment whilst instrumenting their dynamic behaviour and collecting forensic artefacts.

CAPE was derived from Cuckoo v1 which features the following core capabilities on the Windows platform:

  • Behavioral instrumentation based on API hooking
  • Capture of files created, modified and deleted during execution
  • Network traffic capture in PCAP format
  • Malware classification based on behavioral and network signatures
  • Screenshots of the desktop taken during the execution of the malware
  • Full memory dumps of the target system

CAPE complements Cuckoo's traditional sandbox output with several key additions:

  • Automated dynamic malware unpacking
  • Malware classification based on YARA signatures of unpacked payloads
  • Static & dynamic malware configuration extraction
  • Automated debugger programmable via YARA signatures, allowing:
    • Custom unpacking/config extractors
    • Dynamic anti-sandbox countermeasures
    • Instruction traces
  • Interactive desktop

There is a free demonstration instance online that anyone can use:

https://capesandbox.com - for account activation, reach out to https://twitter.com/capesandbox

Some History

Cuckoo Sandbox started as a Google Summer of Code project in 2010 within The Honeynet Project. It was originally designed and developed by Claudio Guarnieri; the first beta release was published in 2011. In January 2014, Cuckoo v1.0 was released.

2015 was a pivotal year, with a significant fork in Cuckoo's history. Development of the original monitor and API hooking method was halted in the main Cuckoo project. It was replaced by an alternative monitor using a restructuredText-based signature format compiled via Linux toolchain, created by Jurriaan Bremer.

Around the same time, a fork called Cuckoo-modified was created by Brad 'Spender' Spengler continuing development of the original monitor with significant improvements including 64-bit support and importantly introducing Microsoft's Visual Studio compiler.

During that same year development of a dynamic command-line configuration and payload extraction tool called CAPE was begun at Context Information Security by Kevin O'Reilly. The name was coined as an acronym of 'Config And Payload Extraction' and the original research focused on using API hooks provided by Microsoft's Detours library to capture unpacked malware payloads and configuration. However, it became apparent that API hooks alone provide insufficient power and precision to allow for unpacking of payloads or configs from arbitrary malware.

For this reason research began into a novel debugger concept to allow malware to be precisely controlled and instrumented whilst avoiding use of Microsoft debugging interfaces, in order to be as stealthy as possible. This debugger was integrated into the proof-of-concept Detours-based command-line tool, combining with API hooks and resulting in very powerful capabilities.

When initial work showed that it would be possible to replace Microsoft Detours with Cuckoo-modified's API hooking engine, the idea for CAPE Sandbox was born. With the addition of the debugger, automated unpacking, YARA-based classification and integrated config extraction, in September 2016 at 44con, CAPE Sandbox was publicly released for the first time: CAPE version 1.

In the summer of 2018 the project was fortunate to see the beginning of huge contributions from Andriy 'doomedraven' Brukhovetskyy, a long-time Cuckoo contributor. In 2019 he began the mammoth task of porting CAPE to Python 3 and in October of that year CAPEv2 was released.

CAPE has been continuously developed and improved to keep pace with advancements in both malware and operating system capabilities. In 2021, the ability to program CAPE's debugger during detonation via dynamic YARA scans was added, allowing for dynamic bypasses to be created for anti-sandbox techniques. Windows 10 became the default operating system, and other significant additions include interactive desktop, AMSI (Anti-Malware Scan Interface) payload capture, 'syscall hooking' based on Microsoft Nirvana and debugger-based direct/indirect syscall countermeasures.

Classification


Malware can be classified in CAPE via three mechanisms (the first of which is illustrated in the sketch after this list):

  • YARA scans of unpacked payloads
  • Suricata scans of network captures
  • Behavioral signatures scanning API hook output
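
As a small, hedged illustration of the first mechanism, the snippet below scans a dumped payload with the yara-python package; the rule, marker string and file path are invented for the example and are not CAPE's own signatures or storage layout.

import yara

# Compile a toy rule; real CAPE classification uses the community YARA rule sets.
rules = yara.compile(source='''
rule ExampleFamily
{
    strings:
        $marker = "EXAMPLE_C2_MARKER"
    condition:
        $marker
}
''')

# Hypothetical path to a payload dumped by CAPE during an analysis.
matches = rules.match("storage/analyses/1/CAPE/payload.bin")
print([m.rule for m in matches])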

Config Extraction


Parsing can be done using CAPE's own framework; alternatively, the following frameworks are supported: RATDecoders, DC3-MWCP or MalDuck.

Special note about config parsing frameworks:

  • Due to the nature of malware, which changes constantly as new versions are released, parsers may break at any time!
  • We suggest using CAPE's framework, which is plain Python with an entry point def extract_config(data): that is called by cape_utils.py, with no further complications (a minimal sketch follows this list).
    • As a bonus, you can reuse your extractors in other projects.
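
A minimal sketch of such an extractor is shown below. The only contract taken from the text above is the extract_config(data) entry point that cape_utils.py calls; the family-agnostic regular expression and the returned keys are purely illustrative and would be replaced by family-specific parsing.

import re

def extract_config(data: bytes):
    """Return a dict of extracted configuration, or None if nothing was found."""
    config = {}
    # Illustrative only: pull an embedded C2 URL out of the unpacked payload bytes.
    match = re.search(rb"https?://[\w.\-/]+", data)
    if match:
        config["C2"] = [match.group().decode(errors="replace")]
    return config or None

Because the module is plain Python with no CAPE-specific imports, the same file can be dropped into other tooling unchanged, which is the reuse benefit mentioned above.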

Automated Unpacking


CAPE takes advantage of many malware techniques or behaviours to allow for unpacked payload capture:

  • Process injection
    • Shellcode injection
    • DLL injection
    • Process Hollowing
    • Process Doppelganging
  • Extraction or decompression of executable modules or shellcode in memory

These behaviours will result in the capture of payloads being injected, extracted, or decompressed for further analysis. In addition CAPE automatically creates a process dump for each process, or, in the case of a DLL, the DLL's module image in memory. This is useful for samples packed with simple packers, where often the module image dump is fully unpacked.

In addition to CAPE's default 'passive' unpacking mechanisms, it is possible to enable 'active' unpacking which uses breakpoints to detect writing to newly allocated or protected memory regions, in order to capture unpacked payloads as early as possible prior to execution. This is enabled via web submission tickbox or by specifying option unpacker=2 and is left off by default as it may impact detonation quality.

CAPE can be programmed via YARA signature to unpack specific packers. For example, UPX-type packers are very common and, although in CAPE these result in unpacked payloads being passively captured, the default capture is made after the unpacked payload has begun executing. Therefore by detecting UPX-derived packers dynamically via custom YARA signature and setting a breakpoint on the final packer instruction, it is possible to capture the payload at its original entry point (OEP) before it has begun executing.


The dump-on-api option allows a module to be dumped when it calls a specific API function that can be specified in the web interface (e.g. dump-on-api=DnsQuery_A).

The debugger has allowed CAPE to continue to evolve beyond its original capabilities, which now include dynamic anti-evasion bypasses. Since modern malware commonly tries to evade analysis within sandboxes, for example by using timing traps for virtualisation or API hook detection, CAPE allows dynamic countermeasures to be developed that combine debugger actions with YARA signatures to detect evasive malware as it detonates, and perform control-flow manipulation to force the sample to detonate fully or skip evasive actions.


Quick access to the debugger is made possible with the submission options bp0 through bp3, which accept RVA or VA values to set breakpoints, whereupon a short instruction trace will be output, governed by the count and depth options (e.g. bp0=0x1234,depth=1,count=100).

To set a breakpoint at the module entry point, ep is used instead of an address (e.g. bp0=ep). Alternatively break-on-return allows for a breakpoint on the return address of a hooked API (e.g. break-on-return=NtGetContextThread). An optional base-on-api parameter allows the image base for RVA breakpoints to be set by API call (e.g. base-on-api=NtReadFile,bp0=0x2345).


Options action0 - action3 allow actions to be performed when breakpoints are hit, such as dumping memory regions (e.g. action0=dumpebx) or changing the execution control flow (e.g. action1=skip). CAPE's documentation contains further examples of such actions.
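
As an illustration, the same option strings can be passed at submission time through the REST API referenced in the issues further down this page (api/tasks/create/file/). The host, port and the name of the options form field below are assumptions about a default local deployment, so treat this as a sketch and adapt it to your instance.

import requests

# Hypothetical local instance; cape-web is assumed to listen on port 8000 here.
url = "http://127.0.0.1:8000/api/tasks/create/file/"

with open("sample.exe", "rb") as sample:
    response = requests.post(
        url,
        files={"file": ("sample.exe", sample)},
        # Same option syntax as the web UI: breakpoint at the entry point,
        # dump the region pointed to by EBX when it is hit; the "options"
        # field name is an assumption, not confirmed by the README.
        data={"options": "bp0=ep,action0=dumpebx,depth=1,count=100"},
    )

print(response.status_code, response.text)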

The code for CAPE's monitor is maintained in a separate repository.

Updates summary: see the changelog.

There is a community repository of signatures containing several hundred signatures developed by the CAPE community. All new community features should be pushed to that repository; they can later be moved to core if the developers are able and willing to maintain them.

Please contribute to this project by helping create new signatures, parsers, or bypasses for further malware families. There are many in the works currently, so watch this space.

A huge thank you to @D00m3dR4v3n for single-handedly porting CAPE to Python 3.

  • Python3
    • agent.py is tested with Python 3.7.2 and 3.8, x86 builds. You should use an x86 Python version inside the VM!
    • The host is tested with Python 3.7, 3.8 and 3.10, but newer versions should work too.

Installation recommendations and scripts for optimal performance

  • Only the rooter should be executed as root; run everything else as the cape user. Running as root will mess up permissions.
  1. Become familiar with the documentation and read ALL config files inside the conf folder!
  2. For best compatibility we strongly suggest installing on Ubuntu 22.04 LTS and using Windows 10 21H2 as the target.
  3. kvm-qemu.sh and cape2.sh SHOULD BE executed from a tmux session to prevent problems if the SSH connection breaks.
  4. KVM is recommended as the hypervisor; install it with kvm-qemu.sh:
    • Replace <username> with your real username.
    • You need to replace all <WOOT> placeholders inside the script!
    • Read it! You must understand what it does! Its configuration is in the header of the script.
    • sudo ./kvm-qemu.sh all <username> | tee kvm-qemu.log
  5. To install CAPE itself, run cape2.sh with all optimizations:
    • Read it and understand what it does! It is not a silver bullet for all your problems! Its configuration is in the header of the script.
    • sudo ./cape2.sh base | tee cape.log
  6. After installing everything, save both installation logs as gold!
  7. Configure CAPE by modifying the config files inside the conf folder.
  8. Restart all CAPE services to pick up the config changes and run CAPE properly!
    • CAPE services:
      • cape.service
      • cape-processor.service
      • cape-web.service
      • cape-rooter.service
      • To restart any service use systemctl restart <service_name>
      • To see a service log use journalctl -u <service_name>
    • To debug any problem, stop the relevant service and run the command that runs it by hand to see more logs. Check -h for the help menu. Running the service in debug mode (-d) can help as well.
  9. Reboot and enjoy!
  • All scripts provide help via -h, but please check the scripts to understand what they are doing.

How to create VMs with virt-manager: see the docs for configuration details.

Virtual machine core dependency

How to update

  • CAPE: git pull
  • community: python3 utils/community.py -waf (check -h first to make sure you understand what it does)

How to upgrade when you have a lot of small custom modifications that can't be made public

With rebase

git add --all
git commit -m '[STASH]'
git pull --rebase origin master
# fix conflict (rebase) if needed
git reset HEAD~1

With merge

# make sure kevoreilly repo has been added as a remote (only needs to be done once)
git remote add kevoreilly https://github.com/kevoreilly/CAPEv2.git
# make sure all your changes are committed on the branch which you will be merging
git commit -a -m '<your commit message goes here>'
# fetch changes from kevoreilly repo
git fetch kevoreilly
# merge kevoreilly master branch into your current branch
git merge kevoreilly/master
# fix merge conflicts if needed
# push to your repo if desired
git push

How to cite this work

If you use CAPEv2 in your work, please cite it as specified in the "Cite this repository" GitHub menu.

Special note about third-party dependencies:

  • They are becoming a headache, especially those that use pefile, as each one pins the version it wants.
    • Our suggestion is to clone/fork them and remove the pefile dependency, since you already have it installed. Voilà, no more pain.

Docs

capev2's People

Contributors

actions-user, bartblaze, cccs-kevin, cccs-mog, claudiowayne, dependabot[bot], ditekshen, doomedraven, enzok, federicofantini, firmianay, kevoreilly, lint-action, maxzhenzhera, n1nesun, naxonez, nbargnesi, para0x0dise, qux-bbb, r0ny123, raw-data, razvioverflow, rkoumis, seanthegeek, tbeadle, themythologist, threathive, vvelox, winson0123, wmetcalf


capev2's Issues

upload max size

Using a fresh CAPE v2 installation.

During sample analysis I got several warning messages in the cuckoo.log file related to the max upload size.

OS version: Ubuntu 18.04, Windows 7 x86, Windows 10 x64

Failure Logs

2020-04-23 00:06:00,215 [root] DEBUG: Task #16 uploaded file length: 1310720
2020-04-23 00:06:03,784 [lib.cuckoo.core.guest] DEBUG: Windows7_32bit_infected: analysis #16 still processing
2020-04-23 00:06:05,219 [root] DEBUG: Task #16: File upload for b'files/e4dbda3f1c564a0c72519ff24abe4506b8f890a33ae918bb1dc3e9c4580f3d3c'
2020-04-23 00:06:05,444 [root] WARNING: Uploaded file length larger than upload_max_size, stopping upload.
2020-04-23 00:06:06,105 [root] DEBUG: Task #16 uploaded file length: 100485775
2020-04-23 00:06:08,850 [lib.cuckoo.core.guest] DEBUG: Windows7_32bit_infected: analysis #16 still processing
2020-04-23 00:06:10,754 [root] DEBUG: Task #16: File upload for b'files/342927a1cf9c912553b4042fda8d0d118f5ad9bff44666cc82035c089214c437'
2020-04-23 00:06:11,009 [root] WARNING: Uploaded file length larger than upload_max_size, stopping upload.

Unable to inject into 32-bit process

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

DLL injection must occur when the sample is launched

Current Behavior

DLL injection fails

Failure Information (for bugs)

2020-03-12 12:34:57,364 [root] INFO: Analyzer: Package modules.packages.exe does not specify a DLL option
2020-03-12 12:34:57,364 [root] INFO: Analyzer: Package modules.packages.exe does not specify a DLL_64 option
2020-03-12 12:34:57,800 [lib.api.process] INFO: Successfully executed process from path "C:\Users\IEUser\AppData\Local\Temp\administrator.exe" with arguments "" with pid 3172
2020-03-12 12:34:59,394 [lib.api.process] INFO: Monitor config for process 3172: C:\tmp7xdvwb45\dll\3172.ini
2020-03-12 12:34:59,612 [lib.api.process] INFO: Option 'procmemdump' with value '1' sent to monitor
2020-03-12 12:34:59,612 [lib.api.process] INFO: Option 'procdump' with value '1' sent to monitor
2020-03-12 12:34:59,612 [lib.api.process] INFO: 32-bit DLL to inject is C:\tmp7xdvwb45\dll\PWxOFCoD.dll, loader C:\tmp7xdvwb45\bin\dCmPkow.exe
2020-03-12 12:35:01,059 [root] DEBUG: b'ReadConfig: Successfully loaded pipe name \\\\.\\PIPE\\ZHHpox.'
2020-03-12 12:35:01,059 [root] DEBUG: b'Loader: Injecting process 3172 (thread 0) with C:\\tmp7xdvwb45\\dll\\PWxOFCoD.dll.'
2020-03-12 12:35:01,081 [root] DEBUG: b'Error 299 (0x12b) - GetProcessInitialThreadId: Failed to read from process: Only part of a ReadProcessMemory or WriteProcessMemory request was completed.'
2020-03-12 12:35:01,081 [root] DEBUG: b'InjectDll: No thread ID supplied, GetProcessInitialThreadId failed (SessionId=1).'
2020-03-12 12:35:01,081 [root] DEBUG: b'Failed to inject DLL C:\\tmp7xdvwb45\\dll\\PWxOFCoD.dll.'
2020-03-12 12:35:04,097 [lib.api.process] ERROR: Unable to inject into 32-bit process with pid 3172, error: 4294967281
2020-03-12 12:35:04,097 [lib.api.process] INFO: Successfully resumed process with pid 3172

Context

Apparently the DLLs were updated in the repo a few days ago. This morning I synced my local repo and now injection of the DLL fails... Not sure it's related, but I did not have this behaviour before.

Relying on a very outdated version of PyMISP

Someone mentioned an issue related to PyMISP usage in CAPEv2, and it seems that you're relying on an extremely outdated version of the API: most of the methods you're using have been emitting deprecation warnings for over a year now, and the PyMISP packages released after 2020-01-01 do not support them anymore at all.

I will try to provide a PR (if you're interested), but as I'm not using CAPEv2, I cannot promise the accuracy of the code.

Problem submitting text samples

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Submit text files (HTML code, Powershell, etc)

Current Behavior

Submit does not work

Failure Information (for bugs)

From web.log generated by supervisord, I see this:

==> web.err.log <==
Python 3.6.9 (default, Nov  7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
[10/Apr/2020 10:58:15] "GET /submit/ HTTP/1.1" 200 47495

==> web.out.log <==
>>>
==> web.err.log <==
Python 3.6.9 (default, Nov  7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)

KeyError: (<weakref at 0x7fbf4a8f5d68; to 'function' at 0x7fbf43b9dd90 (go)>,)

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [yes] I am running the latest version
  • [yes] I checked the documentation and found no answer
  • [yes] I checked to make sure that this issue has not already been filed
  • [yes] I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Cuckoo to proceed with its execution

Current Behavior

Exited with the error below:

Failure Information (for bugs)

Exception ignored in: <bound method Database.__del__ of <lib.cuckoo.core.database.Database object at 0x7fbefcf22828>>
Traceback (most recent call last):
File "/opt/CAPEv2/lib/cuckoo/core/database.py", line 447, in del
File "/{redacted}/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2039, in dispose
File "/{redacted}/.local/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 251, in recreate
File "", line 2, in init
File "/{redacted}/.local/lib/python3.6/site-packages/sqlalchemy/util/deprecations.py", line 128, in warned
File "/{redacted}/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 221, in init
File "/{redacted}/.local/lib/python3.6/site-packages/sqlalchemy/event/base.py", line 150, in _update
File "/{redacted}/.local/lib/python3.6/site-packages/sqlalchemy/event/attr.py", line 392, in _update
File "/{redacted}/.local/lib/python3.6/site-packages/sqlalchemy/event/registry.py", line 115, in _stored_in_collection_multi
KeyError: (<weakref at 0x7fbf4a8f5d68; to 'function' at 0x7fbf43b9dd90 (go)>,)

Steps to Reproduce

Please provide detailed steps for reproducing the issue.

First installation of cuckoo. I ran cape2.sh and it seemed to install well, but when I execute it I get the above error.

Context

Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.

Git commit: na
OS version: Ubuntu 18.04.4 LTS

Failure Logs

Please include any relevant log snippets or files here.

CuckooDisableModule import error

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Current Behavior

I'm looking for information about the following error:
(reported in analysis logs)

2020-03-24 07:24:35,450 [root] WARNING: Unable to import the auxiliary module "modules.auxiliary.procmon": cannot import name 'CuckooDisableModule' from 'lib.common.exceptions' (C:\tmpsq5gh5u_\lib\common\exceptions.py)

In lib/cuckoo/common/exceptions.py, I have:

class CuckooDisableModule(CuckooOperationalError):
    """Exception for disabling a module dynamically."""
    pass

Can't start analysis: Invalid argument

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Failure Information (for bugs)

LOG
2020-02-25 06:59:50,691 [root] DEBUG: Starting analyzer from: C:\tmpxxfk82pv
2020-02-25 06:59:50,835 [root] DEBUG: Storing results at: C:\NzOOhPQkuv
2020-02-25 06:59:50,938 [root] DEBUG: Pipe server name: \\.\PIPE\QTeLLjd
2020-02-25 06:59:50,938 [root] DEBUG: Python path: C:\Users\IEUser\AppData\Local\Programs\Python\Python38-32
2020-02-25 06:59:50,938 [root] DEBUG: No analysis package specified, trying to detect it automagically.
2020-02-25 06:59:52,408 [root] ERROR: Traceback (most recent call last):
File "C:/tmpxxfk82pv/analyzer.py", line 1572, in
success = analyzer.run()
File "C:/tmpxxfk82pv/analyzer.py", line 350, in run
package = choose_package(self.config.file_type, self.config.file_name, self.config.exports, self.target)
File "C:\tmpxxfk82pv\lib\core\packages.py", line 20, in choose_package
file_content = open(target, "rb").read()
OSError: [Errno 22] Invalid argument: 'C:\Users\IEUser\AppData\Local\Temp\xxx.ps1'
Traceback (most recent call last):
File "C:/tmpxxfk82pv/analyzer.py", line 1572, in
success = analyzer.run()
File "C:/tmpxxfk82pv/analyzer.py", line 350, in run
package = choose_package(self.config.file_type, self.config.file_name, self.config.exports, self.target)
File "C:\tmpxxfk82pv\lib\core\packages.py", line 20, in choose_package
file_content = open(target, "rb").read()
OSError: [Errno 22] Invalid argument: 'C:\Users\IEUser\AppData\Local\Temp\xxx.ps1'
2020-02-25 06:59:52,476 [root] WARNING: Folder at path "C:\NzOOhPQkuv\debugger" does not exist, skip.
2020-02-25 06:59:52,476 [root] INFO: Analysis completed.

I don't know why they're escaped '', is it normal?

HTTPConnectionPool Read Timeout

I use Cuckoo sandbox as well and was working perfectly on that end. I ported the VMs I use, Win7 Sandboxes to use for CAPE and changed them to use Python3 & the new agent.py located in the git repo of CAPE.

Unfortunately I keep getting this error some time after the analysis starts: HTTPConnectionPool(host='192.168.56.102', port=8000): Read timed out. (read timeout=5). The Python error log says it's a socket timeout. Any reason why this would be happening? My VBox virtual machine can contact the host-only address 192.168.56.1 but seems unable to send the data back through agent.py.

At the end of the analysis this error pops up as well: Unable to passthrough root command as we're unable to connect to the rooter unix socket: [Errno 13] Permission denied, although my rooter is running using sudo (sudo python3 rooter.py -g [username]). I am using INetSim, by the way. Following that, the web page just shows my analysis task as processing and it never goes to reported.

Change how LogPipe is passed so all Loader log messages are available

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

All Loader log messages should be shown in CAPE Analysis log.

Current Behavior

Calls to DoOutputDebugString and DoOutputErrorString in Loader only function after ReadConfig has been called, since that's where the random-named LogPipe is defined. This only happens for "injection" and "load" executions.

Steps to Reproduce

Review Analysis log and notice many DoOutputDebugString and DoOutputErrorString calls are absent, such as "DoOutputDebugString("CAPE loader.\n");"

Maybe the LogPipe can be passed as an argument to Loader invocation???

Context

Git commit: 9046ad63a3564ecb3225f4062767ca51dc12b347 (HEAD -> capemon, origin/capemon, origin/HEAD)
OS version: Ubuntu 18.04.4 LTS

Running a sample twice produces an invalid JSON roughly every second time

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Running the same file twice should create the same report (of course not PIDs and things like that) and it should contain valid JSON

Current Behavior

Roughly every second time you submit a file it creates a JSON report which is not valid because there are "}" or other string fragments at the end of the JSON. I'll attach two reports to this case.

95 is invalid (See the ' ion" ' at one of the last lines)
96 is a valid report

They were both created with the same sample I uploaded (I'll attach this one too; it's a benign PowerShell script which gets the public IP).

Failure Information (for bugs)

Tough one for me to troubleshoot. Would be interesting to know if this happens for others, too.

Steps to Reproduce

  • Submit test1.ps1 once
  • Submit test1.ps1 a second time

One of the reports shouldn't be parseable due to text fragments on the end of the file

Context

Installed with cape2.sh

Git commit: cec3fe0
OS version: linux capev2 5.3.0-42-generic #34-18.04-1-Ubuntu

test1.ps1.txt
95_report.txt
96_report.txt

Failure Logs

Please include any relevant log snippets or files here.

Web not working

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x ] I am running the latest version
  • [ x] I checked the documentation and found no answer
  • [x ] I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Error displaying the web interface

Current Behavior

What is the current behavior?

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Please provide detailed steps for reproducing the issue.

  1. step 1
  2. step 2
  3. you get it...

Context

Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.

Git commit: Type $ git log | head -n1 to find out
OS version: Ubuntu 18.04

Failure Logs

Please include any relevant log snippets or files here.

[28/Jan/2020 14:33:04] "GET /static/css/style.css HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/css/lightbox.css HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/css/bootstrap.min.css HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/css/bootstrap-datetimepicker.min.css HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/js/jquery.js HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/graphic/cape.png HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/js/bootstrap.min.js HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/js/bootstrap-fileupload.js HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/js/bootstrap-transition.js HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/js/bootstrap-collapse.js HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/js/moment.min.js HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/js/lightbox.js HTTP/1.1" 404 77
[28/Jan/2020 14:33:04] "GET /static/js/bootstrap-datetimepicker.min.js HTTP/1.1" 404 77
[28/Jan/2020 14:33:05] "GET /static//img/cape.png HTTP/1.1" 404 77

Results not displayed

A sample with some anti-debug functionality targeting older debuggers such as SoftICE opens handles to

  • \??\NTICE
  • \??\SICE
  • \??\SIWDEBUG

Sig:
https://github.com/kevoreilly/community/blob/0c517f27af3ceefa7c1fbb8d4f79e80aba6fdc9e/modules/signatures/antidbg_devices.py

Issue:
Hits are not displayed. For example:

Checks for the presence of known devices from debuggers and forensic tools
If you click on the results, it displays a blank page; it should display the found results.

Example:
https://capesandbox.com/analysis/15081/

Proxmox Issue

Hi,
I'm moving forward and am trying to use proxmox.
On the first run I was missing the proxmoxer pip3 package; installing it went fine.

Before and after I'm getting the following error:
2020-03-11 14:50:31,169 [root] DEBUG: Importing modules...
2020-03-11 14:50:31,184 [volatility.framework.interfaces.layers] DEBUG: Imported python-magic, autodetecting compressed files based on content
2020-03-11 14:50:31,739 [lib.cuckoo.core.plugins] WARNING: Unable to import plugin "modules.machinery.proxmox": cannot import name 'config'
WARNING lib.cuckoo.core.plugins: Unable to import plugin "modules.machinery.proxmox": cannot import name 'config'
2020-03-11 14:50:31,739 [root] DEBUG: Imported "auxiliary" modules:
2020-03-11 14:50:31,739 [root] DEBUG: `-- Sniffer

Did I miss something or is it a lib.cuckoo.core.plugins issue?
Thanks a lot, and beer.io is down :(

Proc Memory missing

Hi,
I activated procmemory in the configuration file, but it is missing from the report. This problem appeared in the last few days; I had always had the process memory data before.
It would be good to understand what changed and how to get the data back. I'm evaluating CAPE sandbox and this information is needed by my script.

Thanks in advance

Where is sudo kvm-qemu.sh?

I'm starting to build out CAPE v2 and the README mentions a shell script that does not seem to exist.

Where is this kvm-qemu.sh script?

Procmon usage/setup?

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Use of procmon

Current Behavior

Seen in logs:

2020-04-16 05:14:16,638 [root] WARNING: Cannot execute auxiliary module Procmon: In order to use the Process Monitor functionality it is required to have Procmon setup with Cuckoo. Please run the Cuckoo Community script which will automatically fetch all related files to get you up-and-running.

Context

I see this in the source code:

        bin_path = os.path.join(ROOT, "bin")

        self.procmon_exe = os.path.join(bin_path, "procmon.exe")
        self.procmon_pmc = os.path.join(bin_path, "procmon.pmc")
        self.procmon_pml = os.path.join(bin_path, "procmon.pml")
        self.procmon_xml = os.path.join(bin_path, "procmon.xml")

        if not os.path.exists(self.procmon_exe) or \
                not os.path.exists(self.procmon_pmc):
            raise CuckooPackageError(
                "In order to use the Process Monitor functionality it is "
                "required to have Procmon setup with Cuckoo. Please run the "
                "Cuckoo Community script which will automatically fetch all "
                "related files to get you up-and-running."
            )

I executed the community scripts and everything is up to date. Where should procmon be installed?
Thanks!

HTML and PDF reports not found

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [ X ] I am running the latest version
  • [ X ] I checked the documentation and found no answer
  • [ X ] I checked to make sure that this issue has not already been filed
  • [ X ] I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Enabling the HTML and PDF reporting modules in reporting.conf should generate HTML and PDF reports, with a download link included in the analysis.

Current Behavior

The links are generated for each analysis, but the files are "not found". Only the link to the JSON report is valid.

Failure Information (for bugs)

Logs report that the HTML and PDF modules are loaded. Cannot identify any additional error information.

Steps to Reproduce

  • Deploy CAPEv2

  • Enable HTML and PDF modules

  • Try to access reports from analysed sample

Cleaning out Old Tasks

With the removal of the --clean option from CAPE v1, how do we go about cleaning out all previous tasks without using the GUI? Also, I have a task that was interrupted because the submitted file was not found. It's listed in the web GUI as failed and I want to delete it, but I cannot find the task data inside MongoDB or PostgreSQL.

It would also help if this option were to be reintroduced into CAPE v2 for mass cleanup.

Cannot use Sysmon module

  • [x ] I am running the latest version
  • [x ] I checked the documentation and found no answer
  • [x ] I checked to make sure that this issue has not already been filed
  • [x ] I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Sysmon is activated on the configuration

Current Behavior

The Sysmon module is reported as "not implemented"?

Failure Information (for bugs)

From analysis.log:

2020-03-06 13:06:52,757 [root] DEBUG: Trying to import auxiliary module "modules.auxiliary.sysmon"...
2020-03-06 13:06:52,757 [root] DEBUG: Imported auxiliary module "modules.auxiliary.sysmon".

And later in the log:

2020-03-06 13:06:57,882 [root] DEBUG: Trying to initialize auxiliary module "Sysmon"...
2020-03-06 13:06:57,882 [root] WARNING: Auxiliary module Sysmon was not implemented

No other error or warning message...

Android/APK-analysis

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I will be running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

As the title says, I'm wondering if any of Muhzii's fixes to Android analysis have been implemented in this fork, and/or whether Android/APK analysis will work / has been tested.

Current Behavior

No current behavior, not tested on my end(yet).

TrID error

Hi,
I have a problem with TrID in processing:

ERROR lib.cuckoo.core.plugins: Failed to run the processing module "TrID":
Traceback (most recent call last):
File "/opt/CAPEv2/utils/../lib/cuckoo/core/plugins.py", line 228, in process
data = current.run()
File "/opt/CAPEv2/utils/../modules/processing/trid.py", line 31, in run
output = subprocess.check_output([ trid_binary, "-d:%s" % definitions, self.file_path], stderr=subprocess.STDOUT)
File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/usr/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/opt/CAPEv2/data/trid/trid', '-d:/opt/CAPEv2/data/trid/triddefs.trd', '/opt/CAPEv2/storage/binaries/7f59bf0b9a1a27b3ab586a0c8a8cd523fcc1d530a0231de8df1e09e8e3387016']' died with <Signals.SIGABRT: 6>.
2020-03-20 09:24:10,909 [lib.cuckoo.core.plugins] ERROR: Failed to run the processing module "TrID":
Traceback (most recent call last):
File "/opt/CAPEv2/utils/../lib/cuckoo/core/plugins.py", line 228, in process
data = current.run()
File "/opt/CAPEv2/utils/../modules/processing/trid.py", line 31, in run
output = subprocess.check_output([ trid_binary, "-d:%s" % definitions, self.file_path], stderr=subprocess.STDOUT)
File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/usr/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/opt/CAPEv2/data/trid/trid', '-d:/opt/CAPEv2/data/trid/triddefs.trd', '/opt/CAPEv2/storage/binaries/7f59bf0b9a1a27b3ab586a0c8a8cd523fcc1d530a0231de8df1e09e8e3387016']' died with <Signals.SIGABRT: 6>.

More URL analysis issue?

I just got this error while analyzing a URL:

2020-04-21 05:05:21,726 [lib.api.process] INFO: Successfully executed process from path "C:\Program Files (x86)\Internet Explorer\iexplore.exe" with arguments ""http://xxxxxxx"" with pid 4128
2020-04-21 05:05:22,414 [lib.api.process] INFO: Monitor config for process 4128: C:\tmp0pm2gis2\dll\4128.ini
2020-04-21 05:05:30,564 [root] ERROR: Traceback (most recent call last):
  File "C:/tmp0pm2gis2/analyzer.py", line 517, in run
    pids = pack.start(self.target)
  File "C:\tmp0pm2gis2\modules\packages\ie.py", line 17, in start
    return self.execute(iexplore, "\"%s\"" % url, url)
  File "C:\tmp0pm2gis2\lib\common\abstracts.py", line 134, in execute
    p.inject(INJECT_QUEUEUSERAPC, interest)
  File "C:\tmp0pm2gis2\lib\api\process.py", line 626, in inject
    self.write_monitor_config(interest, nosleepskip)
  File "C:\tmp0pm2gis2\lib\api\process.py", line 563, in write_monitor_config
    config.write("referrer={0}\n".format(get_referrer_url(interest)))
  File "C:\tmp0pm2gis2\lib\api\process.py", line 62, in get_referrer_url
    vedstr = "0CCEQfj" + base64.urlsafe_b64encode(random_string(random.randint(5, 8) * 3))
  File "C:\Users\IEUser\AppData\Local\Programs\Python\Python38-32\lib\base64.py", line 118, in urlsafe_b64encode
    return b64encode(s).translate(_urlsafe_encode_translation)
  File "C:\Users\IEUser\AppData\Local\Programs\Python\Python38-32\lib\base64.py", line 58, in b64encode
    encoded = binascii.b2a_base64(s, newline=False)
TypeError: a bytes-like object is required, not 'str'

cuckoo3.sh and kvm-qemu.sh do not install properly

Expected Behavior

I ran the two scripts provided by @doomedraven and I expected to have a fully working CAPEv2 running.

Current Behavior

Several things are not working after running the scripts. I had to manually purge and reinstall things to get them to work. This has everything to do with CAPE because right now the only way to install it is by using the scripts written by @doomedraven.

Failure Information (for bugs)

  1. MongoDB would not start because the install script creates 3 systemd service files under /etc/systemd/system that do not need to be there. MongoDB installs its own systemd service file under /lib/systemd/system.
  2. Tor does not start after install/reboot using the install scripts. I had to manually purge the tor package and reinstall.
  3. Libvirtd is not enabled or started. KVM qemu is not detected by virt-manager.

Steps to Reproduce

  1. Install a fresh installation of Ubuntu 18.04, per the README.md
  2. Run the kvm-qemu.sh script, wait for it to finish, then reboot.
  3. Run the cuckoo3.sh script, wait for it to finish, then reboot.
  4. Monitor the startup process and see that above mentioned services do not start.

Context

Git commit: ca9d510
OS version: Ubuntu 18.04

Notes

A previous issue was closed prematurely without considering that the install scripts are flawed and do not leave you with a working CAPEv2. @kevoreilly please do not let this one get closed without addressing the issues reported.

Some services do not start after install.

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

After running the kvm-qemu.sh and cuckoo3.sh scripts, I expected to have a fully working CAPEv2 installation.

Current Behavior

Some required services do not start (MongoDB and Tor).

Failure Information (for bugs)

MONGO

● mongodb.service - High-performance, schema-free document-oriented database
   Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2020-02-18 19:09:29 UTC; 5min ago
  Process: 8725 ExecStart=/usr/bin/numactl --interleave=all /usr/bin/mongod --quiet --shardsvr --bind_ip_all --port 27017 (code=exited, status=203/EXEC)
  Process: 8721 ExecStartPre=/bin/chown mongodb:mongodb /data -R (code=exited, status=0/SUCCESS)
  Process: 8715 ExecStartPre=/bin/mkdir -p /data/{config,}db (code=exited, status=0/SUCCESS)
 Main PID: 8725 (code=exited, status=203/EXEC)

Feb 18 19:09:28 cuckoo systemd[1]: Started High-performance, schema-free document-oriented database.
Feb 18 19:09:28 cuckoo systemd[1]: mongodb.service: Main process exited, code=exited, status=203/EXEC
Feb 18 19:09:28 cuckoo systemd[1]: mongodb.service: Failed with result 'exit-code'.
Feb 18 19:09:29 cuckoo systemd[1]: mongodb.service: Service hold-off time over, scheduling restart.
Feb 18 19:09:29 cuckoo systemd[1]: mongodb.service: Scheduled restart job, restart counter is at 5.
Feb 18 19:09:29 cuckoo systemd[1]: Stopped High-performance, schema-free document-oriented database.
Feb 18 19:09:29 cuckoo systemd[1]: mongodb.service: Start request repeated too quickly.
Feb 18 19:09:29 cuckoo systemd[1]: mongodb.service: Failed with result 'exit-code'.
Feb 18 19:09:29 cuckoo systemd[1]: Failed to start High-performance, schema-free document-oriented database.

TOR

● tor@default.service - Anonymizing overlay network for TCP
   Loaded: loaded (/lib/systemd/system/tor@default.service; enabled-runtime; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2020-02-18 19:01:11 UTC; 14min ago
 Main PID: 2711 (code=exited, status=1/FAILURE)

Feb 18 19:01:11 cuckoo tor[2711]: Feb 18 19:01:11.737 [notice] Closing partially-constructed Socks listener on 127.0.0.1:9050
Feb 18 19:01:11 cuckoo tor[2711]: Feb 18 19:01:11.737 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
Feb 18 19:01:11 cuckoo systemd[1]: Failed to start Anonymizing overlay network for TCP.
Feb 18 19:01:11 cuckoo tor[2711]: Feb 18 19:01:11.737 [err] Reading config failed--see warnings above.
Feb 18 19:01:11 cuckoo systemd[1]: tor@default.service: Service hold-off time over, scheduling restart.
Feb 18 19:01:11 cuckoo systemd[1]: tor@default.service: Scheduled restart job, restart counter is at 5.
Feb 18 19:01:11 cuckoo systemd[1]: Stopped Anonymizing overlay network for TCP.
Feb 18 19:01:11 cuckoo systemd[1]: tor@default.service: Start request repeated too quickly.
Feb 18 19:01:11 cuckoo systemd[1]: tor@default.service: Failed with result 'exit-code'.
Feb 18 19:01:11 cuckoo systemd[1]: Failed to start Anonymizing overlay network for TCP.

Steps to Reproduce

Please provide detailed steps for reproducing the issue.

  1. Install a fresh installation of Ubuntu 18.04
  2. Modify and run kvm-qemu.sh and reboot
  3. Modify and run cuckoo3.sh
  4. Reboot and monitor the startup process

Context

Git commit: ca9d510
OS version: Ubuntu 18.04

Failure Logs

Not sure which logs to provide.

Samples submitted via API don't have internet access

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • [x] I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

If I submit a sample over the Web UI, the uploaded sample has internet access as expected. If I use the API

http://cape.ip/api/tasks/create/file/

it seems that the sample has no internet access. For testing purposes I created a small PowerShell script

Invoke-RestMethod http://ipinfo.io/json | Select -exp ip

which I keep uploading, with the same result. I have the feeling that I could add the network options through "options", but I'm unable to find documentation. The options available on the Submit page don't list network-specific options.

Current Behavior

No internet access for samples submitted via API

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Please provide detailed steps for reproducing the issue.

  • Submit test powershell script via API to CAPE instance
  • HTTP Section and TCP Section stay empty
  • Upload sample via Web and there will be HTTP and TCP section filled

physical machinery broken

I cannot get the physical machinery setup working.

It always hit the following exception:

except socket.error as e:
    # Could not contact agent.
    log.debug("Agent unresponsive: %s (%s) (Error: %s).", machine.id, machine.ip, e)

The cuckoo log states that the agent is not responding.

Testing the agent from the host with curl http://x.x.x.x:8000 works fine.

By negating the exception handling (dirty I know... :)) cuckoo gets up and proceeds with the analysis (sample gets executed) but crashes after trying to reboot the machine.

I think that the TimeoutServer is the problem here.

Config has no attribute 'Routing'

This is open source and you are getting free support, so be friendly!

Expected Behavior

On submitting a new file, analysis fails to start: 'Config' object has no attribute 'routing'.

Current Behavior

Cuckoo process errors. Full output below.

Failure Information (for bugs)

2020-01-05 03:25:42,233 [lib.cuckoo.core.scheduler] ERROR: Task #13: Failure in AnalysisManager.run: 'Config' object has no attribute 'routing'
Traceback (most recent call last):
File "/opt/CAPEv2/lib/cuckoo/core/scheduler.py", line 460, in run
success = self.launch_analysis()
File "/opt/CAPEv2/lib/cuckoo/core/scheduler.py", line 307, in launch_analysis
self.route_network()
File "/opt/CAPEv2/lib/cuckoo/core/scheduler.py", line 534, in route_network
elif self.route == "internet" and self.cfg.routing.internet != "none":
AttributeError: 'Config' object has no attribute 'routing'

Routing.conf (first few lines):

[routing]
# Default network routing mode; "none", "internet", or "vpn_name".
# In none mode we don't do any special routing - the VM doesn't have any
# network access (this has been the default actually for quite a while).
# In internet mode by default all the VMs will be routed through the network
# interface configured below (the "dirty line").
# And in VPN mode by default the VMs will be routed through the VPN identified
# by the given name of the VPN (as per vpn.conf).
# Note that just like enabling VPN configuration setting this option to
# anything other than "none" requires one to run utils/rooter.py as root next
# to the CAPE instance (as it's required for setting up the routing).
route = internet

# Network interface that allows a VM to connect to the entire internet, the
# "dirty line" so to say. Note that, just like with the VPNs, this will allow
# malicious traffic through your network. So think twice before enabling it.
# (For example, to route all VMs through eth0 by default: "internet = eth0").
internet = ens32


Steps to Reproduce

Please provide detailed steps for reproducing the issue.

1.) Submit file for analysis
2.) Review logs

Context

Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.

Git commit: 610812d
OS version: Ubuntu 18.04.3 LTS

Failure Logs

2020-01-05 03:25:42,233 [lib.cuckoo.core.scheduler] ERROR: Task #13: Failure in AnalysisManager.run: 'Config' object has no attribute 'routing'
Traceback (most recent call last):
File "/opt/CAPEv2/lib/cuckoo/core/scheduler.py", line 460, in run
success = self.launch_analysis()
File "/opt/CAPEv2/lib/cuckoo/core/scheduler.py", line 307, in launch_analysis
self.route_network()
File "/opt/CAPEv2/lib/cuckoo/core/scheduler.py", line 534, in route_network
elif self.route == "internet" and self.cfg.routing.internet != "none":
AttributeError: 'Config' object has no attribute 'routing'

Ability to upload sample to virustotal

I noticed that CAPE cannot upload samples to VT for analysis; maybe we could add a checkbox or button to do it. If you're interested, I'd like to try implementing it.

KVM no domain with matching name found

While trying to fix the physical machinery issue I thought it would be easy to set up KVM :)

CAPEv2 gives the following error:

libvirt: QEMU Driver error : Domain not found: no domain with matching name 'win7x64_malware01'
CRITICAL: CuckooCriticalError: Error initializing machines: Cannot find machine win7x64_malware01

But the vm is running:

virsh list --all
Id Name State
1 win7x64_malware01 running

The user is in the following KVM groups:

  • kvm
  • libvirt
  • libvirt-qemu

Am I missing something ?

Thx for attention!

Improve Symantec VBN coverage

I looked at the quarantine.py file and noticed that the Symantec section could use some huge improvements. I have done extensive research on Symantec's files and improved DeXRAY in the process. When I get some time, I would like to help improve the decryption of Symantec files. Just leaving this here so I don't forget.

Possible error in utils/rooter.py

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

The init_rttable function should actually create routes, but it errors out since run() returns a bytes object, not a string, so splitting on "\n" fails.

Please describe the behavior you are expecting

If VPN is enabled, rooter.py can't start (Spawn error). If you disable VPN in router.conf, everything works well.

What is the current behavior?

2020-03-16 14:22:04,223 [cuckoo-rooter] DEBUG: b'0.0.0.0/1 via 10.8.8.1 \n10.8.8.0/24 proto kernel scope link src 10.8.8.6 \n128.0.0.0/1 via 10.8.8.1 \n'
2020-03-16 14:22:04,223 [cuckoo-rooter] ERROR: Error executing command
Traceback (most recent call last):
  File "rooter.py", line 393, in <module>
    output = handlers[command](*args, **kwargs)
  File "rooter.py", line 112, in init_rttable
    for line in stdout.split("\n"):
TypeError: a bytes-like object is required, not 'str'
Traceback (most recent call last):
  File "rooter.py", line 399, in <module>
    "exception": str(e) if e else None,
NameError: name 'e' is not defined

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

  • Enable VPN in router.conf
  • sudo supervisorctl restart all

Please provide detailed steps for reproducing the issue.

  1. step 1
  2. step 2
  3. you get it...

Context

In my case it helped to change

for line in stdout.split("\n")

to

for line in stdout.decode("ascii").split("\n")

to fix the problem.

Guest python crash: faulting module name: _ctypes.pyd

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [] I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Behavior/dynamic analysis

Current Behavior

What is the current behavior?
A popup appears in the guest: "Python has stopped working" (without details)
--> which results in no dynamic analysis on my result page

Failure Information (for bugs)

Error occurs right after the following log lines:
[lib.cuckoo.core.guest] info: guest is running cuckoo agent 0.11 (id=cuckoo-cape, ip=192.168.56.101)
[lib.cuckoo.core.guest] debug: uploading analyzer to guest 0.11 (id=cuckoo-cape, ip=192.168.56.101)
[lib.cuckoo.core.guest] info: uploading support files to guest (id=cuckoo-cape, ip=192.168.56.101, size=17113585)
[root] debug: task #8: live log analysis.log initialized

Application evtx log in windows:
Faulting application name: python.exe
faulting module name: _ctypes.pyd
Exception code: 0xc0000005

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Context

Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.

OS version: Ubuntu 18.04.4, guest: Win7 x64

I tried with Python 3.8 and 3.7 on the guest
On my host I'm running python 3.6

CAPE was downloaded and installed on March 25, 2020.

Failure Logs

Please include any relevant log snippets or files here.

File Resubmission error 'bytes-like object is required, not str'

Expected Behavior

Resubmitting a file from an analysis page would restart an analysis

Current Behavior

After selecting 'Resubmit Sample' on the analysis page and selecting 'Analyze' on the submission page, I receive a Django traceback error.

Failure Information (for bugs)

Log in web.err.log:

[04/Jan/2020 15:13:21] "GET /submit/resubmit/bc154f8a0b83d7ce9a4e18929dcde5e16730d044af583feac1c35bae7b78c0f4/ HTTP/1.0" 200 39487
Internal Server Error: /submit/resubmit/bc154f8a0b83d7ce9a4e18929dcde5e16730d044af583feac1c35bae7b78c0f4/
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/exception.py", line 34, in inner
    response = get_response(request)
  File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py", line 115, in _get_response
    response = self.process_exception_by_middleware(e, request)
  File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py", line 113, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/CAPEv2/web/submission/views.py", line 253, in index
    clock, custom, memory, enforce_timeout, referrer, tags, orig_options, task_machines, static)
  File "/opt/CAPEv2/web/../lib/cuckoo/common/web_utils.py", line 191, in download_file
    orig_options, timeout, enforce_timeout = recon(filename, orig_options, timeout, enforce_timeout)
  File "/opt/CAPEv2/web/../lib/cuckoo/common/web_utils.py", line 113, in recon
    if "name" in filename:
TypeError: a bytes-like object is required, not 'str'
[04/Jan/2020 15:13:24] "POST /submit/resubmit/bc154f8a0b83d7ce9a4e18929dcde5e16730d044af583feac1c35bae7b78c0f4/ HTTP/1.0" 500 95566
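
The traceback shows recon() receiving a bytes file name from the resubmit path where a str is expected. Below is a minimal sketch of the kind of guard that avoids the error; the function name and signature follow the traceback above, but the body is illustrative rather than the repository's actual fix.

    def recon(filename, orig_options, timeout, enforce_timeout):
        """Sketch of the type handling only; the real function in
        lib/cuckoo/common/web_utils.py does considerably more."""
        # The resubmit path passes the file name as bytes, the normal submit
        # path as str; normalise to str before any substring checks.
        if isinstance(filename, bytes):
            filename = filename.decode("utf-8", errors="replace")
        if "name" in filename:
            # ... the real option adjustments would happen here ...
            pass
        return orig_options, timeout, enforce_timeout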

Steps to Reproduce

Step 1: Submit a file for analysis.
Step 2: After analysis is complete, Select 'Resubmit Sample' on the analysis page.
Step 3: On the Submission page, select 'Analyze'
Step 4: Receive error

Context

Question     Answer
Git commit   2910849
OS version   Ubuntu 18.04.3 LTS

Virtual Address CAPE

Hi,
In my local CAPE I can't see the Virtual Address in the CAPE report section. In the online version it works fine, but I'm not getting it locally. What am I doing wrong?

I would also like to be able to contact you with some questions about CAPE behaviours, if possible.

Thanks

Analysis finishes but never completes processing/reporting

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Analysis submissions complete processing and a report is produced

Current Behavior

The analysis procedure completes, but processing appears never to finish, and thus no report is created.

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Please provide detailed steps for reproducing the issue.

  1. Submit analysis
  2. Wait....
  3. Wait some more...
  4. Kill cuckoo

Context

Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.

Question     Answer
OS version   Ubuntu 18.04.4 LTS

Failure Logs

Please include any relevant log snippets or files here.

/opt/CAPEv2$ python3 cuckoo.py

2020-04-22 08:23:21,091 [lib.cuckoo.core.scheduler] INFO: Using "kvm" machine manager with max_analysis_count=0, max_machines_count=10, and max_vmstartup_count=5
2020-04-22 08:23:21,245 [lib.cuckoo.common.abstracts] DEBUG: Getting status for cuckoo2
2020-04-22 08:23:21,263 [lib.cuckoo.core.scheduler] INFO: Loaded 1 machine/s
2020-04-22 08:23:21,284 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks.
2020-04-22 08:26:19,482 [lib.cuckoo.core.scheduler] DEBUG: Task #6: Processing task
2020-04-22 08:26:19,496 [lib.cuckoo.core.scheduler] INFO: Task #6: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_ptq1bzyw/putin.vbs'
2020-04-22 08:26:19,530 [lib.cuckoo.core.scheduler] INFO: Task #6: acquired machine cuckoo2 (label=cuckoo2, platform=windows)
2020-04-22 08:26:19,566 [root] DEBUG: Now tracking machine 192.168.122.52 for task #6
2020-04-22 08:26:19,589 [lib.cuckoo.common.abstracts] DEBUG: Starting machine cuckoo2
2020-04-22 08:26:19,589 [lib.cuckoo.common.abstracts] DEBUG: Getting status for cuckoo2
2020-04-22 08:26:19,625 [lib.cuckoo.common.abstracts] DEBUG: Using snapshot snapshot1 for virtual machine cuckoo2
2020-04-22 08:26:27,138 [lib.cuckoo.common.abstracts] DEBUG: Getting status for cuckoo2
2020-04-22 08:26:27,262 [lib.cuckoo.core.scheduler] INFO: Enabled route 'internet'
2020-04-22 08:26:27,298 [modules.auxiliary.sniffer] INFO: Started sniffer with PID 18848 (interface=virbr0, host=192.168.122.52, dump path=/opt/CAPEv2/storage/analyses/6/dump.pcap)
2020-04-22 08:26:27,299 [lib.cuckoo.core.plugins] DEBUG: Started auxiliary module: Sniffer
2020-04-22 08:26:27,321 [lib.cuckoo.core.guest] INFO: Starting analysis #6 on guest (id=cuckoo2, ip=192.168.122.52)
2020-04-22 08:26:27,347 [lib.cuckoo.core.guest] INFO: Guest is running CAPE Agent 0.11 (id=cuckoo2, ip=192.168.122.52)
2020-04-22 08:26:27,439 [lib.cuckoo.core.guest] DEBUG: Uploading analyzer to guest (id=cuckoo2, ip=192.168.122.52, size=19276943)
2020-04-22 08:26:29,302 [lib.cuckoo.core.guest] INFO: Uploading support files to guest (id=cuckoo2, ip=192.168.122.52)
2020-04-22 08:26:29,943 [root] DEBUG: Task #6: live log analysis.log initialized.
2020-04-22 08:26:30,412 [root] DEBUG: Task #6: File upload for b'aux/DigiSig.json'
2020-04-22 08:26:30,413 [root] DEBUG: Task #6 uploaded file length: 196
2020-04-22 08:26:32,603 [root] DEBUG: Task #6 is sending a BSON stream. For pid 3012
2020-04-22 08:26:34,395 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:26:39,307 [root] DEBUG: Task #6 is sending a BSON stream. For pid 604
2020-04-22 08:26:39,458 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:26:43,524 [root] DEBUG: Task #6 is sending a BSON stream. For pid 2792
2020-04-22 08:26:44,526 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:26:47,166 [root] DEBUG: Task #6: File upload for b'procdump/b894beab2eb9b430d3b9d84e9a9040e9eaed2fde059959eb066b23e9e13cbacc'
2020-04-22 08:26:47,170 [root] DEBUG: Task #6 uploaded file length: 141824
2020-04-22 08:26:49,584 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:26:54,668 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:26:59,721 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:04,775 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:09,821 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:14,861 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:19,904 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:24,947 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:29,988 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:35,027 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:38,275 [root] DEBUG: Task #6 is sending a BSON stream. For pid 2676
2020-04-22 08:27:38,759 [root] DEBUG: Task #6: File upload for b'procdump/986fc4dc37362068cf5bdd0b8560fe94aa95619240d72e7126d17369ef1d6bf4'
2020-04-22 08:27:38,770 [root] DEBUG: Task #6 uploaded file length: 9728
2020-04-22 08:27:40,080 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:45,138 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:50,177 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:27:55,218 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:00,261 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:05,317 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:10,360 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:15,398 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:20,439 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:25,479 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:30,522 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:35,562 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:40,602 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:45,642 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:50,682 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:28:55,720 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:00,757 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:05,808 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:10,849 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:15,893 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:20,928 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:25,975 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:31,025 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:36,066 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:41,105 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:46,141 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:51,177 [lib.cuckoo.core.guest] DEBUG: cuckoo2: analysis #6 still processing
2020-04-22 08:29:52,926 [root] DEBUG: Task #6: File upload for b'procdump/53193158565e10c76c466347ff47bedc67c2797ae0cf3351794cb471224411b6'
2020-04-22 08:29:52,932 [root] DEBUG: Task #6 uploaded file length: 26624
2020-04-22 08:29:54,087 [root] DEBUG: Task #6 had connection reset for <Context for b'BSON'>
2020-04-22 08:29:54,088 [root] DEBUG: Task #6 had connection reset for <Context for b'BSON'>
2020-04-22 08:29:54,088 [root] DEBUG: Task #6 had connection reset for <Context for b'LOG'>
2020-04-22 08:29:54,089 [root] DEBUG: Task #6 had connection reset for <Context for b'BSON'>
2020-04-22 08:29:54,089 [root] DEBUG: Task #6 had connection reset for <Context for b'BSON'>
2020-04-22 08:29:54,198 [lib.cuckoo.core.guest] INFO: cuckoo2: analysis completed successfully
2020-04-22 08:29:54,215 [lib.cuckoo.core.plugins] DEBUG: Stopped auxiliary module: Sniffer
2020-04-22 08:29:54,215 [lib.cuckoo.common.abstracts] DEBUG: Stopping machine cuckoo2
2020-04-22 08:29:54,215 [lib.cuckoo.common.abstracts] DEBUG: Getting status for cuckoo2
2020-04-22 08:29:54,834 [lib.cuckoo.common.abstracts] DEBUG: Getting status for cuckoo2
2020-04-22 08:29:54,874 [root] DEBUG: Stopped tracking machine 192.168.122.52 for task #6
2020-04-22 08:29:54,874 [lib.cuckoo.core.rooter] CRITICAL: Unable to passthrough root command as we're unable to connect to the rooter unix socket: [Errno 13] Permission denied.
CRITICAL lib.cuckoo.core.rooter: Unable to passthrough root command as we're unable to connect to the rooter unix socket: [Errno 13] Permission denied.
2020-04-22 08:29:54,877 [lib.cuckoo.core.scheduler] INFO: Disabled route 'internet'
2020-04-22 08:29:54,878 [lib.cuckoo.core.rooter] CRITICAL: Unable to passthrough root command as we're unable to connect to the rooter unix socket: [Errno 13] Permission denied.
CRITICAL lib.cuckoo.core.rooter: Unable to passthrough root command as we're unable to connect to the rooter unix socket: [Errno 13] Permission denied.
2020-04-22 08:29:54,908 [lib.cuckoo.core.scheduler] DEBUG: Task #6: Released database task with status True
2020-04-22 08:29:54,908 [lib.cuckoo.core.scheduler] INFO: Task #6: analysis procedure completed
^^^^ Will sit here indefinitely

/opt/CAPEv2$ sudo python3 ./utils/rooter.py -v -g xeck29x
2020-04-22 08:23:21,041 [cuckoo-rooter] INFO: Processing command: forward_drop
2020-04-22 08:23:21,042 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-P', 'FORWARD', 'DROP')
2020-04-22 08:23:21,047 [cuckoo-rooter] INFO: Processing command: state_disable
2020-04-22 08:23:21,047 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-D', 'INPUT', '-m', 'state', '--state', 'ESTABLISHED,RELATED', '-j', 'ACCEPT')
2020-04-22 08:23:21,052 [cuckoo-rooter] INFO: Processing command: state_enable
2020-04-22 08:23:21,052 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-A', 'INPUT', '-m', 'state', '--state', 'ESTABLISHED,RELATED', '-j', 'ACCEPT')
2020-04-22 08:23:21,057 [cuckoo-rooter] INFO: Processing command: nic_available eno1
2020-04-22 08:23:21,061 [cuckoo-rooter] INFO: Processing command: rt_available main
2020-04-22 08:23:21,075 [cuckoo-rooter] INFO: Processing command: disable_nat eno1
2020-04-22 08:23:21,075 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-t', 'nat', '-D', 'POSTROUTING', '-o', 'eno1', '-j', 'MASQUERADE')
2020-04-22 08:23:21,079 [cuckoo-rooter] INFO: Processing command: enable_nat eno1
2020-04-22 08:23:21,079 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-t', 'nat', '-A', 'POSTROUTING', '-o', 'eno1', '-j', 'MASQUERADE')
2020-04-22 08:23:21,275 [cuckoo-rooter] INFO: Processing command: forward_disable virbr0 eno1 192.168.122.52
2020-04-22 08:23:21,275 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-D', 'FORWARD', '-i', 'virbr0', '-o', 'eno1', '--source', '192.168.122.52', '-j', 'ACCEPT')
2020-04-22 08:23:21,279 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-D', 'FORWARD', '-i', 'eno1', '-o', 'virbr0', '--destination', '192.168.122.52', '-j', 'ACCEPT')
2020-04-22 08:25:58,064 [cuckoo-rooter] INFO: Processing command: forward_drop
2020-04-22 08:25:58,064 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-P', 'FORWARD', 'DROP')
2020-04-22 08:25:58,069 [cuckoo-rooter] INFO: Processing command: state_disable
2020-04-22 08:25:58,069 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-D', 'INPUT', '-m', 'state', '--state', 'ESTABLISHED,RELATED', '-j', 'ACCEPT')
2020-04-22 08:25:58,072 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-D', 'INPUT', '-m', 'state', '--state', 'ESTABLISHED,RELATED', '-j', 'ACCEPT')
2020-04-22 08:25:58,077 [cuckoo-rooter] INFO: Processing command: state_enable
2020-04-22 08:25:58,078 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-A', 'INPUT', '-m', 'state', '--state', 'ESTABLISHED,RELATED', '-j', 'ACCEPT')
2020-04-22 08:25:58,083 [cuckoo-rooter] INFO: Processing command: nic_available eno1
2020-04-22 08:25:58,088 [cuckoo-rooter] INFO: Processing command: rt_available main
2020-04-22 08:25:58,093 [cuckoo-rooter] INFO: Processing command: disable_nat eno1
2020-04-22 08:25:58,093 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-t', 'nat', '-D', 'POSTROUTING', '-o', 'eno1', '-j', 'MASQUERADE')
2020-04-22 08:25:58,097 [cuckoo-rooter] INFO: Processing command: enable_nat eno1
2020-04-22 08:25:58,098 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-t', 'nat', '-A', 'POSTROUTING', '-o', 'eno1', '-j', 'MASQUERADE')
2020-04-22 08:25:59,992 [cuckoo-rooter] INFO: Processing command: forward_drop
2020-04-22 08:25:59,992 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-P', 'FORWARD', 'DROP')
2020-04-22 08:25:59,996 [cuckoo-rooter] INFO: Processing command: state_disable
2020-04-22 08:25:59,997 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-D', 'INPUT', '-m', 'state', '--state', 'ESTABLISHED,RELATED', '-j', 'ACCEPT')
2020-04-22 08:26:00,000 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-D', 'INPUT', '-m', 'state', '--state', 'ESTABLISHED,RELATED', '-j', 'ACCEPT')
2020-04-22 08:26:00,003 [cuckoo-rooter] INFO: Processing command: state_enable
2020-04-22 08:26:00,003 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-A', 'INPUT', '-m', 'state', '--state', 'ESTABLISHED,RELATED', '-j', 'ACCEPT')
2020-04-22 08:26:00,007 [cuckoo-rooter] INFO: Processing command: nic_available eno1
2020-04-22 08:26:00,011 [cuckoo-rooter] INFO: Processing command: rt_available main
2020-04-22 08:26:00,014 [cuckoo-rooter] INFO: Processing command: disable_nat eno1
2020-04-22 08:26:00,014 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-t', 'nat', '-D', 'POSTROUTING', '-o', 'eno1', '-j', 'MASQUERADE')
2020-04-22 08:26:00,018 [cuckoo-rooter] INFO: Processing command: enable_nat eno1
2020-04-22 08:26:00,018 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-t', 'nat', '-A', 'POSTROUTING', '-o', 'eno1', '-j', 'MASQUERADE')
2020-04-22 08:26:27,248 [cuckoo-rooter] INFO: Processing command: nic_available eno1
2020-04-22 08:26:27,251 [cuckoo-rooter] INFO: Processing command: forward_enable virbr0 eno1 192.168.122.52
2020-04-22 08:26:27,251 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-D', 'FORWARD', '-i', 'virbr0', '-j', 'REJECT')
2020-04-22 08:26:27,254 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-D', 'FORWARD', '-o', 'virbr0', '-j', 'REJECT')
2020-04-22 08:26:27,256 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-A', 'FORWARD', '-i', 'virbr0', '-o', 'eno1', '--source', '192.168.122.52', '-j', 'ACCEPT')
2020-04-22 08:26:27,259 [cuckoo-rooter] DEBUG: ('/sbin/iptables', '-A', 'FORWARD', '-i', 'eno1', '-o', 'virbr0', '--destination', '192.168.122.52', '-j', 'ACCEPT')
2020-04-22 08:26:27,262 [cuckoo-rooter] INFO: Processing command: srcroute_enable main 192.168.122.52
2020-04-22 08:26:27,263 [cuckoo-rooter] DEBUG: ('/sbin/ip', 'rule', 'add', 'from', '192.168.122.52', 'table', 'main')
2020-04-22 08:26:27,265 [cuckoo-rooter] DEBUG: ('/sbin/ip', 'route', 'flush', 'cache')


Long-time Cuckoo user, first time with CAPE; love the work you guys are doing! Not sure why the rooter is throwing errors, as it appears the rooter is working properly and both are being run under the same user. Any assistance to help me along here would be greatly appreciated!
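
Not part of the original report, but since the CRITICAL messages refer to the rooter unix socket, a quick way to check whether the user running cuckoo.py can actually write to that socket is sketched below; the socket path is the usual default and may differ in your configuration. The group given to rooter.py with -g is meant to match the group of the user that runs cuckoo.py, so if this check reports no write access, that mismatch is the first thing to look at.

    import grp
    import os
    import pwd

    # Assumed default socket path; check the rooter setting in your CAPE config.
    ROOTER_SOCKET = "/tmp/cuckoo-rooter"

    st = os.stat(ROOTER_SOCKET)
    print("owner :", pwd.getpwuid(st.st_uid).pw_name)
    print("group :", grp.getgrgid(st.st_gid).gr_name)
    print("mode  :", oct(st.st_mode & 0o777))
    # The user running cuckoo.py needs write access; False here matches the
    # "[Errno 13] Permission denied" lines in the scheduler log above.
    print("writable by this user:", os.access(ROOTER_SOCKET, os.W_OK))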

Office "auto_close" not triggered

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

Macros executed on auto_close should be triggered somehow,
OR
it should be possible to access the VM via the browser.

Current Behavior

auto_close macros are listed in the static analysis, but nothing shows up in the behavioral analysis because the document is never closed during the analysis.

Steps to Reproduce

  1. Upload an Office document with an auto_close macro
  2. Analyse without entering the VM
  3. Upload an Office document with an auto_close macro
  4. During the analysis, enter the VM and close the document
  5. Compare the results (specifically the behavioral analysis)

Context

Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.

Question     Answer
Git commit   fab86b4
OS version   Host: Ubuntu 18.04, guest: Windows 7

Add description

Could a description for the repo be added? E.g. the same as before, but with the changed URL:

Malware Configuration And Payload Extraction https://capesandbox.com

Not super-important, but would be nice to add. Thank you!

Silent failure of Dropped Files tab when using ES as DB

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

An indicator that listing dropped files is not supported when using Elasticsearch, e.g.:

# ES isn't supported

Current Behavior

A silent failure and a display of 'No dropped files'. This is misleading to an analyst.

Steps to Reproduce

Please provide detailed steps for reproducing the issue.

  1. submit executables that drop files
  2. view analysis once analysis has been completed
  3. go to 'Dropped Files' tab

kvmremote

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

On start, CAPE is expected to shut down the VM.

Current Behavior

When I start CAPE, it does not shut down the VM; also, after submitting an analysis, the VM is powered off.

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Context

Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.

Question     Answer
OS version   Ubuntu 18

Connections to 1.1.1.1

For a while now, I've noticed the online CAPE instance always reports connections to 1.1.1.1, regardless of analysis (you can check any recent analysis on https://capesandbox.com/).

Locally I'm still running the old CAPE, which doesn't have this issue - not sure where it's coming from.

agent.py

Hi,

I've been trying to install CAPEv2 over the last few days and seem close to my objective :)
The new cape2.sh and kvm-qemu.sh (04.03.20) were used on a fully updated (05.03.20) Ubuntu 18.04 LTS. I had various issues, notably with MongoDB, which was not functioning correctly.

My MongoDB workaround was to run
systemctl disable mongodb.service
and then start it manually with
mongod --dbpath /opt/CAPEv2/db

But now my issue is with agent.py on a Win7 machine. The agent is started, but CAPE (python3 cuckoo -d) cannot communicate with it. When I curl the guest I get:

<title>Error response</title>
Error response
Error code 501.
Message: Unsupported method ('GET').
Error code explanation: 501 = Server does not support this operation.

And the Cuckoo Critical Error:

-[lib.cuckoo.core.guest] CRITICAL: While trying to determine the Agent version that your VM is running we retrieved an unexpected HTTP status code: 501. If this is a false positive, please report this issue to the Cuckoo Developers. HTTP response headers: {"Server": "BaseHTTP/0.3 Python/2.7.13", "Date": "Thu, 05 Mar 2020 16:44:57 GMT", "Connection": "close", "Content-Type": "text/html"}

After that the connection closes, and the server just reports 'processing'.

I believe it is a Python 2.7 (agent) vs. Python 3 issue, but I'm still looking into it. Is there anything else I should check first?
Thank you for your work and help!
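
Not part of the original question, but a quick host-side check of which agent the guest is running is sketched below; the guest IP is only an example and the default agent port of 8000 is assumed. A 501 on a plain GET matches the curl output above and is what the legacy Python 2 XML-RPC agent answers, whereas CAPEv2 expects the newer Python 3 agent.py from the repository.

    import urllib.error
    import urllib.request

    # Example guest IP; adjust for your setup. Port 8000 is the default agent port.
    GUEST_AGENT = "http://192.168.56.101:8000/"

    try:
        with urllib.request.urlopen(GUEST_AGENT, timeout=5) as resp:
            print(resp.status, resp.read()[:200])
    except urllib.error.HTTPError as exc:
        # A 501 here indicates the old Python 2 agent, which does not support GET.
        print("HTTP error:", exc.code, exc.reason)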

Error with sysmon upload

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest version
  • [x] I checked the documentation and found no answer
  • [x] I checked to make sure that this issue has not already been filed
  • [x] I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

CAPE should dump the Sysmon event log to C:\sysmon.xml and upload it to the host.

Current Behavior

sysmon.xml is being created, but not uploaded.

Failure Information (for bugs)

From Cape log:

2020-03-11 02:40:25,062 [modules.auxiliary.sysmon] ERROR: Sysmon log file not found in guest machine

From sysmon.py:

        if os.path.exists("C:\\sysmon.xml"):
            now = time.time()
            upload_to_host("C:\\sysmon.xml", f"sysmon/{now}.sysmon.xml", False)
        else:
            log.error("Sysmon log file not found in guest machine")

I've verified that sysmon.xml is actually being created, and it is, so the issue is presumably in the upload_to_host function. I'm not entirely sure how this function works, being new to this project, but there are definitely no files within the sysmon folder of my CAPE installation.

Steps to Reproduce

  • Enable sysmon module (with service installed in Windows sandbox!)
  • Run analysis
  • Sysmon file is not uploaded (if my issue is reproducible!)

MITRE not working despite being enabled

This is open source and you are getting free support, so be friendly!

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Expected Behavior

MITRE information should be available in a report if it is enabled in conf/reporting.conf.

Current Behavior

MITRE information is not available in the report even though it is enabled in conf/reporting.conf.

Failure Information (for bugs)

It looks like the default reporting.conf is shipped without the following line in the [mitre] section:
local_file = data/mitre_attack.json

After adding the line so it reads

[mitre]
enabled = yes
local_file = data/mitre_attack.json

and restarting CAPE, it works as expected.

Steps to Reproduce

Submit malware such as hxxp://litetronix-me.com/images/Javarunetime.jar to CAPE, look at the report, and you will notice the MITRE tab is missing despite MITRE being enabled in the config.

Web GUI: How to force a "default" guest image?

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest version
  • I checked the documentation and found no answer
  • I checked to make sure that this issue has not already been filed
  • I'm reporting the issue to the correct repository (for multi-repository projects)

Context

I've added a very specific guest to my CAPE instance that should not be used by default. When I submit a file via the web GUI, the default for the machine is "First available". Is there a way to force a default VM (so that the specific one is not used)? It's dangerous to have to change the machine manually; there is a risk of forgetting to select the correct one.
Via the API, it's possible to specify the machine to use...
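
Not part of the original question, but for reference, a minimal sketch of an API submission that pins the machine is shown below; the endpoint path, port, and the 'machine' form field follow the usual CAPE/Cuckoo REST conventions and should be verified against your installation, and the machine label is hypothetical.

    import requests

    # Assumed endpoint layout; verify against your installed API and its port.
    API_URL = "http://127.0.0.1:8000/apiv2/tasks/create/file/"

    with open("sample.exe", "rb") as sample:
        resp = requests.post(
            API_URL,
            files={"file": ("sample.exe", sample)},
            data={"machine": "win7-analysis"},  # hypothetical VM label from your machinery config
        )
    print(resp.status_code, resp.text)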

Change default user from 'cape' to another user

Hi, one more issue: I want to change the user that creates the files when submitting a document. Currently 'cape' owns /opt/CAPEv2, and every time it creates files in /tmp/cuckoo-tmp they are owned by 'cape' as well. Is there a way to change this to another user? I am currently running CAPE as a different user, and because of the permissions on the created files I get 'permission denied'.
