izar / pytm
A Pythonic framework for threat modeling
License: Other
I'm not sure if it's intentional, but if I have two boundaries with the same name, the components are rendered into one boundary.
i.e.:
dbVpc = Boundary("VPC")
serviceVpc = Boundary("VPC")
...
db = Datastore("Postgres Aurora")
db.inBoundary = dbVpc
server = Server("Server")
server.inBoundary = serviceVpc
Generates the following:
Per the model I can define a Boundary inside another Boundary. I expected all items in the child Boundary, and the child Boundary itself, to be visible within the parent Boundary; in this case, a Server Boundary within a DataCenter Boundary.
Instead, both Boundaries are rendered completely separately.
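Returning to the same-name case above, a plausible cause can be sketched in plain Python (this is not pytm's actual implementation): if rendered boundaries are keyed by display name, two distinct Boundary objects collapse into one cluster.

```python
# Hypothetical sketch: grouping boundaries by display name merges
# two distinct Boundary("VPC") objects into a single cluster.
class Boundary:
    def __init__(self, name):
        self.name = name

dbVpc = Boundary("VPC")
serviceVpc = Boundary("VPC")

clusters = {}
for b in (dbVpc, serviceVpc):
    # keyed by name, so both objects land in the same bucket
    clusters.setdefault(b.name, []).append(b)

print(len(clusters))  # 1 cluster, even though the objects are distinct
```

Keying by a unique per-object id instead of the name would keep the boundaries separate.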
With object types like Server or Asset, can these contain other Servers or Processes? They should. If they already can, better docs are needed.
It is getting big and will probably get bigger. We need to start looking at the setup framework so we are able to use the standard tooling and publish to package index sites.
Line 224 in 24addb4
Defined properly, this would allow selective import of threats, perhaps for more targeted analysis.
Trying to launch a sample tm.py from the root of the repository:
Traceback (most recent call last):
File "./tm.py", line 3, in <module>
from pytm import TM, Actor, Boundary, Dataflow, Datastore, Lambda, Server, Data, Classification
File "/usr/lib/python3.6/site-packages/pytm-1.1.1-py3.6.egg/pytm/__init__.py", line 3, in <module>
from .pytm import Element, Server, ExternalEntity, Dataflow, Datastore, Actor, Process, SetOfProcesses, Boundary, TM, Action, Lambda, Threat, Classification, Data
File "/usr/lib/python3.6/site-packages/pytm-1.1.1-py3.6.egg/pytm/pytm.py", line 487, in <module>
class TM():
File "/usr/lib/python3.6/site-packages/pytm-1.1.1-py3.6.egg/pytm/pytm.py", line 781, in TM
@lru_cache
File "/usr/lib64/python3.6/functools.py", line 477, in lru_cache
raise TypeError('Expected maxsize to be an integer or None')
TypeError: Expected maxsize to be an integer or None
Yes
OS: SUSE SLES 15 SP1
Python version: Python 3.6.10
Also reproduced in Docker container python:3.7.9-alpine
Change @lru_cache on line 487 to @lru_cache().
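The difference can be demonstrated with the standard library alone. On Python 3.7 and earlier, the bare @lru_cache form passes the decorated function as maxsize and raises the TypeError shown above; the called form works on every Python 3 version:

```python
from functools import lru_cache

# Calling the decorator works on all Python 3 versions; the bare
# "@lru_cache" form only became legal in Python 3.8.
@lru_cache(maxsize=None)
def double(x):
    return x * 2

print(double(21))  # 42
```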
Line 781 in 5db9b2e
@lru_cache
still works in Docker container python:3.8.6-alpine
which is the current stable version of Python 3.8.
It appears that there is an issue with making sequence diagrams with the latest release.
./tm.py --seq
@startuml
actor cbdcbcbdaaddaadbdfadadfaaae as "User"
Traceback (most recent call last):
File "./tm.py", line 64, in
tm.process()
File "/opt/pytm/pytm/pytm.py", line 255, in process
self.seq()
File "/opt/pytm/pytm/pytm.py", line 234, in seq
print('entity {0} as "{1}"'.format(_uniq_name(e.name), e.name))
TypeError: _uniq_name() missing 1 required positional argument: 'obj_uuid'
Line 8 in 95523de
... well it was anyway.
I added a Python script to take a CSV with pairs of elements. It creates a generic Element definition for each unique name and a Dataflow for each pair.
After editing the file to replace Element with Actor, Server, Process, etc., I can generate a basic TM DFD, then start to annotate each element and add boundaries as needed.
Before I do any more with this, take a look and let's discuss. Initially I wanted the CSV to be as lightweight as possible, but we could have it contain more data, like variableName, displayName, element type, or various annotations.
I've committed the generate.py file, a sample CSV, the generated sample.py and sample.png, and then a modified (Element -> Actor, Process, etc.) .py and .png so you can see what it's doing.
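A rough, self-contained sketch of what such a generator could look like (the CSV layout and emitted names are assumptions for illustration, not the committed generate.py):

```python
import csv
import io
import re

def generate(csv_text):
    """Turn "source,sink" CSV rows into pytm-style source code:
    one generic Element per unique name, one Dataflow per pair."""
    rows = list(csv.reader(io.StringIO(csv_text)))

    def var(name):
        # turn a display name into a Python identifier
        return re.sub(r"\W+", "_", name.strip()).lower()

    names = []
    for src, dst in rows:
        for n in (src, dst):
            if n not in names:
                names.append(n)

    lines = ['{} = Element("{}")'.format(var(n), n) for n in names]
    lines += ['Dataflow({}, {}, "{} to {}")'.format(var(s), var(d), s, d)
              for s, d in rows]
    return "\n".join(lines)

print(generate("User,Web Server\nWeb Server,Database"))
```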
Hi all,
I'm just starting with pytm, following the examples, and if I try to follow the README instructions I get this using --seq:
kali@kali:~/pytm$ ./tm.py --seq
@startuml
entity cbfcfffaaffdacebab as "Internet"
entity aeebeaeccccdadbfbbed as "Server/DB"
entity acdbbafaebffabebcefad as "AWS VPC"
actor bdafaaafeadfceac as "User"
database afeaeabaaabeaaabcbedeaa as "SQL Database"
bdafaaafeadfceac -> facdfcdecbdebecaeaa: User enters comments (*)
note left
This is a simple web app
that stores and retrieves user comments.
end note
facdfcdecbdebecaeaa -> afeaeabaaabeaaabcbedeaa: Insert query with comments
note left
Web server inserts user comments
into its SQL query and stores them in the DB.
end note
afeaeabaaabeaaabcbedeaa -> facdfcdecbdebecaeaa: Retrieve comments
facdfcdecbdebecaeaa -> bdafaaafeadfceac: Show comments (*)
bbeedeaacfffcbbbbcba -> afeaeabaaabeaaabcbedeaa: Lambda periodically cleans DB
@enduml
kali@kali:~/pytm$ ./tm.py --seq | java -Djava.awt.headless=true -jar plantuml.jar -tpng -pipe > seq.png
Picked up _JAVA_OPTIONS: -Dawt.useSystemAAFontSettings=on -Dswing.aatext=true
And then this is my seq diagram.
Am I missing something? I've already checked that I have all the requirements installed.
Thanks in advance ^_^
In the README, it says {findings:repeat:* ...}
but the template given was:
|{findings:repeat:
<details>
<summary> {{item.id}} -- {{item.description}}</summary>
<h6> Targeted Element </h6>
<p> {{item.target}} </p>
<h6> Severity </h6>
<p>{{item.severity}}</p>
<h6>Example Instances</h6>
<p>{{item.example}}</p>
<h6>Mitigations</h6>
<p>{{item.mitigations}}</p>
<h6>References</h6>
<p>{{item.references}}</p>
 
</details>
}|
I am trying to change the template, but I have no idea where I should look to properly build a nested loop in Markdown.
Please assist!
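For reference, here is a toy expander for that {findings:repeat:...} style, with stand-in findings. It only mimics the behavior described in the README and is not pytm's actual template engine:

```python
import re

def expand(template, findings):
    """Expand "{findings:repeat:BODY}", filling {{item.attr}} per finding."""
    def repl(match):
        body = match.group(1)
        return "".join(
            re.sub(r"\{\{item\.(\w+)\}\}",
                   lambda a, item=item: str(getattr(item, a.group(1), "")),
                   body)
            for item in findings)
    # greedy .* so the body may itself contain "}}" placeholders
    return re.sub(r"\{findings:repeat:(.*)\}", repl, template, flags=re.S)

class Finding:  # stand-in for a pytm finding
    def __init__(self, id, description):
        self.id, self.description = id, description

print(expand("{findings:repeat:* {{item.id}} -- {{item.description}}\n}",
             [Finding("AC01", "Spoofing"), Finding("DS01", "Weak storage")]))
```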
A clean install from pip returns this error when generating a report.
The problem is on line 338:
_authenticatesDestination = varBool(False)
which has a leading _ character.
The code is correct in master branch in version control.
Is it possible to get an updated pip installer released?
DE02 is a more specific variant of DE01. Two questions come to mind:
Related to #17
I saw Threat Mitigations in the TODO file and thought it might be useful to start a thread to brainstorm about it.
The primary goal of implementing mitigation logic would be to:
Any other goals?
I want to render two boundaries around one component (for example, if I have a database which is protected by user/password and resides in a VPC).
Hi all,
thanks for sharing this nice tool. I just wanted to explore the sample but I'm getting an error:
➜ pytm git:(master) ✗ ./tm.py --dfd | dot -Tpng -o sample1.png
2019-03-31 07:24:45.271 dot[27759:12821403] +[__NSCFConstantString length]: unrecognized selector sent to class 0x7fff95b4a8c0
2019-03-31 07:24:45.272 dot[27759:12821403] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '+[__NSCFConstantString length]: unrecognized selector sent to class 0x7fff95b4a8c0'
*** First throw call stack:
(
0 CoreFoundation 0x00007fff3df1743d __exceptionPreprocess + 256
1 libobjc.A.dylib 0x00007fff69e25720 objc_exception_throw + 48
2 CoreFoundation 0x00007fff3df941a5 __CFExceptionProem + 0
3 CoreFoundation 0x00007fff3deb6ad0 ___forwarding___ + 1486
4 CoreFoundation 0x00007fff3deb6478 _CF_forwarding_prep_0 + 120
5 CoreFoundation 0x00007fff3de47f54 CFStringCompareWithOptionsAndLocale + 72
6 ImageIO 0x00007fff409b5367 _ZN17IIO_ReaderHandler15readerForUTTypeEPK10__CFString + 53
7 ImageIO 0x00007fff4098d527 _ZN14IIOImageSource14extractOptionsEP13IIODictionary + 183
8 ImageIO 0x00007fff409ba2e6 _ZN14IIOImageSourceC2EP14CGDataProviderP13IIODictionary + 72
9 ImageIO 0x00007fff409ba1bb CGImageSourceCreateWithDataProvider + 172
10 libgvplugin_quartz.6.dylib 0x0000000107cfcc54 quartz_loadimage_quartz + 224
11 libgvc.6.dylib 0x0000000107c59781 gvloadimage + 269
12 libgvc.6.dylib 0x0000000107c587e0 gvrender_usershape + 955
13 libgvc.6.dylib 0x0000000107c8662e poly_gencode + 2129
14 libgvc.6.dylib 0x0000000107c92b7b emit_node + 1030
15 libgvc.6.dylib 0x0000000107c91805 emit_graph + 4769
16 libgvc.6.dylib 0x0000000107c96d0d gvRenderJobs + 4911
17 dot 0x0000000107c4fd62 main + 697
18 libdyld.dylib 0x00007fff6aef3085 start + 1
)
libc++abi.dylib: terminating with uncaught exception of type NSException
[1] 27758 done ./tm.py --dfd |
27759 abort dot -Tpng -o sample1.png
I installed graphviz via brew, using macOS 10.14 and Python 3.7.3.
Line 21 in 24addb4
Consider using CWSS rather than CVSS for severity scoring, especially where threats != vulnerabilities.
Also consider adding a scoring interface so users can define their own scoring methods (with some pre-canned ones); perhaps this function would take a JSON file describing, similarly to rules, the conditions for each severity level?
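A sketch of what such a pluggable, data-driven scorer might look like (the rule layout and attribute names here are invented for illustration):

```python
import json

# Rules are checked in order; the first rule whose conditions all hold
# wins. An empty "when" acts as the catch-all default.
RULES = json.loads("""
[{"severity": "High",   "when": {"storesPII": true,  "isEncrypted": false}},
 {"severity": "Medium", "when": {"isEncrypted": false}},
 {"severity": "Low",    "when": {}}]
""")

def score(finding, rules=RULES):
    for rule in rules:
        if all(finding.get(k) == v for k, v in rule["when"].items()):
            return rule["severity"]

print(score({"storesPII": True, "isEncrypted": False}))   # High
print(score({"storesPII": False, "isEncrypted": True}))   # Low
```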
Some properties/conditions are obvious and don't really need documentation, such as protocol or isEncrypted. But other properties may not be obvious to some, such as Dataflow.authenticatedWith - with what?
The issue can be easily reproduced when trying to generate a report using the provided threat library, sample tm.py (both the one in the repo and the slightly different one in README.md), and template.
Traceback using tm.py from repo:
Exception has occurred: AttributeError
'Actor' object has no attribute 'providesIntegrity'
File "/root/pytm/pytm/pytm.py", line 445, in apply
return eval(self.condition)
File "/root/pytm/pytm/pytm.py", line 547, in resolve
if not t.apply(e):
File "/root/pytm/pytm/pytm.py", line 721, in process
self.resolve()
File "/root/pytm/tm.py", line 91, in <module>
tm.process()
The threat being checked is AC05, with condition '((not target.source.providesIntegrity or not target.sink.providesIntegrity) and not target.isEncrypted) or (target.source.inScope and not target.isResponse and (not target.authenticatesDestination or not target.checksDestinationRevocation))'. As we know, an Actor object doesn't have any providesIntegrity attribute, but it's being checked.
Yes. That's what I used.
OS: SLES 15/python:alpine-3.8 image
Python version: 3.6.10/3.8.6
Your model file, if possible: sample tm.py from the repo and another one from README.md
Not yet. I'm not proficient in Python and am still poking at the code.
EDIT: I think a simple exception handler can be added to deal with such attribute issues in a non-elegant way:
def apply(self, target):
    if not isinstance(target, self.target):
        return None
    try:
        return eval(self.condition)
    except AttributeError:
        return None
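A runnable illustration of that workaround with stand-in classes (the names mirror pytm's, but the implementation is a sketch): a condition referencing a missing attribute is treated as "threat does not apply" instead of crashing.

```python
class Threat:
    def __init__(self, target, condition):
        self.target, self.condition = target, condition

    def apply(self, target):
        if not isinstance(target, self.target):
            return None
        try:
            return eval(self.condition)
        except AttributeError:
            # e.g. Actor has no providesIntegrity; skip rather than crash
            return None

class Actor:
    pass  # deliberately has no providesIntegrity attribute

t = Threat(Actor, "not target.providesIntegrity")
print(t.apply(Actor()))  # None instead of an uncaught AttributeError
```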
When drawing a DFD I will often group things logically on the diagram: different local file datastores might go together, as might external services out of scope, various AWS services, etc. Currently the diagram is drawn with things in random places. Using a Boundary would accomplish this, but in some cases there isn't an actual boundary.
I was thinking it may be useful to have a logical group, similar to a Boundary, using an inGroup property; when drawing the DFD, the Group would not be visible.
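In Graphviz terms the proposal maps to an invisible cluster. A small sketch of the dot output a hypothetical inGroup attribute (not current pytm) could generate:

```python
# Emit one Graphviz cluster per group, with style=invis so the grouping
# influences layout but draws no visible border.
def dot_for_groups(groups):
    lines = ["digraph tm {"]
    for i, (name, members) in enumerate(groups.items()):
        lines.append("  subgraph cluster_%d {" % i)
        lines.append('    style=invis;  // group "%s" is not drawn' % name)
        for m in members:
            lines.append('    "%s";' % m)
        lines.append("  }")
    lines.append("}")
    return "\n".join(lines)

print(dot_for_groups({"local files": ["config.db", "cache.db"]}))
```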
"DO01": { "description": "Potential Excessive Resource Consumption", "source": Element, "target": (Process, Server), "condition": "target.handlesResourceConsumption is False",
Without knowing when handlesResourceConsumption should be set to True, the check here appears to not do what is intended. Excessive resource consumption is subjective, and would be a result of memory or resource leaks, and even managed code (e.g. JVM or .NET based Processes) can still run out of file descriptors. There is not enough information in the dataflows to know if resource consumption will be excessive, imho. It should be possible to know if resource consumption could be an issue if e.g. the Process is multi-tenant/multi-user e.g. a RESTful web server or a database.
"CR01": { "description": "Collision Attacks", "source": Process, "target": Process, "condition": "target.implementsCommunicationProtocol is True", },
A collision attack is usually associated with hash algorithms, not communication protocols (unless you mean protocols pre-1980). Implementation of a custom comm protocol does not automatically mean a collision is a security threat, and implementsCommunicationProtocol should not imply a custom one.
"AA03": { "description": "Weakness in SSO Authorization", "source": (Process, Element), "target": (Process, Server), "condition": "target.implementsAuthenticationScheme is False", },
What if the Process implements BasicAuth or uses mutual TLS (neither of which is SSO)?
If the Process uses SAML or OAuth, then maybe.
Maybe authenticationScheme as a string var is necessary?
"DS01": { "description": "Weak Credential Storage", "source": (Process, Element), "target": Datastore, "condition": "(target.storesPII is True or target.storesSensitiveData is True) and (target.isEncrypted is False or target.providesConfidentiality is False or target.providesIntegrity is False)", },
Condition includes storesPII, which would not include credentials (at least not for the target or source); it also includes storesSensitiveData (same comment applies). A better test would be source.hasAccessControl or source.authenticatedWith - these conditions suggest the datastore holds credentials, and the target checks then make sense.
Today, we have object types:
It seems that a Server, Client, and Lambda are all specializations of Process or Asset, and really represent the "role" of each; role is really determined by the specific use - a server is the sink for a dataflow, the source is a client. But when describing an object, until the dataflows are determined, why force users to know ahead of time which one they need? Also, a client or server may be a server or a client, based on other data flows...
An alternative suggestion: create a generic "node" (Asset may be the right object already available), and allow assignment of properties that are generic. If roles are needed, assigning a role may add attributes specific to the role(s) added at runtime.
This approach helps with constructing models based on less-than-perfect knowledge of the system.
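One possible shape for that suggestion, sketched with plain Python (the role names and attributes are illustrative, not an agreed design): a generic node gains role-specific attributes only when a role is assigned.

```python
# Roles carry their own default attributes; assigning a role at runtime
# adds those attributes to the node without fixing its class up front.
ROLE_ATTRS = {
    "server": {"isHardened": False},
    "client": {"handlesCookies": False},
}

class Node:
    def __init__(self, name):
        self.name = name
        self.roles = set()

    def add_role(self, role):
        self.roles.add(role)
        for attr, default in ROLE_ATTRS[role].items():
            if not hasattr(self, attr):
                setattr(self, attr, default)

n = Node("api")
n.add_role("server")
print(n.isHardened)  # the attribute appeared with the role
```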
The recent DataSet issue (#77) breaks the report templates.
The data column in the report now shows a class __repr__:
DataSet({<pytm.pytm.Data(User ID and SSL Cert.) at 0x105ef5ac0>})
Unless I'm misunderstanding something, I believe adding a __str__ method to the DataSet class (on or about line 191) addresses this issue:
def __str__(self):
    return ", ".join([d.name for d in self])
Looking forward to making good use of the new DataSet feature.
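A quick check of the idea with stand-in classes (sorted here for deterministic output; pytm's real classes differ):

```python
class Data:
    def __init__(self, name):
        self.name = name

class DataSet(set):
    def __str__(self):
        # join member names instead of falling back to the set repr
        return ", ".join(sorted(d.name for d in self))

ds = DataSet({Data("User ID"), Data("SSL Cert")})
print(str(ds))  # readable names instead of a class __repr__
```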
By looking at the docs, I see I need to run:
tm.py --report REPORT (output report using the named template file)
What is this template file? Where can I find it?
"DE01": { "description": "Data Flow Sniffing", "source": (Process, Element, Datastore), "target": Dataflow, "condition": "target.protocol == 'HTTP' and target.isEncrypted is False", },
In this threat, it checks to see if the protocol is HTTP and if the channel is unencrypted. A user by error may set the protocol but not the flag, or vice versa, unless there is code somewhere which makes the connection automatically. Instead, it may be best to make this an OR condition - either http or unencrypted will trigger the threat.
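The difference is easy to see on a flow where the user set only one of the two fields:

```python
# A user set the protocol but forgot the encryption flag.
flow = {"protocol": "HTTP", "isEncrypted": True}

and_cond = flow["protocol"] == "HTTP" and not flow["isEncrypted"]  # current
or_cond = flow["protocol"] == "HTTP" or not flow["isEncrypted"]    # proposed

print(and_cond, or_cond)  # only the OR condition still flags the flow
```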
$ pip install pytm
$ python pytm-example.py
Traceback (most recent call last):
File "pytm-example.py", line 3, in <module>
from pytm.pytm import TM, Server, Datastore, Dataflow, Boundary, Actor, Lambda
File "/home/math/.pyenv/versions/3.7.8/lib/python3.7/site-packages/pytm/__init__.py", line 3, in <module>
from .pytm import Element, Server, ExternalEntity, Dataflow, Datastore, Actor, Process, SetOfProcesses, Boundary, TM, Action, Lambda, Threat, Classification, Data
File "/home/math/.pyenv/versions/3.7.8/lib/python3.7/site-packages/pytm/pytm.py", line 486, in <module>
class TM():
File "/home/math/.pyenv/versions/3.7.8/lib/python3.7/site-packages/pytm/pytm.py", line 780, in TM
@lru_cache
File "/home/math/.pyenv/versions/3.7.8/lib/python3.7/functools.py", line 490, in lru_cache
raise TypeError('Expected maxsize to be an integer or None')
TypeError: Expected maxsize to be an integer or None
I was mocking up a sample DFD with a Process and some local file datastores, and I am getting a DF1 threat (Dataflow not authenticated), which isn't the best threat here.
I would like to add logic so this threat doesn't apply to local file data stores and maybe introduce a threat about permissions or something.
I've mocked up two places where I can do this.
- Use the protocol property on the Dataflow:
"target.authenticatedWith is False and target.protocol != 'FileSystem'"
- Add an isLocalFile property to Datastore:
"target.authenticatedWith is False and ( (type(target.source) is Datastore and target.source.isLocalFile is False) or (type(target.sink) is Datastore and target.sink.isLocalFile is False) )"
Thoughts?
When an element can be both a source and a sink with respect to a given data flow (for example, a user interacting with a web application where they can both download and upload data), it would be nice to define a single Dataflow object (or BidirectionalDataflow, if another class is preferred) that renders as a bidirectional dataflow in the DFD and elicits the threats in both directions. Is there interest in supporting that?
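One possible shape for this, sketched with stand-in classes (BidirectionalDataflow is the proposed name, not current pytm): the bidirectional flow expands into two directed flows, so threats are elicited in both directions.

```python
class Flow:  # stand-in for a directed pytm Dataflow
    def __init__(self, source, sink, name):
        self.source, self.sink, self.name = source, sink, name

def expand_bidirectional(source, sink, name):
    """One bidirectional definition becomes two directed flows."""
    return [Flow(source, sink, name + " (request)"),
            Flow(sink, source, name + " (response)")]

flows = expand_bidirectional("User", "WebApp", "upload/download")
print([(f.source, f.sink) for f in flows])
```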
"DO02": { "description": "Potential Process Crash or Stop", "source": (Process, Datastore, Element), "target": Process, "condition": "target.handlesCrashes is False", },
What is the thought process behind this threat? Is it that the Process can crash, or that it could crash in some security-relevant way?
It would seem to me that if the concern is that a Process may crash, then the conditions one might check for would be susceptibility to a buffer overflow in unmanaged code, not whether or not it can "handle" crashes, whatever that means...
Region ordering is messed up if we have a bi-directional flow. LR rankdir is not working as expected.
There is no option for invisible nodes.
I was wondering if we might make Boundary identification a capability of the tool, rather than letting a user define them in their object definitions. In other words, a user may decide to place a trust boundary based on particular characteristics, like team organizational units, or areas of control by teams, or based on a misunderstanding of the ability to enable trust relationships. But it should be possible for us to detect strong relationships between entities to establish, or at least hint at, trust boundaries, as a feature to the user.
Many of the attributes defined for the DataFlow object belong elsewhere:
Correctly assigned:
source = varElement(None, required=True)
sink = varElement(None, required=True)
order = varInt(-1, doc="Number of this data flow in the threat model")
note = varString("")
Maybe correct:
isResponse = varBool(False, doc="Is a response to another data flow") --> Is this a dup with `responseTo`?
response = varElement(None, doc="Another data flow that is a response to this one") --> If this is non-empty, is either `isResponse` or `responseTo` needed (since it would be detectable as True if non-empty)?
responseTo = varElement(None, doc="Is a response to this data flow") --> Is this a dup with `isResponse`?
data = varData([], doc="Default type of data in incoming data flows") --> Does this represent the data sent by the source, or returned by the sink? I think this highlights a challenge in setting data to the flow and not associating the connection to the source as sender of data, and server is replier of data.
Should be a property of the Source:
srcPort = varInt(-1, doc="Source TCP port")
isEncrypted = varBool(False, doc="Is the data encrypted") --> Clarification needed - is this data encryption independent of the protocol?
authenticatesDestination = varBool(False, doc="""Verifies the identity of the destination,
checksDestinationRevocation = varBool(False, doc="""Correctly checks the revocation status
Should be a property of the Sink:
usesSessionTokens = varBool(False)
authorizesSource = varBool(False)
usesLatestTLSversion = varBool(False) --> This will become out of date (TLS 1.2 to TLS 1.3 to whatever is next), and TLS is not the only option for secure protocol, so maybe this should be a list
implementsAuthenticationScheme = varBool(False)
authenticatedWith = varBool(False)
protocol = varString("", doc="Protocol used in this data flow") --> With this list, it is possible to check for `usesLatestTLSversion` state
dstPort = varInt(-1, doc="Destination TCP port")
Should be a property of either source or sink:
usesVPN = varBool(False) --> DataFlows are associated with a source (the initiator) and a sink (the target). It is either the source or sink that determines if a VPN is in use. The source may use one, or the sink may do so, or both, but the DataFlow would if anything inherit the state of this flag based on `source.usesVPN or sink.usesVPN`.
implementsCommunicationProtocol = varBool(False) --> Sink always determines the protocol to be used by the source, but this may also apply to the source's comm stack
Hi there!
Thank you very much for this really nice tool. Here is an idea/suggestion:
loading models from the OWASP Threat Dragon JSON format.
"AC03": { "description": "The Data Store Could Be Corrupted", "source": (Process, Element), "target": Datastore, "condition": "target.isShared is True or target.hasWriteAccess is True", },
If a Datastore is shared and allows write access, it may be corrupted, which is True. But what is missing from this logic is if the shared Processes/Elements are granted Write access - an Element:Datastore relationship need not be symmetric or universal. This requires some additional logic, and goes to the complexity of such things.
Consider:
Datastore A
Process A
Process B
A.isShared is True
A.hasWriteAccess (from Process A) is True
A.hasWriteAccess (from Process B) is False
Threat?
Problem: we can't represent this currently - it requires Source:Target:Condition relationships that cannot be represented given the current object model. Note the Object Model I posted to the wiki can represent this relationship, but may be too complex for some.
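The per-relationship logic described above can be sketched by tracking write access per (process, datastore) pair instead of a single flag on the datastore (illustrative only, not the current object model):

```python
# Access is a property of the relationship, not of the datastore alone.
write_access = {
    ("Process A", "Datastore A"): True,
    ("Process B", "Datastore A"): False,
}

def corruption_threat(source, datastore, is_shared):
    """Threat applies only if the store is shared AND this source can write."""
    return is_shared and write_access.get((source, datastore), False)

print(corruption_threat("Process A", "Datastore A", True))  # applies
print(corruption_threat("Process B", "Datastore A", True))  # does not
```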
Can you please elaborate on each property?
For example:
➜ meetings-service git:(Michael/rank/SCPJM-115) ✗ ./tm.py --describe Datastore
The following properties are available for Datastore
OS
authenticatesDestination
authenticatesSource
authenticationScheme
authorizesSource
check
definesConnectionTimeout
description
dfd
handlesInterruptions
handlesResources
hasAccessControl
hasWriteAccess
implementsAuthenticationScheme
implementsNonce
inBoundary
inScope
isAdmin
isEncrypted
isHardened
isResilient
isSQL
isShared
name
onAWS
onRDS
providesConfidentiality
providesIntegrity
storesLogData
storesPII
storesSensitiveData
What is the difference between storesSensitiveData and storesPII?
Does onRDS include Aurora?
What are authenticatesDestination / authenticatesSource?
What are the expected values (enum) for authenticationScheme?
And so on.
Thanks
Hi, I just pulled the new changes and faced some problems with creating the example sequence diagram:
PowerShell -Command tm.py --seq | java -Djava.awt.headless=true -jar plantuml.jar -tpng -pipe > seq.png
ERROR
2
Syntax Error?
Some diagram description contains errors
Do you have any idea why it fails? I can't find out what's wrong. With an older version it works fine.
Have a Table/Data element that has an attribute describing the type of data stored in it. Threat conditions could then be based on the relationships within the stored data; e.g., if the table has two columns, username and userSSN, it would lead to a privacy issue.
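A minimal sketch of such a combination-based condition (the combo list is illustrative): a datastore is flagged when the combination of stored fields is privacy-sensitive, not any single field alone.

```python
# Combinations of columns that together constitute a privacy issue.
SENSITIVE_COMBOS = [{"username", "userSSN"}]

def privacy_issue(columns):
    cols = set(columns)
    return any(combo <= cols for combo in SENSITIVE_COMBOS)

print(privacy_issue(["username", "userSSN", "created_at"]))  # flagged
print(privacy_issue(["username", "created_at"]))             # not flagged
```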
Does it make sense to have two conditions, an inclusion and an exclusion? I think this would simplify more complex conditional logic.
Rather than having to write complex logic that handles inclusion and exclusion in one expression, I think there could be value in having two separate conditions that should each be written to return true. First loop through the elements and apply the inclusion condition, then apply the exclusion before returning.
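A sketch of that two-condition evaluation (proposed semantics, not current pytm): a threat applies only when the inclusion condition holds and the exclusion condition does not.

```python
def applies(target, include, exclude=None):
    # `target` is visible to eval() because it is a local of this frame
    if not eval(include):
        return False
    if exclude and eval(exclude):
        return False
    return True

class Flow:  # stand-in element
    protocol = "HTTP"
    isEncrypted = False

print(applies(Flow, "target.protocol == 'HTTP'",
              "target.isEncrypted is True"))  # included and not excluded
```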
First, apologies for the long-winded issue...
tl;dr - Are there any plans in extending out some plantuml sequence diagram functionality?
I've been using pytm for about a week now and have found it to be a pretty good tool. Coming more from the security side, I am more focused on the dataflow diagram and the threat modelling report; however, I've had a lot of requests to add bits and pieces to the seq diagram as well (e.g. lifelines, queue participants, dividers, arrow styles).
I'm happy to make a pull request with a few suggested changes (still wrapping my head around how it all ties together).
The easiest being:
class Queue(Datastore):
    pass

class TM:
    ...
    def seq(self):
        for e in TM._elements:
            ...
            elif isinstance(e, Queue):  # this would need to go before the `Datastore` check
                value = 'queue {0} as "{1}"'.format(e._uniq_name(), e.display_name())
I think the more robust, and probably more extensible way would be another attribute and, ideally, a new method to handle the line that will be formatted / printed in seq()
class Element:
    def seq_line(self):  # naming things is difficult
        return 'entity {} as "{}"'.format(self._uniq_name(), self.display_name())

class Actor(Element):
    def seq_line(self):
        return 'actor {} as "{}"'.format(self._uniq_name(), self.display_name())

class Datastore(Element):
    def seq_line(self):
        if self.isQueue:
            puml_participant = "queue"
        else:
            puml_participant = "database"
        return '{} {} as "{}"'.format(puml_participant, self._uniq_name(), self.display_name())

class TM:
    def seq(self):
        participants = [e.seq_line() for e in TM._elements if not isinstance(e, (Dataflow, Boundary))]
        ...

...
my_queue = Datastore("my_queue", isQueue=True)
I think the simplest would be to add a new attribute, arrowStyle, implement the suggested seq_line from above, and instantiate a dataflow the following way to get a dotted blue, open-arrow line in the sequence diagram:
class Dataflow(Element):
    ...
    arrowStyle = varString("->")
    ...
    def seq_line(self):
        note = "\nnote left\n{}\nend note".format(self.note) if self.note else ""
        line = "{source} {arrow} {sink}: {display_name}{note}".format(
            source=self.source._uniq_name(),
            arrow=self.arrowStyle,
            sink=self.sink._uniq_name(),
            display_name=self.display_name(),
            note=note,
        )
        ...

df = Dataflow(source, sink, "My dataflow", arrowStyle="-[#blue]->>")
This one I am still exploring and not confident in any implementation yet, but maybe something like:
class TM:
    ...
    includeSeqLifelines = varBool(False)
    ...
    def seq(self):
        ...
        messages = []
        for e in TM._flows:
            if e.response and self.includeSeqLifelines:  # at the start of the loop
                messages.append('activate {}\n'.format(e.sink._uniq_name()))
            # ... all the other flow stuff here ...
            if e.responseTo and self.includeSeqLifelines:  # at the end of the loop
                messages.append('deactivate {}\n'.format(e.responseTo.sink._uniq_name()))
Or introduce a concept of SeqLifelines; again, I'm not happy with the exploration I've done so far, but here is some quick back-of-a-napkin code:
from enum import Enum

class _Dummy:
    """A temporary dummy class that allows me to insert SeqLifelines in the flows portion of TM.
    This is where a lot of my uncertainty comes in. Obviously, if I implement this I would properly
    fix up SeqLifeline to work properly.
    """
    data = []
    levels = {0}
    overrides = []
    protocol = "HTTPS"
    port = 443
    authenticatesDestination = True
    checksDestinationRevocation = True
    name = "Dummy"

class Lifeline(Enum):
    ACTIVATE = "activate"
    DEACTIVATE = "deactivate"
    DESTROY = "destroy"

class varLifeline(var):
    def __set__(self, instance, value):
        if not isinstance(value, Lifeline):
            raise ValueError("expecting a Lifeline, got a {}".format(type(value)))
        super().__set__(instance, value)

class SeqLifeline(Element):
    name = varString("", required=True)
    action = varLifeline(None, required=True)
    participant = varElement(None, required=True)
    color = varString("", required=False, doc="Color variable for the lifeline")
    source = _Dummy
    sink = _Dummy
    data = []
    responseTo = None
    isResponse = False
    response = None
    order = -1

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        TM._flows.append(self)

    def seq_line(self):
        return '{} {}\n'.format(self.action.value, self.participant._uniq_name())

    def dfd(self, **kwargs) -> str:
        """SeqLifelines shouldn't show on DFDs, but we do want them to render on seq diagrams."""
        return ""
- varNote: a class that allows defining shape, color, and location; see "Notes on messages" for attributes we could declare
- SeqNewPage: the ability to write seq() out to separate files
- hide unlinked (would only work with new page)
- SeqDivider: see "Divider or separator"

As suggested in the comments on #38, a proper way to import threat lists would be desirable. The goal would be to have an easy way to import threat lists based on existing lists of controls. (Many companies already have something like that in Excel or CSV, so it would be a matter of tweaking it into the right layout and off you go.)
Of course, it would also mitigate the dangerous use of eval() in my commit ;)
However, the current dictionary structure makes this challenging. I first tried to output the existing dictionary and analyze what would be suitable to adopt, but so far my experiments aren't really successful:
- csv.writer seems to do the job, but gives full class names and throws all values into one big string. I will do some more research.
- json.dump and json.dumps crash on output with complaints about the structure.
- pickle.dump works, but creates an unreadable file; not suitable as an import mechanism.
- jsonpickle works, but adds more meta info.
Not sure what the best solution would be, so I'm open to suggestions.
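For comparison, a flat CSV layout loads trivially with the standard library (the column set here is an assumption, not an agreed format): one row per threat, with the condition kept as a string.

```python
import csv
import io

CSV = """id,description,target,condition
AC01,Weak access control,Datastore,target.hasAccessControl is False
DE01,Data flow sniffing,Dataflow,target.isEncrypted is False
"""

# Each row becomes a dict keyed by the header columns.
threats = list(csv.DictReader(io.StringIO(CSV)))
print(len(threats), threats[0]["id"])
```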
I was thinking of adding CWE to Threat metadata, and I see remediation is in some of the commented threats. Let's brainstorm on other elements we would like as Threat metadata.
Existing elements
- ID
- Description
- CVSS
- Condition
Possible new elements
- Remediation
- CWE
- Exclusion Condition (#14)
- References (blog posts, books, whitepapers, etc.)
- Severity (Info, Low, Medium, High, Critical); see the CVSS change below.
Changes
- CVSS: should this be a specific score or a range?
I'd like to move the writer logic outside pytm. Python isn't my first language, so my terms might not be right, but I'd like to create an interface for a tmwriter so we can support formats other than graphviz.
One writer might just be a report of Threats, Mitigations, Elements and Annotations; another could be the existing graphviz writer; another could be mxGraph (https://jgraph.github.io/mxgraph/javascript/examples/helloworld.html), or use the mxGraph XML format which can be loaded into Draw.io (Extras -> Edit Diagram):
<mxGraphModel dx="1190" dy="727" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" background="#ffffff" math="0" shadow="0"> <root> <mxCell id="0"/> <mxCell id="1" parent="0"/> <mxCell id="2" value="Client" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontColor=#000000;align=center;" parent="1" vertex="1"> <mxGeometry x="120" y="140" width="80" height="80" as="geometry"/> </mxCell> <mxCell id="3" value="Server" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontColor=#000000;align=center;" parent="1" vertex="1"> <mxGeometry x="410" y="110" width="80" height="80" as="geometry"/> </mxCell> <mxCell id="4" value="" style="endArrow=classic;html=1;fontColor=#000000;exitX=1;exitY=0.5;entryX=0;entryY=0.5;" parent="1" source="2" target="3" edge="1"> <mxGeometry width="50" height="50" relative="1" as="geometry"/> </mxCell> </root> </mxGraphModel>
I know this is all in the .py file, but it would be nice to have a readable report with all elements and annotations.
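A minimal sketch of what the writer interface could look like (class and method names here are hypothetical, not an agreed API): the model walks its elements and hands them to whichever writer implementation is configured.

```python
from abc import ABC, abstractmethod

class TMWriter(ABC):
    """Interface every output format implements."""
    @abstractmethod
    def write(self, elements):
        ...

class TextReportWriter(TMWriter):
    """The simplest writer: a plain bullet list of element names."""
    def write(self, elements):
        return "\n".join("* " + e for e in elements)

print(TextReportWriter().write(["Client", "Server", "Client -> Server"]))
```

A graphviz writer and an mxGraph writer would then be two more TMWriter subclasses, chosen at report time.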
Considering the example you provide in the readme, and thinking logically about what hardening of a system means, "is not hardened" should be a threat on its own, with conditions that indicate the hardening state (which could be none, partial, or "complete"). This would also require/allow hardening detection for individual object types (e.g. web server vs database server). Consider something like this condition for a web server being not hardened:
condition : "target.RunsAsRoot is True and target.exposesHTTP is True"
(obviously pseudo-conditions)
I've been thinking about this and relates to a few issues I've added recently.
I think the logic is going to get messy as we add more Threats, Mitigations, and add logic to alter severity while applying mitigations.
Does it make sense to continue creating a tightly coupled rules engine here vs using something existing?
I don't know what exists for Python. For Java I've worked with Drools, which would be perfect for this; so much so that I had the fleeting thought of porting this to Java to use it.