Comments (10)
I can PR example text into the security-wg, and once we agree either the text or a link to the text can be put in the Security section of the main nodejs/node README.
from security-wg.
To kick it off:
- direct use of C++ bindings - not an issue?
- use of documented APIs that can access the host system (like `fs`) is not a security issue. May need to be called out because filesystem access without the user's permission in some JavaScript environments, like browsers, would be a vulnerability
> direct use of C++ bindings - not an issue?
Generally no. It's not a good practice, because it is harder to validate data in C++ rather than in JS. It might be possible to cause aborts and crashes if the data is not validated correctly.
> use of documented APIs that can access the host system (like `fs`) is not a security issue. May need to be called out because filesystem access without the user's permission in some JavaScript environments, like browsers, would be a vulnerability
The use of `fs` without validating that the external user may access the requested data is a problem: relative paths can allow an attacker to fetch/write unwanted files.
I tried to put up a definition this weekend. The goal was to have something that fits the possible scope of the WG:

> Any technical bypass that is not directly a mistake in the implementation of business logic and that could result in an unauthorized entity gaining access to or control of a system.
re:

> Security researchers have requested that we make an attempt at documenting this.
From https://github.com/nodejs/security-wg/blob/master/meetings/2016-12-22.md
> - Deian: Fraser and I have disclosed some vulns to core, and want a description of what is a sec vuln, and not just a bug. Describing this may be difficult, but without a threat model, it's hard to communicate.
@deian Comments on the suggestions above? What kinds of thing do you want to be clearer? Alternatively, what questions did you have?
I think @vdeturckheim's definition is getting closer. I'll add to/refine this more later as I find cycles, but for now here are some of my thoughts:
To me, it seems like type safety, memory safety, and crash safety are the first properties to strive for. JavaScript (out of the box) "provides" this. As such, modifying or extending the JavaScript environment should not introduce new unintended security/safety issues. (Browsers do this + other security policies. We should aim for the former and then try the latter.)
Unfortunately, this should include the binding code or code that is otherwise not intended to be "public". There are ways to truly hide unsafe APIs, but that's not done today. (For example, the Chrome extension system does this, so it's doable, but that's not something Node.js can be easily retrofitted to.)
In general, any exposed function (or object when used as an argument to a function) that can be used to do something its intended specification (even if the spec is just English) did not allow, should be considered a potential safety/security problem IMO.
Being able to use Buffer.from to read arbitrary parts of memory is one such example. But, bugs in other seemingly security-unrelated libraries matter too. Unlike browsers, we don't have a policy we can go to and say this is what should be enforced. So, while clear bugs like an HTTP parser implementation that allows arbitrary code execution (or for the exhaustion of memory) are obvious vulnerabilities, I think it's important to err on the side of “this may have security implications” -- people are building apps whose security logic may depend on the correctness of the Node.js APIs and, as far as I know, there are no metrics on how people actually use the APIs.
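The memory-disclosure class mentioned here is easiest to see with the allocation APIs: `Buffer.allocUnsafe()` (and, historically, the old `new Buffer(number)` constructor) hands back memory that is not zero-filled. A small sketch of the safe/unsafe contrast:

```javascript
// allocUnsafe() skips zero-filling for speed; the returned bytes may
// contain stale data from earlier allocations in the process. Exposing
// such a buffer to an attacker leaks memory contents.
const unsafe = Buffer.allocUnsafe(16);

// alloc() zero-fills, so nothing can leak.
const safe = Buffer.alloc(16);
console.log(safe.every((b) => b === 0)); // true
```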
I don’t think we need to be overly conservative, but there are some things like memory, type, and crash safety that IMO should be considered security bugs. Similarly, bugs in APIs we know to be security-related (e.g., crypto, tls, etc.) should similarly be flagged as such.
> Unfortunately, this should include the binding code or code that is otherwise not intended to be "public".
This suggests the attack surface is the Node.js API, but that can't be the case; more damage can be done with `execSync("rm -rf /")` than can be done with `process.binding("dgram").blah`.
Languages that allow access to the runtime can be manipulated in surprising ways: monkey-patching of objects, for example. I can see the point that while that patching can cause node/python/ruby to crash, in python/ruby it's harder to write code that causes a segfault, but I don't see how one is more or less exploitable.
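The monkey-patching point can be made concrete in a couple of lines; the hostile patch here is invented for illustration:

```javascript
// Nothing stops loaded code from replacing a built-in at runtime.
const original = JSON.stringify;
JSON.stringify = (value) => original(value).toUpperCase();

// Every subsequent caller in the process sees the patched behavior.
const patched = JSON.stringify({ ok: true }); // '{"OK":TRUE}'

JSON.stringify = original; // restore the built-in
```

This is part of why runtime access makes language-level confinement hard: any security check written in JS can itself be patched away.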
`Buffer.from()` is an interesting example, because it's an API that allows casual misuse. It's not clear to me that accessing the binding layer is in the same class; it's such a deliberate act that whoever writes the code needs to take responsibility for security problems they create, not Node (IMO).
That said, I'm not personally opposed to making the binding layer fail harder and faster when misused. I assume adding `ASSERT()` checks for binding-layer misuse would be a satisfactory way of ensuring it cannot be used outside its intended specification?
There has been talk of hiding the bindings, and also of making them not get exposed without a flag, that might be worth raising again. I can't recall off the top of my head the reasons they are being directly used by code outside of node's internal js libraries.
> there are no metrics on how people actually use the APIs.
Not for private code, but we do have tools to crawl npmjs.org's packages and look for instances of API use.
> This suggests the attack surface is the Node.js API, but that can't be the case; more damage can be done with `execSync("rm -rf /")` than can be done with `process.binding("dgram").blah`.
Sorry, what I was trying to get at is that you can't exclude the private APIs from your attack surface if they are not actually hidden.
> Languages that allow access to the runtime can be manipulated in surprising ways: monkey-patching of objects, for example. I can see the point that while that patching can cause node/python/ruby to crash, in python/ruby it's harder to write code that causes a segfault, but I don't see how one is more or less exploitable.
There are a bunch of language-level security techniques that we can use to isolate/sandbox/confine somewhat untrusted code (caja, djs, cowl, etc.). Most of these are useless if you can read/write arbitrary parts of memory.
> `Buffer.from()` is an interesting example, because it's an API that allows casual misuse. It's not clear to me that accessing the binding layer is in the same class; it's such a deliberate act that whoever writes the code needs to take responsibility for security problems they create, not Node (IMO).
I don't completely follow the second part, mind clarifying?
> That said, I'm not personally opposed to making the binding layer fail harder and faster when misused. I assume adding `ASSERT()` checks for binding-layer misuse would be a satisfactory way of ensuring it cannot be used outside its intended specification?
A hard, deliberate crash is better than a crash due to a null pointer dereference or out-of-bounds access, but I don't think it's always the right thing to do. Definitely do it when you need to bail out, but you should be throwing a JS exception if you can, particularly because a hard crash can more easily take down a server than a language-level exception. Now, if the hiding of bindings is done as you suggest and the JS code that calls the binding layer does proper sanitization, then ASSERTs sound great!
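The division of labor being described might look like this; `lowLevelWrite` is a stand-in for a native binding (not a real Node API):

```javascript
// Stand-in for a C++ binding that would CHECK()/abort on bad arguments.
function lowLevelWrite(fd, buf) {
  return buf.length;
}

// The public JS wrapper sanitizes first and throws a catchable exception,
// so the binding's hard asserts never fire on user-triggerable input.
function write(fd, buf) {
  if (!Number.isInteger(fd) || fd < 0) {
    throw new TypeError('fd must be a non-negative integer');
  }
  if (!Buffer.isBuffer(buf)) {
    throw new TypeError('buf must be a Buffer');
  }
  return lowLevelWrite(fd, buf);
}
```

A bad argument becomes a `TypeError` a server can catch, while a violated assert inside the binding would indicate a bug in Node itself rather than in user input.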
> There has been talk of hiding the bindings, and also of making them not get exposed without a flag, that might be worth raising again. I can't recall off the top of my head the reasons they are being directly used by code outside of node's internal js libraries.
I think this is a great idea!
> Sorry, what I was trying to get at is that you can't exclude the private APIs from your attack surface if they are not actually hidden.
Why should they be in our attack surface if they can only be used by legitimate code authors? Doesn't that imply our attackers are the authors of the JS code Node is intentionally being run on? Those authors can do anything. I don't understand your threat model; can you clarify?
> > `Buffer.from()` is an interesting example, because it's an API that allows casual misuse. It's not clear to me that accessing the binding layer is in the same class; it's such a deliberate act that whoever writes the code needs to take responsibility for security problems they create, not Node (IMO).
>
> I don't completely follow the second part, mind clarifying?
`Buffer.from` is a documented API, with dangerous side-effects.
`process.binding()` is explicitly "use at your own risk". You use it, you risk it.
@sam-github lgtm. Thanks a lot!