Comments (9)
> Expanding on the above: internally we have an "all-in-one" container deployment (with neo4j and cartography in the same image), a "direct" deployment (neo4j and cartography running together on a VM), and we'll likely replace the direct deployment with a multi-container deployment at some point in the future.
Why not just provide a standard docker-compose.yml file? You don't have to provide everything under the sun.
> a "direct" deployment (neo4j and cartography running together on a VM)
VMs are already supported regardless of whether you add a Dockerfile.
Dockerfiles and docker-compose.yml are composable and allow people to manage configurations on their own. This flexibility is already built into Docker.
> we'll likely replace the direct deployment with a multi-container deployment at some point in the future
There are two PRs that help with this.
> Some users may need to provide AWS credentials through environment variables, others may need to use the EC2 metadata service.
This is handled by boto3 already. I don't understand the issue at play here. Both are easily supported without a change to the codebase. https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
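To illustrate the point (a toy sketch only — the real resolution logic lives in botocore, and the values below are placeholders, not real credentials): boto3 checks environment variables before the shared config file and the EC2 instance metadata service, so both deployment styles work without any change to the codebase.

```python
import os

# Toy illustration of the first link in the boto3/botocore credential chain:
# if the standard AWS env vars are set, they win; otherwise boto3 moves on
# to the shared config file and, on EC2, the instance metadata service.
os.environ["AWS_ACCESS_KEY_ID"] = "AKIAEXAMPLE"        # placeholder
os.environ["AWS_SECRET_ACCESS_KEY"] = "example-secret"  # placeholder

def static_env_creds(env):
    """Return (key, secret) if both env vars are present, else None."""
    key = env.get("AWS_ACCESS_KEY_ID")
    secret = env.get("AWS_SECRET_ACCESS_KEY")
    return (key, secret) if key and secret else None

print(static_env_creds(os.environ))  # → ('AKIAEXAMPLE', 'example-secret')
```

When the env vars are absent (e.g. in an EC2 deployment), this returns None and boto3 simply continues down the chain to the metadata service — which is why no application code has to distinguish the two cases.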
On our end, we'll have to keep running a fork, which will make it harder to contribute upstream. If you look at the PR, it actually decouples app startup from db startup by allowing for retries. I appreciate the effort to open source this, but supporting containers at least lowers the barrier to entry for setup.
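The startup-decoupling idea mentioned above can be sketched as a retry loop (a hypothetical sketch, not the PR's actual code — `fake_connect` stands in for the real neo4j driver call): instead of requiring the database to be up before the app starts, the app retries the connection so container start order no longer matters.

```python
import time

def connect_with_retries(connect, attempts=5, delay=0.01):
    """Call `connect` until it succeeds or `attempts` is exhausted."""
    last_exc = None
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError as exc:  # real driver raises its own error type
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# Simulate a database that only becomes reachable on the third attempt.
state = {"calls": 0}
def fake_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("neo4j not ready")
    return "session"

print(connect_with_retries(fake_connect))  # prints: session
```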
from cartography.
Expanding on the above: internally we have an "all-in-one" container deployment (with neo4j and cartography in the same image), a "direct" deployment (neo4j and cartography running together on a VM), and we'll likely replace the direct deployment with a multi-container deployment at some point in the future. Each of these deployments is designed for a specific use case (e.g. dev work, exploration, production) and may not be suitable for other use cases. Each of these deployments uses different mechanisms to add credentials and other configuration to the environment such that cartography can make use of them, and this is based on suitable options for each particular deployment.
I believe the variation in our internal deployments is a good representation of the variation in external use cases, and because of this I'm hesitant to provide any default or stock deployment for users. Some users may be fine with running neo4j and cartography in a single container, others may need multiple containers. Some users may need to provide AWS credentials through environment variables, others may need to use the EC2 metadata service. As we add more intel modules the variability in deployments increases and the work required to support this variability in any default/stock deployment option increases as well.
I'm open to discussion on this point, but as of now the decision is to document deployment options rather than accept PRs containing deployment scripts, Dockerfiles, k8s configs, Helm charts, etc.
Plus one for dockerization. It makes this a lot more portable and easier to set up in almost any environment.
@nishils I ended up writing a Dockerfile and docker-compose.yml that is almost exactly the same line-for-line as the one in PR #275 before even noticing there was a PR for this. -_-
It would be a great starting point just to get familiar and play around.
I guess it's good we came to the same implementation independently.
I honestly had trouble setting AWS creds as environment variables. If we can figure out how to stop writing creds to disk, I think that should be enough to get it merged. AWS actually looked into it a bit and gave me some next steps; I'm happy to share those if you have time to dig in.
> I guess it's good we came to the same implementation independently.
Agreed!
Re: creds, I'm still running locally (not deployed yet) and we're using SSO login to obtain temporary AWS creds that are exported as environment variables. These vars are passed in to Docker with no problem.
What's requiring you to write your credentials to disk?
@nishils, cc:@via-jordan-sokolic
I had a chance to play around with this so I made this sketch that passes the creds via env var: https://github.com/achantavy/boto3-docker
It works as long as your AWS config is set as described in the README. What do you think?
I can reopen your PR and add a commit that makes this adaptation, or I can open a new PR, or if you have time you can give this a try yourself.
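For reference, the env-var approach can be sketched as a docker-compose fragment (the service and image names here are hypothetical, not the actual PR contents). Listing a variable under `environment` with no value tells Compose to forward it from the host shell, so temporary SSO creds are never written to disk:

```yaml
services:
  cartography:
    image: ghcr.io/example/cartography:latest  # hypothetical image name
    environment:
      # No values given: Compose forwards these from the host environment,
      # so short-lived credentials never touch disk or the compose file.
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
      - AWS_SESSION_TOKEN
```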
I liked where you were going with that. I don't mind updates to the original PR; it will make the history simpler imho. At this point I think it is worth modifying the PR and getting it merged. We can document the limitations. (I'm not sure if there are any.)
This has been documented at https://github.com/lyft/cartography/tree/master/docs/containers along with https://github.com/lyft/cartography/blob/master/docker-compose.yml. There is also an issue to document K8s deployment recommendations (#597); we can reopen that one if there is demand.