
cartography's People

Contributors

achantavy, aneeshusa, ccrims0n, ecdavis, fayiekcbd0xfqf2qk2e4viahg8rmm2vbjykdjtg, heryxpc, jg10, jychp, kedarghule, krisek, kunaals, lgomezma, marco-lancini, mdrdannyr, meng-han, mpurusottamc, p-l-, pchheda-lyft, phishoes, ramonpetgrave64, roshinis78, ryan-lane, ryohare, sachafaust, serge-wq, skiptomyliu, tayasteere, thomashli, tmsteere, tobli

cartography's Issues

Cypher Favorites Management

Hi!
First of all, thank you for your work on Cartography!

I was wondering if you have any suggestions on how to handle favorite Cypher queries, especially how to save and share them among different users.

The neo4j documentation recommends manually dumping them via the console in a browser's developer tools (source), but I was wondering if there is a more efficient (and automated) way that you use at Lyft (and that you can share with the public).

CRXcavator load crash

  File "{PATH}/cartography/intel/crxcavator/crxcavator.py", line 365, in sync_extensions
    neo4j_session, common_job_parameters['UPDATE_TAG'],
  File "{PATH}/cartography/intel/crxcavator/crxcavator.py", line 278, in load_extensions
    session.run(extensions_permissions_cypher, ExtensionPermissions=extension_permissions, UpdateTag=update_tag)
  File "{PATH}/neo4j/__init__.py", line 499, in run
    self._connection.fetch()
  File "{PATH}/neobolt/direct.py", line 414, in fetch
    return self._fetch()
  File "{PATH}/neobolt/direct.py", line 454, in _fetch
    response.on_failure(summary_metadata or {})
  File "{PATH}/neobolt/direct.py", line 738, in on_failure
    raise CypherError.hydrate(**metadata)
neobolt.exceptions.ClientError: Cannot merge node using null property value for id

Attack modeling with Cartography

Hey all,

I was thinking about the CapitalOne breach. They had an instance that was vulnerable to SSRF and internet accessible. The instance was running with an IAM Role that had access to the S3 bucket containing the customer PII that was leaked.

I was hoping I could model this attack in Cartography. It's only slightly more complicated than the typical "bucket left open to the world".

I was hoping that I could build a query that would help me find the shortest routes from internet accessible infrastructure (ec2:EC2Instance{exposed_internet: true}) to an IAM Role that had access to a given S3 bucket.

Unfortunately, neither EC2Instance nor AutoScalingGroup has any mapping to InstanceProfile or IAM Roles, and LaunchConfig doesn't seem to exist either.

Further, there's currently no mapping of S3Bucket to IAMRole either. I understand this would be difficult, even with PolicyUniverse's whos_allowed():

policy = Policy(policy05)
assert policy.whos_allowed() == set([
    PrincipalTuple(category='principal', value='arn:aws:iam::*:role/Hello'),
    PrincipalTuple(category='principal', value='arn:aws:iam::012345678910:root'),
    ConditionTuple(category='cidr', value='0.0.0.0/0'),
    ConditionTuple(category='account', value='012345678910')
])
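
For reference, here is a rough sketch of the kind of shortest-path query described above, written the way cartography issues Cypher from Python. Only the EC2Instance label and the exposed_internet flag exist in the schema today; the S3Bucket name property and, more importantly, the relationships the path would traverse are assumptions (they are exactly the mappings this issue says are missing).

# Sketch only: a shortest-path query from internet-exposed instances to a given bucket.
# The relationships such a path would traverse do not exist in the graph yet (that is
# the point of this issue); the S3Bucket 'name' property is also an assumption.
from neo4j import GraphDatabase

query = """
MATCH (instance:EC2Instance{exposed_internet: true}),
      (bucket:S3Bucket{name: $BucketName}),
      p = shortestPath((instance)-[*..10]-(bucket))
RETURN p
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(query, BucketName="customer-pii-bucket"):
        print(record["p"])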

Is this kind of attack modeling the idea behind Cartography? Is the end goal to be able to build these kinds of queries? I can see a future where we're mapping the shortest (weakest) routes from our sensitive data through AWS to our internet-exposed infrastructure, and then mapping the Inspector findings on those AMIs to find the most likely routes an attacker may take. That's the power of the graph database, right?

Is Lyft running any cool complex queries beyond what's in the documentation? If so, I'd friggin love to read about them. Thank you.

Move GSuite ingest to own module

The CRXcavator module creates a GSuiteUser node with only the information available from CRXcavator (the email address). This node should be created in an independent module dedicated to GSuite ingestion; the CRXcavator module should only create a relationship to it.

CRXcavator API change fix

The API currently used to pull extension data from CRXcavator only returns the two most recent versions of each extension. The module needs to be updated to query /group/users/extensions for the full list, then query /report/{extension_id}/{version} for each extension to get the required details.
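
A minimal sketch of that two-step fetch, reusing the call_crxcavator_api() helper whose signature appears in the tracebacks elsewhere on this page; the shape of the /group/users/extensions response and the helper returning parsed JSON are assumptions.

# Sketch only: fetch the full extension list, then one report per extension_id/version.
# call_crxcavator_api(endpoint, api_key, base_url) matches the signature seen in the
# tracebacks; the assumed response shape ({user: [{"extension_id", "version"}, ...]})
# and the helper returning parsed JSON are assumptions.
from cartography.intel.crxcavator.crxcavator import call_crxcavator_api


def get_extension_reports(crxcavator_api_key, crxcavator_base_url):
    listing = call_crxcavator_api("/group/users/extensions", crxcavator_api_key, crxcavator_base_url)
    reports = {}
    for extensions in listing.values():
        for ext in extensions:
            key = (ext["extension_id"], ext["version"])
            if key not in reports:
                reports[key] = call_crxcavator_api(
                    "/report/{}/{}".format(*key), crxcavator_api_key, crxcavator_base_url,
                )
    return reports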

cartography only syncs VPCs in one region

AWS VPCs are regional. cartography should scan all accounts across all regions; at the moment it's missing most of them, and with VPC peering support it's missing cross-region peers :)

Use APOC to better support datetime objects (and more)

I saw this comment in the elasticsearch transform:

# TODO this is a hacky workaround -- neo4j doesn't accept datetime objects and this section of the object
# TODO contains one. we really shouldn't be sending the entire object to neo4j

Neo4J doesn't support datetime natively, but there's an extension set that does:

http://neo4j-contrib.github.io/neo4j-apoc-procedures/3.5/utilities/datetime-conversions/

Would the project consider using this extension pack?
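
For what it's worth, a minimal sketch of what an APOC-based conversion could look like at load time, assuming the APOC plugin is installed on the Neo4j server; the node label, property names, and function shape below are illustrative only, not cartography's actual schema.

# Sketch only: parse an ISO 8601 timestamp string into epoch milliseconds with APOC
# at load time. Requires the APOC plugin; the ESDomain label and property names are
# illustrative placeholders, not cartography's real schema.
def load_domain(neo4j_session, domain_id, created_at, update_tag):
    query = """
    MERGE (d:ESDomain{id: $DomainId})
    SET d.created = apoc.date.parse($CreatedAt, 'ms', "yyyy-MM-dd'T'HH:mm:ss'Z'"),
        d.lastupdated = $UpdateTag
    """
    neo4j_session.run(query, DomainId=domain_id, CreatedAt=created_at, UpdateTag=update_tag)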

Add profile & platform argument in cartography cli

Hi Team,

  1. Request to add a --profile <aws-account-name/aws-account-number> argument to the cartography CLI so that individual accounts can be identified and separated in the Neo4j console.

E.g., cartography was executed for two different AWS accounts by separate teams, but both of them merged their sync data into the single "default" profile in Neo4j.

Team A:
INFO:cartography.intel.aws:Syncing AWS account with ID 'AAAA' using configured profile 'default'.

Team B:
INFO:cartography.intel.aws:Syncing AWS account with ID 'BBBB' using configured profile 'default'.

  2. Request to add a --platform <aws/gcp> argument to the cartography CLI so that cartography does not run unnecessary sync stages.

Thank You.

CRXcavator gateway timeout

Trace:

Traceback (most recent call last):
  File "cartography/cartography/cli.py", line 161, in main
    return cartography.sync.run_with_config(self.sync, config)
  File "cartography/cartography/sync.py", line 131, in run_with_config
    return sync.run(neo4j_driver, config)
  File "cartography/cartography/sync.py", line 65, in run
    stage_func(neo4j_session, config)
  File "cartography/cartography/intel/crxcavator/__init__.py", line 30, in start_extension_ingestion
    sync_extensions(session, common_job_parameters, CRXCAVATOR_API_KEY, CRXCAVATOR_API_BASE_URL)
  File "cartography/cartography/intel/crxcavator/crxcavator.py", line 209, in sync_extensions
    extension_json = get_extensions(crxcavator_api_key, crxcavator_base_url)
  File "cartography/cartography/intel/crxcavator/crxcavator.py", line 17, in get_extensions
    return call_crxcavator_api("/group/extensions/combined", crxcavator_api_key, crxcavator_base_url)
  File "cartography/cartography/intel/crxcavator/crxcavator.py", line 45, in call_crxcavator_api
    data.raise_for_status()
  File "cartography/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Timeout for url: https://api.crxcavator.io/v1/group/extensions/combined

ERROR: cartography --neo4j-uri bolt://127.0.0.1:7687

Hello
I have an error when I launch cartography --neo4j-uri bolt://127.0.0.1:7687.

The error is:

ERROR:cartography.sync:Unhandled exception during sync stage 'aws'
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/cartography/sync.py", line 63, in run
    stage_func(neo4j_session, config)
  File "/usr/lib/python3.6/site-packages/cartography/intel/aws/__init__.py", line 121, in start_aws_ingestion
    _sync_multiple_accounts(session, aws_accounts, regions, config.update_tag, common_job_parameters)
  File "/usr/lib/python3.6/site-packages/cartography/intel/aws/__init__.py", line 62, in _sync_multiple_accounts
    _sync_one_account(session, boto3_session, account_id, regions, sync_tag, common_job_parameters)
  File "/usr/lib/python3.6/site-packages/cartography/intel/aws/__init__.py", line 20, in _sync_one_account
    iam.sync_group_policies(session, boto3_session, account_id, sync_tag, common_job_parameters)
  File "/usr/lib/python3.6/site-packages/cartography/intel/aws/iam.py", line 382, in sync_group_policies
    load_group_policies(neo4j_session, groups_policies, aws_update_tag)
  File "/usr/lib/python3.6/site-packages/cartography/intel/aws/iam.py", line 278, in load_group_policies
    action = statement.get('Action')
AttributeError: 'str' object has no attribute 'get'
# python --version
Python 3.6.8

I run cartography in Docker with neo4j:3.3.9 as the image.

Thank you for your help !
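
One possible cause, offered as an assumption rather than a confirmed diagnosis: AWS policy documents can return "Statement" as a single object (and "Action" as a single string) rather than a list, so iterating over it yields strings instead of dicts. A minimal sketch of normalizing both before iterating:

# Sketch only: normalize policy documents whose "Statement"/"Action" fields are a
# single value rather than a list, so iteration always yields the expected shape.
# This is an assumed fix for the error above, not a confirmed root cause.
def _as_list(value):
    if value is None:
        return []
    return value if isinstance(value, list) else [value]


def iter_statement_actions(policy_document):
    for statement in _as_list(policy_document.get("Statement")):
        for action in _as_list(statement.get("Action")):
            yield action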

Accuracy of exposed_internet flag on EC2 Instances

If an instance doesn't have an EIP assigned to it, has an ENI attached to a private subnet in a VPC, and has 0.0.0.0/0 permitted by a security group, would it still be flagged with exposed_internet? It's probably still worth flagging, but maybe with something that conveys the excessive permissiveness instead?

Sync crash: get_bucket_acl possible race condition

  File "cartography/cli.py", line 161, in main
    return cartography.sync.run_with_config(self.sync, config)
  File "cartography/sync.py", line 129, in run_with_config
    return sync.run(neo4j_driver, config)
  File "cartography/sync.py", line 63, in run
    stage_func(neo4j_session, config)
[...]
  File "cartography/intel/aws/__init__.py", line 63, in _sync_multiple_accounts
    _sync_one_account(session, boto3_session, account_id, regions, sync_tag, common_job_parameters)
  File "cartography/intel/aws/__init__.py", line 25, in _sync_one_account
    s3.sync(session, boto3_session, account_id, sync_tag, common_job_parameters)
  File "cartography/intel/aws/s3.py", line 322, in sync
    load_s3_details(neo4j_session, acl_and_policy_data_iter, current_aws_account_id, aws_update_tag)
  File "cartography/intel/aws/s3.py", line 132, in load_s3_details
    for bucket, acl, policy in s3_details_iter:
  File "cartography/intel/aws/s3.py", line 25, in get_s3_bucket_details
    acl = get_acl(bucket, client)
  File "cartography/intel/aws/s3.py", line 53, in get_acl
    acl = client.get_bucket_acl(Bucket=bucket['Name'])
  File "botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the GetBucketAcl operation: The specified bucket does not exist

Similar to #12. We get all S3 buckets with boto and then get all bucket acls on each of them, but by the time we call get_bucket_acl() the bucket might have been deleted.

This problem can happen in any case where we get one object and then perform subsequent calls that assume that that object still exists.
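
One possible mitigation, sketched below: catch the botocore error for the vanished bucket, log which bucket it was (which also addresses the side note in the similar issue further down), and keep going. The function name mirrors get_acl() from the traceback, but exactly where this would be wired in is an assumption.

# Sketch only: skip buckets deleted between the list call and the per-bucket calls,
# logging which bucket disappeared. Wiring this into get_s3_bucket_details() is an
# assumption about where the fix would live.
import logging

from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


def get_acl_safely(bucket, client):
    try:
        return client.get_bucket_acl(Bucket=bucket['Name'])
    except ClientError as e:
        if e.response['Error']['Code'] == 'NoSuchBucket':
            logger.warning("Bucket %s no longer exists, skipping its ACL.", bucket['Name'])
            return None
        raise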

Address family not supported by protocol

Trying to see if this works with GCP. I downloaded my cred file and exported it, then tried running:

cartography --neo4j-user neo4j --neo4j-password-prompt --neo4j-uri bolt://localhost:7687

but I get the following error. Also, I'm not sure how cartography knows to go to GCP if there are no config files? Curious.

Python 3.6.8
Neo4j 3.5.12

Traceback (most recent call last):
  File "/home/user1/.local/lib/python3.6/site-packages/neobolt/direct.py", line 829, in _connect
    s = socket(AF_INET6)
  File "/usr/lib/python3.6/socket.py", line 144, in __init__
    _socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 97] Address family not supported by protocol

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/cartography", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/cartography/cli.py", line 182, in main
    return CLI(default_sync, prog='cartography').main(argv)
  File "/usr/local/lib/python3.6/dist-packages/cartography/cli.py", line 162, in main
    return cartography.sync.run_with_config(self.sync, config)
  File "/usr/local/lib/python3.6/dist-packages/cartography/sync.py", line 97, in run_with_config
    auth=neo4j_auth,
  File "/home/user1/.local/lib/python3.6/site-packages/neo4j/__init__.py", line 116, in driver
    return Driver(uri, **config)
  File "/home/user1/.local/lib/python3.6/site-packages/neo4j/__init__.py", line 157, in __new__
    return subclass(uri, **config)
  File "/home/user1/.local/lib/python3.6/site-packages/neo4j/__init__.py", line 231, in __new__
    pool.release(pool.acquire())
  File "/home/user1/.local/lib/python3.6/site-packages/neobolt/direct.py", line 719, in acquire
    return self.acquire_direct(self.address)
  File "/home/user1/.local/lib/python3.6/site-packages/neobolt/direct.py", line 612, in acquire_direct
    connection = self.connector(address, error_handler=self.connection_error_handler)
  File "/home/user1/.local/lib/python3.6/site-packages/neo4j/__init__.py", line 228, in connector
    return connect(address, **dict(config, **kwargs))
  File "/home/user1/.local/lib/python3.6/site-packages/neobolt/direct.py", line 976, in connect
    raise last_error
  File "/home/user1/.local/lib/python3.6/site-packages/neobolt/direct.py", line 966, in connect
    s = _connect(resolved_address, **config)
  File "/home/user1/.local/lib/python3.6/site-packages/neobolt/direct.py", line 846, in _connect
    s.close()
AttributeError: 'NoneType' object has no attribute 'close'

CRXcavator missing extension nodes

CRXcavator ingest is only creating one node for each extension name instead of a single node for each extension_id|version combination. Need to determine the root cause and fix.
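
A minimal sketch of keying the nodes on the extension_id|version combination instead of the name; the label, property names, and load-function shape below are placeholders, not the module's actual schema.

# Sketch only: MERGE on a composite extension_id|version id so each version gets its
# own node. Label, properties, and function signature are illustrative placeholders.
def load_extension_versions(neo4j_session, extensions, update_tag):
    query = """
    UNWIND $Extensions AS ext
    MERGE (e:ChromeExtension{id: ext.extension_id + '|' + ext.version})
    SET e.extension_id = ext.extension_id,
        e.version = ext.version,
        e.name = ext.name,
        e.lastupdated = $UpdateTag
    """
    neo4j_session.run(query, Extensions=extensions, UpdateTag=update_tag)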

Error while cleaning GCP Instances

Currently Cartography raises the following error while cleaning up GCP Instances:

...
INFO:cartography.sync:Finishing sync stage 'aws'
INFO:cartography.sync:Starting sync stage 'gcp'
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:oauth2client.client:Refreshing access_token
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:oauth2client.client:Refreshing access_token
INFO:cartography.intel.gcp:Syncing GCP project example-project.
INFO:cartography.intel.gcp.compute:Syncing Compute objects for project example-project.
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:oauth2client.client:Refreshing access_token

ERROR:cartography.graph.job:Unhandled error while executing statement in job 'cleanup GCP Instances': Variable `r` already declared (line 25, column 26 (offset: 799))
" MERGE (fw)<-[r:DENIED_BY]-(rule)"
^
ERROR:cartography.sync:Unhandled exception during sync stage 'gcp'
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/cartography/sync.py", line 68, in run
    stage_func(neo4j_session, config)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/__init__.py", line 134, in start_gcp_ingestion
    _sync_multiple_projects(neo4j_session, resources, projects, config.update_tag, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/__init__.py", line 94, in _sync_multiple_projects
    _sync_single_project(neo4j_session, resources, project_id, gcp_update_tag, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/__init__.py", line 72, in _sync_single_project
    compute.sync(neo4j_session, resources.compute, project_id, gcp_update_tag, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/compute.py", line 948, in sync
    sync_gcp_firewall_rules(neo4j_session, compute, project_id, gcp_update_tag, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/compute.py", line 911, in sync_gcp_firewall_rules
    cleanup_gcp_firewall_rules(neo4j_session, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/compute.py", line 853, in cleanup_gcp_firewall_rules
    run_cleanup_job('gcp_compute_firewall_cleanup.json', neo4j_session, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/util.py", line 29, in run_cleanup_job
    common_job_parameters,
  File "/usr/lib/python3.7/site-packages/cartography/graph/job.py", line 95, in run_from_json
    job.run(neo4j_session)
  File "/usr/lib/python3.7/site-packages/cartography/graph/job.py", line 45, in run
    stm.run(neo4j_session)
  File "/usr/lib/python3.7/site-packages/cartography/graph/statement.py", line 47, in run
    self._run_iterative(session)
  File "/usr/lib/python3.7/site-packages/cartography/graph/statement.py", line 81, in _run_iterative
    for r in results:
  File "/usr/lib/python3.7/site-packages/neo4j/__init__.py", line 948, in records
    self._session.fetch()
  File "/usr/lib/python3.7/site-packages/neo4j/__init__.py", line 523, in fetch
    detail_count, _ = self._connection.fetch()
  File "/usr/lib/python3.7/site-packages/neobolt/direct.py", line 422, in fetch
    return self._fetch()
  File "/usr/lib/python3.7/site-packages/neobolt/direct.py", line 464, in _fetch
    response.on_failure(summary_metadata or {})
  File "/usr/lib/python3.7/site-packages/neobolt/direct.py", line 759, in on_failure
    raise CypherError.hydrate(**metadata)
neobolt.exceptions.CypherSyntaxError: Variable `r` already declared (line 25, column 26 (offset: 799))
" MERGE (fw)<-[r:DENIED_BY]-(rule)"
^
Traceback (most recent call last):
  File "/usr/bin/cartography", line 11, in <module>
    load_entry_point('cartography==0.10.0', 'console_scripts', 'cartography')()
  File "/usr/lib/python3.7/site-packages/cartography/cli.py", line 182, in main
    return CLI(default_sync, prog='cartography').main(argv)
  File "/usr/lib/python3.7/site-packages/cartography/cli.py", line 162, in main
    return cartography.sync.run_with_config(self.sync, config)
  File "/usr/lib/python3.7/site-packages/cartography/sync.py", line 134, in run_with_config
    return sync.run(neo4j_driver, config)
  File "/usr/lib/python3.7/site-packages/cartography/sync.py", line 68, in run
    stage_func(neo4j_session, config)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/__init__.py", line 134, in start_gcp_ingestion
    _sync_multiple_projects(neo4j_session, resources, projects, config.update_tag, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/__init__.py", line 94, in _sync_multiple_projects
    _sync_single_project(neo4j_session, resources, project_id, gcp_update_tag, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/__init__.py", line 72, in _sync_single_project
    compute.sync(neo4j_session, resources.compute, project_id, gcp_update_tag, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/compute.py", line 948, in sync
    sync_gcp_firewall_rules(neo4j_session, compute, project_id, gcp_update_tag, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/compute.py", line 911, in sync_gcp_firewall_rules
    cleanup_gcp_firewall_rules(neo4j_session, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/intel/gcp/compute.py", line 853, in cleanup_gcp_firewall_rules
    run_cleanup_job('gcp_compute_firewall_cleanup.json', neo4j_session, common_job_parameters)
  File "/usr/lib/python3.7/site-packages/cartography/util.py", line 29, in run_cleanup_job
    common_job_parameters,
  File "/usr/lib/python3.7/site-packages/cartography/graph/job.py", line 95, in run_from_json
    job.run(neo4j_session)
  File "/usr/lib/python3.7/site-packages/cartography/graph/job.py", line 45, in run
    stm.run(neo4j_session)
  File "/usr/lib/python3.7/site-packages/cartography/graph/statement.py", line 47, in run
    self._run_iterative(session)
  File "/usr/lib/python3.7/site-packages/cartography/graph/statement.py", line 81, in _run_iterative
    for r in results:
  File "/usr/lib/python3.7/site-packages/neo4j/__init__.py", line 948, in records
    self._session.fetch()
  File "/usr/lib/python3.7/site-packages/neo4j/__init__.py", line 523, in fetch
    detail_count, _ = self._connection.fetch()
  File "/usr/lib/python3.7/site-packages/neobolt/direct.py", line 422, in fetch
    return self._fetch()
  File "/usr/lib/python3.7/site-packages/neobolt/direct.py", line 464, in _fetch
    response.on_failure(summary_metadata or {})
  File "/usr/lib/python3.7/site-packages/neobolt/direct.py", line 759, in on_failure
    raise CypherError.hydrate(**metadata)
neobolt.exceptions.CypherSyntaxError: Variable `r` already declared (line 25, column 26 (offset: 799))
" MERGE (fw)<-[r:DENIED_BY]-(rule)"
^

This appears to be called in https://github.com/lyft/cartography/blob/master/cartography/intel/gcp/compute.py#L872

ThrottlingException in AWS elasticsearch domain sync.

File "/srv/venvs/service/trusty/service_venv_python3.6/lib/python3.6/site-packages/cartography/intel/aws/__init__.py", line 63, in _sync_multiple_accounts
    _sync_one_account(session, boto3_session, account_id, regions, sync_tag, common_job_parameters)
  File "/srv/venvs/service/trusty/service_venv_python3.6/lib/python3.6/site-packages/cartography/intel/aws/__init__.py", line 48, in _sync_one_account
    elasticsearch.sync(session, boto3_session, account_id, sync_tag)
  File "/srv/venvs/service/trusty/service_venv_python3.6/lib/python3.6/site-packages/cartography/intel/aws/elasticsearch.py", line 203, in sync
    data = _get_es_domains(client)
  File "/srv/venvs/service/trusty/service_venv_python3.6/lib/python3.6/site-packages/cartography/intel/aws/elasticsearch.py", line 44, in _get_es_domains
    chunk_data = client.describe_elasticsearch_domains(DomainNames=domain_name_chunk)
  File "/srv/venvs/service/trusty/service_venv_python3.6/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/srv/venvs/service/trusty/service_venv_python3.6/lib/python3.6/site-packages/botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ThrottlingException) when calling the DescribeElasticsearchDomains operation (reached max retries: 4): Rate exceeded

We mitigate this partially by chunking DescribeElasticsearchDomains requests but it looks like this method breaks when the number of domains reaches some threshold. ThrottlingException indicates that retries are possible, so some kind of exponential backoff implementation might be useful?
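
A minimal sketch of what an exponential backoff around the describe call could look like; the retry count and sleep factor are arbitrary placeholders.

# Sketch only: retry DescribeElasticsearchDomains with exponential backoff when AWS
# throttles the request. max_attempts and the backoff factor are placeholders.
import time

from botocore.exceptions import ClientError


def describe_domains_with_backoff(client, domain_name_chunk, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return client.describe_elasticsearch_domains(DomainNames=domain_name_chunk)
        except ClientError as e:
            if e.response['Error']['Code'] != 'ThrottlingException' or attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)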

get_gcp_instance_responses() crash

This seems to fail: https://github.com/lyft/cartography/blob/master/cartography/intel/gcp/compute.py#L93

Stack trace:

Traceback (most recent call last):

  File "cartography/cli.py", line 162, in main
    return cartography.sync.run_with_config(self.sync, config)
  File "cartography/sync.py", line 134, in run_with_config
    return sync.run(neo4j_driver, config)
  File "cartography/sync.py", line 68, in run
    stage_func(neo4j_session, config)
  File "cartography/intel/gcp/__init__.py", line 134, in start_gcp_ingestion
    _sync_multiple_projects(neo4j_session, resources, projects, config.update_tag, common_job_parameters)
  File "cartography/intel/gcp/__init__.py", line 94, in _sync_multiple_projects
    _sync_single_project(neo4j_session, resources, project_id, gcp_update_tag, common_job_parameters)
  File "cartography/intel/gcp/__init__.py", line 72, in _sync_single_project
    compute.sync(neo4j_session, resources.compute, project_id, gcp_update_tag, common_job_parameters)
  File "cartography/intel/gcp/compute.py", line 948, in sync
    sync_gcp_instances(neo4j_session, compute, project_id, zones, gcp_update_tag, common_job_parameters)
  File "cartography/intel/gcp/compute.py", line 867, in sync_gcp_instances
    instance_responses = get_gcp_instance_responses(project_id, zones, compute)
  File "cartography/intel/gcp/compute.py", line 93, in get_gcp_instance_responses
    res = req.execute()
  File "googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "googleapiclient/http.py", line 856, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 503 when requesting https://compute.googleapis.com/compute/v1/projects/{projectname}/zones/{zonename}/instances?alt=json returned "Internal error. Please try again or contact Google Support. (Code: '{very long error code}')">
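
One low-effort mitigation, sketched below: googleapiclient requests can retry transient 5xx responses via execute(num_retries=...), the same pattern the GSuite module in this repo already uses. The retry count is a placeholder, and the exact list() call is an assumption about what the line referenced above does.

# Sketch only: let googleapiclient retry transient errors such as this 503.
# num_retries=3 is a placeholder; the instances().list() call is the standard
# compute API usage and an assumption about the failing code.
def get_instances_in_zone(compute, project_id, zone_name):
    req = compute.instances().list(project=project_id, zone=zone_name)
    return req.execute(num_retries=3)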

Document deployment/containerization options and recommendations.

There have been a number of issues and PRs focusing on ways of deploying cartography, many focusing on running both neo4j and cartography in a Docker container.

Due to the plethora of options here we are unlikely to accept any PRs adding Dockerfiles or similar deployment configurations to cartography. cartography can be installed using pip in any context which supports that (directly on a system, in a virtualenv, in a Docker image, etc.) and is configurable such that it can target a running neo4j DB provided it has access.

One area we should definitely improve is documentation of the above, and we should include common deployment methods in any such documentation.

botocore NoSuchBucket when calling GetBucketAcl

File "cartography/sync.py", line 63, in run
    stage_func(neo4j_session, config)
  File "intelmodules/lyft/aws.py", line 200, in lyft_start_aws_ingestion
    common_job_parameters
  File "cartography/intel/aws/__init__.py", line 60, in _sync_multiple_accounts
    _sync_one_account(session, boto3_session, account_id, regions, sync_tag, common_job_parameters)
  File "cartography/intel/aws/__init__.py", line 24, in _sync_one_account
    s3.sync(session, boto3_session, account_id, sync_tag, common_job_parameters)
  File "cartography/intel/aws/s3.py", line 322, in sync
    load_s3_details(neo4j_session, acl_and_policy_data_iter, current_aws_account_id, aws_update_tag)
  File "cartography/intel/aws/s3.py", line 132, in load_s3_details
    for bucket, acl, policy in s3_details_iter:
  File "cartography/intel/aws/s3.py", line 25, in get_s3_bucket_details
    acl = get_acl(bucket, client)
  File "cartography/intel/aws/s3.py", line 53, in get_acl
    acl = client.get_bucket_acl(Bucket=bucket['Name'])
  File "botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the GetBucketAcl operation: The specified bucket does not exist 

Possible race condition - the bucket might have been deleted after the call to get_s3_bucket_list but before get_s3_bucket_details. This crash is similar to #12.

Side note: it would be helpful if the exception message included which bucket caused this crash.

Add billing information

It would be very cool to be able to add billing information into the mix here... 💰💰💰

Unable to get running on MacOS

I'm using pyenv if that helps at all:

$ cartography
Traceback (most recent call last):
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/bin/cartography", line 11, in <module>
    load_entry_point('cartography==0.2.2rc1', 'console_scripts', 'cartography')()
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 480, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2693, in load_entry_point
    return ep.load()
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2324, in load
    return self.resolve()
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2330, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/cartography/cli.py", line 7, in <module>
    import cartography.sync
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/cartography/sync.py", line 8, in <module>
    import cartography.intel.aws
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/cartography/intel/aws/__init__.py", line 5, in <module>
    from cartography.intel.aws import dynamodb, ec2, elasticsearch, iam, organizations, route53, s3, rds
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/cartography/intel/aws/elasticsearch.py", line 2, in <module>
    from cartography.intel.dns import ingest_dns_record_by_fqdn
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/cartography/intel/dns.py", line 2, in <module>
    import dns.resolver
ImportError: No module named resolver

$ /Users/donovanhernandez/.pyenv/versions/2.7.15/bin/pip list | grep dns
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
dnspython           1.16.0

$ cartography --neo4j-uri bolt://localhost:7687
Traceback (most recent call last):
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/bin/cartography", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3088, in <module>
    @_call_aside
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3072, in _call_aside
    f(*args, **kwargs)
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3101, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 576, in _build_master
    return cls._build_from_requirements(__requires__)
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 589, in _build_from_requirements
    dists = ws.resolve(reqs, Environment())
  File "/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages/pkg_resources/__init__.py", line 783, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (botocore 1.12.122 (/Users/donovanhernandez/.pyenv/versions/2.7.15/lib/python2.7/site-packages), Requirement.parse('botocore<1.13.0,>=1.12.127'), set(['boto3']))

$ pip list  | grep boto
boto3               1.9.127
botocore            1.12.122

Cartography race conditions

This issue tracks all race condition-like instances where Cartography performs actions on stale data.

Roles

Cartography loads a role to the graph, the role gets deleted, and then Cartography tries to list role policies on that nonexistent role ==> Cartography crash.

It is good though that the failing role name gets logged.

I'm opening this issue for documentation purposes.

  File "{PATH}/cartography/intel/aws/__init__.py", line 44, in _sync_multiple_accounts
    _sync_one_account(neo4j_session, boto3_session, account_id, regions, sync_tag, common_job_parameters)
  File "{PATH}/cartography/intel/aws/__init__.py", line 21, in _sync_one_account
    iam.sync(neo4j_session, boto3_session, account_id, sync_tag, common_job_parameters)
  File "{PATH}/cartography/intel/aws/iam.py", line 446, in sync
    sync_role_policies(neo4j_session, boto3_session, account_id, update_tag, common_job_parameters)
  File "{PATH}/cartography/intel/aws/iam.py", line 414, in sync_role_policies
    for policy_name in get_role_policies(boto3_session, role_name)['PolicyNames']:
  File "{PATH}/cartography/intel/aws/iam.py", line 69, in get_role_policies
    for page in paginator.paginate(RoleName=role_name):
  File "{PATH}/botocore/paginate.py", line 255, in __iter__
    response = self._make_request(current_kwargs)
  File "{PATH}/botocore/paginate.py", line 332, in _make_request
    return self._method(**current_kwargs)
  File "{PATH}/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "{PATH}/botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchEntityException: An error occurred (NoSuchEntity) when calling the ListRolePolicies operation: The role with name {ROLENAME} cannot be found.

S3 and get_bucket_acl()

File "cartography/cli.py", line 161, in main
    return cartography.sync.run_with_config(self.sync, config)
  File "cartography/sync.py", line 129, in run_with_config
    return sync.run(neo4j_driver, config)
  File "cartography/sync.py", line 63, in run
    stage_func(neo4j_session, config)
[...]
  File "cartography/intel/aws/__init__.py", line 63, in _sync_multiple_accounts
    _sync_one_account(session, boto3_session, account_id, regions, sync_tag, common_job_parameters)
  File "cartography/intel/aws/__init__.py", line 25, in _sync_one_account
    s3.sync(session, boto3_session, account_id, sync_tag, common_job_parameters)
  File "cartography/intel/aws/s3.py", line 322, in sync
    load_s3_details(neo4j_session, acl_and_policy_data_iter, current_aws_account_id, aws_update_tag)
  File "cartography/intel/aws/s3.py", line 132, in load_s3_details
    for bucket, acl, policy in s3_details_iter:
  File "cartography/intel/aws/s3.py", line 25, in get_s3_bucket_details
    acl = get_acl(bucket, client)
  File "cartography/intel/aws/s3.py", line 53, in get_acl
    acl = client.get_bucket_acl(Bucket=bucket['Name'])
  File "botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the GetBucketAcl operation: The specified bucket does not exist

GSuite members

  File "{dir}/cartography/cli.py", line 220, in main
    return cartography.sync.run_with_config(self.sync, config)
  File "{dir}/cartography/sync.py", line 135, in run_with_config
    return sync.run(neo4j_driver, config)
  File "{dir}/cartography/sync.py", line 69, in run
    stage_func(neo4j_session, config)
  File "{dir}/cartography/intel/gsuite/__init__.py", line 80, in start_gsuite_ingestion
    api.sync_gsuite_groups(session, resources.admin, config.update_tag, common_job_parameters)
  File "{dir}/cartography/intel/gsuite/api.py", line 252, in sync_gsuite_groups
    sync_gsuite_members(groups, session, admin, gsuite_update_tag)
  File "{dir}/cartography/intel/gsuite/api.py", line 257, in sync_gsuite_members
    members = get_members_for_group(admin, group['email'])
  File "{dir}/cartography/intel/gsuite/api.py", line 88, in get_members_for_group
    resp = request.execute(num_retries=GOOGLE_API_NUM_RETRIES)
  File "{dir}/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "{dir}/googleapiclient/http.py", line 856, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 404 when requesting https://www.googleapis.com/admin/directory/v1/groups/{group-name}/members?maxResults=500&alt=json returned "Resource Not Found: groupKey">

We need to figure out a longer term strategy for handling nodes that are dependent on other nodes. This involves problems like deciding which parts of a sync are able to continue when others have failed.

Sync crash: Throttling exception on route53

Stack trace

  File "{Path}/cartography/intel/aws/__init__.py", line 42, in _sync_one_account
    route53.sync_route53(session, boto3_session, account_id, sync_tag)
  File "{Path}/cartography/intel/aws/route53.py", line 256, in sync_route53
    zones = get_zones(client)
  File "{Path}/cartography/intel/aws/route53.py", line 240, in get_zones
    record_sets = get_zone_record_sets(client, hosted_zone['Id'])
  File "{Path}/cartography/intel/aws/route53.py", line 227, in get_zone_record_sets
    for page in pages:
  File "{Path}/botocore/paginate.py", line 255, in __iter__
    response = self._make_request(current_kwargs)
  File "{Path}/botocore/paginate.py", line 332, in _make_request
    return self._method(**current_kwargs)
  File "{Path}/botocore/client.py", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "{Path}/botocore/client.py", line 661, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (Throttling) when calling the ListResourceRecordSets operation (reached max retries: 4): Rate exceeded

Proposal for improvements to `exposed_internet` on Load Balancers

Hey Lyft team,

This tool is pretty neat. Thank you for building and sharing it.

Right now loadbalancers and any attached instances are flagged with exposed_internet if the scheme on the loadbalancer is internet-facing.

I believe this creates a number of false positives. My proposal is to change the logic to only flag loadbalancers and instances when the following criteria are met:

  1. Loadbalancer has an internet-facing scheme. (public DNS)
  2. Loadbalancer has a listener on the same protocol and port as one of the security group rules allowing 0.0.0.0/0. (Care must be taken for rules with "any" protocol or port ranges.)

It seems like this kind of logic should be pretty easy with a graph database, but I'm very new to neo4j. (I have been trying to come up with a query that performs something like this and outputs a list of DNS entries and ports that should be internet-reachable. I would love any help if such a query is possible.)

This is the same logic I used when I wrote the Security Monkey auditor for load balancers:
ELBs: https://github.com/Netflix/security_monkey/blob/develop/security_monkey/auditors/elb.py#L182
ALBs: https://github.com/Netflix/security_monkey/blob/develop/security_monkey/auditors/elbv2.py#L63

Please let me know what you think of this proposal. If there are reasons why you are not already looking at the listeners/sg's, I would love to learn what those are.

Thank you. Great tool!
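
As an aside, here is a rough sketch of the kind of query asked for in this proposal; the node labels, relationship types, and property names are guesses at what such a model might look like, not cartography's current schema.

# Sketch only: internet-facing load balancers whose listener port falls inside a
# security-group rule open to 0.0.0.0/0. Labels, relationships, and properties are
# illustrative guesses, not cartography's current schema. Rules with a null port
# range ("any") would need separate handling, as the proposal notes.
def get_internet_reachable_listeners(neo4j_session):
    query = """
    MATCH (lb:LoadBalancer{scheme: 'internet-facing'})-[:ELB_LISTENER]->(listener),
          (lb)-[:MEMBER_OF_EC2_SECURITY_GROUP]->(sg)<-[:MEMBER_OF_IP_RULE]-(rule),
          (rule)<-[:MEMBER_OF_IP_RULE]-(:IpRange{range: '0.0.0.0/0'})
    WHERE rule.fromport <= listener.port <= rule.toport
    RETURN lb.dnsname AS dns, listener.port AS port
    """
    return [(record["dns"], record["port"]) for record in neo4j_session.run(query)]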

SSL Error : WRONG_VERSION_NUMBER

Hi,
I am getting this SSL error:
  File "c:~\local\programs\python\python37-32\lib\site-packages\neobolt\direct.py", line 832, in _secure
    s = ssl_context.wrap_socket(s, server_hostname=host if HAS_SNI and host else None)
  File "c:~\local\programs\python\python37-32\lib\ssl.py", line 412, in wrap_socket
    session=session
  File "c:~\local\programs\python\python37-32\lib\ssl.py", line 853, in _create
    self.do_handshake()
  File "c:~\local\programs\python\python37-32\lib\ssl.py", line 1117, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056)

CRXcavator - dict object has no attribute update_tag

  File "/cartography/cli.py", line 162, in main
    return cartography.sync.run_with_config(self.sync, config)
  File "/cartography/sync.py", line 133, in run_with_config
    return sync.run(neo4j_driver, config)
  File "/cartography/sync.py", line 67, in run
    stage_func(neo4j_session, config)
  File "intelmodules/lyft/crxcavator.py", line 21, in start_crxcavator_ingestion
    common_job_parameters
  File "/cartography/intel/crxcavator/__init__.py", line 30, in start_extension_ingestion
    "UPDATE_TAG": config.update_tag,
AttributeError: 'dict' object has no attribute 'update_tag'

Handle missing networkIP while PROVISIONING

While an instance has 'status': 'PROVISIONING', the field nic['networkIP'] in the API response may not yet exist, so line 608 would result in a KeyError.

for nic in instance.get('networkInterfaces', []):
    # Make an ID for GCPNetworkInterface nodes because GCP doesn't define one but we need to uniquely identify them
    nic_id = f"{instance['partial_uri']}/networkinterfaces/{nic['name']}"
    neo4j_session.run(
        query,
        InstanceId=instance['partial_uri'],
        NicId=nic_id,
        NetworkIP=nic['networkIP'],

Recommend changing to

        NetworkIP=nic.get('networkIP'),

The Instance Life Cycle docs provide some information on the state of an instance's resources at different instance statuses.

It could be good to also ingest instance['status'] within load_gcp_instances().

Sync connections can become blocked on neo4j >=3.3, which will then cause them to be dropped due to inactivity.

References:

To summarize:

  • When sending large quantities of data to neo4j without consuming the results of those queries, various buffers can fill up and cause the connection to get "stuck" in a send which won't succeed until a recv is completed.
  • After 15 minutes in this state, neo4j will drop the connection (see stack trace from neo4j below).
  • During cleanup, the Python driver will try to write to the dropped connection and this will cause a broken pipe error (see stack trace from cartography below).

A workaround for this issue is to consume or detach the results of all queries executed during sync. I would prefer to find a neater solution but this may be the only one available.
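
A minimal sketch of that workaround with the neo4j Python driver used here: consume each result as soon as the query has run, so the server's outgoing buffers never fill up.

# Sketch only: drain every result so the connection is never left blocked on unread
# data. consume() pulls and discards the remaining records from the server.
def run_and_consume(neo4j_session, query, **kwargs):
    neo4j_session.run(query, **kwargs).consume()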

neo4j debug.log

2019-09-30 13:06:21.367+0000 ERROR [o.n.b.v.m.BoltRequestMessageReaderV3] Failed to write response to driver Bolt connection [/0:0:0:0:0:0:0:1%0:51612] will be closed because the client did not consume outgoing buffers for 00:15:00.000 which is not expected.
org.neo4j.bolt.messaging.BoltIOException: Bolt connection [/0:0:0:0:0:0:0:1%0:51612] will be closed because the client did not consume outgoing buffers for 00:15:00.000 which is not expected.
        at org.neo4j.bolt.v1.transport.ChunkedOutput.flush(ChunkedOutput.java:136)
        at org.neo4j.bolt.v1.transport.ChunkedOutput.messageSucceeded(ChunkedOutput.java:105)
        at org.neo4j.bolt.v1.messaging.BoltResponseMessageWriterV1.packCompleteMessageOrFail(BoltResponseMessageWriterV1.java:105)
        at org.neo4j.bolt.v1.messaging.BoltResponseMessageWriterV1.write(BoltResponseMessageWriterV1.java:79)
        at org.neo4j.bolt.v1.messaging.MessageProcessingHandler.onFinish(MessageProcessingHandler.java:102)
        at org.neo4j.bolt.v1.runtime.BoltStateMachineV1.after(BoltStateMachineV1.java:132)
        at org.neo4j.bolt.v1.runtime.BoltStateMachineV1.process(BoltStateMachineV1.java:97)
        at org.neo4j.bolt.messaging.BoltRequestMessageReader.lambda$doRead$1(BoltRequestMessageReader.java:89)
        at org.neo4j.bolt.runtime.DefaultBoltConnection.processNextBatch(DefaultBoltConnection.java:191)
        at org.neo4j.bolt.runtime.DefaultBoltConnection.processNextBatch(DefaultBoltConnection.java:139)
        at org.neo4j.bolt.runtime.ExecutorBoltScheduler.executeBatch(ExecutorBoltScheduler.java:171)
        at org.neo4j.bolt.runtime.ExecutorBoltScheduler.lambda$scheduleBatchOrHandleError$2(ExecutorBoltScheduler.java:154)
        at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.neo4j.bolt.transport.TransportThrottleException: Bolt connection [/0:0:0:0:0:0:0:1%0:51612] will be closed because the client did not consume outgoing buffers for 00:15:00.000 which is not expected.
        at org.neo4j.bolt.transport.TransportWriteThrottle.acquire(TransportWriteThrottle.java:101)
        at org.neo4j.bolt.v1.transport.ChunkedOutput.flush(ChunkedOutput.java:132)
        ... 15 more

cartography stack trace

Traceback (most recent call last):
  ...
  File ".../lib/python3.6/site-packages/neo4j/__init__.py", line 498, in run
    self._connection.send()
  File ".../lib/python3.6/site-packages/neobolt/direct.py", line 394, in send
    self._send()
  File ".../lib/python3.6/site-packages/neobolt/direct.py", line 409, in _send
    self.socket.sendall(data)
  File "/usr/lib/python3.6/ssl.py", line 965, in sendall
    v = self.send(data[count:])
  File "/usr/lib/python3.6/ssl.py", line 935, in send
    return self._sslobj.write(data)
  File "/usr/lib/python3.6/ssl.py", line 636, in write
    return self._sslobj.write(data)
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".../lib/python3.6/site-packages/neo4j/__init__.py", line 395, in close
    self._connection.sync()
  File ".../lib/python3.6/site-packages/neobolt/direct.py", line 505, in sync
    self.send()
  File ".../lib/python3.6/site-packages/neobolt/direct.py", line 394, in send
    self._send()
  File ".../lib/python3.6/site-packages/neobolt/direct.py", line 409, in _send
    self.socket.sendall(data)
  File "/usr/lib/python3.6/ssl.py", line 965, in sendall
    v = self.send(data[count:])
  File "/usr/lib/python3.6/ssl.py", line 935, in send
    return self._sslobj.write(data)
  File "/usr/lib/python3.6/ssl.py", line 636, in write
    return self._sslobj.write(data)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ...
  File ".../lib/python3.6/site-packages/neo4j/__init__.py", line 370, in __exit__
    self.close()
  File ".../lib/python3.6/site-packages/neo4j/__init__.py", line 397, in close
    ServiceUnavailable, SessionError):
NameError: name 'ServiceUnavailable' is not defined

Sync crash: botocore NoSuchEntity exception

Crash on sync:

botocore.errorfactory.NoSuchEntityException: An error occurred (NoSuchEntity) when calling the ListAccessKeys operation: The user with name _REDACTED_ cannot be found.

The user existed in the graph at the time it was sync'd last (https://github.com/lyft/cartography/blob/master/cartography/intel/aws/iam.py#L392)

but the user did not exist by the next sync when we tried to pull access key data:
https://github.com/lyft/cartography/blob/master/cartography/intel/aws/iam.py#L395

We might want to add an exception handler here to simply skip over the lost record.

Handle RDS instances without subnet groups

I'm getting an exception during data collection when an RDS instance doesn't have an attached DBSubnetGroup. While this is unlikely, it's possible, and the error condition should probably be handled by catching a KeyError here.
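
A minimal sketch of the guard suggested above; the surrounding loop, the keys other than DBSubnetGroup, and the decision to skip (rather than load a partial record) are assumptions.

# Sketch only: skip RDS instances that have no DBSubnetGroup attached instead of
# raising a KeyError. DBInstanceIdentifier is the standard describe_db_instances key;
# skipping (vs. loading a partial record) is an assumption about the desired behavior.
import logging

logger = logging.getLogger(__name__)


def iter_instances_with_subnet_groups(rds_instances):
    for instance in rds_instances:
        subnet_group = instance.get('DBSubnetGroup')
        if subnet_group is None:
            logger.info("RDS instance %s has no DBSubnetGroup, skipping.", instance.get('DBInstanceIdentifier'))
            continue
        yield instance, subnet_group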

Feature request: Create a Docker image

Pretty self-explanatory. Would be nice to be able to pull a Docker image instead of having to install all of the dependencies locally. Publishing the image to Docker hub would be awesome as well.

Offline Data Collection and Sync

It would be great if there were an offline mode for syncing data to the neo4j instance.

As an auditor, it would be extremely useful if there were a collector script/module that could gather all the information the cartography tool needs to sync with neo4j. The collected JSON exports could then be taken and synced to a neo4j instance on another system for later analysis.
