
edda's Introduction


DESCRIPTION

Edda is a service to track changes in your cloud deployments.

DETAILS

Please see the wiki.

SUPPORT

Edda Google group.

LICENSE

Copyright 2012-2016 Netflix, Inc.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

edda's People

Contributors

alde, atoulme, brharrington, challahc, copperlight, coryb, cquinn, e-gineer, froschi, garethbowles, hridyeshpant, iamjarvo, johnjelinek, keis, nevins-b, osterman, quidryan, randgalt, richo, rpalcolea, rspieldenner, sghill, vide, wstrucke


edda's Issues

Test fail (was: Cobertura plugin pulls in outdated log4j which conflicts with newer slf4j version)

When I try to compile, I get an error about an invalid log4j version being pulled in. When I check via "./gradlew dependencies", I see that the Cobertura plugin causes this. Removing cobertura from gradle/check.gradle lets the build continue further.

$ ./gradlew dependencies
:dependencies

------------------------------------------------------------
Root project
------------------------------------------------------------

archives - Configuration for archive artifacts.
No dependencies

checkstyle - The Checkstyle libraries to be used for this project.
No dependencies

cobertura
\--- net.sourceforge.cobertura:cobertura:1.9.4.1
     +--- oro:oro:2.0.8
     +--- asm:asm:3.0
     +--- asm:asm-tree:3.0
     |    \--- asm:asm:3.0
     +--- log4j:log4j:1.2.9
...
...
:licenseTest UP-TO-DATE
:pmdMain UP-TO-DATE
:pmdTest UP-TO-DATE
:test
[ant:scalatest] SLF4J: This version of SLF4J requires log4j version 1.2.12 or later. See also http://www.slf4j.org/codes.html#log4j_version
:test FAILED

FAILURE: Build failed with an exception.

* Where:
Build file 'C:\workspaces\edda\build.gradle' line: 74

* What went wrong:
Execution failed for task ':test'.
> ScalaTest run failed.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Is there a limit to how many resources edda will crawl in one pass?

I've noticed that our edda instance is under-reporting the number of RDS instances that it's recording activity on. When I run curl -s "$EDDA/aws/databases;_pp" | wc -l

I am currently getting back 223 RDS instance names. From a scan of just four of our AWS accounts (we have more), we have a combined 397 RDS instances, so I'm not sure where those other instances are going. Is there some kind of limit to how many RDS instances Edda will scan from AWS?

Another clue regarding the 100-instance limit is my edda logs, which always show 100 resources being crawled at a time for collections that I know have more than 100 RDS instances:

2013-11-10 07:50:03.146 - INFO  - [Crawler.scala:130] [Crawler green.aws.databases] Crawled 100 records in 1.10 sec
2013-11-10 07:50:03.156 - INFO  - [Collection.scala:286] [Collection green.aws.databases] total: 100 changed: 40 added: 0 removed: 0

That collection green.aws.databases currently has 154 RDS instances associated with it in AWS. Is there a configuration setting I've overlooked? Thanks!
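
For context, DescribeDBInstances returns at most 100 instances per call, so a crawler has to follow the pagination marker to see everything. Below is a minimal sketch of that loop with the AWS SDK for Java v1; it illustrates the pagination pattern only and is not Edda's actual crawler code.

import com.amazonaws.services.rds.AmazonRDSClient
import com.amazonaws.services.rds.model.{DBInstance, DescribeDBInstancesRequest}
import scala.collection.JavaConverters._
import scala.collection.mutable.ListBuffer

// Illustrative helper: fetch every RDS instance by following the pagination marker.
def describeAllDbInstances(rds: AmazonRDSClient): Seq[DBInstance] = {
  val all = ListBuffer[DBInstance]()
  val request = new DescribeDBInstancesRequest()
  var done = false
  while (!done) {
    val result = rds.describeDBInstances(request)
    all ++= result.getDBInstances.asScala
    Option(result.getMarker) match {
      case Some(marker) => request.setMarker(marker) // more pages remain
      case None         => done = true               // last page reached
    }
  }
  all.toList
}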

Newly added regions fail to authenticate

The error is:

com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 784076F1E98745AC, AWS Error Code: InvalidRequest, AWS Error Message: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256., S3 Extended Request ID: dVjKGfS8GwbeyPwvkBqVZ+OFcYzgw5YUl8erHWdoMRZ9gXkKEtHzdTqIZa/rGT3IWlsXUKrGQE4=

The old regions work, but some of the new ones don't. Specifically, us-east-2, eu-west-2, eu-west-3, and eu-central-1 seem to fail.

AWS says:
"Passed [this to] a member of our S3 support team who stated he had seen similar errors in the past. For those it was usually related to using older versions of our code signing signature in new regions. A break down of which regions support which versions of the signature can be found here: https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.aws.amazon.com_general_latest_gr_rande.html-23s3-5Fregion&d=DwICaQ&c=udBTRvFvXC5Dhqg7UHpJlPps3mZ3LRxpb6__0PomBTQ&r=nHtjxYB2z8kPeV8bKkS2HKIYxOgz4isoxyTRvFzRWsAF_5OCaT8sVtGmeamSMNn2&m=I-w_2SxO7FOZj3O-3d9ZjbSQjlU56krE8TZ1nfOPTT4&s=XSo8F2lKRitvPjwzdQfFuGg0r3SFvzvFj7w9rOPbT3A&e"

and that the software application needs to be updated.

What version does the software need to be upgraded to in order to use Signature Version 4 ("SigV4") authentication?

Bob
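
For what it's worth, the newer regions listed above only accept Signature Version 4, so the S3 client has to be built with a recent AWS SDK for Java v1 and either an explicit region or a SigV4 signer override. A rough sketch of that client setup follows; this is an assumption about the fix, not Edda's actual client wiring.

import com.amazonaws.ClientConfiguration
import com.amazonaws.regions.{Region, Regions}
import com.amazonaws.services.s3.AmazonS3Client

// Rough sketch: force SigV4 signing for an S3 client pointed at a newer region.
val config = new ClientConfiguration().withSignerOverride("AWSS3V4SignerType")
val s3 = new AmazonS3Client(config)
s3.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))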

Upgrade ElasticSearchDatastore client

Hi, it would be nice if you could upgrade the Elasticsearch client
"org.elasticsearch" % "elasticsearch" % "0.90.3"
as the current version is very old and deprecated.
At my company, we are trying to run the edda service, but we only have an Elasticsearch 7 cluster.

Leader instance is always giving wrong mtime for resources

The leader instance has a write-through in-memory cache, but it does not return the correct mtime; it is usually the start time of Jetty or the time when the node was elected leader.

However, followers return an updated mtime because they refresh their cache at regular intervals.

Problem using multiple accounts?

I was not able to get edda working with multiple accounts.

If edda.aws.accessKey and edda.aws.secretKey are not set, then edda always fails with "com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain". This happens even if you have edda.accounts and the various edda.$account config options set.

If I set edda.aws.accessKey and edda.aws.secretKey in addition to the edda.$account config options, then edda appears to always use the default keys, not the account-specific ones.

Sorry I couldn't debug further to suggest a fix or confirm that I was doing something wrong; I got lost somewhere inside AwsCollectionBuilder::buildAll...

Thanks, Nathan
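
I have not traced the account wiring either, but the behaviour described (falling back to the default provider chain and ignoring per-account keys) suggests the per-account credentials are never wrapped into their own provider. For comparison, here is a minimal sketch of selecting account-specific static keys with a fallback to the SDK's default chain. The property names below are assumptions for illustration, not Edda's actual config keys, and AWSStaticCredentialsProvider needs a reasonably recent AWS SDK for Java v1.

import java.util.Properties
import com.amazonaws.auth.{AWSCredentialsProvider, AWSStaticCredentialsProvider, BasicAWSCredentials, DefaultAWSCredentialsProviderChain}

// Hypothetical helper: prefer account-specific keys, fall back to the default chain.
def credentialsFor(account: String, props: Properties): AWSCredentialsProvider = {
  val key    = props.getProperty(s"edda.$account.aws.accessKey")
  val secret = props.getProperty(s"edda.$account.aws.secretKey")
  if (key != null && secret != null)
    new AWSStaticCredentialsProvider(new BasicAWSCredentials(key, secret))
  else
    new DefaultAWSCredentialsProviderChain()
}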

IAM crawlers do not process paginated results

Currently, all of the IAM crawlers process only the first page of these results:

ListAccessKeysResult
ListGroupPoliciesResult
ListGroupsResult
ListMFADevicesResult
ListRolesResult
ListUserPoliciesResult
ListUsersResult

They need to be checked and paginated using result.getIsTruncated() and result.getMarker(), then calling setMarker() on the subsequent request.
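
As an illustration of that pattern (not the actual crawler code), draining all pages of ListUsers with the AWS SDK for Java v1 looks roughly like this; the other List* calls follow the same shape.

import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient
import com.amazonaws.services.identitymanagement.model.{ListUsersRequest, User}
import scala.collection.JavaConverters._
import scala.collection.mutable.ListBuffer

// Sketch of following isTruncated/marker until the last page is reached.
def listAllUsers(iam: AmazonIdentityManagementClient): Seq[User] = {
  val users = ListBuffer[User]()
  val request = new ListUsersRequest()
  var done = false
  while (!done) {
    val result = iam.listUsers(request)
    users ++= result.getUsers.asScala
    if (result.getIsTruncated) request.setMarker(result.getMarker) // fetch the next page
    else done = true
  }
  users.toList
}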

@e-gineer since you contributed the code, perhaps you could look into this? (Or @ralph-tice, since you were interested in these collections?) We don't use the IAM collections at Netflix, so we have not noticed the problem, but Bob Brown on the edda-users list ran into this issue.

-Cory

Edda not collecting all VPCs

Hi,
I have multiple VPCs per account. When I configure edda, it only collects information for the default VPC in each account. I could not see any VPC-related setting in the edda properties file.
Am I missing something?

Exception when running on a dev machine: can't find master

Is it possible to run Edda on a non-EC2 instance, just to try it out? I'm getting stuck at this exception:

com.mongodb.MongoException: can't find a master
  at com.mongodb.DBTCPConnector.checkMaster(DBTCPConnector.java:406)
  at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:193)
  at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:303)
  at com.mongodb.DB.getCollectionNames(DB.java:266)
  at com.mongodb.DB.collectionExists(DB.java:307)
  at com.netflix.edda.mongo.MongoDatastore$.mongoCollection(MongoDatastore.scala:146)
  at com.netflix.edda.mongo.MongoElector.liftedTree1$1(MongoElector.scala:38)
  at com.netflix.edda.mongo.MongoElector.<init>(MongoElector.scala:37)
  at com.netflix.edda.basic.BasicServer.init(BasicServer.scala:42)
  at javax.servlet.GenericServlet.init(GenericServlet.java:241)
  at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:440)
  at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:263)
  at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 ...

Since this is my dev machine, I'm not running a replica set; it's just my local mongod. I don't get why it isn't finding a master, since localhost is the master:

> rs.isMaster()
{ "ismaster" : true, "maxBsonObjectSize" : 16777216, "ok" : 1 }

I don't have any username/password protection for the database. I've only changed the edda.aws.accessKey and edda.aws.secretKey (the environment variables weren't being picked up), and set EC2_INSTANCE_ID to a real instance's ID (seemed to be required).

Feature Request

From an instance lookup, be able to identify the ELBs the instance belongs to.

aws.databases collection creates too many document revisions

It looks like a new document is being created every time a snapshot/backup is taken on an RDS database. This has the effect of creating way too many document revisions. We need to filter out latestRestorableTime when comparing records, in an overload of Collection.newStateTimeForChange (a rough sketch of the idea follows the diff output below).

-Cory

$ curl 'http://localhost:8080/edda/api/v2/aws/databases/<dbname>;_all;_limit=10;_diff=0'
--- /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370296312238
+++ /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370296672399
@@ -36,1 +36,1 @@
-  "latestRestorableTime" : "2013-06-03T21:50:00.000Z",
+  "latestRestorableTime" : "2013-06-03T21:55:00.000Z",
--- /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370296071805
+++ /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370296312238
@@ -36,1 +36,1 @@
-  "latestRestorableTime" : "2013-06-03T21:45:00.000Z",
+  "latestRestorableTime" : "2013-06-03T21:50:00.000Z",
--- /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370295708398
+++ /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370296071805
@@ -36,1 +36,1 @@
-  "latestRestorableTime" : "2013-06-03T21:40:00.000Z",
+  "latestRestorableTime" : "2013-06-03T21:45:00.000Z",
--- /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370295468396
+++ /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370295708398
@@ -36,1 +36,1 @@
-  "latestRestorableTime" : "2013-06-03T21:35:00.000Z",
+  "latestRestorableTime" : "2013-06-03T21:40:00.000Z",
--- /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370295108221
+++ /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370295468396
@@ -36,1 +36,1 @@
-  "latestRestorableTime" : "2013-06-03T21:30:00.000Z",
+  "latestRestorableTime" : "2013-06-03T21:35:00.000Z",
--- /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370294868640
+++ /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370295108221
@@ -36,1 +36,1 @@
-  "latestRestorableTime" : "2013-06-03T21:25:00.000Z",
+  "latestRestorableTime" : "2013-06-03T21:30:00.000Z",
--- /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370294508410
+++ /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370294868640
@@ -36,1 +36,1 @@
-  "latestRestorableTime" : "2013-06-03T21:20:00.000Z",
+  "latestRestorableTime" : "2013-06-03T21:25:00.000Z",
--- /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370294267931
+++ /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370294508410
@@ -36,1 +36,1 @@
-  "latestRestorableTime" : "2013-06-03T21:15:00.000Z",
+  "latestRestorableTime" : "2013-06-03T21:20:00.000Z",
--- /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370293907999
+++ /edda/api/v2/aws/databases/<dbname>;_pp;_at=1370294267931
@@ -36,1 +36,1 @@
-  "latestRestorableTime" : "2013-06-03T21:10:00.000Z",
+  "latestRestorableTime" : "2013-06-03T21:15:00.000Z",

Release Request

Could you add the compiled war files to the Release pages for v2.3.0 and v2.2.0 as you did for v2.1? I would like to update the Edda Zero to Docker builds to reflect the newer versions.

Build is broken

This is what I get when I try to build master:

:test FAILED

FAILURE: Build failed with an exception.

* Where:
Build file '/edda/build.gradle' line: 83

* What went wrong:
Execution failed for task ':test'.
> ScalaTest run failed.

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':test'.
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:72)
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:49)
    at org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:34)
    at org.gradle.api.internal.changedetection.CacheLockHandlingTaskExecuter$1.run(CacheLockHandlingTaskExecuter.java:34)
    at org.gradle.internal.Factories$1.create(Factories.java:22)
    at org.gradle.cache.internal.DefaultCacheAccess.longRunningOperation(DefaultCacheAccess.java:179)
    at org.gradle.cache.internal.DefaultCacheAccess.longRunningOperation(DefaultCacheAccess.java:232)
    at org.gradle.cache.internal.DefaultPersistentDirectoryStore.longRunningOperation(DefaultPersistentDirectoryStore.java:142)
    at org.gradle.api.internal.changedetection.DefaultTaskArtifactStateCacheAccess.longRunningOperation(DefaultTaskArtifactStateCacheAccess.java:83)
    at org.gradle.api.internal.changedetection.CacheLockHandlingTaskExecuter.execute(CacheLockHandlingTaskExecuter.java:32)
    at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:55)
    at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:57)
    at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:41)
    at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:51)
    at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:52)
    at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:42)
    at org.gradle.api.internal.AbstractTask.executeWithoutThrowingTaskFailure(AbstractTask.java:275)
    at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.executeTask(DefaultTaskPlanExecutor.java:52)
    at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.processTask(DefaultTaskPlanExecutor.java:38)
    at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.process(DefaultTaskPlanExecutor.java:30)
    at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter.execute(DefaultTaskGraphExecuter.java:84)
    at org.gradle.execution.SelectedTaskExecutionAction.execute(SelectedTaskExecutionAction.java:29)
    at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61)
    at org.gradle.execution.DefaultBuildExecuter.access$200(DefaultBuildExecuter.java:23)
    at org.gradle.execution.DefaultBuildExecuter$2.proceed(DefaultBuildExecuter.java:67)
    at org.gradle.api.internal.changedetection.TaskCacheLockHandlingBuildExecuter$1.run(TaskCacheLockHandlingBuildExecuter.java:31)
    at org.gradle.internal.Factories$1.create(Factories.java:22)
    at org.gradle.cache.internal.DefaultCacheAccess.useCache(DefaultCacheAccess.java:124)
    at org.gradle.cache.internal.DefaultCacheAccess.useCache(DefaultCacheAccess.java:112)
    at org.gradle.cache.internal.DefaultPersistentDirectoryStore.useCache(DefaultPersistentDirectoryStore.java:134)
    at org.gradle.api.internal.changedetection.DefaultTaskArtifactStateCacheAccess.useCache(DefaultTaskArtifactStateCacheAccess.java:79)
    at org.gradle.api.internal.changedetection.TaskCacheLockHandlingBuildExecuter.execute(TaskCacheLockHandlingBuildExecuter.java:29)
    at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61)
    at org.gradle.execution.DefaultBuildExecuter.access$200(DefaultBuildExecuter.java:23)
    at org.gradle.execution.DefaultBuildExecuter$2.proceed(DefaultBuildExecuter.java:67)
    at org.gradle.execution.DryRunBuildExecutionAction.execute(DryRunBuildExecutionAction.java:32)
    at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61)
    at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:54)
    at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:166)
    at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:113)
    at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:81)
    at org.gradle.launcher.cli.ExecuteBuildAction.run(ExecuteBuildAction.java:38)
    at org.gradle.launcher.exec.InProcessGradleLauncherActionExecuter.execute(InProcessGradleLauncherActionExecuter.java:39)
    at org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:45)
    at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:34)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:42)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:24)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.StartStopIfBuildAndStop.execute(StartStopIfBuildAndStop.java:33)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.ReturnResult.execute(ReturnResult.java:34)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:70)
    at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:68)
    at org.gradle.util.Swapper.swap(Swapper.java:38)
    at org.gradle.launcher.daemon.server.exec.ForwardClientInput.execute(ForwardClientInput.java:68)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:60)
    at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:34)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:59)
    at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:34)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy$1.run(StartBuildOrRespondWithBusy.java:45)
    at org.gradle.launcher.daemon.server.DaemonStateCoordinator.runCommand(DaemonStateCoordinator.java:186)
    at org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy.doBuild(StartBuildOrRespondWithBusy.java:49)
    at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:34)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.HandleStop.execute(HandleStop.java:36)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.CatchAndForwardDaemonFailure.execute(CatchAndForwardDaemonFailure.java:32)
    at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:125)
    at org.gradle.launcher.daemon.server.exec.DefaultDaemonCommandExecuter.executeCommand(DefaultDaemonCommandExecuter.java:48)
    at org.gradle.launcher.daemon.server.DefaultIncomingConnectionHandler$ConnectionWorker.handleCommand(DefaultIncomingConnectionHandler.java:155)
    at org.gradle.launcher.daemon.server.DefaultIncomingConnectionHandler$ConnectionWorker.receiveAndHandleCommand(DefaultIncomingConnectionHandler.java:128)
    at org.gradle.launcher.daemon.server.DefaultIncomingConnectionHandler$ConnectionWorker.run(DefaultIncomingConnectionHandler.java:116)
    at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:66)
Caused by: : ScalaTest run failed.
    at org.scalatest.tools.ScalaTestAntTask.execute(ScalaTestAntTask.scala:279)
    at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
    at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
    at org.gradle.api.internal.project.ant.BasicAntBuilder.nodeCompleted(BasicAntBuilder.java:71)
    at org.gradle.api.internal.project.ant.BasicAntBuilder.doInvokeMethod(BasicAntBuilder.java:86)
    at org.gradle.api.internal.project.DefaultAntBuilder.super$3$invokeMethod(DefaultAntBuilder.groovy)
    at org.gradle.api.internal.project.DefaultAntBuilder.invokeMethod(DefaultAntBuilder.groovy:37)
    at build_3kbbvefjgtk85j3ao4u5vcml6v$_run_closure4.doCall(/edda/build.gradle:83)
    at org.gradle.api.internal.AbstractTask$ClosureTaskAction.execute(AbstractTask.java:485)
    at org.gradle.api.internal.AbstractTask$ClosureTaskAction.execute(AbstractTask.java:469)
    at org.gradle.api.internal.tasks.TaskStatusNagger$1.execute(TaskStatusNagger.java:78)
    at org.gradle.api.internal.tasks.TaskStatusNagger$1.execute(TaskStatusNagger.java:74)
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:64)
    ... 78 more

MergedCollectionTest is flaky

[info] Tests: succeeded 52, failed 1, canceled 0, ignored 0, pending 0
[info] *** 1 TEST FAILED ***
[error] Failed tests:
[error]     com.netflix.edda.MergedCollectionTest
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful

edda build

Hello,

This is more a question than an issue.
I have followed https://github.com/Netflix/edda/wiki/Quick-Start-Guide and ran buildViaTravis.sh on a mac laptop.

I think the build was successful -
[info] All tests passed.
[success] Total time: 47 s, completed 15-Feb-2017 08:02:46
[info] all files have correct license header
[success] Total time: 0 s, completed 15-Feb-2017 08:02:46

Wondering what I need to do next. Is there a specific document I should follow to get the installation working and Jetty running?

I'd appreciate your input.

  • S

Reserved Instances

It doesn't look like reserved instances are currently being collected. Am I missing something? Is it planned?

Too much data without an index error

Saw the following Mongo exception during startup.

2013-01-09 17:31:30.993 - ERROR - [StateMachine.scala:179] [Collection aws.tags] caught exception
com.mongodb.MongoException: too much data for sort() with no index.  add an index or specify a smaller limit
    at com.mongodb.MongoException.parse(MongoException.java:82)
    at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:310)
    at com.mongodb.DBCursor._check(DBCursor.java:360)
    at com.mongodb.DBCursor._hasNext(DBCursor.java:490)
    at com.mongodb.DBCursor.hasNext(DBCursor.java:515)
    at scala.collection.JavaConversions$JIteratorWrapper.hasNext(JavaConversions.scala:574)
    at scala.collection.Iterator$class.foreach(Iterator.scala:660)
    at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
    at scala.collection.JavaConversions$JIterableWrapper.foreach(JavaConversions.scala:587)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:194)
    at scala.collection.JavaConversions$JIterableWrapper.map(JavaConversions.scala:587)
    at com.netflix.edda.mongo.MongoDatastore.load(MongoDatastore.scala:201)
    at com.netflix.edda.Collection.load(Collection.scala:158)
    at com.netflix.edda.Collection.initState(Collection.scala:227)
    at com.netflix.edda.StateMachine.act(StateMachine.scala:149)
    at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:222)
    at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:222)
    at scala.actors.ReactorTask.run(ReactorTask.scala:33)
    at scala.concurrent.forkjoin.ForkJoinPool$AdaptedRunnable.exec(ForkJoinPool.java:611)
    at scala.concurrent.forkjoin.ForkJoinTask.quietlyExec(ForkJoinTask.java:422)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.mainLoop(ForkJoinWorkerThread.java:340)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:325)
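
The usual fix for this Mongo error is to add an index on the field that the load query sorts by. Below is a sketch with the legacy Mongo Java driver this stack trace comes from; the field name (stime) is an assumption about what MongoDatastore.load sorts on.

import com.mongodb.{BasicDBObject, DBCollection}

// Sketch: create a descending index on the (assumed) sort field so that
// sort() no longer has to buffer the whole collection in memory.
def ensureSortIndex(collection: DBCollection): Unit =
  collection.createIndex(new BasicDBObject("stime", -1))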

loadBalancers crawl failing

The latest change breaks the loadBalancers crawl:
c61ccf6

2017-02-26 14:06:00.895 - ERROR - [StateMachine.scala:204] [45300f82-d2cd-4b32-930e-f550d4cfe4b8 refresh] failed to handle event Crawl([Collection us-east-1.aws.loadBalancers] refresher)
java.lang.ClassCastException: scala.collection.immutable.$colon$colon cannot be cast to com.netflix.edda.Record
at com.netflix.edda.aws.AwsLoadBalancerCrawler$$anonfun$19.apply(AwsCrawlers.scala:370)
at com.netflix.edda.aws.AwsLoadBalancerCrawler$$anonfun$19.apply(AwsCrawlers.scala:370)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.toStream(Iterator.scala:1322)
at scala.collection.AbstractIterator.toStream(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toSeq(TraversableOnce.scala:298)
at scala.collection.AbstractIterator.toSeq(Iterator.scala:1336)
at com.netflix.edda.aws.AwsLoadBalancerCrawler.doCrawl(AwsCrawlers.scala:370)
at com.netflix.edda.aws.AwsLoadBalancerCrawler.doCrawl(AwsCrawlers.scala:352)
at com.netflix.edda.Crawler$$anonfun$localTransitions$1.applyOrElse(Crawler.scala:132)
at com.netflix.edda.Crawler$$anonfun$localTransitions$1.applyOrElse(Crawler.scala:119)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at com.netflix.edda.StateMachine$$anonfun$act$1$$anonfun$applyOrElse$2$$anonfun$apply$1.applyOrElse(StateMachine.scala:201)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at scala.actors.ReactorTask.run(ReactorTask.scala:31)
at scala.actors.Reactor$class.resumeReceiver(Reactor.scala:130)
at com.netflix.edda.StateMachine.scala$actors$InternalReplyReactor$$super$resumeReceiver(StateMachine.scala:100)
at scala.actors.InternalReplyReactor$class.resumeReceiver(InternalReplyReactor.scala:60)
at com.netflix.edda.StateMachine.resumeReceiver(StateMachine.scala:100)
at scala.actors.InternalActor$class.searchMailbox(InternalActor.scala:76)
at com.netflix.edda.StateMachine.searchMailbox(StateMachine.scala:100)
at scala.actors.Reactor$$anonfun$startSearch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Reactor.scala:118)
at scala.actors.Reactor$$anonfun$startSearch$1$$anonfun$apply$mcV$sp$1.apply(Reactor.scala:115)
at scala.actors.Reactor$$anonfun$startSearch$1$$anonfun$apply$mcV$sp$1.apply(Reactor.scala:115)
at scala.actors.ReactorTask.run(ReactorTask.scala:33)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

aws.hostedRecords collection creates too many records

This is related to issue #7.

There seems to be a problem with hostedRecords creating too many document revisions. I have found thousands of new documents with content identical to the previous revision (_diff also shows no output). My hunch is that multiple records are colliding, perhaps multiple CNAME records acting as a round-robin DNS entry. So at a minimum I think we cannot use the "name" attribute as the id, but looking at the resource I don't see any other attribute that could be used as the resource id. We probably need to keep the recordSets together as a single resource.

-Cory
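
A rough sketch of the "keep the recordSets together" idea: group all resource record sets of a zone into one document keyed by the zone, so colliding names (for example multiple round-robin CNAMEs) no longer create extra documents. This only illustrates the grouping with the SDK's record set type; it is not the actual crawler change.

import com.amazonaws.services.route53.model.ResourceRecordSet

// Sketch: one logical resource per hosted zone, containing all of its record sets.
def groupByZone(zoneId: String, recordSets: Seq[ResourceRecordSet]): Map[String, Any] =
  Map(
    "zoneId"     -> zoneId,
    "recordSets" -> recordSets.sortBy(rs => (rs.getName, rs.getType))
  )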

only 100 records

@alde @brharrington could you help me understand: Edda is not showing all data, only storing part of it.

1. aws.stacks shows only 100 records, but we have more than 100 stacks.
db.aws.stacks.count()
100
2. loadBalancers is also not showing all ELBs; there are more than 2000 ELBs in the account, but edda is storing only a few.
db.aws.loadBalancers.count()
402

Am I missing some configuration setting?

Instance State is not changing

I was running a report that simply pulls the data from edda and noticed that several instances I knew had been stopped were showing as running. I verified this in the AWS console.

I then went back and modified a couple of other instances by putting them into the stopped state. They are stopped but Edda is still showing them as running.

compileScala error

I'm hitting the following build error while compiling Scala when I run 'gradlew build':

:compileScala
[ant:scalac] /home/ec2-user/edda/src/main/scala/com/netflix/edda/aws/AwsCollections.scala:24: error: object RootCollection is not a member of package com.netflix.edda
[ant:scalac] import com.netflix.edda.RootCollection
[ant:scalac]        ^
[ant:scalac] /home/ec2-user/edda/src/main/scala/com/netflix/edda/aws/AwsCollections.scala:95: error: not found: type RootCollection
[ant:scalac]   def mkCollections(ctx: AwsCollection.Context, accountName: String, elector: Elector): Seq[RootCollection] = {
[ant:scalac]                                                                                             ^
[ant:scalac] error: bad symbolic reference. A signature in Collection.class refers to type CollectionState
[ant:scalac] in package com.netflix.edda which is not available.
[ant:scalac] It may be completely missing from the current classpath, or the version on
[ant:scalac] the classpath might be incompatible with the version used when compiling Collection.class.
[ant:scalac] /home/ec2-user/edda/src/main/scala/com/netflix/edda/aws/AwsCollections.scala:76: error: value groupBy is not a member of Array[Nothing]
[ant:scalac] possible cause: maybe a semicolon is missing before `value groupBy'?
[ant:scalac]           }).groupBy(_._1).mapValues(c => c.map(x => x._2))

Any help or insight on this would be greatly appreciated. I'm happy to provide more info if needed. Thanks!

AWS API request volume

Hi, is there a way to throttle or reduce the number of requests, so that other apps using the API under the same account will not suffer from AWS-initiated throttling?

typos or incorrect configuration options are silently ignored

In an increasingly complex edda.properties file, it is too easy to make a typo in an account.name or to forget part of the proper suffix for config options.

Look into the ability to sanity-check configuration options, especially around account credentials (which will silently inherit higher-level credentials or try to auto-discover via instance metadata).

At the least, consider adding informational log messages about credential providers and what source the credentials were resolved from (or whether they were auto-discovered via metadata).
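
A lightweight version of that sanity check could simply walk the configured accounts and log where each one's credentials would come from. A sketch along those lines follows; the property names are assumptions for illustration, not Edda's actual keys.

import java.util.Properties
import org.slf4j.LoggerFactory

val logger = LoggerFactory.getLogger("edda.config")

// Sketch: warn when an account has no explicit keys and will silently fall back
// to inherited credentials or instance-metadata discovery.
def checkAccountCredentials(props: Properties): Unit = {
  val accounts = Option(props.getProperty("edda.accounts"))
    .map(_.split(",").map(_.trim).toSeq)
    .getOrElse(Seq.empty)
  accounts.foreach { account =>
    val hasKeys = props.getProperty(s"edda.$account.aws.accessKey") != null &&
                  props.getProperty(s"edda.$account.aws.secretKey") != null
    if (hasKeys) logger.info(s"account $account: using explicit access/secret keys")
    else logger.warn(s"account $account: no explicit keys; falling back to inherited credentials or instance metadata")
  }
}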

EC2 Instance-Tags with a "." (dot) cannot be inserted into mongodb

We use tag names to mark EC2 instances with additional information. Some of them contain a dot, which Amazon handles just fine; however, MongoDB seems to reject these as keys:

2013-07-10 15:27:25.087 - ERROR - [MongoDatastore.scala:339] failed to upsert record:       
...
"tags": [{
            "Client.E-mail": "[email protected]",
        },
...
scala.actors.Actor$$anon$1@ab81c86: caught java.lang.IllegalArgumentException: fields stored in the db can't have . in them
java.lang.IllegalArgumentException: fields stored in the db can't have . in them
    at com.mongodb.DBCollection._checkKeys(DBCollection.java:1087)
...
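
One common workaround is to escape the dots in key names before writing to Mongo (with a matching unescape applied when reading back). Below is a minimal, hypothetical sketch of that escaping step, not Edda's actual fix; the replacement character is just one possible placeholder.

// Sketch: replace '.' in map keys with a placeholder MongoDB will accept,
// recursing into nested maps and sequences.
def escapeKeys(value: Any): Any = value match {
  case m: Map[_, _] =>
    m.asInstanceOf[Map[String, Any]].map {
      case (k, v) => k.replace(".", "\uFF0E") -> escapeKeys(v)
    }
  case seq: Seq[_] => seq.map(escapeKeys)
  case other       => other
}

// e.g. the tag key "Client.E-mail" would be stored with a fullwidth dot and
// mapped back to the original name on the way out.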

missing aws.stacks collection

I am using edda 2.1.0 and found that the collection api/v2/aws.stacks is missing.
Are we not collecting aws.stacks any more?

I am pretty sure I was using this collection in an older version.

Issue with leader election with two nodes

I have been trying to run a two-node edda cluster with one node serving as the elected leader and acting as the primary AWS crawler, and the other node acting as just another powerless citizen merely serving API requests. This setup worked well for about 35 minutes as I monitored both CPU usage and leader-election results in the logs of both nodes. However, after about 35 to 40 minutes, both nodes go into a state where they each think they are the leader (a sort of split brain), both nodes start to crawl AWS, and the Java process on both nodes (m1.mediums) pins the CPU at 100%.

I'm just wondering if anyone else has run into a similar issue running an edda cluster. Would a three-node cluster be a better fit for the leader-election algorithm used? As an alternative, is there any way to force a node to operate only as an API node? Would you say doing leader election more often is better?
