
priam's People

Contributors

adityajami, alfasin, arunagrawal-84, arunagrawal84, aryanet, ayushisingh29, chengw-netflix, codyaray, cthumuluru, dtrebbien, gopaczewski, hashbrowncipher, jasobrown, jcacciatore, jkschneider, jolynch, mattl-netflix, psadhu, randgalt, rpalcolea, sagarl, schakrovorthy, sumanth-pasupuleti, tieum, timiblossom, trams, tulumvinh, vijay2win, vinaykumarchella, zmarois


priam's Issues

Snapshot upload to S3 fails

Hi,

The upload of a snapshot to S3 fails. It tries to upload as soon as we create the keyspace and column family. We have not modified any default parameters; our Priam properties file has only the entries for the cluster name and zones.

Following is the error log:

2012-03-22 13:31:05.0754 INFO DefaultQuartzScheduler_Worker-9 com.netflix.priam.aws.S3FileSystem Uploading to backup/us-east-1/NithyaTest1/1808575600/201203221330/SST/system/Schema-g-1-Index.db with chunk size 10
2012-03-22 13:31:05.0907 ERROR DefaultQuartzScheduler_Worker-9 com.netflix.priam.scheduler.Task Couldnt execute the task because of Error uploading file Schema-g-1-Index.db
com.netflix.priam.backup.BackupRestoreException: Error uploading file Schema-g-1-Index.db
at com.netflix.priam.aws.S3FileSystem.upload(S3FileSystem.java:150)
at com.netflix.priam.backup.AbstractBackup$1.retriableCall(AbstractBackup.java:71)
at com.netflix.priam.backup.AbstractBackup$1.retriableCall(AbstractBackup.java:67)
at com.netflix.priam.utils.RetryableCallable.call(RetryableCallable.java:43)
at com.netflix.priam.backup.AbstractBackup.upload(AbstractBackup.java:66)
at com.netflix.priam.backup.AbstractBackup.upload(AbstractBackup.java:54)
at com.netflix.priam.backup.IncrementalBackup.execute(IncrementalBackup.java:41)
at com.netflix.priam.scheduler.Task.execute(Task.java:80)
at org.quartz.core.JobRunShell.run(JobRunShell.java:199)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:546)
Caused by: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: A83BA4E72320F418, AWS Error Code: EntityTooSmall, AWS Error Message: Your proposed upload is smaller than the minimum allowed size, S3 Extended Request ID: hSO8Id5oePWBcdNA9JcHeTAjgfr9+WS2J++P914Zpu5EJIjP8iZ1zuWcLjiay8e3
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:548)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:288)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:170)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2619)
at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:1892)
at com.netflix.priam.aws.S3PartUploader.completeUpload(S3PartUploader.java:58)
at com.netflix.priam.aws.S3FileSystem.upload(S3FileSystem.java:145)

Thanks.
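For context: the EntityTooSmall error in the trace above is S3 rejecting a multipart upload whose non-final part is below S3's 5 MiB minimum part size; a tiny system SSTable like Schema-g-1-Index.db can trip this if every file goes through the multipart path. A minimal sketch of the size check an uploader could apply (the class and method names are illustrative, not Priam's):

```java
// Sketch: deciding between a single PUT and a multipart upload, assuming
// the uploader currently sends everything multipart. S3 rejects any
// non-final multipart part smaller than 5 MiB with EntityTooSmall.
public class UploadPlanner {
    static final long MIN_PART_SIZE = 5L * 1024 * 1024; // S3 minimum part size

    /** True when the file is large enough to justify multipart upload. */
    static boolean useMultipart(long fileSize) {
        return fileSize > MIN_PART_SIZE;
    }

    public static void main(String[] args) {
        long schemaIndexDb = 4096;           // tiny system SSTable
        long bigSstable = 64L * 1024 * 1024; // 64 MiB data file
        System.out.println(useMultipart(schemaIndexDb)); // false -> plain PUT
        System.out.println(useMultipart(bigSstable));    // true  -> multipart
    }
}
```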

Backup/restore commit logs

Right now Priam doesn't support consistent backups: only flushed SSTables are stored in S3, and all information in the commit logs is lost. If I restore a cluster from such a backup, I get an inconsistent state. Cassandra 1.1 supports commit log archiving, so if Priam saved both SSTables and commit logs, it could restore the cluster to a consistent state.

Thank you,
Andrey
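For reference, the Cassandra 1.1 commit log archiving that the report mentions is configured through conf/commitlog_archiving.properties. A minimal sketch, with placeholder archive paths:

```properties
# Sketch of conf/commitlog_archiving.properties (Cassandra 1.1+).
# %path and %name are substituted by Cassandra; the /mnt targets are placeholders.
archive_command=/bin/cp %path /mnt/commitlog_archive/%name
restore_command=/bin/cp -f %from %to
restore_directories=/mnt/commitlog_archive
restore_point_in_time=
```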

RFE: Make the Priam WAR base REST URL configurable, with a default of the current value (backwards compatible)

I looked at the source. There are four references that are hard-coded to start with:

'http://127.0.0.1:8080/Priam'

in NFThinCassandraDaemon.java & NFSeedProvider.java.

The base URL above is fine for a default, but it would be useful to be able to specify (in decreasing order of usefulness):

  • the port number
  • the WAR name
  • the regular IP address of the local machine (rather than just localhost (127.0.0.1))
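The request above could be satisfied by deriving the base URL from overridable settings instead of a constant. A sketch, assuming hypothetical priam.api.* system properties (the current code hard-codes the full URL):

```java
// Sketch: assembling the Priam base REST URL from overridable settings.
// The property names (priam.api.host/port/context) are hypothetical;
// today the code hard-codes http://127.0.0.1:8080/Priam.
public class PriamBaseUrl {
    static String baseUrl() {
        String host = System.getProperty("priam.api.host", "127.0.0.1");
        String port = System.getProperty("priam.api.port", "8080");
        String context = System.getProperty("priam.api.context", "Priam");
        return "http://" + host + ":" + port + "/" + context;
    }

    public static void main(String[] args) {
        // With no overrides set, this reproduces the current default.
        System.out.println(baseUrl()); // http://127.0.0.1:8080/Priam
    }
}
```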

why auto_bootstrap for all new nodes

I might be missing something, but I am not totally new to Cassandra, so this leaves me a bit puzzled; correct me if I am wrong. auto_bootstrap is a mechanism for a new node to automatically assume responsibility for a token range based on the existing token ring, and I have spun up several Cassandra nodes in my existing clusters to scale them up manually; they chose a token based on the most heavily loaded node. However, when I try to bring up a new cluster with no nodes using auto scaling groups and Priam, Priam overrides the configuration and adds auto_bootstrap to every single node's configuration. The nodes come up all at once, and while they are being set up automatically, Priam starts Cassandra on one or another of them, which tries to use NFSeedProvider to contact the other nodes and find a token to bootstrap. This seems like a process that suffices for an existing cluster of Cassandra nodes; however, in this scenario, which is everyone's starting point, it fails since the other nodes aren't available to respond just yet. What am I missing here? Shouldn't Priam automatically calculate tokens based on rack position and not use auto_bootstrap?
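For illustration, deterministic token assignment from ring position, as the question suggests, amounts to spacing tokens evenly around the partitioner range. A sketch (the exact formula Priam uses may differ):

```java
import java.math.BigInteger;

// Sketch of deterministic token assignment from ring position: evenly
// spaced RandomPartitioner tokens for N slots. Illustrative only.
public class TokenSpacing {
    static final BigInteger RING = BigInteger.valueOf(2).pow(127); // RandomPartitioner range

    /** Token for slot `position` out of `size` evenly spaced slots. */
    static BigInteger token(int size, int position) {
        return RING.divide(BigInteger.valueOf(size))
                   .multiply(BigInteger.valueOf(position));
    }

    public static void main(String[] args) {
        // A 4-node ring gets tokens at 0, 1/4, 2/4, 3/4 of the range.
        for (int i = 0; i < 4; i++) {
            System.out.println(token(4, i));
        }
    }
}
```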

Priam fails to hook into Cassandra

I can't quite figure out where I went off the rails. I got Priam up and working, but it never put anything into SimpleDB about the instance. I set the class in the Cassandra runtime and added both the WAR and the JAR (still named priam-0.0.5-SNAPSHOT.jar) to their respective directories.

ERROR 15:51:39,203 Cannot find the Seeds []
ERROR 15:51:40,081 Couldnt execute the task because of null
java.lang.NullPointerException
at com.netflix.priam.backup.IncrementalBackup.execute(IncrementalBackup.java:36)
at com.netflix.priam.scheduler.Task.execute(Task.java:78)
at org.quartz.core.JobRunShell.run(JobRunShell.java:199)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:546)
ERROR 15:51:50,080 Couldnt execute the task because of null
java.lang.NullPointerException
:
INFO: No provider classes found.
Apr 24, 2012 3:48:40 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9.1 09/14/2011 02:36 PM'
ERROR 15:48:41,412 Retry #1 for: Access Denied
Apr 24, 2012 3:48:41 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.netflix.priam.resources.CassandraConfig to GuiceInstantiatedComponentProvider
Apr 24, 2012 3:48:41 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.netflix.priam.resources.BackupServlet to GuiceInstantiatedComponentProvider
Apr 24, 2012 3:48:41 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.netflix.priam.resources.CassandraAdmin to GuiceInstantiatedComponentProvider
Apr 24, 2012 3:48:41 PM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
Apr 24, 2012 3:48:41 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 37297 ms
ERROR 15:48:42,062 Retry #2 for: Access Denied
ERROR 15:48:42,698 Retry #3 for: Access Denied
ERROR 15:48:43,331 Retry #4 for: Access Denied
ERROR 15:48:43,966 Retry #5 for: Access Denied
ERROR 15:48:44,616 Retry #6 for: Access Denied
ERROR 15:48:45,250 Retry #7 for: Access Denied
ERROR 15:48:45,890 Retry #8 for: Access Denied
ERROR 15:48:46,524 Retry #9 for: Access Denied
ERROR 15:48:47,172 Retry #10 for: Access Denied
ERROR 15:48:47,943 Retry #11 for: Access Denied
ERROR 15:48:48,577 Retry #12 for: Access Denied
ERROR 15:48:49,217 Retry #13 for: Access Denied

listen and broadcast address are set to null

From tomcat.log I have observed Priam detecting the correct cluster name, setting the security group ACL, and detecting other nodes in the ASG; however, what I don't understand is why it has set the listen and broadcast addresses to null. I traced this down to the following line of code:

File:
src/main/java/com/netflix/priam/utils/TuneCassandra.java: TuneCassandra.updateYaml(config, config.getCassHome() + "/conf/cassandra.yaml", null, config.getSeedProviderName());

This doesn't seem right to me, as it explicitly sets the hostname parameter for that method to null, and inside the method there is no reference that reads the hostname or public IP from IConfiguration.

I might be missing something about how Priam is supposed to work, and your advice is greatly appreciated.
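A sketch of the behavior the reporter expects: when no hostname override is supplied, fall back to the locally detected IP instead of writing a null address into cassandra.yaml. All names here are illustrative, not Priam's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not Priam's code): if the hostname override is null,
// fall back to the instance's local IP rather than writing null into the
// listen/broadcast address fields of cassandra.yaml.
public class YamlTuneSketch {
    static Map<String, String> tune(Map<String, String> yaml,
                                    String hostnameOverride,
                                    String localIp) {
        String addr = (hostnameOverride != null) ? hostnameOverride : localIp;
        yaml.put("listen_address", addr);
        yaml.put("broadcast_address", addr);
        return yaml;
    }

    public static void main(String[] args) {
        Map<String, String> yaml = new HashMap<>();
        // Passing null (as the TuneCassandra call above does) should still
        // yield the instance's private IP, not the literal null.
        tune(yaml, null, "10.10.3.101");
        System.out.println(yaml.get("listen_address")); // 10.10.3.101
    }
}
```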

Unable to find com.netflix.priam.cassandra.NFSeedProvider

I tried updating Priam the other day (currently I am using the old Maven build), and I ran into an issue with the new build system. It looks like priam-web writes out the seed_provider class name as com.netflix.priam.cassandra.NFSeedProvider. However, when I build using gradlew, the actual path becomes com.netflix.priam.cassandra.extensions.NFSeedProvider. So I end up getting:

java.lang.ClassNotFoundException: com.netflix.priam.cassandra.NFSeedProvider

If I modify class_name in cassandra.yaml to be

class_name: com.netflix.priam.cassandra.extensions.NFSeedProvider

it works fine. Also, I am using the 1.1 branch. Am I missing something?
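For reference, the working seed_provider block in cassandra.yaml would then look like this (the seeds value is a placeholder; only class_name differs from the generated default):

```yaml
# cassandra.yaml seed_provider block with the corrected class path.
seed_provider:
    - class_name: com.netflix.priam.cassandra.extensions.NFSeedProvider
      parameters:
          - seeds: "127.0.0.1"
```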

default storage ports not consistent with wiki

It appears that the storage ports are set as follows (PriamConfiguration.java):

private final int DEFAULT_STORAGE_PORT = 7101;
private final int DEFAULT_SSL_STORAGE_PORT = 7101;

shouldn't this be:

private final int DEFAULT_STORAGE_PORT = 7000;
private final int DEFAULT_SSL_STORAGE_PORT = 7001;

to be consistent with the Cassandra defaults (and the wiki)?

So close, but still not working

Hi everyone,

I've run into an issue setting up Priam. I followed all the steps described in your setup guide, and most things work:

  • Communication to Simple DB works fine
  • It updates my cassandra.yaml with the cluster name
  • using the rest api to get the token works as well

However, the initial_token in the cassandra.yaml file doesn't get updated with the token from the REST API.
And by default it should use Ec2Snitch, which doesn't seem to work either.

I set up Priam by launching two servers with one ASG (Cassandra 1.1, latest Priam version).
Cassandra itself starts up fine, by the way (but the two nodes can't find each other and thus aren't part of the same cluster).

Any help would be appreciated
Dan

Imbalanced Initial Cluster

I have launched a 12-node, multi-zone, multi-region cluster. However, I am seeing that the initial ownership is not even: one node (presumably the very first node in the first zone) has a very off token, causing it to have 50+% ownership while the others have less. I am investigating why this has happened. If you have seen this, please advise. Here is the ring:

Address DC Rack Status State Load Owns Token
77981375752715064543690014205080921348
****** us-east 1a Up Normal 11.5 KB 54.17% 1808575600
****** us-east 1b Up Normal 11.5 KB 4.17% 7089215977519551322153637656637080005
****** us-west-2 2a Up Normal 11.5 KB 4.17% 14178431955039102644307275311624381703
****** us-west-2 2b Up Normal 11.5 KB 4.17% 21267647932558653966460912966452886108
****** us-east 1a Up Normal 11.5 KB 4.17% 28356863910078205288614550621122593220
****** us-east 1b Up Normal 11.5 KB 4.17% 35446079887597756610768188275951097625
****** us-west-2 2a Up Normal 11.5 KB 4.17% 42535295865117307932921825930938399323
****** us-west-2 2b Up Normal 11.5 KB 4.17% 49624511842636859255075463585766903728
****** us-east 1a Up Normal 13.37 KB 4.17% 56713727820156410577229101240436610840
****** us-east 1b Up Normal 11.5 KB 4.17% 63802943797675961899382738895265115245
****** us-west-2 2a Up Normal 11.5 KB 4.17% 70892159775195513221536376550252416943
****** us-west-2 2b Up Normal 11.5 KB 4.17% 77981375752715064543690014205080921348

Alternatives to AWS SimpleDB

This is a very interesting project, but we don't run our clusters on EC2. Is there any plan to support non-SimpleDB persistence for token management? Would this project accept a patch that adds such support?

REST calls are failing

Hi,

Some of the REST API calls are failing, e.g. /v1/cassadmin/info and /v1/cassadmin/ring. I'm getting the following exception:

Versions -
Priam: 1.1.6
Cassandra: 1.1.4

Exception -

javax.management.InstanceNotFoundException: org.apache.cassandra.db:type=StorageService
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094)
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:662)
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:638)
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1404)
javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:600)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)

Thanks
Rohit

Error when starting Priam - docs unclear ?

We're seeing this when we start Priam on an EC2 instance; the S3 bucket creation is fine, and the SimpleDB table creation is happening too. For the purpose of testing, the Cassandra instance we're trying to manage is on the same machine as Priam.

Is there an expectation that we have to create some data in SimpleDB before running Priam? It's not clear from the docs whether I should expect Priam to discover all the nodes and data it will be using, or whether we are supposed to seed something.

Apr 21, 2012 12:02:34 AM org.apache.catalina.startup.TaglibUriRule body
INFO: TLD skipped. URI: urn:com:sun:jersey:api:view is already defined
log4j: Parsing for [root] with value=[INFO, R, stdout].
log4j: Level token is [INFO].
log4j: Category root set to INFO
log4j: Parsing appender named "R".
log4j: Parsing layout options for "R".
log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd HH:mm:ss.SSSS} %p %t %c %m%n].
log4j: End of parsing for "R".
log4j: Setting property [file] to [/usr/share/tomcat7/logs/tomcat.log].
log4j: Setting property [maxFileSize] to [5MB].
log4j: Setting property [maxBackupIndex] to [5].
log4j: setFile called: /usr/share/tomcat7/logs/tomcat.log, true
log4j: setFile ended
log4j: Parsed "R" options.
log4j: Parsing appender named "stdout".
log4j: Parsing layout options for "stdout".
log4j: Setting property [conversionPattern] to [%5p %d{HH:mm:ss,SSS} %m%n].
log4j: End of parsing for "stdout".
log4j: Parsed "stdout" options.
log4j: Finished configuring.
INFO 00:02:35,046 Calling URL API: http://169.254.169.254/latest/meta-data/placement/availability-zone returns: us-west-2b
INFO 00:02:35,051 Calling URL API: http://169.254.169.254/latest/meta-data/hostname returns: ip-10-10-3-101
INFO 00:02:35,060 Calling URL API: http://169.254.169.254/latest/meta-data/local-ipv4 returns: 10.10.3.101
INFO 00:02:35,061 Calling URL API: http://169.254.169.254/latest/meta-data/instance-id returns: i-4aaa937a
INFO 00:02:35,062 Calling URL API: http://169.254.169.254/latest/meta-data/instance-type returns: m1.large
INFO 00:02:35,573 REGION set to us-west-2, ASG Name (nnnnn-uswest2)
INFO 00:02:36,551 Job execution threads will use class loader of thread: http-bio-8080-exec-66
INFO 00:02:36,568 Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
INFO 00:02:36,569 Quartz Scheduler v.1.7.3 created.
INFO 00:02:36,571 RAMJobStore initialized.
INFO 00:02:36,572 Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
INFO 00:02:36,572 Quartz scheduler version: 1.7.3
INFO 00:02:36,572 JobFactory set to: com.netflix.priam.scheduler.GuiceJobFactory@7e49a1c5
INFO 00:02:36,712 New update(s) found: 1.8.5 [http://www.terracotta.org/kit/reflector?kitID=default&pageID=QuartzChangeLog]
INFO 00:02:38,279 Querying Amazon returned following instance in the ASG: us-west-2b -->
INFO 00:03:02,438 Query on ASG returning 0 instances
ERROR 00:03:02,438 Retry #1 for: size must be > 0
INFO 00:03:13,295 Query on ASG returning 0 instances
ERROR 00:03:13,296 Retry #2 for: size must be > 0
INFO 00:03:29,377 Query on ASG returning 0 instances
ERROR 00:03:29,378 Retry #3 for: size must be > 0
INFO 00:03:42,704 Query on ASG returning 0 instances
ERROR 00:03:42,705 Retry #4 for: size must be > 0
INFO 00:03:47,227 Query on ASG returning 0 instances
ERROR 00:03:47,227 Retry #5 for: size must be > 0
INFO 00:03:51,786 Query on ASG returning 0 instances
ERROR 00:03:51,787 Retry #6 for: size must be > 0
INFO 00:04:06,667 Query on ASG returning 0 instances
ERROR 00:04:06,668 Retry #7 for: size must be > 0
INFO 00:04:19,150 Query on ASG returning 0 instances
ERROR 00:04:19,151 Retry #8 for: size must be > 0
INFO 00:04:34,354 Query on ASG returning 0 instances
ERROR 00:04:34,355 Retry #9 for: size must be > 0
INFO 00:04:37,139 Query on ASG returning 0 instances
ERROR 00:04:37,140 Retry #10 for: size must be > 0
INFO 00:04:40,728 Query on ASG returning 0 instances
ERROR 00:04:40,728 Retry #11 for: size must be > 0
INFO 00:04:48,434 Query on ASG returning 0 instances
ERROR 00:04:48,435 Retry #12 for: size must be > 0
INFO 00:05:04,548 Query on ASG returning 0 instances
ERROR 00:05:04,549 Retry #13 for: size must be > 0
INFO 00:05:13,591 Query on ASG returning 0 instances
ERROR 00:05:13,591 Retry #14 for: size must be > 0
INFO 00:05:22,577 Query on ASG returning 0 instances
ERROR 00:05:23,117 Guice provision errors:

  1. Error injecting constructor, java.lang.IllegalArgumentException: size must be > 0
    at com.netflix.priam.identity.InstanceIdentity.<init>(InstanceIdentity.java:46)
    at com.netflix.priam.identity.InstanceIdentity.class(InstanceIdentity.java:25)
    while locating com.netflix.priam.identity.InstanceIdentity
    for parameter 2 at com.netflix.priam.PriamServer.<init>(PriamServer.java:31)
    at com.netflix.priam.PriamServer.class(PriamServer.java:31)
    while locating com.netflix.priam.PriamServer

1 error
com.google.inject.ProvisionException: Guice provision errors:

  1. Error injecting constructor, java.lang.IllegalArgumentException: size must be > 0
    at com.netflix.priam.identity.InstanceIdentity.<init>(InstanceIdentity.java:46)
    at com.netflix.priam.identity.InstanceIdentity.class(InstanceIdentity.java:25)
    while locating com.netflix.priam.identity.InstanceIdentity
    for parameter 2 at com.netflix.priam.PriamServer.<init>(PriamServer.java:31)
    at com.netflix.priam.PriamServer.class(PriamServer.java:31)
    while locating com.netflix.priam.PriamServer

1 error
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)
at com.netflix.priam.defaultimpl.InjectedWebListener.getInjector(InjectedWebListener.java:36)
at com.google.inject.servlet.GuiceServletContextListener.contextInitialized(GuiceServletContextListener.java:45)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4779)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5273)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.manager.ManagerServlet.start(ManagerServlet.java:1247)
at org.apache.catalina.manager.HTMLManagerServlet.start(HTMLManagerServlet.java:747)
at org.apache.catalina.manager.HTMLManagerServlet.doPost(HTMLManagerServlet.java:222)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:641)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.filters.CsrfPreventionFilter.doFilter(CsrfPreventionFilter.java:187)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:108)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:581)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:987)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:579)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.IllegalArgumentException: size must be > 0
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at com.netflix.priam.utils.TokenManager.initialToken(TokenManager.java:26)
at com.netflix.priam.utils.TokenManager.createToken(TokenManager.java:53)
at com.netflix.priam.identity.InstanceIdentity$GetNewToken.retriableCall(InstanceIdentity.java:158)
at com.netflix.priam.identity.InstanceIdentity$GetNewToken.retriableCall(InstanceIdentity.java:137)
at com.netflix.priam.utils.RetryableCallable.call(RetryableCallable.java:42)
at com.netflix.priam.identity.InstanceIdentity.init(InstanceIdentity.java:91)
at com.netflix.priam.identity.InstanceIdentity.<init>(InstanceIdentity.java:51)
at com.netflix.priam.identity.InstanceIdentity$$FastClassByGuice$$8fae8df.newInstance()
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
... 33 more
Apr 21, 2012 12:05:23 AM org.apache.catalina.core.StandardContext startInternal
SEVERE: Error listenerStart
Apr 21, 2012 12:05:23 AM org.apache.catalina.core.StandardContext startInternal
SEVERE: Context [/Priam] startup failed due to previous errors
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [com.google.inject.internal.util.$Finalizer] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-1] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-2] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-3] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-4] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-5] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-6] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-7] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-8] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-9] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_Worker-10] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [Timer-1] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [DefaultQuartzScheduler_QuartzSchedulerThread] but has failed to stop it. This is very likely to create a memory leak.
Apr 21, 2012 12:05:23 AM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [/Priam] created a ThreadLocal with key of type [com.google.inject.internal.InjectorImpl$1](value [com.google.inject.internal.InjectorImpl$1@313c7b52]) and a value of type [java.lang.Object[]](value [[Ljava.lang.Object;@79c45dbe]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
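The root cause in the trace is the precondition in TokenManager.initialToken: the ASG query keeps returning 0 instances, so token creation is asked for a ring of size 0. A minimal sketch of that check (illustrative, not Priam's exact code):

```java
// Sketch of the failing check: token creation requires a positive ring
// size, but a 0-instance ASG query hands it size 0. Illustrative only.
public class SizeCheckSketch {
    static void checkRingSize(int size) {
        if (size <= 0) {
            throw new IllegalArgumentException("size must be > 0");
        }
    }

    public static void main(String[] args) {
        try {
            checkRingSize(0); // result of "Query on ASG returning 0 instances"
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // size must be > 0
        }
    }
}
```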

Various cassadmin api calls fail

A fair number of the /Priam/REST/v1/cassadmin calls fail like this:

 INFO 04:19:51,323 node tool info being called
 INFO 04:19:51,323 JMX info being called
Jul 3, 2012 4:19:51 AM com.sun.jersey.spi.container.ContainerResponse mapMappableContainerException
SEVERE: The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
java.lang.ClassCastException: org.apache.cassandra.dht.BigIntegerToken cannot be cast to java.lang.String
        at org.apache.cassandra.tools.NodeProbe.getEndpoint(NodeProbe.java:565)
        at org.apache.cassandra.tools.NodeProbe.getDataCenter(NodeProbe.java:578)
        at com.netflix.priam.utils.JMXNodeTool.info(JMXNodeTool.java:169)
        at com.netflix.priam.resources.CassandraAdmin.cassInfo(CassandraAdmin.java:98)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:708)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263)
        at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178)
        at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62)
        at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
        at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:662)

The "ring" API call fails differently; this is what the client receives:

<html><head><title>Apache Tomcat/6.0.28 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 404 - Not Found</h1><HR size="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u>Not Found</u></p><p><b>description</b> <u>The requested resource (Not Found) is not available.</u></p><HR size="1" noshade="noshade"><h3>Apache Tomcat/6.0.28</h3></body></html>

I haven't tried all of the commands, but these are some of the more useful ones.

Unable to insert data into the column family of cassandra created using priam

Hi,

We are unable to insert data into the cassandra cluster created using priam.

After we created the keyspace and column family, running the "list" command returned null. But if we create a Cassandra cluster manually, listing an empty column family returns "0".

Please let us know if you know the cause of this issue.

Thanks,
Nithya

need help with failed install - IOException on TestRestore

I cannot get the jars built on an AWS EC2 instance, although I can get the build to succeed on my local machine (a Mac).

I keep getting the following error when I run ./gradlew build:

  • Test com.netflix.priam.backup.TestRestore FAILED

When I run with the --info flag, I get:

Test testRestore(com.netflix.priam.backup.TestRestore) FAILED: java.io.IOException: Cannot run program "teststopscript" (in directory "/"): java.io.IOException: error=2, No such file or directory
Test com.netflix.priam.backup.TestRestore FAILED
Test testRestoreLatest(com.netflix.priam.backup.TestRestore) FAILED: java.io.IOException: Cannot run program "teststopscript" (in directory "/"): java.io.IOException: error=2, No such file or directory
Test testNoSnapshots(com.netflix.priam.backup.TestRestore) FAILED: java.io.IOException: Cannot run program "teststopscript" (in directory "/"): java.io.IOException: error=2, No such file or directory
Test testRestoreFromDiffCluster(com.netflix.priam.backup.TestRestore) FAILED: java.io.IOException: Cannot run program "teststopscript" (in directory "/"): java.io.IOException: error=2, No such file or directory

I am running with Sun Java JDK 1.6_35 and I am using ami = Amazon Linux 2012.03 on a m1.xlarge instance.

Any help or direction is appreciated.

  • Ryan

Not sure how to properly update the wiki, here's some clarification

I don't seem to be able to fork the wiki on GitHub, so I'd like to suggest the following (or something like it) be added to the top of the REST-API wiki page (a Textile file):

h3. Common prefix

All API calls are prefixed by the following:

"http://127.0.0.1:8080/Priam/REST"

E.g. to invoke get_token, which is under "/v1/cassconfig", the call is "http://127.0.0.1:8080/Priam/REST/v1/cassconfig/get_token"
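
The prefix rule above can be sketched as a tiny helper; the class and method names are illustrative, and the host and port are just the defaults from the example, not fixed values:

```java
// Illustrative only: prepend the common prefix to an API path.
public class PriamUrls {
    static final String BASE = "http://127.0.0.1:8080/Priam/REST";

    static String url(String path) {
        return BASE + path;
    }
}
```

With this, url("/v1/cassconfig/get_token") yields the full get_token URL shown above.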

config.getProperty doesn't check System properties

It looks like cluster_name can't be set from the command line; specifically, I noticed that -Dpriam.clustername isn't respected, and the default name is always used. This seems to work:

    @Override
    public String getAppName()
    {
        if (System.getProperty(CONFIG_CLUSTER_NAME) != null)
            return System.getProperty(CONFIG_CLUSTER_NAME);

        return config.getProperty(CONFIG_CLUSTER_NAME, DEFAULT_CLUSTER_NAME);
    }

but what should be happening here?
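
The behavior the reporter expects can be sketched generically as a lookup that consults JVM system properties before the stored configuration. This is illustrative, not Priam's actual API; the method name is an assumption:

```java
// Illustrative sketch: a command-line -D override wins, then the stored
// config value, then the hard-coded default.
public class PropertyLookup {
    static String getOrDefault(java.util.Properties config, String key, String def) {
        String fromSystem = System.getProperty(key);
        if (fromSystem != null)
            return fromSystem;                 // -Dkey=value on the command line
        return config.getProperty(key, def);   // stored config, then default
    }
}
```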

priam.zones.available is not read via System.getProperty

It seems that getRacs() will always fall back to the default AZs because config.getProperty won't find a property set on the command line via -Dpriam.zones.available. For my testing, I've done this:

    @Override
    public List<String> getRacs()
    {
        // XXX: this mandates -Dpriam.zones.available be set
        return Arrays.asList(System.getProperty(CONFIG_AVAILABILITY_ZONES).split(","));
        // return config.getList(CONFIG_AVAILABILITY_ZONES, DEFAULT_AVAILABILITY_ZONES);
    }

but I've got no idea if this is the right approach.
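
A less mandatory variant of the snippet above would combine both paths: prefer the comma-separated system property when it is set, and fall back to the stored list otherwise. This is a sketch, not Priam's API; the method and parameter names are assumptions:

```java
// Illustrative sketch: use a -Dpriam.zones.available style value if present,
// otherwise fall back to the configured/default list of availability zones.
import java.util.Arrays;
import java.util.List;

public class RacLookup {
    static List<String> racs(String systemValue, List<String> fallback) {
        if (systemValue != null && !systemValue.isEmpty())
            return Arrays.asList(systemValue.split(","));
        return fallback;
    }
}
```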

Couldnt execute the task because of null

I've followed the steps in the setup guide and I've nearly gotten Priam running. I see entries in the InstanceIdentity SDB domain. However, after the configuration steps it fails with Quartz scheduler messages that repeat indefinitely. I'm no Java expert, so I'm probably doing something wrong, but the Gradle build completes without error and Priam seems to pick up my PriamProperties customizations. Log below; any ideas?

2012-08-28 15:28:30.0779 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Calling URL API: http://169.254.169.254/latest/meta-data/placement/availability-zone returns: us-east-1a
2012-08-28 15:28:30.0781 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Calling URL API: http://169.254.169.254/latest/meta-data/public-hostname returns: REDACTED
2012-08-28 15:28:30.0782 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Calling URL API: http://169.254.169.254/latest/meta-data/public-ipv4 returns: REDACTED
2012-08-28 15:28:30.0784 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Calling URL API: http://169.254.169.254/latest/meta-data/instance-id returns: REDACTED
2012-08-28 15:28:30.0785 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Calling URL API: http://169.254.169.254/latest/meta-data/instance-type returns: m1.xlarge
2012-08-28 15:28:31.0535 INFO pool-2-thread-1 com.netflix.priam.defaultimpl.PriamConfiguration REGION set to us-east-1, ASG Name set to REDACTED-useast1a
2012-08-28 15:28:31.0647 INFO pool-2-thread-1 com.netflix.priam.defaultimpl.PriamConfiguration appid used to fetch properties is: REDACTED
2012-08-28 15:28:31.0745 INFO pool-2-thread-1 org.quartz.simpl.SimpleThreadPool Job execution threads will use class loader of thread: pool-2-thread-1
2012-08-28 15:28:31.0763 INFO pool-2-thread-1 org.quartz.core.SchedulerSignalerImpl Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
2012-08-28 15:28:31.0765 INFO pool-2-thread-1 org.quartz.core.QuartzScheduler Quartz Scheduler v.1.7.3 created.
2012-08-28 15:28:31.0767 INFO pool-2-thread-1 org.quartz.simpl.RAMJobStore RAMJobStore initialized.
2012-08-28 15:28:31.0767 INFO pool-2-thread-1 org.quartz.impl.StdSchedulerFactory Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
2012-08-28 15:28:31.0767 INFO pool-2-thread-1 org.quartz.impl.StdSchedulerFactory Quartz scheduler version: 1.7.3
2012-08-28 15:28:31.0767 INFO pool-2-thread-1 org.quartz.core.QuartzScheduler JobFactory set to: com.netflix.priam.scheduler.GuiceJobFactory@1294aa42
2012-08-28 15:28:31.0878 INFO pool-2-thread-1 com.netflix.priam.identity.InstanceIdentity My token: 1808575600
2012-08-28 15:28:31.0879 INFO pool-2-thread-1 org.quartz.core.QuartzScheduler Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
2012-08-28 15:28:32.0053 INFO pool-2-thread-1 org.apache.cassandra.db.HintedHandOffManager cluster_name: REDACTED
initial_token: null
hinted_handoff_enabled: true
max_hint_window_in_ms: 8
hinted_handoff_throttle_delay_in_ms: 1
authenticator: org.apache.cassandra.auth.AllowAllAuthenticator
authority: org.apache.cassandra.auth.AllowAllAuthority
partitioner: org.apache.cassandra.dht.RandomPartitioner
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
key_cache_size_in_mb: null
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
row_cache_provider: SerializingCacheProvider
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    - class_name: com.netflix.priam.cassandra.NFSeedProvider
      parameters:
          - seeds: 127.0.0.1
flush_largest_memtables_at: 0.75
reduce_cache_sizes_at: 0.85
reduce_cache_capacity_to: 0.6
concurrent_reads: 32
concurrent_writes: 32
memtable_flush_queue_size: 4
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7101
ssl_storage_port: 7101
listen_address: null
rpc_address: null
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16
incremental_backups: true
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
in_memory_compaction_limit_in_mb: 128
multithreaded_compaction: false
compaction_throughput_mb_per_sec: 8
compaction_preheat_key_cache: true
rpc_timeout_in_ms: 10000
endpoint_snitch: org.apache.cassandra.locator.Ec2Snitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
index_interval: 128
encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
auto_bootstrap: true
2012-08-28 15:28:32.0065 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Starting cassandra server ....Join ring=true
2012-08-28 15:28:32.0069 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Starting cassandra server ....
2012-08-28 15:28:32.0106 ERROR DefaultQuartzScheduler_Worker-1 com.netflix.priam.scheduler.Task Couldnt execute the task because of null
java.lang.NullPointerException
at com.netflix.priam.backup.IncrementalBackup.execute(IncrementalBackup.java:36)
at com.netflix.priam.scheduler.Task.execute(Task.java:78)
at org.quartz.core.JobRunShell.run(JobRunShell.java:199)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:546)
2012-08-28 15:28:32.0727 INFO Timer-0 org.quartz.utils.UpdateChecker New update(s) found: 1.8.5 [http://www.terracotta.org/kit/reflector?kitID=default&pageID=QuartzChangeLog]

Examples for property overrides & compatibility

I have two problems. I can't seem to get the SimpleDB entry right. Our cluster name has spaces in it ('my test cluster'), and from what I can tell SimpleDB doesn't like that. Could you provide an example with spaces, or one without, for, say, the "priam.heap.size" property?

The other question, which I saw on the blog post but haven't seen an answer to: is Priam compatible with Cassandra 1.0.x?

Thanks!

Unable to build on Ubuntu 12.04/OpenJDK

Hello,

I'm attempting to build the master branch using Ubuntu 12.04 x64/OpenJDK7 and I'm receiving a NullPointerException. I've tried OpenJDK6 as well and gotten the same error. Can anyone point me in the right direction?

Thanks

Full debug trace here:
http://filebin.ca/GNWqW8GXWAv/output.txt

16:13:59.468 [ERROR] [org.gradle.BuildExceptionReporter] > java.lang.NullPointerException (no error message)

java -version

java version "1.7.0_07"
OpenJDK Runtime Environment (IcedTea7 2.3.2) (7u7-2.3.2-1ubuntu0.12.04.1)
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)

Priam never marks a node as dead

I have seen the mechanism in the code where, when an instance comes up and a token is chosen, Priam first looks for dead nodes by checking the SimpleDB InstanceIdentity domain for instances with an appId of -dead. However, I cannot find anywhere in Priam's code where this property is set to mark a node as dead. I intentionally caused a failure in Cassandra to make it look like the node had failed, but Priam never marked it as dead. When I killed the node and the ASG brought up another instance, Priam was unable to replace the dead node with the same token. The way to make it work was to first forcefully remove the dead node's token with nodetool removetoken; then Priam was able to replace the node. Please advise.

1 node cluster?

I know it might sound stupid, but is there a way to set up Priam with just one single node?

I tried setting up an ASG with a maximum of 1 server and configured SimpleDB to use just one availability zone.

But Cassandra won't start up, as it "can't find the seeds".

Any suggestions?

awscredentials.properties in git, but it's singular on the filesystem

In git, this file exists:

~/dvcs/Priam $ git blame src/main/resources/conf/awscredentials.properties 
52d95031 (Praveen Sadhu 2012-01-27 17:20:57 -0800 1) AWSACCESSID=""
52d95031 (Praveen Sadhu 2012-01-27 17:20:57 -0800 2) AWSKEY=""

However, on the filesystem, using that filename gives you an error:

ERROR 17:05:48,759 Exception with credential file 
java.io.FileNotFoundException: /etc/awscredential.properties (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:137)

Is there any reason that the file in git has a different name from the file that should reside on the filesystem?

awscredentials.properties comes with double quotes in git

If you enclose the AWSACCESSID and/or the AWSKEY in double quotes, the quotes are not stripped, and authentication breaks with an unhelpful message.

The unhelpful message is a generic "authentication failed" style message:

ERROR 21:42:11,014 AWS was not able to validate the provided access credentials
Status Code: 401, AWS Service: AmazonEC2, AWS Request ID: 7424e979-f990-47e1-9cd9-963d7b6b5ed9, AWS Error Code: AuthFailure, AWS Error Message: AWS was not able to validate the provided access credentials
        at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:548)
        at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:288)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:170)
        at com.amazonaws.services.ec2.AmazonEC2Client.invoke(AmazonEC2Client.java:5569)

I think it'd be very helpful if you'd consider a couple of changes:

  • Add more logging messages to startup, especially PriamConfiguration.java's initialize() method.
  • Strip leading and trailing quotes and whitespace from the AWSKEY and the AWSACCESSID
  • Document whether or not quotes are OK, based on the prior suggestion.
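
The stripping suggested above could look something like this. It is a sketch, not Priam's code; the class and method names are made up for illustration:

```java
// Illustrative sketch: strip surrounding whitespace and one pair of
// matching quotes from a credential value read from the properties file.
public class CredTrim {
    static String clean(String raw) {
        String s = raw.trim();
        if (s.length() >= 2 && (s.startsWith("\"") && s.endsWith("\"")
                             || s.startsWith("'") && s.endsWith("'")))
            s = s.substring(1, s.length() - 1);
        return s.trim();
    }
}
```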

I am not a Java programmer, or I'd submit pull requests for this sort of trivial stuff. I can't find an easy way to trim non-whitespace characters (specifically quotes), though. I can submit a pull request for additional logging messages if that would save you time.

Thanks,

-Peter

InstanceIdentity uses clustername instead of appId to identify clusters

Properties for a cluster are keyed by an appId generated from the ASG a node is in, but the appId entered in the InstanceIdentity domain is the clustername. This leads to severe issues if multiple clusters share the same name. Note that it is currently not possible to set the clustername with the priam.clustername property (see issue #64).

Using IAMCredential

I see some references to using IAM roles on the setup wiki page, but not any specific details on how it should be enabled. Is it still in development?

Pull request #35 breaks backups on cassandra <1.1.0

This change:

diff --git a/src/main/java/com/netflix/priam/backup/IncrementalBackup.java b/src/main/java/com/netflix/priam/backup/IncrementalBackup.java
index 18b8a62..cca3850 100644
--- a/src/main/java/com/netflix/priam/backup/IncrementalBackup.java
+++ b/src/main/java/com/netflix/priam/backup/IncrementalBackup.java
@@ -35,10 +35,13 @@ public class IncrementalBackup extends AbstractBackup
         logger.debug("Scanning for backup in: " + dataDir.getAbsolutePath());
         for (File keyspaceDir : dataDir.listFiles())
         {
-            File backupDir = new File(keyspaceDir, "backups");
-            if (!isValidBackupDir(keyspaceDir, backupDir))
-                continue;
-            upload(backupDir, BackupFileType.SST);
+            for (File columnFamilyDir : keyspaceDir.listFiles())
+            {
+                File backupDir = new File(columnFamilyDir, "backups");
+                if (!isValidBackupDir(keyspaceDir, columnFamilyDir, backupDir))
+                    continue;
+                upload(backupDir, BackupFileType.SST);
+            }
         }
     }

Breaks Cassandra 1.0 backups completely. The debug messages I added to troubleshoot it show this behavior:

2012-07-03 12:57:01.0579 INFO http-8080-1 com.netflix.priam.backup.SnapshotBackup snapshotName: 201207031257
2012-07-03 12:57:01.0579 INFO http-8080-1 com.netflix.priam.backup.SnapshotBackup Starting snapshot 201207031257
2012-07-03 12:57:01.0753 INFO http-8080-1 com.netflix.priam.backup.SnapshotBackup SnapshotBackup.execute dataDir: /mnt/data/db/cassandra
2012-07-03 12:57:01.0753 INFO http-8080-1 com.netflix.priam.backup.SnapshotBackup keyspaceDir: /mnt/data/db/cassandra/system, class: class java.io.File, name: /mnt/data/db/cassandra/system
2012-07-03 12:57:01.0754 INFO http-8080-1 com.netflix.priam.backup.SnapshotBackup Trying columnFamilyDir: /mnt/data/db/cassandra/system/LocationInfo-hd-7-Statistics.db, snpDir: /mnt/data/db/cassandra/system/LocationInfo-hd-7-Statistics.db/snapshots
2012-07-03 12:57:01.0754 INFO http-8080-1 com.netflix.priam.backup.SnapshotBackup Trying columnFamilyDir: /mnt/data/db/cassandra/system/Migrations-hd-2-Index.db, snpDir: /mnt/data/db/cassandra/system/Migrations-hd-2-Index.db/snapshots
2012-07-03 12:57:01.0755 INFO http-8080-1 com.netflix.priam.backup.SnapshotBackup Trying columnFamilyDir: /mnt/data/db/cassandra/system/LocationInfo-hd-6-Filter.db, snpDir: /mnt/data/db/cassandra/system/LocationInfo-hd-6-Filter.db/snapshots
2012-07-03 12:57:01.0755 INFO http-8080-1 com.netflix.priam.backup.SnapshotBackup Trying columnFamilyDir: /mnt/data/db/cassandra/system/Versions-hd-1-Statistics.db, snpDir: /mnt/data/db/cassandra/system/Versions-hd-1-Statistics.db/snapshots

This should be wrapped in a version check, or kept in a separate branch.
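
A version guard of the kind suggested could look like this. It is only a sketch: before 1.1 the backups directory sits directly under the keyspace directory, and from 1.1 on it sits under each column-family directory. Where the version string would come from is an assumption:

```java
// Illustrative sketch: decide which on-disk backup layout to scan based on
// the Cassandra version ("major.minor.patch").
public class BackupLayout {
    static boolean hasPerColumnFamilyDirs(String cassandraVersion) {
        String[] parts = cassandraVersion.split("\\.");
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        // 1.1.0 introduced <keyspace>/<columnfamily>/backups
        return major > 1 || (major == 1 && minor >= 1);
    }
}
```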

Debian compatibility

I'm trying to bend Priam to work on Debian. As codyaray already pointed out, there is a problem with JSVC. However, I've also run into other challenges; consider this a help for Debian users:

  • Cassandra.yaml location: TuneCassandra.java assumes cassandra.yaml is placed in CassHome/conf/cassandra.yaml. On Debian with the Apache-provided Cassandra installation, this is instead CassHome/cassandra.yaml. Changing /etc/init.d/cassandra is of course possible, but package updates may override this. Suggestion: Allow setting the configuration directory as a property.
  • Cassandra 1.1.x compatibility: The default provided cassandra.yaml includes option sliced_buffer_size_in_kb, which was removed prior to the 1.1.0 release. Removing the option allows Cassandra 1.1.0 to start.
  • FYI: Due to problems with the jersey packages (their pom.xml points to the old location for net-java:jvnet-parent), compilation is quite tricky (it involves finding the correct pom.xml for jvnet-parent and manually overwriting it). The 1.13 release should fix this.
  • FYI: We're using Jetty as the Web Container, which seems to work flawlessly.

Also, I've updated https://github.com/Netflix/Priam/wiki/Setup to clarify SimpleDB domain setup, required user rights for the Web Container and the properties semantics.

I have yet to get the tokens returned by Priam to match the tokens used by Cassandra, but I expect this to be caused by the JSVC issue.
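
The first suggestion above (a configurable configuration directory) could be sketched like this. The property name priam.cass.confdir is an assumption for illustration, not an existing Priam setting:

```java
// Illustrative sketch: derive the cassandra.yaml path from an overridable
// property instead of hard-coding CassHome/conf, so Debian's layout
// (cassandra.yaml directly under CassHome) can be accommodated.
public class YamlPath {
    static String yamlPath(String cassHome) {
        String confDir = System.getProperty("priam.cass.confdir", cassHome + "/conf");
        return confDir + "/cassandra.yaml";
    }
}
```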

Seed node in a cluster

I am wondering: is a seed node in the AWS cluster, or does that server reside outside of an auto scaling group?
I set up an auto scaling group, and when I scaled it down from 3 nodes to 2, the internal and external IP addresses of the seed node in the ring changed?

Doubling the Size of the Cluster Doesn't Create a Balanced Ring

I tried to double the size of the cluster by doubling the minimum number of instances in each of the 3 participating auto scaling groups from 3 to 6, to grow the ring from 9 nodes to 18. However, I end up with an imbalanced ring. Please advise what I am doing wrong. From the code of PriamInstanceFactory.java I can tell the calculation is based on the maximum number of instances in the ASG, but I cannot figure out why scaling up creates an imbalance. I suppose there is a right way to do this that is not clear, as Netflix has always mentioned in their blog posts that they benchmark doubling the size of a cluster with Priam.

Old Ring:
Address DC Rack Status State Load Effective-Ownership Token
132332031580364958013534569558607324498
xxxxxx.39.144 us-east 1a Up Normal 12.13 MB 33.33% 1808575600
xxxxxx157.165 us-east 1b Up Normal 12.15 MB 33.33% 9452287970026068429538183541579914807
xxxxxx85.183 us-east 1c Up Normal 12.15 MB 33.33% 18904575940052136859076367081351254014
xxxxxx60.61 us-east 1a Up Normal 12.16 MB 33.33% 56713727820156410577229101240436610842
xxxxxx67.5 us-east 1b Up Normal 12.14 MB 33.33% 66166015790182479006767284780207950049
xxxxxx110.200 us-east 1c Up Normal 12.17 MB 33.33% 75618303760208547436305468319979289256
xxxxxx77.115 us-east 1a Up Normal 12.13 MB 33.33% 113427455640312821154458202479064646084
xxxxxx234.163 us-east 1b Up Normal 12.15 MB 33.33% 122879743610338889583996386018835985291
xxxxxx26.159 us-east 1c Up Normal 12.15 MB 33.33% 132332031580364958013534569558607324498

New Ring:

Address DC Rack Status State Load Effective-Ownership Token
151236607520417094872610936638150002896
xxxxxx39.144 us-east 1a Up Normal 18.71 MB 16.67% 1808575600
xxxxxx157.165 us-east 1b Up Normal 18.7 MB 19.44% 9452287970026068429538183541579914807
xxxxxx.85.183 us-east 1c Up Normal 18.74 MB 22.22% 18904575940052136859076367081351254014
xxxxxx60.61 us-east 1a Up Normal 12.17 MB 33.33% 56713727820156410577229101240436610842
xxxxxx67.5 us-east 1b Up Normal 12.15 MB 33.33% 66166015790182479006767284780207950049
xxxxxx110.200 us-east 1c Up Normal 12.18 MB 33.33% 75618303760208547436305468319979289256
xxxxxx252.67 us-east 1a Up Normal 9.44 MB 16.67% 85070591730234615865843651859750628454
xxxxxx60.86 us-east 1b Up Normal 7.87 MB 13.89% 89796735715247650080612743629636298057
xxxxxx123.155 us-east 1c Up Normal 6.29 MB 11.11% 94522879700260684295381835399521967660
xxxxxx38.58 us-east 1a Up Normal 18.75 MB 16.67% 113427455640312821154458202479064646072
xxxxxx.77.115 us-east 1a Up Normal 18.78 MB 0.00% 113427455640312821154458202479064646084
xxxxxx24.218 us-east 1b Up Normal 9.38 MB 16.67% 118153599625325855369227294248950315675
xxxxxx185.221 us-east 1c Up Normal 9.4 MB 16.67% 122879743610338889583996386018835985278
xxxxxx234.163 us-east 1b Up Normal 18.74 MB 2.78% 122879743610338889583996386018835985291
xxxxxx26.159 us-east 1c Up Normal 18.71 MB 5.56% 132332031580364958013534569558607324498
xxxxxx.145.200 us-east 1a Up Normal 9.37 MB 16.67% 141784319550391026443072753098378663690
xxxxxx.7.166 us-east 1b Up Normal 7.82 MB 13.89% 146510463535404060657841844868264333293
xxxxxx.108.241 us-east 1c Up Normal 6.3 MB 11.11% 151236607520417094872610936638150002896

Help with Install

I am trying to install Priam and having a bit of trouble.
The setup instructions are unclear and I would like to clarify a few things.

First question: I am setting it up in a VPC with no public access.

I changed a few things, the public hostname and the public IPv4:

private final String PUBLIC_HOSTNAME = SystemUtils.getDataFromUrl("http://169.254.169.254/latest/meta-data/hostname");
private final String PUBLIC_IP = SystemUtils.getDataFromUrl("http://169.254.169.254/latest/meta-data/local-ipv4");

Next question: this is my auto scaling group:
AUTO-SCALING-GROUP dev-cass-mobile Cassandra-Mobile us-west-2b 1 1 1

az.asgname = dev-cass-mobile or Cassandra-Mobile?
az.region = us-west-2b, or is there supposed to be another value?

private static final String CONFIG_ASG_NAME = PRIAM_PRE + ".az.asgname";
private static final String CONFIG_REGION_NAME = PRIAM_PRE + ".az.region";

Next question:

Do I use the same values for these fields?
ASG_NAME = dev-cass-mobile or Cassandra-Mobile?
EC2_REGION = us-west-2b?
private static String ASG_NAME = System.getenv("ASG_NAME");
private static String REGION = System.getenv("EC2_REGION");

Is there anything else I need to do in the config?

Also, what exactly do I need to do with SimpleDB? I did create the two SimpleDB domains, InstanceIdentity and PriamProperties; do I have to add data to them?

Thanks
Joel

Document what "cassandra.yaml" means in different contexts?

My DataStax installation of Cassandra creates /etc/cassandra/cassandra.yaml. When I start Priam, it complains about /etc/cassandra/conf/cassandra.yaml being present/not present.

#24 mentions a "default configuration file" but doesn't provide any context for what the default file is, or what the non-default cassandra.yaml is. Clearing up these distinctions in the documentation, in terms of their paths relative to /etc/cassandra and how Priam uses them, seems necessary for understanding Priam.

How to restore to a different cluster

The Wiki states: "Data from a snapshot can be restored to the same cluster or a different cluster."
However, it does not explain how to restore to a different cluster.
Specifically, how can I point at the backup files created on a different cluster?
Can somebody provide an example, please?
Thank you.

Each keyspace backed up should be logged at INFO

Currently only the end of the backup is logged. Config errors/mismatches can result in zero keyspaces being backed up to S3: the snapshot that was generated will be removed, but nothing copied to S3. This should be logged, probably as a WARNING message.
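
The requested logging could be sketched as follows. The class, method, and logger wiring are illustrative, not Priam's actual code (Priam logs via slf4j/log4j internally):

```java
// Illustrative sketch: log each uploaded keyspace at INFO, and warn when the
// snapshot produced nothing to ship to S3.
import java.util.List;
import java.util.logging.Logger;

public class BackupReport {
    private static final Logger logger = Logger.getLogger("backup");

    static int report(List<String> uploadedKeyspaces) {
        for (String ks : uploadedKeyspaces)
            logger.info("Backed up keyspace: " + ks);
        if (uploadedKeyspaces.isEmpty())
            logger.warning("Snapshot completed but zero keyspaces were uploaded to S3");
        return uploadedKeyspaces.size();
    }
}
```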

When Priam restores while running as root, cleanDirectory changes the owner of the data dir to root

So if I run cassandra as a non-root user (e.g. cassandra) and I run Priam in a servlet container as root, then after the first restore, the data directory will no longer be writable by cassandra, since it will now look like this:

pn@ip-10-242-167-195:/mnt/data/db$ ls -laF
total 20
drwxr-xr-x 3 root root 4096 2012-07-01 23:04 ./
drwxr-xr-x 4 root root 4096 2012-07-01 20:54 ../
drwxr-xr-x 2 root root 4096 2012-07-01 23:04 cassandra/
-rw-r--r-- 1 root root 7259 2012-07-01 23:04 meta.json

This breaks cassandra.

I'm running Priam as root because the docs say this is necessary for start/stop, but it looks like the com.netflix.priam.utils.SystemUtils class will try to run start/stop via sudo. If Tomcat and Cassandra both run as the cassandra user, this problem should be solved, as long as the cassandra user is in the sudoers file.
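
A simple guard for the scenario above could be sketched like this; it is illustrative only, and how the two usernames would be obtained is an assumption:

```java
// Illustrative sketch: detect the case where the servlet container runs as
// root while Cassandra runs as another user, so a restore could warn before
// it leaves the data directory root-owned.
public class OwnerGuard {
    static boolean willBreakOwnership(String priamUser, String cassandraUser) {
        return "root".equals(priamUser) && !"root".equals(cassandraUser);
    }
}
```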

Issues running priam on AWS instances: Error ListenerStart

Hello Praveen,
I am setting up Priam on AWS instances and facing the following issues:
steps done so far:

  1. Downloaded the tarfile, extracted it, and used Maven to build it on an AWS Linux AMI instance (m1.small)
  2. Installed Cassandra 0.8
  3. Copied the jar file created by Maven to the Cassandra home folder
  4. Installed Tomcat 6 and copied the WAR to the webapps folder
  5. Created awscredentials.properties and added the credentials (access ID and key)
  6. Restarted the Tomcat server

We are getting the following error:

INFO: validateJarFile(/usr/share/tomcat6/webapps/Priam/WEB-INF/lib/servlet-api-2.5-20081211.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class
log4j: Parsing for [root] with value=[INFO, R].
log4j: Level token is [INFO].
log4j: Category root set to INFO
log4j: Parsing appender named "R".
log4j: Parsing layout options for "R".
log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd HH:mm:ss.SSSS} %p %t %c %m%n].
log4j: End of parsing for "R".
log4j: Setting property [maxBackupIndex] to [5].
log4j: Setting property [file] to [/usr/share/tomcat6/logs/tomcat.log].
log4j: Setting property [maxFileSize] to [5MB].
log4j: setFile called: /usr/share/tomcat6/logs/tomcat.log, true
log4j: setFile ended
log4j: Parsed "R" options.
log4j: Finished configuring.
Feb 29, 2012 1:56:09 PM org.apache.catalina.core.StandardContext start
SEVERE: Error listenerStart
Feb 29, 2012 1:56:09 PM org.apache.catalina.core.StandardContext start
SEVERE: Context [/Priam] startup failed due to previous errors

I am stuck, so please help me resolve the problem.

thanks,
Nithya

Please document that jetty > 7.6.2 probably doesn't work with Priam

Trying to use jetty > 7.6.2 seems to lead to it not loading classes that are present in the war file. Downgrading to 7.6.2 works.

I'm a newb at servlet containers, so for anyone else out there, note that issues similar to this one crop up:

http://comments.gmane.org/gmane.comp.web.shibboleth.user/23065

As a newb there's no indication of how to fix it, and random stabs in the dark haven't worked (e.g. adding the Priam jar to Jetty's lib/ext doesn't help). It would help to document this, and maybe someone else can describe how and why this is happening, and describe a fix.

Small issue with clustering

Hi,

I've been trying to get Priam working with cassandra for my company.

I used the Datastax AMI as a basis for the system as I know it works correctly normally.

I have created 3 instances in eu-west, one in each of the availability zones. They are in auto scaling groups, the SimpleDB properties are being loaded and the cassandra.yaml is being updated appropriately. Backups are also being written to S3 correctly.

However, when I run nodetool -h localhost on each of the 3 hosts, they only show themselves, and they are bound to 127.0.0.1 instead of their private IPs. I can also run nodetool remotely and get the same information.

Have I missed a step that would explain why the nodes are not joining the cluster properly?

I also notice that you enforce the American SDB region. I modified the code to read the SDB region from a file; was there a reason this was not enabled in your build? Would you be willing to accept a pull request once I have completed some tests to ensure its functionality?
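The change described above can be sketched as a small helper: read the SimpleDB region from a file, and fall back to the hard-coded US region when the file is missing or empty. This is an illustration only; the class and file path are hypothetical and not part of Priam.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical helper illustrating the proposed change: resolve the
// SimpleDB region from a local file instead of hard-coding it.
public class SdbRegionResolver {
    static final String DEFAULT_REGION = "us-east-1";

    public static String resolve(Path regionFile) {
        try {
            // Trim so a trailing newline in the file doesn't break the region name.
            String region = new String(Files.readAllBytes(regionFile)).trim();
            return region.isEmpty() ? DEFAULT_REGION : region;
        } catch (IOException e) {
            // File absent or unreadable: keep the current hard-coded behaviour.
            return DEFAULT_REGION;
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("sdb-region", ".txt");
        Files.write(f, "eu-west-1\n".getBytes());
        System.out.println(resolve(f));                                  // eu-west-1
        System.out.println(resolve(f.resolveSibling("missing.txt")));    // us-east-1
    }
}
```

Keeping the fallback to the current default means the change is backwards compatible for existing US deployments.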

Thanks,
Matthew

priam.clustername property is ignored

According to the Properties Reference, the cluster name for a given appId can be set via the property
'priam.clustername'.
However, setting it has no effect, and clusters always get the default name 'cass_cluster'.

This unfortunately leads to several follow-up issues, e.g. with backups, node discovery, etc.

The 1.0 branch doesn't work with cassandra-1.0.10

What's the last known working combination of Cassandra, Priam, and Tomcat versions? Loading the current 1.0 checkout, e49e963, into a Tomcat 6 server results in errors like this:

log4j: Finished configuring.
 INFO 03:13:16,382 Calling URL API: http://169.254.169.254/latest/meta-data/placement/availability-zone returns: us-east-1e
 INFO 03:13:16,384 Calling URL API: http://169.254.169.254/latest/meta-data/public-hostname returns: ec2-174-129-69-127.compute-1.amazonaws.com
 INFO 03:13:16,385 Calling URL API: http://169.254.169.254/latest/meta-data/public-ipv4 returns: 174.129.69.127
 INFO 03:13:16,386 Calling URL API: http://169.254.169.254/latest/meta-data/instance-id returns: i-a21ac6da
 INFO 03:13:16,387 Calling URL API: http://169.254.169.254/latest/meta-data/instance-type returns: c1.medium
Jul 6, 2012 3:13:17 AM org.apache.catalina.core.StandardContext start
SEVERE: Error listenerStart
Jul 6, 2012 3:13:17 AM org.apache.catalina.core.StandardContext start
SEVERE: Context [/Priam] startup failed due to previous errors
Jul 6, 2012 3:13:17 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/Priam] appears to have started a thread named [com.google.inject.internal.util.$Finalizer] but has failed to stop it. This is very likely to create a memory leak.
Jul 6, 2012 3:13:17 AM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap
SEVERE: The web application [/Priam] created a ThreadLocal with key of type [null] (value [com.google.inject.internal.InjectorImpl$1@4d898115]) and a value of type [java.lang.Object[]] (value [[Ljava.lang.Object;@7e79b177]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
log4j: log4j called after unloading, see http://logging.apache.org/log4j/1.2/faq.html#unload.
java.lang.IllegalStateException: Class invariant violation
        at org.apache.log4j.LogManager.getLoggerRepository(LogManager.java:199)
        at org.apache.log4j.LogManager.getLogger(LogManager.java:228)
        at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:243)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:255)
        at com.netflix.priam.utils.Throttle.<clinit>(Throttle.java:32)
        at sun.misc.Unsafe.ensureClassInitialized(Native Method)
        at sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:25)
        at sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:122)
        at java.lang.reflect.Field.acquireFieldAccessor(Field.java:918)
        at java.lang.reflect.Field.getFieldAccessor(Field.java:899)
        at java.lang.reflect.Field.set(Field.java:657)
        at org.apache.catalina.loader.WebappClassLoader.clearReferencesStaticFinal(WebappClassLoader.java:2023)
        at org.apache.catalina.loader.WebappClassLoader.clearReferences(WebappClassLoader.java:1883)
        at org.apache.catalina.loader.WebappClassLoader.stop(WebappClassLoader.java:1787)
        at org.apache.catalina.loader.WebappLoader.stop(WebappLoader.java:738)
        at org.apache.catalina.core.StandardContext.stop(StandardContext.java:4812)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4675)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:905)
        at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:740)
        at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:500)
        at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
        at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:445)
        at org.apache.catalina.core.StandardService.start(StandardService.java:519)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)

I'm muddling along with what's in master, but so much is broken that it's not particularly useful.

If there's a last known-good commit that I should be using, I can start from there.

Thanks

-Peter
