
spring-boot-data-geode's Introduction


Spring Boot for Apache Geode

Spring Boot for Apache Geode (SBDG) extends Spring Boot with auto-configuration and other convention & configuration features that simplify the development of Spring applications using Apache Geode.

NOTICE

2023-January-17:

At the end of 2022, VMware announced the general availability of the Spring for VMware GemFire portfolio of projects.

While these Spring-based projects for VMware GemFire are open source and a successor to the Spring for Apache Geode projects, they are not a replacement. VMware GemFire forked from the Apache Geode project and is not open source. Additionally, newer Apache Geode and VMware GemFire clients are not backwards compatible with older Apache Geode and VMware GemFire servers. You can begin the transition by starting here.

Alternatively, the Spring portfolio provides first-class integration with other comparable caching providers. Also, see here and here.

Finally, keep in mind that the Spring for Apache Geode projects will still be maintained until OSS and commercial support ends. Maintenance will only include CVE and critical fixes; no new features or major enhancements will be made. The Spring Boot for Apache Geode support timelines can be viewed here. Also see the Version Compatibility Matrix for up-to-date dependency and version details.

2022-October-24:

See the October 24th NOTICE on the Spring Data for Apache Geode GitHub project page for complete details.

Project Features

SBDG adds dedicated Spring Boot auto-configuration and actuator support for Apache Geode and integrates with other Spring projects as well as third-party Java libraries.

Among other things, this project builds on Spring Boot and Spring Data for Apache Geode (SDG) and:

  1. Auto-configures an Apache Geode ClientCache instance when Spring Data for Apache Geode (SDG) is on the application’s CLASSPATH.

  2. Auto-configures Apache Geode as a caching provider in Spring’s Cache Abstraction when Spring Data for Apache Geode (SDG) is on the application’s CLASSPATH to solve caching use cases (see the sketch following this list).

  3. Auto-configures Spring Data for Apache Geode (SDG) Repositories when Spring Data for Apache Geode (SDG) is on the application’s CLASSPATH and Spring Boot detects SDG Repositories in your Spring Boot application to solve persistence use cases.

  4. Auto-configures Apache Geode Functions when Spring Data for Apache Geode (SDG) is on the application’s CLASSPATH and Spring Boot auto-detects SDG Function implementations or executions to solve distributed compute problems.

  5. Auto-configures Apache Geode CQ when Spring Data for Apache Geode (SDG) is on the application’s CLASSPATH and Spring Boot auto-detects SDG CQ query declarations on application components to solve (near) real-time event stream processing use cases.

  6. Auto-configures Apache Geode as an HTTP Session state management provider when Spring Session for Apache Geode (SSDG) is on the application’s CLASSPATH.

  7. Auto-configures Apache Geode Security including Authentication & Authorization (Auth) as well as Transport Layer Security (TLS) using SSL.

  8. Provides additional support for Spring Boot and Spring Data for Apache Geode applications deployed to VMware Tanzu Application Service (TAS) using VMware Tanzu GemFire for VMs.

  9. Provides first-class support for Unit & Integration Testing in your Spring Boot applications using Apache Geode with Spring Test for Apache Geode (STDG).

These features, along with many other benefits, are provided by this project.
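
For example, the caching support in item 2 lets you use Spring's Cache Abstraction annotations (such as @Cacheable) with Apache Geode as the backing cache. Below is a minimal sketch, assuming hypothetical CustomerService and Customer types; SDG's @EnableCachingDefinedRegions is used here to create the client Region that backs the "CustomersByName" cache.

Look-aside caching sketch (hypothetical types)
import org.springframework.cache.annotation.Cacheable;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.config.annotation.EnableCachingDefinedRegions;
import org.springframework.stereotype.Service;

// SBDG auto-configures Spring's Cache Abstraction with Apache Geode as the provider,
// so no explicit @EnableCaching is needed; this configuration only creates the client
// Regions backing the caches named in the caching annotations.
@Configuration
@EnableCachingDefinedRegions
class CachingConfiguration { }

@Service
class CustomerService {

	// The result is cached in the Apache Geode Region named "CustomersByName".
	@Cacheable("CustomersByName")
	public Customer findByName(String name) {
		return expensiveLookup(name); // e.g. a database query or remote service call
	}

	private Customer expensiveLookup(String name) {
		return new Customer(name);
	}
}

class Customer {

	private final String name;

	Customer(String name) {
		this.name = name;
	}

	public String getName() {
		return this.name;
	}
}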

Learn

The following SBDG versions are currently maintained and developed.

Table 1. Supported Versions

| Version | Reference Documentation | Javadoc | Samples |
| --- | --- | --- | --- |
| current | Ref Docs | Javadoc | Samples |
| 2.0.0-SNAPSHOT | Ref Docs | Javadoc | Samples |
| 2.0.0-M5 | Ref Docs | Javadoc | Samples |
| 1.7.6-SNAPSHOT | Ref Docs | Javadoc | Samples |
| 1.7.5 | Ref Docs | Javadoc | Samples |
| 1.6.14-SNAPSHOT | Ref Docs | Javadoc | Samples |
| 1.6.13 | Ref Docs | Javadoc | Samples |

The following SBDG versions have reached their End-of-Life (EOL).

Table 2. Unsupported (EOL) Versions

| Version | Reference Documentation | Javadoc | Samples |
| --- | --- | --- | --- |
| 1.5.15-SNAPSHOT | Ref Docs | Javadoc | Samples |
| 1.5.14 | Ref Docs | Javadoc | Samples |
| 1.4.14-SNAPSHOT | Ref Docs | Javadoc | Samples |
| 1.4.13 | Ref Docs | Javadoc | Samples |
| 1.3.13.BUILD-SNAPSHOT | Ref Docs | Javadoc | Samples |
| 1.3.12.RELEASE | Ref Docs | Javadoc | Samples |
| 1.2.14.BUILD-SNAPSHOT | Ref Docs | Javadoc | Samples |
| 1.2.13.RELEASE | Ref Docs | Javadoc | Samples |
| 1.1.12.BUILD-SNAPSHOT | Ref Docs | Javadoc | Samples |
| 1.1.11.RELEASE | Ref Docs | Javadoc | Samples |
| 1.0.2.BUILD-SNAPSHOT | Ref Docs | Javadoc | Samples |
| 1.0.1.RELEASE | Ref Docs | Javadoc | Samples |

See Spring Boot’s Releases in the Support Versions Wiki page for more details.

Get Started!

To start using SBDG immediately, simply add the following dependency to your Spring Boot application's Maven POM or Gradle build file:

SBDG Maven POM dependency
<dependency>
    <groupId>org.springframework.geode</groupId>
    <artifactId>spring-geode-starter</artifactId>
    <version>2.0.0-M5</version>
</dependency>
SBDG Gradle build dependency
dependencies {
    implementation "org.springframework.geode:spring-geode-starter:2.0.0-M5"
}

If you are using a SNAPSHOT or MILESTONE version of SBDG, perhaps to pick up a bug fix, improvement or new feature, be sure to declare the appropriate Spring Repository. For example, when using a MILESTONE (e.g. M1), declare the Spring Milestone Repository.

Spring Milestone Repository declared in Maven POM
<repositories>
    <repository>
        <id>spring-milestone</id>
        <url>https://repo.spring.io/milestone</url>
    </repository>
</repositories>
Spring Milestone Repository declared in build.gradle
repositories {
    maven { url "https://repo.spring.io/milestone" }
}
Note
To use a SNAPSHOT, simply change the URL from https://repo.spring.io/milestone to https://repo.spring.io/snapshot.
Note
Spring SNAPSHOT and MILESTONE artifacts are not published to Maven Central. Only GA release bits are published to Maven Central. When using GA bits, you do not need to declare a Repository for Maven Central when using Maven. You do need to declare mavenCentral() when using Gradle.

Getting Started with Spring Initializr

To make the task of creating a project even easier, the Spring Team recommends that you start at start.spring.io.

Use this link to create a Spring Boot project using Apache Geode.

In addition to declaring the SBDG dependency, org.springframework.geode:spring-geode-starter, the Maven POM or Gradle build file generated with Spring Initializr at start.spring.io includes the SBDG BOM, conveniently declared in a dependency management block in both Maven and Gradle projects. This is convenient when you anticipate that you will need to use more than one SBDG module.

For example, if you will also be using the org.springframework.geode:spring-geode-starter-session module for your (HTTP) Session management needs, or perhaps the org.springframework.geode:spring-geode-starter-test module to write Unit & Integration Tests for your Spring Boot, Apache Geode applications, then you can simply add the dependency and let the BOM manage the version for you. This also makes it easier to switch versions without having to change all the dependencies; simply change the version of the BOM.

Simple Spring Boot, Apache Geode application

In this section, we build a really simple Spring Boot application using Apache Geode showing you how to get started quickly, easily and reliably.

For our example, we will create and persist a User to Apache Geode, then look up the User by name.

We start by defining our User application domain model class.

User class
@Getter
@ToString
@EqualsAndHashCode
@RequiredArgsConstructor
@Region("Users")
class User {

	@lombok.NonNull @Id
	private final String name;

}

We use Project Lombok to simplify the implementation of our User class. Otherwise, the only requirement to store Users in Apache Geode is to declare the User-to-data-store mapping. We do this by annotating the User class with the SDG @Region mapping annotation and by declaring the User.name property to be the ID of User instances.

By declaring the @Region mapping annotation we are stating that instances of User will be stored in an Apache Geode cache Region named “Users”. The Spring Data @Id annotation serves to declare the identifier for a User object stored in Apache Geode. This is not unlike JPA’s @javax.persistence.Table and @javax.persistence.Id mapping annotations.

Note
An Apache Geode Region is equivalent to a database table and the cache is equivalent to a database schema. A database schema is a namespace for a collection of tables whereas an Apache Geode cache is a namespace for a group of Regions that hold the data. Each data store has its own data structure to organize and manage data. An RDBMS uses a tabular data structure. Graph databases use a graph. Well, Apache Geode uses a Region, which is simply a key/value data structure, or a map. In fact, an Apache Geode Region implements java.util.Map (indirectly) and is essentially a distributed, horizontally scalable, highly concurrent, low-latency (among other things) Map implementation.
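
To make the Map analogy concrete, the following is a minimal, hypothetical sketch of using a Region directly through its Map-like API. It assumes the "Users" Region bean exists (for example, created by the @EnableEntityDefinedRegions annotation shown later in this example).

Region used as a Map (illustrative sketch)
import org.apache.geode.cache.Region;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

@Component
class UserRegionClient {

	@Autowired
	@Qualifier("Users")
	private Region<String, User> users;

	void putAndGet() {
		this.users.put("jonDoe", new User("jonDoe")); // like Map.put(key, value)
		User user = this.users.get("jonDoe");         // like Map.get(key)
	}
}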

Next, let’s define a Spring Data CrudRepository to persist and access Users stored in Apache Geode.

UserRepository
interface UserRepository extends CrudRepository<User, String> { }

Finally, let’s create a Spring Boot application to tie everything together.

Spring Boot, Apache Geode application
@Slf4j
@SpringBootApplication
@EnableClusterAware
@EnableEntityDefinedRegions(basePackageClasses = User.class)
public class UserApplication {

	public static void main(String[] args) {
		SpringApplication.run(UserApplication.class, args);
	}

	@Bean
	@SuppressWarnings("unused")
	ApplicationRunner runner(UserRepository userRepository) {

		return args -> {

			long count = userRepository.count();

			assertThat(count).isZero();

			log.info("Number of Users [{}]", count);

			User jonDoe = new User("jonDoe");

			log.info("Created User [{}]", jonDoe);

			userRepository.save(jonDoe);

			log.info("Saved User [{}]", jonDoe);

			count = userRepository.count();

			assertThat(count).isOne();

			log.info("Number of Users [{}]", count);

			User jonDoeFoundById = userRepository.findById(jonDoe.getName()).orElse(null);

			assertThat(jonDoeFoundById).isEqualTo(jonDoe);

			log.info("Found User by ID (name) [{}]", jonDoeFoundById);
		};
	}
}

@Getter
@ToString
@EqualsAndHashCode
@RequiredArgsConstructor
@Region("Users")
class User {

	@lombok.NonNull @Id
	private final String name;

}

interface UserRepository extends CrudRepository<User, String> { }

The UserApplication class is annotated with @SpringBootApplication making it a proper Spring Boot application. With SBDG on the classpath, this effectively makes our application an Apache Geode application as well. SBDG will auto-configure an Apache Geode ClientCache instance by default when SBDG is on the application classpath.

With the SDG @Region mapping annotation, we declared that instances of User will be stored in the “Users” Region. However, we have not yet created a “Users” Region. This is where the @EnableEntityDefinedRegions annotation comes in handy. Like JPA/Hibernate’s ability to create database tables from our @Entity declared classes, SDG’s @EnableEntityDefinedRegions annotation scans the classpath for application entity classes (e.g. User) and detects any classes annotated with @Region in order to create the named Region required by the application to persist data. The basePackageClasses attribute is a type-safe way to limit the scope of the scan.

While useful and convenient during development, @EnableEntityDefinedRegions was not made into an auto-configuration feature by default since there are many ways to define and configure a Region, which varies from data type to data type (e.g. transactional data vs. reference data), and varies greatly by use case and requirements.
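
For instance, the same "Users" Region could instead be defined explicitly as a bean. The following is a minimal sketch using SDG's ClientRegionFactoryBean (names are illustrative), which is one of the many ways a Region can be defined and configured by hand.

Explicit "Users" client Region bean definition (sketch)
import org.apache.geode.cache.GemFireCache;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.client.ClientRegionFactoryBean;

@Configuration
class UsersRegionConfiguration {

	// Defines the "Users" client Region as a PROXY Region (all data resides on the servers).
	@Bean("Users")
	ClientRegionFactoryBean<String, User> usersRegion(GemFireCache gemfireCache) {

		ClientRegionFactoryBean<String, User> usersRegion = new ClientRegionFactoryBean<>();

		usersRegion.setCache(gemfireCache);
		usersRegion.setShortcut(ClientRegionShortcut.PROXY);

		return usersRegion;
	}
}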

We make use of one more powerful annotation, SBDG’s @EnableClusterAware, which allows you to switch from local-only, embedded development to a client/server topology with no code or configuration changes.

Tip
You can learn more about the @EnableClusterAware annotation in SBDG’s reference documentation (see here and in the Getting Started Sample).

Our Java main method uses Spring Boot’s SpringApplication class to bootstrap the Apache Geode ClientCache application.

Finally, we declare an ApplicationRunner bean to persist a User and then look up the stored User by ID (or "name"). Along the way, we log the operations to see the application in action.

Example application log output (formatted to fit this screen)
...
2021-01-26 20:46:34.842  INFO 33218 --- [main] example.app.user.UserApplication : Started UserApplication in 4.561 seconds (JVM running for 5.152)
2021-01-26 20:46:34.996  INFO 33218 --- [main] example.app.user.UserApplication : Number of Users [0]
2021-01-26 20:46:34.996  INFO 33218 --- [main] example.app.user.UserApplication : Created User [User(name=jonDoe)]
2021-01-26 20:46:35.025  INFO 33218 --- [main] example.app.user.UserApplication : Saved User [User(name=jonDoe)]
2021-01-26 20:46:35.027  INFO 33218 --- [main] example.app.user.UserApplication : Number of Users [1]
2021-01-26 20:46:35.029  INFO 33218 --- [main] example.app.user.UserApplication : Found User by ID (name) [User(name=jonDoe)]
...

That’s it! That’s all!

We have just created a simple Spring Boot application using Apache Geode to persist and access data.

Where To Next

To continue your journey of learning, see the Reference Documentation and jump into the Examples below.

Examples

The single most relevant "source of truth" on how to get started quickly, easily and reliably using Spring Boot for Apache Geode (SBDG) to solve problems is to start with the Samples. There, you will find different examples with documentation and code showing you how to use SBDG to effectively handle specific application concerns, like Caching.

Additionally, there are examples that walk you through the evolution of SBDG to really showcase what SBDG affords you. The examples start by building a simple Spring Boot application using Apache Geode’s API only. Then, the app is rebuilt using Spring Data for Apache Geode (SDG) to show the simplifications that SDG brings to the table. Finally, the app is rebuilt once more using SBDG to demonstrate the full power of Apache Geode when combined with Spring Boot. The examples can be found in the PCCDemo GitHub repository. Each app can be deployed to Pivotal CloudFoundry (PCF) and bound to a Pivotal Cloud Cache (PCC) service instance. By using SBDG, little to no code or configuration changes are required to run the app locally and then later deploy the same app to a managed environment like PCF. It just works!

Then, there is the Temperature Service example app showcasing an Internet of Things (IoT) and Event Stream Processing (ESP) Use Case to manage Temperature Sensors and Monitors, powered by Apache Geode with the help of SBDG to make the application configuration and implementation as simple as can be.

Spring Boot Project Site

You can find documentation, issue management, support, samples, and guides for using Spring Boot at https://projects.spring.io/spring-boot/

Code of Conduct

Please see our code of conduct.

Reporting Security Vulnerabilities

Please see our Security policy.

License

Spring Boot and Spring Boot for Apache Geode are Open Source Software released under the Apache 2.0 license.

spring-boot-data-geode's People

Contributors

jujoramos, jxblum, making, msecrist, rwinch, spring-operator, wlund-pivotal, yozaner1324


spring-boot-data-geode's Issues

Remove unnecessary, explicit exclusion on 'javax.servlet:javax.servlet-api'

The explicit exclusion on javax.servlet:javax.servlet-api in SBDG will be unnecessary once SD[G] Moore/2.2 is rebased on the Apache Geode version that includes a fix for GEODE-7107.

At this point, SBDG can then be rebased on SDG Moore-X (where X is in [Moore-RC3, Moore-RELEASE, Moore-SR1, ... ], i.e. whichever version of SDG Moore is rebased on the Apache Geode version including the fix for GEODE-7107), thereby making the exclude no longer necessary.

Autowiring a GemfireTemplate into the application is not working in all cases

Autowiring the o.s.d.g.GemfireTemplate as per the documentation does not work as documented.

It seems to have to do with the lazy initialization of the Region. If the Repository is annotated with @DependsOn("Customers"), then the problem is resolved (a sketch of the workaround appears after the error output below).

@Repository
public class DemoRepository {

	@Autowired
	@Qualifier("customersTemplate")
	private GemfireTemplate customersTemplate;

	public void putData(String key, String value) {
		customersTemplate.put(key, value);
	}
}

Fails with:

Description:

Field customersTemplate in com.example.demo.client.repo.DemoRepository required a bean of type 'org.springframework.data.gemfire.GemfireTemplate' that could not be found.


Action:

Consider defining a bean of type 'org.springframework.data.gemfire.GemfireTemplate' in your configuration.
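
A sketch of the workaround described above: annotating the repository with @DependsOn("Customers") forces the "Customers" Region (and the auto-configured customersTemplate bean derived from it) to be initialized first. Names are taken from the example above.

DemoRepository with the @DependsOn workaround (sketch)
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.DependsOn;
import org.springframework.data.gemfire.GemfireTemplate;
import org.springframework.stereotype.Repository;

@Repository
@DependsOn("Customers") // ensure the "Customers" Region and its template are created before this bean
public class DemoRepository {

	@Autowired
	@Qualifier("customersTemplate")
	private GemfireTemplate customersTemplate;

	public void putData(String key, String value) {
		this.customersTemplate.put(key, value);
	}
}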

Logging should reuse the existing logging format rather than overwriting it

Adding this starter to an empty Spring Boot application alters the standard logging format, e.g. using 1.1.0.M3:


  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.6.RELEASE)

[info 2019/07/15 14:21:55.653 CEST <main> tid=0x1] Starting DemoApplication on gemini.lan with PID 24874 (/Users/snicoll/Downloads/test-geode/target/classes started by snicoll in /Users/snicoll/Downloads/test-geode)

[info 2019/07/15 14:21:55.656 CEST <main> tid=0x1] No active profile set, falling back to default profiles: default

2019-07-15 14:21:56.171  INFO 24874 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data repositories in DEFAULT mode.
2019-07-15 14:21:56.184  INFO 24874 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 6ms. Found 0 repository interfaces.
[info 2019/07/15 14:21:56.309 CEST <main> tid=0x1] @Bean method PdxConfiguration.pdxDiskStoreAwareBeanFactoryPostProcessor is non-static and returns an object assignable to Spring's BeanFactoryPostProcessor interface. This will result in a failure to process annotations such as @Autowired, @Resource and @PostConstruct within the method's declaring @Configuration class. Add the 'static' modifier to this method to avoid these container lifecycle issues; see @Bean javadoc for complete details.

[info 2019/07/15 14:21:56.405 CEST <main> tid=0x1] Bean 'org.springframework.data.gemfire.config.annotation.ContinuousQueryConfiguration' of type [org.springframework.data.gemfire.config.annotation.ContinuousQueryConfiguration$$EnhancerBySpringCGLIB$$935c745d] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

[info 2019/07/15 14:21:56.410 CEST <main> tid=0x1] Bean 'org.springframework.geode.boot.autoconfigure.RegionTemplateAutoConfiguration' of type [org.springframework.geode.boot.autoconfigure.RegionTemplateAutoConfiguration$$EnhancerBySpringCGLIB$$73f3c190] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

[info 2019/07/15 14:21:57.259 CEST <main> tid=0x1] 
---------------------------------------------------------------------------
  
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with this
  work for additional information regarding copyright ownership.

...

I guess the logging infrastructure is configured on startup at some point. In the example above, we can see the two log entries in the "Spring Boot format" interleaved with entries in another format.

404 issue with PCC

Hello,

I am trying to test @EnableClusterConfiguration with PCC and I am getting the following exception:

Caused by: org.springframework.web.client.HttpClientErrorException: 404 Not Found
	at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:94)
	at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:79)
	at org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63)
	at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:730)
	at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:688)

Include support for using Micrometer and Spring Boot Actuator to provide runtime metrics for Apache Geode and VMware Tanzu GemFire.

Spring Boot for Apache Geode (SBDG) already includes Spring Boot Actuator support with extensive metrics and runtime configuration metadata for the Apache Geode cache instance, whether a client or a peer node in the cluster.

However, the Spring Boot Actuator integration needs to be refactored from the old model to the new HealthIndicators.

Additionally, now that Observability is a major theme for the Spring Framework 6 generation, Micrometer 2.0 integration should be evaluated.

Remove unnecessary, explicit exclusion on 'org.apache.logging.log4j:log4j-core'

The explicit exclusion on org.apache.logging.log4j:log4j-core in SBDG will be unnecessary once SD[G] Moore/2.2 is rebased on Apache Geode 1.9.1, which will include a fix for GEODE-7058.

At this point, SBDG can then be rebased on SDG Moore-X (where X is in [Moore-RC3, Moore-RELEASE, Moore-SR1, ... ], i.e. whichever version of SDG Moore is rebased on Apache Geode 1.9.1), thereby making the exclude no longer necessary.

Workaround Pivotal Spring Cloud Services bug consuming and discarding EnumerablePropertySources from the Environment

When the org.springframework.cloud:spring-cloud-services-starter-service-registry dependency is added to the CLASSPATH of a Spring Boot application (and specifically, a Spring Boot for Pivotal GemFire (SBDG) application, i.e. org.springframework.geode:spring-gemfire-starter), Pivotal Spring Cloud Services will create a "bootstrap" ApplicationContext with a "bootstrap" Environment.

During the creation of this "bootstrap" ApplicationContext and Environment, specific property sets (aka PropertySources) are copied from the "main" ApplicationContext's Environment to the "bootstrap" Environment. In this particular case, the boot.data.gemfire.cloudcache configuration PropertySource is copied from "main" to "bootstrap". This custom, SBDG-specific PropertySource contains essential information from the environment/context in which the SBDG app is deployed (e.g. PCF when using PCC). One such piece of pertinent information is the Authentication credentials extracted from the VCAP environment variables to allow a Spring Boot, Pivotal GemFire cache client application to authenticate with the bound PCC cluster when deployed to PCF.

However, Pivotal Spring Cloud Services is very specific about which PropertySources it collects from the "main" Environment to include in the "bootstrap" Environment. It specifically collects EnumerablePropertySources and later discards them, which then prevents the Spring Boot, Pivotal GemFire cache client application from successfully authenticating with the bound PCC cluster when deployed to PCF.

This enhancement works around this Pivotal Spring Cloud Services behavior (bug?).

Add Sample with Guide and Example Code for Getting Started

This ticket tracks the work to create a new Sample with a Guide and Example Code on getting started quickly, easily and reliably with Spring Boot for Apache Geode or Pivotal GemFire (SBDG).

This guide will walk users through:

  1. Creating a new SBDG project with Spring Initializr at start.spring.io.
  2. Creating a simple SBDG application that persists data to Apache Geode, locally.
  3. Switching the application to a client/server topology.
  4. Pushing the application to a managed platform (e.g. Pivotal Platform, formerly known as Pivotal CloudFoundry (PCF), using Pivotal Cloud Cache (PCC)).

Change 'org.springframework.data:spring-data-geode-test' dependency in test starters from testCompile to compile

Currently, the org.springframework.data:spring-data-geode-test dependency in the spring-geode-starter-test module and the org.springframework.data:spring-data-gemfire-test dependency in the spring-gemfire-starter-test module are test-compile dependencies.

This prevents the Spring Test for Apache Geode & Pivotal GemFire (STDG) test framework and library from being used as a test-compile dependency in users' application development projects using Apache Geode, or Pivotal GemFire, to write Unit and Integration Tests.

The STDG dependencies declared in the test starters need to be made into compile-time dependencies.

Add capability to select a specific user when connecting to Geode

PCC is in the process of enabling instance sharing on PCF, whereby one app could gain access to a shared cluster from another org and space. A new user role, readonly, is introduced with this new feature so that when instances are shared with apps in another org, only read-only access is granted.

We need a way to let SBDG apps work with readonly credentials; this means that the apps cannot push configuration, etc.

In the current state, the service key will only have the readonly user (and not other users) in the service binding, so rather than asking the user to configure which role to pick up, it may be as simple as just using the existing user from the binding.

Add `@EnableClusterAware` annotation and support

This new annotation will enable users to seamlessly switch between "local-only" mode and client/server, which will particularly help when switching between various environments (e.g. DEV to TEST), or rather non-managed to managed.

A good example of this would be when the application developers are developing locally and then deploying to PCF on an ad hoc basis, connecting their Spring Boot applications to PCC.

Conversely, if users need to move from PCF back to local (without the presence of a server cluster, perhaps), for example when debugging or writing additional tests for functionality, then SBDG will enable this transition smoothly without additional code or configuration changes.

@ClientCacheApplication - Overrides log

@ClientCacheApplication overrides my logger settings. None of my Log4j configuration is applied since I have declared this.

In my application, I don't need the client cache either. Is there a way to avoid using this? Kindly advise.

If I comment out this line, I get the error below:

Consider defining a bean named 'gemfireCache' in your configuration

Changes made:

@Configuration
// @ClientCacheApplication(name = "GemFireClientCacheApplication", durableClientId = "store", keepAlive = true, readyForEvents = true, subscriptionEnabled = true)**
@EnableGemfireCaching
@EnableSecurity
@EnablePdx
public class GemfireConfiguration {

}

The log shows the override. Please let me know how I can avoid it.

Log4J 2 Configuration:
2018-12-06T20:11:26.807-05:00 [APP/PROC/WEB/0] [OUT] jar:file:/home/vcap/app/BOOT-INF/lib/geode-core-9.1.1.jar!/log4j2.xml

[OPTIONAL] Enable configuration of Apache Geode/PCC using both spring.data.gemfire.* and spring.data.geode.* properties.

This new feature would add support for both spring.data.gemfire.* as well as spring.data.geode.* properties.

Of course, some sort of precedence is required if duplicate, but equivalent properties are configured, for example:

spring.data.gemfire.cache.log.level=INFO
...

spring.data.geode.cache.log.level=WARN

For instance, should the log level of the cache be WARN or INFO? One strategy could be that the last property definition wins. However, this can get quite confusing when the definitions are spread across multiple locations and property files, in addition to the use of Spring profiles.

Add starters for Pivotal Cloud Cache

This enhancement will include the following Spring Boot starters for Pivotal Cloud Cache (PCC):

  • spring-cloudcache-starter
  • spring-cloudcache-starter-actuator
  • spring-cloudcache-starter-session
  • spring-cloudcache-starter-test

Add smoke tests to validate use of the starter in a vanilla application

In the recent past, a number of issues were discovered adding the spring-geode-starter to a vanilla Spring Boot application. Given that this project built successfully and a release was made with those issues, I think a number of integration/smoke tests are missing.

The two recent examples I have in mind are:

  • With the 1.1 line adding the starter alone was working but combining the starter with any other starter would break without an explicit library exclusion (see #42)
  • With the 1.2 line adding the starter alone would break as both spring-mvc and servlet-api were present in the classpath, triggering an embedded application server bootstrap.

While both these issues are actually located outside of this project (and workarounds were swiftly applied), the integration in a Spring Boot application was broken, and having an automated way of discovering such issues prior to a release is necessary.

I don't know how easy that's going to be. Perhaps Testcontainers can help make sure a vanilla Geode instance is running for such tests.

HTTP client does not authenticate when pushing cluster config from client to server using @EnableClusterConfiguration with PCC 1.5.

Currently, when a Spring Boot, Apache Geode/Pivotal GemFire ClientCache application is annotated with @EnableClusterConfiguration(useHttp=true) and deployed to PCF, connecting to PCC (1.5), an Exception is thrown:

2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT [error 2018/11/15 18:17:50.069 UTC <main> tid=0x1] Application run failed
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.springframework.web.client.HttpClientErrorException: 401 Unauthorized
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:185)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:883)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:161)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:551)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:386)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.SpringApplication.run(SpringApplication.java:307)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at io.pivotal.cloudcache.app.CloudcachePizzaStoreApplication.main(CloudcachePizzaStoreApplication.java:37)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.lang.reflect.Method.invoke(Method.java:498)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT Caused by: org.springframework.web.client.HttpClientErrorException: 401 Unauthorized
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:94)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:79)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:730)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:688)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:622)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.data.gemfire.config.admin.remote.RestHttpGemfireAdminTemplate.createRegion(RestHttpGemfireAdminTemplate.java:189)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.data.gemfire.config.schema.definitions.RegionDefinition.create(RegionDefinition.java:124)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.data.gemfire.config.annotation.ClusterConfigurationConfiguration$ClusterSchemaObjectInitializer.lambda$null$0(ClusterConfigurationConfiguration.java:275)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.Optional.ifPresent(Optional.java:159)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.data.gemfire.config.annotation.ClusterConfigurationConfiguration$ClusterSchemaObjectInitializer.lambda$start$1(ClusterConfigurationConfiguration.java:275)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.stream.SortedOps$SizedRefSortingSink.end(SortedOps.java:352)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.data.gemfire.config.annotation.ClusterConfigurationConfiguration$ClusterSchemaObjectInitializer.start(ClusterConfigurationConfiguration.java:274)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:182)
   2018-11-15T13:17:50.07-0500 [APP/PROC/WEB/0] OUT 	... 22 more
   2018-11-15T13:17:50.13-0500 [APP/PROC/WEB/0] OUT Exit status 1

GemFireSecurityException: Error: Anonymous User on running application on PCF

We received an exception on a running application (i.e. no deployment or restart in between). It seems like somehow the PCC binding got messed up or dropped. After restarting the server it got fixed. Can someone please check what could possibly be the issue so that we can avoid this in the future on production? Below is the stack trace from the log:

2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:677) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_192]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_192]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.5.23.jar!/:8.5.23]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_192]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] Caused by: org.apache.geode.security.GemFireSecurityException: Error: Anonymous User
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.security.IntegratedSecurityService.getSubject(IntegratedSecurityService.java:117) ~[geode-core-1.0.0-incubating.jar!/:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.security.IntegratedSecurityService.authorize(IntegratedSecurityService.java:231) ~[geode-core-1.0.0-incubating.jar!/:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.security.IntegratedSecurityService.authorize(IntegratedSecurityService.java:219) ~[geode-core-1.0.0-incubating.jar!/:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.security.IntegratedSecurityService.authorize(IntegratedSecurityService.java:209) ~[geode-core-1.0.0-incubating.jar!/:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.cache.tier.sockets.command.KeySet.cmdExecute(KeySet.java:99) ~[geode-core-1.0.0-incubating.jar!/:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:162) ~[geode-core-1.0.0-incubating.jar!/:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:785) ~[geode-core-1.0.0-incubating.jar!/:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.cache.tier.sockets.LegacyServerConnection.doOneMessage(LegacyServerConnection.java:85) ~[na:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1166) ~[geode-core-1.0.0-incubating.jar!/:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_192]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_192]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:557) ~[geode-core-1.0.0-incubating.jar!/:na]
2018-11-28T16:28:53.424-05:00 [APP/PROC/WEB/0] [OUT] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_192]

Improve the intelligence around enabling Auto-configuration for CQs

By default, SBDG will auto-configure CQs for any Spring Boot, ClientCache application without regard to whether a server or cluster exists in the first place.

When the client runs in local-only mode, or in a local topology (i.e. client LOCAL Regions), then as of Apache Geode 1.9, Geode will throw benign Exceptions in the log output of the application while it attempts to satisfy the minimum pre-fill connection count of the Pool (e.g. "DEFAULT" Pool) used to register CQs, if any.

java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_192]
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_192]
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_192]
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_192]
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_192]
	at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_192]
	at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:946) ~[geode-core-1.9.0.jar:na]
	at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:887) ~[geode-core-1.9.0.jar:na]
	at org.apache.geode.internal.net.SocketCreator.connectForClient(SocketCreator.java:859) ~[geode-core-1.9.0.jar:na]
	at org.apache.geode.cache.client.internal.ConnectionImpl.connect(ConnectionImpl.java:106) ~[geode-core-1.9.0.jar:na]
	at org.apache.geode.cache.client.internal.ConnectionConnector.connectClientToServer(ConnectionConnector.java:75) ~[geode-core-1.9.0.jar:na]
	at org.apache.geode.cache.client.internal.ConnectionFactoryImpl.createClientToServerConnection(ConnectionFactoryImpl.java:111) ~[geode-core-1.9.0.jar:na]
	at org.apache.geode.cache.client.internal.QueueManagerImpl.initializeConnections(QueueManagerImpl.java:452) [geode-core-1.9.0.jar:na]
	at org.apache.geode.cache.client.internal.QueueManagerImpl.start(QueueManagerImpl.java:290) [geode-core-1.9.0.jar:na]
	at org.apache.geode.cache.client.internal.PoolImpl.start(PoolImpl.java:337) [geode-core-1.9.0.jar:na]
	at org.apache.geode.cache.client.internal.PoolImpl.finishCreate(PoolImpl.java:176) [geode-core-1.9.0.jar:na]
	at org.apache.geode.cache.client.internal.PoolImpl.create(PoolImpl.java:162) [geode-core-1.9.0.jar:na]
	at org.apache.geode.internal.cache.PoolFactoryImpl.create(PoolFactoryImpl.java:372) [geode-core-1.9.0.jar:na]
	at org.apache.geode.internal.cache.GemFireCacheImpl.determineDefaultPool(GemFireCacheImpl.java:2902) [geode-core-1.9.0.jar:na]
	at org.apache.geode.internal.cache.GemFireCacheImpl.getDefaultPool(GemFireCacheImpl.java:1151) [geode-core-1.9.0.jar:na]
	at org.apache.geode.internal.cache.GemFireCacheImpl.getQueryService(GemFireCacheImpl.java:4161) [geode-core-1.9.0.jar:na]
	at org.apache.geode.internal.cache.GemFireCacheImpl.getQueryService(GemFireCacheImpl.java:255) [geode-core-1.9.0.jar:na]
	at org.springframework.data.gemfire.listener.ContinuousQueryListenerContainer.setCache(ContinuousQueryListenerContainer.java:421) [spring-data-geode-2.2.0.RC3.jar:2.2.0.RC3]
	at org.springframework.data.gemfire.config.annotation.ContinuousQueryConfiguration.continuousQueryListenerContainer(ContinuousQueryConfiguration.java:202) [spring-data-geode-2.2.0.RC3.jar:2.2.0.RC3]
	at org.springframework.data.gemfire.config.annotation.ContinuousQueryConfiguration$$EnhancerBySpringCGLIB$$d4ae8c50.CGLIB$continuousQueryListenerContainer$6(<generated>) [spring-data-geode-2.2.0.RC3.jar:2.2.0.RC3]
	at org.springframework.data.gemfire.config.annotation.ContinuousQueryConfiguration$$EnhancerBySpringCGLIB$$d4ae8c50$$FastClassBySpringCGLIB$$b90f023d.invoke(<generated>) [spring-data-geode-2.2.0.RC3.jar:2.2.0.RC3]
	at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) [spring-core-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:363) [spring-context-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.data.gemfire.config.annotation.ContinuousQueryConfiguration$$EnhancerBySpringCGLIB$$d4ae8c50.continuousQueryListenerContainer(<generated>) [spring-data-geode-2.2.0.RC3.jar:2.2.0.RC3]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_192]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_192]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_192]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_192]
	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:640) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:625) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1339) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1178) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:557) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) [spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:878) ~[spring-beans-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877) ~[spring-context-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549) ~[spring-context-5.2.0.RC2.jar:5.2.0.RC2]
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141) ~[spring-boot-2.2.0.BUILD-SNAPSHOT.jar:2.2.0.BUILD-SNAPSHOT]
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747) ~[spring-boot-2.2.0.BUILD-SNAPSHOT.jar:2.2.0.BUILD-SNAPSHOT]
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) ~[spring-boot-2.2.0.BUILD-SNAPSHOT.jar:2.2.0.BUILD-SNAPSHOT]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:315) ~[spring-boot-2.2.0.BUILD-SNAPSHOT.jar:2.2.0.BUILD-SNAPSHOT]
	at example.app.caching.lookaside.BootGeodeLookAsideCachingApplication.main(BootGeodeLookAsideCachingApplication.java:40) ~[classes/:na]

These Exceptions continue to clutter up (add noise to) the log output of the running application without adding any real value. They are, for all intents and purposes, useless, unhelpful Exceptions that users' applications cannot do anything about anyway.

Furthermore, a local-only application does not care whether a server or cluster of servers is available or not, as that is not critical to the proper function of the application.

If, however, the application were a legitimate client/server application in the Apache Geode client/server topology with CQ listeners (i.e. POJO methods annotated with @ContinuousQuery), then the application would fail fast if a server or cluster were not available, since SBDG would go on to register the application CQs (as expressed in the @ContinuousQuery annotations on POJO handler methods) on startup.

Still, more care should be taken to avoid this ugly situation triggered and exposed by SBDG in the first place. Therefore, this ticket serves to improve on the intelligent registration of CQ Auto-configuration.
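
For reference, the following is a minimal, hypothetical sketch of the kind of CQ handler method SBDG registers on startup (the "/TemperatureReadings" Region, field names and query are illustrative only).

CQ handler method (illustrative sketch)
import org.apache.geode.cache.query.CqEvent;
import org.springframework.data.gemfire.listener.annotation.ContinuousQuery;
import org.springframework.stereotype.Component;

@Component
class BoilingTemperatureEventHandler {

	// SBDG auto-configures a ContinuousQueryListenerContainer and registers this CQ with the server(s).
	@ContinuousQuery(name = "BoilingTemperatures",
		query = "SELECT * FROM /TemperatureReadings r WHERE r.temperature >= 100")
	public void handleBoilingTemperature(CqEvent event) {
		// react to the (near) real-time event, e.g. log or forward the new value
	}
}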

Add dedicated support for in-line caching.

Currently, Apache Geode/Pivotal GemFire offers the JDBC Connector. However, this proves to be too limited in practice to be useful. Therefore, SBDG will offer dedicated support for in-line caching that achieves two primary goals:

  1. Support for any backend data store (e.g. RDBMS, Document, Graph, Key/Value, other) by integrating the Spring Data Repositories infrastructure.

  2. Support for complex mapping. E.g. when using an RDBMS to back the cache for in-line caching, enable users to use Hibernate, or any other ORM tool (e.g. EclipseLink) to handle complex object hierarchies and relationships.

On top of these primary goals, this change will also include dedicated support to configure in-line caching regardless of the context (e.g. standalone vs. cloud).
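
As a rough illustration of the first goal only (not the eventual SBDG API), the following sketch shows an Apache Geode CacheLoader that reads through to any backend data store via a Spring Data CrudRepository on a Region cache miss.

Repository-backed read-through CacheLoader (illustrative sketch)
import org.apache.geode.cache.CacheLoader;
import org.apache.geode.cache.CacheLoaderException;
import org.apache.geode.cache.LoaderHelper;
import org.springframework.data.repository.CrudRepository;

class RepositoryBackedCacheLoader<K, V> implements CacheLoader<K, V> {

	private final CrudRepository<V, K> repository;

	RepositoryBackedCacheLoader(CrudRepository<V, K> repository) {
		this.repository = repository;
	}

	// Called by Apache Geode on a cache miss; loads the value from the backend data store
	// (RDBMS, document store, etc.) through the Spring Data repository.
	@Override
	public V load(LoaderHelper<K, V> helper) throws CacheLoaderException {
		return this.repository.findById(helper.getKey()).orElse(null);
	}

	@Override
	public void close() { }
}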

Consider adding an 'org.eclipse.jetty:jetty-server' exclusion to the starters

The org.eclipse.jetty:jetty-server dependency (transitively) pulls in the javax.servlet:javax.servlet-api dependency on the application classpath, which causes Spring Boot to think the user's application is always a Web application, and as such, Spring Boot will bootstrap an embedded Servlet Container (e.g. Tomcat, Jetty).

This prevents the user from simply creating non-Web Spring Boot applications.

ClientCache Region default is PROXY; unclear section 4 prose implies default is LOCAL

In this version of the SBDG docs, https://docs.spring.io/autorepo/docs/spring-boot-data-geode-build/1.0.1.RELEASE, I became confused. Section 4 (Building ClientCache Applications) has two sentences that gave me the impression that the default for any Region in the ClientCache would be LOCAL. (It isn't. The default is PROXY.)

Here are the 2 problematic sentences:

However, the ClientCache instance does not require a GemFire/Geode server (i.e. CacheServer) to be running in order to use the ClientCache instance. It is perfectly valid to create a cache client and perform local data access operations on LOCAL Regions.

Later on, when needed, you can expand your Spring Boot, ClientCache application into a fully functional client/server architecture by changing the client Region’s data policy from LOCAL to PROXY or CACHING_PROXY, and send/receive data to/from 1 or more servers, respectively.

I believe that these sentences were written with the unstated assumption that a developer would know to set the client cache first as a near cache, by setting
@EnableCachingDefinedRegions(clientRegionShortcut = ClientRegionShortcut.LOCAL)

The @EnableCachingDefinedRegions annotation is discussed in section 6. People like me tend to read sections in order when first attempting to learn about something new. So, I read that second problematic sentence (the one that starts with "Later on") as implying that the default client cache would have Regions of type LOCAL. And, I read that sentence before reading section 6.


To make this more clear, I think I would change that first sentence to something more like:

However, the ClientCache instance does not require a GemFire/Geode server (i.e. CacheServer) to be running in order to use the ClientCache instance. It is perfectly valid to create a cache client, specify any Regions to be of type LOCAL, and perform local data access operations on those LOCAL Regions.

It might be helpful to also include a link to an example or section that defines how to use the @EnableCachingDefinedRegions annotation.
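
For instance, a minimal sketch of the near-cache configuration referred to above, which makes the caching-defined client Regions LOCAL so that no server is required (otherwise client Regions default to PROXY).

Near-cache (LOCAL client Regions) configuration sketch
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.config.annotation.EnableCachingDefinedRegions;

@Configuration
@EnableCachingDefinedRegions(clientRegionShortcut = ClientRegionShortcut.LOCAL)
class NearCachingConfiguration { }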

[OPTIONAL] Add support to log warnings about the use of certain Annotations (e.g. @Indexed) in the absence of the enabling Annotation (e.g. @EnableIndexing).

While this is rather difficult to achieve in practice, it may be worth exploring the possibilities.

One of the challenges in properly detecting the use of certain annotations (e.g. @Indexed) on application domain objects without the associated "enabling" annotation (e.g. @EnableIndexing) is that, by way of example, the application domain model objects containing @Indexed annotations are not even picked up unless the @EnableIndexing annotation is present, since that annotation triggers the "scan" needed to 1) limit the scope of the application object types introspected and 2) detect whether any @Indexed annotations have been declared. Without the scan, the objects containing the annotations are not even inspected in the first place, so how would the infrastructure know the @Indexed annotation was used without @EnableIndexing? The "enabling" annotation is needed to begin the scan for the object-level annotations, like @Indexed.
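
For context, the following minimal sketch (hypothetical entity and field names) shows the two annotations discussed above used together, where the "enabling" annotation triggers the entity scan that also picks up the @Indexed declarations.

@EnableIndexing with @Indexed (illustrative sketch)
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;
import org.springframework.data.gemfire.config.annotation.EnableIndexing;
import org.springframework.data.gemfire.mapping.annotation.Indexed;
import org.springframework.data.gemfire.mapping.annotation.Region;

@Configuration
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableIndexing
class IndexingConfiguration { }

@Region("Customers")
class Customer {

	// An OQL Index on "name" is only created because @EnableIndexing is present on the configuration.
	@Indexed
	private String name;
}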

Generated POMs refer to Spring Security

Generated poms of this project have wrong metadata:

  • The SCM url refers to the Spring Security project
  • The list of developers contains Rob and Joe (which I believe is an oversight as well).
  • The project URL refers to Spring Security
