
jhipster-registry's Introduction

JHipster Registry


This is the JHipster Registry service, based on Spring Cloud Netflix, Eureka, and Spring Cloud Config.

Full documentation is available on the JHipster documentation for microservices.

Deploy to Heroku

Click this button to deploy your instance of the registry:

Deploy to Heroku

There are a few limitations when deploying to Heroku.

  • The registry will only work with native configuration (and not Git config).
  • The registry service cannot be scaled up to multiple dynos to provide redundancy. You must deploy multiple applications (i.e. click the button more than once). This is because Eureka requires distinct URLs to synchronize in-memory state between instances.

Running locally

To run the cloned repository:

  • For development, run ./mvnw -Pdev,webapp to start the application, or run ./mvnw and then npm install && npm start in a separate terminal for hot reloading of the client-side code.
  • For the production profile, run ./mvnw -Pprod

HashiCorp Vault Integration

Development Mode

By default, the JHipster Registry integration uses a Vault server with an in-memory backend. The data is not persisted, so you will need to configure secrets again after every restart. The in-memory configuration provides an easy way to test the integration before switching to the recommended server mode.

  • Start vault server docker container:
docker-compose -f src/main/docker/vault.yml up -d
  • The default configured root token is jhipster-registry. We will use the default secrets engine backend mounted on the secret path. Configure secrets using the UI, CLI, or HTTP API (see the CLI sketch after this list).
  • Create a new secret sub-path jhipster-registry/dev and add the following secret in JSON format. Here, jhipster-registry refers to the application name and dev refers to the development profile. Follow the same convention to configure secrets for other applications.
{
  "spring.security.user.password": "admin123!"
}
  • Start the JHipster Registry server in development mode using the following command (skipping test execution):
./mvnw -DskipTests
  • After a successful start, open http://localhost:8761/ in a browser. You will need to sign in with the password configured in Vault above.
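
For reference, the same secret can also be written from the command line. A minimal sketch, assuming the Vault CLI is installed on the host, the dev container from vault.yml listens on the default port 8200, and the kv secrets engine is mounted at secret:

export VAULT_ADDR=http://localhost:8200
export VAULT_TOKEN=jhipster-registry

# write the secret under the <application>/<profile> sub-path
vault kv put secret/jhipster-registry/dev spring.security.user.password='admin123!'

# read it back to verify
vault kv get secret/jhipster-registry/dev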

Server Mode

JHipster Registry also provides configuration to use the native file system as the persistent backend.

  • Uncomment the following configuration in vault.yml. You can refer to config.hcl to view the provided Vault server configuration:
command: server
volumes:
  - ./vault-config/config:/vault/config
  - ./vault-config/logs:/vault/logs
  - ./vault-config/data:/vault/file
  • Start vault server docker container:
docker-compose -f src/main/docker/vault.yml up -d
  • Open the Vault server UI to initialize the master key shares. In this guide, we will enter 1 as the number of key shares and 1 as the key threshold value. Refer to the Vault documentation for the recommended configuration. Note down the initial root token and the unseal key and keep them in a safe place. You will need the key to unseal the Vault server after a restart.
  • Enable the kv secrets engine backend and use secret as the mount path (see the CLI sketch after this list).
  • Create a new secret sub-path jhipster-registry/dev and add the following secret in JSON format. Here, jhipster-registry refers to the application name and dev refers to the development profile. Follow the same convention to configure secrets for other applications.
{
  "spring.security.user.password": "admin123!"
}
  • In this guide, we will use the token authentication mechanism to retrieve secrets from the Vault server. Update bootstrap.yml to specify the root token in place of the default dev token.
vault:
  authentication: token
  token: jhipster-registry # In server mode, provide a token having read access on secrets
  • Start the JHipster Registry server in development mode using the following command (skipping test execution):
./mvnw -DskipTests
  • After a successful start, you will need to sign in with the password configured in Vault.
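
The same server-mode setup can also be done from the command line instead of the UI. A minimal sketch, assuming the Vault CLI and a single unseal key; <unseal-key> and <initial-root-token> are placeholders produced by the initialization step:

export VAULT_ADDR=http://localhost:8200

# unseal the server (required after every restart)
vault operator unseal <unseal-key>

# authenticate with the initial root token
vault login <initial-root-token>

# mount the kv secrets engine at "secret" and add the registry secret
vault secrets enable -path=secret kv
vault kv put secret/jhipster-registry/dev spring.security.user.password='admin123!'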

OAuth 2.0 and OpenID Connect

OAuth is a stateful security mechanism, like HTTP Session. Spring Security provides excellent OAuth 2.0 and OIDC support, and this is leveraged by JHipster. If you’re not sure what OAuth and OpenID Connect (OIDC) are, please see What the Heck is OAuth?

Please note that JSON Web Token (JWT) is the default option when using the JHipster Registry. It must be started with the oauth2 Spring profile to enable OAuth authentication.

In order to run your JHipster Registry with OAuth 2.0 and OpenID Connect:

  • For development, run SPRING_PROFILES_ACTIVE=dev,oauth2,native ./mvnw
  • For production, you can use environment variables. For example:
export SPRING_PROFILES_ACTIVE=prod,oauth2,api-docs
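
As a hedged sketch, a packaged build could then be run with those variables (the exact war name under target/ depends on the version, so the wildcard is only illustrative):

export SPRING_PROFILES_ACTIVE=prod,oauth2,api-docs
./mvnw -Pprod package
java -jar target/jhipster-registry-*.war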

Keycloak

Keycloak is the default OpenID Connect server configured with JHipster.

If you want to use Keycloak, you can follow the documentation for Keycloak.

Okta

If you'd like to use Okta instead of Keycloak, you can follow the documentation for Okta.

Auth0

If you'd like to use Auth0 instead of Keycloak, you can follow the documentation for Auth0.

NOTE: When using the JHipster Registry, also add URLs for port 8761 ("Allowed Callback URLs" and "Allowed Logout URLs").

jhipster-registry's People

Contributors

abdelmoghit1, anunnakian, clementdessoude, coderla, dajay, danielfran, deepu105, delverdev, dependabot[bot], docswebapps, erikkemperman, g-boy05, gmarziou, jdubois, jhipster-bot, jkutner, julienmrgrd, juliensadaoui, maivantan1992, mraible, mshima, mwwbf, pascalgrimaud, pierrebesson, prasanth08, ruddell, sudharakap, tarkil, vishal423, xetys


jhipster-registry's Issues

Peer Awareness

I am trying to set up peer awareness in application.yml using the configuration below, but when I start the server I don't see any logs trying to ping the "peer" host. Per the configuration (http://cloud.spring.io/spring-cloud-netflix/spring-cloud-netflix.html), this should be showing a lot of chatter. Am I missing something here?

eureka:
    instance:
        appname: registry
        hostname: localhost
        prefer-ip-address: true
    server:
        enable-self-preservation: true
    client:
        registerWithEureka: false
        fetchRegistry: false
        serviceUrl:
            defaultZone: http://peer2/eureka/
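
For comparison, a minimal peer-aware sketch (the peer1/peer2 hostnames are placeholders): with registerWithEureka and fetchRegistry set to false, the instance never registers with or fetches the registry from its peer, so no replication chatter is expected.

eureka:
    instance:
        appname: registry
        hostname: peer1
    client:
        registerWithEureka: true
        fetchRegistry: true
        serviceUrl:
            defaultZone: http://peer2/eureka/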

jhipster-registry container cannot resolve names outside a stack on Docker Cloud

I did see this problem before and it occurs because of the image used to create the container. On generator-jhipster I had to move to java:8 instead of alpine.

Unfortunately the trade-off is that my applications were 250MB using alpine, and moving to java:8 they grew to more than 850MB!

The impact here is: jhipster-registry must be inside the stack where you create your gateways and applications, and in my case, where the jhipster-console is in another stack, it makes both blind to each other.

Stacks can help you organize your life and speed up your management mainly when you have many containers like monitoring, datastores, etc.

I think this is less an issue and more advice about what happens when using alpine on Docker Cloud for now.

Check that the Heroku button works with the current "develop" branch

This can only be checked by @jkutner -> Joe, can you tell me when you can work on this? This will soon be a blocker issue for us, I'm sorry but I only thought of this now. @JulienMrgrd should be available to help you, you can ask him directly.

  • This is to check https://elements.heroku.com/buttons/jhipster/jhipster-registry still works fine
  • On our develop branch we have migrated to Angular 4 -> this breaks the current system, as you need to run Yarn so the TypeScript code is compiled into JavaScript
  • There are several ways to do this, but basically you can do: yarn install && mvn after checking out the project and it will work

Run as a service

I would like to run the jhipster registry as a service link

It is working fine for the jhipster gateway.

Am I missing something, or is it not implemented (yet)?

Thanks

Setup Travis for the registry

Although the registry doesn't have a lot of code, we should still set up Travis for it. It would help when upgrading dependencies.
What we could do:

  • mvn package and launch the jar to see if there are errors
  • generate a sample microservice and see if it can connect to the registry
  • add config files and request them with the config server API

Any other ideas?

Secure the JHipster Registry

The Registry should be secured:

  • Applications should connect to the registry using HTTP Basic security (that's the easiest way to implement security for the moment)
  • Users should also use a login/password -> we will use what already exists for JHipster applications
  • SSL should be an option in the future -> we could use letsencrypt for public Registry, but as this is more complex to set up, this will be done later

Memory usage goes up

Hi,

I cloned the project and am trying to run it on a Linux machine. All containers are created just fine; I can see them with "docker ps" and look at usage with "docker stats".

However, memory keeps going up and up... eventually my laptop (core i5, 10 GB RAM) stops responding and dies!

I tried with mem_limit on each service definition in docker-compose.yml, and I can see the limit working in docker stats... however, memory keeps going up, and when the container gets above 95% usage it dies: the container stops working and is removed. So this setup only keeps my PC from dying, but nothing else.

I also tried with a JHipster project and set JAVA_OPTS with -Xmx and other memory parameters, but the result is the same.

Can you please help me on this?

Thanks!

Enable use of a ssh git configuration source with docker

I have managed to make the registry's docker image use an ssh git configuration source. This only works with docker. It takes advantage of a feature provided by Spring Cloud Config that uses the local ssh keys in ~/.ssh.

  • install openssh (this should be added to the Dockerfile): apk update && apk add openssh

These are the setup instructions that should be added to the docs; they only need to be done the first time.

  • Go inside the container: docker-compose exec jhipster-registry sh
  • Generate keys: ssh-keygen
  • Copy the content of the public key in /root/.ssh/id_rsa.pub to the remote git repo (for example in the GitHub settings)
  • Test the ssh connection: ssh -T git@github.com
  • Reply yes to add the git repo host to the list of known hosts:
The authenticity of host 'github.com (192.30.252.131)' can't be established.
Are you sure you want to continue connecting (yes/no)? 
  • Restart the registry without removing the container, then it should be able to read a git repo secured by ssh.

Then, in order not to lose our setup every time we recreate or update the container, we can mount the /root/.ssh/ folder to a docker volume (I'm not really sure of the security implications of this but if it's on the same machine it should be alright).

In the end I use this docker-compose file
jhipster-registry-git.yml

version: '2'
services:
    jhipster-registry:
        container_name: jhipster-registry
        image: jhipster/jhipster-registry
        volumes:
            - ./ssh/:/root/.ssh/
        environment:
             - SPRING_PROFILES_ACTIVE=prod
             - GIT_URI=ssh:[email protected]:PierreBesson/test-registry.git
        ports:
            - 8761:8761

You can also mount your own ~/.ssh instead of a ./ssh subdirectory to have it work out of the box with your repos.

Correct code style to align to Generated code

In much of the code added as part of the migration to NG2, I found that the code style of the generator is not respected. It needs to be aligned so that we don't spend time on this during every upgrade.

[BUG]`./mvnw -Pprod` is invalid

./mvnw -Pprod

#output

----------------------------------------------------------
	Application 'jhipster-registry' is running! Access URLs:
	Local: 		http://127.0.0.1:8761
	External: 	http://xxx:8761
	Profile(s): 	[native, dev]
----------------------------------------------------------
#mvn -Pprod is invalid too

but export SPRING_PROFILES_ACTIVE=prod,native,swagger && ./mvnw is OK

----------------------------------------------------------
	Application 'jhipster-registry' is running! Access URLs:
	Local: 		http://127.0.0.1:8761
	External: 	http://xxx:8761
	Profile(s): 	[prod, native, swagger]
----------------------------------------------------------

this behavior comes from https://github.com/jhipster/jhipster-registry/blob/master/src/main/docker/dev/Dockerfile#L13

JHipster Version(s)
[email protected] /home/xxx/jhipster-registry
└── [email protected] 

JHipster configuration, a .yo-rc.json file generated in the root folder
{
  "generator-jhipster": {
    "jhipsterVersion": "3.4.0",
    "baseName": "jhipsterRegistry",
    "packageName": "io.github.jhipster.registry",
    "packageFolder": "io/github/jhipster/registry",
    "authenticationType": "jwt",
    "hibernateCache": "no",
    "clusteredHttpSession": "no",
    "websocket": "no",
    "databaseType": "no",
    "devDatabaseType": "no",
    "prodDatabaseType": "no",
    "searchEngine": "no",
    "buildTool": "maven",
    "enableSocialSignIn": false,
    "useSass": false,
    "enableTranslation": false,
    "applicationType": "microservice",
    "testFrameworks": [],
    "jhiPrefix": "jhi",
    "skipClient": true,
    "skipUserManagement": true,
    "serverPort": "8761",
    "jwtSecretKey": "xxx"
  }
}
Entity configuration(s) entityName.json files generated in the .jhipster directory

ls: no such file or directory: .jhipster/*.json

Browsers and Operating System

java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

git version 2.7.4

node: v7.6.0

npm: 4.1.2

yeoman: 1.8.5

yarn: 0.20.3

Add prod environment variables to dockerfile

Currently, when running the registry with Docker, only the dev,native profile is usable, because it is not possible to specify the git URI for the configuration source.

I propose to add the following:

  • GIT_URI
  • GIT_SEARCH_PATH

And then pass them to the war as command line args.

We have already made the changes in the docker-compose generator.
@pascalgrimaud is this OK for you, or do you have a better solution?
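
A hedged sketch of what this could look like in a docker-compose file once the variables are supported (the repository URL and search path below are placeholders):

version: '2'
services:
    jhipster-registry:
        image: jhipster/jhipster-registry
        environment:
            - SPRING_PROFILES_ACTIVE=prod
            - GIT_URI=https://github.com/example/central-config
            - GIT_SEARCH_PATH=config
        ports:
            - 8761:8761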

Registry does not run in dev mode locally

When initially running the registry locally, it does not start with the dev profile but instead uses the prod profile.

Steps to Reproduce:

git clone https://github.com/jhipster/jhipster-registry
cd jhipster-registry
mvn

In the console output, you will see the following:

2016-04-17 14:20:09.526  INFO 19436 --- [           main] i.g.jhipster.registry.JHipsterRegistry   : The following profiles are active: **prod,native**
2016-04-17 14:20:14.689  INFO 19436 --- [           main] i.g.jhipster.registry.JHipsterRegistry   : Running with Spring profile(s) : **[prod, native]**

jhipster microservice box

Really thank you for the new microservice jhipster generator.

I have used it to build a cloud box and I have some notes and issues:

  • The gateway microservice generator no longer includes grunt; it uses gulp by default
  • What about authentication and security? I was expecting to find an authserver microservice option.
  • After running my gateway app, I'm not able to get the JHipster welcome page. Is it a Zuul routing issue?

I have pushed my microservices to https://github.com/jgasmi/jhipster-cloud.

Thank you in advance for any help.

Registry home dashboard

The current home screen is kind of blank. I would like to add a dashboard with some relevant info about apps, replicas, etc. so that you get an overview, kind of like the Spring registry dashboard. @jdubois what do you think? Shall I do it?

Improve "configuration" screen

This screen could have the following useful features:

  • Configuration source type: either git or filesystem (native profile)
  • Label: master by default (only with git?)
  • Config source URI: either git or filesystem with the specified search-path
  • Autocompletion
  • Instructions for setting up a git repository with username/password (similar to the instructions in the SSH screen)
  • Read the application, profile and label from the URL hash (routes) so that we can link to it from the apps startup logs

Docker Build Fails

When running docker build locally, the build fails with:

$ docker build -f src/main/docker/dev/Dockerfile .
Sending build context to Docker daemon 116.8 MB
Step 1 : FROM alpine:3.3
 ---> d7a513a663c1
...
(25/25) Installing openjdk8 (8.77.03-r0)
Executing busybox-1.24.1-r7.trigger
Executing ca-certificates-20160104-r2.trigger
Executing java-common-0.1-r0.trigger
OK: 141 MiB in 36 packages
 ---> b0218fb4744b
Removing intermediate container 749ba252f4c2
Step 4 : ENV SPRING_PROFILES dev
 ---> Running in d1584e159d60
 ---> af1a04d2bb47
Step 5 : ADD *.war /jhipster-registry.war
No source files were specified

Issue: The build fails because it cannot find the target/jhipster-registry-0.0.1-SNAPSHOT.war file.

Note: Because the build fails, docker does not successfully tag the built image (since it is still technically an intermediate container). So running docker images produces

$ docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
<none>                             <none>              af1a04d2bb47        10 minutes ago      145.5 MB

Any attempt to run this image will result in:

$ docker run af1a04d2bb47
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: No command specified.
See 'C:\Program Files\Docker Toolbox\docker.exe run --help'.

This is because the Dockerfile does not set the default CMD until Step 9.

Steps to Reproduce:

  • Install the latest version of Docker Toolbox (v1.11.0)
  • Create a new local copy of the registry
    git clone https://github.com/jhipster/jhipster-registry && cd jhipster-registry
  • Build the registry
    mvn -Pdev package
  • Build the docker image
    docker build -f src/main/docker/dev/Dockerfile .

OS: Windows 7 Pro

jhipster-registry does not support git config

docker-compose.yml

version: '2'
services:
    jhipster-registry:
        image: jhipster/jhipster-registry:v2.6.0
        #volumes:
        #    - ./central-server-config:/central-config
        # By default the JHipster Registry runs with the "prod" and "native"
        # Spring profiles.
        # "native" profile means the filesystem is used to store data, see
        # http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html
        environment:
            - SPRING_PROFILES_ACTIVE=dev
            - GIT_URI=https://github.com/jhipster/jhipster-registry/
        ports:
            - 8761:8761

the application-dev.yml#L64

spring:
    profiles:
        active: dev
        include: native # maybe this should be commented out?

the config/config.controller.js#L26

function load () {
            ProfileService.getProfileInfo().then(function (response) {
                vm.activeRegistryProfiles = response.activeProfiles;
                vm.isNative = vm.activeRegistryProfiles.includes('native'); // vm.isNative is always true
                vm.nativeSearchLocation = response.nativeSearchLocation;
                vm.gitUri = response.gitUri;
                vm.gitSearchLocation = response.gitSearchLocation;
            });
        }

the config/config.html#L32-L40

        <tr ng-if="vm.isNative" >
            <td>Native (Local Filesystem)</td>
            <td>{{ vm.nativeSearchLocation }}</td>
        </tr>
        <tr ng-if="!vm.isNative" > <!-- never show -->
            <td>Git</td>
            <td>{{ vm.gitUri }}</td>
            <td>/{{ vm.gitSearchLocation }}</td>
        </tr>

"mvn -Pprod package" build failed

Used "mvn -e -X -Pprod package", got error below:
[INFO] [16:51:13] gulp-imagemin: Minified 0 images
[ERROR] fs.js:839
[ERROR] return binding.lstat(pathModule._makeLong(path));
[ERROR] ^
[ERROR]
[ERROR] Error: ENOENT: no such file or directory, lstat '/Users/renjun/ttt/jh/jhipster-registry/src/main/webapp/app/templates.js'
[ERROR] at Error (native)
[ERROR] at Object.fs.lstatSync (fs.js:839:18)
[ERROR] at DestroyableTransform.TransformStream as _transform
[ERROR] at DestroyableTransform.Transform._read (/Users/renjun/ttt/jh/jhipster-registry/node_modules/through2/node_modules/readable-stream/lib/_stream_transform.js:159:10)
[ERROR] at DestroyableTransform.Transform._write (/Users/renjun/ttt/jh/jhipster-registry/node_modules/through2/node_modules/readable-stream/lib/_stream_transform.js:147:83)
[ERROR] at doWrite (/Users/renjun/ttt/jh/jhipster-registry/node_modules/through2/node_modules/readable-stream/lib/_stream_writable.js:313:64)
[ERROR] at writeOrBuffer (/Users/renjun/ttt/jh/jhipster-registry/node_modules/through2/node_modules/readable-stream/lib/_stream_writable.js:302:5)
[ERROR] at DestroyableTransform.Writable.write (/Users/renjun/ttt/jh/jhipster-registry/node_modules/through2/node_modules/readable-stream/lib/_stream_writable.js:241:11)
[ERROR] at write (/Users/renjun/ttt/jh/jhipster-registry/node_modules/gulp-concat/node_modules/readable-stream/lib/_stream_readable.js:623:24)
[ERROR] at flow (/Users/renjun/ttt/jh/jhipster-registry/node_modules/gulp-concat/node_modules/readable-stream/lib/_stream_readable.js:632:7)
[ERROR] at DestroyableTransform.pipeOnReadable (/Users/renjun/ttt/jh/jhipster-registry/node_modules/gulp-concat/node_modules/readable-stream/lib/stream_readable.js:664:5)
[ERROR] at emitNone (events.js:67:13)
[ERROR] at DestroyableTransform.emit (events.js:166:7)
[ERROR] at emitReadable
(/Users/renjun/ttt/jh/jhipster-registry/node_modules/gulp-concat/node_modules/readable-stream/lib/_stream_readable.js:448:10)
[ERROR] at emitReadable (/Users/renjun/ttt/jh/jhipster-registry/node_modules/gulp-concat/node_modules/readable-stream/lib/_stream_readable.js:444:5)
[ERROR] at readableAddChunk (/Users/renjun/ttt/jh/jhipster-registry/node_modules/gulp-concat/node_modules/readable-stream/lib/_stream_readable.js:187:9)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------

Base Dockerfile on a pre-built Java alpine image

Currently our Dockerfile does all the work to install Java on an alpine base image. This is taking some time when (re)building the image.

I think we could speed-up the build if we based our Dockerfile on something like : anapsix/docker-alpine-java. @pascalgrimaud, I think it might be your inspiration for the image anyway. Also, it will be annoying to upgrade the version number ourselves so I suggest that we base our image on anapsix/alpine-java while we wait for an official java image based on Alpine. What do you think ?

Bootswatch offline

configserver.status is wrong

I have a service srv1 in dev profile, when it starts and connects to the registry it prints:

Config Server:  http://localhost:8761/config/master/srv1-dev.yml

This URL returns a 404 error, the correct URL should be:

Config Server:  http://localhost:8761/config/srv1/dev/master

This is configured in application.yml:

# Property used on app startup to check the config server status
configserver:
    name: JHipster Sample Spring Cloud Config
    status: ${spring.cloud.config.uri}/${spring.cloud.config.label}/${spring.cloud.config.name}-${spring.cloud.config.profile}.yml
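
A hedged sketch of the corrected property, rebuilding the status URL in the name/profile/label form reported above (assuming the /config prefix stays part of spring.cloud.config.uri):

# Property used on app startup to check the config server status
configserver:
    name: JHipster Sample Spring Cloud Config
    status: ${spring.cloud.config.uri}/${spring.cloud.config.name}/${spring.cloud.config.profile}/${spring.cloud.config.label}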

Some little fixes to bring before next release

  • On develop branch

(some of these fixes eventually concern the generator too)

  • Footer always at the bottom of the page.
  • The Spring Cloud Configuration page: the dropdown doesn't work and "404 Failed to load resource".
  • The code-block color inside the SSH page.
  • In the Sign-in modal: "Sign in" text is duplicated and there are unavailable links ("Did you forget your password?", "You don't have an account yet? Register a new account").
  • And the most important, the issue #130

EDIT :

DiscoveryComposite - eureka UNKNOWN

Hello,

I have a problem with Eureka, it seems. I start jhipster-registry. After I log in to the interface I see a problem with DiscoveryComposite - eureka. The status is UNKNOWN.

image

Can someone explain to me what this means?

[Question] Make a temporary config override for a specific application?

We deploy our microservices as well as jhipster registry using Docker.

Is there a way to make a temp change to one of the microservices that pull their configuration from jhipster registry without modifying the config in the central repo? Any of the following will work:

  1. Environment variables (which I can set in docker-compose.yml for jhipster-registry and/or the microservices)
  2. curl -XPOST ... to some jhipster registry endpoint

Or even something else.

[Proposal] Split the config & registry into separate instances

Is there a reason that both the registry & config servers were combined? As far as scaling this in a production environment, I think it would be better to separate the config instance(s) from the core registry. [Not that it would really break anything if it wasn't]

I am still new to Eureka, but I get the sense that we would be able to use the Eureka based registry beyond the current gateway discovery. Could be completely wrong, not sure, but just thought I would bring it up.

Add Zuul support, and allow monitoring of all services from the Registry

  • Add Zuul support to the Registry, so it can route requests to microservices/gateways/monoliths. The URL would be in the form /services/{application-name}/api/management
  • When #98 is done, have a full "Admin" menu, that uses an internal Zuul to access each microservice, gateway and monolith. Each "admin" entry will have a top drop-down list (like we have currently on Swagger for gateways), where the service can be selected.
  • Make the "admin" menu optional on the gateways and monoliths (as they can be monitored with the Registry)

Empty home page after login

  • develop branch

When you start the Registry (with yarn start) without a token, after signing in you are redirected to a blank home page. If you refresh or quit/go back to the home page, it becomes complete.

(screenshot: 2017-05-03 10-01-13)

  • Trace 1 (empty home page)
    -> (click on "sign in" button, in the modal, with admin/admin)
    -> login() function (in login.component.ts)
    -> identity(), line 61 (principal.service.ts)
    -> login() again
    -> registerAuthenticationSuccess(), line 53 (home.component.ts)
    -> login(), line 69, with redirect = null

  • Trace 2 (completed home page)
    -> (refresh or quit/back)
    -> canActivate() function (in user-route-access-service.ts)
    -> the home component's ngOnInit
    -> identity(), line 61
    -> ngOnInit, inside "then" and populateDashboard() (home.component.ts)

We have to find a way to correctly init (by calling ngOnInit) the home component.
Any ideas?

Note : Maybe related to jhipster/generator-jhipster#5658 and jhipster/generator-jhipster#5690

EDIT : And jhipster/generator-jhipster#5574

Docker container

I've just worked on a Dockerfile to containerize this application:

  • based on the image java:8
  • Docker Hub: the build can be automated, or the image pushed manually, to the JHipster organization

Command to launch the container (currently, on my local machine):

docker run -p 8761:8761 -d jhipster/jhipster-registry

@jdubois: can I add it to the Docker Hub JHipster organization, and should I configure Docker Hub to automate builds for this application?
cc @PierreBesson too because I don't know if you work on the registry

With this new image, we'll be able to test gateway / microservices (jhipster/generator-jhipster#2804)

Glyphicons font file 404 on prod profile

When building and running the registry with the prod profile, I get 404s on glyphicons font files:

http://192.168.2.10:8761/content/fonts/glyphicons-halflings-regular.woff2
http://192.168.2.10:8761/content/fonts/glyphicons-halflings-regular.woff
http://192.168.2.10:8761/content/fonts/glyphicons-halflings-regular.ttf

I've checked the 'target/www/content' folder. It contains a 'css' and 'images' folder, but no 'fonts' folder. And of course, the fonts folder is also missing in the generated war-file.

JHipster Registry 2.3.0

{
  "generator-jhipster": {
    "jhipsterVersion": "3.4.0",
    "baseName": "jhipsterRegistry",
    "packageName": "io.github.jhipster.registry",
    "packageFolder": "io/github/jhipster/registry",
    "authenticationType": "jwt",
    "hibernateCache": "no",
    "clusteredHttpSession": "no",
    "websocket": "no",
    "databaseType": "no",
    "devDatabaseType": "no",
    "prodDatabaseType": "no",
    "searchEngine": "no",
    "buildTool": "maven",
    "enableSocialSignIn": false,
    "useSass": false,
    "enableTranslation": false,
    "applicationType": "microservice",
    "testFrameworks": [],
    "jhiPrefix": "jhi",
    "skipClient": true,
    "skipUserManagement": true,
    "serverPort": "8080",
    "jwtSecretKey": "c9d37cefc48581919939d587c750ea215020765b"
  }
}

Am I doing something wrong, or is this a bug?

Fix Docker Hub automated build tags

ping @pascalgrimaud
Currently, the docker hub automated build is not correctly handling tags (see https://hub.docker.com/r/jhipster/jhipster-registry/builds/).
What we would like to have is:

  • jhipster-registry:latest -> get the latest tagged release of the registry
  • jhipster-registry:v1.0.0 -> get the version explicitly tagged as 1.0.0
  • jhipster-registry:master -> version from master

Also, when doing a release on GitHub, it should create a new tag on Docker Hub.

[FEATURE] Add a deploy to docker cloud button

Similar to the deploy to heroku button.

Try it here:

Deploy to Docker Cloud

It would simply require adding a docker-cloud.yml file at the root of the repo, similar to what we have with the Procfile. Contrary to what you might think at first, even though it comes from Docker, it is not entirely compatible with the latest docker-compose syntax, as it doesn't support version: '2'.

How to configure a high-availability config-server

In config client:

the single-node config-server configuration runs OK:

  config:
     label: master
     name: config-client
     profile: dev
     uri: http://admin:[email protected]:8761/config

but when I configure it with discovery enabled, the application fails to start:

spring:
  application:
    name: config-client    
    config:
      profile: dev
      label: master
      discovery:
        enabled: true                                
        serviceId: jhipster -registry(this is  the name JHipster  Registry  application)  
      name: config-client


eureka:
    instance:
        prefer-ip-address: true
    client:
        enabled: true
        healthcheck:
            enabled: true
        registerWithEureka: true
        fetchRegistry: true
        serviceUrl:
            defaultZone: http://admin:admin@localhost:8761/eureka/
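
For reference, a hedged sketch of a discovery-first client bootstrap configuration: the config block belongs under spring.cloud (not spring.application), and the serviceId must match the registry's Eureka application name with no stray spaces:

spring:
    application:
        name: config-client
    cloud:
        config:
            name: config-client
            profile: dev
            label: master
            discovery:
                enabled: true
                serviceId: jhipster-registry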

[Feature] allow the Registry to do REST calls to management APIs of the apps

The Registry could have a specific user/password that allows it to access the applications (microservices and gateway):

  • It could be able to access the /health and /info endpoints, so it could display better information on the applications
  • It could even have its admin UI point to the metrics, logs, configuration endpoints, so you could manage everything from the Registry

That means the Registry back-end would do REST requests to the applications, probably using the Spring REST template.

Fix the docker image build

Currently the prod build is broken which means that when running the "develop" docker image (docker run -p 8761:8761 jhipster/jhipster-registry:develop), you get a blank page 😢 .

This should be fixed and using the recent advances in the generator we should even have minification and Angular 4 improvements.

Maybe missing files/installations in Dockerfile

I can see new files at the root folder:

  • bower.json
  • package.json
  • gulpfile.js
  • and more...

There may be some issues:

=> we need to investigate a little bit to confirm that jhipster/jhipster-registry works well. Maybe it's possible with https://github.com/jhipster/jhipster-registry/blob/master/pom.xml#L330-L331

[HELP] jhipster-registry's springCloudConfig with git does not work

jhipster-registry's version is 2.6.0

src/main/resources/config/application-prod.yml

spring:
    profiles:
        active: prod
    cloud:
        config:
            server:
                #native:
                    #search-locations: file:./central-config
                git:
                    uri: https://github.com/spring-cloud-samples/config-repo
                prefix: /config

export SPRING_PROFILES_ACTIVE=prod,native,swagger && ./mvnw is successful

output

----------------------------------------------------------
	Application 'jhipster-registry' is running! Access URLs:
	Local: 		http://127.0.0.1:8761
	External: 	http://xx.xx.xx.xx:8761
	Profile(s): 	[prod, native, swagger]
----------------------------------------------------------

(screenshot: nothing.png)

Ctrl+C to stop the server

git clone https://github.com/spring-cloud-samples/config-repo

change the src/main/resources/config/application-prod.yml

spring:
    profiles:
        active: prod
    cloud:
        config:
            server:
                native:
                    search-locations: file:./config-repo
                #git:
                    #uri: https://github.com/spring-cloud-samples/config-repo
                prefix: /config

it's ok

(screenshot: ok.png)

and the foo-db.properties

Jhipster - Application configuration with the JHipster Registry

JHipster Version(s)
[email protected] /home/xxx/gateway
├── UNMET PEER DEPENDENCY @angular/[email protected]
├── UNMET PEER DEPENDENCY @angular/[email protected]
├── [email protected] 
└── UNMET PEER DEPENDENCY [email protected]

JHipster configuration, a .yo-rc.json file generated in the root folder
{
  "generator-jhipster": {
    "promptValues": {
      "packageName": "com.shunneng.gateway",
      "nativeLanguage": "zh-cn"
    },
    "jhipsterVersion": "4.1.1",
    "baseName": "gateway",
    "packageName": "com.shunneng.gateway",
    "packageFolder": "com/shunneng/gateway",
    "serverPort": "8080",
    "authenticationType": "uaa",
    "uaaBaseName": "uaa",
    "hibernateCache": "hazelcast",
    "clusteredHttpSession": false,
    "websocket": false,
    "databaseType": "sql",
    "devDatabaseType": "h2Disk",
    "prodDatabaseType": "mysql",
    "searchEngine": false,
    "messageBroker": false,
    "serviceDiscoveryType": "eureka",
    "buildTool": "maven",
    "enableSocialSignIn": false,
    "clientFramework": "angular2",
    "useSass": false,
    "clientPackageManager": "yarn",
    "applicationType": "gateway",
    "testFrameworks": [
      "gatling",
      "cucumber",
      "protractor"
    ],
    "jhiPrefix": "jhi",
    "enableTranslation": true,
    "nativeLanguage": "zh-cn",
    "languages": [
      "zh-cn"
    ]
  }
}
Entity configuration(s) entityName.json files generated in the .jhipster directory

ls: no such file or directory: .jhipster/*.json

Browsers and Operating System

java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

git version 2.7.4

node: v7.6.0

npm: 4.1.2

yeoman: 1.8.5

yarn: 0.20.3

Update to latest JHipster revision and lint

The generated code needs to be synced with the latest JHipster version and needs to be linted, as there are currently a lot of code quality/consistency issues.

cc @jdubois I think we need to do a review before the release

SSH public key could not be loaded: /root/.ssh/id_rsa.pub

I tried to create a jhipster-registry docker container with dokku. When I generate id_rsa.pub via ssh-keygen, I get the following log:

2016-12-10 08:34:05.744  INFO 1 --- [           main] i.g.jhipster.registry.JHipsterRegistry   :
----------------------------------------------------------
	Application 'jhipster-registry' is running! Access URLs:
	Local: 		http://127.0.0.1:8761
	External: 	http://172.17.0.4:8761
----------------------------------------------------------
2016-12-10 08:34:10.405  WARN 1 --- [  XNIO-2 task-4] i.g.j.registry.web.rest.SshResource      : SSH public key could not be loaded: /root/.ssh/id_rsa.pub
2016-12-10 08:35:05.545  INFO 1 --- [a-EvictionTimer] c.n.e.registry.AbstractInstanceRegistry  : Running the evict task with compensationTime 0ms
2016-12-10 08:36:05.545  INFO 1 --- [a-EvictionTimer] c.n.e.registry.AbstractInstanceRegistry  : Running the evict task with compensationTime 0ms
2016-12-10 08:37:05.548  INFO 1 --- [a-EvictionTimer] c.n.e.registry.AbstractInstanceRegistry  : Running the evict task with compensationTime 3ms

dokku version: 0.7.2
jhipster-registry tag: v2.5.5


I tried running "chmod 777 /root/.ssh/id_rsa.pub" via exec, but I get the same error.
What am I doing wrong?
