activiti-cloud-full-chart's Introduction

activiti-cloud-full-chart

Getting Started Guide

Running on Docker Desktop

Install Docker Desktop and make sure the included single node Kubernetes cluster is started.

Install the latest version of Helm.
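
For example, you could install Helm with Homebrew or with the official installer script (standard Helm installation methods, not specific to this chart; check the Helm docs for your platform):

brew install helm
# or
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash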

Add the magic host.docker.internal hostname to your hosts file:

echo "127.0.0.1        host.docker.internal" | sudo tee -a /etc/hosts

Install a recent version of ingress-nginx:

helm install --repo https://kubernetes.github.io/ingress-nginx ingress-nginx ingress-nginx

Update all dependencies:

helm dependency update charts/activiti-cloud-full-example

Create the Activiti Keycloak Client Kubernetes secret in the activiti namespace:

kubectl create secret generic activiti-keycloak-client \
   --namespace activiti \
   --from-literal=clientId=activiti-keycloak \
   --from-literal=clientSecret=`uuidgen`
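
The secret is created in the activiti namespace, which the helm upgrade --install command further below only creates via --create-namespace; if the namespace does not exist yet, create it first:

kubectl create namespace activiti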

Create a values.yaml file containing any values you want to customise from the chart's default values.yaml, as documented in the chart README.

For a local installation, to start with, this would be:

global:
  gateway:
    host: host.docker.internal
  keycloak:
    host: host.docker.internal
    clientSecretName: activiti-keycloak-client
    useExistingClientSecret: true

Alternatively, you can let Helm create the Activiti Keycloak Client Kubernetes secret by providing the following values:

global:
  gateway:
    host: host.docker.internal
  keycloak:
    host: host.docker.internal
    clientSecret: changeit

In a generic cluster install, you can just add --set global.gateway.domain=$YOUR_CLUSTER_DOMAIN to the helm command line, provided your DNS is configured with a wildcard entry *.$YOUR_CLUSTER_DOMAIN pointing to your cluster ingress.
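
For example (a sketch that reuses the install command shown below, assuming your wildcard DNS is already in place):

helm upgrade --install \
  --atomic --create-namespace --namespace activiti \
  --set global.gateway.domain=$YOUR_CLUSTER_DOMAIN \
  -f values.yaml \
  activiti charts/activiti-cloud-full-example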

Install or upgrade an existing installation:

helm upgrade --install \
  --atomic --create-namespace --namespace activiti \
  -f values.yaml \
  activiti charts/activiti-cloud-full-example

Uninstall:

helm uninstall --namespace activiti activiti

WARNING: PVCs are not deleted by helm uninstall; delete them manually unless you want to keep the data for another install.

kubectl get pvc --namespace activiti
kubectl delete pvc --namespace activiti ...

or just delete the namespace fully:

kubectl delete ns activiti

As an alternative, generate a Kubernetes descriptor that you can analyse offline or apply later with kubectl apply -f output.yaml:

helm template --validate \
  --atomic --create-namespace --dependency-update --namespace activiti \
  -f values.yaml \
  activiti charts/activiti-cloud-full-example
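
For instance, you could redirect the rendered manifests to output.yaml and apply them later (a usage sketch of the command above; the target namespace must already exist when applying):

helm template --validate \
  --atomic --create-namespace --dependency-update --namespace activiti \
  -f values.yaml \
  activiti charts/activiti-cloud-full-example > output.yaml

kubectl apply --namespace activiti -f output.yaml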

Enabling message partitioning

To enable partitioning, provide the following extra values; partitionCount defines how many partitions will be used, and the Helm deployment will create that many ReplicaSets of the query service and configure the runtime bundle (Rb) service with the number of partitions supported by Query:

global:
  messaging:
    # global.messaging.partitioned -- enables partitioned messaging in combination with messaging.enabled=true && messaging.role=producer|consumer
    partitioned: true
    # global.messaging.partitionCount -- configures number of partitioned consumers
    partitionCount: 4

Use Kafka instead of RabbitMQ

To switch the message broker to Kafka, add the following extra values:

global:
  messaging:
    broker: kafka
kafka:
  enabled: true
rabbitmq:
  enabled: false

Message Partitioning with Kafka

Kafka has a different architecture from RabbitMQ: a single Kafka topic can be served by a number of partitions that is independent of the number of consumers (as long as it is greater than or equal to it).

When configuring the Kafka broker in the Helm chart, it is possible to specify a partitionCount greater than or equal to the replicaCount (the number of consumers). Defining these two numbers independently allows the user to instantiate consumers only when needed, avoiding wasted resources.

global:
  messaging:
    partitioned: true
    # global.messaging.partitionCount -- set the Kafka partition number
    partitionCount: 4

activiti-cloud-query:
  # replicaCount -- set the Kafka consumer number
  replicaCount: 2

Enabling HorizontalPodAutoscaler (HPA)

Kubernetes supports horizontal scalability through the Horizontal Pod Autoscaler (HPA) mechanism. In activiti-cloud-full-chart it is now possible to enable HPA for the runtime-bundle and activiti-cloud-query microservices.

Requirements

The HorizontalPodAutoscaler can fetch metrics from aggregated APIs that, for Kubernetes (metrics.k8s.io), are provided by an add-on named Metrics Server.

So, Metrics Server needs to be installed and running in order to use the HPA feature. Please refer to the Metrics Server documentation for its installation.
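
One common way to install it, taken from the upstream Metrics Server documentation (verify it matches your cluster version), is:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml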

HPA Configuration

In the activiti-cloud-full-chart the HorizontalPodAutoscaler is disabled by default for backward compatibility. Please add the following configuration to your values.yaml to enable and use it:

runtime-bundle:
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 6
    cpu: 90
    memory: "2000Mi"
activiti-cloud-query:
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 4
    cpu: 90

This configuration (present in the hpa-values.yaml file in this repository) enables HPA for both runtime-bundle and activiti-cloud-query.

โš ๏ธ WARNING: the provided values are just an example. Please adjust the values to your specific use case.

Configuration Properties

Name                   Description                                 Default
enabled                enables the HPA feature                     false
minReplicas            starting number of replicas to be spawned
maxReplicas            max number of replicas to be spawned
cpu                    +1 replica over this average % CPU value
memory                 +1 replica over this average memory value
scalingPolicesEnabled  enables the scaling policies                true

Scaling Policies

Scaling policies allow Kubernetes to stabilize the number of pods when the load fluctuates rapidly. The scale-down policies are configured so that:

  • only 1 pod can be removed every minute.
  • only 15% of the current number of pods can be removed every minute.
  • the policy that scales down more pods is applied first.

The scale-up policies are the default Kubernetes ones.

These policies are enabled by default; specify scalingPolicesEnabled: false in the configuration to disable them.
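
For example, to keep HPA enabled but switch the custom scaling policies off for the runtime bundle (a sketch reusing the hpa block shown above):

runtime-bundle:
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 6
    cpu: 90
    scalingPolicesEnabled: false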

Activiti Cloud Query and HPA

Activiti Cloud supports both the RabbitMQ and Kafka message brokers. Activiti Cloud Query is a consumer of the message broker, so extra care is needed when configuring automatic scalability in order to keep it working properly.

As a general rule, automatic horizontal scalability for the query consumers should be enabled only when partitioning is enabled in Activiti Cloud.

Activiti Cloud Query and HPA with Kafka

In a partitioned installation, Kafka allows each consumer to connect to one or more partitions, with a maximum ratio of 1:1 between partitions and consumers (each partition is served by at most one consumer).

So, when configuring HPA, do not set maxReplicas to a value greater than partitionCount.
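
A sketch combining the values from the Kafka partitioning example above, keeping maxReplicas within the partition count:

global:
  messaging:
    partitioned: true
    partitionCount: 4

activiti-cloud-query:
  hpa:
    enabled: true
    minReplicas: 1
    # never larger than global.messaging.partitionCount
    maxReplicas: 4
    cpu: 90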

Activiti Cloud Query and HPA with RabbitMQ

When partitioning with RabbitMQ, the configuration spawns one replica for every partition, so you should avoid enabling the HorizontalPodAutoscaler in this case.

CI/CD

Runs on GitHub Actions.

To skip running release pipeline stages, simply add [skip ci] to your commit message.
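
For example (hypothetical commit message):

git commit -m "Update chart documentation [skip ci]"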

For Dependabot PRs to be validated by CI, the label "CI" should be added to the PR.

Requires the following secrets to be set:

Name                          Description
BOT_GITHUB_TOKEN              Token to launch other builds on GH
BOT_GITHUB_USERNAME           Username to issue propagation PRs
RANCHER2_URL                  Rancher URL for tests
RANCHER2_ACCESS_KEY           Rancher access key for tests
RANCHER2_SECRET_KEY           Rancher secret key for tests
SLACK_NOTIFICATION_BOT_TOKEN  Token to notify slack on failure

Formatting

The local .editorconfig file is leveraged for automated formatting.

See the pre-commit documentation.

To run all hooks locally:

pre-commit run -a
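
Optionally, to run the hooks automatically on every commit (standard pre-commit usage, not specific to this repository):

pre-commit install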


activiti-cloud-full-chart's Issues

Instructions fail on install/update step due to services "activiti-postgresql-headless" already exists error

The error:
Error: release activiti failed, and has been uninstalled due to atomic being set: services "activiti-postgresql-headless" already exists
helm.go:84: [debug] services "activiti-postgresql-headless" already exists
release activiti failed, and has been uninstalled due to atomic being set
In the debug I see:
client.go:477: [debug] Starting delete for "activiti-postgresql-headless" Service
client.go:481: [debug] Ignoring delete failure for "activiti-postgresql-headless" /v1, Kind=Service: services "activiti-postgresql-headless" not found
client.go:477: [debug] Starting delete for "activiti-postgresql-headless" StatefulSet
So, the postgresql-headless service both exists and doesn't exist. One of these is lying, and given the previous error I've reported, I'm guessing it's the error message, and the debug log is telling the truth.

FATAL: could not open relation mapping file "global/pg_filenode.map": No such file or directory

Hi,
When I try to install Activiti on Kubernetes according to the docs, Postgres does not work correctly. The logs are below:

WARN  ==> Data directory is set with a legacy value, adapting POSTGRESQL_DATA_DIR...
WARN  ==> POSTGRESQL_DATA_DIR set to "/bitnami/postgresql/data"!!

Welcome to the Bitnami postgresql container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
Send us your feedback at [email protected]

INFO  ==> ** Starting PostgreSQL setup **
INFO  ==> Validating settings in POSTGRESQL_* env vars..
INFO  ==> Initializing PostgreSQL database...
INFO  ==> postgresql.conf file not detected. Generating it...
INFO  ==> pg_hba.conf file not detected. Generating it...
INFO  ==> Deploying PostgreSQL with persisted data...
INFO  ==> Cleaning stale postmaster.pid file
INFO  ==> Configuring replication parameters
INFO  ==> Loading custom scripts...
INFO  ==> Enabling remote connections
INFO  ==> Stopping PostgreSQL...

INFO  ==> ** PostgreSQL setup finished! **
INFO  ==> ** Starting PostgreSQL **
2020-05-15 04:17:12.149 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2020-05-15 04:17:12.149 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2020-05-15 04:17:12.271 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-05-15 04:17:13.746 GMT [157] LOG:  database system was interrupted; last known up at 2020-05-15 04:14:32 GMT
2020-05-15 04:17:18.787 GMT [164] FATAL:  the database system is starting up
2020-05-15 04:17:28.800 GMT [171] FATAL:  the database system is starting up
2020-05-15 04:17:38.777 GMT [178] FATAL:  the database system is starting up
2020-05-15 04:17:43.682 GMT [184] FATAL:  the database system is starting up
2020-05-15 04:17:48.817 GMT [191] FATAL:  the database system is starting up
2020-05-15 04:17:53.703 GMT [197] FATAL:  the database system is starting up
2020-05-15 04:17:58.810 GMT [204] FATAL:  the database system is starting up
2020-05-15 04:18:03.704 GMT [216] FATAL:  the database system is starting up
2020-05-15 04:18:07.476 GMT [157] LOG:  database system was not properly shut down; automatic recovery in progress
2020-05-15 04:18:07.776 GMT [157] LOG:  invalid record length at 0/1000098: wanted 24, got 0
2020-05-15 04:18:07.776 GMT [157] LOG:  redo is not required
2020-05-15 04:18:08.784 GMT [222] FATAL:  the database system is starting up
2020-05-15 04:18:09.178 GMT [1] LOG:  database system is ready to accept connections
2020-05-15 04:18:09.378 GMT [226] FATAL:  could not open relation mapping file "global/pg_filenode.map": No such file or directory
2020-05-15 04:18:09.478 GMT [1] LOG:  autovacuum launcher process (PID 226) exited with exit code 1
2020-05-15 04:18:09.478 GMT [1] LOG:  terminating any other active server processes
2020-05-15 04:18:09.729 GMT [1] LOG:  all server processes terminated; reinitializing
2020-05-15 04:18:10.248 GMT [230] LOG:  database system was interrupted; last known up at 2020-05-15 04:18:09 GMT
2020-05-15 04:18:13.719 GMT [236] FATAL:  the database system is in recovery mode
2020-05-15 04:18:18.824 GMT [243] FATAL:  the database system is in recovery mode
2020-05-15 04:18:23.695 GMT [249] FATAL:  the database system is in recovery mode
2020-05-15 04:18:28.807 GMT [255] FATAL:  the database system is in recovery mode
2020-05-15 04:18:33.697 GMT [262] FATAL:  the database system is in recovery mode
2020-05-15 04:18:33.924 GMT [1] LOG:  received smart shutdown request
2020-05-15 04:18:38.770 GMT [269] FATAL:  the database system is shutting down
2020-05-15 04:18:48.795 GMT [275] FATAL:  the database system is shutting down
2020-05-15 04:18:56.046 GMT [230] LOG:  database system was not properly shut down; automatic recovery in progress
2020-05-15 04:18:56.346 GMT [230] LOG:  invalid record length at 0/1000108: wanted 24, got 0
2020-05-15 04:18:56.346 GMT [230] LOG:  redo is not required
2020-05-15 04:18:57.747 GMT [1] LOG:  abnormal database system shutdown
2020-05-15 04:18:57.779 GMT [1] LOG:  database system is shut down

I checked the data directory, and the global/pg_filenode.map file really is missing, so Activiti is not running. I changed the Postgres image and hit the same issue. If I copy the global/pg_filenode.map file from another Postgres instance, it runs, but then I meet another issue:

2020-05-15 09:02:33.186 GMT [1430] DETAIL:  Role "postgres" does not exist.
	Connection matched pg_hba.conf line 2: "host     all             all             ::1/128                 md5"
2020-05-15 09:02:33.341 GMT [1431] FATAL:  password authentication failed for user "postgres"
2020-05-15 09:02:33.341 GMT [1431] DETAIL:  Role "postgres" does not exist.

But the POSTGRES_PASSWORD and POSTGRES_USER environment variables are set:

POSTGRES_PASSWORD=password
POSTGRES_USER=postgres

When using that username and password to log in to Postgres, it fails:

$ psql -U postgres
Password for user postgres:
psql: FATAL:  password authentication failed for user "postgres"

My kubernetes version is v1.15.x.

Error: release activiti failed

Helm charts installation problem.

Helm 3
Docker 3
MacOS Mojave 14

Error: release activiti failed, and has been uninstalled due to atomic being set: timed out waiting for the condition

After retrying:
Error: query: failed to query with labels: rpc error: code = Unavailable desc = transport is closing

After retrying again:
helm UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress

which deployment is able to scale?

activiti-activiti-cloud-connector
activiti-activiti-cloud-modeling
activiti-activiti-cloud-query
activiti-activiti-modeling-app
activiti-runtime-bundle

which deployment is able to scale?

activiti-runtime-bundle Application run failed

The activiti-runtime-bundle application fails with "Application run failed". How can I fix it?

The logs are as follows:

2022-08-15 02:18:19.281  INFO [rb,,] 8 --- [           main] ConditionEvaluationReportLoggingListener : 

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-08-15 02:18:19.358 ERROR [rb,,] 8 --- [           main] o.s.boot.SpringApplication               : Application run failed

org.springframework.context.ApplicationContextException: Failed to start bean 'processDeployedEventProducer'; nested exception is org.activiti.engine.ActivitiIllegalArgumentException: process definition id is null
        at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181)
        at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54)
        at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356)
        at java.base/java.lang.Iterable.forEach(Iterable.java:75)
        at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155)
        at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123)
        at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586)
        at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:144)
        at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:782)
        at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:774)
        at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:439)
        at org.springframework.boot.SpringApplication.run(SpringApplication.java:339)
        at org.springframework.boot.SpringApplication.run(SpringApplication.java:1340)
        at org.springframework.boot.SpringApplication.run(SpringApplication.java:1329)
        at org.activiti.cloud.runtime.RuntimeBundleApplication.main(RuntimeBundleApplication.java:27)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
        at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
        at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
        at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
Caused by: org.activiti.engine.ActivitiIllegalArgumentException: process definition id is null
        at org.activiti.engine.impl.cmd.GetProcessDefinitionInfoCmd.execute(GetProcessDefinitionInfoCmd.java:46)
        at org.activiti.engine.impl.cmd.GetProcessDefinitionInfoCmd.execute(GetProcessDefinitionInfoCmd.java:34)
        at org.activiti.engine.impl.interceptor.CommandInvoker$1.run(CommandInvoker.java:41)
        at org.activiti.engine.impl.interceptor.CommandInvoker.executeOperation(CommandInvoker.java:82)
        at org.activiti.engine.impl.interceptor.CommandInvoker.executeOperations(CommandInvoker.java:61)
        at org.activiti.engine.impl.interceptor.CommandInvoker.execute(CommandInvoker.java:46)
        at org.activiti.engine.impl.interceptor.TransactionContextInterceptor.execute(TransactionContextInterceptor.java:52)
        at org.activiti.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:63)
        at org.activiti.spring.SpringTransactionInterceptor$1.doInTransaction(SpringTransactionInterceptor.java:51)
        at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
        at org.activiti.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:49)
        at org.activiti.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:33)
        at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:59)
        at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:54)
        at org.activiti.engine.impl.DynamicBpmnServiceImpl.getProcessDefinitionInfo(DynamicBpmnServiceImpl.java:42)
        at org.activiti.engine.impl.bpmn.deployer.BpmnDeployer.createLocalizationValues(BpmnDeployer.java:294)
        at org.activiti.engine.impl.bpmn.deployer.BpmnDeployer.deploy(BpmnDeployer.java:101)
        at org.activiti.engine.impl.persistence.deploy.DeploymentManager.deploy(DeploymentManager.java:62)
        at org.activiti.engine.impl.persistence.deploy.DeploymentManager.resolveProcessDefinition(DeploymentManager.java:127)
        at org.activiti.engine.impl.persistence.deploy.DeploymentManager.findDeployedProcessDefinitionById(DeploymentManager.java:80)
        at org.activiti.engine.impl.util.ProcessDefinitionUtil.getBpmnModel(ProcessDefinitionUtil.java:69)
        at org.activiti.engine.impl.cmd.GetBpmnModelCmd.execute(GetBpmnModelCmd.java:45)
        at org.activiti.engine.impl.cmd.GetBpmnModelCmd.execute(GetBpmnModelCmd.java:30)
        at org.activiti.engine.impl.interceptor.CommandInvoker$1.run(CommandInvoker.java:41)
        at org.activiti.engine.impl.interceptor.CommandInvoker.executeOperation(CommandInvoker.java:82)
        at org.activiti.engine.impl.interceptor.CommandInvoker.executeOperations(CommandInvoker.java:61)
        at org.activiti.engine.impl.interceptor.CommandInvoker.execute(CommandInvoker.java:46)
        at org.activiti.engine.impl.interceptor.TransactionContextInterceptor.execute(TransactionContextInterceptor.java:52)
        at org.activiti.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:63)
        at org.activiti.spring.SpringTransactionInterceptor$1.doInTransaction(SpringTransactionInterceptor.java:51)
        at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
        at org.activiti.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:49)
        at org.activiti.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:33)
        at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:59)
        at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:54)
        at org.activiti.engine.impl.RepositoryServiceImpl.getBpmnModel(RepositoryServiceImpl.java:146)
        at org.activiti.runtime.api.model.impl.APIProcessDefinitionConverter.from(APIProcessDefinitionConverter.java:42)
        at org.activiti.runtime.api.model.impl.APIProcessDefinitionConverter.from(APIProcessDefinitionConverter.java:25)
        at org.activiti.runtime.api.model.impl.ListConverter.from(ListConverter.java:27)
        at org.activiti.spring.ProcessDeployedEventProducer.doStart(ProcessDeployedEventProducer.java:54)
        at org.activiti.spring.AbstractActivitiSmartLifeCycle.start(AbstractActivitiSmartLifeCycle.java:83)
        at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178)
        ... 23 common frames omitted

Ingresses don't work out of the box for on-prem Kubernetes deployment.

I've been trying to deploy the Activiti full example to an on-premise Kubernetes cluster and have found that the Ingresses don't work as expected. I've only seen support for GKE and EKS in the docs, so I wonder if I can get help on this.

I followed instructions in the docs and used Helm with the custom gateway value. After installation I am not able to view any of the services at the URLs that the Helm installation outputs, such as http://gateway-activiti./modeling. All pods and services are up, and if I change the service to type NodePort it does work. From what I can tell so far, the Ingresses seem to be conflicting with each other, because if I modify the URL in the modeling Ingress to something different then it does seem to work.

Any help here is appreciated.

Instructions fail on the install/upgrade step due to missing pod.

Error is:
helm upgrade --install \
  --atomic --create-namespace --namespace activiti \
  -f values.yaml \
  activiti charts/activiti-cloud-full-example

Release "activiti" does not exist. Installing it now.
coalesce.go:220: warning: cannot overwrite table with non table for activiti-cloud-identity.postgresql.ldap.tls (map[])
Error: release activiti failed, and has been uninstalled due to atomic being set: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline

Adding debug gets me to this:
Deployment is not ready: activiti/activiti-activiti-cloud-connector. 0 out of 1 expected pods are ready
Clearly, there are too many 'activiti' prefixes there.
Removing the extra 'activiti' prefix in the requirements.yaml file fixes the problem.
Where there is alias: "activiti-..., just remove that "activiti-".

Errors when enabling HTTPS ingress

I configured values.yaml:

global:
  gateway:
    http: "false"

After logging in at https://gateway-activiti-sit.xxx.com/modeling/, the request to

https://gateway-activiti-sit.xxxx.com/modeling-service/v1/schemas/CONNECTOR

returns a 500 error.

kubectl logs activiti-activiti-cloud-modeling-bdf469bfc-pptkd -n activiti-sit

2022-08-17 02:43:45.109 WARN 8 --- [nio-8080-exec-9] o.keycloak.adapters.KeycloakDeployment : Failed to load URLs from https://identity-activiti-sit.xxx.com/auth/realms/activiti/.well-known/openid-configuration

javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:128)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:259)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1329)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1204)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1151)
        at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:421)
        at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:178)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:164)
        at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1152)
        at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1063)
        at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:402)
        at org.apache.http.conn.ssl.SSLSocketFactory.createLayeredSocket(SSLSocketFactory.java:570)
        at org.keycloak.adapters.SniSSLSocketFactory.createLayeredSocket(SniSSLSocketFactory.java:119)
        at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:554)
        at org.keycloak.adapters.SniSSLSocketFactory.connectSocket(SniSSLSocketFactory.java:114)
        at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:415)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
        at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:134)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:605)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:440)
        at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:835)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
        at org.keycloak.adapters.KeycloakDeployment.getOidcConfiguration(KeycloakDeployment.java:221)
        at org.keycloak.adapters.KeycloakDeployment.resolveUrls(KeycloakDeployment.java:179)
        at org.keycloak.adapters.KeycloakDeployment.getRealmInfoUrl(KeycloakDeployment.java:237)
        at org.keycloak.adapters.rotation.AdapterTokenVerifier.createVerifier(AdapterTokenVerifier.java:107)
        at org.keycloak.adapters.rotation.AdapterTokenVerifier.verifyToken(AdapterTokenVerifier.java:47)
        at org.keycloak.adapters.BearerTokenRequestAuthenticator.authenticateToken(BearerTokenRequestAuthenticator.java:103)
        at org.keycloak.adapters.BearerTokenRequestAuthenticator.authenticate(BearerTokenRequestAuthenticator.java:88)
        at org.keycloak.adapters.RequestAuthenticator.authenticate(RequestAuthenticator.java:67)
        at org.keycloak.adapters.tomcat.AbstractKeycloakAuthenticatorValve.authenticateInternal(AbstractKeycloakAuthenticatorValve.java:203)
        at org.keycloak.adapters.tomcat.KeycloakAuthenticatorValve.authenticate(KeycloakAuthenticatorValve.java:50)
        at org.keycloak.adapters.tomcat.KeycloakAuthenticatorValve.doAuthenticate(KeycloakAuthenticatorValve.java:57)
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:632)
        at org.keycloak.adapters.tomcat.AbstractKeycloakAuthenticatorValve.invoke(AbstractKeycloakAuthenticatorValve.java:181)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
        at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:764)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
        at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:374)
        at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
        at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1707)
        at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at java.base/sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:385)
        at java.base/sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:290)
        at java.base/sun.security.validator.Validator.validate(Validator.java:264)
        at java.base/sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:321)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:221)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:129)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1313)
        ... 51 common frames omitted
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
        at java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
        at java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297)
        at java.base/sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:380)
        ... 57 common frames omitted

How can I fix this or set the keys?

Cannot access any service in Helm deployment

Hello,

I deployed Activiti 8.1 via Helm.
Output is as follows:

-              _   _       _ _   _    _____ _                 _
     /\       | | (_)     (_) | (_)  / ____| |               | |
    /  \   ___| |_ ___   ___| |_ _  | |    | | ___  _   _  __| |
   / /\ \ / __| __| \ \ / / | __| | | |    | |/ _ \| | | |/ _` |
  / ____ \ (__| |_| |\ V /| | |_| | | |____| | (_) | |_| | (_| |
 /_/    \_\___|\__|_| \_/ |_|\__|_|  \_____|_|\___/ \__,_|\__,_|
 Version: 8.1.0

Thank you for installing activiti-cloud-full-example-8.1.0

Your release is named activiti.

To learn more about the release, try:

  $ helm status activiti
  $ helm get activiti

Get the application URLs:

Activiti Gateway         : https://activiti.k8s.example.com
Activiti Identity        : https://keycloak.k8s.example.com/auth

Activiti Runtime Bundle  : https://activiti.k8s.example.com/rb
Activiti Cloud Connector : https://activiti.k8s.example.com/example-cloud-connector
Activiti Query           : https://activiti.k8s.example.com/query
Activiti Audit           : https://activiti.k8s.example.com/audit
Notifications GraphiQL   : https://activiti.k8s.example.com/notifications/graphiql
Notifications WebSockets : https://activiti.k8s.example.com/notifications/ws/graphql
Notifications Graphql    : https://activiti.k8s.example.com/notifications/graphql

I'm using ingress-nginx/ingress-nginx.
Btw, Helm prints https:// but actually it is http://.

When calling http://activiti.k8s.example.com/ I get an HTTP 404.
When calling 'http://keycloak.k8s.example.com/auth' I get the Keycloak interface. -> OK
When calling 'http://keycloak.k8s.example.com/modeling-service' (or any other service) I get the following output in the browser:

{
  "links" : [ {
    "rel" : "profile",
    "href" : "http://activiti.k8s.example.com/rb/profile"
  } ]
}

Any ideas or clues?

Thank you in advance!
