lensesio / secret-provider
Open Source Secret Provider plugin for the Kafka Connect framework
Home Page: https://lenses.io
License: Apache License 2.0
We are getting the error below for connectors using lenses.io after upgrading the platform to 7.5.3.
Earlier it worked fine with version 5.5.2. I want to know whether 0.0.2 is compatible with Confluent Platform 7.5.3 and Java 8.
When we try to add the lenses jar to the CLASSPATH of the Connect worker node as below, all the connectors go into a FAILED state:
CLASSPATH="/usr/share/java/kafka-connect-replicator/replicator-rest-extension-7.5.3.jar:/usr/share/java/secret-provider/secret-provider-0.0.2-all.jar:/usr/share/java/confluent-security/connect/kafka-client-plugins-7.5.3-ce.jar"
The current CLASSPATH we have is:
CLASSPATH="/usr/share/java/kafka-connect-replicator/replicator-rest-extension-7.5.3.jar:/usr/share/java/confluent-security/connect/kafka-client-plugins-7.5.3-ce.jar"
Below is the status of one of the failed connectors that uses lenses.io:
{"name":"EMPSTG.stg.transportation.train-reservation.raw.v1.Replicator","connector":{"state":"FAILED","worker_id":"zcc1kfconvml01d.cn.ca:8083","trace":"java.lang.NoClassDefFoundError: Could not initialize class com.azure.core.implementation.http.HttpClientProviders\n\tat com.azure.core.http.HttpClient.createDefault(HttpClient.java:27)\n\tat com.azure.core.http.HttpPipelineBuilder.build(HttpPipelineBuilder.java:60)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildAsyncClient(SecretClientBuilder.java:162)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildClient(SecretClientBuilder.java:104)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.$anonfun$get$1(AzureSecretProvider.scala:61)\n\tat scala.collection.MapLike.getOrElse(MapLike.scala:131)\n\tat scala.collection.MapLike.getOrElse$(MapLike.scala:129)\n\tat scala.collection.AbstractMap.getOrElse(Map.scala:63)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.get(AzureSecretProvider.scala:62)\n\tat org.apache.kafka.common.config.ConfigTransformer.transform(ConfigTransformer.java:103)\n\tat org.apache.kafka.connect.runtime.WorkerConfigTransformer.transform(WorkerConfigTransformer.java:58)\n\tat org.apache.kafka.connect.storage.ClusterConfigState.connectorConfig(ClusterConfigState.java:152)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1894)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$getConnectorStartingCallable$38(DistributedHerder.java:1926)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:750)\n"},"tasks":[{"id":0,"state":"FAILED","worker_id":"zcc1kfconvml01d.cn.ca:8083","trace":"java.lang.NoClassDefFoundError: Could not initialize class 
com.azure.core.implementation.http.HttpClientProviders\n\tat com.azure.core.http.HttpClient.createDefault(HttpClient.java:27)\n\tat com.azure.core.http.HttpPipelineBuilder.build(HttpPipelineBuilder.java:60)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildAsyncClient(SecretClientBuilder.java:162)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildClient(SecretClientBuilder.java:104)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.$anonfun$get$1(AzureSecretProvider.scala:61)\n\tat scala.collection.MapLike.getOrElse(MapLike.scala:131)\n\tat scala.collection.MapLike.getOrElse$(MapLike.scala:129)\n\tat scala.collection.AbstractMap.getOrElse(Map.scala:63)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.get(AzureSecretProvider.scala:62)\n\tat org.apache.kafka.common.config.ConfigTransformer.transform(ConfigTransformer.java:103)\n\tat org.apache.kafka.connect.runtime.WorkerConfigTransformer.transform(WorkerConfigTransformer.java:58)\n\tat org.apache.kafka.connect.storage.ClusterConfigState.connectorConfig(ClusterConfigState.java:152)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:1819)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$getTaskStartingCallable$33(DistributedHerder.java:1872)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:750)\n"},{"id":1,"state":"FAILED","worker_id":"zcc1kfconvml01d.cn.ca:8083","trace":"java.lang.NoClassDefFoundError: Could not initialize class com.azure.core.implementation.http.HttpClientProviders\n\tat com.azure.core.http.HttpClient.createDefault(HttpClient.java:27)\n\tat com.azure.core.http.HttpPipelineBuilder.build(HttpPipelineBuilder.java:60)\n\tat 
com.azure.security.keyvault.secrets.SecretClientBuilder.buildAsyncClient(SecretClientBuilder.java:162)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildClient(SecretClientBuilder.java:104)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.$anonfun$get$1(AzureSecretProvider.scala:61)\n\tat scala.collection.MapLike.getOrElse(MapLike.scala:131)\n\tat scala.collection.MapLike.getOrElse$(MapLike.scala:129)\n\tat scala.collection.AbstractMap.getOrElse(Map.scala:63)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.get(AzureSecretProvider.scala:62)\n\tat org.apache.kafka.common.config.ConfigTransformer.transform(ConfigTransformer.java:103)\n\tat org.apache.kafka.connect.runtime.WorkerConfigTransformer.transform(WorkerConfigTransformer.java:58)\n\tat org.apache.kafka.connect.storage.ClusterConfigState.connectorConfig(ClusterConfigState.java:152)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:1819)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$getTaskStartingCallable$33(DistributedHerder.java:1872)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:750)\n"}],"type":"source"}
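A possible direction (an assumption based on the stack trace, not a confirmed fix): `NoClassDefFoundError: Could not initialize class com.azure.core.implementation.http.HttpClientProviders` often means the Azure SDK's service-loader resources were not visible to the classloader that first touched the class, which can happen when the fat jar is mixed into the global CLASSPATH alongside other Confluent jars. CP 7.5.x workers can load config providers from plugin.path, so giving the jar its own plugin directory may avoid the clash; a sketch with illustrative paths:

```properties
# worker.properties (paths illustrative)
plugin.path=/usr/share/java,/usr/share/confluent-hub-components,/opt/secret-providers
config.providers=azure
config.providers.azure.class=io.lenses.connect.secrets.providers.AzureSecretProvider
```

with the jar placed under /opt/secret-providers rather than on the CLASSPATH. It would also be worth trying a current secret-provider release, as 0.0.2 predates CP 7.x by several years.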
Hi, I was updating our packages for security vulnerabilities and updated secret-provider to version 2.3.0, but our image scanner still reported these issues:
CVE-2023-44487 - io.netty:netty-codec-http2 - GHSA-xpw8-rcwv-8f8p
Severity: High
Type: Package Vulnerability
Name: io.netty:netty-codec-http2
Installed version / Fixed version : 0:4.1.89.Final / 4.1.100.Final
Can you please update the package in this repo or let us know if there is a version that has this fixed. We can use that instead.
Thanks.
Hi, any tips on how to use the AWS IAM authentication type with a dynamically generated aws.request.headers?
I am able to use it, but the STS session lasts only 15 minutes. The aws.request.headers parameter has to be signed, and I am only able to generate it once at Kafka Connect's bootstrap. After 10 minutes (token.renewal.ms), because my aws.request.headers value is static, it starts to fail since the session has expired.
Thank you all in advance.
Currently it does not seem to be possible to retrieve secrets from a Vault Secrets Engine that has a path with more than one segment, for example secrets/dev. For this to work, one would have to expose the prefixPathDepth config from the Vault client.
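To illustrate the request, a worker config with such an option exposed might look like this (the `vault.prefix.path.depth` parameter name is hypothetical; it does not exist in the provider today and simply mirrors the Vault client's prefixPathDepth setting):

```properties
config.providers=vault
config.providers.vault.class=io.lenses.connect.secrets.providers.VaultSecretProvider
# hypothetical parameter: treat the first two segments ("secrets/dev") as the engine mount
config.providers.vault.param.vault.prefix.path.depth=2
```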
Hi,
I have imported my .pfx file into Azure Key Vault (via the add certificate option, not add secret).
Will the code be able to read the .pfx file and write it to the host filesystem in the same format?
(The same question applies to PEM.)
Thanks
Devaraj
Hello,
I'm using the secret provider with AWS Secrets Manager. Sometimes some workers of our Kafka Connect cluster get stuck and freeze completely while looking up secret values. A worker only becomes healthy again when we manually restart the pod. We are using secret-provider-2.1.6-all.jar with Confluent Kafka Connect version 5.5.0. Has anyone seen this before?
Below are the logs of a worker at the moment it is stuck; because the poll takes so long, the worker leaves the Kafka consumer group after max.poll.interval.ms is exceeded. After restarting the pod, everything works fine.
[2021-08-05 09:21:30,049] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:22:27,016] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:22:28,303] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:22:28,817] INFO [Worker clientId=connect-1, groupId=kafka-connect-bkofin-group] Member connect-1-c1a82d66-3fc1-47fd-a279-fd434766aaeb sending LeaveGroup request to coordinator broker5.pagseguro.intranet:9092 (id: 2147483643 rack: null) due to consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-08-05 09:22:28,887] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:22:29,218] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:31:06,150] INFO Looking up value at [bd-court-order-outbox-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:31:07,242] INFO Looking up value at [bd-court-order-outbox-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:38:22,219] INFO Looking up value at [bd-ccs-bdv-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:38:23,477] INFO Looking up value at [bd-ccs-bdv-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:38:27,818] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:38:28,095] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:42:18,414] INFO Looking up value at [bd-ccs-bdv-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:42:19,355] INFO Looking up value at [bd-ccs-bdv-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
I'm using this provider and I notice that when we boot up the Kafka Connect service it works fine, but if we refresh a connector some time later and need to retrieve a secret, we get an ExpiredTokenException.
We're using instance profiles to provide credentials to the box, as this is considered best practice rather than using static credentials. Instance profiles provide temporary credentials that are meant to be refreshed.
Caused by: com.amazonaws.services.secretsmanager.model.AWSSecretsManagerException: The security token included in the request is expired (Service: AWSSecretsManager; Status Code: 400; Error Code: ExpiredTokenException;
Looking at the code:

val credentialsProvider = settings.authMode match {
  case AuthMode.CREDENTIALS =>
    new AWSStaticCredentialsProvider(new BasicAWSCredentials(settings.accessKey, settings.secretKey.value()))
  case _ =>
    new DefaultAWSCredentialsProviderChain()
}

AWSSecretsManagerClientBuilder
  .standard()
  .withCredentials(credentialsProvider)
  .withRegion(settings.region)
  .build()
Hi,
I'm using version 2.1.6 of the secret provider plugin to access Azure keyvault secrets from a Kafka connector.
When updating the configuration of the connector, I receive an error ("No value found for 'UTF-8'") when retrieving the secret if my Azure Key Vault secret has the default "file-encoding" tag with the value "utf-8".
The issue does not appear if the tag is absent or if I change the value of the tag to "UTF8".
The "file-encoding" tag with the value "utf-8" is added automatically if the secret is created with the az CLI and cannot be removed, as described in this issue.
Ideally the secret provider should be able to use the default tag value created by the az CLI.
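For what it's worth, the JVM itself already treats "utf-8" and "UTF8" as aliases of the same charset, so a tolerant fix could resolve the tag value via java.nio.charset.Charset instead of comparing raw strings. A minimal sketch (plain JDK, not the provider's actual code; the UTF-8 fallback is my assumption):

```java
import java.nio.charset.Charset;

public class CharsetTagDemo {
    // Resolve a file-encoding tag value leniently, falling back to UTF-8
    // when the value is not a known charset name or alias.
    static Charset resolve(String tagValue) {
        try {
            return Charset.forName(tagValue);
        } catch (Exception e) {
            return Charset.forName("UTF-8");
        }
    }

    public static void main(String[] args) {
        // "utf-8" and "UTF8" both resolve to the same canonical charset
        System.out.println(resolve("utf-8").name());
        System.out.println(resolve("UTF8").name());
        System.out.println(resolve("bogus").name());
    }
}
```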
Thanks
Dear Maintainers,
I'm currently running into an issue with the VaultSecretProvider that, from my current understanding, is caused by a design flaw.
We are using the AppRole authentication method, which allows the Vault client to generate a token with a max TTL of 60m. After Kafka Connect starts up, the VaultSecretProvider works fine for 60 minutes and afterwards starts to throw the following error:
WARN Failed to renew the Kerberos ticket (io.lenses.connect.secrets.async.AsyncFunctionLoop:43)
com.bettercloud.vault.VaultException: Vault responded with HTTP status code: 403
Response body: {"errors":["permission denied"]}
From my understanding this is caused by the AsyncFunctionLoop that tries to renew the Vault token at a fixed period (default 60000ms). Because our current AppRole is set up to generate non-periodic tokens (the default), these tokens are not renewable indefinitely.
HashiCorp docs reference: https://www.vaultproject.io/docs/concepts/tokens#periodic-tokens
To me, the current VaultSecretProvider looks like it only supports periodic tokens (which are somewhat special in the Vault universe). Standard service tokens that are subject to a max TTL should all run into the same problem sooner or later.
We have several Vault Agents running in our infrastructure, which I would consider something like a reference implementation for automatic daemon-like token renewal. These agents renew service tokens until their max TTL is reached and then re-authenticate to acquire a fresh token. This doesn't seem to be possible with the VaultSecretProvider, as it only authenticates once in the provider's configure() call.
I would like to get some insight from other users and/or the maintainers on this problem and how they handle it. I want to make sure I'm not missing something here.
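To illustrate the Vault Agent behaviour described above, here is a hedged sketch of a renew-until-denied-then-reauthenticate loop (the Client interface and Supplier-based login are hypothetical stand-ins, not the provider's API):

```java
import java.util.function.Supplier;

public class TokenKeeper {
    interface Client { boolean renew(); }  // hypothetical: renew current token, false when denied
    private final Supplier<Client> login;  // hypothetical: full re-authentication (e.g. AppRole login)
    private Client client;

    TokenKeeper(Supplier<Client> login) {
        this.login = login;
        this.client = login.get();
    }

    // Called on a fixed period, like AsyncFunctionLoop does today.
    // Instead of failing permanently when renewal is denied (max TTL reached
    // on a non-periodic token), fall back to a fresh login.
    void tick() {
        if (!client.renew()) {
            client = login.get();  // re-authenticate to acquire a fresh token
        }
    }

    public static void main(String[] args) {
        int[] logins = {0};
        TokenKeeper keeper = new TokenKeeper(() -> {
            logins[0]++;
            int[] renewals = {0};
            return () -> ++renewals[0] <= 3;  // each token is renewable 3 times, then denied
        });
        for (int i = 0; i < 8; i++) keeper.tick();
        // initial login + re-login on ticks 4 and 8
        System.out.println("logins=" + logins[0]);
    }
}
```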
Hey guys, I'm trying to use the VaultSecretProvider within my Connect cluster (distributed).
Here is how I add it to my Docker config:
image: confluentinc/cp-kafka-connect-base:5.5.2
I've placed the secret provider jar secret-provider-2.1.5-all.jar in /local/secret-providers.
CONNECT_PLUGIN_PATH: "/usr/share/java,/connect-plugins,/usr/share/confluent-hub-components,/local/connector-plugins,/local/secret-providers"
CONNECT_CONFIG_PROVIDERS: "vault"
CONNECT_CONFIG_PROVIDERS_VAULT_CLASS: "io.lenses.connect.secrets.providers.VaultSecretProvider"
CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_ADDR: "url"
CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_AUTH_METHOD: "token"
CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_TOKEN: "${VAULT_TOKEN}"
CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_FILE_DIR: "local/connector-files/vault"
the resulting properties file becomes:
plugin.path=/usr/share/java,/connect-plugins,/usr/share/confluent-hub-components,/local/connector-plugins,/local/secret-providers
config.providers.vault.param.token=<token>
config.providers.vault.param.auth.method=token
config.providers.vault.param.file.dir=local/connector-files/vault
config.providers.vault.param.addr=url
config.providers.vault.class=io.lenses.connect.secrets.providers.VaultSecretProvider
config.providers=vault
However, on startup I end up with the following stack trace:
[main] ERROR io.confluent.admin.utils.cli.KafkaReadyCommand - Error while running kafka-ready.
org.apache.kafka.common.config.ConfigException: Invalid value java.lang.ClassNotFoundException: io.lenses.connect.secrets.providers.VaultSecretProvider for configuration Invalid config:io.lenses.connect.secrets.providers.VaultSecretProvider ClassNotFoundException exception occurred
at org.apache.kafka.common.config.AbstractConfig.instantiateConfigProviders(AbstractConfig.java:538)
at org.apache.kafka.common.config.AbstractConfig.resolveConfigVariables(AbstractConfig.java:477)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:107)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
at org.apache.kafka.clients.admin.AdminClientConfig.<init>(AdminClientConfig.java:216)
at org.apache.kafka.clients.admin.Admin.create(Admin.java:71)
at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:49)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:138)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
If I remove config.providers=vault, I instead see this line in the Connect logs:
[2021-01-28 20:16:59,022] INFO Added plugin 'io.lenses.connect.secrets.providers.VaultSecretProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
Any suggestions? Thanks!
Good day,
I was very glad to see the solution you are creating, but unfortunately it doesn't work entirely correctly.
First up, I was struggling with the auth keys. I couldn't get the environment variables working, and for the credentials I had to change "config.providers.aws.param.aws.client.key=your-client-key" into "config.providers.aws.param.aws.access.key" to get the connection to AWS working.
Then, when I changed connection.password to "${aws:ems-dev-eventmanagement-db-secret:password}", it unfortunately failed.
In the trace logging I found this part, which I think might be the problem, but I'm not sure:
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << "{"ARN":"arn:aws:secretsmanager:eu-west-1:577868951105:secret:ems-dev-eventmanagement-db-secret-B0TWsl","CreatedDate":1.591854819443E9,"Name":"ems-dev-eventmanagement-db-secret","SecretString":"{"username":"eventmanagement_user","password":"xxxxxx","engine":"mysql","host":"dev-ems.coollnhgrzes.eu-west-1.rds.amazonaws.com","port":"3306","dbname":"eventmanagement"}","VersionId":"2f3cc10f-b194-4a1f-bef5-523a03f68a70","VersionStages":["AWSCURRENT"]}" (org.apache.http.wire)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << HTTP/1.1 200 OK (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Date: Mon, 15 Jun 2020 05:47:08 GMT (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Content-Type: application/x-amz-json-1.1 (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Content-Length: 491 (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Connection: keep-alive (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << x-amzn-RequestId: fef90d70-733b-470f-8987-17ad2c6f38a3 (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG Connection can be kept alive for 60000 MILLISECONDS (org.apache.http.impl.execchain.MainClientExec)
[2020-06-15 05:47:08,261] TRACE Parsing service response JSON (com.amazonaws.request)
[2020-06-15 05:47:08,261] DEBUG Connection [id: 7][route: {s}->https://secretsmanager.eu-west-1.amazonaws.com:443] can be kept alive for 60.0 seconds (org.apache.http.impl.conn.PoolingHttpClientConnectionManager)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7: set socket timeout to 0 (org.apache.http.impl.conn.DefaultManagedHttpClientConnection)
[2020-06-15 05:47:08,261] DEBUG Connection released: [id: 7][route: {s}->https://secretsmanager.eu-west-1.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50] (org.apache.http.impl.conn.PoolingHttpClientConnectionManager)
[2020-06-15 05:47:08,262] TRACE Done parsing service response (com.amazonaws.request)
[2020-06-15 05:47:08,262] DEBUG Received successful response: 200, AWS Request ID: fef90d70-733b-470f-8987-17ad2c6f38a3 (com.amazonaws.request)
[2020-06-15 05:47:08,262] DEBUG x-amzn-RequestId: fef90d70-733b-470f-8987-17ad2c6f38a3 (com.amazonaws.requestId)
I run Kafka Connect in Kubernetes with the following plugins and versions:
FROM confluentinc/cp-kafka-connect:5.5.0
COPY extras/mariadb-java-client-2.1.0.jar /usr/share/java/kafka/
COPY extras/mysql-connector-java-8.0.15.jar /usr/share/java/kafka-connect-jdbc
COPY extras/ojdbc8.jar /usr/share/java/kafka-connect-jdbc
COPY extras/secret-provider-0.0.2-all.jar /usr/share/java/
And the connector I'm trying to install:
curl -X PUT \
  http://ems-kafka-dev-cp-kafka-connect.ems-dev:8083/connectors/JdbcSourceConnectorEventManagement/config \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d '{
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"timestamp.column.name": "modified_at",
"config.providers.aws.param.aws.access.key": "xxxxxxxxxx",
"config.providers.aws.param.aws.secret.key": "xxxxxxxxx",
"connection.password": "${aws:ems-dev-eventmanagement-db-secret:password}",
"tasks.max": "1",
"config.providers": "aws",
"table.types": "VIEW",
"table.whitelist": "trigger_recipe_event_view,recipe_business_rules_with_parameters_view,event_linked_active_recipes_view",
"mode": "timestamp",
"topic.prefix": "connected-keyless-",
"poll.interval.ms": "60000",
"config.providers.aws.param.aws.auth.method": "default",
"config.providers.aws.param.aws.region": "eu-west-1",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"config.providers.aws.class": "io.lenses.connect.secrets.providers.AWSSecretProvider",
"validate.non.null": "false",
"connection.attempts": "3",
"batch.max.rows": "100",
"connection.backoff.ms": "10000",
"timestamp.delay.interval.ms": "0",
"table.poll.interval.ms": "60000",
"connection.user": "${aws:ems-dev-eventmanagement-db-secret:username}",
"config.providers.aws.param.file.dir": "/connector-files/aws",
"connection.url": "jdbc:mysql://dev-ems.coollnhgrzes.eu-west-1.rds.amazonaws.com:3306/eventmanagement",
"numeric.precision.mapping": "false"
}'
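One observation (not a confirmed diagnosis): the config.providers.* entries above are submitted inside the connector config, but they are worker-level settings and Connect only honours them in the worker properties. The worker side would look something like this (values copied from the connector config above), with the connector keeping only the ${aws:...} placeholders:

```properties
# worker properties
config.providers=aws
config.providers.aws.class=io.lenses.connect.secrets.providers.AWSSecretProvider
config.providers.aws.param.aws.auth.method=default
config.providers.aws.param.aws.region=eu-west-1
config.providers.aws.param.file.dir=/connector-files/aws
```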
The cert login method does not work at all, as you are not passing the trust store or the PEM cert and key when logging in. As a result, the Vault server responds with:
[2021-02-25 06:47:50,043] INFO Initializing client with mode [CERT] (io.lenses.connect.secrets.providers.VaultSecretProvider)
[2021-02-25 06:47:50,452] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed)
com.bettercloud.vault.VaultException: Vault responded with HTTP status code: 400
Response body: {"errors":["missing client token"]}
See:
https://github.com/BetterCloud/vault-java-driver#java-keystore-jks-based-config
There is a security concern with passing these two parameters in the worker config:
config.providers.aws.param.aws.access.key=your-client-key
config.providers.aws.param.aws.secret.key=your-secret-key
I'm going to deploy this plugin on an EC2 instance that already has an IAM role attached with Secrets Manager permissions. Can I leverage that instead of passing the keys in the worker properties?
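Assuming the provider behaves as the default auth mode suggests, setting aws.auth.method to default makes it fall back to the DefaultAWSCredentialsProviderChain, which picks up an EC2 instance profile, so the static keys could be omitted. A sketch, not verified on every version:

```properties
config.providers=aws
config.providers.aws.class=io.lenses.connect.secrets.providers.AWSSecretProvider
config.providers.aws.param.aws.auth.method=default
config.providers.aws.param.aws.region=eu-west-1
```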
Installed resource: json-smart 2.4.8
Fixed version: 2.4.9
I have a secret in AWS Secrets Manager with the name dev/debezium/mysql/testservice/password.
I tried to use this name in my source connector:
"database.password": "${aws:dev/debezium/mysql/testservice/password:mysql_pass}",
In my worker properties, the file.dir=/etc/kafka
And the error I got is:
java.nio.file.AccessDeniedException: /etc/kafka/dev
But when I created empty folders matching the secret name, the error was resolved:
mkdir -p /etc/kafka/dev/debezium/mysql/testservice/password
I tried to set it up using default authentication, because both Connect and AWS Secrets Manager are in the same AWS account.
config was as follows:
aws.access.key = null
aws.auth.method = default
aws.region = eu-central-1
aws.secret.key = null
But I would get the following exception:
ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:86)
org.apache.kafka.connect.errors.ConnectException: aws.access.key not set
at io.lenses.connect.secrets.config.AbstractConfigExtensions$AbstractConfigExtension$.raiseException$extension(AbstractConfigProvider.scala:26)
at io.lenses.connect.secrets.config.AbstractConfigExtensions$AbstractConfigExtension$.$anonfun$getStringOrThrowOnNull$extension$1(AbstractConfigProvider.scala:17)
at scala.Option.getOrElse(Option.scala:189)
The call to getStringOrThrowOnNull in https://github.com/lensesio/secret-provider/blob/master/src/main/scala/io/lenses/connect/secrets/config/AWSProviderSettings.scala#L26-L29 is too strict, as the values are checked for being empty afterwards anyway.
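A sketch of the more lenient check being suggested (plain Java for illustration rather than the project's Scala; the validate method is hypothetical): only insist on the key pair when the auth method is actually credentials-based.

```java
import java.util.HashMap;
import java.util.Map;

public class AwsSettingsCheck {
    // Illustrative re-implementation of the validation: the key/secret are
    // only required when auth method is "credentials"; for "default" the
    // DefaultAWSCredentialsProviderChain supplies them, so null is fine.
    static void validate(Map<String, String> props) {
        if ("credentials".equalsIgnoreCase(props.get("aws.auth.method"))) {
            for (String k : new String[]{"aws.access.key", "aws.secret.key"}) {
                String v = props.get(k);
                if (v == null || v.isEmpty()) {
                    throw new IllegalArgumentException(k + " not set");
                }
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("aws.auth.method", "default");
        props.put("aws.region", "eu-central-1");
        validate(props);  // no exception: keys are not required for default auth
        System.out.println("default auth accepted");
    }
}
```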
If the VaultProvider is used to fetch secrets from a KV v2 secrets engine in HashiCorp Vault, the lease_duration is often 0, since it doesn't seem possible to set lease durations on KV v2 secrets, nor do they respect Vault's default system TTL. This means the expiry time is always the current clock time and the cache is effectively useless.
To overcome this, secret-provider could add a configurable Long, overrideLeaseDuration, to the VaultSettings, which the VaultProvider would use in place of getLeaseDuration from Vault's LogicalResponse.
If there is interest I can submit a PR.
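The proposed fallback is simple enough to sketch (Java here for illustration; the real change would live in the Scala VaultSettings, and overrideLeaseDuration is the proposed, not yet existing, setting):

```java
public class LeaseTtl {
    // leaseDuration comes from Vault's LogicalResponse; KV v2 commonly reports 0.
    static long effectiveTtlSeconds(long leaseDuration, long overrideLeaseDuration) {
        return leaseDuration > 0 ? leaseDuration : overrideLeaseDuration;
    }

    public static void main(String[] args) {
        System.out.println(effectiveTtlSeconds(300, 600)); // engine-provided TTL wins
        System.out.println(effectiveTtlSeconds(0, 600));   // KV v2: fall back to the override
    }
}
```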
Is there any chance you could publish 2.1.6 and 2.1.5 to Maven Central? They appear to be missing and the latest it seems to have is 2.1.4.
Link: https://search.maven.org/artifact/io.lenses/secret-provider
Hi, I'm trying to use your Vault integration with Kubernetes auth, but it doesn't work.
I get the following error:
2022-02-15 18:05:38,790 INFO Initializing client with mode [KUBERNETES] (io.lenses.connect.secrets.providers.VaultSecretProvider) [main]
2022-02-15 18:05:39,312 ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed) [main]
com.bettercloud.vault.VaultException: Vault responded with HTTP status code: 400
Response body: {"errors":["missing client token"]}
at com.bettercloud.vault.api.Auth.loginByJwt(Auth.java:1045)
at com.bettercloud.vault.api.Auth.loginByKubernetes(Auth.java:1113)
at io.lenses.connect.secrets.providers.VaultHelper.$anonfun$createClient$5(VaultHelper.scala:84)
at scala.Option.map(Option.scala:230)
at io.lenses.connect.secrets.providers.VaultHelper.createClient(VaultHelper.scala:81)
at io.lenses.connect.secrets.providers.VaultHelper.createClient$(VaultHelper.scala:19)
at io.lenses.connect.secrets.providers.VaultSecretProvider.createClient(VaultSecretProvider.scala:31)
at io.lenses.connect.secrets.providers.VaultSecretProvider.configure(VaultSecretProvider.scala:43)
at org.apache.kafka.common.config.AbstractConfig.instantiateConfigProviders(AbstractConfig.java:572)
at org.apache.kafka.common.config.AbstractConfig.resolveConfigVariables(AbstractConfig.java:515)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:107)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:452)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:405)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:95)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:80)
I've created a service account and role binding, and I've created the Kubernetes auth method in Vault.
I've also tested that auth works inside the container:
curl -k --request POST --data '{"jwt": "jwt-from-service-account", "role": "kv-k8s-test-kafka-connect-cluster-role"}' https://vault_addr:8200/v1/auth/k8s-cluster/login
{
"request_id": "5433b4ac-ab83-8c84-2f52-e95cff9af9eb",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": null,
"wrap_info": null,
"warnings": null,
"auth": {
"client_token": "s.token",
"accessor": "eKnQVOKH7H",
"policies": [
"default",
"policy-kv-k8s-kafka-connect-cluster-read"
],
I don't understand why the plugin can't get a secret and fails with the "missing client token" error.
Hello!
Here is hopefully a minor one..
I see the docs (for example: https://docs.lenses.io/5.2/connectors/connect-secrets/env/) have always seemed to refer to config.providers.env.file.dir for the file.dir property of the Environment config provider.
If I configure my worker this way, then when I look in the Connect server log after the Environment config provider has been loaded, the file.dir property has no value (it is blank).
If I instead configure my worker with config.providers.env.param.file.dir (note the .param. after the provider name but before the parameter name), like you can see on other providers such as https://docs.lenses.io/5.2/connectors/connect-secrets/vault/, then the file.dir property value shows up in my Connect logs and is no longer blank.
So either there is a mismatch in the docs, or a mismatch in how Connect picks this up?
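For reference, a worker snippet in the .param. form that shows the value in the logs (the ENVSecretProvider class name follows the lenses docs naming; the path is illustrative):

```properties
config.providers=env
config.providers.env.class=io.lenses.connect.secrets.providers.ENVSecretProvider
config.providers.env.param.file.dir=/connector-files/env
```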
Hi team,
It's really really fantastic to see this project being active again.
We would like to support a custom authentication path for Vault Kubernetes authentication, rather than the default v1/auth/kubernetes/login as specified in the official doc.
The current implementation relies on the default value when calling the loginByKubernetes() method, where the path is hardcoded to (source):
public AuthResponse loginByJwt(final String provider, final String role, final String jwt)
        throws VaultException {
    return retry(attempt -> {
        // HTTP request to Vault
        final String requestJson = Json.object().add("role", role).add("jwt", jwt)
                .toString();
        final RestResponse restResponse = new Rest()
                .url(config.getAddress() + "/v1/auth/" + provider + "/login")
vault-java-driver has not been maintained in a few years, per the reply from the original author (link). The author mentions a few possible approaches in his post; I think the least disruptive would be switching to the community fork https://github.com/jopenlibs/vault-java-driver, which is actively developed. I could submit another PR to add that feature on their side as well. Please let me know what you think @davidsloan. Sorry for directly tagging you, but your recent activity in this repo is the only reason that motivated me to write this lengthy issue.
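The change itself is small; a sketch of building the login URL from a configurable auth mount instead of a hardcoded "kubernetes" segment (the authMount parameter and its plumbing are hypothetical):

```java
public class VaultLoginUrl {
    // Build the Kubernetes login URL from a configurable auth mount,
    // mirroring how loginByJwt composes "/v1/auth/<provider>/login".
    static String loginUrl(String vaultAddr, String authMount) {
        return vaultAddr + "/v1/auth/" + authMount + "/login";
    }

    public static void main(String[] args) {
        System.out.println(loginUrl("https://vault:8200", "kubernetes"));  // current default
        System.out.println(loginUrl("https://vault:8200", "k8s-cluster")); // custom mount
    }
}
```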