
secret-provider's Issues

Compatibility check: Secret Provider 0.0.2 with Confluent Platform 7.5.3 and Java 8

We are getting the error below for the connectors that use the lenses.io secret provider, after upgrading the platform to 7.5.3. Everything was fine on version 5.5.2. I want to know whether Secret Provider 0.0.2 is compatible with Confluent Platform 7.5.3 and Java 8.

When we add the Lenses jar to the CLASSPATH of the Connect worker node as below, all the connectors end up in a FAILED state:

CLASSPATH="/usr/share/java/kafka-connect-replicator/replicator-rest-extension-7.5.3.jar:/usr/share/java/secret-provider/secret-provider-0.0.2-all.jar:/usr/share/java/confluent-security/connect/kafka-client-plugins-7.5.3-ce.jar"

The current CLASSPATH we have is:
CLASSPATH="/usr/share/java/kafka-connect-replicator/replicator-rest-extension-7.5.3.jar:/usr/share/java/confluent-security/connect/kafka-client-plugins-7.5.3-ce.jar"

Below is the status of one of the failed connectors that uses the lenses.io provider:

{"name":"EMPSTG.stg.transportation.train-reservation.raw.v1.Replicator","connector":{"state":"FAILED","worker_id":"zcc1kfconvml01d.cn.ca:8083","trace":"java.lang.NoClassDefFoundError: Could not initialize class com.azure.core.implementation.http.HttpClientProviders\n\tat com.azure.core.http.HttpClient.createDefault(HttpClient.java:27)\n\tat com.azure.core.http.HttpPipelineBuilder.build(HttpPipelineBuilder.java:60)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildAsyncClient(SecretClientBuilder.java:162)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildClient(SecretClientBuilder.java:104)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.$anonfun$get$1(AzureSecretProvider.scala:61)\n\tat scala.collection.MapLike.getOrElse(MapLike.scala:131)\n\tat scala.collection.MapLike.getOrElse$(MapLike.scala:129)\n\tat scala.collection.AbstractMap.getOrElse(Map.scala:63)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.get(AzureSecretProvider.scala:62)\n\tat org.apache.kafka.common.config.ConfigTransformer.transform(ConfigTransformer.java:103)\n\tat org.apache.kafka.connect.runtime.WorkerConfigTransformer.transform(WorkerConfigTransformer.java:58)\n\tat org.apache.kafka.connect.storage.ClusterConfigState.connectorConfig(ClusterConfigState.java:152)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1894)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$getConnectorStartingCallable$38(DistributedHerder.java:1926)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:750)\n"},"tasks":[{"id":0,"state":"FAILED","worker_id":"zcc1kfconvml01d.cn.ca:8083","trace":"java.lang.NoClassDefFoundError: Could not initialize class com.azure.core.implementation.http.HttpClientProviders\n\tat com.azure.core.http.HttpClient.createDefault(HttpClient.java:27)\n\tat com.azure.core.http.HttpPipelineBuilder.build(HttpPipelineBuilder.java:60)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildAsyncClient(SecretClientBuilder.java:162)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildClient(SecretClientBuilder.java:104)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.$anonfun$get$1(AzureSecretProvider.scala:61)\n\tat scala.collection.MapLike.getOrElse(MapLike.scala:131)\n\tat scala.collection.MapLike.getOrElse$(MapLike.scala:129)\n\tat scala.collection.AbstractMap.getOrElse(Map.scala:63)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.get(AzureSecretProvider.scala:62)\n\tat org.apache.kafka.common.config.ConfigTransformer.transform(ConfigTransformer.java:103)\n\tat org.apache.kafka.connect.runtime.WorkerConfigTransformer.transform(WorkerConfigTransformer.java:58)\n\tat org.apache.kafka.connect.storage.ClusterConfigState.connectorConfig(ClusterConfigState.java:152)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:1819)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$getTaskStartingCallable$33(DistributedHerder.java:1872)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:750)\n"},{"id":1,"state":"FAILED","worker_id":"zcc1kfconvml01d.cn.ca:8083","trace":"java.lang.NoClassDefFoundError: Could not initialize class com.azure.core.implementation.http.HttpClientProviders\n\tat com.azure.core.http.HttpClient.createDefault(HttpClient.java:27)\n\tat com.azure.core.http.HttpPipelineBuilder.build(HttpPipelineBuilder.java:60)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildAsyncClient(SecretClientBuilder.java:162)\n\tat com.azure.security.keyvault.secrets.SecretClientBuilder.buildClient(SecretClientBuilder.java:104)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.$anonfun$get$1(AzureSecretProvider.scala:61)\n\tat scala.collection.MapLike.getOrElse(MapLike.scala:131)\n\tat scala.collection.MapLike.getOrElse$(MapLike.scala:129)\n\tat scala.collection.AbstractMap.getOrElse(Map.scala:63)\n\tat io.lenses.connect.secrets.providers.AzureSecretProvider.get(AzureSecretProvider.scala:62)\n\tat org.apache.kafka.common.config.ConfigTransformer.transform(ConfigTransformer.java:103)\n\tat org.apache.kafka.connect.runtime.WorkerConfigTransformer.transform(WorkerConfigTransformer.java:58)\n\tat org.apache.kafka.connect.storage.ClusterConfigState.connectorConfig(ClusterConfigState.java:152)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:1819)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$getTaskStartingCallable$33(DistributedHerder.java:1872)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:750)\n"}],"type":"source"}

CVE-2023-44487 - io.netty:netty-codec-http2

Hi, I was updating our packages for security vulnerabilities and updated secret-provider to version 2.3.0, but our image scanner still reports these issues:

CVE-2023-44487 - io.netty:netty-codec-http2 - GHSA-xpw8-rcwv-8f8p
Severity: High
Type: Package Vulnerability
Name: io.netty:netty-codec-http2
Installed version / Fixed version : 0:4.1.89.Final / 4.1.100.Final

Could you please update the package in this repo, or let us know if there is a version that has this fixed? We could use that instead.
Thanks.

AWS IAM authentication - Header signed

Hi, any tips on how to use the AWS IAM authentication type with aws.request.headers dynamically?

I am able to use it, but the STS session lasts only 15 minutes. The aws.request.headers parameter has to be signed, and I am only able to generate it once at Kafka Connect's bootstrap. After 10 minutes (token.renewal.ms), because my aws.request.headers value is static, it starts to fail since the session has expired.

Thank you all in advance

PFX and PEM certificate support

Hi,

I have imported my .pfx file into Azure Key Vault (via the add-certificate option, not add-secret).
Will the code be able to read the .pfx file and write it to the host filesystem in the same format?
(The same question applies to PEM.)

Thanks
Devaraj

Problems looking up AWS secret values

Hello,

I'm using the secret provider with AWS Secrets Manager. Sometimes some workers of our Kafka Connect cluster get stuck and completely freeze while looking up the secret values. A worker only becomes healthy again when we manually restart the pod. We are using secret-provider-2.1.6-all.jar with Confluent Kafka Connect version 5.5.0. Has anyone seen this before?

These are the logs of the worker at the moment it is stuck; because the poll takes so long, the worker leaves the Kafka consumer group after max.poll.interval.ms is exceeded. After the pod is restarted, everything works fine.

[2021-08-05 09:21:30,049] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:22:27,016] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:22:28,303] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:22:28,817] INFO [Worker clientId=connect-1, groupId=kafka-connect-bkofin-group] Member connect-1-c1a82d66-3fc1-47fd-a279-fd434766aaeb sending LeaveGroup request to coordinator broker5.pagseguro.intranet:9092 (id: 2147483643 rack: null) due to consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2021-08-05 09:22:28,887] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:22:29,218] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:31:06,150] INFO Looking up value at [bd-court-order-outbox-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:31:07,242] INFO Looking up value at [bd-court-order-outbox-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:38:22,219] INFO Looking up value at [bd-ccs-bdv-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:38:23,477] INFO Looking up value at [bd-ccs-bdv-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:38:27,818] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:38:28,095] INFO Looking up value at [bd-customer-kyc-periodic-validation-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:42:18,414] INFO Looking up value at [bd-ccs-bdv-prod] for key [password] (io.lenses.connect.secrets.providers.AWSSecretProvider)
[2021-08-05 09:42:19,355] INFO Looking up value at [bd-ccs-bdv-prod] for key [username] (io.lenses.connect.secrets.providers.AWSSecretProvider)

AWS credentials expiring when using an Instance Profile for credentials

I'm using this provider, and I notice that when we boot up the Kafka Connect service the provider works fine; but if after some time we refresh a connector and need to retrieve a secret, we get an ExpiredTokenException.

We're using instance profiles to provide the credentials to the box, as this is considered best practice rather than using static credentials. This gives the resource temporary credentials that need to be refreshed.

Caused by: com.amazonaws.services.secretsmanager.model.AWSSecretsManagerException: The security token included in the request is expired (Service: AWSSecretsManager; Status Code: 400; Error Code: ExpiredTokenException;

Looking at the code

val credentials = new AWSStaticCredentialsProvider(credentialProvider)
An AWSStaticCredentialsProvider is used, which means it will never check for new credentials again. I think the preferred option here is to pass new DefaultAWSCredentialsProviderChain() instead:

    val credentialsProvider = settings.authMode match {
      case AuthMode.CREDENTIALS =>
        // static keys taken from the provider configuration
        new AWSStaticCredentialsProvider(new BasicAWSCredentials(settings.accessKey, settings.secretKey.value()))
      case _ =>
        // default chain (env vars, instance profile, etc.), which refreshes credentials automatically
        new DefaultAWSCredentialsProviderChain()
    }
    AWSSecretsManagerClientBuilder
      .standard()
      .withCredentials(credentialsProvider)
      .withRegion(settings.region)
      .build()

Azure secret provider throws an exception with the default secret file-encoding tag

Hi,
I'm using version 2.1.6 of the secret provider plugin to access Azure Key Vault secrets from a Kafka connector.
When updating the configuration of the connector, I receive an error ("No value found for 'UTF-8'") while retrieving the secret if my Azure Key Vault secret has the default "file-encoding" tag with the value "utf-8".

The issue does not appear if the tag is not present or if I change the value of the tag to "UTF8".

The "file-encoding" tag with the value "utf-8" is added automatically if the secret is created with the az CLI and cannot be removed, as described in this issue.

Ideally the secret provider should be able to handle the default value of the tag created by the az CLI.
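
For illustration, a minimal sketch of how the tag value could be normalised before matching; the helper and the expected names (UTF8, BASE64) are assumptions for this sketch, not the provider's actual code:

    // hypothetical helper: map az-cli style values ("utf-8", "UTF-8") onto the
    // names the provider appears to expect ("UTF8") instead of failing on the raw string
    object EncodingNormaliser {
      def normalise(tagValue: String): String =
        tagValue.trim.toUpperCase.replaceAll("[-_]", "") match {
          case "UTF8"   => "UTF8"
          case "BASE64" => "BASE64"
          case other    => other // leave unknown values untouched
        }
    }

    // EncodingNormaliser.normalise("utf-8") == "UTF8"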

Thanks

HashiCorp VaultSecretProvider Reauthentication and Max Token TTL

Dear Maintainers,

I'm currently running into an issue with the VaultSecretProvider that, from my current understanding, stems from a design flaw.

We are using the AppRole authentication method, which allows the Vault client to generate a token with a max TTL of 60m. After Kafka Connect starts up, the VaultSecretProvider works fine for 60 minutes and afterwards starts to throw the following error:

WARN Failed to renew the Kerberos ticket (io.lenses.connect.secrets.async.AsyncFunctionLoop:43)
   com.bettercloud.vault.VaultException: Vault responded with HTTP status code: 403
   Response body: {"errors":["permission denied"]}

From my understanding this is caused by the AsyncFunctionLoop, which tries to renew the Vault token on a fixed period (default 60000ms). Because our current AppRole is set up to generate non-periodic tokens (which is the default), these tokens are not renewable indefinitely.

Docs Ref HashiCorp: https://www.vaultproject.io/docs/concepts/tokens#periodic-tokens

To me it looks like the current VaultSecretProvider only supports periodic tokens (which are somewhat special in the Vault universe). Standard service tokens, which are subject to a max TTL, will all run into the same problem sooner or later.

We have several Vault Agents running in our infrastructure, which I would consider something like a "reference implementation" for automatic daemon-like token renewal. These agents renew the service tokens until their max TTL is reached and then reauthenticate to acquire a fresh token. This, however, doesn't seem to be possible with the VaultSecretProvider, as it only authenticates once, in the provider's configure() call.

I would like to get some insights from other users and/or the maintainers on this problem and how they handle it. I would like to make sure that I'm not missing something here.
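
For illustration, a minimal sketch of the renew-then-reauthenticate behaviour described above, using the BetterCloud driver's renewSelf() and loginByAppRole(); the AppRole mount path, credentials and scheduling are assumptions, not the provider's actual implementation:

    import com.bettercloud.vault.{Vault, VaultException}

    // hypothetical renewal step: try to renew the current token and, once the max
    // TTL is reached (renewal is rejected), perform a full AppRole login again
    def renewOrReauthenticate(vault: Vault, roleId: String, secretId: String): String =
      try {
        vault.auth().renewSelf().getAuthClientToken
      } catch {
        case _: VaultException =>
          // max TTL exceeded: acquire a fresh token; the caller would rebuild the
          // Vault client with the new token before the next lookup
          vault.auth().loginByAppRole("approle", roleId, secretId).getAuthClientToken
      }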

Kafka Connect won't start when setting the vault provider class

Hey guys, I'm trying to use the VaultSecretProvider within my Connect cluster (distributed).

Here is how I add it to my docker configs:

image: confluentinc/cp-kafka-connect-base:5.5.2

I've placed the secret provider secret-provider-2.1.5-all.jar in /local/secret-providers

CONNECT_PLUGIN_PATH: "/usr/share/java,/connect-plugins,/usr/share/confluent-hub-components,/local/connector-plugins,/local/secret-providers"
CONNECT_CONFIG_PROVIDERS: "vault"
CONNECT_CONFIG_PROVIDERS_VAULT_CLASS: "io.lenses.connect.secrets.providers.VaultSecretProvider"
CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_ADDR: "url"
CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_AUTH_METHOD: "token"
CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_TOKEN: "${VAULT_TOKEN}"
CONNECT_CONFIG_PROVIDERS_VAULT_PARAM_FILE_DIR: "local/connector-files/vault"

the resulting properties file becomes:

plugin.path=/usr/share/java,/connect-plugins,/usr/share/confluent-hub-components,/local/connector-plugins,/local/secret-providers
config.providers.vault.param.token=<token>
config.providers.vault.param.auth.method=token
config.providers.vault.param.file.dir=local/connector-files/vault
config.providers.vault.param.addr=url
config.providers.vault.class=io.lenses.connect.secrets.providers.VaultSecretProvider
config.providers=vault

However, on startup I end up with the following stack trace:

[main] ERROR io.confluent.admin.utils.cli.KafkaReadyCommand - Error while running kafka-ready.
org.apache.kafka.common.config.ConfigException: Invalid value java.lang.ClassNotFoundException: io.lenses.connect.secrets.providers.VaultSecretProvider for configuration Invalid config:io.lenses.connect.secrets.providers.VaultSecretProvider ClassNotFoundException exception occurred
        at org.apache.kafka.common.config.AbstractConfig.instantiateConfigProviders(AbstractConfig.java:538)
        at org.apache.kafka.common.config.AbstractConfig.resolveConfigVariables(AbstractConfig.java:477)
        at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:107)
        at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
        at org.apache.kafka.clients.admin.AdminClientConfig.<init>(AdminClientConfig.java:216)
        at org.apache.kafka.clients.admin.Admin.create(Admin.java:71)
        at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:49)
        at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:138)
        at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)

If I remove config.providers=vault, I instead see this line in the Connect logs:

[2021-01-28 20:16:59,022] INFO Added plugin 'io.lenses.connect.secrets.providers.VaultSecretProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)

Any suggestions? Thanks!
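
For what it's worth, one possible workaround (an assumption, not a confirmed fix): the stack trace comes from the image's pre-flight kafka-ready check, which instantiates config.providers from the plain classpath rather than plugin.path, so also exposing the jar on the CLASSPATH might get past it:

    # assumption: make the provider jar visible to the pre-flight KafkaReadyCommand,
    # which does not consult plugin.path
    CLASSPATH: "/local/secret-providers/secret-provider-2.1.5-all.jar"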

I think the AWS secrets from Secrets Manager aren't parsed correctly and thus stay empty

Good day,

I was very glad to find the solution you are creating, but unfortunately it doesn't work entirely correctly.

First up, I was struggling with the auth keys. I can't get the environment variables working, and for the credentials I had to change "config.providers.aws.param.aws.client.key=your-client-key" into "config.providers.aws.param.aws.access.key" to get the connection to AWS working.

Then, when I changed connection.password into "${aws:ems-dev-eventmanagement-db-secret:password}", it unfortunately failed.

In the trace logging I found this part, which I think might be the problem, but I'm not sure:

[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << "{"ARN":"arn:aws:secretsmanager:eu-west-1:577868951105:secret:ems-dev-eventmanagement-db-secret-B0TWsl","CreatedDate":1.591854819443E9,"Name":"ems-dev-eventmanagement-db-secret","SecretString":"{"username":"eventmanagement_user","password":"xxxxxx","engine":"mysql","host":"dev-ems.coollnhgrzes.eu-west-1.rds.amazonaws.com","port":"3306","dbname":"eventmanagement"}","VersionId":"2f3cc10f-b194-4a1f-bef5-523a03f68a70","VersionStages":["AWSCURRENT"]}" (org.apache.http.wire)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << HTTP/1.1 200 OK (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Date: Mon, 15 Jun 2020 05:47:08 GMT (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Content-Type: application/x-amz-json-1.1 (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Content-Length: 491 (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Connection: keep-alive (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << x-amzn-RequestId: fef90d70-733b-470f-8987-17ad2c6f38a3 (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG Connection can be kept alive for 60000 MILLISECONDS (org.apache.http.impl.execchain.MainClientExec)
[2020-06-15 05:47:08,261] TRACE Parsing service response JSON (com.amazonaws.request)
[2020-06-15 05:47:08,261] DEBUG Connection [id: 7][route: {s}->https://secretsmanager.eu-west-1.amazonaws.com:443] can be kept alive for 60.0 seconds (org.apache.http.impl.conn.PoolingHttpClientConnectionManager)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7: set socket timeout to 0 (org.apache.http.impl.conn.DefaultManagedHttpClientConnection)
[2020-06-15 05:47:08,261] DEBUG Connection released: [id: 7][route: {s}->https://secretsmanager.eu-west-1.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50] (org.apache.http.impl.conn.PoolingHttpClientConnectionManager)
[2020-06-15 05:47:08,262] TRACE Done parsing service response (com.amazonaws.request)
[2020-06-15 05:47:08,262] DEBUG Received successful response: 200, AWS Request ID: fef90d70-733b-470f-8987-17ad2c6f38a3 (com.amazonaws.request)
[2020-06-15 05:47:08,262] DEBUG x-amzn-RequestId: fef90d70-733b-470f-8987-17ad2c6f38a3 (com.amazonaws.requestId)

I run Kafka Connect in Kubernetes with the following plugins and versions:

FROM confluentinc/cp-kafka-connect:5.5.0

COPY extras/mariadb-java-client-2.1.0.jar /usr/share/java/kafka/
COPY extras/mysql-connector-java-8.0.15.jar /usr/share/java/kafka-connect-jdbc
COPY extras/ojdbc8.jar /usr/share/java/kafka-connect-jdbc
COPY extras/secret-provider-0.0.2-all.jar /usr/share/java/

And the connector I'm trying to install:

curl -X PUT \
  http://ems-kafka-dev-cp-kafka-connect.ems-dev:8083/connectors/JdbcSourceConnectorEventManagement/config \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d '{
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"timestamp.column.name": "modified_at",
"config.providers.aws.param.aws.access.key": "xxxxxxxxxx",
"config.providers.aws.param.aws.secret.key": "xxxxxxxxx",
"connection.password": "${aws:ems-dev-eventmanagement-db-secret:password}",
"tasks.max": "1",
"config.providers": "aws",
"table.types": "VIEW",
"table.whitelist": "trigger_recipe_event_view,recipe_business_rules_with_parameters_view,event_linked_active_recipes_view",
"mode": "timestamp",
"topic.prefix": "connected-keyless-",
"poll.interval.ms": "60000",
"config.providers.aws.param.aws.auth.method": "default",
"config.providers.aws.param.aws.region": "eu-west-1",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"config.providers.aws.class": "io.lenses.connect.secrets.providers.AWSSecretProvider",
"validate.non.null": "false",
"connection.attempts": "3",
"batch.max.rows": "100",
"connection.backoff.ms": "10000",
"timestamp.delay.interval.ms": "0",
"table.poll.interval.ms": "60000",
"connection.user": "${aws:ems-dev-eventmanagement-db-secret:username}",
"config.providers.aws.param.file.dir": "/connector-files/aws",
"connection.url": "jdbc:mysql://dev-ems.coollnhgrzes.eu-west-1.rds.amazonaws.com:3306/eventmanagement",
"numeric.precision.mapping": "false"
}'
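
For illustration, a minimal sketch of how a SecretString like the one in the trace would be looked up key by key; this is a hypothetical helper using Jackson, not the provider's actual parsing code:

    import com.fasterxml.jackson.databind.ObjectMapper

    // hypothetical helper: parse the SecretString JSON returned by Secrets Manager
    // and resolve a single key such as "password"
    def lookup(secretString: String, key: String): Option[String] = {
      val node = new ObjectMapper().readTree(secretString)
      Option(node.get(key)).map(_.asText())
    }

    // e.g. lookup("""{"username":"eventmanagement_user","password":"xxxxxx"}""", "password")
    //      returns Some("xxxxxx")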

For cert login, there is no way to provide a client cert and key

The cert login method does not work at all, as you are not passing the truststore or the PEM cert and key when logging in. As a result, the Vault server responds with:

[2021-02-25 06:47:50,043] INFO Initializing client with mode [CERT] (io.lenses.connect.secrets.providers.VaultSecretProvider)
[2021-02-25 06:47:50,452] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed)
com.bettercloud.vault.VaultException: Vault responded with HTTP status code: 400
Response body: {"errors":["missing client token"]}

See:
https://github.com/BetterCloud/vault-java-driver#java-keystore-jks-based-config
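
For reference, a minimal sketch of what passing the client material could look like with the BetterCloud driver's SslConfig; the file paths are placeholders and this is an assumption about a possible fix, not the provider's current behaviour:

    import java.io.File
    import com.bettercloud.vault.{SslConfig, Vault, VaultConfig}

    // hypothetical client setup: supply the server CA plus the client cert/key so that
    // cert auth can actually present a certificate during the TLS handshake
    val sslConfig = new SslConfig()
      .pemFile(new File("/vault/ca.pem"))                  // server CA certificate
      .clientPemFile(new File("/vault/client.pem"))        // client certificate
      .clientKeyPemFile(new File("/vault/client-key.pem")) // client private key
      .build()

    val vault = new Vault(
      new VaultConfig()
        .address("https://vault:8200")
        .sslConfig(sslConfig)
        .build()
    )

    val token = vault.auth().loginByCert().getAuthClientToken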

Use AWS IAM role - EC2

There is a security concern with passing these two parameters in the worker:

config.providers.aws.param.aws.access.key=your-client-key
config.providers.aws.param.aws.secret.key=your-secret-key

I'm going to deploy this plugin onto an EC2 instance that already has an IAM role attached with Secrets Manager permissions. Can I leverage that instead of passing the keys in the worker properties?
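
For context, a sketch of the worker properties this would imply, assuming the provider honours aws.auth.method=default and falls back to the instance profile credentials (property names taken from the other issues in this list):

    config.providers=aws
    config.providers.aws.class=io.lenses.connect.secrets.providers.AWSSecretProvider
    # assumption: "default" uses the SDK default credentials chain (and thus the
    # EC2 instance profile), so no static keys are set here
    config.providers.aws.param.aws.auth.method=default
    config.providers.aws.param.aws.region=eu-west-1
    config.providers.aws.param.file.dir=/connector-files/aws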

CVE-2023-1370

Installed Resource
json-smart 2.4.8

Fixed Version
2.4.9

AWS Secret names with slash '/'

I have a secret in AWS Secrets Manager with the name dev/debezium/mysql/testservice/password.

I tried to use this name in my source connector.

 "database.password": "${aws:dev/debezium/mysql/testservice/password:mysql_pass}",

In my worker properties, file.dir=/etc/kafka.

And the error I get is:

java.nio.file.AccessDeniedException: /etc/kafka/dev

But when I created empty folders matching the secret name, it was resolved:

mkdir -p /etc/kafka/dev/debezium/mysql/testservice/password

aws.access.key and aws.secret.key are required although aws.auth.mode=default

I tried to set it up using default authentication, because both Connect and AWS Secrets Manager are in the same AWS account.

The config was as follows:
aws.access.key = null
aws.auth.method = default
aws.region = eu-central-1
aws.secret.key = null

But I would get the following exception:

ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:86)
 org.apache.kafka.connect.errors.ConnectException: aws.access.key not set
	at io.lenses.connect.secrets.config.AbstractConfigExtensions$AbstractConfigExtension$.raiseException$extension(AbstractConfigProvider.scala:26)
 	at io.lenses.connect.secrets.config.AbstractConfigExtensions$AbstractConfigExtension$.$anonfun$getStringOrThrowOnNull$extension$1(AbstractConfigProvider.scala:17)
	at scala.Option.getOrElse(Option.scala:189)

The call to getStringOrThrowOnNull in https://github.com/lensesio/secret-provider/blob/master/src/main/scala/io/lenses/connect/secrets/config/AWSProviderSettings.scala#L26-L29 is too strict, as the values are checked for being empty afterwards anyway.
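
For illustration, a sketch of the more lenient behaviour suggested here: only require the static keys when credentials-based auth is requested. The helper and its signature are assumptions based on the snippets in this thread, not the actual source:

    import org.apache.kafka.connect.errors.ConnectException

    // hypothetical validation: static keys are only mandatory for credentials auth;
    // with aws.auth.method=default the SDK default credentials chain is used instead
    def validateKeys(authMethod: String, accessKey: Option[String], secretKey: Option[String]): Unit =
      if (authMethod.equalsIgnoreCase("credentials") &&
          (accessKey.forall(_.trim.isEmpty) || secretKey.forall(_.trim.isEmpty)))
        throw new ConnectException("aws.access.key and aws.secret.key must be set when aws.auth.method=credentials")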

HashiCorp VaultProvider config option to override a secret's lease duration

If the VaultProvider is used to fetch secrets from a KV v2 secrets engine in HashiCorp Vault, the lease_duration is often 0, since it doesn't seem possible to set lease durations on kv/v2 secrets, nor does it respect Vault's default system TTL. This means the expiry time is always the current clock time and the cache is effectively useless.

To overcome this, secret-provider could add a configurable Long to the VaultSettings, overrideLeaseDuration, which the VaultProvider would use in place of getLeaseDuration from Vault's LogicalResponse.

If there is interest here I can submit a PR.
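
A minimal sketch of the idea; overrideLeaseDuration is the proposed setting, and the wiring shown here is an assumption:

    import com.bettercloud.vault.response.LogicalResponse

    // hypothetical: use the configured override when present, otherwise fall back to
    // whatever lease duration Vault reported (0 for kv/v2 secrets)
    def effectiveTtlSeconds(response: LogicalResponse, overrideLeaseDuration: Option[Long]): Long =
      overrideLeaseDuration.getOrElse(response.getLeaseDuration.longValue())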

vault kubernetes auth doesn't work

Hi, I'm trying to use your Vault integration with k8s auth, but it doesn't work.
I get the following error:

2022-02-15 18:05:38,790 INFO Initializing client with mode [KUBERNETES] (io.lenses.connect.secrets.providers.VaultSecretProvider) [main]
2022-02-15 18:05:39,312 ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed) [main]
com.bettercloud.vault.VaultException: Vault responded with HTTP status code: 400
Response body: {"errors":["missing client token"]}

	at com.bettercloud.vault.api.Auth.loginByJwt(Auth.java:1045)
	at com.bettercloud.vault.api.Auth.loginByKubernetes(Auth.java:1113)
	at io.lenses.connect.secrets.providers.VaultHelper.$anonfun$createClient$5(VaultHelper.scala:84)
	at scala.Option.map(Option.scala:230)
	at io.lenses.connect.secrets.providers.VaultHelper.createClient(VaultHelper.scala:81)
	at io.lenses.connect.secrets.providers.VaultHelper.createClient$(VaultHelper.scala:19)
	at io.lenses.connect.secrets.providers.VaultSecretProvider.createClient(VaultSecretProvider.scala:31)
	at io.lenses.connect.secrets.providers.VaultSecretProvider.configure(VaultSecretProvider.scala:43)
	at org.apache.kafka.common.config.AbstractConfig.instantiateConfigProviders(AbstractConfig.java:572)
	at org.apache.kafka.common.config.AbstractConfig.resolveConfigVariables(AbstractConfig.java:515)
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:107)
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
	at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:452)
	at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:405)
	at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:95)
	at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:80)

I've created a service account and role binding, and I've also created the k8s auth backend in Vault.
I've also tested that auth works inside the container:

curl -k --request POST --data '{"jwt": "jwt-from-service-account", "role": "kv-k8s-test-kafka-connect-cluster-role"}' https://vault_addr:8200/v1/auth/k8s-cluster/login
{
  "request_id": "5433b4ac-ab83-8c84-2f52-e95cff9af9eb",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": null,
  "auth": {
    "client_token": "s.token",
    "accessor": "eKnQVOKH7H",
    "policies": [
      "default",
      "policy-kv-k8s-kafka-connect-cluster-read"
    ],

I don't understand why the plugin can't get a secret and fails with a "missing client token" error.

Environment Secrets file.dir worker property in documentation

Hello!
Here is hopefully a minor one.

I see that the docs (for example https://docs.lenses.io/5.2/connectors/connect-secrets/env/) have always seemed to refer to config.providers.env.file.dir for the file.dir property of the Environment config provider.

If I configure my worker in this way, when I look in the Connect server log after the Environment config provider has been loaded, the file.dir property has no value (it is blank).

If I instead configure my worker with config.providers.env.param.file.dir (note the .param. after the provider name but before the parameter name), as you can see for other providers such as https://docs.lenses.io/5.2/connectors/connect-secrets/vault/, then I can see the file.dir property value show up in my Connect logs and it is no longer blank.

Is this a mismatch in the docs, or a mismatch in how Connect is picking this up?
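
For clarity, the two forms side by side (the directory value is just a placeholder):

    # as referenced in the env provider docs (file.dir comes through blank in the Connect logs):
    config.providers.env.file.dir=/path/to/secret/files

    # with .param. like the other providers (file.dir shows up in the Connect logs):
    config.providers.env.param.file.dir=/path/to/secret/files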

Support for custom K8s authentication path

Hi team,

It's really really fantastic to see this project being active again.

Goal

We would like to support a custom authentication path for Vault K8s authentication, rather than the default v1/auth/kubernetes/login, as specified in the official docs.

Problem

  • The login URL is built inside vault-java-driver's loginByJwt (excerpt):

    public AuthResponse loginByJwt(final String provider, final String role, final String jwt)
            throws VaultException {
        return retry(attempt -> {
            // HTTP request to Vault
            final String requestJson = Json.object().add("role", role).add("jwt", jwt)
                    .toString();
            final RestResponse restResponse = new Rest()
                    .url(config.getAddress() + "/v1/auth/" + provider + "/login")
  • However, the vault-java-driver repo has not been maintained in a few years, per the reply from the original author (link).

My proposed solution

  • For the secret-provider side, I did a rough sketch a while ago: dttung2905#1. Of course there are a few things to take care of as well, for example how to fall back when the user does not provide any authentication path (see the sketch after this list).
  • For vault-java-driver, the author has mentioned a few approaches in his post. I think the least disruptive would be switching to the community fork https://github.com/jopenlibs/vault-java-driver, which is actively developed. I could submit another PR to add the feature on their side as well.
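
For illustration, a minimal sketch of the configurable-path idea, using the loginByJwt overload shown in the excerpt above; the kubernetesAuthPath setting name is hypothetical:

    import com.bettercloud.vault.Vault

    // hypothetical: fall back to the default mount when no custom path is configured
    def kubernetesLogin(vault: Vault, role: String, jwt: String, kubernetesAuthPath: Option[String]): String = {
      val mount = kubernetesAuthPath.getOrElse("kubernetes")
      // loginByJwt builds the URL as /v1/auth/<mount>/login, as the excerpt above shows
      vault.auth().loginByJwt(mount, role, jwt).getAuthClientToken
    }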

Please let me know what you think @davidsloan. Sorry for tagging you directly; your recent activity in this repo is the only reason that motivated me to write this lengthy issue 😄
