trinodb / charts
License: Apache License 2.0
Hi there!
I recently attached SSDs for caching purposes and enabled them by mounting the volume at /data/trino/cache. However, upon doing so, I encountered the following exception from Alluxio:
IllegalArgumentException: Cannot write to cache directory /data/trino/cache.
After some investigation, I suspect that I need to set fsGroup to 1000 in pod.spec.securityContext. Currently, the chart only supports runAsUser and runAsGroup in securityContext:
...
{{- with .Values.securityContext }}
securityContext:
  runAsUser: {{ .runAsUser }}
  runAsGroup: {{ .runAsGroup }}
{{- end }}
...
The volume is declared in the pod spec as:
...
- name: ebs-cache-volume
  ephemeral:
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        volumeMode: Filesystem
...
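For illustration, here is a sketch of the template change I have in mind; the fsGroup key under .Values.securityContext is my assumption of how it could be exposed, not something the chart supports today:

{{- with .Values.securityContext }}
securityContext:
  runAsUser: {{ .runAsUser }}
  runAsGroup: {{ .runAsGroup }}
  {{- with .fsGroup }}
  fsGroup: {{ . }}   # assumed key: asks the kubelet to set this GID on mounted volumes
  {{- end }}
{{- end }}

With fsGroup set, Kubernetes typically applies the group ID to the ephemeral volume when it is mounted, which is what a non-root Trino process needs in order to write to it.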
Could you please confirm whether adding fsGroup: 1000 to pod.spec.securityContext would resolve this issue? If not, any guidance on how to properly configure the security context for SSD caching would be greatly appreciated.
Thanks in advance for your help!
Hello, I'm getting the following error on a brand-new 0.26.0 chart install. It happens when I use --set global.image.tag, like:
helm install trino-sql -f deployments/k8s/values.yaml --namespace trino-sql --set global.image.tag=latest deployments/k8s/
Error: INSTALLATION FAILED: failed parsing --set data: unable to parse key: interface conversion: interface {} is nil, not map[string]interface {}
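This Helm error typically means the --set path passes through a key that exists in the supplied values but holds nil instead of a map, e.g. a bare global: line in deployments/k8s/values.yaml; this chart itself defines no global section. A hedged workaround is to set the chart's own image.tag key (shown in other issues on this page), either with --set image.tag=latest or in the values file:

image:
  tag: latest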
Trino supports multiple authentication types: https://trino.io/docs/current/security/authentication-types.html#multiple-authentication-types
The current chart is oriented toward a single-valued authentication type.
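For illustration, this is the kind of configuration such support would enable (hypothetical; the current chart expects a single value):

server:
  config:
    authenticationType: "PASSWORD,OAUTH2"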
Hello Team,
We are encountering the following issue randomly.
Below are the details:
Trino Version: 450
Kubernetes Setup Version: 0.25.0
Metastore: Hive
Catalog Used: Hive
io.trino.spi.TrinoException: Unexpected response from
http://worker-ip:port/v1/task/20240726_064136_01430_tuucc.2.0.0?summarize
at io.trino.server.remotetask.SimpleHttpResponseHandler.onSuccess(SimpleHttpResponseHandler.java:70)
at io.trino.server.remotetask.SimpleHttpResponseHandler.onSuccess(SimpleHttpResponseHandler.java:27)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1137)
at io.airlift.concurrent.BoundedExecutor.drainQueue(BoundedExecutor.java:79)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1570)
Caused by: java.lang.IllegalArgumentException: Unable to create class io.trino.execution.TaskInfo from JSON response:
[io.airlift.jaxrs.JsonMapperParsingException: Invalid json for Java type io.trino.server.TaskUpdateRequest
at io.airlift.jaxrs.AbstractJacksonMapper.readFrom(AbstractJacksonMapper.java:123)
at io.airlift.jaxrs.JsonMapper.readFrom(JsonMapper.java:41)
at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.invokeReadFrom(ReaderInterceptorExecutor.java:235)
at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.aroundReadFrom(ReaderInterceptorExecutor.java:214)
at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:134)
at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundReadFrom(MappableExceptionWrapperInterceptor.java:49)
at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:134)
at org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1072)
at org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:657)
at org.glassfish.jersey.server.ContainerRequest.readEntity(ContainerRequest.java:290)
at org.glassfish.jersey.server.internal.inject.EntityParamValueParamProvider$EntityValueSupplier.apply(EntityParamValueParamProvider.java:73)
at org.glassfish.jersey.server.internal.inject.EntityParamValueParamProvider$EntityValueSupplier.apply(EntityParamValueParamProvider.java:56)
at org.glassfish.jersey.server.spi.internal.ParamValueFactoryWithSource.apply(ParamValueFactoryWithSource.java:50)
at org.glassfish.jersey.server.spi.internal.ParameterValueHelper.getParameterValues(ParameterValueHelper.java:68)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$AbstractMethodParamInvoker.getParamValues(JavaResourceMethodDispatcherProvider.java:109)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:159)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:93)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:478)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:400)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:263)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:266)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:242)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:697)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:358)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:312)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at org.eclipse.jetty.ee10.servlet.ServletHolder.handle(ServletHolder.java:736)
at org.eclipse.jetty.ee10.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1614)
at io.airlift.http.server.TraceTokenFilter.doFilter(TraceTokenFilter.java:62)
at org.eclipse.jetty.ee10.servlet.FilterHolder.doFilter(FilterHolder.java:205)
at org.eclipse.jetty.ee10.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1586)
at io.airlift.http.server.TimingFilter.doFilter(TimingFilter.java:51)
at org.eclipse.jetty.ee10.servlet.FilterHolder.doFilter(FilterHolder.java:205)
at org.eclipse.jetty.ee10.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1586)
at org.eclipse.jetty.ee10.servlet.ServletHandler$MappedServlet.handle(ServletHandler.java:1547)
at org.eclipse.jetty.ee10.servlet.ServletChannel.dispatch(ServletChannel.java:824)
at org.eclipse.jetty.ee10.servlet.ServletChannel.handle(ServletChannel.java:436)
at org.eclipse.jetty.ee10.servlet.ServletHandler.handle(ServletHandler.java:464)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:597)
at org.eclipse.jetty.server.handler.ContextHandler.handle(ContextHandler.java:851)
at org.eclipse.jetty.server.Handler$Wrapper.handle(Handler.java:740)
at org.eclipse.jetty.server.handler.EventsHandler.handle(EventsHandler.java:81)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:151)
at org.eclipse.jetty.server.Handler$Wrapper.handle(Handler.java:740)
at org.eclipse.jetty.server.handler.EventsHandler.handle(EventsHandler.java:81)
at org.eclipse.jetty.server.Server.handle(Server.java:179)
at org.eclipse.jetty.server.internal.HttpChannelState$HandlerInvoker.run(HttpChannelState.java:635)
at org.eclipse.jetty.server.internal.HttpConnection.onFillable(HttpConnection.java:411)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:322)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:99)
at org.eclipse.jetty.io.SelectableChannelEndPoint$1.run(SelectableChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.runTask(AdaptiveExecutionStrategy.java:478)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.consumeTask(AdaptiveExecutionStrategy.java:441)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.tryProduce(AdaptiveExecutionStrategy.java:293)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.run(AdaptiveExecutionStrategy.java:201)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:311)
at org.eclipse.jetty.util.thread.MonitoredQueuedThreadPool$1.run(MonitoredQueuedThreadPool.java:73)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:979)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.doRunJob(QueuedThreadPool.java:1209)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1164)
at java.base/java.lang.Thread.run(Thread.java:1570)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Unknown handle id: hive:io.trino.plugin.hive.HiveTableHandle (through reference chain: io.trino.server.TaskUpdateRequest["fragment"]->io.trino.sql.planner.PlanFragment["root"]->io.trino.sql.planner.plan.AggregationNode["source"]->io.trino.sql.planner.plan.ProjectNode["source"]->io.trino.sql.planner.plan.FilterNode["source"]->io.trino.sql.planner.plan.TableScanNode["table"]->io.trino.metadata.TableHandle["connectorHandle"])
at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:402)
at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:361)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.wrapAndThrow(BeanDeserializerBase.java:1937)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:572)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:440)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1493)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:348)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:545)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:570)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:440)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1493)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:348)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:220)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:170)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:136)
at com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:263)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:570)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:440)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1493)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:348)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:220)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:170)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:136)
at com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:263)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:570)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:440)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1493)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:348)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:220)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:170)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:136)
at com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:263)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:570)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:440)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1493)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:348)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:220)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:170)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:136)
at com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:263)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:570)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:440)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1493)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:348)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
at com.fasterxml.jackson.databind.deser.std.ReferenceTypeDeserializer.deserialize(ReferenceTypeDeserializer.java:207)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:545)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:570)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:440)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1493)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:348)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:342)
at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:4881)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3104)
at io.airlift.jaxrs.AbstractJacksonMapper.readFrom(AbstractJacksonMapper.java:110)
... 68 more
Caused by: java.lang.IllegalArgumentException: Unknown handle id: hive:io.trino.plugin.hive.HiveTableHandle
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:220)
at io.trino.metadata.HandleResolver.getHandleClass(HandleResolver.java:56)
at io.trino.metadata.AbstractTypedJacksonModule$InternalTypeResolver.typeFromId(AbstractTypedJacksonModule.java:170)
at com.fasterxml.jackson.databind.jsontype.impl.TypeDeserializerBase._findDeserializer(TypeDeserializerBase.java:159)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:151)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:136)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromAny(AsPropertyTypeDeserializer.java:240)
at io.trino.metadata.AbstractTypedJacksonModule$InternalTypeDeserializer.deserialize(AbstractTypedJacksonModule.java:91)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:545)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:570)
... 129 more
]
at io.airlift.http.client.FullJsonResponseHandler$JsonResponse.<init>(FullJsonResponseHandler.java:107)
at io.airlift.http.client.FullJsonResponseHandler.handle(FullJsonResponseHandler.java:67)
at io.airlift.http.client.FullJsonResponseHandler.handle(FullJsonResponseHandler.java:36)
at io.airlift.http.client.jetty.JettyResponseFuture.processResponse(JettyResponseFuture.java:130)
at io.airlift.http.client.jetty.JettyResponseFuture.completed(JettyResponseFuture.java:104)
at io.airlift.http.client.jetty.BufferingResponseListener.onComplete(BufferingResponseListener.java:89)
at org.eclipse.jetty.client.transport.ResponseListeners.notifyComplete(ResponseListeners.java:350)
at org.eclipse.jetty.client.transport.ResponseListeners.lambda$addCompleteListener$7(ResponseListeners.java:335)
at org.eclipse.jetty.client.transport.ResponseListeners.notifyComplete(ResponseListeners.java:350)
at org.eclipse.jetty.client.transport.ResponseListeners.notifyComplete(ResponseListeners.java:342)
at org.eclipse.jetty.client.transport.HttpReceiver.terminateResponse(HttpReceiver.java:420)
at org.eclipse.jetty.client.transport.HttpReceiver.terminateResponse(HttpReceiver.java:402)
at org.eclipse.jetty.client.transport.HttpReceiver.lambda$responseSuccess$3(HttpReceiver.java:367)
at org.eclipse.jetty.util.thread.SerializedInvoker$Link.run(SerializedInvoker.java:191)
at org.eclipse.jetty.util.thread.SerializedInvoker.run(SerializedInvoker.java:117)
at org.eclipse.jetty.client.transport.HttpReceiver$ContentSource.invokeDemandCallback(HttpReceiver.java:778)
at org.eclipse.jetty.client.transport.HttpReceiver$ContentSource.onDataAvailable(HttpReceiver.java:717)
at org.eclipse.jetty.client.transport.HttpReceiver.responseContentAvailable(HttpReceiver.java:325)
at org.eclipse.jetty.client.transport.internal.HttpReceiverOverHTTP.receive(HttpReceiverOverHTTP.java:82)
at org.eclipse.jetty.client.transport.internal.HttpChannelOverHTTP.receive(HttpChannelOverHTTP.java:97)
at org.eclipse.jetty.client.transport.internal.HttpConnectionOverHTTP.onFillable(HttpConnectionOverHTTP.java:207)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:322)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:99)
at org.eclipse.jetty.io.SelectableChannelEndPoint$1.run(SelectableChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.runTask(AdaptiveExecutionStrategy.java:478)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.consumeTask(AdaptiveExecutionStrategy.java:441)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.tryProduce(AdaptiveExecutionStrategy.java:293)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.run(AdaptiveExecutionStrategy.java:201)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:311)
at org.eclipse.jetty.util.thread.MonitoredQueuedThreadPool$1.run(MonitoredQueuedThreadPool.java:73)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:979)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.doRunJob(QueuedThreadPool.java:1209)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1164)
... 1 more
Caused by: java.lang.IllegalArgumentException: Invalid JSON bytes for [simple type, class io.trino.execution.TaskInfo]
at io.airlift.json.JsonCodec.fromJson(JsonCodec.java:202)
at io.airlift.http.client.FullJsonResponseHandler$JsonResponse.<init>(FullJsonResponseHandler.java:104)
... 33 more
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'io': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 1, column: 4]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:2567)
at com.fasterxml.jackson.core.JsonParser._constructReadException(JsonParser.java:2593)
at com.fasterxml.jackson.core.JsonParser._constructReadException(JsonParser.java:2601)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:765)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3659)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2747)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:867)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:753)
at com.fasterxml.jackson.databind.ObjectReader._initForReading(ObjectReader.java:357)
at com.fasterxml.jackson.databind.ObjectReader._bind(ObjectReader.java:2089)
at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1249)
at io.airlift.json.JsonCodec.fromJson(JsonCodec.java:197)
... 34 more
Hello, I have already set up HTTPS behind an ingress with a cert, enabled the shared secret, and configured the PASSWORD authentication type, but I am still unable to get the password file configuration working.
This is my current values.yaml:
image:
  tag: ""
server:
  workers: 3
  config:
    https:
      enabled: true
      port: 8443
      keystore:
        path: ""
    authenticationType: "PASSWORD"
auth:
  passwordAuth: "XXXX:XXXX"
  refreshPeriod: 1m
additionalConfigProperties:
  - "internal-communication.shared-secret=XXXX"
coordinator:
  secretMounts:
    - name: trino-password-authentication
      secretName: trino-file-authentication
      path: /etc/trino/auth
  jvm:
    maxHeapSize: "8G"
worker:
  jvm:
    maxHeapSize: "8G"
service:
  type: "NodePort"
  port: 8080
ingress:
  enabled: true
  className: ""
  annotations:
    kubernetes.io/ingress.class: "nginx"
  hosts:
    - host: somedomain.com
      paths:
        - backend:
            service:
              name: trino-cluster
              port:
                number: 8080
          path: /
          pathType: Prefix
    - http:
        paths:
          - backend:
              service:
                name: trino-cluster
                port:
                  number: 8080
            path: /*
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - somedomain.com
      secretName: secret-tls
I get no errors; password authentication is simply not enabled on the login screen. (The XXXX values are, of course, the real user and password.)
To create the password file, I followed this procedure from the Trino docs:
Creating a password file
Password files utilizing the bcrypt format can be created using the htpasswd utility from the Apache HTTP Server. The cost must be specified, as Trino enforces a higher minimum cost than the default.
Create an empty password file to get started:
touch password.db
Add or update the password for the user test:
htpasswd -B -C 10 password.db test
For the shared secret, I used:
openssl rand 512 | base64
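One thing that may be worth trying, assuming your chart version supports it: the auth.passwordAuthSecret key (used in other issues on this page) points the chart at a pre-created secret containing password.db, instead of the inline passwordAuth value plus a manual secretMounts entry:

auth:
  # name of a pre-created secret whose password.db key holds the htpasswd output
  passwordAuthSecret: "trino-file-authentication"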
Good afternoon.
Is it possible to move sensitive data from a ConfigMap into a Kubernetes Secret?
For example: database connection data.
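One pattern that may cover this, and that other issues on this page already use, is Trino's ${ENV:VAR} substitution: only a variable reference stays in the ConfigMap, while the real value is delivered through a Secret-backed environment variable. A sketch, where the env values key and the secret name are assumptions:

env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: trino-db-credentials   # hypothetical pre-created secret
        key: password
additionalCatalogs:
  postgres: |
    connector.name=postgresql
    connection-url=jdbc:postgresql://db.example.com:5432/mydb
    connection-user=trino
    connection-password=${ENV:DB_PASSWORD}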
Currently it is not possible to easily use dynamic catalog management, because ConfigMaps are mounted read-only into the coordinator pod.
It would be nice to have a way to work around that; do you have any ideas? Perhaps creating an emptyDir and then moving the files from the current ConfigMap there?
Edit: Created a PR: #149
When using the aws-load-balancer-controller, it is useful to pass annotations on the Service (https://github.com/trinodb/charts/blob/trino-0.19.0/charts/trino/templates/service.yaml) to denote what properties the load balancer should have.
Proposed solution:
From https://github.com/trinodb/charts/blob/trino-0.19.0/charts/trino/values.yaml#L226
service:
  type: ClusterIP
  port: 8080
  annotations: {} # <--------- this field is new
From https://github.com/trinodb/charts/blob/trino-0.19.0/charts/trino/templates/service.yaml
metadata:
  name: {{ template "trino.fullname" . }}
  labels:
    app: {{ template "trino.name" . }}
    chart: {{ template "trino.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  #######
  # This section is new
  ########
  {{- with .Values.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  #######
  # /This section is new
  ########
spec:
I could make a pull request if you'd like.
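For example, with the aws-load-balancer-controller the new field could carry standard annotations like these (values illustrative):

service:
  type: LoadBalancer
  port: 8080
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip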
Currently, the namespace option is ignored by the helm chart.
helm template . --namespace <my_namespace>
I would like the namespace to be included in the rendered resources and would be glad if this were supported.
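The usual fix is to stamp the release namespace into each template's metadata; a minimal sketch in the chart's existing style:

metadata:
  name: {{ template "trino.fullname" . }}
  namespace: {{ .Release.Namespace }}

Note that helm install sets the namespace server-side regardless, so this mainly affects helm template output and tools that consume it.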
accessControl properties need to be present on worker nodes to properly implement graceful shutdown. Without them, the preStop API request results in a 403. A workaround is to add the access control properties via additionalConfigFiles (sketched below). I believe it would be helpful to have the properties added to the worker configmap as well, so that they are present on both the coordinator and worker nodes.
Some of my learnings are outlined on the Trino Slack here.
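A hedged sketch of the workaround described above; the exact shape of the additionalConfigFiles key is assumed from the description, while the file-based access control properties themselves are standard Trino configuration:

worker:
  additionalConfigFiles:              # key name per the description above; shape assumed
    access-control.properties: |
      access-control.name=file
      security.config-file=/etc/trino/access-control/rules.json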
Hey,
I'm trying to secure our installation with an HTTPS certificate and internal TLS so that I can use LDAP for authentication. However, I run into the error below.
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
This occurs when I connect via 'https://internal-fqdn'. Our organization has a wildcard certificate issued by GoDaddy, which has been added to the cluster in the form of a secret. This certificate is terminated on the ingress, with the backend protocol set to HTTPS.
If I visit the site via a web browser, it reports that there is a certificate and that it is valid. However, if I connect via the Trino CLI with ./trino.jar https://internal-fqdn and then run show catalogs;, the error appears. If I remove TLS and connect via http, the error does not occur. Any suggestions?
For context, I also have the following configuration in our helm values file:
additionalConfigProperties:
  # To allow the certificate to be terminated at the ingress
  - http-server.process-forwarded=true
  # Required for the workers and coordinator to encrypt traffic between each other
  - internal-communication.shared-secret={redacted secret phrase}
  - internal-communication.https.required=true
  # Not needed according to https://trino.io/docs/current/security/tls.html#https-secure-directly:~:text=This%20is%20why%20you%20do%20not%20need%20to%20configure%20http%2Dserver.https.enabled%3Dtrue
  # http-server.https.enabled=true
  # http-server.https.port=8443
Not able to deploy multiple clusters using the helm chart due to configmap name conflicts (caused by this line, for example).
chart version: 0.20.0
Proposed solution: template the configmap object names using the predefined templates in _helpers.tpl, e.g. instead of trino-resource-groups-volume-coordinator use trino-resource-groups-volume-{{ template "trino.coordinator" . }}.
We want to specify pod.spec.containers.securityContext, so I would be glad if this were supported.
e.g. Superset supports that:
https://github.com/apache/superset/blob/8c32c6da169afec312923e516850d90a69e78f46/helm/superset/values.yaml#L336
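A sketch of the values shape this could take, modeled on the Superset example above; the containerSecurityContext key is hypothetical until the chart adds it:

coordinator:
  containerSecurityContext:   # hypothetical key
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop: ["ALL"]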
Hello,
I have been working on getting the JMX Java agent working with my Trino installation and came across this issue: the agent .jar file needs to be present on the Trino machines, and right now the charts do not support additionalVolumes, which would let me add this file as a volumeMount.
Is there an alternative way to do this, or will it depend on something like additionalVolumes being included in the chart?
Thank you.
The NodePort service from #204 is a great feature; can you release a new version that includes it?
Cannot use TLS encryption between the Ingress controller and the Service.
server:
  config:
    https:
      enabled: true
service:
  type: ClusterIP
  port: 8443
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
The created Service should map to port 8443 on the pods, facilitating TLS encryption between the Ingress controller and the Trino coordinator pod. Instead, port 8443 is assigned to http-server.http.port, which makes the process attempt to listen on the same port twice and ends in an exception:
UncheckedIOException: Failed to bind to /0.0.0.0:8443
Currently, the Trino Helm chart does not allow setting resource requests and limits for the JMX exporter. This limitation causes issues on Kubernetes clusters that require resource specifications for CPU and memory as a prerequisite.
Error creating: pods "trino-helm-etl-coordinator-8655d99488-5jlwb" is forbidden: failed quota: quota-bitrino-dev: must specify limits.cpu for: jmx-exporter; limits.memory for: jmx-exporter; requests.cpu for: jmx-exporter; requests.memory for: jmx-exporter
This restriction prevents deployment on clusters with strict resource quota policies, hindering the ability to use the Trino Helm chart in such environments.
Please provide a way to configure resource requests and limits for the JMX exporter in the Helm chart values.
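A sketch of the values addition being requested; the resources key under jmx.exporter is the proposed field, not something the chart exposes yet:

jmx:
  exporter:
    enabled: true
    resources:          # proposed field
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi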
In values.yaml the default value for additionalConfigProperties is set to {}, suggesting that a dict object is expected: https://github.com/trinodb/charts/blob/main/charts/trino/values.yaml#L199-L207. However, in the templates this value is processed using range (https://github.com/trinodb/charts/blob/main/charts/trino/templates/configmap-coordinator.yaml#L62-L64), suggesting a list of values. In theory, using the same approach as when configuring additionalCatalogs (charts/trino/templates/configmap-catalog.yaml, lines 22 to 25 in 533ace5), one could write:
additionalConfigProperties:
  new-properties: |
    internal-communication.shared-secret=${ENV:TRINO_SHARED_SECRET}
    http-server.process-forwarded=true
  more-properties: |
    web-ui.authentication.type=oauth2
However, the template is missing the correct indentation (nindent 4). So instead one has to set the properties as a list:
additionalConfigProperties:
  - "internal-communication.shared-secret=${ENV:TRINO_SHARED_SECRET}"
  - "http-server.process-forwarded=true"
  - "web-ui.authentication.type=oauth2"
To make configuration more consistent, I would suggest one of the following:
1. Change additionalConfigProperties to a string so it can be configured as follows:
additionalConfigProperties: |
  internal-communication.shared-secret=${ENV:TRINO_SHARED_SECRET}
  http-server.process-forwarded=true
  web-ui.authentication.type=oauth2
2. Keep the dict form and fix the template indentation with nindent 4.
Personally, I'd prefer 1, since it is more obvious how the string settings correspond to the Trino properties. In 2, the nested new-properties and more-properties keys don't serve a purpose anyway.
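A minimal sketch of the configmap template change that option 1 implies; the surrounding lines of configmap-coordinator.yaml are abbreviated and assumed:

config.properties: |
  ...
  {{- with .Values.additionalConfigProperties }}
  {{- . | nindent 4 }}
  {{- end }}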
I am a committer on the Apache Pulsar project, and we have created a connector based on version 368 of Trino. Now we are unable to deploy it using the latest version of the Trino helm chart.
The Trino Helm Version
helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
trino-cluster trino 1 2023-12-12 09:09:13.0732119 -0800 PST deployed trino-0.14.0 432
Steps to Reproduce
helm install -f 368.yaml $RELEASE_NAME trino/trino --namespace $TRINO_K8S_NS --create-namespace
Contents of the 368.yaml file:
image:
  tag: "368"
  pullPolicy: "Always"
Observed Error Behavior in the Worker Pods
Step 1: Confirm that the image tagged 368 is pulled:
kubectl describe <worker pod>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23s default-scheduler Successfully assigned trino/trino-cluster-worker-c4d5bbcc8-xflxh to k8s-node00
Normal Pulled 19s kubelet Successfully pulled image "trinodb/trino:368" in 1.254727643s (2.728144379s including waiting)
Normal Pulling 18s (x2 over 22s) kubelet Pulling image "trinodb/trino:368"
Normal Pulled 16s kubelet Successfully pulled image "trinodb/trino:368" in 1.299575889s (2.112766319s including waiting)
Normal Created 16s (x2 over 19s) kubelet Created container trino-worker
Normal Started 16s (x2 over 19s) kubelet Started container trino-worker
Warning BackOff 12s (x4 over 15s) kubelet Back-off restarting failed container trino-worker in pod trino-cluster-worker-c4d5bbcc8-xflxh_trino(ecb3e37d-67b2-4199-b6b7-94c0da17b5a7)
Step 2: Check the pod logs for the error
kubectl logs <worker pod>
+ set +e
+ grep -s -q node.id /etc/trino/node.properties
+ NODE_ID_EXISTS=1
+ set -e
+ NODE_ID=
+ [[ 1 != 0 ]]
+ NODE_ID=-Dnode.id=trino-cluster-worker-59bff658d4-94mf8
+ exec /usr/lib/trino/bin/launcher run --etc-dir /etc/trino -Dnode.id=trino-cluster-worker-59bff658d4-94mf8
Error occurred during initialization of VM
Could not find agent library /usr/lib/trino/bin/libjvmkill.so in absolute path, with error: /usr/lib/trino/bin/libjvmkill.so: cannot open shared object file: No such file or directory
FWIW, it appears that starting with image version 412, there is a step that explicitly copies this file into the image under the /usr/lib/trino/bin folder.
I wanted to enable OAuth, but Trino crashes on startup without any helpful error. Below is my chart config as well as the stack trace I get.
Enabling debug logs didn't show anything different.
The stack trace mentions 3 errors but doesn't show them!
When removing the server authentication type, Trino starts up!
I tried with 436 (latest greatest) and 432 (latest chart default).
...
server:
  config:
    authenticationType: oauth2
additionalConfigProperties:
  - http-server.authentication.oauth2.issuer=https://foo.bar
  - http-server.authentication.oauth2.auth-url=https://foo.bar/oauth/authorize
  - http-server.authentication.oauth2.token-url=https://foo.bar/oauth/token
  - http-server.authentication.oauth2.jwks-url=https://foo.bar/oauth/discovery/keys
  - http-server.authentication.oauth2.userinfo-url=https://foo.bar/oauth/userinfo
  - http-server.authentication.oauth2.oidc.discovery=false
  - http-server.authentication.oauth2.client-id=42deadbeef
  - http-server.authentication.oauth2.client-secret=1337cafebabe
  - web-ui.authentication.type=oauth2
...
2024-01-22T17:45:54.343Z INFO main Bootstrap transaction.max-finishing-concurrency 1 1
at ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:117)
at ProvisionListenerStackCallback.provision(ProvisionListenerStackCallback.java:66)
at InternalProviderInstanceBindingImpl$CyclicFactory.get(InternalProviderInstanceBindingImpl.java:164)
at ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at SingletonScope$1.get(SingletonScope.java:169)
at InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at FactoryProxy.get(FactoryProxy.java:60)
at ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at SingletonScope$1.get(SingletonScope.java:169)
at InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at SingleParameterInjector.inject(SingleParameterInjector.java:40)
at SingleParameterInjector.getAll(SingleParameterInjector.java:60)
at SingleMethodInjector.inject(SingleMethodInjector.java:84)
at MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:146)
at MembersInjectorImpl.injectAndNotify(MembersInjectorImpl.java:101)
at Initializer$InjectableReference.get(Initializer.java:256)
at Initializer.injectAll(Initializer.java:153)
at InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:180)
at InternalInjectorCreator.build(InternalInjectorCreator.java:113)
at Guice.createInjector(Guice.java:87)
at Bootstrap.initialize(Bootstrap.java:268)
at Server.doStart(Server.java:135)
at Server.lambda$start$0(Server.java:91)
at io.trino.$gen.Trino_432____20240122_174552_1.run(Unknown Source)
at Server.start(Server.java:91)
at TrinoServer.main(TrinoServer.java:38)
3 errors
I've run into a problem when trying to implement a combination of the following:
- managing the password.db file via an ExternalSecret (populated from Vault), which creates the secret that I pass as passwordAuthSecret in values.yaml
- implementing user groups
Without specifying auth: groups, everything is fine, as the helm chart does not create a Secret on its own. In this case, we can use the Secret managed by the ExternalSecret as passwordAuthSecret.
However, if there is a value for auth: groups, Helm attempts to create an additional secret of the same name, creating a conflict. A volume called file-authentication-volume is created from the secret, which expects both password.db and group.db to exist in it. However, as the secret managed by the ExternalSecret operator takes precedence, only password.db is found.
As it doesn't make a lot of sense to manage user groups via Vault the same way we manage passwords, I believe the best approach would be to split password.db and group.db into separate secrets, volumes, and volume mounts. file.group-file and file.password-file would have to be adjusted accordingly in the coordinator's configmap as well.
Let me know if that sounds reasonable; if yes, I will create a PR. Thanks.
Hello,
I would like to enable both OAUTH2 and PASSWORD authentication.
I'm using helm chart version 0.17.0 and have the following custom values:
server:
  config:
    authenticationType: OAUTH2,PASSWORD
auth:
  passwordAuthSecret: "trino-password-authentication"
When I run helm template with these custom values, the password-authenticator.properties section is not created in the trino-coordinator configmap, and the password-volume is not created in the trino-coordinator deployment. I think this is a bug.
Thanks in advance for your help.
We have configured a Glue database with a sample table, all pointing to S3. We use DBeaver to connect, and we see the hive metastore as well as the newly created table. When we attempt to query the table, a job spawns and immediately fails with:
Error Type: EXTERNAL
Error Code: HIVE_UNKNOWN_ERROR (16777221)
io.trino.spi.TrinoException: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 1AHXWGQGSP5ZTCZC; S3 Extended Request ID: 47yhatv5V1taSDBv+rUPyeKTebF8i6phv8OHrtbqdjHNuSO8NRfMjwuTTfxbeTxG1Vx2m+s42tA=; Proxy: null)
Inside the helm chart we have the following set, but running any queries with the current helm chart fails:
additionalCatalogs:
  hive: |
    connector.name=hive
    hive.metastore=glue
    hive.metastore.glue.region=us-gov-west-1
    hive.metastore.glue.default-warehouse-dir=s3://MY-BUCKET/data/
    hive.metastore.glue.aws-access-key=KEY
    hive.metastore.glue.aws-secret-key=SECRET
    hive.s3.aws-access-key=KEY
    hive.s3.aws-secret-key=SECRET
Did we miss adding something, or is this a bug we are hitting? Thanks.
Hi,
I have this catalog:
lakehouse.properties: |
  connector.name=iceberg
  iceberg.catalog.type=glue
  hive.metastore.glue.region=us-east-1
  hive.metastore.glue.endpoint-url=https://glue.us-east-1.amazonaws.com
  iceberg.file-format=PARQUET
  hive.metastore.glue.aws-access-key=${TRINO_AWS_ACCESS_KEY}
  hive.metastore.glue.aws-secret-key=${TRINO_AWS_SECRET_KEY}
and I'm trying to fill in the glue aws-access-key/aws-secret-key values from secrets, but with no success.
I successfully created a sidecar that mounts /mnt/trino/catalog/lakehouse.properties and then rewrites hive.metastore.glue.aws-access-key to the real access key, but I cannot make Trino use this mount, because it looks like Trino only uses the /etc/trino path.
Is there any other way to make Trino use the /mnt/trino path as well, or any other way to supply the values above not hard-coded but from a secret (see the sketch below)?
Thanks
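A hedged sketch of one way to do this without a sidecar: Trino substitutes ${ENV:VAR} style references in properties files at startup (note the required ENV: prefix, unlike the plain ${TRINO_AWS_ACCESS_KEY} above), so the values can come from a Secret-backed environment variable. The env values key and the secret name below are assumptions:

env:
  - name: TRINO_AWS_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: trino-glue-credentials   # hypothetical pre-created secret
        key: access-key
  - name: TRINO_AWS_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: trino-glue-credentials
        key: secret-key

with the catalog properties then written as hive.metastore.glue.aws-access-key=${ENV:TRINO_AWS_ACCESS_KEY} and hive.metastore.glue.aws-secret-key=${ENV:TRINO_AWS_SECRET_KEY}.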
Hello,
I was trying to set some logging properties with the additionalLogProperties field, with something like:
additionalLogProperties:
  - "log.path=var/log/server.log"
However, I got the error java.lang.IllegalArgumentException: No enum constant io.airlift.log.Level./VAR/LOG/SERVER.LOG.
After some trials, I found that these logging properties may need to be set in config.properties instead of log.properties, which is supposed to contain only log levels.
Would someone be able to confirm that this is a valid issue, and whether a similar issue exists for the other additional* variables in the chart? Thanks in advance.
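For reference, entries in log.properties are logger-name=level pairs, so the field presumably expects values like the following (logger name chosen only for illustration):

additionalLogProperties:
  - "io.trino=DEBUG"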
Hi everyone!
I'm currently trying to enable the file system cache in my Trino cluster for the Delta Lake catalog. However, after going through the documentation, I'm struggling to find a straightforward way to enable it.
Enabling the cache in the file system seems to require mounting an emptyDir and configuring the options outlined in the documentation, which appears to be quite complex.
Additionally, emptyDirs are ephemeral and utilize node storage. To circumvent potential storage issues, I'm exploring workarounds such as mounting persistent volumes.
Has anyone encountered similar challenges or successfully activated the Delta Lake cache in Trino using this method?
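For what it's worth, a sketch of the emptyDir variant I am describing; the fs.cache.* property names are my reading of the Trino file system cache docs (worth verifying against the current documentation), and the metastore details are placeholders:

worker:
  additionalVolumes:
    - name: fs-cache
      emptyDir: {}
  additionalVolumeMounts:
    - name: fs-cache
      mountPath: /tmp/fs-cache
additionalCatalogs:
  delta: |
    connector.name=delta_lake
    hive.metastore.uri=thrift://metastore.example.com:9083   # placeholder
    fs.cache.enabled=true
    fs.cache.directories=/tmp/fs-cache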
The Helm chart available here: https://github.com/trinodb/charts does not seem to provide an option to configure authentication/authorization for Trino. Is this intentionally left out? If not, I could take this up and try to add the functionality to the chart.
Kindly advise.
I'm working with a Helm chart for deploying Trino and need to configure the trino.s3.credentials-provider parameter to use com.amazonaws.auth.InstanceProfileCredentialsProvider for S3 access.
Could someone please advise where to specify the trino.s3.credentials-provider parameter in the Helm chart configuration?
Any assistance would be appreciated!
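Assuming the property name in the question is correct for your setup, catalog-level properties generally belong in the catalog definition, e.g. via additionalCatalogs as used elsewhere on this page:

additionalCatalogs:
  hive: |
    connector.name=hive
    hive.metastore=glue
    # property name taken from the question above
    trino.s3.credentials-provider=com.amazonaws.auth.InstanceProfileCredentialsProvider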
Due to how OpenShift handles UIDs for containers by default, it is best if we do not actually set anything for securityContext.runAsUser or securityContext.runAsGroup, since the assigned number range can be dynamic per namespace, and you will get a lot of errors unless you jump through all of the hoops to create a custom Security Context Constraint.
Unfortunately, due to a bug in Helm (helm/helm#12488), if your deployment process uses the -f values.yaml style of invocation (as with ArgoCD, for example), it is not possible to wipe out these values with null -- they will always be set to the default after the values merge.
Example values file in my own deployment (assuming the dependent trino chart has a local alias of trino in my chart):
trino:
  securityContext:
    runAsUser: null # purposefully empty
    runAsGroup: null # purposefully empty
After applying my values file, the resulting Deployments will still carry the defaults from the Trino chart's values.yaml, like this:
...
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
...
Then, when this is deployed to OpenShift, the Deployments are blocked from being created and generate loads of spam events saying that runAsUser etc. is not within the allowed range.
I am not sure what the "right" approach is for this, and the Helm bug is a bit of a shame. For now, my workaround is to manually download a specific version of the chart into my own repository under charts/trino/ and simply comment out both runAsUser and runAsGroup in the default values file.
Maybe a good compromise would be to map in the entire securityContext object from the user's own values file (i.e. whatever object is in the user's values file just gets passed through, as sketched below), but not set any properties underneath it by default? The downside is that this might be a breaking change for some users when they pick up the chart version that implements it (they might then need to explicitly set these values to 1000 in their own values files). Or, to avoid a breaking change, add some kind of true/false flag, enabled by default, that preserves the old behavior (though this sounds a bit messy and less fun to have lingering around...).
We are running Trino version 432 on a Kubernetes EKS cluster managed by AWS, using the Trino Helm chart version 0.18. We are experiencing random "Access Denied" errors when querying data on S3 buckets.
The error we receive is similar to the following:
Error running query: TrinoExternalError(type=EXTERNAL, name=HIVE_CANNOT_OPEN_SPLIT, message="Error opening Hive split s3://S3BUCKETNAME/S3FILE.parquet (offset=33554432, length=33554432): Read 49152 tail bytes of s3://S3BUCKETNAME/S3FILE.parquet failed: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 11111111111111; S3 Extended Request ID: 111111/222222222+3333/4444444444+555; Proxy: null), S3 Extended Request ID: 111111/222222222+3333/4444444444+555 (Bucket: S3BUCKETNAME, Key: S3FILE.parquet)", query_id=20240522_111111_222_3333)
This error occurs randomly and affects all queries and users simultaneously. The issue is not specific to any particular S3 bucket or table, as it affects queries for all files stored in different S3 buckets.
Workarounds and Observations
Configuration
We have configured an IAM role with the necessary S3 permissions and assigned it to the Trino pods through a ServiceAccount annotation in the Helm chart:
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWSACCOUNT:role/TRINOROLE
I guess it's safe to rule out issues related to S3 API quota limits and file modifications during querying, as the files are intact and there are no connectivity problems. I suspect the issue may be related to certain Trino pods failing to assume the required IAM role.
Any suggestions on how to ensure that the pods consistently assume the correct IAM role? Any insights or recommendations to resolve this issue would be greatly appreciated.
Hi, I think the additionalJVMConfig parameter in values.yaml should be additionalJVMConfig: [] instead of additionalJVMConfig: {}, which produces the following error/bug when doing helm install and upgrade.
Values where I am using the additionalJVMConfig parameter:
coordinator:
  additionalJVMConfig:
    - -Djava.security.auth.login.config=/etc/trino/config-files/conf.jaas
    - -Djava.security.krb5.conf=/etc/trino/config-files/krb5.conf
Error / bug I am getting:
coalesce.go:286: warning: cannot overwrite table with non table for trino.coordinator.additionalJVMConfig (map[])
Result: despite the above message, the configmap is created properly.
Hello!
Please add the ability to choose a StatefulSet instead of a Deployment, if you can :)
UPD: for the workers, I need volumeClaimTemplates (see the sketch below).
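A sketch of the values shape this might take; every key here is hypothetical until the chart implements such a feature:

worker:
  kind: StatefulSet              # hypothetical toggle, defaulting to Deployment
  volumeClaimTemplates:          # hypothetical pass-through to the StatefulSet spec
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi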
Seems like a bug when trying to add volumes to the coordinator via the helm chart:
additionalVolumes:
  - name: {{ kubernetes_trino__catalog_pvc_name }}
    persistentVolumeClaim:
      claimName: {{ kubernetes_trino__catalog_pvc_name }}
additionalVolumeMounts:
  - name: {{ kubernetes_trino__catalog_pvc_name }}
    mountPath: {{ kubernetes_trino__catalog_mount_path }}
    readOnly: false
Error from server: failed to create typed patch object (kube-trino/trino-coordinator; apps/v1, Kind=Deployment): .spec.template.spec.containers[name="trino-coordinator"].volumeMounts: duplicate entries for key [mountPath="/etc/trino/catalog"]
This is how it is defined in the default values (the volume is defined on both keys):
additionalVolumes: []
# coordinator.additionalVolumes -- One or more additional volumes to add to the coordinator.
# @raw
# Example:
# ```yaml
# - name: extras
#   emptyDir: {}
# ```
additionalVolumeMounts: []
# coordinator.additionalVolumeMounts -- One or more additional volume mounts to add to the coordinator.
# @raw
# Example:
# - name: extras
#   mountPath: /usr/share/extras
#   readOnly: true
I was using version 0.18; recently I upgraded to 0.25, but I noticed that the service name of my Trino deployment changed.
Using this execution line:
helm upgrade --install trino-cluster trino/trino --namespace trino --create-namespace --version 0.18.0 -f values-prod1.yml
Creates:
kubectl get service -n trino
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
trino-cluster ClusterIP 172.20.55.20 <none> 8080/TCP 104d
Now doing:
helm upgrade --install trino-cluster trino/trino --namespace trino --create-namespace --version 0.25.0 -f values-prod1.yml
Creates:
kubectl get service -n trino-stage
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
trino-cluster-trino ClusterIP 172.20.151.41 <none> 8080/TCP 8d
Here you can see that the service changed from trino-cluster to trino-cluster-trino.
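If keeping the old trino-cluster name matters for existing clients, the conventional escape hatch, assuming the chart's trino.fullname helper honors it as most charts do, is:

fullnameOverride: trino-cluster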