
The historical fails with a null pointer exception

I've been trying for a few hours to deploy Druid on k8s using Backblaze S3 as deep storage, but I was facing a few errors. I thought at first it was related to not using ZK (thanks Himanshu Gupta for making yet another ZK on the cluster unnecessary), but I switched to ZK temporarily and the behavior continued.

After reading the docs, I set druid.storage.disableAcl=true (assuming, if I'm understanding the phrasing correctly, that this disables ACLs); it's also configured this way in this 2018 post: Deep storage on Oracle Cloud (S3 compat. API) - #2 by Gian_Merlino

```
druid.s3.endpoint.url=s3.
druid.s3.endpoint.signingRegion=us-west-004
druid.indexer.logs.s3Prefix=druid/indexing-logs
```
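For context, a fuller runtime.properties for an S3-compatible store like B2 would look roughly like the sketch below. The bucket name and credentials are placeholders, and the endpoint value assumes Backblaze's standard s3.<region>.backblazeb2.com format rather than being copied from my actual config:

```
# Load the S3 extension on every node (historicals included)
druid.extensions.loadList=["druid-s3-extensions"]

# Deep storage (bucket and baseKey are placeholders)
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments
druid.storage.disableAcl=true

# S3-compatible endpoint; URL assumed from the us-west-004 region
druid.s3.accessKey=<b2-key-id>
druid.s3.secretKey=<b2-application-key>
druid.s3.endpoint.url=s3.us-west-004.backblazeb2.com
druid.s3.endpoint.signingRegion=us-west-004
druid.s3.enablePathStyleAccess=true

# Task logs on the same bucket
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=my-druid-bucket
druid.indexer.logs.s3Prefix=druid/indexing-logs
```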

And when I submit this simple job, the file is correctly stored on the bucket, but it never becomes available/queryable because the historical cannot download it (the coordinator keeps retrying forever, though, and the "data" is not lost because Druid is intelligent enough to keep it on the indexer while the historical is not ready):

```
"type": "index_parallel",
```
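For reference, a minimal index_parallel spec of the kind I mean looks roughly like the one below; the datasource name, inline sample row, and columns are placeholders, not my actual job:

```
{
  "type": "index_parallel",
  "spec": {
    "dataSchema": {
      "dataSource": "b2_test",
      "timestampSpec": { "column": "ts", "format": "iso" },
      "dimensionsSpec": { "dimensions": ["page", "user"] },
      "granularitySpec": {
        "segmentGranularity": "day",
        "queryGranularity": "none",
        "rollup": false
      }
    },
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": {
        "type": "inline",
        "data": "{\"ts\":\"2024-01-01T00:00:00Z\",\"page\":\"Main_Page\",\"user\":\"alice\"}"
      },
      "inputFormat": { "type": "json" }
    },
    "tuningConfig": { "type": "index_parallel" }
  }
}
```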
