Unknown error 500 on pushing an image to a private Docker registry
I have created a private Docker registry based on the instructions in the following link.
When I try to push an image, some layers upload, but after some time it fails with a 500 error.
I am not sure whether this is a permission issue, because part of the upload does appear in object storage. Please find the error logs I am getting in the pod:
time="2023-05-04T06:30:58.435755707Z" level=error msg="unknown error completing upload: s3aws: AccessDenied:
status code: 403, request id: tx000007b34386555f60475-0064535122-909be19-default, host id: " auth.user.name="registry_user" go.version=go1.16.15 http.request.host=registry.seqrops.in http.request.id=c77656dc-abe0-4249-abb1-2fdf6b08799b http.request.method=PUT http.request.remoteaddr=172.232.67.98 http.request.uri="/v2/*/blobs/uploads/ffc73f4f-bedf-448d-83d8-e4d36798a2f5?_state=69rOn0gAbD2bUMTxlax102RZa-HYwfJUPdHvmNSC4rt7Ik5hbWUiOiJzZXFyb3BzL2Fzc2V0bW9kZWwtc3ZjIiwiVVVJRCI6ImZmYzczZjRmLWJlZGYtNDQ4ZC04M2Q4LWU0ZDM2Nzk4YTJmNSIsIk9mZnNldCI6NTE2OTcwNjQsIlN0YXJ0ZWRBdCI6IjIwMjMtMDUtMDRUMDY6MzA6MzFaIn0%3D&digest=sha256%3A407c40dc4dcd8e1dedce22309be211c3c7001477259a3b8a579cad357ab9efcd" http.request.useragent="docker/20.10.24+azure-1 go/go1.19.6 git-commit/5d6db842238e3c4f5f9fb9ad70ea46b35227d084 kernel/5.15.0-1036-azure os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.24+azure-1 (linux))" vars.name="" vars.uuid=ffc73f4f-bedf-448d-83d8-e4d36798a2f5
Please note that I am pushing the image from an Azure pipeline.
Regards
Vineeth
It looks like you're running into an issue where your Docker image upload to your private registry is failing with an "AccessDenied" error. This could be caused by a few different things, such as incorrect credentials or a misconfigured bucket policy.
To get things working, I recommend first double-checking that your credentials are correct and have the right permissions. Then make sure your bucket policy is properly configured. If you have already attempted these steps, you may want to review the following links I found while researching the error.
A GitHub user discovered that a similar error was caused by an incorrect IAM policy permission:
A StackOverflow user noted that adding s3:ListBucketMultipartUploads to the bucket-level permissions block resolved a similar error:
Ultimately, if the issue persists, you may want to take a look at additional logs to see if there are any other error messages.
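If it helps, a quick way to sanity-check the credentials outside the registry is to exercise them directly against the bucket with s3cmd. This is just a rough sketch; the keys, endpoint, and my-bucket are placeholders for your own values:

# Confirm the keys can see the bucket (and its policy)
s3cmd --access_key=<ACCESS_KEY> --secret_key=<SECRET_KEY> \
      --host=us-east-1.linodeobjects.com \
      --host-bucket="%(bucket)s.us-east-1.linodeobjects.com" \
      info s3://my-bucket

# Confirm the keys can write an object
echo test > /tmp/test.txt
s3cmd put /tmp/test.txt s3://my-bucket/test.txt

If either command fails with AccessDenied, the problem is with the credentials or bucket policy rather than the registry itself.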
--Eric
I have a similar issue as well.
I currently have two applications, and previously both of their Docker images were successfully pushed to my private Docker registry. However, one of the applications is now encountering an issue where it fails to push its image and enters a retry loop. I haven't made any changes or updates related to my Docker registry.
Ran into a similar problem using Linode Object Storage. Seems like something is wrong on Linode's side.
This system is somewhat complicated, so there could be a few different points of failure. Are you able to upload other objects to that same bucket? Have you checked your SSL certificates to ensure they're still valid?
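For the certificate check, something like this prints the validity window (a rough sketch; registry.example.com stands in for your registry's hostname):

echo | openssl s_client -connect registry.example.com:443 -servername registry.example.com 2>/dev/null | openssl x509 -noout -dates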
Similar to the Stack Exchange post Eric shared, this one also mentions adding a bucket policy as a potential solution.
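If you do end up needing to (re)apply a bucket policy, one way to do it with s3cmd (assuming the policy JSON is saved as policy.json and my-bucket is your bucket) is:

s3cmd setpolicy policy.json s3://my-bucket
s3cmd info s3://my-bucket   # shows the currently applied policy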
I'm having a similar issue, but strangely only in certain OBJ regions. I have a private Docker registry using OBJ storage, and I want to be able to set it up in any region. So far, I've tried us-east, us-ord, us-iad, and eu-central. It works perfectly in us-east and eu-central, but in us-ord and us-iad I get the same "s3aws: AccessDenied" error, e.g.:
time="2023-08-17T20:15:33.510346081Z" level=error msg="unknown error completing upload: s3aws: AccessDenied:
status code: 403,….
I only get this error for some of the images I upload to the registry, though; others go through with no errors. The failing ones seem to be larger images with some larger layers, but I haven't fully verified that yet. That could suggest an issue with policy permissions related to the *MultipartUpload actions, but I've verified that the policies are the same across instances using different OBJ regions, and that they all include the necessary permissions, e.g.:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "auto-uid”,
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam:::user/my-user-id”
},
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteBucketWebsite",
"s3:DeleteObject",
"s3:DeleteObjectTagging",
"s3:DeleteObjectVersion",
"s3:DeleteObjectVersionTagging",
"s3:DeleteReplicationConfiguration",
"s3:PutBucketCORS",
"s3:PutBucketLogging",
"s3:PutBucketNotification",
"s3:PutBucketTagging",
"s3:PutBucketVersioning",
"s3:PutBucketWebsite",
"s3:PutLifecycleConfiguration",
"s3:PutObject",
"s3:PutObjectTagging",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl",
"s3:PutObjectVersionTagging",
"s3:PutReplicationConfiguration",
"s3:RestoreObject",
"s3:GetBucketAcl",
"s3:GetBucketCORS",
"s3:GetBucketLocation",
"s3:GetBucketLogging",
"s3:GetBucketNotification",
"s3:GetBucketPolicy",
"s3:GetBucketTagging",
"s3:GetBucketVersioning",
"s3:GetBucketWebsite",
"s3:GetLifecycleConfiguration",
"s3:GetObjectAcl",
"s3:GetObject",
"s3:GetObjectTagging",
"s3:GetObjectTorrent",
"s3:GetReplicationConfiguration",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersion",
"s3:GetObjectVersionTagging",
"s3:GetObjectVersionTorrent",
"s3:ListBucketMultipartUploads",
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::my-bucket”,
"arn:aws:s3:::my-bucket/*”
]
}
]
}
Is it possible that some OBJ regions do not properly support multipart uploads? Or have some different configuration?
Any other suggestions on what to try?
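One way to isolate whether multipart uploads themselves are the problem in a given region is to trigger one outside the registry entirely. A rough sketch with the aws CLI; the bucket name and regional endpoints are placeholders for your own:

# aws s3 cp switches to multipart above ~8 MB by default,
# so a 100 MB file will force a multipart upload
dd if=/dev/zero of=/tmp/big.bin bs=1M count=100

# Try the same upload against a working and a failing region
aws s3 cp /tmp/big.bin s3://my-bucket/big.bin --endpoint-url https://us-east-1.linodeobjects.com
aws s3 cp /tmp/big.bin s3://my-bucket/big.bin --endpoint-url https://us-ord-1.linodeobjects.com

If the second command fails with AccessDenied while the first succeeds, that would point at the region rather than the policy.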
Follow up: I was able to understand the issue a little better and, more importantly, find a workaround. This post directed me to the fact that the issue was a bug in Docker registry when uploading layers using multipart uploads. I was able to get around the bug by setting the storage.s3.multipartcopythresholdsize parameter in the Docker registry to a very large value, e.g. 5368709120 (5 GB).
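For anyone running the registry directly rather than through a Helm chart: in the registry's own config.yml, this setting lives under the s3 storage driver. A minimal sketch, where the bucket, region, and endpoint values are placeholders:

storage:
  s3:
    bucket: my-bucket
    region: us-east-1
    regionendpoint: https://us-east-1.linodeobjects.com
    multipartcopythresholdsize: 5368709120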
Using multipartcopythresholdsize seems promising. Can I ask, if you are using twuni/docker-registry as recommended in the Linode docs, exactly where you configured this? I am struggling to find any documentation for that project with this particular setting.
Appreciate any help on this
Okay, I think I figured it out; posting here for anyone else using twuni/docker-registry. You will need to add the following to your docker-configs.yml file (as a top-level property):
configData:
  storage:
    s3:
      multipartcopythresholdsize: 5368709120
You can confirm by shelling into the container and running
cat /etc/docker/registry/config.yml
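If the registry is running in Kubernetes (as it is with the twuni/docker-registry chart), the shell-in step looks something like this, with the pod name as a placeholder:

kubectl exec -it <registry-pod-name> -- cat /etc/docker/registry/config.yml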
Thanks for coming back to answer your own question, @brendonboshell. It always makes my day to see someone want to help the next person who has the same problem. I appreciate your kindness.
I will look into our documentation, see where we might add something to help with this, and pass it along to our Docs team.