✓ Solved

S3: list_objects raises botocore.NoSuchKey, but download_file and delete_object work

Greetings, ah, noders? Nodules? Linos?

I have a problem with the object store: I can read from and write to it, but I can't list it - and only on Linode.

This one's a puzzler! Complete code is below.

The problem

I am developing an application that uses Linode and Backblaze as two parallel S3 stores for backup and archiving.

I can connect to the object store and call upload_file, download_file, or delete_object without any trouble.

But on Linode, when I call list_objects or list_objects_v2 it always throws botocore.errorfactory.NoSuchKey. When I call it on the parallel Backblaze object store, it works as expected. (And I have a recollection that list_objects* previously worked for both services, but I can't be sure.)

Code

I boiled it down to the minimal reproducible example below; the only dependency is boto3.

For each of two S3 services, the script uploads to a test key, then downloads from that test key to a local file, lists the bucket, and finally deletes the object (and removes the local file).

Seven of these eight S3 operations succeed, but on Linode, listing the bucket using client.list_objects/client.list_objects_v2 raises a NoSuchKey exception (detailed below).

import boto3
import os

CFGS = {
    'backblaze': {
        'aws_access_key_id': 'XXX',
        'aws_secret_access_key': 'XXX',
        'endpoint_url': 'https://s3.us-west-004.backblazeb2.com'
    },

    'linode': {
        'aws_access_key_id': 'XXX',
        'aws_secret_access_key': 'XXX',
        'endpoint_url': 'https://engora-old.us-east-1.linodeobjects.com',
    },
}
BUCKET = 'engora-old'

for name, cfg in CFGS.items():
    print(name)
    client = boto3.client('s3', **cfg)

    key = 'engora-test-key'

    client.upload_file(Bucket=BUCKET, Key=key, Filename=__file__)
    print(f'  Upload: {__file__} --> {key}')

    client.download_file(Bucket=BUCKET, Key=key, Filename=key)
    print(f'  Download: {key} -> ./{key}\n')

    try:
        response = client.list_objects(Bucket=BUCKET)
        # response = client.list_objects_v2(Bucket=BUCKET)  # same

        for o in response['Contents']:
            print('  Key:', o['Key'])
        print()

    finally:
        client.delete_object(Bucket=BUCKET, Key=key)
        print(f'Deleted {key} from {name}')

        os.remove(key)
        print(f'Removed file ./{key}\n')

Results

backblaze
  Upload: /code/engora-search/scripts/list_s3.py --> engora-test-key
  Download: engora-test-key -> ./engora-test-key

  Key: crawl.tar.bz2
  Key: engora-test-key
  Key: httpcache-20220114-194900.sql.gz
  Key: list_s3.py

Deleted engora-test-key from backblaze
Removed file ./engora-test-key

linode
  Upload: /code/engora-search/scripts/list_s3.py --> engora-test-key
  Download: engora-test-key -> ./engora-test-key

Deleted engora-test-key from linode
Removed file ./engora-test-key

Traceback (most recent call last):
  File "list_s3.py", line 32, in <module>
    response = client.list_objects(Bucket=BUCKET)
  File "/Users/tom/.virtualenvs/engora/lib/python3.8/site-packages/botocore/client.py", line 391, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Users/tom/.virtualenvs/engora/lib/python3.8/site-packages/botocore/client.py", line 719, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchKey: An error occurred (NoSuchKey) when calling the ListObjects operation: Unknown

And here are system stats if needed:

$ python --version
Python 3.8.10

$ pip freeze | grep boto
boto3==1.20.46
botocore==1.23.46

$ uname -a 
Darwin bantam.local 19.6.0 Darwin Kernel Version 19.6.0: Thu Sep 16 20:58:47 PDT 2021; root:xnu-6153.141.40.1~1/RELEASE_X86_64 x86_64

5 Replies

✓ Best Answer

'Noders', I like that, lol

Tom, I was able to reproduce and resolve the error with one of my own buckets. When I removed my equivalent of the bucket name from the endpoint_url (the one you have on line 14), the tests ran cleanly.
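
In your case I believe that means something like this (keys redacted; the us-east-1 endpoint is just your existing URL with the bucket name stripped off):

import boto3

client = boto3.client(
    's3',
    aws_access_key_id='XXX',
    aws_secret_access_key='XXX',
    # cluster endpoint only - the bucket name is no longer part of the URL
    endpoint_url='https://us-east-1.linodeobjects.com',
)

# the bucket is passed with each request instead
print(client.list_objects_v2(Bucket='engora-old').get('Contents', []))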

(I personally preferred Linos, but I know it's a bit rude. :-D)

Wow, thanks! Sorry for the delay, it was the middle of the night here.

And I tried it and it… worked!

But… I copied that endpoint_url right from the Linode page. In fact, let me check: engora-old.us-east-1.linodeobjects.com - copied and pasted straight from Linode's page at https://cloud.linode.com/object-storage/buckets

Also, I am fairly sure this did work before with that bucket name!

I'm not going to look a gift bucket in the mouth, however. Back to work!

But… I copied that endpoint_url right from the Linode page.

That is your bucket URL - the URL you would use to access objects in your bucket over HTTPS.

The endpoint URL is the base address the client authenticates against when talking to the API. You already pass the bucket name with each upload/get request, so it doesn't belong in the endpoint URL as well.
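
To put the two side by side, using your bucket and test key as the example (keys redacted again):

import boto3

# Bucket URL - addresses an object in the bucket directly over HTTPS:
#   https://engora-old.us-east-1.linodeobjects.com/engora-test-key
#
# Endpoint URL - the base address you give boto3; the bucket and key
# travel with each request instead of living in the URL:
client = boto3.client(
    's3',
    aws_access_key_id='XXX',
    aws_secret_access_key='XXX',
    endpoint_url='https://us-east-1.linodeobjects.com',
)
client.download_file(Bucket='engora-old', Key='engora-test-key', Filename='engora-test-key')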

Thanks! Very instructive.

Have an excellent weekend, and drop by for a beverage if you are ever in Amsterdam! (Engora's in Boston, but I live here…)

Have an excellent weekend

Thanks, you too (well, the few hours left!)

and drop by for a beverage if you are ever in Amsterdam!

I’ll be sure to remember that, thank you!
