How Linode Object Storage is made compatible with S3
Hi,
I am assessing the potential of migrating to Linode Object Storage. I've read that Linode Object Storage is compatible with S3, but I'm not sure how this compatibility is achieved. Can anyone shed some light on this?
For example, here is the Python function I use to upload assets to S3 in my web app.
import os
import re

import boto3
import requests

def upload_file_to_aws_s3(url='', objectid='', file_name=''):
    file_url = ''
    # Get a connection to the AWS S3 bucket
    s3 = boto3.resource(
        's3',
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        # region_name=AWS_REGION
    )
    # Protocol-relative URLs (e.g. "//cdn.example.com/a.jpg") get "https:" prepended
    if re.match('(?:http|https)://', url) is None:
        url = "https:" + url
    response = requests.get(url)
    if response.status_code == 200:
        raw_data = response.content
        key = objectid + "/" + file_name
        try:
            # Write the raw bytes to file_name on the server
            with open(file_name, 'wb') as new_file:
                new_file.write(raw_data)
            # Re-open the file in read mode and upload it to the AWS S3 bucket
            with open(file_name, 'rb') as data:
                s3.Bucket(AWS_BUCKET_NAME).put_object(
                    Key=key, Body=data,
                    ContentType='image/jpeg',  # should be the file's actual MIME type
                    ACL='public-read')
            # Format the return URL of the uploaded file in the S3 bucket
            file_url = 'https://%s.%s/%s' % (AWS_BUCKET_NAME,
                                             AWS_S3_ENDPOINT, key)
        except Exception as e:
            print("Error in file upload %s." % str(e))
        finally:
            # Remove the temporary file from the server
            if os.path.exists(file_name):
                os.remove(file_name)
        print("Product image uploaded to S3: %s" % file_url)
    else:
        print("Cannot parse url")
    return file_url
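For reference, a hypothetical call (the URL and IDs below are made-up examples) would look like:

# Hypothetical usage: a protocol-relative URL gets "https:" prepended,
# then the asset is fetched and uploaded under "<objectid>/<file_name>"
upload_file_to_aws_s3(
    url='//cdn.example.com/images/product-42.jpg',
    objectid='product-42',
    file_name='product-42.jpg',
)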
Linode Object Storage (like several other S3-compatible services) implements the same API as AWS S3.
So the URLs/endpoints, parameters, encryption methods and response data are all the same as those used on AWS.
The only difference, as far as a client application is concerned, is the endpoint (hostname) it talks to.
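For example, here is a minimal sketch of the same boto3 setup pointed at Linode instead of AWS. The keys below are placeholders, and us-east-1.linodeobjects.com is just one of Linode's cluster endpoints:

import boto3

# A minimal sketch: the only change for Linode Object Storage is the
# endpoint_url. The keys below are placeholders for the access keys
# generated in the Linode Cloud Manager.
s3 = boto3.resource(
    's3',
    aws_access_key_id='LINODE_ACCESS_KEY',
    aws_secret_access_key='LINODE_SECRET_KEY',
    region_name='us-east-1',  # used for request signing
    endpoint_url='https://us-east-1.linodeobjects.com',
)

# From here on it's the standard boto3 S3 API
s3.Bucket('my-bucket').put_object(Key='hello.txt', Body=b'hello')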
I believe the storage software (in Linode's case, Ceph) commits to compatibility with a specific version of AWS' API. So if AWS comes out with a new S3 feature, it won't be available in Linode (or any other S3-compatible service) until that software implements the new API call.
I believe all official AWS S3 clients will work with Linode Object Storage. I personally use the AWS PHP SDK, Rclone and Cyberduck, none of which I've had any compatibility issues with. I also have a project coming up where I'll use the AWS Go SDK with Linode Object Storage.
I am also using the AWS API and it works great for the most part. We use it with time-expiring signed URLs. CloudFront integration also works fine, as we serve some files that way.
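For anyone curious, generating a time-expiring signed URL is the same boto3 call as on AWS; this sketch assumes placeholder keys and a placeholder bucket/object:

import boto3

s3_client = boto3.client(
    's3',
    aws_access_key_id='LINODE_ACCESS_KEY',
    aws_secret_access_key='LINODE_SECRET_KEY',
    region_name='us-east-1',
    endpoint_url='https://us-east-1.linodeobjects.com',
)

# The URL works for one hour; after that, requests with it are rejected
signed_url = s3_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'private/report.pdf'},
    ExpiresIn=3600,
)
print(signed_url)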
The only thing I'm having an issue with is using a custom domain (storage.mydomain.com, for instance). I can't quite find a way to use that URL for signed requests. I think that since on Amazon you enter the domain on a bucket-by-bucket basis, S3 uses it outside the API to verify the signature. Linode reports a signature mismatch on every permutation I've tried.
Just a guess here (it may not work, or you may have already tried it): do you need to use your custom domain as the ‘endpoint URL’ in place of XX-XXX-X.linodeobjects.com?
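In boto3 terms that suggestion would look something like this (storage.mydomain.com being the hypothetical CNAME; whether signing then succeeds is exactly the open question):

import boto3

# Sketch of the suggestion above: point endpoint_url at the custom domain
# instead of the Linode cluster hostname. All values are placeholders.
s3_client = boto3.client(
    's3',
    aws_access_key_id='LINODE_ACCESS_KEY',
    aws_secret_access_key='LINODE_SECRET_KEY',
    region_name='us-east-1',
    endpoint_url='https://storage.mydomain.com',
)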
@andysh -- I was planning on posting about the custom domain, as I've done quite a few tests.
They support it for "web serving", but it seems it doesn't work for signed requests. My hunch is that because Linode has no record of what the CNAME is, the validation of the signed request doesn't match: if I understand correctly, the hostname is part of what gets signed, so the server needs to know which name the client signed against. On Amazon S3 you can enter, per bucket, the CNAME you use; when checking a signed request, Amazon may use this to verify it.
That's only a hunch, but I'll post all the permutations I've tried…
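To illustrate the hunch (assuming SigV4, which boto3 uses by default, and placeholder keys): the Host header is part of the signed canonical request, so presigning the same object against two hostnames yields two different signatures, and a server that only knows one hostname sees the other as a mismatch:

import boto3

def presign(endpoint):
    # Placeholder keys; generate_presigned_url is computed locally,
    # so this runs without contacting the server.
    client = boto3.client(
        's3',
        aws_access_key_id='KEY',
        aws_secret_access_key='SECRET',
        region_name='us-east-1',
        endpoint_url=endpoint,
    )
    return client.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'my-bucket', 'Key': 'file.txt'},
        ExpiresIn=300,
    )

# Same bucket/key, different hostnames -> different X-Amz-Signature values
print(presign('https://us-east-1.linodeobjects.com'))
print(presign('https://storage.mydomain.com'))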
That's weird. I replaced the Linode endpoint, access key and secret key with my AWS S3 credentials, and I got the error below.
Error in file upload An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records..
Any idea why this happens?