How to publish to Object Storage with a GitHub Action
First off, I'm loving the Object Storage static hosting, it's blown my mind!
I am using an S3-compatible action on GitHub to automate publishing releases of a Hugo site to my storage, but I get an error when trying to authenticate: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
I am using an access key/secret that I've created for my Object Storage, with the deploy URL in my Hugo config being URL = s3://BUCKET.eu-central-1.linodeobjects.com. I am not sure whether the Hugo deploy URL is wrong or if the action I am using on GitHub is Amazon-specific.
I have tried to find a Linode guide for setting this up, but would appreciate any pointers or help.
Thanks.
5 Replies
You could maybe use linode-cli.
There is already an action for it: https://github.com/marketplace/actions/linode-cli.
So you could do something like this:
name: Test Linode CLI
on: push
jobs:
  job-name:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Setup Linode CLI
        uses: brendon1555/setup-linode-cli@master
        with:
          LINODE_CLI_TOKEN: ${{ secrets.LINODE_CLI_TOKEN }}
      - run: linode-cli obj put data.json my-bucket
Apart from that, maybe you are right and this always hits AWS. You could try to run a plain Python interpreter and install s3cmd yourself. It's the same protocol; in fact, linode-cli just wraps it.
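If you go down that route, something roughly like this might work as a publish step — installing s3cmd from PyPI and writing a minimal config that points at the Linode endpoint (the bucket name and secret names below are just placeholders):

- name: Publish with a hand-installed s3cmd
  run: |
    pip install s3cmd
    # Minimal s3cmd config pointing at the Linode Object Storage endpoint
    cat > ~/.s3cfg <<EOF
    access_key = ${{ secrets.S3_ACCESS_KEY }}
    secret_key = ${{ secrets.S3_SECRET_KEY }}
    host_base = eu-central-1.linodeobjects.com
    host_bucket = %(bucket)s.eu-central-1.linodeobjects.com
    EOF
    # Upload the generated site (adjust the local directory and bucket to yours)
    s3cmd sync public/ s3://my-bucket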
Oh wow, OK, thanks for that, I didn't see there was a Linode CLI action already. I will try that.
There's also the s3fs FUSE file system:
FUSE-based file system backed by Amazon S3. s3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket as a local filesystem. It stores files natively and transparently in S3 (i.e., you can use other programs to access the same files). Maximum file size=64GB (limited by s3fs, not Amazon). s3fs is stable and is being used in a number of production environments, e.g., rsync(1) backup to s3.
That way you mount your bucket like any other filesystem, and programs (like git(1) or apache2(8)) deal with the contents of your bucket just like regular files.
See:
https://devops.ionos.com/tutorials/use-s3fs-fuse-to-access-s3-object-storage-on-centos-7/
https://stackoverflow.com/questions/12316837/how-to-mount-a-amazon-s3-bucket-by-using-fuse-s3fs
It's installable on all the big distros (on Debian 10 the version is 1.84-1). If you don't have it, one of the links shows you how to build and install it from source. It's being maintained; the last commit on github.com was 8 days ago.
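If you want to point it at Linode Object Storage rather than AWS, a minimal mount sketch looks something like this (assuming the eu-central-1 cluster from the original question; the bucket name and mount point are placeholders):

# Credentials file for s3fs (must be mode 600)
echo 'ACCESS_KEY:SECRET_KEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket against the Linode endpoint instead of the AWS default
mkdir -p ~/my-bucket
s3fs my-bucket ~/my-bucket \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o url=https://eu-central-1.linodeobjects.com \
    -o use_path_request_style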
I've used other FUSE drivers (e.g., FUSE-ext4 on a Mac) in the past but not this one. YMMV.
-- sw
I have published a simple wrapper for s3cmd that is pre-configured to work with Linode. Make sure you run this somewhere where Python is available; ubuntu-latest is just fine.
https://github.com/marketplace/actions/use-s3cmd
- name: Set up S3cmd cli tool
  uses: s3-actions/s3cmd@v0
  with:
    cluster: 'eu-central-1'
    access_key: ${{ secrets.S3_ACCESS_KEY }}
    secret_key: ${{ secrets.S3_SECRET_KEY }}
- name: Interact with object storage
  run: |
    echo 'foo' >> bar
    s3cmd put bar s3://foobarbaz
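For the Hugo site in the original question, the second step could be swapped for a sync of the generated output — a rough sketch, assuming the site has already been built into public/ by an earlier step and using a placeholder bucket name:

- name: Publish the Hugo site to Object Storage
  run: |
    # public/ is Hugo's default output directory; --delete-removed keeps the
    # bucket in sync by removing files that no longer exist locally
    s3cmd sync --delete-removed public/ s3://my-bucket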
Thanks for the replies, much appreciated!