Not a question. A request. S3 to Linode.

Like many of us (I presume), I'm moving from S3 to Linode, for my own reasons. I live in the Philippines. Internet is a bit of a hassle here; it's slow. What if we created an IAM user on AWS and gave you access to our S3 buckets? For many people in the world it wouldn't be much of a thing, but if you live in a place with slow internet, this kind of customer service would be golden.

I know, I'm probably asking for too much. If I were you, it would be a big thing to ask. But we could give you access through that IAM user and give you the opportunity to transfer the files. Maybe put a limit on what's acceptable, like 20 GB or whatever.

It's just an idea, maybe something to consider. Either way, it won't change my mind: I'm your new customer, here to stay.

4 Replies

To help ensure the security of our customers' services, our support scope won't allow us to perform this transfer ourselves, even if you were to provide us credentials. That being said, we're always interested in your success on our platform, so it'll be my pleasure to explain how you can go about transferring this data given your unreliable Internet connection.

I believe that the best way to perform this transfer would be to create an intermediary Linode in one of our data centers. This will allow you to avoid using your home Internet connection as much as possible. If you need to transfer any data from your home connection to Object Storage, this Linode will also provide a place to reliably upload that data before moving it on to Object Storage.

I will be referencing our Object Storage documentation page below throughout this reply:

Object Storage Access Keys

Before you perform any Object Storage actions, you will need to generate a set of keys:
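If you already have the Linode CLI set up, you can also generate a key pair from the command line. A minimal sketch (the label here is just an example):

    linode-cli object-storage keys-create --label "s3-migration"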

Setting up your Object Storage client

Depending on your preferences, you can use either a command-line utility or a graphical utility to transfer data from your home computer to Object Storage.

There are many command-line utilities available for the S3 protocol, which Object Storage uses. The main ones we recommend are our own Linode CLI and s3cmd. You may use these instructions to install and set up these applications:
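As a rough sketch, on a Debian/Ubuntu system the installation and initial setup might look something like this (the endpoint shown assumes the Singapore cluster; adjust for yours):

    # Install both utilities (package names may vary by distro).
    sudo apt install s3cmd
    pip3 install linode-cli boto3   # the "obj" plugin may need the boto3 package

    # Interactive setup: s3cmd prompts for your access keys and the
    # S3 endpoint, e.g. ap-south-1.linodeobjects.com for Singapore.
    s3cmd --configure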

For a graphical utility, we recommend Cyberduck. Here are our setup instructions for this program:

When transferring data from your personal computer directly to Object Storage, you will need to install and set up these utilities on your personal computer. However, if you'd like to transfer data from your existing Amazon S3 bucket to Linode Object Storage, you will also need to install and set them up on the Linode itself.

First, of course, you will need to create the Linode instance, which I will explain in the next section.

Creating a Linode to handle data transfers

As recommended earlier, I think the best solution for you to avoid your unreliable Internet connection would be to create a Linode instance on your account. I recommend creating this Linode instance in the closest data center to your Object Storage services.

You may create this Linode from this page:
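Alternatively, if you prefer the Linode CLI, creating the instance might look something like this (the label, region, plan, and image below are example values; ap-south is our Singapore data center):

    linode-cli linodes create \
      --label s3-transfer-helper \
      --region ap-south \
      --type g6-standard-1 \
      --image linode/ubuntu20.04 \
      --root_pass 'use-a-strong-password-here'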

Our plan specs and pricing are listed here:

The main differences between these Linode plans for your needs will be pricing, SSD storage space, and maximum bandwidth.

Here are links to our introductory guides for setting up and securing this Linode:

Alternatively, you may deploy our "Secure Your Server" Marketplace App to have our systems perform this setup:

After you set up and secure this Linode, you may proceed to use it for working with your Object Storage services.

Using your Linode to transfer data to Object Storage

After you set up your Linode, you can then install one of the command-line utilities onto this Linode. This will allow it to facilitate the transfer of data to your Object Storage bucket.

I would take a moment to install either the Linode CLI or s3cmd using the instructions above, then proceed to follow the instructions in the rest of this section depending on your needs.

Uploading data to your Linode in preparation for transfer to Object Storage

You can upload data from your computer to your Linode's SSD disk before transferring it to Object Storage. This intermediate step performs a more robust transfer that can overcome the problems of an unreliable Internet connection.

I recommend the rsync utility to perform this transfer:

This will give you options to keep partially transferred data, then reinitiate the transfer in case your connection breaks. You will want to use the --partial option as mentioned in this StackOverflow answer:

You may also be interested in the --partial-dir and --append-verify options as mentioned here:
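Putting those options together, a resumable upload from your home computer to the Linode might look something like this (the user, IP address, and paths are placeholders for your own values):

    # Keep partially transferred files in a holding directory so an
    # interrupted transfer can resume instead of starting over.
    rsync -avz --partial --partial-dir=.rsync-partial \
        /path/to/local/data/ user@192.0.2.1:/home/user/staging/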

After you transfer this data to the Linode, you may then use the Linode CLI or s3cmd to move it into your Object Storage bucket.
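For example, a single staged file could be uploaded like this (the bucket name is a placeholder):

    # Upload a staged file from the Linode's disk to your bucket.
    linode-cli obj put ./photo.jpg linode-bucket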

Transferring data from Amazon S3 to Linode Object Storage

As you may be aware, both Linode's Object Storage and Amazon's S3 use the same communication and storage protocol, which is also called S3. Since the S3 protocol doesn't easily support direct transfers between different providers, you will need to copy the data over to your Linode first, then transfer it into your Object Storage bucket.

Our Linode CLI doesn't interface with the S3 services of other providers, so you will need to use s3cmd to transfer this data from Amazon S3 first. You will use the s3cmd get subcommand followed by the s3:// URL for your bucket:

You will need to specify the correct Amazon S3 endpoint and access tokens to connect to this service. You may want to consider using a separate configuration file for this purpose.
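For instance, you might keep one configuration file per provider and select it with the -c flag (the file names here are just a convention I'm assuming):

    # Create a dedicated config holding your Amazon keys and endpoint.
    s3cmd --configure -c ~/.s3cfg-aws

    # Then point any command at that config explicitly.
    s3cmd -c ~/.s3cfg-aws ls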

In addition, you may use the * wildcard to transfer entire directories, including the bucket's entire contents:

s3cmd get s3://amazon-s3-bucket/*

Once you transfer these files over to your Linode, you may then use the Linode CLI or s3cmd to upload them to your Linode Object Storage bucket. As mentioned in the Object Storage guide's subsections, the commands to achieve this are linode-cli obj put --acl-public or s3cmd put. If using s3cmd, be sure that you are using the correct configuration settings to interface with your Linode services instead of your Amazon services.
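As a sketch, the upload step might look like this (the bucket name and config file are placeholders, following the per-provider config convention above):

    # Recursively upload the downloaded files to your Linode bucket.
    s3cmd -c ~/.s3cfg-linode put --recursive ./amazon-s3-bucket/ s3://linode-bucket/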

In case you receive any "permission denied" errors when doing this, I would confirm the accuracy of your access keys, taking care to match each set of keys to its provider. You may also need to review and adjust the Access Control Lists and Bucket Policies of your Object Storage and/or Amazon S3 services:

Combine both put and get operations into a single command with Rclone

There is a utility that you can use called Rclone to simplify the transfer of objects between S3 buckets on different providers:

It appears that you can perform this transfer using a single command, rclone copy:
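    # Sketch, assuming remotes named "aws" and "linode" were set up
    # beforehand with "rclone config" (the remote names are placeholders).
    rclone copy aws:amazon-s3-bucket linode:linode-bucket --progress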

This will still perform a two-step download and reupload of the data, simply combining these operations into a single command. As a result, it's still advisable to run this transfer command directly on your Linode.

Conclusion

I hope you find this information helpful for performing this transfer! We're always interested in your success on our platform, so it was my pleasure to provide you this information even if we aren't able to perform this transfer ourselves.

If you have any other questions after reading this information, please don't hesitate to follow up with another question.

I have an idea, but it only works for Laravel users. Laravel makes it easy to work with different filesystems, so I'll write an artisan command that copies folders/files from S3 to Linode. That way, a production server can handle the S3-to-Linode copies directly.

In my case, all three of them (S3, Linode, and the server) are in Singapore, probably even in the same colocation room. I have the feeling that copying thousands of files will go very fast, and simply, with one command. My problem is not the number of gigabytes, it's the number of files (hundreds of thousands of pictures).

Later on today, I'll write the code, test it and share it here to make life easier for Laravel users.

For Laravel users, a helper to move your files from S3 to Linode

It might work for Laravel 5 and 6, but I can't test those. It certainly works for Laravel 7 and 8.

How does it work? It runs on your server. You can also run it on your test server, but beware that the speed of execution will then depend on your home/office internet speed. It's an artisan command that copies your files from S3 to Linode (S3-compatible object storage). The folder structure stays the same.

How to implement it?

Step 1
Set up your config/filesystems.php file by adding this disk:

    'linode' => [
        'driver'     => 's3',
        'key'        => env('LINODE_S3_ACCESS_KEY'),
        'secret'     => env('LINODE_S3_SECRET'),
        'endpoint'   => env('LINODE_S3_ENDPOINT'),
        'region'     => env('LINODE_S3_REGION'),
        'bucket'     => env('LINODE_S3_BUCKET'),
        'url'        => env('LINODE_S3_BUCKET_URL'),
        'visibility' => 'public',
    ],

And set your .env file with the necessary data. For more info, check this StackOverflow answer:
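For example (all values below are placeholders; the endpoint shown assumes the Singapore ap-south-1 cluster):

    LINODE_S3_ACCESS_KEY=your-access-key
    LINODE_S3_SECRET=your-secret-key
    LINODE_S3_ENDPOINT=https://ap-south-1.linodeobjects.com
    LINODE_S3_REGION=ap-south-1
    LINODE_S3_BUCKET=your-bucket-name
    LINODE_S3_BUCKET_URL=https://your-bucket-name.ap-south-1.linodeobjects.com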

Step 2
Run the command php artisan make:command LinodeCopy. It will create a file at app/Console/Commands/LinodeCopy.php.

Replace all the code in that file with this gist I've uploaded:

Step 3
Upload the code to your production server.

Step 4
Run the command php artisan help linode-copy to see the options. For example, php artisan linode-copy avatars is a request to copy the /avatars folder from S3 to Linode. If you want all the subdirectories included, you can run php artisan linode-copy -D avatars.

It will show the number of files, and the terminal will ask if you want to continue; if the number is too high, you can opt out. If you have a lot of files, I recommend syncing folder by folder, not all at once.

If the server runs out of memory, you will know exactly at which file number it broke. If it hung at file 2578, you can add --counter 2577 to your artisan command. This avoids running over the previous 2,577 files on Linode again: you already know they exist, so the command bypasses them.

The command php artisan linode-copy will copy only the files in your root folder, while php artisan linode-copy -D will copy all files, including those in every subfolder.

Remark 1: This will probably not work for big files; your server will quickly run out of memory. You may have to tweak the code to stream files instead of copying them.
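As a rough, untested sketch of that tweak (using the disk names from the config above), streaming a file inside the command's loop instead of loading it into memory might look like this:

    // Hypothetical streaming copy: read from the "s3" disk as a stream
    // and write it to the "linode" disk without holding the whole file
    // in memory. $path is the file's path within the bucket.
    use Illuminate\Support\Facades\Storage;

    $stream = Storage::disk('s3')->readStream($path);
    Storage::disk('linode')->writeStream($path, $stream);

    if (is_resource($stream)) {
        fclose($stream);
    }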

Remark 2: This software is experimental; use it at your own risk and responsibility. It is designed for my personal use case, migrating pictures from S3 to Linode. I can't test it with big files.

Update: I've tested it with over 30,000 files. It runs out of memory after a while; just rerun the same command. All files already on Linode are ignored and left untouched on S3, so it simply continues where it left off, especially if you add --counter 9999 (or wherever you left off) to it. The good news is: it works very fast, at about 7 to 10 files a second!

Good luck!

You can also check out the NirvaShare Marketplace app: https://www.linode.com/marketplace/apps/nirvashare/nirvashare/

It has the capability to share and collaborate on files from object storage across internal and external users, with fine-grained access control. It integrates easily with AWS SSO, Active Directory, Google Workspace, Salesforce, etc.

Technically, you can also grant AWS SSO users access to files in Linode Object Storage via cross-cloud access.
