Linode Block Storage (Newark beta)
What is the Linode Block Storage service?
The Linode Block Storage service allows you to create and attach additional storage volumes to your Linode instances. These storage volumes persist independently of Linode instances and can easily be moved from one Linode to another without the need to reboot. Volumes attached to Linodes appear as block devices and can be formatted and mounted just like any other block device.
Block Storage Volumes are highly available with 3x replication. They're fast - built on great engineering, NVMe/SSD hardware, and a fast network. They're affordable - $0.10 per GB (free during the beta) and no usage fees. They're cloud: elastic, scalable, expandable, resizable, etc. You can hot-plug them into and out of running Linodes. Oh, and you can boot off of them, too.
What can I do?
* Create Volumes
* Remove Volumes
* Resize Volumes
* Attach a Volume to a Linode
* Detach a Volume from a Linode
* Add Volumes to your Linode Configuration Profile block device assignments (changes take effect next boot)
Snapshotting, cloning, and Volume backups are not implemented - but may be in the future.
Can I attach a Volume to multiple Linodes?
Nope. A Volume can only be attached to one Linode at a time.
How big of a Volume can I create?
Between 1 GB and 1024 GB for now. This is a beta, after all. After the beta, the max volume size may be larger.
How many Volumes can I attach to a Linode at the same time?
Up to 8.
Can I mount Volumes across datacenters?
No. Volumes and instances must be in the same region.
Is there API support?
Yes. Documentation coming soon!
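Until the docs are out, here's a rough sketch of listing your Volumes against the v4 API; the endpoint shape shown here is an assumption pending the official documentation:

# List the Volumes on your account (endpoint shape is an assumption until the docs land)
curl -H "Authorization: Bearer $LINODE_API_TOKEN" https://api.linode.com/v4/volumes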
How does the beta work?
* The beta is free - there will be no storage costs.
* Volumes can only be created in our Newark, NJ datacenter. You will need to have at least one Linode there.
* Let's be honest - this is a beta. You probably shouldn't store any data on it that you can't afford to lose.
How can I get in on this?
The Block Storage beta is public. You can click "Manage Volumes" on the Linode index page to get started.
Thanks,
-Chris
I can't use it yet, since we're in Fremont and need way more than 100GB, plus the lack of backups is worrisome, but I'm excited about the potential to use it when it's out of beta. It was very badly needed at Linode, and it looks fantastic!
I'd love to see a feature that allows sharing the same block storage across multiple Linodes, that'd be sweet. Any chance it could be supported in the future?
Could you also address the lack of backup options? I wouldn't mind no snapshotting and cloning, but being able to back up at intervals or on demand like we can do with regular nodes is pretty critical.
@archon810: It's doubtful that this will ever be integrated into the Linode Backup Service as you know it today, in part because they're just two different systems, and also because we can't, for instance, fold backing up hundreds of GBs of volume data into a $5 Linode's backup service, which currently costs only a couple of bucks.
If we did anything, it would probably be to automate creating a NEW volume and copying the data over, but the rates (cost) would be the same. You could automate this yourself, for now, by creating a new volume, attaching it to the Linode with the volume you want to back up, copying the data over, then unmounting and detaching.
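For instance, a rough sketch of that copy step, assuming the backup volume is already formatted; the device label and mount points below are placeholders:

mkdir -p /mnt/backup
mount /dev/disk/by-id/scsi-0Linode_Volume_backup /mnt/backup
rsync -a /mnt/mydata/ /mnt/backup/   # copy everything, preserving permissions and ownership
umount /mnt/backup                   # then detach the backup volume in the Manager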
I think a popular use of these Volumes will be people performing backups TO them.
-Chris
How does rebooting with such a volume work? Will it automatically reconnect to the mount point inside the Linode using some Linode customizations and tools, or will it rely on manually running the mount command? Right now we know that if the Linode is up and operational, it will have the required storage; with the system split into two parts, I'll need to prepare for each part going down individually and handle those new cases gracefully.
Will you be publishing best practices for using this new block storage?
Volumes are managed via your Configuration Profile block device assignments. When you attach a volume through the "attach" workflow, it's automatically added to your running config profile and then hotplugged. Volumes referenced by the booting config profile are likewise attached at boot. Managing /etc/fstab entries is still up to you, however.
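For example, an /etc/fstab line along these lines (the volume label "mydata" is a placeholder) will remount the volume on every boot; nofail keeps boot from hanging if the volume happens to be detached:

/dev/disk/by-id/scsi-0Linode_Volume_mydata /mnt/mydata ext4 defaults,noatime,nofail 0 2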
Congratulations on this! I really hope that it brings some good news (i.e. bigger volumes and availability in more data centers) as soon as it comes out of beta.
Thanks! The 100G size max only applies during the beta. Once we launch, max volume size will be much larger… As for availability in other DCs after the beta, we'll work on getting this deployed as fast as we can once we're good to go.
Are there any plans for HDD block storage?
Unlikely.
Thanks for the feedback and questions!
-Chris
Will the price stay at $0.10 per GB or decrease / increase on bigger volumes?
Is it possible to hot-swap a volume from one Linode to a second one without losing its contents (without reformatting or rebooting)?
Any ETA for Frankfurt?
I currently use S3QL with some unnamed storage provider and I'd enjoy having a few GB somewhat more local.
I would like cheaper block storage based on HDD, but I wouldn't necessarily agree that such a thing is low quality. Perhaps disk reads/writes would be slower, but I'd find it more worthwhile if the service was, say, $0.05 per GB. With that said, I think its price is good as it currently is, considering the redundancy and the use of SSDs. I'm also hopeful that, if the disk space on Linode plans increases, the block storage price would decrease to match it.
I think it's pretty cool that you can boot from your block storage disks. Perhaps this could introduce an option in the future to have a diskless Linode, with a disk or disks provided by block storage only. I could find that pretty useful, and it could open up other options such as quick upgrades of RAM and CPU without migrating disks, and without using much, if any, storage on the hardware hosting the Linode. I'd imagine that if such a thing were available, the resources would be lower in price. At the current price point of block storage, for the $5.00 plan, this would amount to the resources without a disk costing $2.60 and the disk costing $2.40. This is all speculation and ideas, but it's fun to think of new and interesting things.
I look forward to being able to test and/or use the block storage service in Dallas, where I already have a Linode I wouldn't have to pay extra for. I already have some plans and uses for such a service, as my main concern, at this point, is disk space far more than other resources. Having a potential diskless Linode as an option would also be fun, and something I could easily use, too, using only block storage disks.
Linode just keeps getting better as time goes on, something I'm glad to see!
Blake
@Tech10:
I would like cheaper block storage based on HDD, but I wouldn't necessarily agree that such a thing is low quality. Perhaps disk reads/writes would be slower, but I'd find it more worthwhile if the service was, say, $0.05 per GB.
I think at that point that's when you'd use FUSE and S3. Maybe it's just me, but I see block storage like this mainly used for things like large databases or an intermediate place for backups before pushing them out to S3 or Glacier. If, for example, you're moving infrequently accessed data to it then you're probably better off using S3 standard or infrequent access, both of which are cheap and generally fast enough.
I am currently using S3 storage with one of the big providers. S3QL actually works very well in that kind of application, since it uses a (large) local cache that is very quick in delivering the items requested often (recent emails, files added and shared in Nextcloud, etc.). You rarely notice waiting times, mostly when looking for a rarely touched file. I was secretly hoping Linode would provide an S3 backend, but whatever way they do their block storage, I'll buy. Nothing beats having it in the same datacenter. And I'd rather give my (little) money to Linode anyway.
@johnnychicago:
@carmp3fan, I would not necessarily think so. I am mostly providing services (email, Nextcloud, etc.) for a relatively small group of users, and having block storage in the datacenter would work very smoothly in that application - HDD would be perfectly OK for those kinds of needs.
I don't disagree on email (somewhat do for Nextcloud considering you can use it directly from S3), but I still don't see it as a big reason for moving to it. I've been using Linode for my own mail server for years and even with my hoarding tendencies for the last 10+ years, I've only accumulated 4.7G, so I run fine on a 1024.
If you'll notice, I didn't say anything about a web server, mostly because it would likely be cheaper and faster to have the storage-intensive photos and videos stored in AWS than on your low-end Linode. That assumes it's easy to do with whatever software you're using.
@johnnychicago:
I am currently using S3 storage with one of the big providers. S3QL actually works very well in that kind of application, since it uses a (large) local cache that is very quick in delivering the items requested often (recent emails, files added and shared in Nextcloud, etc.). You rarely notice waiting times, mostly when looking for a rarely touched file. I was secretly hoping Linode would provide an S3 backend, but whatever way they do their block storage, I'll buy. Nothing beats having it in the same datacenter. And I'd rather give my (little) money to Linode anyway.
:)
I've not used S3QL, but it looks like something I should try. I really wish someone would create something (I'm not capable) that would use both S3 and Backblaze B2. Kind of like RAID across providers.
Is the block storage a raw format that will allow us to format it with whatever file system we like?
For example, I'm working on a FreeBSD deployment and would like to use the block storage.
@impact:
Is the block storage a raw format that will allow us to format it with whatever file system we like?
For example, I'm working on a FreeBSD deployment and would like to use the block storage.
They are treated as block devices just like the normal disks of your Linode, so you can format them with whatever file system you want.
> block devices and can be formatted and mounted just like any other block device
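A minimal first-time setup sketch on Linux, assuming the volume shows up under /dev/disk/by-id; the label "example" is a placeholder, and you can swap mkfs.ext4 for whatever filesystem you prefer:

ls -l /dev/disk/by-id/                                  # find the new volume's device path
mkfs.ext4 /dev/disk/by-id/scsi-0Linode_Volume_example   # one-time format
mkdir -p /mnt/example
mount /dev/disk/by-id/scsi-0Linode_Volume_example /mnt/example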
So excited, I hope it goes fully live soon.
We've got the second beta cluster in the pipeline. 90% certain it will go to us-west (Fremont, CA). It's looking about 4 or 5 weeks out.
After this one, the remaining DCs will go much more quickly and simultaneously.
-Chris
Great news, Caker! A bit sad to see 1/5th the speed of local SSD. I understand they're network drives, but I was hoping the gap would be a lot smaller so they could be used for databases and such. We'll have to see what it's like once it comes out of beta.
@viion:
Great news, Caker! A bit sad to see 1/5th the speed of local SSD. I understand they're network drives, but I was hoping the gap would be a lot smaller so they could be used for databases and such. We'll have to see what it's like once it comes out of beta.
Probably related to the triple-redundancy feature. A similar performance drop was seen when I tested another provider's Ceph-based triple-redundant SSD storage.
@caker:
Hello,
We've got the second beta cluster in the pipeline. 90% certain it will go to us-west (Fremont, CA). It's looking about 4 or 5 weeks out.
After this one, the remaining DCs will go much more quickly and simultaneously.
-Chris
Great news!
"Sorry but we still don't have a definite time frame for that. I'm hoping before 2018 but I can't give any time frame with certainty. I wouldn't want to mislead you."
Sounds like it could be 2-3 months at least. I'm still hopeful Fremont will roll out sooner (sounds like a week or two!) and that estimate is for a system-wide implementation, but we've made other plans to deal with our storage needs until 2018, to be safe.
Can't wait for this to get going, even just in beta.
When I look in the volumes area of the Linode Manager, I can now see Fremont, CA listed.
Which datacenter is next? What kind of ETA?
@axchost:
Is the Block Storage SSD?
The new Block Storage build in Fremont is using spinning disks, but we’re still working on what the final version of Block Storage will look like.
-Blake
@TheJosh:
When I look in the volumes area of the Linode Manager, I can now see Fremont, CA listed.
Which datacenter is next? What kind of ETA?
We don't currently know which data center is next or when the full release will be, but I can confirm that Fremont will be up and running soon. You can follow our blog at blog.linode.com for info about our releases.
-Blake
@bmartin:
The new Block Storage build in Fremont is using spinning disks, but we’re still working on what the final version of Block Storage will look like.
-Blake
I thought all block storage was going to be SSD (per the announcement in the first post of this thread). Does spinning disk mean it's going to be cheaper?
@mjrpes:
@bmartin: The new Block Storage build in Fremont is using spinning disks, but we're still working on what the final version of Block Storage will look like.
-Blake
I thought all block storage was going to be SSD (per the announcement in the first post of this thread). Does spinning disk mean it's going to be cheaper?
Hey there! Block Storage in Newark is all SSD, however with the Block Storage beta in Fremont we are building it using spinning drives. I'm sorry we weren't entirely clear on that point. As for how or if this would affect the price, I don't have any word on that right now.
That being said, this is currently in beta and we're testing systems out. I can't say what the final product will look like exactly.
If you have any other questions, please let us know.
@scrane:
Hey there! Block Storage in Newark is all SSD, however with the Block Storage beta in Fremont we are building it using spinning drives. I'm sorry we weren't entirely clear on that point. As for how or if this would affect the price, I don't have any word on that right now.
That being said, this is currently in beta and we're testing systems out. I can't say what the final product will look like exactly.
If you have any other questions, please let us know.
Have the capacity/size caps been expanded beyond 1TB per volume in Newark yet?
@zigmoo:
@scrane: Hey there! Block Storage in Newark is all SSD, however with the Block Storage beta in Fremont we are building it using spinning drives. I'm sorry we weren't entirely clear on that point. As for how or if this would affect the price, I don't have any word on that right now.
That being said, this is currently in beta and we're testing systems out. I can't say what the final product will look like exactly.
If you have any other questions, please let us know.
Have the capacity/size caps been expanded beyond 1TB per volume in Newark yet?
Nope, we're still at 1TB per volume in Newark for now.
Just curious: is 1TB not sufficient for your usage? What size volume would you need to create for block storage to be useful to you?
- Jim
1. Start with a volume attached to a running Linode, unmounted
2. Resize the volume in the manager
3. Detach the volume. This will appear in dmesg/console:
[1791385.124252] sd 0:0:2:3: [sdc] Synchronizing SCSI cache
[1791385.127212] sd 0:0:2:3: [sdc] Synchronize Cache(10) failed: Result: hostbyte=0x04 driverbyte=0x00
4. Re-attach the volume. The Linode will shut down. I'm not sure if there's anything printed in the console at this point since I was using SSH instead of Lish. The SSH session just died.
# dd if=/dev/zero of=/mnt/test/test1.img bs=1024 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
1024000 bytes (1.0 MB, 1000 KiB) copied, 7.40577 s, 138 kB/s
This is with default ext4 settings on a 70GB volume / partition.
Use conv=fdatasync instead of oflag=dsync and the performance is much better: about 300 MB/s for the volume and a bit faster for a native Linode partition. (With oflag=dsync, writing to the native partition was about 1.3 MB/s, about 10 times faster than writing to the volume.)
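For reference, the more conventional form of that throughput test writes a larger file and flushes to disk once at the end rather than after every block (the output path is a placeholder):

# Write 1 GB, syncing once at the end instead of per-block
dd if=/dev/zero of=/mnt/test/test2.img bs=1M count=1024 conv=fdatasync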
Going to lock this thread out.
Thanks,
-Chris
I have FreeBSD 12.1 deployed on a Linode. I've created a volume via the block storage service and attached it to my Linode, but it is not showing up. The howto references /dev/disk/ something, but there is no /dev/disk tree and the drive is not visible. Is there an additional step I have to take?