Linode now supports Docker out of the box.
Docker lets you create lightweight containers for your applications and use images created by other users.
Docker's latest release, 0.7, focused on supporting a broader range of standard kernel configuration options, and we've released a new kernel (3.12.6) tuned to match. Starting with this release, you can use Docker with the stock Linode kernel, with no need to run a custom kernel via pv-grub.
Installing Docker on Your Linode
- Make sure you're running the latest kernel. You may need to reboot to get it.
- Install Docker by following their excellent documentation, Getting Started with Docker; a rough sketch of the steps follows below.
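For reference, here's a minimal sketch of the process on Ubuntu, assuming the get.docker.io convenience script that Docker's docs offered around the 0.7 release; treat the Getting Started guide as authoritative:

# Confirm you are running the new 3.12.6 Linode kernel (reboot from the Linode Manager if not)
uname -r
# Install Docker via the convenience script (assumes Ubuntu)
curl -s https://get.docker.io/ubuntu/ | sudo sh
# Verify the daemon is up and reports version 0.7.x
sudo docker version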
Try it out by running the Hello World example, or really dig in and set up a Redis service! Check out all of the Docker examples, or search the Docker image index to learn more.
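For example, the Hello World run boils down to a single command, and the image index can be searched straight from the CLI (the Redis search below is just illustrative):

# Print a message from inside a throwaway Ubuntu container
sudo docker run ubuntu /bin/echo hello world
# Search the public image index for a Redis image to build your service on
sudo docker search redis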
Enjoy!
Comments (17)
Awesome! Note that Docker is 64-bit only, I believe. Can someone else confirm?
That’s correct. From my understanding, they’re currently imposing the 64 bit restriction because it keeps their code cleaner and the benefits of having a 32 bit “host” are minimal.
Our 32 bit kernel has all the right bits flipped as well, so if they do start supporting 32 bit down the line, it’ll work with our kernel too.
I’ll be switching my linode instances today.
Thanks for the awesome work, guys.
I’m trying to understand why people are pushing to use Docker in the cloud instead of on a dedicated server. In the cloud it adds an additional, unnecessary level of abstraction.
@Nikola
Ease of deployment.
Ease of deployment, manageability, and portability; it also reduces the security surface while letting you use less hardware to support larger VMs. Dedicated VMs are a thing of the past.
I was asked if there is a way to autoscale Docker containers?
@nikola, here is my guess:
If you have a docker instance on your local dev machine, and a docker instance on your dedicated server, you can develop locally & have more confidence it will Just Work ™ when you push to the cloud. Much easier to configure a local docker & a remote docker identically than to configure your dev machine & cloud identically. Can anyone confirm: is that the general idea?
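That’s the general idea, as far as I can tell. Concretely, the workflow looks roughly like this (the myuser/myapp image name is a placeholder):

# On your dev machine: build an image from the Dockerfile in the current directory
sudo docker build -t myuser/myapp .
# Push it to the public index (or a private registry)
sudo docker push myuser/myapp
# On the Linode: pull and run exactly the same image
sudo docker pull myuser/myapp
sudo docker run -d myuser/myapp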
This is very interesting. With a little glue, you can use this to scale linode(s) up/down the way you can scale aws, except that linode’s VMs don’t suck 🙂
Maybe that’s for Q2, after the SSD hybrid rolls out in Q1? Pretty exciting stuff, thanks!
Great stuff, keep it coming!
@NathanielInKS Check out the Flynn project (https://flynn.io). As far as I understand, it aims to create an auto-scaling, Heroku-like wrapper around Docker. Still in developer preview at the moment, though, I think.
If I have a Linode set up “the old way” (Ubuntu 12.04) with pv-grub, can I just switch to the new kernel, check xenify, and reboot?
Also, does this new kernel have the kernel options for limiting memory on Docker containers (“cgroup_enable=memory swapaccount=1”) enabled?
It seems that linux-image-extra-3.12.6-* is not included in the default repos.
I just checked this on my Ubuntu box. I switched off pv-grub, then loaded the latest daily build of LXC. Note that if you are on Ubuntu 12.04 you might have to do it like this:
apt-get install --no-install-recommends lxc cgroup-lite lxc-templates
This is due to a Recommends entry for uidgen, which is unavailable. I’m not sure why it was added, though.
Anyways I ran lxc-checkconfig and confirmed all necessary supports are enabled for lxc to run all by itself. 🙂
shinji@icarus:~$ uname -a
Linux icarus.robertpendell.com 3.12.6-x86-linode55 #2 SMP Tue Jan 14 08:41:36 EST 2014 i686 i686 i386 GNU/Linux
shinji@icarus:~$ sudo lxc-checkconfig
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: missing
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
Note: disregard the memory controller being marked as missing. As far as I know it requires a kernel startup option to be set, which we can’t do with the Linode kernels. Also, it isn’t enabled in the config anyway, probably for the same reason. This only prevents you from setting a memory limit on the containers.
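A quick way to double-check whether the memory controller is actually available on the running kernel (just a sanity check, not a fix):

# Each line lists a cgroup subsystem; the last column shows whether it is enabled
cat /proc/cgroups
# Or look at just the memory controller
grep memory /proc/cgroups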
Error: container_delete: Cannot destroy container d5d4d7f442d7: Driver devicemapper failed to remove init filesystem d5d4d7f442d74b17824cbcf1216cb3730053f8cfaefe8a1ea12d328451fc36d7-init: Error running removeDevice
2014/02/19 10:55:59 Error: failed to remove one or more containers
How do I fix it?
Same problem…
Driver devicemapper failed to remove init filesystem
Driver devicemapper failed to remove root filesystem
Not sure how to fix this yet…
That’s an upstream Docker problem, not something Linode-specific. I’d recommend reaching out to them via their bug tracker.
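For what it’s worth, the workaround I’ve seen suggested (not an official fix) is to restart the daemon and retry the removal; note that the init-script name varies by package:

# Restart the Docker daemon, then retry removing the container
sudo service docker restart    # may be "docker.io" depending on how the package names the service
sudo docker rm d5d4d7f442d7
# Last resort only: stopping Docker and deleting /var/lib/docker clears the devicemapper
# state, but it also destroys every local container and image.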