Is the hype justified?
no you idiot
why not cuntface
>running an entire fucking operating system in a container just to distribute one application or some config files
this is less efficient than client-side js meme
Let's say you want to host 100 instances of the same application for different people. Aren't 100 Docker containers more resource-efficient than 100 VMs?
Or you want to test your application on different distributions; then you don't have to install each one in a VM.
Not the entire os you tard.
>how does LXC work
fucking idiot
everything except the kernel
>want to run 100 of the same application
>run 100 operating system snapshots with one application running in each
Does it make sense to you to have a whole multi-GB debian install for each container just to have a few instances of your 100kb Rails app?
>Boot up various platforms in the blink of an eye
>No need for virtualization unless you're on a dinosaur OS
>Super easy to run a platform of your choice thanks to imaging
Yes
no
>Does it make sense to you to have a whole multi-GB debian install for each container just to have a few instances of your 100kb Rails app?
Probably not. But just because you can do retarded things with it such as using it to run vim instead of just installing vim on your system doesn't mean it doesn't have legit use cases.
but it's not bare LXC they're using
sure. it has some use cases. but I'd say >90% of people don't need it and are using it wrong anyway. Just like people abused vms before it.
secure your system and you don't need to have so much overhead.
in other words, it benefits some people, but "the hype" is not "justified"
Not true. With the rise of microservice architecture, container systems are becoming invaluable in two areas.
The first is development, where you can have a miniature version of your entire production environment running locally without demanding as many resources or the exclusivity that a VM does.
Second is that if you have a machine that needs to perform a large number of tasks, it can be safer to run them inside of a container where there are defined limits so that if something goes wrong, it doesn't take down the whole system.
So use venv?
>development
just install whatever you need on your own host OS. Why do you need to install another OS in your OS to install programs? Your OS can run programs too.
>defined limits
those limits can be imposed on processes in your OS as well.
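To make that concrete (a minimal sketch assuming bash on Linux; no container involved), a plain shell can already cap a process's memory with ulimit:

```shell
# Cap virtual memory for one job without any container.
# The subshell scopes the limit; the parent shell is unaffected.
(
  ulimit -v 1048576   # cap address space at 1 GiB (value is in KB)
  ulimit -v           # shows the limit now in effect
)
```

For finer control over CPU, memory, and I/O on ordinary processes there are cgroups (e.g. `systemd-run -p MemoryMax=1G yourcommand`), which is the same kernel mechanism containers use under the hood.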
>just install whatever you need on your own host OS. Why do you need to install another OS in your OS to install programs? Your OS can run programs too.
Your production machines are probably going to be running on different platforms, depending on what suits them the best
>those limits can be imposed on processes in your OS as well.
Implying this is different than what containers do anyways
it is
because a container has a whole other operating system inside it (minus a running kernel)
Why do you need a new systemd, a new package manager, new services, a new filesystem, new users, new home directories, new groups, a new cron, a new sshd, and a new base system just to run a development environment that you could run on your own system, without inheriting a whole other operating system to maintain and keep up to date?
Docker < LXC < systemd-nspawn
You don't need all of those things that you listed. Also, it's not a whole new operating system if it's running on the same kernel. You're conflating it with virtualization
all of those things are duplicated with both docker and lxc. Learn how it works before shilling.
>it's not a whole new operating system if it's running on the same kernel
GNU is an operating system that runs with the Linux kernel. A container contains another instance of GNU. Therefore it is another operating system.
ITT: Junior devs who will likely never touch a production system worth using LXC on deride perfectly reasonable system architectures.
No, you don't need Docker to run your wordpress blog. Yes, the hype is justified: Docker shines in conjunction with tools like Mesos or Kubernetes. Developers and InfraOps no longer have to worry about where an application runs, how it's deployed, or how many resources are allocated to it. An organization can run its applications "IN THE CLOUD", and ops can optimize resource utilization without having to babysit every developer.
Kernel Same-page Merging
Also, the idea that people are using LXC in anywhere near the same capacity as one would use VMs is fucking retarded. LXC, and Docker especially, is shit-tier security/isolation-wise. FreeBSD Jails are the closest thing we have to "slim" VMs.
Apparently you haven't worked with Docker nearly enough because that's not true. Also, LXC is pretty outdated by now.
Hello Stallman
I like it. It makes deployments way smoother and it's just fun to work with. The money is pretty good, but I feel like it's going to die down when the hype cloud clears.
Docker uses LXC itself, and all of those things are true. Go shill somewhere people are gullible and uninformed enough to buy your bullshit, like leddit.
>patches
yep you clueless dumbfuck, go home
Yes, especially combined with CoreOS and Kubernetes
Well, it's lightweight/partial virtualization: it doesn't offer quite the same features as full virtualization, but it does the job in countless cases where virtualization was previously used.
It lets you do so much more with the same resources, so yes, it's good. Obviously the hype is a bit exaggerated, as always, but there's a real advantage to using it, just like with virtualization.
are you implying that each of your containers doesn't have a whole linux distribution base install (minus the kernel) in its file structure, with its own processes running, doing the same thing the base system of your computer is doing?
For containers? Yes
For Docker? Not so much.
>Docker uses LXC itself
it uses its own libcontainer, asshat
If you don't use a different OS base for every single container you make, there is very little overhead compared to what you're implying.
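To make the layer-sharing point concrete (a sketch; image and binary names are just examples, not from the thread): two images built from the same base store that base's read-only layers once on the host, and each running container only adds a thin writable layer on top.

```dockerfile
# app-a/Dockerfile
FROM debian:bookworm-slim          # base layers stored once on the host
COPY app-a /usr/local/bin/app-a    # hypothetical binary
CMD ["app-a"]

# app-b/Dockerfile
FROM debian:bookworm-slim          # reuses the exact same base layers
COPY app-b /usr/local/bin/app-b    # hypothetical binary
CMD ["app-b"]
```

You can verify the sharing with `docker history <image>` on each: the base layer IDs are identical, so 100 containers from these images do not mean 100 copies of Debian on disk.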
Duh. Do you want to learn machine learning, but don't want to mess with the libraries on your main system?
nvidia-docker run -it --name TF -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3
Do you want to run a Wordpress website, but want to try out Drupal too in case you prefer it?
docker run -p 8080:80 wordpress
Do you want to launch 10,000 MySQL servers, all with the same configs?
Create a Dockerfile and it's guaranteed to be the same regardless of which computer you're running it on. Better yet, create a Docker Compose file so you can launch your site, email, and FTP as well, all from a single command, with near-certain guarantees that it'll be the same regardless of which OS it's running on.
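A minimal Compose file in that spirit (a sketch; service names, the password placeholder, and ports are illustrative, and a real setup needs the WORDPRESS_DB_* variables wired up too):

```yaml
# docker-compose.yml: bring up a site plus its database with one command
version: "3"
services:
  site:
    image: wordpress
    ports:
      - "8080:80"        # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder secret
```

Then `docker compose up -d` starts both services together, on any machine with Docker installed.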
Docker is amazing for
literally wrong
there is a root filesystem image for each container
>Docker is amazing for
When you accidentally your ctrl+c
So how many GBs were downloaded for trying out WordPress?
Nah. I just gave up writing that bit out. Check the archive if you don't believe me.
Thought I erased it though
Man I don't know. Try it out yourself
>microservice architecture
This is even more hype than Docker.
kys
Considering binary compatibility is the greatest challenge you face when packaging software, it's more than justified.
Great stuff for dev environments.
docker is generally single-process.
Docker images can have very minimal OS bases (as small as 5 MB for Alpine), and literally only your process runs when it's started. It's not just a slightly lighter VM: it's basically a Python venv for any process, with extra goodies like volumes.
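A sketch of what that single-process, minimal-base image looks like (the binary name is hypothetical, and this assumes a statically linked executable so it runs on Alpine's musl libc):

```dockerfile
FROM alpine:3.19                    # ~5 MB base image
COPY myapp /usr/local/bin/myapp     # one static binary, nothing else added
USER nobody                         # no init, no sshd, no cron in here
ENTRYPOINT ["myapp"]                # your app itself runs as PID 1
```

Nothing from the list of duplicated services earlier in the thread has to exist in an image like this; the only processes in the container are the ones your app starts.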
Docker is not "secure" though so don't run untrusted code in it like you could in a vm. It's meant to fix the portability problem only.
I used it to run this paintschainer.preferred.tech
It was pretty good because I just downloaded the image instead of fucking with linux for a whole hour