What's Sup Forums's opinion on pic related?

Do you use it? If yes what for and do you have some good playbooks to share?

playbooks are dependent on your own deployment environment

However, they might still give some inspiration for new playbooks. Also it's fun to see what others use their playbooks for.

It's shit. Poorly documented, falls into the TMTOWTDI trap, YAML a shit, and has far too many places to hide config (doubly so if you use Tower).

And if you ever let a single bad coworker touch your playbooks you end up with spaghetti worse than bash.

from a few years ago
>unarchive
>uses tar to open zip
>uses zip to open tar
which worked until the new distro image didn't come with the zip utility
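If you hit that same class of problem, the usual workaround is to stop trusting the base image and pin the archive tooling yourself. A minimal sketch (package names assumed for a typical distro; the artifact path is made up):

```yaml
# Sketch: make sure the tools unarchive shells out to actually exist,
# instead of assuming the base image ships them.
- name: Ensure archive utilities are present
  hosts: all
  become: true
  tasks:
    - name: Install tar and unzip explicitly
      package:
        name:
          - tar
          - unzip
        state: present

    - name: unarchive can now find its helper binaries
      unarchive:
        src: files/release.zip   # hypothetical artifact
        dest: /opt/app
```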

It's good if:
- You run off the devel branch. It's somehow more stable than the releases. The maintainers are beyond horrible about backporting bugfixes, new modules, etc.
- You follow the pattern of using one repo and one master playbook to create/ensure the state of everything. Operational playbooks (e.g. reboot this) should be separate from infrastructural playbooks (e.g. deploy this server).
- You are prepared to shell out or write your own modules when things get fucking retarded.
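The one-repo/one-master-playbook pattern might look something like this (file names are illustrative, not any kind of standard):

```yaml
# site.yml -- hypothetical master playbook that creates/ensures ALL state.
# Operational playbooks (reboot.yml, etc.) live alongside it but are never
# imported here, so "converge everything" stays a single entry point.
- import_playbook: playbooks/common.yml
- import_playbook: playbooks/webservers.yml
- import_playbook: playbooks/databases.yml
```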

Oh, and one other thing: NEVER USE ANSIBLE GALAXY FOR ANYTHING. Seriously. It is useless.
Ansible Galaxy runs off this weird assumption that a given role can be truly independent of the rest of your infrastructure. As a result, a lot of roles try to check for every contingency they can think of, and end up bloated and slow.
But no part of a config management system can run fully independently from all other parts. You could always have something additional about your infrastructure that is incompatible with how someone else's role wants to do things. (For example, someone's fancy nginx role isn't going to work if I have iptables blocking HTTP traffic, is it?)
This is why infrastructure is fundamentally collaborative, and using infracode is a really good way of ensuring that collaboration, but the myth that someone can work on some piece of infrastructure in a vacuum connected to nothing else is retarded.
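The nginx/iptables point in concrete form: a role only composes if the play that calls it also reconciles the surrounding state, which is exactly the context a "standalone" Galaxy role can't know about (role name is illustrative):

```yaml
# Illustrative: a "portable" nginx role is useless unless the same play
# also opens the firewall -- the role can't know your iptables policy.
- hosts: webservers
  become: true
  roles:
    - nginx            # hypothetical Galaxy-style role
  tasks:
    - name: Allow HTTP through iptables, or the role's work is unreachable
      iptables:
        chain: INPUT
        protocol: tcp
        destination_port: "80"
        jump: ACCEPT
```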

Are you a chef advocate?

I use it at work, and perhaps someone can provide advice, but the sheer number of places where variables can be introduced and overridden makes it extremely difficult to make changes.
We will have playbooks for a particular app that pull in app-agnostic, app-specific, environment-agnostic, environment-specific and additional variables, so knowing where the actual value comes from is a nightmare.
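For anyone who hasn't felt this pain: the same variable can legally come from all of these places at once, and Ansible picks the winner by its precedence rules, with `-e` on the command line beating everything. A contrived sketch (all names and values invented):

```yaml
# group_vars/all.yml          app_port: 8080   # app-agnostic default
# group_vars/myapp.yml        app_port: 8081   # app-specific
# group_vars/production.yml   app_port: 8082   # environment-specific
# host_vars/web01.yml         app_port: 8083   # per-host override
- hosts: web01
  vars:
    app_port: 8084                             # play vars beat all of the above
  tasks:
    - debug:
        msg: "Effective port: {{ app_port }}"  # and -e app_port=8085 beats this too
```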

Just use Puppet and mcollective :^]

Variable scoping, motherfucker, do you use it?

There is no variable scope in Ansible.
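More precisely: variables are namespaced per host, not per block or per file, so anything set with `set_fact` sticks to that host for the rest of the run, even across plays. A sketch:

```yaml
# Sketch: "scope" in Ansible is really per-host state for the whole run.
- hosts: app
  tasks:
    - set_fact:
        deploy_color: blue        # looks local to this play...

- hosts: app
  tasks:
    - debug:
        msg: "{{ deploy_color }}" # ...but is still visible here
```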

Nope, but looking into it soon. Seems like all config management systems are a little wonky one way or the other. Though one of the nice things about Ansible is it's agentless.
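Agentless here means the control node just needs SSH (and Python on the targets) -- there's no daemon to install first. The entire "bootstrap" can be an inventory file (hostnames made up):

```yaml
# inventory.yml -- nothing is installed on the targets beforehand;
# Ansible connects over plain SSH and pushes modules per task.
all:
  hosts:
    web01.example.com:
    db01.example.com:
  vars:
    ansible_user: deploy
```

Then `ansible -i inventory.yml all -m ping` verifies connectivity with nothing but sshd running on the far end.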

Well that seems remarkably dumb.

It's meant for non-programmers.

>Nope, but looking into it soon. Seems like all config management systems are a little wonky one way or the other. Though one of the nice things about Ansible is it's agentless.

I've got a few friends I really trust who have been recommending I focus on Ansible; they were somewhat involved with aspects of its development. They've convinced me that the agentless feature will be worth the quirks relative to the other systems.

I'm looking to focus on that and Kubernetes once I'm done with these linux certs I'm working on.

Best of luck user. I work with Ansible at work and I have played with Kubernetes before. Both are finicky technologies.
I will offer you these two nuggets of advice:
1. stick to the devel branch for ansible.
2. seriously, seriously consider Amazon's managed Kubernetes service over rolling your own. or DC/OS. or anything else. adminning a Kubernetes cluster is a full-time job unto itself.

Agents give you some functionality you don't otherwise get. I'd rather have a system that lets me do either, and that's why I'm more into Saltstack than Ansible.

Is there any benefit to using Kubernetes + containers for a single rack on premises deployment, or just stick to VMs and a system that can live migrate them?

Salt seems more comfy to me and scales a lot better

Docker Swarm maybe.
Or split the difference and use pcs.

Yeah it's great, I use it at work.

The Fedora Project hosts their Ansible setup for their entire infrastructure on their public git repo, I found reading through that more helpful than the docs.

how relevant is it in the age of container orchestration?

I don't know
You tell me

if you work in the field long enough, you'll just realize it's a slope of shit all the way down

just go with your own python or shell scripts