I've posted a bit about how Ansible came to be in many places, including an early article on High Scalability that kicked off one of the first big waves of adoption. But as I've told this story time and time again to individual people, I realized it had never quite been written down in full.

Ansible owes much of its origins to time I spent at Red Hat's Emerging Technologies group, which was an R&D unit under Red Hat's CTO -- this was back around 2006. Emerging Tech was a fantastic place where a large group of people at Red Hat were able to work on basically whatever they thought people needed. It was beautiful and taught me most of what I know about Open Source. Google's 80/20 time? This was basically 100%, provided it was good for the end user. I had arrived there by way of systems management work at IBM and Adaptec (circa 2001) and a really amazing Python startup (circa 2005) that built a lot of applications and custom hardware for recording digital video in police and transit applications.

At Red Hat, one of the first projects I started was Cobbler. The goal of Cobbler was to help bring the ability to manage datacenter environments into the modern era; in the beginning, all Red Hat really had was system-config-netboot, a basic graphical tool for adding entries into a PXE tree. Cobbler started small, being able to generate a PXE tree (but from a large-scale perspective) and having a helper tool called koan, which was one of the first tools available to help install Xen and KVM virtual machines. Alongside the libvirt work, I started an early mailing list called et-mgmt-tools, which became one of Red Hat's first pure open source forays into systems management software -- Satellite Server being closed source at the time. Later, we'd go on to open source Satellite Server, and Cobbler would become a big component of it. Anyway, Cobbler grew and grew, became very popular, and I got to speak about it all over. It grew to add features for managing DHCP and DNS environments, and for helping users with mirroring package trees. We learned greatly from a wide community of users, and people would add things like power management support. We built support for virtual punch card devices on mainframes and more. Cobbler came to be used in major applications such as the top tier international DNS system, the servers behind games like Modern Warfare, Hollywood renderfarms, top ten supercomputers, chip design clusters, and major financial institutions on Wall Street. There was no better way to acquire a lot of operations experience than to be talking to 400 of the world's best sysadmins every day, and I'm very thankful for the experience of making them my boss. (As a flash forward, AnsibleWorks is now happy to employ James Cammarata, who did some fantastic work building a web interface to Cobbler, among other things, and now runs the project!)

At the same time, Red Hat was very interested in configuration management, and in particular we were watching Puppet and Chef come onto the scene; I did a lot of early work with Puppet. One of the things we developed with it was an early prototype for virtualization appliance management called "virt-factory", which used Cobbler for base operating system provisioning and Puppet for configuring virtual machines. I'll admit I was slow in picking up Puppet at the time, and I spent most of my time working on the Cobbler side of the equation -- later I'd learn to stop worrying and use the automation tool we had at the time -- which was what about half of my users were doing (Chef was still a little young then). Red Hat was also investing heavily in building AMQP, so we built an API-based worker system on top of AMQP called "busrpc"; one of its most significant authors was Kevin Smith, now VP of Engineering at Opscode. Also heavily involved was my cube-mate Adrian Likins, of up2date fame among other things. It's a small world. virt-factory never took off. I think it was the complexity of having to write the detail below the virtual appliances, but Red Hat virtualization was still early too, and people still wanted the basics -- like an open source VMware, not an appliance platform. What it did, though, was identify two things: (A) Cobbler was really popular, so there was no time to pursue virt-factory, and (B) the message bus component based on busrpc was super cool, and needed to be a thing. So we all went our different ways, but Red Hat would later go for a phase two, called oVirt, without the built-in configuration automation.

Greg DeKoenigsberg was very influential in Fedora community matters at the time (he had previously been the Fedora Project leader), and got Adrian and myself together with Seth Vidal, who was not only the author of yum but doing a lot of great things running Fedora Infrastructure. This ended up being very fortuitous, as what we were about to do would lead to Ansible, and to much future brainstorming even after I had left Red Hat. We wanted to create another very democratic open source project at Red Hat, one that could have a wide variety of contributors and solve new problems. We thought back to busrpc. That project existed because it filled in gaps between Cobbler and Puppet: Cobbler could provision a system, and Puppet could lay down configuration files, but because Puppet was too declarative you couldn't use it to do things like reboot servers or handle all the "ad hoc" tasks in between. The needs of Fedora Infrastructure greatly informed this use case -- and they'd go on to become a user. In the past, Red Hat (specifically my then-boss and I) had explored CIM, but there weren't many good implementations of API support for Linux, and it didn't feel very Linuxey to the users, which is what mattered most -- it wasn't going to be possible to build a vibrant open source project around CIM. What if there was the equivalent of an API-like channel for managing system-config-* type applications? That was the idea behind Func.

Func was based on a central server reaching out to remote nodes to send automation orders. It called the central server an "overlord" and each of the remote machines "minions", because the server/client relationship was inverted, and, quite possibly, because we wanted to have a little fun with the naming. This architecture would get copied by at least two other projects later, so I think we established a thing! (Func wasn't perfect, but neither were the clones -- one didn't allow significant hostnames and had issues with persistent connections, the other chose to forgo AMQP and had a questionable security layer.) There were probably several others. Fabric and Capistrano also existed, but we wanted something that was more of an API and less scripty. Func would go on to be used at Tumblr, and Steve Salevan, who was interning at Red Hat at the time, went on to maintain it for Tumblr. You can read the original Func delegation writeup from Steve here; there are some neat pictures!

I had thought about building a configuration management system on top of Func, which I tentatively called "Remote Rocket Surgery", but Cobbler was too popular and there was no time. (Also, I'm glad I didn't do the Python DSL; that would have been unwise in retrospect!) RRS never emerged from the idea stage, but I think that was a good thing, as later experiences would allow better ideas to grow.

All through this time the word "DevOps" was starting to be used a lot, along with various discussions about "DevOps toolchains". Cobbler, Puppet, and Func formed some of the very first DevOps-friendly automation tooling, sometimes with Cobbler provisioning a box and Func being used to fire off Puppet runs. Later, the phrase "DevOps" grew to be a bit more about culture, as automation became better solved. But all of this was entirely about culture -- the culture of making tools easier to use, easier to talk about, and working together, rather than buying a one-size-fits-all tool from a large vendor. And the culture of building tools in the open, and realizing that uniting sysadmins, even from competing companies, could result in better and better tooling.

Time passed and I eventually left Red Hat, and got an email from Luke Kanies (author of Puppet and CEO of what was then Reductive Labs) about working for Puppet, where I was a product manager for a while as employee something like #13 -- the travel schedule was absolutely crazy, so I didn't stay very long, but I enjoyed my time there, and many good people still work there. My time there did introduce me to (A) a lot more Puppet and Cobbler users in the field that I hadn't met before, and (B) a better understanding of the different types of users looking for automation solutions and where things should be headed. I didn't come out of Puppet wanting to write another configuration management system (or breathe life into RRS); I actually went on to push Puppet in a few future workplaces as well! However, I did note while I was there that the language was often divisive to prospective users, and that the most important thing in automation tooling, simplicity, should have been more of a priority. I had wanted to simplify Puppet to bring it to the masses, so I think differences of opinion made it easier to want to avoid the 95% travel gauntlet :).

After leaving Puppet and working at new places, time and again automation tools would come up. I tried, in fact, to convince people to use Puppet for automation tooling, and usually didn't succeed. One of those places was a mid-size hosted web application with an IT staff of about 3 serving about (I'm remembering, so I could be off) 300,000 simultaneous web users. It was an automated homework grading solution -- pretty cool really, even if students don't tend to like it! When it came time to do a rolling update, the process involved everyone staying in a conference room for hours, pushing buttons in turn, and ... because humans were involved, usually some step in the process didn't go as planned or got skipped. I realized nothing existed that really solved this problem well for them. They were also unable to adopt DevOps-style automation practices due to the complexity of the tools; in-house scripting was viewed as easier, but it wasn't more reliable. This gave me ideas, and I also got to meet some great IT folks and developers, some of whom would go on to do some really great things on the local DevOps scene, and I'm thankful for that.

Ansible didn't really start then either, though the ideas were coming together quickly! Eventually it was inevitable. While part of starting Ansible was to show the world there was an easier way -- to take those lessons from Red Hat, the field, and a long history of building systems management applications -- it was mostly to build the tool that (A) I actually wanted to use, and (B) you could not use for six months, come back to, and still remember. I couldn't bring myself to settle for the tools we had to use; it was too frustrating. As a developer myself, I wanted to write development code, not spend 50% of my time fighting with the automation tooling and have the automation itself be a source of frustration. I wanted to help all of these IT environments I was finding myself in, and also help myself as a consumer of those environments.

So Ansible began as a project sometime in February of 2012. It took off pretty quickly due to a lot of sysadmins and developers knowing me from Twitter and previous work, and in particular, people like Seth from Fedora Infrastructure being among its first adopters, replacing their Puppet automation with Ansible, with Fedora being a great sounding board and test bed. Other folks, like Dag Wieers, were instrumental with early input -- and many people came up to me and said they were just about to build something very similar. Somehow the prevailing way of approaching automation had made new approaches controversial, and it just took daring to say, "I think, guys, there might be a better way". So we did it.

Ansible grew further in a very grassroots way, and continues to spread that way. Like Func, it pursues a "batteries included" philosophy, allowing everyone to contribute to the main modules and forming a really vibrant community of users. It pulls together lessons learned from projects like Cobbler, Func, virt-factory, and Puppet, and from various commercial things and projects I encountered later in the field. So yes, provisioning, configuration, app deployment -- there was a need to unify these tools so we could use one thing, not three or four.
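To give a flavor of what that unification looked like in practice, here's a minimal sketch of an early-style playbook -- the hostnames, packages, and file paths are illustrative, not from any real deployment:

```yaml
# A short playbook covering configuration and service management in one run.
# "webservers" is a hypothetical inventory group; httpd/site.conf are examples.
- hosts: webservers
  tasks:
    - name: ensure apache is installed
      yum: name=httpd state=present
    - name: deploy the site configuration
      copy: src=files/site.conf dest=/etc/httpd/conf.d/site.conf
    - name: make sure apache is running
      service: name=httpd state=started
```

The point was that a file like this was readable enough to come back to after six months away, and one tool could handle the steps that previously needed a provisioning tool, a config management tool, and a pile of ad hoc scripts.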

After Ansible was already started, but before AnsibleWorks, I would also do some work for a major networking vendor, where I thought I'd been hired to do a lot of committing directly on OpenStack. It turned out our group of 15 or so people spent about six months trying to automate OpenStack with Puppet, and I spent a lot of time banging my head into a desk. (Later, once AnsibleWorks was founded, one of our Solutions Architects built content to automate OpenStack, similar in scale, in a single week, by himself!) This further fueled my resolve. I didn't want to be stuck encountering automation systems I didn't like forever; I wanted to be liberated. It was about this time that things really started to take off on GitHub, and not long after, we decided to form the company!

Ansible is now going full steam; we've got a good-sized company supporting it and, more importantly, building great products on top of it, and Ansible is currently the most starred and forked configuration management tool on GitHub. Our future is in continuing to incorporate great ideas from everywhere, and continuing to make IT management as simple as it can possibly be. Ansible was arrived at by a unique series of events, which one might as well call fate. It seems it couldn't *not* be created given that road. I'm very thankful for everyone along the way who has contributed to those ideas, helped share and build experience, and made it possible. I'm even more thankful for all the amazing things people are about to do.

As Microsoft said, "Where Do You Want To Go Today?".

I prefer how Pinky from "Pinky and the Brain" said it, though: "Gee Brain, what do you want to do tonight?"

Brain's response to Pinky is, naturally -- "Same thing we do every night, Pinky -- try to take over the world".
