Is DevOps yet another "for profit" technocult

proposing "salvation" from the current IT datacenter difficulties ?

Slightly critical overview of some questionable aspects of DevOps

Version 2.7 (Jul 23, 2020)

You have all read the typical advantages cited for the DevOps methodology: it supposedly helps you deliver products faster, improves company profitability (of course ;-), ensures continuous integration (without asking the key question: is this a good idea?), removes roadblocks from your releases (without changing the level of incompetence of the management ;-), and gives you a competitive advantage (without changing the quality of developers and management, which is often dismal).

Typically, fads like DevOps run for a decade or so before they are rejected and forgotten. We are probably in the middle of this path right now.

These claims should alert any seasoned IT professional, because they are very similar to the claims of previous IT fads that are now completely or half forgotten (DevOps is a Myth):

In the same way we see this happening with DevOps and Agile. Agile was all the buzz since its inception in 2001. Teams were moving to Scrum, then Kanban, now SAFE and LESS. But Agile didn’t deliver on its promise of better life. Or rather – it became so commonplace that it lost its edge. Without the hype, we now realize it has its downsides. And we now hope that maybe this new DevOps thing will make us happy.

If we are to believe DevOps advocates, we should accept that it magically invalidates the Peter Principle, Parkinson's law and several other realistic assessments of the corporate environment and creates a software paradise on Earth, a kind of software communism as in "From each according to his ability, to each according to his needs - Wikipedia" :-) In reality, it often creates the environment aptly depicted in The Good Soldier Svejk (the book that the NYT put in the category of Greatest Books Never Read -- or hardly read by Americans -- and which served as an inspiration for "Catch 22" and Kurt Vonnegut's "Slaughterhouse-Five"). There is something Svejkian about Heller's Yossarian, the ordinary guy trapped in the twisted logic of army regulations. Yossarian is smart enough to want to escape the war by pretending he's crazy -- which makes him sane enough to be kept in the army. Hasek's piercing satire describes the confrontation of this ordinary man with the chaos of war, even if that chaos is cloaked in military discipline. What was a tragedy in WWI is now replayed as a farce in large corporate environments. Including the identity craze -- the tangled politics of the Czechs, Austrians and Hungarians in the Habsburg empire ;-) -- and the problem of incompetent management, as in "He may be an idiot but he's our superior and we must trust that our superiors know what they are doing; there's a war on."

There were at least a half-dozen previous attempts to achieve nirvana in the corporate datacenter and, especially, in enterprise software development, starting from the "verification revolution" initiated by Edsger W. Dijkstra, who was probably one of the first "cult leader" style figures in the history of programming. It all basically comes down to this: some IT companies (often for a short period of time) somehow achieve better results than others in software development. Methodologies like DevOps, Agile, etc. do not reflect reality. They are sets of carefully constructed myths. But they are useful as a way to attribute accidental success to the management, to avoid the sacking of management, and to soothe customer frustration when we are forced to admit that the delivered software is a mess and does not behave as the customer expected. It is just a way of pushing the responsibility down from the most powerful to the least powerful -- every time. It is an easier path than admitting that the quality of management was low and that the software was developed in a chaotic and unprofessional manner.

DevOps is an important secular religion (techno-cult), which changes the way software is delivered in corporations (often by completely botching the whole process), but as a religion it can ignore failures and concentrate on myths. The emergence of a DevOps clergy greatly helps.

OK, containers are neat (pioneered by FreeBSD jails and later by Sun in Solaris 10 (2005)), and putting your stuff in the cloud can, for certain "episodic" workloads, be neat (pioneered by Amazon almost a decade and a half ago). But the key reason for all this hoopla around DevOps is that cutting infrastructure costs and outsourcing is honey for higher management (as it drives up their bonuses). And this pretty mercantile consideration is the real driver of DevOps: often it is just a smoke screen for outsourcing (in accounting terms, this is shifting CapEx to OpEx, not much more).

The only consensus regarding the whole DevOps hoopla is that it is "a good thing". Nobody agrees on the details of a precise definition. This should be clear to anyone who reads articles about DevOps. It is by and large a slick marketing trick for certain companies.

Still, large and successful companies such as Netflix and Amazon supposedly practice DevOps (and in the case of Netflix heavily advertise it). While there are multiple definitions of DevOps, the definition usually includes the usage (typically in a cloud environment) of the following methodologies (DevOps - Wikipedia):

Most of those are "no-brainers" and should be used by any decent software development organization, but some are questionable: more frequent releases are a good idea only for minor releases and bug fixes, never for major releases. They tend to freeze the current (often deficient) architecture and prevent the necessary architectural changes.

A very important tip here is that the DevOps definition is fuzzy by design, as this is mainly a marketing campaign for outsourcing IT infrastructure (and a very successful one), not a technological revolution (although technological progress, especially in hardware and communications, makes some things possible now that were unthinkable before 2000; smartphones alone are a real game changer).

That means that you have a certain flexibility with what to call DevOps ;-) In other words, you can include something that you like and claim that it is essential for DevOps. It's a bad cook who can't name the same cutlet with 12 different names ;-) And you can even teach this thing to lemmings as a part of DevOps training, joining the cult. You can't even imagine how gullible DevOps enthusiasts are and how badly they want additional support.

TIP: The DevOps definition is quite fuzzy. It is mostly a "this is a good thing" type of definition. So you can include something that you like and claim that it is essential for DevOps. It's a bad cook who can't name the same cutlet with 12 different names ;-) And you can even teach this thing to lemmings as a part of DevOps training.

Out of those seven "sure" things, continuous delivery is probably the most questionable idea, as sticking to it undermines efforts to revise the architecture of the packages. It essentially tends to "freeze" the current architecture "forever". As such it is a very bad thing, a fallacy. Moreover, "continuous delivery" is nothing new, as most software development projects already provide access to beta versions of the software. Providing beta versions is just another name for continuous delivery :-) But only "beta addicts" use them in a production environment, so the goal of this whole exercise is to engage users in testing and detecting the bugs.

As providing beta versions is just another name for continuous delivery :-), you can use this fact to your advantage.

The same is true of automated testing, which is now sold under the umbrella of "continuous testing". Compiler developers have used it since the 1960s (the IBM/360 compiler development efforts), and Perl developers created an automated test suite for this purpose decades ago. The key observation here is that while it is "a very good thing", it is not that easy to implement properly outside several narrow domains, such as programming languages, web interfaces and the like. Again, in compiler development this technology has been used since the 60s. Tools that drive a program through its command-line interface and check the output have existed for decades (Expect being the classic example), automatic testing of web interfaces started in the mid-1990s (when several powerful commercial tools were created and marketed), etc.

Most constructive ideas associated with cloud computing were used in computational clusters since the late 1990s or so. They were reflected in Sun's concept of the "grid", which originated with their purchase in 2000 of Gridware, Inc., a privately owned commercial vendor of advanced computing resource management software with offices in San Jose, Calif., and Regensburg, Germany. Later that year, Sun offered a free version of Gridware for Solaris and Linux, and renamed the product Sun Grid Engine (Sun Grid as a cloud service was launched in March 2006, 14 years ago).

Those ideas revolve around Sun's earlier slogan "the network is the computer" and include the idea of using a central management unit for a large group of servers or the whole datacenter (like the "headnode" in a computational cluster), central monitoring, logging, and parallel execution tools to deliver changes to multiple servers simultaneously.

Similar ideas were developed by Tivoli under the name "system management software services" starting from 1989 (IBM acquired Tivoli in 1996). According to Wikipedia (Tivoli Software):

The other important (and rather dangerous) aspect of DevOps is the attempt to eliminate, or at least diminish, the role of sysadmins in the datacenter. Of course, developers have always dreamed of getting root privileges. That simplifies many things and cuts a lot of red tape ;-). But the problem here is the overcomplexity of modern Linux. Major enterprise Linux distributions such as Red Hat and SUSE are tar pits of overcomplexity and can swallow naive developers who think that they can cross them alive ;-)

The idea of a DevOps engineer who wears two hats, that of a sysadmin and that of a programmer, is a fallacy. Each field is now complex enough to require specialization. There is no way to measure whether one person is more of a DevOps engineer than another, because the balance between knowledge of a particular programming toolset and knowledge of Linux as an operating system is very tricky and depends on job responsibilities.

Most sysadmins worth the name are quite proficient in scripting, so they can instantly be renamed DevOps engineers. That allows them to join the techno-cult, but changes nothing: most sysadmins do not have enough time to study their favorite scripting language in depth and operate with a small subset of it.

The situation with programmers is more complex. In most cases they only imagine that they know Linux. In reality they know such a small subset of sysadmin knowledge that they are unable to pass even a basic sysadmin certification such as Red Hat RHCSA. This fact can be profitably exploited, giving you a meaningful activity under the flag of DevOps ("if you can't beat them, join them") instead of the useless and humiliating beating of the drum and marching with the DevOps banner -- participating in senseless DevOps training sessions. I once taught elements of Bash scripting to a group of researchers under the flag of DevOps training :-)

Here is one tip on how to deal with ambitious (and usually reckless) developers who try to obtain root access to the servers under the flag of DevOps. You should say that yes, you will be glad to do this, but only after they pass such an elementary (for them) test as the RHCSA certification, and that you are confident that for such a great specialist it is not a big deal. Usually the request ends at this point ;-) If you strongly dislike the person, you can add the Bash test from LinkedIn to the mix, or something similar :-).

TIP: If ambitious (and usually reckless) developers try to obtain root access to the servers under the flag of DevOps, say that yes, you will be glad to do this, but only after they pass the RHCSA certification.

In reality, what kills the idea once and for all is the complexity of modern operating systems. With the current complexity of RHEL (as of RHEL 7), the idea that a regular software developer can master this level of complexity is completely fallacious, unless you can implant a second head into the guy (spending substantial money on classes and training can help, but a lot of sysadmin skill is based on raw experience, including painful blunders and the like).

This is especially true for the handling of disasters in the datacenter, SNAFUs in military jargon. Also, the idea that sysadmins can raise their programming skills to the level of professional developers in, say, Python does not take into account that many sysadmins became sysadmins because they did not want to become developers, and are happy writing small scripts in Bash and AWK.

Additional layers of complexity in the form of Ansible or Puppet help only in situations when everything works OK. Please note that you can use Ansible as a "parallel ssh" for the execution of ad hoc scripts, so you can adopt it without much damage (a minimal sketch follows).
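Here is what that "level zero" usage looks like in practice; a minimal sketch, assuming a plain static inventory (the hostnames, group name and script path below are hypothetical):

    # /etc/ansible/hosts (INI inventory; hostnames are placeholders):
    #   [webservers]
    #   web01.example.com
    #   web02.example.com
    #   web03.example.com

    # Run a single command on every host in the group, 10 forks in parallel
    ansible webservers -m shell -a 'uptime' -f 10

    # Push and execute an existing local script, no playbook required
    ansible webservers -m script -a '/root/bin/check_disks.sh'

Used this way, Ansible behaves as a slightly more verbose pdsh, and no YAML has to be written.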

But it does add a level of indirection. As soon as a serious problem arises, you need people capable of going down to the level of the kernel and seeing what is happening. How a person writing scripts in Python can do this is anybody's guess. As Red Hat support was by and large destroyed and now, by default, is not much more than a query to the Red Hat knowledge base, you face serious problems with downtime. As an experiment, create an LVM volume consisting of two disk arrays (PV1 and PV2). Then fail two disks in the RAID5 array backing PV2 (by simply removing them). Now open a ticket with Red Hat and see how Red Hat will help you to restore the data on the LVM volume (PV1 is still intact); a sketch of the lab setup follows.
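For the curious, here is a rough sketch of how such a lab setup could be built with Linux software RAID and LVM; all device names are hypothetical, and this should obviously never be run on a machine that holds data you care about:

    # Two RAID5 arrays that will become PV1 and PV2
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg

    # One volume group and one logical volume spanning both arrays
    pvcreate /dev/md1 /dev/md2
    vgcreate vg_data /dev/md1 /dev/md2
    lvcreate -l 100%FREE -n lv_data vg_data
    mkfs.xfs /dev/vg_data/lv_data

    # Simulate a double-disk failure in the second array (fatal for RAID5)
    mdadm /dev/md2 --fail /dev/sde --remove /dev/sde
    mdadm /dev/md2 --fail /dev/sdf --remove /dev/sdf

    # Now try to get the data on lv_data back with only PV1 intact...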

Switching to the use of VMs solved certain problems with maintenance and troubleshooting (the availability of a baseline to which you can always return) while creating other problems due to shared networking and memory on such servers (which makes a VM mostly a glorified workstation as far as computing power is concerned, but with the current capabilities of workstations this is often more than enough).

So this trend can help (although VMware will appropriate all your savings in the process ;-), but here again the problem of the increased level of complexity might bite you sooner or later. The same is true for Azure, the Amazon cloud, etc. For anything other than rare high-peak, low-trough workloads and experimentation, they are prohibitively expensive, even for large corporations that can negotiate special deals with those providers.

As a side note, I would like to mention that the problem of overcomplexity of the Linux environment includes the increased complexity of Red Hat with version 7. I would say that with version 7, RHEL is over the heads of even many sysadmins when it comes to troubleshooting serious problems. IMHO it was a major screw-up -- adding systemd (a clearly Apple-inspired solution) means that many old skills (and books) can't be reused. New skills need to be acquired by sysadmins, and this is neither a cheap nor a quick process. They now try to mask this complexity with Ansible and the Web console, but that does not solve the problems, it only sweeps them under the carpet.

Naive developers who think that system administration is "nothing special" and eagerly venture into this extremely complex minefield on the basis of a college course in Ubuntu and running an Ubuntu box at home very soon learn the meaning of the terms SNAFU and "horror story". As in wiping, say, a couple of terabytes of valuable corporate data with one click (or one simple script). In this case DevOps converts into Oops...

You can just cut the red tape by providing the most ambitious and capable developers (and only them; you need to be selective and institute a kind of internal exam for that) root access to virtual instances, as crashing a virtual machine is a less serious event than crashing a physical server -- you always have (or should have) a previous version of the VM ready to be re-launched in minutes (a sketch with snapshots follows this paragraph). But beyond that point, God forbid. Root access to a real medium-size or large server (say, a 2-socket or 4-socket server with 32GB-128GB of RAM and 16TB or more of storage) running an important corporate application should be reserved for people who have at least an entry-level Linux admin certification such as RHCSA, and (which is especially important) hands-on expertise with backups.
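On a KVM/libvirt host that "known good baseline" amounts to a couple of commands; a minimal sketch, assuming a qcow2-backed guest (the domain and snapshot names are hypothetical):

    # Take a named snapshot before handing out root
    virsh snapshot-create-as devbox01 pre-root-access \
          --description "baseline before the developers got root"

    # ...developer experiments, something breaks badly...

    # Roll the instance back to the known-good state in minutes
    virsh snapshot-revert devbox01 pre-root-access --force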

During his career, each sysadmin goes through his own set of horror stories, but "missing backup" (along with creative uses of rm) is probably the leitmotif of most of them. Creating new horror stories in a particular organization is probably not what higher management, with their quest for bonuses and the neoliberalization of IT (the neoliberal "shareholder value" mantra means converting IT staff into contractors and outsourcing a large part of the business to low-cost countries, say, Eastern European countries), meant by announcing their new plush DevOps initiative.

Good understanding of the Linux environment now requires many years of hands-on, 10-hours-a-day work experience (exactly how many years, of course, depends on the person). The minimum for reaching "master" level in a given skill is estimated to be around 10,000 hours, and the earlier you start the better. Please note that many sysadmins come from a hobbyist background and started tinkering with hardware in high school or earlier. So, a couple of years after graduating from college they often have almost ten years of experience. And taking into account the Byzantine tendencies of mainstream programming languages (and these days you need to know several of them, say Bash, Python, and JavaScript or Java), 30,000 hours is a more reasonable estimate (one year is approximately 3,000 working hours). Which gives the formula 4+6 (four years of college and five to six years of on-the-job self-education) to get up to speed in any single specialty (either programmer or system administrator). When you need to learn the second craft, the process, of course, can go faster (especially for programmers), but the 10K hours rule still probably applies (the networking stuff alone probably needs that amount of hours).

The idea of giving root access to an untrained developer who never passed, say, the RHCSA certification is actually a much bigger fallacy than people assume. If you want to be particularly nasty, in BOFH fashion, you can give root access on a business-critical server to several developers (of course, only under pressure and with written approval from your management, if you are not suicidal). If you survive the fallout from the subsequent SNAFU, for example the wiping out of 10TB of genomic data with one rm command (and you will be blamed, so you need to preserve all the emails with approvals), then you can not only remove all administrative access from those "victims of overconfidence," but also get management to officially prohibit this practice. At least until the memory of the accident fades and another datacenter administration decides to repeat old mistakes.

The key slogan of the DevOps movement is "all power to the developers" ;-). But while the idea is noble, the goal is completely unrealistic. No amount of automation can replace the role of a specialist. Ansible and all the other fashionable toys are good when the system is running smoothly. As soon as you hit a major software or hardware problem and need to operate at a lower level of abstraction, getting into the nitty-gritty details of the configuration, they are not only useless but harmful.

As a test, add an echo statement like echo "Hello DevOps" to .bashrc on your account and then ask some DevOps zealot to help troubleshoot the resulting problem with scp (scp stops working, but ssh to such a box still works); a sketch of the experiment follows.
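A minimal sketch of the experiment, assuming a classic OpenSSH setup (user@box is a placeholder): anything that .bashrc prints for non-interactive shells pollutes the scp data channel, while interactive ssh looks perfectly healthy.

    # On the remote account, add an unconditional echo to .bashrc
    echo 'echo "Hello DevOps"' >> ~/.bashrc

    ssh user@box                      # works: you just see the extra greeting
    scp user@box:/etc/hostname /tmp/  # fails or produces garbage

    # The usual fix is to guard such output so it runs only in interactive shells:
    #   [[ $- == *i* ]] && echo "Hello DevOps"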

But, at least, Ansible is useful for automating routine tasks, although even in this area it is overrated and in reality does not provide much gain over the much simpler and more reliable pdsh. All those examples with the creation of accounts and other stuff are pretty artificial and do not pass the smell test. Some applications built on Ansible might still be useful (for example, a hardware inventory application), but only in the sense that "with enough thrust pigs can fly". See Unix Configuration Management Tools.

In any case, as a smoke screen to protect yourself from accusations of sabotaging DevOps by DevOps zealots, Ansible deployment makes certain sense. It is clearly a part of the DevOps toolkit, and it can be used strictly as a clone of pdsh until the need for more complex functionality arises.

Even in the cloud, as soon as you try to do something more or less complex, you need a specialist completely devoted to learning that part of the infrastructure. One problem that I have noticed is that most developers have a very weak (often close to zero) understanding of networking. That is their Achilles heel, and that is why they often suggest, and sometimes even implement, crazy WAN-based solutions (aka cloud solutions), replacing tried and true internal applications.

According to Wikipedia, the Fallacies of Distributed Computing are a set of common but flawed assumptions made by programmers when developing distributed applications. They originated with Peter Deutsch (who was at Sun Microsystems at the time), and his "eight classic fallacies" describe false assumptions that programmers new to distributed applications typically make.

They can be summarized as follows (Wikipedia):

The network is reliable.
Latency is zero.
Bandwidth is infinite.
The network is secure.
Topology doesn't change.
There is one administrator.
Transport cost is zero.
The network is homogeneous.

There is also another similar, but more entertaining, document: RFC 1925, known as The Twelve Networking Truths.

The Fundamental Truths

(1) It has to work.

(2) No matter how hard you push and no matter what the priority, you can't increase the speed of light.

(2a) (corollary). No matter how hard you try, you can't make a baby in much less than 9 months. Trying to speed this up *might* make it slower, but it won't make it happen any quicker.

(3) With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead.

(4) Some things in life can never be fully appreciated nor understood unless experienced firsthand. Some things in networking can never be fully understood by someone who neither builds commercial networking equipment nor runs an operational network.

(5) It is always possible to agglutinate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.

(6) It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture) than it is to solve it.

(6a) (corollary). It is always possible to add another level of indirection.

(7) It is always something

(7a) (corollary). Good, Fast, Cheap: Pick any two (you can't have all three).

(8) It is more complicated than you think.

(9) For all resources, whatever it is, you need more.

(9a) (corollary) Every networking problem always takes longer to solve than it seems like it should.

(10) One size never fits all.

(11) Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.

(11a) (corollary). See rule 6a.

(12) In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

Those WAN blunders cost corporations serious money with zero or negative return on the investment. It is easy to suggest transferring, say, 400TB across the Atlantic when you do not understand the size of the pipe between datacenter No.1 and datacenter No.2 in a particular corporation (a back-of-the-envelope calculation follows). Or to implement a monolithic 5 PB "universal storage system" which becomes a single point of failure, despite all IBM assurances that GPFS is indestructible and extremely reliable; in this case a serious GPFS bug can produce a failure after which you can kiss several terabytes of corporate data goodbye. If you are lucky, most of them are useless or duplicated somewhere else, but still...
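The arithmetic is simple enough to do on a napkin; here it is as a tiny shell sketch (the link speeds are assumptions, and real WAN links are shared, lossy, and never fully saturated):

    awk 'BEGIN {
        bits = 400 * 10^12 * 8;                  # 400 TB expressed in bits
        for (gbps = 1; gbps <= 10; gbps *= 10) {
            days = bits / (gbps * 10^9) / 86400;
            printf "%2d Gbit/s: %6.1f days\n", gbps, days;
        }
    }'
    # Roughly 37 days at a dedicated 1 Gbit/s and 3.7 days at 10 Gbit/s --
    # before retransmissions, contention, and the inevitable interruptions.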

The term "Black swans" is the name of a very rare outlier events that has severe impact. In production systems, these are problems with software or hardware that you do not suspect exits until it is way too late. When they strike they can't be fixed quickly and easily by a rollback or some other standard response from your vendor tech-support playbook. They are the events you tell new sysadmins years after the fact.

Additional and more complex automation increases the probability of black swans, it does not diminish it -- "Complex systems are intrinsically hazardous and brittle systems." Once you have multiple automation systems, their potential interactions become extremely complex and impossible to predict. For example, your automation system can restart servers that were shut down for maintenance in the middle of a firmware update.

It is easy for a developer to buy a two-socket server with a Dell professional support configuration included and then, four years down the road, discover that RAID5 needs monitoring and that the failure of two disks in a RAID5 configuration is fatal. As well as the fact that Dell professional services did not include a spare in the RAID5 configuration, because the developer, in his naivety, demanded as much disk space as possible (high-end Dell controllers are capable of supporting RAID6 with a spare, which is a reliable configuration, as even the failure of two disks does not lead to the destruction of the disk array and the potential loss of data; see the sketch below). And if RAID5 loses two disks, you are probably on your way to enriching data recovery companies such as OnTrack ;-)
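For comparison, here is what that "reliable" configuration looks like when sketched with Linux software RAID instead of a hardware controller (device names and the alert address are hypothetical); the point is the second parity disk, the hot spare, and the monitoring that naive owners skip:

    # RAID6 over six disks plus one hot spare: survives two simultaneous failures,
    # and the spare starts rebuilding automatically after the first one
    mdadm --create /dev/md0 --level=6 --raid-devices=6 --spare-devices=1 \
          /dev/sd[b-h]

    # The part that actually saves you: get mail when a disk dies
    mdadm --monitor --scan --daemonise --mail=sysadmin@example.com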

Those "horror stories" can be continued indefinitely, but the point is that the level of complexity of modern IT infrastructure is such that you do need to have different specialists responsible for different parts of IT. And while separation of IT roles into developer and sysadmin has its flaws and sysadmins already benefit from leaning more programming and developer more about underlying operating system, you can't jump over mental limitation of mere mortals. Still it is true that some, especially talented, sysadmins are pretty capable to became programmers as they already know shell and/or Perl on the level that makes learning of a new scripting language a matter of several months. And talented developers can learn OS on the level close to a typical sysadmin in quite a short time. But they are exceptions, not a rule.

In any case, the good old days, when a single person like Donald Knuth on a night shift at the university datacenter in the late 1950s and early 1960s was a programmer, system operator and OS engineer simultaneously, are gone. That's why certifications like Certified AWS Specialist, and similar ones for other clouds like Azure, started to proliferate, and holders of those certifications are in demand.

Attempts by DevOps to reverse history and return us to the early days of computing, when the programmer was simultaneously a system administrator (and sometimes even a hardware engineer ;-), are doomed to failure. The costs of moving everything to the cloud are often exorbitant, especially when the issues with the WAN are not understood beforehand, and this fiasco is covered with PowerPoint presentations designed to hide the fact that after top management got their bonuses, enterprise IT entered a state of semi-paralysis. Yes, the cloud has its uses, but it is far from being a panacea.

The other aspect here is the question of loyalty. By outsourcing the datacenter, as DevOps recommends, you essentially rely on the kindness of strangers in case of each more or less significant disaster -- strangers who have zero loyalty to your particular firm, no matter how the SLAs are written. In other words, the drama of the outsourced helpdesk is now replayed at a higher level, and with more dangerous consequences.

Software engineering proved to be very susceptible to various fads which resemble pseudo-religious movements (with high levels of religious fervor).

Another warning sign that DevOps adepts are not completely honest is their lofty goals. Nobody in his right mind would object to achieving the stated DevOps goals:

The goals of DevOps span the entire delivery pipeline. They include: Improved deployment frequency;

Faster time to market;

Lower failure rate of new releases;

Shortened lead time between fixes;

Faster mean time to recovery (in the event of a new release crashing or otherwise disabling the current system).

The only question is: can DevOps really achieve them, or is it mostly hype? Listen to this song to find out ;-)

Historically, software engineering has proved to be very susceptible to various fads which usually take the form of pseudo-religious movements (with high levels of religious fervor). Prophets emerge and disappear with alarming regularity (say, every ten years or so -- in other words, a period long enough to forget about the previous fiasco). We already mentioned the "verification revolution", but we can dig even deeper and also mention the "structured programming revolution" with its pseudo-religious crusade against goto statements (while misguided, it at least had the positive effect of introducing additional high-level control structures into programming languages; see the historically important article by Donald Knuth, Structured programming with go to statements, cs.sjsu.edu).

The verification hoopla actually damaged the careers of several talented computer scientists, such as David Gries (of Compiler Construction for Digital Computers fame, who also participated in the creation of the amazing, brilliant teaching PL/C compiler designed by Richard W. Conway and Thomas R. Wilcox). It also damaged the long-term viability of the achievements of Niklaus Wirth (a very talented language designer, who participated in the development of the Algol family of languages and later developed Pascal and Modula). He is also known for Wirth's law: "Software is getting slower more rapidly than hardware becomes faster."

Netflix is essentially a dismal movie database with some added blog features (reviews). But while Amazon at least tries to implement several of the most relevant types of searches, Netflix does not. For rare movies it is rather challenging to find what you are interested in, unless you know the exact title ;-). Movies are not consistently indexed by director and major stars. The way they deal with reviews is actually sophomoric. But the colors are nice, no question about it :-)

Amazon is much better, but in certain areas it still has "very average" or even below-average quality. One particularly bad oversight is that the reviews are indexed only along several basic dimensions (number of stars, popularity (upvotes) and chronological order), not by "reviewer reputation" (aka karma), number of previous reviews (first-time reviewers are often fake reviewers), the date of the first review written by the reviewer, and other more advanced criteria. For example, I can't exclude reviewers who wrote fewer than two reviews and whose first review was written less than a year ago.

You also can't combine criteria in a search request. That creates difficulties in detecting fake reviewers, which is a real problem in the Amazon review system. As a result, it requires additional, often substantial, work to filter out "fake" reviews (bought reviews, or "friends' reviews" produced by people who are not actual customers). They might even dominate the Amazon rating for some products/books.

In any case that "effect" diminishes the value of Amazon rating and makes Amazon review system "second rate". Recently Amazon developers tried to compensate this by "verified purchase" criteria, but without much success, as, for example, in most cases the book can be returned and unopened product can be returned too. While some fake reviews are detectable by the total number of reviews posted (often this number is one ;-). or their style, in many cases it is not possible easily distinguish "promotional campaign" of the author of the book (or vendor of the product) from actual reviews. Shrewd companies can subsidize purchases in exchange for positive reviews. In this sense Amazon interface sucks and sucks badly.

The Amazon cloud has its uses and was generally a useful innovation, so all the DevOps hoopla about it is not that bad, but it is rather expensive and is suitable mostly for loads with huge short-term peaks and deep, long valleys -- for example, genomic decoding. The cost of running the infrastructure of a medium-size firm (200-300 servers) on the Amazon cloud is comparable with the cost of running a private datacenter with hardware support outsourced and local servers providing local services (the idea of autonomous remote servers) run by "local" staff, who definitely have higher loyalty to the corporation and who can be more specialized. In this case all servers can also be managed from a central location, which creates synergies similar to the cloud, and the cost of personnel is partially offset by the lower cost of WAN connections due to the provision of several local services (email, file storage, local knowledge base, website, etc.) by remote autonomous datacenters (using IBM terminology).

WAN access is the Achilles heel of cloud infrastructure and costs a lot for medium and large firms with multiple locations. Remote management tools are now so powerful that a few qualified sysadmins can run a pretty large distributed datacenter with reliability comparable to (or higher than) the Amazon or Microsoft clouds.

Especially if the servers are more or less uniform and can be organized into a grid. Of course, you need to create a spreadsheet comparing the costs of the two variants (a simple back-of-the-envelope comparison follows), but generally a server with two 24-core CPUs, 128 GB of RAM and several terabytes of local storage can run a lot of services locally without any (or with very limited) access to the WAN, and it costs as much as one to two years of renting similar computational capabilities on the Amazon cloud or Azure, while coming with a five-year manufacturer warranty.
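Purely illustrative arithmetic -- every number below is an assumption, not a quote from any price list -- comparing buying such a box outright with renting a comparable instance around the clock:

    awk 'BEGIN {
        server = 15000;        # assumed purchase price, 5-year warranty included
        hourly = 2.00;         # assumed on-demand price of a comparable instance
        yearly = hourly * 24 * 365;
        printf "cloud, per year  : $%8.0f\n", yearly;
        printf "server, one-off  : $%8.0f (about %.1f years of renting)\n",
               server, server / yearly;
    }'
    # With these assumptions the box pays for itself in about a year of 24x7 use;
    # the comparison flips for workloads that run only a few weeks per year.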

Although details are all that matters in such cases, and they are individual to each company, the cloud is not the only solution for the problems that plague the modern datacenter. In many cases it is the wrong solution, as outsourcing deprives the company of the remnants of its IT talent, and it becomes hostage to predatory outsourcing firms. You just need to wait a certain amount of time to present in-house infrastructure as the "new" and revolutionary solution for the problems caused by DevOps and the cloud ;-) Again, DevOps is often a smoke screen for outsourcing, and management priorities change with time after facing the harsh reality of this new incarnation of the "glass datacenter." People forget how strongly the centralized IBM mainframe environments (aka "glass datacenters") were hated in the past. Often the executives who got the bonuses for this "revolutionary transformation" are gone in a couple of years ;-). And the new management becomes more open to alternative solutions as soon as they experience the reality of the cloud environment and the level of red tape, nepotism and cronyism rampant in Indian outsourcing companies ("everything is a contract negotiation"). Some problems just can't be swept under the carpet.

As large enterprise brass is now hell-bent on deploying DevOps (for obvious, mostly greedy reasons connected with outsourcing and bonuses, see below ;-), it would be stupid to protest against this techno-cult. In most cases your honesty will not be appreciated, to say the least. In other words, direct opposition will be a typical "career limiting move" even if you are 100% right (and it is unclear whether you are: the proof of the pudding is in the eating).

So you need to adapt somehow and try to make lemonade out of lemons. You can concentrate on what is positive in the DevOps agenda for now and see how the situation looks when the dust settles. There are several promising technologies that you can adopt under the DevOps umbrella (we already mentioned Ansible):

Jira and Confluence. Both typically represent an improvement in their respective areas in comparison with the products currently used by the company.

Docker. While VMs provide the same functionality, Docker containers are more lightweight, and Docker is a well-engineered application. It solves the problem of running various flavors of Linux and/or installing various complex applications in the most efficient and non-intrusive way. In other words, it is one of the rare tools that makes sense, despite the fact that it is promoted by DevOps.

Git and GitLab. While git is overhyped and is not really suitable as a Unix configuration file management tool (although attempts to manage /etc were made, see below), it is good enough for managing your scripts.

Ansible. This is a more questionable choice, but at level zero it can be used strictly for the execution of ad hoc scripts, so at least it is not harmful, despite introducing yet another domain-specific language into the mix. It is also written in Python, which is not a bad thing if you want to learn this scripting language (unfortunately, it uses yet another language, YAML, as its application-specific language, which again you can avoid by sticking to "ad hoc execution of commands" and using it as an expensive analog of pdsh :-). If you have an NFS filesystem available on the nodes, it can be more useful. Ansible can also be useful as a smoke screen of DevOps adoption. One advantage of Ansible is that it does not have clients that you need to install and maintain on all servers. Moreover, it can be used as a replacement (an overkill) for pdsh, allowing you to reuse existing skills and scripts (the same is true if you use parallel or similar tools).

Creation of a hybrid cloud. You can mix AWS (or Azure) with distributed domestic infrastructure. There are some tasks that can benefit from running in the cloud and that can be run there cost-effectively, because the cloud is expensive (especially disk storage); short-term tasks that do not need an excessive amount of storage and can be run at spot prices might make sense. For example, bioinformatics tasks in medium-sized companies (large companies typically own a datacenter, or several of them, so the calculation of benefits becomes excessively difficult, as networking, air conditioning and electricity are supplied as part of a larger infrastructure). At the same time, any researcher in a medium or large company can have a 64GB, 4.5 GHz workstation or, if you are a financial company, a personal 4-sled server with similar parameters for each sled and, say, 40 TB of storage. Even for bioinformatics with its rare peak loads, which theoretically is ideally suited to the cloud, local facilities can be competitive (while the cloud allows management to shift the capital budget into expenses and claim that they saved money). And there are many tasks that are not suitable for the cloud either due to high I/O or due to high "fast storage" (which these days means SSD) requirements.



In any case, most of us probably need to "beat the drum and march with the banner", as higher-level management has typically adopted some stupid slogan like "Cloud first" (sounds a lot like "Clowns first"). If the game is played carefully and skillfully, you can improve the life of peripheral sites dramatically under this banner, as you can improve the bandwidth connecting them to the Internet by claiming that it is needed to utilize cloud computing. You can also call the centrally managed local servers of peripheral sites a "hybrid cloud" (especially if they are organized as a grid with a central node), and as the cloud concept is very fuzzy, there is nothing wrong with this slight exaggeration :-)

The first thing you can do is negotiate some valuable training. And I am not talking about those junk courses in DevOps (although off-site, say in NYC, utilized as an extra vacation, they also have some value ;-). I am talking about, for example, getting a course in Docker and Ansible/Puppet/Chef. And if, for example, you standardize on Ansible or Puppet, you can legitimately ask for Python classes.

That is actually in line with DevOps philosophy, as according to it sysadmins need to grow into developers and developers into sysadmins, merging into one happy and highly productive family ;-) So you can navigate this landscape from the point of view of getting additional training and try to squeeze something from the stone: corporations are now notoriously tight with training expenses.

So one way to play this game is to equate some system management tool (say, Ansible), or another technology that interests you, with DevOps. It is a fake movement, so some exaggeration or deviation usually will not be noticed by the adepts.

For some reason two products of the Australian firm Atlassian, Jira and Confluence, are associated with DevOps. Unlike the majority of DevOps-associated products, they are definitely above-average software (Jira more so and Confluence less so; but they can be integrated, which adds value), typically head and shoulders above what the company is using. Replacing the current helpdesk system with Jira and the documentation system with Confluence might improve these two important areas of your environment, and they are worth trying.

Another product similar to Confluence, Microsoft Teams, also makes sense to deploy as a part of the DevOps hoopla. It is still pretty raw, but the level of integration of a web forum, file repository and wiki is interesting. It also integrates well with Outlook, which is the dominant email client in large corporations.

Docker has value in both the sysadmin area and the applications area. So you can claim that it is a prerequisite for DevOps and cite some books on the subject. Of course, it is not a panacea for many of the existing enterprise datacenter ills, but the technology is interesting, the implementation is elegant, and the approach has merits, especially if you are in a proxy-protected environment, where the installation of non-standard applications is a huge pain.

In the case of research applications, a developer can often make them run more quickly in a Docker environment, especially if the application is available on Ubuntu or Debian in packaged form, but not on CentOS or RHEL. There are some useful collections of prepackaged applications, mainly for bioscience. See, for example, BioContainers (the list of available containers can be found here).

Esoteric applications that are difficult to install, or that require specific library versions different from those used by, say, your RHEL release, can really benefit from Docker, as it allows you to compartmentalize "library hell" and use the flavor of Linux most suitable for the particular application (which is, of course, the flavor in which the application was developed). For example, many scientific applications are native to Debian, which is not an enterprise Linux distribution. Docker allows you to run instances of applications installed from Debian packages on a Red Hat server (the "run anywhere" meme), as sketched below. And if Docker is not enough, you can always fall back to a regular VM such as Xen.
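A minimal sketch of the "Debian-packaged tool on a RHEL host" case (the package, image tag and path below are illustrative, not a recommendation):

    # Run a Debian userland on the RHEL box and install the tool from Debian repos
    docker run --rm -it -v /data/project:/data debian:stable \
        bash -c 'apt-get update && apt-get install -y samtools && samtools --version'

    # Or pull a prebuilt image from the BioContainers collection
    docker pull biocontainers/samtools:v1.9-4-deb_cv1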

In any case, Docker is not a fad. It is an interesting and innovative "lightweight VM" technology, first commercially introduced by Sun in Solaris 10 in 2005 (as Solaris zones) and replicated in Linux approximately 10 years later. The key idea is that this is essentially a packaging technology -- all the "VMs" share the kernel with the base OS. It is a variant of OS-level virtualization, which produces minimal overhead in comparison with running the application as a task on a real server -- much more efficient than VMware. Linux containers, like Solaris zones, are essentially an extension of the concept of a jail (the idea of extending the jail concept to a more "full" VM environment originated in FreeBSD). It is definitely worth learning, and in some cases worth deploying widely. Anyway, this is something real, not the typical shaman-style rituals that "classic" DevOps tries to propagate (Lysenkoism first played out as a real tragedy; now it has degenerated into a farce ;-)

As most DevOps propagandists and zealots are technologically extremely stupid, it is not that difficult to deceive them. Sell Docker as the key part of the DevOps 2.0 toolkit, the quintessence of cloud technology (they like the word "cloud") which lies at the core of DevOps 2.0 -- a hidden, esoteric truth about DevOps that only real gurus know about :-) They will eat that up...

At a higher, datacenter level, you can try to push the adoption of Red Hat OpenShift, which is a kind of in-house cloud and is cheaper and more manageable than the Amazon elastic cloud or Azure, and in some cases makes sense to deploy. That might allow you to extract from management payment for a RHEL Learning Subscription. Try to equate the term "hybrid cloud" with the use of OpenShift within the enterprise. You can also point out that unless you have short peaks and long no-load periods, both Azure and AWS are pretty expensive, and that it is not wise to put all your eggs in one basket.

Git is not that impressive for Linux configuration management, but it can sometimes be used. See, for example, the etckeeper package; if you find it useful and want to use it, just claim that this is DevOps 2.0 too, and most probably you will be blessed to deploy it (a minimal sketch follows). And while it has drawbacks, it allows you to record the actions of multiple sysadmins on a server that result in changes to files in the /etc directory, treating it as a software project with multiple components. So this is far from perfect, but still a usable tool for solving the problem of multiple cooks in the same kitchen.
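A minimal sketch of that usage, assuming etckeeper is available for your distribution (on RHEL/CentOS it usually comes from EPEL; on Debian it is a stock package):

    yum install -y etckeeper
    etckeeper init                                   # creates the git repository in /etc
    etckeeper commit "baseline before DevOps experiments"

    # Later: see which of the many cooks touched the kitchen, and what they changed
    cd /etc && git log --stat
    git diff HEAD~1 -- ssh/sshd_config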

The main value here is GitLab, in that in many large enterprises it is now installed internally as a part of the DevOps wave. If so, you need to learn how to use it, as it is a useful tool for the exchange of information between distributed teams of sysadmins.

In some rare cases you might even wish to play the "continuous integration" game. If the corporate brass demands that continuous integration be implemented, you might try to adapt software like GitLab for this purpose and use GitLab "pipelines", which provide some interesting opportunities for several scripting languages such as Python and Ruby. The automated testing part is probably the most useful. While you can always write independent scripts for automated testing, integrating them into the GitLab framework is often a better deal.

But we should be aware that the key problem with "continuous integration" and the closely related concept of "continuous testing" is that, for testing scripts, the most difficult part is the validation of the output of the test: the road to hell is always paved with good intentions.

And while the idea of automated testing is good, for many scripts and software packages the implementation is complex and manpower-consuming. Actually, this is one area where outsourcing might help. So far there has been no breakthrough in this area, and much depends on your own or your developers' abilities. Regular expressions can help to check the output, but they are not a panacea; a minimal sketch of the problem follows.
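A minimal sketch of the kind of test harness a pipeline would invoke; myscript.sh, its --report option and the golden file are all hypothetical, and the two checks illustrate the trade-off: an exact diff is strict but brittle, a regex is tolerant but shallow.

    #!/bin/bash
    set -euo pipefail

    ./myscript.sh --report > /tmp/actual.out

    # Exact comparison against a stored "golden" output...
    diff -u tests/expected.out /tmp/actual.out \
        && echo "PASS: output matches the golden file" \
        || { echo "FAIL: output drifted -- a bug, or a stale golden file"; exit 1; }

    # ...or a looser regex check when the output contains volatile fields
    grep -Eq '^Processed [0-9]+ records in [0-9.]+s$' /tmp/actual.out \
        || { echo "FAIL: summary line missing or malformed"; exit 1; }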

Also, the more complex your testing script is, the more fragile it is and the less chance it has of surviving changes in the codebase. So complex testing scripts tend to freeze the current architecture, and as such are harmful.

In its essence, continuous delivery is an overhyped variation of the idea of the nightly build, which has been used for ages. If you want to play a practical joke on your developers, tell them to use Jenkins as an automated testing tool. Jenkins is a very complex tool (with multiple security vulnerabilities) that you generally should avoid. But it can be installed via a Docker container (see the sketch below). AWS has a "ready made" project, How to set up a Jenkins build server, which can save you from trouble and wasted time ;-). But often you do not need to install it at all; just associate the term Jenkins with "continuous integration" and provide a Docker container with Jenkins for them. Your advantage is that developers usually do not have a deep understanding of Linux and, especially, of virtualization issues. And usually they do not want to learn too much new stuff that takes them too far away from their favorite programming language.
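The "hand them Jenkins in a container" move is a few commands with the stock image (the ports and volume name below are the image's usual defaults, adjust to taste):

    docker volume create jenkins_home
    docker run -d --name jenkins \
        -p 8080:8080 -p 50000:50000 \
        -v jenkins_home:/var/jenkins_home \
        jenkins/jenkins:lts

    # The initial admin password needed for the first login:
    docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword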

So there is a strong, almost 100%, chance that after the period of initial excitement they will ignore this technology and stick to git and their old custom testing scripts ;-)

Ansible is definitely a less toxic solution than Puppet. It creates a politically expedient possibility to ride the DevOps hoopla without doing any harm. I do not see any innovative ideas in it, but it is a fashionable tool heavily pushed by Red Hat (which now, BTW, is an IBM company; and any seasoned sysadmin knows what that means).

Ansible is an important part of the Infrastructure as Code (IaC) hoopla. So you might be able to get some training on the wave of DevOps enthusiasm in your particular organization.

It can also be used the way pdsh or parallel are used, so all your previous scripts and skills remain fully applicable. In general it is overly complex, introduces a questionable "system automation" language, duplicates the functionality of many existing tools, and is overhyped. But in small doses and without excessive enthusiasm it is not that bad. See Unix Configuration Management Tools for a more "in-depth" discussion.

You can also use Ansible as pdsh on steroids (or continue to use pdsh while pretending that it is Ansible ;-) if the idiots in your management team bought into all this DevOps crap. Just ride the wave. Mention that it integrates well with git (another magic word that you need to know and use).

As I mentioned above, adopting Ansible or Puppet (or pretending to adopt them ;-) might allow you to get some valuable Python training.

While I am strongly prejudiced against Puppet (and consider writing Puppet automation scripts a new type of masturbation), Puppet also was/is the loudest in the DevOps hoopla (you can judge what type of company you are dealing with by the amount of DevOps hoopla and whether they advertise Agile on their pages; Puppet does both ;-). They also pretend to provide "continuous delivery", so formally you can classify Puppet as such a tool, a pretence that might perfectly suit your needs:

No one wants to spend the day waiting for someone to manually spin up a VM or twiddle their thumbs while infrastructure updates happen. Freedom comes when you get nimble, get agile, and tackle infrastructure management in minutes, not hours. Puppet Enterprise uniquely combines the industry-standard model-driven approach we pioneered with the flexibility of task-based automation. Welcome back, lunch break. ... ... ... Continuous delivery isn’t just for developers. Build, test, and promote Puppet code, high-five your teammates, and get back to doing what you love with Continuous Delivery for Puppet Enterprise. Instead of wondering if you have the latest code running, now you can streamline workflows between Dev and Ops, deliver faster changes confidently and do it all with the visibility your teams need.

By the way, many years ago one really talented system administrator created Perl for very similar (actually broader) purposes, including the attempt to make the life of sysadmins easier. And his solution is still preferable today.

While I have mentioned several useful tools that fit under the DevOps banner, there might be more. Summarizing, I think that a "creative" ad hoc interpretation of DevOps might improve your bargaining position and can serve as a draft plan for your fight against the stupid aspects of the DevOps onslaught.

It is important to try to dictate the terms of the game using your superiority in understanding Linux and your control of the datacenter servers. Depending on your circumstances and the level of religious zeal for DevOps in your organization, that might be a viable strategy against unreasonable demands from developers. In any case, do not surrender without a fight, and please understand that a direct frontal attack on this fad is a bad strategy.

Now, if some developer brainwashed with DevOps tries to enforce his newly acquired (via the DevOps hoopla) right to manage the servers ("we are one team"), you can instantly put him in his place by pointing out that such weaklings do not know DevOps 2.0 technology and as such do not belong to this "Brave New World." Send all such enthusiasts into Docker exile, pointing out that control of physical servers is so yesterday. It is a pretty effective political line that gives you better chances of survival than a direct attack on this fad.

Still, you might also need to attend a couple of brainwashing sessions (aka the "DevOps Learning Bundle") to demonstrate your loyalty.

I think there are three concepts very relevant to the discussion of DevOps:

You have probably heard a little bit about so-called high demand cults. They exist in the techno-sphere too, and while the features are somewhat watered down, they are still recognizable (High Demand Cults):

Remember ... A group does not have to be religious to be cultic in behavior. High demand groups can be commercial, political and psychological. Be aware, especially if you are a bright, intelligent and idealistic person. The most likely person to be caught up in this type of behavioral system is the one who says "I won't get caught. It will never happen to me. I am too intelligent for that sort of thing." The following statements, compiled by Dr. Michael Langone, editor of Cultic Studies Journal, often characterize manipulative groups. Comparing these statements to the group with which you or a family member is involved may help you determine if this involvement is cause for concern.

Love bombing: A fake display of excessively zealous, unquestioning commitment and adulation toward new members. Expensive gifts, etc. This is a typical practice of sociopaths, and many cult leaders are sociopaths.

Isolation: The group leader instills a polarized, "we-they" mentality and tries to isolate members from their previous contacts. Members are encouraged or required to work with and/or socialize only with group members. Severing of ties with past, family, friends, goals, and interests -- especially if they are negative towards or impede the goals of the group.

High demands on members' time, intensity of contacts: Intense involvement in cult activities and work, along with isolation from others. Behaviour is closely prescribed and carefully supervised. Members are expected to devote inordinate amounts of time to the group and group activities.

Manipulation: The group's leadership induces guilt feelings in members in order to control them. The pity play is high on their list of manipulation techniques. It's okay to pity someone who has gone through difficult times, but if you find yourself feeling sorry for someone's sad story, make sure the story is true. The pity play serves as a valuable warning sign that you are dealing with a manipulator, in this case a sociopath.

Brainwashing: Questioning, doubt, and dissent are discouraged or even punished with bouts of anger or (fake) withdrawal. Special techniques are used to suppress doubts about illegal, questionable or amoral practices of the group or its leader(s) and to put dissidents in line. Special literature is distributed and an indoctrination campaign is launched, much like pedophiles "groom" children by pushing or encouraging them to watch porno movies and books with explicit content. If the group leader requires special favors from members of the group (for example sex with female members), this is all masked under some artificial pretext like "exercise in liberation" or "exercise in compassion".

Dictate and micromanagement: The group's leader practices micromanagement and dictates -- sometimes in great detail -- how members should think, act, what to wear and feel.

Instilling amorality, an "end justifies the means" mentality: Any action or behaviour is justifiable as long as it furthers the group's goals. The group (leader) becomes the absolute truth and is above all man-made laws.

My initial impression is that former cultists come face to face with a multiplicity of losses, accompanied by a deep, and sometimes debilitating, sense of anguish. See for example interviews with defectors from Mormonism on YouTube ... ... .... My hope upon initiating this research was to provide a link between cult leaders and corporate psychopaths and to demonstrate that cult leaders' practices (which are more or less well understood and for which extensive literature exists) have strong predictive power for the behavior of a corporate psychopath. We should not focus just on the acute and long-term distress that accompanies reporting to a corporate psychopath. Here are some of the psychological mechanisms used:

Control of the Environment and Communication. The control of human communication is the most basic feature of the high demand cult environment. This is the control of what the individual sees, hears, reads, writes, experiences and expresses. It goes even further than that, and controls the individual's communication with himself -- his own thoughts.

The Mystique of the Organization. This seeks to provoke specific patterns of behaviour and emotion in such a way that these will appear to have arisen spontaneously from within the environment. For the manipulated person this assumes a near-mystical quality. This is not just a power trip by the manipulators. They have a sense of “higher purpose” and see themselves as being the “keepers of the truth.” By becoming the instruments of their own mystique, they create a mystical aura around the manipulating institution - the Party, the Government, the Organization, etc. They are the chosen agents to carry out this mystical imperative.

Everything is black & white. Pure and impure is defined by the ideology of the organization. Only those ideas, feelings and actions consistent with the ideology and policy are good. The individual conscience is not reliable. The philosophical assumption is that absolute purity is attainable and that anything done in the name of this purity is moral. By defining and manipulating the criteria of purity and conducting an all-out war on impurity (dissension especially) the organization creates a narrow world of guilt and shame. This is perpetuated by an ethos of continuous reform, the demand that one strive permanently and painfully for something which not only does not exist but is alien to the human condition.

Absolute “Truth” . Their “truth” is the absolute truth. It is sacred - beyond questioning. There is a reverence demanded for the leadership. They have ALL the answers. Only to them is given the revelation of “truth”.

Thought terminating clichés. Everything is compressed into brief, highly reductive, definitive-sounding phrases, easily memorized and easily expressed. There are "good" terms which represent the group's ideology and "evil" terms to represent everything outside, which is to be rejected. Totalist language is intensely divisive, all-encompassing jargon, unmercifully judging. To those outside the group this language is tedious - the language of non-thought. This effectively isolates members from the outside world. The only people who understand you are other members. Other members can tell if you are really one of them by how you talk.

These are the hallmarks not only of unhealthy cult movements but also of aberrant churches such as the Church of Scientology or Prosperity theology.

Hubbard called Dianetics "a milestone for man comparable to his discovery of fire and superior to his invention of the wheel and the arch". It was an immediate commercial success and sparked what Martin Gardner calls "a nationwide cult of incredible proportions".[136] By August 1950, Hubbard's book had sold 55,000 copies, was selling at the rate of 4,000 a week and was being translated into French, German and Japanese. Five hundred Dianetic auditing groups had been set up across the United States.[137] ... ... ... The manuscript later became part of Scientology mythology.[75] An early 1950s Scientology publication offered signed "gold-bound and locked" copies for the sum of $1,500 apiece (equivalent to $15,282 in 2017). It warned that "four of the first fifteen people who read it went insane" and that it would be "[r]eleased only on sworn statement not to permit other readers to read it. Contains data not to be released during Mr. Hubbard's stay on earth."[81] ... ... ... In October 1984 Judge Paul G. Breckenridge ruled in Armstrong's favor, saying: The evidence portrays a man who has been virtually a pathological liar when it comes to his history, background and achievements. The writings and documents in evidence additionally reflect his egoism, greed, avarice, lust for power, and vindictiveness and aggressiveness against persons perceived by him to be disloyal or hostile. At the same time it appears that he is charismatic and highly capable of motivating, organizing, controlling, manipulating and inspiring his adherents. He has been referred to during the trial as a "genius," a "revered person," a man who was "viewed by his followers in awe." Obviously, he is and has been a very complex person and that complexity is further reflected in his alter ego, the Church of Scientology.[327]

The key indicator is greedy, control-oriented leadership: ministers who enrich themselves at the expense of followers (one Prosperity Theology minister asked followers to donate money for his new private jet), and attempts to extract money from the followers by requiring payment for some kind of training, "deep truth", or the reading of sacred manuscripts.

The person who raises uncomfortable questions or does not "get with the program" is ostracized. Questioning of the dogma is discouraged.

In reality nothing is new under the sun in software development. DevOps rehashes ideas many of which are at least a decade old, and some of which are at least 30 years old (Unix configuration management, version control). And the level of discussion is often lower than the level on which those ideas are discussed in The Mythical Man-Month, which was published in 1975 (another sign of "junk science").

What is more important is that under the surface of all those lofty goals lies the burning desire of company brass to use DevOps as another smoke screen for outsourcing -- yet another justification for "firing a lot of people."

As in any cult, there are some grains of rationality in DevOps, along with a poignant critique of the status quo, which works well to attract followers. Overhyping some ideas about how to cope with the current situation in enterprise IT (unsatisfactory for potential followers) is another sign of a technocult. Some of those ideas are, at least on a superficial level, pretty attractive; otherwise such a technocult could not attract followers. As a rule, techno-cults emerge in times of huge dissatisfaction and thrive by proposing "salvation" from the current difficulties:

...think this is one of the main reason why we see this DevOps movement, we are many that see this happen in many organizations, the malfunctioning organization and failing culture that can’t get operation and development to work towards common goals. In those organizations some day development give up and take care of operation by them self and let the operation guys take care of the old stuff.

Nobody can deny that there are a lot of problems in corporate datacenters these days. Bureaucratization is rampant, and it stifles the few talented people who have not yet escaped this environment or switched to writing open source software during working hours, because achieving anything within the constraints of the existing organization is simply impossible ;-)

But alongside this small set of rational (and old) ideas there is a set of completely false, even bizarre ideas and claims, which makes it a cult. There is also a fair share of Pollyanna creep. Again, it is important to understand that part of the success of the promotion campaign (with money for it coming mostly from companies who benefit from outsourcing) is connected with the fact that corporate IT brass realized that DevOps can serve well as a smoke screen for another round of outsourcing of "ops". (Ulf Månsson about infrastructure)

This creates new way of working, one good example is the cultural change at Nokia Entertainment UK, presented at the DevOps conference in Göteborg, by inclusion going from 6 releases/year and 50 persons working with releases to 246 releases/year with only 4 persons, see http://www.slideshare.net/pswartout/devopsorg-how-we-are-including-almost-everyone. That story was impressive.

The pretension that this is a new technology is artfully created by inventing a new language rife with new obscure terms -- another variant of Newspeak. And this is very important to understand: it is this language that allows bizarre and unproven ideas to be packaged in a cloak of respectability.

The primitivism of thinking and the unfounded claims of this new IT fashion (the typical half-life of an IT fad is less than a decade; for example, who now remembers all the verification hoopla and the books published on the topic?) are clearly visible in advocacy papers such as Comparing DevOps to traditional IT: Eight key differences - DevOps.com. Some of the claims are clearly suspect, and smell of "management consultant speak" (an interesting variety of corporate bullshit). For example:

Traditional IT is fundamentally a risk averse organization. A CIO’s first priority is to do no harm to the business. It is the reason why IT invests in so much red tape, processes, approvals etc. All focused on preventing failure. And yet despite all these investments, IT has a terrible track record – 30% of new projects are delivered late, 50% of all new enhancements are rolled back due to quality issues, and 40% of the delay is caused by infrastructure issues. A DevOps organization is risk averse too but they also understand that failure is inevitable. So instead of trying to eliminate failure they prefer to choose when and how they fail. They prefer to fail small, fail early, and recover fast. And they have built their structure and process around it. Again the building blocks we have referred to in this article – from test driven development, daily integration, done mean deployable, small batch sizes, cell structure, automation etc. all reinforce this mindset.

Note the very disingenuous claim "IT has a terrible track record – 30% of new projects are delivered late, 50% of all new enhancements are rolled back due to quality issues, and 40% of the delay is caused by infrastructure issues." I think all previous fashionable methodologies used the same claim, because the sheer complexity of large software projects inevitably leads to overruns; doubling the estimated time and money has been a sound software engineering practice known since the publication of The Mythical Man-Month :-)

As for another disingenuous claim -- "So instead of trying to eliminate failure they prefer to choose when and how they fail. They prefer to fail small, fail early, and recover fast." -- this is not a realistic statement. It is fake. In case of a SNAFU you can't predict the size of the failure. Just look at the history of Netflix, Amazon or Google Gmail failures. Try telling the mantra "fail small, fail early, and recover fast" to the customers of Google or Amazon during the next outage.

Note also that such criticism is carefully swept under the carpet, and the definition of DevOps evolves over time to preserve its attractiveness to new members (DevOps is Dead! Long Live DevOps! - DevOps.com). In other words, the correct definition of DevOps is "it is a very good thing" ;-). For example:

Some are seekers on the quest for the one, true DevOps. They were misled. I’m here to say: Give it up. Whatever you find at the end of that journey isn’t it. That one true DevOps is dead. Dead and buried. The search is pointless or, kind of worse: The search misses the point altogether. DevOps began as a sort of a living philosophy: about inclusion rather than exclusion, raises up rather than chastises, increases resilience rather than assigns blame, makes smarter and more awesome rather than defines process and builds bunkers. At any rate, it was also deliberately never strictly defined. In 2010, it seemed largely about Application Release Automation (ARA), monitoring, configuration management and a lot of beginning discussion about culture and teams. By 2015, a lot of CI/CD, containers and APIs had been added. The dates are rough but my point is: That’s all still there but now DevOps discussions today also include service design and microservices. Oh, and the new shiny going by the term “serverless.” It is about all of IT as it is naturally adapting.

I will be repeating myself here. Like any repackaging effort, DevOps presents old, existing technologies as something revolutionary. They are not. Among the measures that the DevOps "movement" advocates, several old ideas make sense if implemented without fanaticism. And they could have been implemented before the term DevOps (initially NoOps) was invented. These include:

The move of applications to VMs, including but not limited to lightweight virtual machines (Docker). And generally the idea of wider adoption of lightweight virtual machines in the enterprise environment, instead of or in addition to traditional (and expensive) VMware (I do not understand the advantages of running Linux under VMware -- this is a VM tuned to Windows that implements heavy virtualization (virtualization of the CPU), while Linux, being open source, allows para-virtualization (virtualization of system calls), actually available in XEN and derivatives such as Oracle VM).

Wider adoption of scriptable configuration tools and configuration management systems (Infrastructure as Code -- IaC), as well as continuous delivery schemes. Truth be told, this does not have to be done via Puppet and similar complex Unix configuration management packages. Although Puppet is a bad choice, as it is too complex, has unanticipated quirks and rather low reliability, it is a step forward. But it might be two steps back, as it contributes to creating an "Alice in Wonderland" environment in which nobody can troubleshoot a problem because of the complex subsystems involved. See Unix configuration management for details. A minimal sketch of this idea without any heavyweight tool is given after this list.

Wider usage of version management for configuration files. Systems such as Git and Subversion are definitely underutilized in the enterprise environment, and they can be deployed both wider and better. None of them fits Unix configuration management requirements perfectly, but some intermediate tools can be created to compensate for the deficiencies.

Wider use of scripting and scripting languages such as Python, Ruby, shell and (to a much lesser extent) good old Perl. Of them only Python spans both the Unix system administration and software development areas and as such is preferable (despite Perl being a stronghold of Unix sysadmins). That actually includes de-emphasizing Java development in favor of Python.

Attempts to cut the development cycle into smaller, more manageable chunks (although the idea of "continuous delivery" is mostly bunk). Here a lot of discretion is needed, as overdoing this destroys the development process and, what is even more dangerous, de-emphasizes the value of architectural integrity and architecture as a concept (this is a heritage of Agile, which is mostly snake oil).
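To make the configuration management point concrete, here is a minimal, purely illustrative sketch of the "idempotent convergence" idea that tools like Puppet are built around, done with nothing but plain Python: ensure that a configuration file contains a given line, and change nothing if it is already there. The file path and the setting are hypothetical examples, not a recommendation; this is not a substitute for a real configuration management system, just a demonstration that the underlying idea is old and simple:

#!/usr/bin/env python3
# Minimal sketch of idempotent configuration convergence (hypothetical example).
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Append `line` to `path` only if it is missing; return True if a change was made."""
    text = path.read_text() if path.exists() else ""
    if line in text.splitlines():
        return False                      # already in the desired state -- do nothing
    with path.open("a") as f:
        if text and not text.endswith("\n"):
            f.write("\n")
        f.write(line + "\n")
    return True

if __name__ == "__main__":
    # Hypothetical "resource": make sure sshd refuses direct root logins.
    changed = ensure_line(Path("/etc/ssh/sshd_config"), "PermitRootLogin no")
    print("changed" if changed else "already converged")

Run from cron or a wrapper over a list of such "resources" kept in Git or Subversion, this is, in essence, what the heavyweight tools do, minus their domain-specific language and their quirks.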

Some of those innovations (like Docker) definitely make more sense than others, because the preoccupation with VMware as the only solution for virtualization in the enterprise environment is unhealthy from multiple points of view.

But all of them add complexity rather than remove it; only the source of it is different. (In the case of Docker you can somewhat compensate for the increase in complexity due to the switch to a virtual environment by the fact that you can compartmentalize the most critical applications in separate "zones", to use Solaris terminology -- Docker is essentially a reimplementation of Solaris zones in Linux.)

Maybe the truth lies somewhere in between, and a selective, "in moderation" implementation of some of those technologies in the datacenter can be really beneficial. But excessive zeal can hurt, not help. In this sense the presence of newly converted fanatics is a recipe for additional problems, if not a disaster.

I have actually seen situations when the implementation of DevOps brought corporate IT to a screeching halt -- an almost complete paralysis in which nothing but maintaining (with great difficulty) the status quo was achieved for the first two years. Outsourcing in those cases played a major negative role, as the new staff needed a large amount of time and effort to understand the infrastructure and processes and to attempt to "standardize" some servers and services, often failing dismally due to the complex interrelations between them and the applications in place.

While the current situation in the typical large datacenter is definitely unsatisfactory from many points of view, the first principle should be "do no harm". In many cases it might make sense to announce the switch to DevOps, force people to prepare for it (especially to discard old servers and unused applications in favor of virtual instances on a private or public cloud), and then cancel the implementation. You can probably achieve 80% of the positive effect this way, while avoiding 80% of the negative effects. Moving the datacenter to a new location can also help tremendously and can be used instead of a DevOps implementation :-)

Again, the main problem with the DevOps hoopla is that, if implemented at full scale, it substantially increases the complexity of the environment. Both virtual machines and configuration management tools add "levels of indirection", which makes troubleshooting more complex and the causes of failures more varied. That represents a problem even for seasoned Unix sysadmins, to say nothing of the poor developers who are thrown into this water and asked to swim.

Sometimes the DevOps methodology, implemented in a modest scope, does provide some of the claimed benefits (and some level of configuration management in a large datacenter is a must). The question is what the optimal solution for this is and how not to overdo it. But in a typical DevOps implementation this question somehow does not arise, and it just degenerates into another round of centralization and outsourcing, which makes the situation worse -- often much worse, to the level where IT becomes completely dysfunctional. I have seen this effect of DevOps implementation in large corporations.

So it should be evaluated on a case-by-case basis, not as a panacea. As always, much depends on the talent of the people who try to implement it. Also, change in a large datacenter is exceedingly difficult and often degenerates into what can be called "one step forward, two steps back". For example, learning tools such as Puppet or Chef requires quite a lot of effort for a rather questionable return on investment, as their complexity precludes full utilization of the tool and its use gets scaled down to the basics. So automation using them is a mixed blessing.

Similarly, lightweight VMs (and virtual servers in general), which are part of the DevOps hype, are easier to deploy, but load management of multiple servers running on the same box is an additional and pretty complex task. Also, VMware, which dominates the VM scene, is expensive (which means that the lion's share of the savings goes to VMware, not to the enterprise which deploys it ;-) and is a bad VM for Linux. Linux needs para-virtualization, not the full CPU virtualization that VMware, which was designed for Windows, offers (with some interesting optimization tweaks). Docker, which is a rehash of the idea of Solaris zones, is a better deal, but it is a pretty new technology for Linux and has its own limitations -- often severe, as it is a lightweight VM.

Junk science is and always was based on cherry-picked evidence which has been carefully selected or edited to support a pre-selected "truth". Facts that do not fit the agenda are suppressed (Groupthink). Apocalyptic yelling is also very typical, as is Pollyanna creep. Deployment is typically top-down. Corporate management is used as an enforcement branch (corporate Lysenkoism). Here are some signs of "junk science" (Eight Warning Signs of Junk Science):

Here is a non-exclusive list of eight symptoms to watch out for:

Science by press release. It's never, ever a good sign when 'scientists' announce dramatic results before publishing in a peer-reviewed journal. When this happens, we generally find out later that they were either self-deluded or functioning as political animals rather than scientists. This generalizes a bit; one should also be suspicious of, for example, science first broadcast by congressional testimony or talk-show circuit.

Rhetoric that mixes science with the tropes of eschatological panic. When the argument for theory X slides from "theory X is supported by evidence" to "a terrible catastrophe looms over us if theory X is true, therefore we cannot risk disbelieving it", you can be pretty sure that X is junk science. Consciously or unconsciously, advocates who say these sorts of things are trying to panic the herd into stampeding rather than focusing on the quality of the evidence for theory X.

Rhetoric that mixes science with the tropes of moral panic. When the argument for theory X slides from "theory X is supported by evidence" to "only bad/sinful/uncaring people disbelieve theory X", you can be even more sure that theory X is junk science. Consciously or unconsciously, advocates who say these sorts of things are trying to induce a state of preference falsification in which people are peer-pressured to publicly affirm a belief in theory X in spite of private doubts.

Consignment of failed predictions to the memory hole. It's a sign of sound science when advocates for theory X publicly acknowledge failed predictions and explain why they think they can now make better ones. Conversely, it's a sign of junk science when they try to bury failed predictions and deny they ever made them.

Over-reliance on computer models replete with bugger factors that aren't causally justified. No, this is not unique to climatology; you see it a lot in epidemiology and economics, just to name two fields that start with 'e'. The key point here is that simply fitting historical data is not causal justification; there are lots of ways to dishonestly make that happen, or honestly fool yourself about it. If you don't have a generative account of why your formulas and coupling constants look the way they do (a generative account which itself makes falsifiable predictions), you're not doing science -- you're doing numerology.

If a 'scientific' theory seems tailor-made for the needs of politicians or advocacy organizations, it probably has been. Real scientific results have a cross-grained tendency not to fit transient political categories. Accordingly, if you think theory X stinks of political construction, you're probably right. This is one of the simplest but most difficult lessons in junk-science spotting! The most difficult case is recognizing that this is happening even when you agree with the cause.

Past purveyors of junk science do not change their spots. One of the earliest indicators in many outbreaks of junk science is enthusiastic endorsements by people and advocacy organizations associated with past outbreaks. This one is particularly useful in spotting environmental junk science, because unreliable environmental-advocacy organizations tend to have long public pedigrees including frequent episodes of apocalyptic yelling. It is pardonable to be taken in by this the first time, but foolish by the fourth and fifth.

Refusal to make primary data sets available for inspection. When people doing sound science are challenged to produce the observational and experimental data their theories are supposed to be based on, they do it. (There are a couple of principled exceptions here; particle physicists can't save the unreduced data from particle collisions, there are too many terabytes per second of it.) It is a strong sign of junk science when a 'scientist' claims to have retained raw data sets but refuses to release them to critics.

If we are talking about DevOps as a software development methodology, it is similar to Agile. The latter was a rather successful attempt to reshuffle a set of old ideas (some worthwhile, some not so much) into an attractive, marketable technocult for fun and profit, and to milk the resulting "movement" with books, conferences, consulting, etc.

In 2017 only a few seasoned software developers believe that Agile is more than a self-promotion campaign by a group of unscrupulous and ambitious people who appointed themselves the high priests of this cult. The half-life of such "cargo cult" programming methodologies is usually around a decade, rarely two (Agile became fashionable around 1996-2000). Now it looks like Agile is well past the "hype stage" of the software methodology life cycle, and attempts to resurrect it with DevOps will fail.

DevOps carries certain political dimensions (a connection to the neoliberal transformation of society, with its fake "cult of creativity", the rise in the role (and income) of the top 1%, and the decline of the IT "middle class"). Outsourcing and additional layoffs are probably the most prominent results of the introduction of DevOps into real datacenters. So DevOps often serves as a Trojan horse for the switch to outsourcers and contract labor.

The whole idea that, by adding some tools, VM-based virtual instances, and the additional management capabilities introduced by tools like Puppet, IT can be successfully transferred to outsourcers and contractors is questionable.

No amount of ideological brainwashing can return the datacenter to the good old days of Unix minicomputers, when a single person was a master of all trades -- a developer, a system administrator and a tester. This is impossible due to the current complexity of the environment: there is a large gap between the level of knowledge of the (excessively complex) OS possessed by a typical sysadmin (say, with RHCE certification) and by a typical developer. Attempts to narrow this gap via tools (and outsourcers), which is the essence of the DevOps movement, can only go so far.

But the most alarming tendency is that DevOps serves as a smoke screen for further outsourcing and for moving from a traditional data center to the cloud deployment model, with contractors as the major element of the work force. In other words, DevOps is used by corporate brass as another way to cut costs (and the cost of IT in most large manufacturing corporations is already about 1% or less, so there is not much return achievable from this cost cutting anyway).

From 1992 to 2012 data centers already experienced a huge technological reorganization, which might be called the Intel revolution. It dramatically increased the role of Intel-based computers in the datacenter, introduced new server form factors such as blades and new storage technologies such as SAN and NAS, and made Linux the most popular OS in the datacenter, displacing Solaris, AIX and HP-UX.

In addition, virtualization became common in the Windows world due to the proliferation of VMware instances.

Faster internet and wireless technologies allowed a more distributed workforce and the ability for people to work part of the week from home. Smartphones now exceed the power of a 1996 desktop. Moreover, there was already a distinct trend toward consolidation of datacenters within large companies.

As a result, in multinationals (and all large companies) many services, such as email and, to a lesser extent, file storage, are already provided via an internal company cloud from central servers. At the same time it became clear that, along with technical challenges, "cloud services" create a bottleneck at the WAN level and present a huge threat to security and privacy. The driving force behind the cloud is the desire to synchronize and access data from the several devices that people now own (desktop, laptop, smartphone, tablets) -- in other words, to provide access to user data from multiple devices (for example, email can be read on a smartphone and on a laptop/desktop). The first such application, the ability to view corporate e-mail from a cell phone, essentially launched BlackBerry smartphones into prominence.

In view of those changes, managing a datacenter remotely became a distinct possibility. That's why DevOps serves for higher management as a kind of "IT outsourcing manifesto". But with outsourcing, the problem of loyalty comes to the forefront.

Another concept relevant to the discussion of DevOps is cargo cult science. Cargo cult science comprises practices that have the semblance of being scientific, but do not in fact follow the scientific method. The term was first used by physicist Richard Feynman during his 1974 commencement address at the California Institute of Technology. Software development provides a fertile ground for cargo cult science. For example, talented software developers are often superstars who can follow methods, such as continuous delivery, that are not suitable for "mere mortals". The same can happen with organizations which have some unique circumstances that make continuous delivery successful -- for example, if "development" consists of just small patches and bug fixes, while most of the codebase remains static.

I think Bill Joy (of BSD Unix, csh, NFS, Java, and vi editor fame) was such a software superstar when he produced the BSD tapes. He actually created the vi editor using a terminal over a 1200 baud modem (Bill Joy's greatest gift to man – the vi editor • The Register), which is excruciatingly slow -- a feat that is difficult if not impossible for a "mere mortal", if only for the lack of patience (the transmission speed of a 1200 baud modem is close to the speed at which a mechanical typewriter can print). The same is true for Donald Knuth, who singlehandedly created a Fortran compiler during one summer while he was still a student. And Ken Thompson, who is the father of Unix. And Larry Wall, who created Perl while being almost blind in one eye. But that does not mean that their practices are scalable. Brooks' book The Mythical Man-Month is as relevant today as it was at the moment of publication.

The biggest mistake you can make as a manager of a large and important software project is to delegate the key design functions to mediocre people. You can't replace software talent with organization, although organization helps. Any attempt to claim otherwise is cargo cult science. In other words, in software engineering there is no replacement for displacement.

Like Agile before it, DevOps emphasizes a culture of common goals (this time between "operations" and "developers", with the idea of merging them as in the good old times -- the initial name was NoOps -- which is a pretty questionable idea given the complexity of the current IT infrastructure) and of getting things done together, presenting itself as a new IT culture.

Also, the details are fuzzy and contradictory. They vary from one "prophet" to another. That strongly suggests that, like many similar IT "fashions" before it, DevOps just means "a good idea" (remember the acerbic remark attributed to Mahatma Gandhi in reply to a self-confident Western journalist: to the question "What do you think of Western civilization?" he reportedly answered, "I think it would be a good idea" :-)

While it seems that IT management is rushing to embrace the concept of DevOps (because it justifies further outsourcing under the smokescreen of new terms), nobody agrees on what it actually means. And that creates some skepticism.

DevOps paints a picture of two cultures ("operations" vs. "developers"), once at odds ("glass datacenter"), now miraculously working together in harmony. But first of all, there are weak and strong developers. There are weak and strong Unix sysadmins (and strong sysadmins are often not bad software developers in their own set of languages, mostly scripting languages).

The problem of the huge, excessive complexity of modern IT infrastructure can't be solved by some fashionable chair reshuffling. What actually happens is that the more talented members of the team get an additional workload. That's why some critics claim that DevOps kills developers -- meaning "talented developers". And while combining both roles certainly can be done, it is easier said than done.

The fact that DevOps is somehow connected with Agile is pretty alarming and suggests that it might well be yet another "snake oil" initiative, with a bunch of talented but unprincipled salesmen who benefit from training courses, consulting gigs, published books, conferences, and other legal ways to extract money from lemmings.

When you read sentences like "DevOps as an environment where an agile relationship will take place between operations and development teams" (https://www.quora.com/What-is-devops-and-why-is-it-important), you quickly understand what type of people can benefit from DevOps.

The key objection to DevOps is that reliance on super-platforms such as the Amazon cloud or Microsoft Azure could, in the future, intellectually capture the organization and the remaining sysadmins (and IT staff in general), who become increasingly distant from the "nuts and bolts" of the operating system and hardware and operate in what is essentially the proprietary environment of a particular vendor. That converts a Unix sysadmin into a flavor of Windows sysadmin with a less nice GUI. In other words, they need to put all their trust in the platforms and detach themselves from the "nuts and bolts" level. That means that, in a way, they become as dependent on those platforms as opiate addicts on their drugs.

This also creates a set of DevOps promoters, such as cloud providers, who want to become "gatekeepers", binding users to their technology. Those gatekeepers, once they become non-displaceable, make sure that the organization loses the critical mass of technical IQ in "low level" (operating system level) infrastructure and can't abandon them without much pain.

At this point they start to use this dependency to their advantage. Typically they try to estimate a customer's "willingness to pay" and, as a result, gradually increase the price of their services. IBM was a great practitioner of this fine art in the past; that's why everybody hated it. VMware has also proved to be quite adept at this art.

Not that the Amazon cloud is cheap. It is not, even if we count not only hardware and electricity savings, but also (and rather generously, as in one sysadmin per 50 servers) the manpower savings it provides. It is often cheaper to run your own hardware within an internal cloud than on Amazon, unless you have huge peaks in demand. A back-of-envelope comparison along these lines is sketched below.
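For illustration only, here is a tiny Python sketch of such a back-of-envelope comparison. Every figure in it (server price, power cost, sysadmin salary, cloud hourly rate, amortization period) is an assumption invented for the example, not a vendor quote; the point is the shape of the calculation, not the verdict, and the result flips easily depending on utilization, discounts and peak load:

#!/usr/bin/env python3
# Back-of-envelope cost comparison: owned servers vs. always-on cloud instances.
# All figures below are illustrative assumptions, not vendor quotes.

SERVERS = 50                     # steady-state fleet size (assumption)
YEARS = 3                        # amortization period for owned hardware

# In-house assumptions
HW_PRICE_PER_SERVER = 6000       # purchase price, amortized over YEARS
POWER_COOLING_PER_YEAR = 900     # per server, per year
SYSADMIN_SALARY = 120000         # one sysadmin per ~50 servers, as above

# Cloud assumptions: an always-on instance of roughly comparable size
CLOUD_RATE_PER_HOUR = 0.80       # on-demand hourly rate (assumption)
HOURS_PER_YEAR = 24 * 365

in_house_per_year = SERVERS * (HW_PRICE_PER_SERVER / YEARS + POWER_COOLING_PER_YEAR) + SYSADMIN_SALARY
cloud_per_year = SERVERS * CLOUD_RATE_PER_HOUR * HOURS_PER_YEAR

print(f"in-house, per year: ${in_house_per_year:,.0f}")
print(f"cloud,    per year: ${cloud_per_year:,.0f}")

With these particular made-up numbers the owned hardware wins; with a smaller fleet, heavy use of reserved or spot pricing, or a very spiky load, the cloud side wins. The calculation has to be redone for each specific case, which is exactly the point about evaluating such changes case by case rather than as a panacea.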

The same is even more true for VMware. If in a particular organization VMware is used as the virtualization platform for Linux (which, being open source, allows para-virtualization instead of the full virtualization that VMware implements), then talking about savings is possible only when one is sufficiently drunk. Not that such savings do not exist. They do, but the lion's share of them goes directly to VMware, not to the organization which deploys the platform.

The danger is that you basically willingly allow yourself to be captured and become part of an ecosystem that is controlled by one single "gatekeeper." Such a decision creates an environment in which switching costs can be immense. That's why there is such competition among the three top players in the cloud provider space for new enterprise customers. The first one to grab a particular customer is the one in control, and it can milk such a customer for a long, long time. The classic DevOps advocates' response that "you should have negotiated better" is false, because people do not have enough information when they enter negotiations, and it is too late when they finally "get it."

DevOps is presented by its adherents as an all-singing, all-dancing universal solution to all problems of mankind, or at least to current problems such as overcomplexity, alienation of developers, paralysis via excessive security, and the red tape that exists in the modern data center.

But the problem is that the level of complexity of modern IT is such that the division of labor between sysadmins and developers is not only necessary, it is vital for success.

Such hope ignores the fact that there is no "techno cure" for large datacenter problems, because those problems are not only technological in nature but also reflect a complex mix of sociological factors (the curse of overcomplexity is one of them -- see The Collapse of Complex Societies; the neoliberal transformation of the enterprise with its switch to outsourcing and contractor labor is another) and, especially, the balance of power between various groups within the data center, such as corporate management, developers and operations staff.

This creates a pretty interesting mix from a sociological point of view and simultaneously creates a set of internal conflicts and a constant struggle for power between the various strata of the datacenter ecosystem. From this point of view DevOps clearly represents a political victory for developers and management at the expense of the other players, first of all system administrators.

In a way, this idea can be viewed as a replay (under a new name) of the old Sun idea that "the network is the computer". And there is a rational element in DevOps here: the trend to merge individual servers into some kind of computational superstructure, as exemplified by the Amazon cloud and Azure with their complete switch to virtual instances and their attempt to diminish the role of "real hardware".

This trend has existed for quite a long time. For example, Sun Grid was just one early, successful attempt in this direction, which led to the creation of the whole class of computing environments now known as computational clusters. DevOps can be viewed as an attempt to create "application clusters".

To the extent that it is connected with advances in virtualization, such as Solaris zones and Linux containers, it is not that bad if solid ideas get some marketing push.

But the key question here is: can we really eliminate the sysadmin role in such a complex environment as, for example, modern Linux (as exemplified by RHEL 7)? Does a cloud environment such as Azure really give the developer the possibility to get rid of the sysadmin and do it all by himself?