Two, ... Two, ... Two Conferences in One

The VM and Linux Workshop is coming up next week. This year it will be at NC A&T on June 26, 27, and 28. Just prior, Velocity Software will present its performance seminar for z/VM and zLinux systems. Attendance at the Velocity seminar is free for those attending the workshop.

As Timothy Sipples mentioned, there is a lot of mainframe traffic in other fora. With respect to the workshop, I blogged about it myself: http://sirsanta.blogspot.com/2014/05/data-sharing-versus-data-security.html

Kind of late notice, but I wanted to mention it. The workshop has seen steady growth, so it's likely to happen again next year. Y'all come!

-- R; <><

by sirsanta June 16, 2014 in Events, Linux, z/VM


New Security Revelations: Governments Spying More than Expected

The New York Times, The Guardian, and ProPublica are jointly reporting on new revelations about the extent of U.S. and U.K. (in particular) government surveillance of Internet communications. The revelations come primarily from U.K. GCHQ documents characterizing GCHQ and NSA capabilities. Former NSA contractor Edward Snowden obtained the documents and shared them with media outlets. Bruce Schneier, a security expert advising The Guardian, comments on the revelations and offers some practical advice.

I'm still absorbing the implications of these revelations. If they're true, I tend to agree with the security experts who are concerned about risks to people and their private information. One of the important roles of government is, ostensibly, to protect its citizens. If the government continues trying to undermine IT security in various ways, it is making its own citizens easier to attack. That is exactly backwards, of course: a security agency should be promoting the safety and security of its citizens, not undermining them. It doesn't take a Hollywood movie or even an Edward Snowden to understand that if the "good guys" can get in, then so can lots of "bad guys." And the "good guys" have a lot more to lose when they're vulnerable.

I agree with Bruce Schneier that the IT engineering community will be doing a lot of work in this area over the coming weeks, months, and years to improve IT security and to better protect privacy. These revelations will also probably spur a lot of political discussion about the appropriate role of government and what the limitations on government should be. That's not a new debate, nor is it one that should ever end. In my view we must constantly remind ourselves of the Fourth Amendment to the U.S. Constitution, and we must meet or exceed that high standard.

OK, what about mainframes?
Bruce Schneier's advice is heavily client (end point) focused, and that's appropriate for his readership. In the world of servers and enterprise computing there are also important considerations, and I would advise all IT professionals to pay close attention to security discussions and improvements coming out of the IT engineering community.

I would also point out that I see far too much carelessness. I'm not talking about whether extremely well funded government intelligence agencies can access your applications and databases; I'm talking about rank amateurs. For example, do you have 3270 "green screen" terminal connections to your mainframe, for end users and/or for administrators? If so, are those connections encrypted? You're sending mainframe user IDs and passwords across those links every day, perhaps across your wide area network. They're not encrypted, in the year 2013 (or even 2003)? Really? When exactly are you going to take security even half seriously?

As another example, is your idea of application integration to dump half your customers' most sensitive personal information into sequential files every night, then FTP that — unencrypted of course — to dozens of different distributed servers, only to feed a poorly secured application? How is that possibly secure? How is that being a responsible steward of your customers' private information? It isn't, yet I see it practically every day. Too many IT people think it's a good idea to copy data everywhere, all the time. There's no way you're ever going to protect your organization against even rank amateurs with that architectural approach. Stop copying data and start securing it. That means, paradoxically, opening up your mainframe to authenticated, authorized, and (usually) encrypted direct access to application and information services.

Why, just last week I had a conversation with an IT manager about this very issue.
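An aside on the unencrypted-FTP pattern: even for organizations stuck with a nightly file-transfer flow, an incremental fix is available, since Python's standard ftplib supports FTP over TLS. A minimal sketch follows; the host, credentials, and file names are hypothetical, and your mainframe's FTP server must of course be configured for TLS:

```python
from ftplib import FTP_TLS

# Hypothetical host, credentials, and file names, for illustration only.
def send_extract(host, user, password, local_path, remote_name):
    ftps = FTP_TLS(host)        # TLS available on the control channel
    ftps.login(user, password)  # credentials no longer cross in cleartext
    ftps.prot_p()               # switch the data channel to TLS as well
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {remote_name}", f)
    ftps.quit()
```

Direct, authenticated access to the data is still the better architecture; this merely removes the clear-wire exposure from a flow that arguably shouldn't exist in the first place.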
That manager questioned whether it was secure to access DB2 for z/OS directly from a PC-installed productivity tool. Compared to what? Compared to extracting all the data (not just the records the end user is supposed to be accessing) to a flat file, FTPing it (on a clear wire) to another database running on Microsoft Windows (!), then accessing it there, without any security context whatsoever? Of course that isn't secure.

And I'm going to partially blame "mainframe people" — you know who you are — for setting arbitrary "security policies" which end users inevitably must circumvent in order to get their jobs done, or because they think they're "saving MIPS." I've even seen end user departments set up elaborate screen scraping tools on batteries of client PCs in order to perform data extracts, because that's what the "mainframe people" and their "security policies" require them to do to keep the business running. This madness must stop!

Now, for those two organizations in the world that have eliminated the low-hanging vulnerabilities and have stopped all the madness, I would recommend getting a mainframe if you don't already have one. (If you don't have one, you probably aren't one of those two organizations.) Use your mainframe as your premier security hub to better protect your organization. We don't know everything yet — I'll keep reading the press reports with great interest — but what we do know from decades of experience is that mainframes, well managed, have proven especially resistant to security threats. And, I write only half jokingly, we also know that the only organizations that might rival government intelligence agencies in political power and influence are large financial institutions. All of them would presumably scream bloody murder if their core systems were exposed.

Moreover, if you want open source software, you've got it on zEnterprise. Linux on zEnterprise is 100% open source software.
There are no proprietary drivers or other closed source binaries required, unlike many other hardware platforms. z/OS has a large and growing collection of open source software available, too, and you can go grab whatever you like and quickly deploy it. (On z/TPF as well.) There's also the unparalleled statement of integrity for z/OS and for z/VM.

Stay vigilant, and stay safe.

by Timothy Sipples September 6, 2013 in Security


Two Big Deals in the Mobile World

The U.S. Labor Day weekend has not been a restful one in the mobile communications and devices industry. Vodafone is selling its share of Verizon Wireless to Verizon in a blockbuster $130 billion cash and stock deal, and Microsoft is buying Nokia's struggling mobile phone business, other assets, and licenses for 5.4 billion euro (about $7.2 billion).

What have these deals got to do with mainframes? Plenty. The mobile business is still growing rapidly, and mobile devices (smartphones and tablets) are rapidly displacing traditional PCs as the dominant application and information service clients. That growth is increasing transaction volumes and associated batch processing on mainframes. It's also encouraging existing and new mainframe customers to add applications and application functions to their mainframes, especially to support increasing demands for continuous service and improved security given the challenges mobile devices present.

So how can mainframes address mobile platforms? They already do, and it's quite easy to do more. One excellent example is IBM Worklight for zEnterprise, which makes it easy to support multiple mobile device types from your mainframe with functionally rich, device-appropriate, secure "apps" and mobile Web user interfaces. Another example is the IBM CICS Transaction Server Feature Pack for Mobile, which is available to CICS customers at no additional charge and which supports lightweight, mobile-appropriate JSON Web services.

The contrast between the two deal sizes is interesting all by itself because it demonstrates where the value has shifted in the mobile market. The smaller Nokia-Microsoft deal is an attempt to combine two weak mobile players into one in order to try to compete with Apple and Google, in particular. The trouble is that Google (especially) has a different business model with lots of services, advertising, and content, and that business model is working well.
Google seems to have won over the mobile OEMs, who were perhaps a bit uncomfortable with Google's acquisition of Motorola, but Microsoft's acquisition of Nokia trumps that mild concern. In other words, Microsoft isn't going to get any help now from Samsung, HTC, LG, ZTE, Lenovo, and other mobile device makers. It'll also be tough for Microsoft to compete against Apple in the premium segment of the mobile device market, and Apple is also strong in content.

All that said, I think the acquisition makes sense for Microsoft. Microsoft really doesn't have much choice. Nor does Nokia. When Nokia's CEO, Stephen Elop, a former Microsoft executive (and soon a Microsoft executive again, it seems, perhaps even the next CEO), bet his new company on Microsoft's struggling mobile Windows platform, he set in motion a chain of events that would very likely result in Nokia's divestiture of its mobile device business to Microsoft. I don't think too many people are surprised that Microsoft is carving up Nokia now. Cynical observers might even say that was the plan all along.

An interesting footnote is what happens to struggling Canadian mobile pioneer Research In Motion (RIM), maker of the BlackBerry. Their new BB10 platform is technically very good, but that's never enough. According to reports RIM is at least open to the idea of selling itself to another company, but there's no perfect suitor available. None of the Chinese companies make much sense, given that many of RIM's government customers would flee if such an acquisition came to fruition. HP might make some sense, but is there room for both a third and a fourth mobile platform, and would HP have any chance of finding room in the mobile market given Microsoft's still deep pockets? (Microsoft's Nokia acquisition is another piece of bad news for HP, at least in terms of limiting HP's options and pulling some of Microsoft's attention away from the traditional PC business and OEMs like HP.)
Samsung might be interested in acquiring RIM. Samsung would probably take BB10 and merge it with Android, retaining Android application compatibility but adding some more Samsung/RIM differentiation. That'd make some sense if the price is right. Google might have similar ideas, also at the right price. Both Samsung and Google wouldn't mind having RIM's patent portfolio. I don't see IBM being too interested, except perhaps for RIM's BlackBerry Enterprise Server (BES) software, which another suitor might be willing to carve out for IBM, HP, or somebody else. Oracle, Facebook, and Dell are longshot candidates to buy RIM, each for different reasons. Or maybe nobody buys RIM, and we (probably) fondly remember the BlackBerry much like we remember Amiga computers. It's rarely boring in the technology industry.

by Timothy Sipples September 3, 2013 in CICS, Current Affairs, Financial, Web Technology


Server Hardware Trends: A Commodity Market Plus IBM

The New York Times summarized the latest IDC and Gartner server market share reports, highlighting the rise of the non-branded, custom-built commodity server makers that supply big Internet firms such as Facebook. "Others" is now the #3 server "vendor" on a hardware revenue basis and #1 on a volume basis. (On a revenue basis, IBM is #1 and HP is #2.)

These long running trends are fascinating, and I've described them before in various ways. I think it's important, though, to distinguish between IBM and HP because they have very different positions in the overall market. IBM is now the only remaining credible vendor of "high-end" servers. We've seen time and time again in many markets — retailing, to pick an excellent example — that getting stuck in the middle is a bad place to be because competitors are attacking from both below and above. The attack from below is based fundamentally on price, particularly acquisition price. Those are the "Others." The attack from above is based on value, sustained high levels of research and development to deliver innovation, and best-of-breed capabilities and qualities. That's IBM. In the middle is HP, the JCPenney of the server market. In a few more quarters Dell will probably be right there, too, but we'll see.

I very much like IBM's position given these market trends, and I'm not too worried about the slight hardware revenue dip IDC and Gartner reported, given the structure of that dip. IBM's high-end got higher, to put it succinctly, and there's some good evidence IBM's margins improved. Moreover, most of the revenue associated with IBM's servers is not measured by its hardware revenue alone, and that's unique to IBM. When HP sells servers, they typically don't include much else from HP that customers buy. In contrast, it's very rare that an IBM server gets sold without substantial IBM content that customers buy.
I don't know exactly how big the high-end server market will be, but it will continue to be a terrific business amidst the continuing explosion of information, long-term economic trends, and increasing quality demands. As long as IBM keeps finding ways to differentiate and to innovate up and down its solution set, the company will do fine, and more importantly so will its many and growing numbers of customers. However, while IBM is very much pursuing its high-end strategy with gusto, IBM is also eager to push into volume markets — IBM in the role of Target (and/or Costco) offering an alternative to Walmart, metaphorically speaking. I'm referring, of course, to IBM's OpenPOWER Consortium with Google, NVIDIA, and others.

So these Gartner and IDC reports are really not good news for HP in particular. As I've said before, I don't know how HP gets out of its shrinking box. HP's CEO Meg Whitman has a tough job.

by Timothy Sipples August 29, 2013 in Cloud Computing, Economics, Financial, Systems Technology


IBM Announces the OpenPOWER Consortium

IBM's Tom Rosamilia describes IBM's OpenPOWER Consortium announcement. IBM is sharing the complete blueprints for its POWER microprocessors with several major industry partners: Google, NVIDIA, Mellanox, and TYAN. Others are welcome to join. Yes, that Google, the search giant that buys many thousands of bespoke servers but which also has some of the most challenging data center-related problems in the world. Now Google gets an entire, more advanced microprocessor design to use as it pleases.

It's no secret that the traditional RISC UNIX market has struggled. IBM has been steadily gobbling up UNIX server market share for several years as other UNIX vendors, lately HP and Oracle/Sun, collapsed. But it's not good enough to dominate a (probably) declining market, so IBM is wisely trying to expand the whole market and go all-in on Linux cloud infrastructure. IBM has some superb launch partners in that effort.

I think it's a bold IBM move but a calculated one. IBM is basically trying to replicate ARM's success in the processor licensing business, but in a much different market, one Intel currently dominates with its proprietary x86 architecture. I'm referring to massive, horizontal scale-out computing architectures in (typically) remote data centers: large Linux-based public clouds, notably Google's, but also NVIDIA-infused GPU technologies for supercomputing, as another example. This isn't competing with ARM at all; despite a few rumblings, ARM isn't charging into data centers, and optimizing microprocessors for mobile use cases is quite different from optimizing for public cloud backends.

So will Intel get "squeezed" in the middle? The middle has proven to be a dangerous place to be in the server processor business. That is one more reason I remain extremely bullish on zEnterprise, by the way, which is doing very well indeed. It's certainly an interesting development, and it's really good news for customers.
Frankly, IBM had to do something bold, and this move definitely qualifies. It also puts IBM's acquisition of SoftLayer into better focus. I was a little unclear how SoftLayer would fit into IBM's strategy, but now it makes a lot more sense. It also makes complete sense for IBM's launch partners to join the OpenPOWER Consortium. I like this.

by Timothy Sipples August 7, 2013 in Cloud Computing, Linux, Systems Technology


U.S. Class Action Lawsuit Filed Against Infosys

The Register reports on a new class action lawsuit filed against Infosys in the United States. The lawsuit alleges that Infosys practices employment discrimination on the basis of race and national origin. I have no idea whether the lawsuit has merit or not. That said, in my view Infosys is going to have a tough time explaining how the demographic makeup of its U.S.-based workforce is not prima facie evidence of widespread employment discrimination in the U.S.

Let me be clear on a couple of points, though. First, I'm a huge supporter of workforce diversity. Organizations are stronger and more effective, in my experience, when they have workforces consisting of talented individuals with the broadest possible range of experiences, backgrounds, and perspectives. If the lawsuit is correct, that description does not apply to today's Infosys in the U.S.

The other point I would make is one I've made before on a few occasions in different ways. IT choices have a great and growing impact on total staffing levels and costs. As businesses continue to try to find ways to reduce costs — or to further pad their profits, if you prefer — they will naturally continue focusing on labor costs. This relentless business behavior is a major public policy challenge, among other things. Like most countries, particularly among developed economies, the U.S. expects employers to follow a few rules to support some limited public policy objectives. Unemployment insurance is one example among many.

In general, organizations which are taking advantage of mainframe technology, especially new mainframe technologies, have strong, highly labor-efficient IT infrastructures. Yes, that infrastructure requires some competent, experienced individuals who command (and should command) reasonable salaries and workplace comforts. Greater overall business efficiency and better service qualities are never free.
Unfortunately there are many organizations that are not taking advantage of these mainframe-unique efficiencies and that are trying to cope with escalating staffing requirements to manage sprawling IT infrastructure that's increasingly getting out of control. As those cost pressures mount, there will be too many individuals and companies that try to bend or break the rules, including important labor laws. Maybe I just described Infosys and its behavior, or maybe not. The plaintiffs have to prove their case, and it's not particularly hard to file a lawsuit. I'll be watching this case and other, similar workplace developments to see what they portend for the future of IT employment in the U.S. and elsewhere.

by Timothy Sipples August 6, 2013 in Current Affairs, People


Canonical (Ubuntu) Needs a Mainframe: An Elaboration

I forgot about this 2011 blog post. That wasn't such a great prediction, was it? I want to quote one of the comments Mark Shuttleworth wrote, which I think illustrates his profound misunderstanding, a misunderstanding that might have contributed to Canonical's recent security failure:

"Cloud architectures are fault-tolerant by software architecture, only an idiot would pay for the same fault tolerance twice. Therefore, no matter how hard IBM tries to sell the idea of mainframes being the hardware for cloud, I don't see it panning out that way. The whole point of the work that's brought us cloud is to be able to do very large, very reliable services on low-cost, unreliable hardware."

OK, setting aside for a moment that reliability is only one of the many qualities of service — security is another one, as Canonical has belatedly and tragically discovered — no, I disagree, and so do most IT professionals. The reason is very simple: everything in IT fails, particularly software and administration (people). Over the past few days I've repeatedly explained that IT doesn't work well unless you get both the hardware and the software right, and unless both are co-engineered to work together cooperatively, with really excellent, common, consistent autonomics that reduce the people risks as much as possible. That's especially true in availability engineering.

One of the many beautiful aspects of the zEnterprise family of solutions — IBM's decades-long genius, really — is that IBM always expects software to fail, whether its own software or its customers'. Only last week I heard a long and painful story from a client, who explained in great detail how often their pure software cluster failed, leaving thousands of users with nothing to do. Programmers are not perfect. Nor, necessarily, are hardware designers, but "defense in depth" is extremely valuable when engineering for high availability.
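The defense-in-depth point is easy to quantify with a back-of-envelope calculation. The failure probabilities below are my own illustrative numbers, not measurements of any product; the point is only that independent layers multiply:

```python
# Purely illustrative numbers: suppose a software clustering layer and
# an independent hardware recovery layer each mask all but 0.1% of
# failures, and that their failures are independent.
p_layer_fails = 0.001

one_layer = 1 - p_layer_fails        # a single layer: 99.9% available
two_layers = 1 - p_layer_fails ** 2  # outage requires both to fail at once

print(f"one layer:  {one_layer:.3%}")   # 99.900%
print(f"two layers: {two_layers:.5%}")  # 99.99990%
```

Real layers are never perfectly independent, which is exactly why they need to be co-engineered; but even with some correlation, this multiplicative effect is why hardware-plus-software availability engineering beats either layer alone.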
And that's what IBM has done and keeps doing. It's not that IBM hasn't tried other approaches. Decades ago IBM implemented software-based clustering in IMS, for example. It's merely "OK" by mainframe standards, meaning it's superb software clustering but it isn't what mainframe customers expect. IBM still supports that form of clustering, but a couple of decades ago IBM introduced the first version of Parallel Sysplex, which relies on a combination of common, hardware-based features and software-enabled products that exploit those hardware features. IMS is one of many examples, but only one. Parallel Sysplex has evolved over the past two decades and continues to evolve and improve. (This month's announcement of RoCE memory-to-memory high performance networking is a good example. Ostensibly that's a hardware feature, and it is, but it's actually a clever, packaged, integrated combination that provides a common service to all applications, transparently, always with multiple layers of availability and fault tolerance.)

Frankly, it takes an amazing amount of hubris to suggest that programmers always get it right, every time, and never ever muck up what they previously got right. Or that it's not possible to learn from other engineers who took a different approach that actually works in the real world. Again, look at Apple. Why on earth did Apple buy hardware companies? Why does it have engineers who can design chips? Why is it reportedly investigating the purchase of its own chip foundry? Both hardware and software matter to achieve a particular business outcome. Granted, Apple isn't maniacally focused on maximum qualities of service in enterprise IT engineering like IBM is with its zEnterprise solutions; Apple is engineering for different outcomes than IBM, and the two companies literally never compete. But the core principle is the same.

I should say that I very much respect Mark Shuttleworth and his accomplishments. But I think he got this one wrong, very wrong.
We all make mistakes sometimes, myself included. We hopefully learn from those mistakes; otherwise we're doomed to repeat them.

by Timothy Sipples July 26, 2013 in Security


Happy New Mainframe Day! Introducing the zBC12 (Post #2)

IBM has posted a long and detailed frequently asked questions list which covers the zEnterprise BC12 and other related announcements today. Here are some more interesting details from my point of view.

Ever since the IBM z890 mainframe model introduced in 2004, IBM has offered an A01 subcapacity model with a CP capacity of about 26 PCIs and 4 MSUs. The z10 BC dropped the MSUs down to 3 but otherwise retained the same approximate PCI rating. That changes a bit now: the zBC12 starts at about 50 PCIs and 6 MSUs, so that's the smallest current model z/OS (and z/VSE and z/TPF) machine now available.

Is that bump to 50 PCIs and 6 MSUs a problem? Well, no, not with IBM software, which has been eligible for subcapacity licensing for many years. Also, admirably, IBM says they're pricing the zBC12 with 50 PCIs the same as the z114 (the previous model) at about 26 PCIs. In other words, even if you still only need 26 PCIs of capacity, you'll get roughly twice that capacity for the same price, according to IBM. Everything's better financially, then: if you wish, you can keep running with 3 MSUs of software (which will perform much better on a 6 MSU machine due to the way "softcapping" works) while holding the hardware costs level. Or you can bump up to 4, 5, or 6 MSUs of IBM software if you need it, still holding hardware costs level. All good news there! (And actually it's even better than I described.)

The potential trouble is that a few other software vendors still have antiquated licensing rules, charging full capacity even if you don't run their software that way. That might be a problem in particular situations, but the solutions are by now well known, notably switching to software products that have better licensing terms.
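The pricing claim above is worth a quick sanity check, using the approximate figures as quoted:

```python
# Approximate figures as described in the announcement: the zBC12 A01
# delivers about 50 PCIs at the same hardware price as the z114 A01's
# roughly 26 PCIs.
z114_pcis = 26
zbc12_pcis = 50

capacity_per_dollar_gain = zbc12_pcis / z114_pcis
print(f"~{capacity_per_dollar_gain:.2f}x capacity at level hardware cost")
# ~1.92x
```

So "roughly twice that capacity for the same price" checks out, and the software side is protected by subcapacity licensing.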

IBM has goosed up the I/O capacity and performance considerably, but I'll let you read about the details in that linked document above.

On the zBC12 and also the zEC12 you can now configure up to 2 zIIPs and up to 2 zAAPs per CP, including subcapacity CPs. For example, on that A01 zBC12 capacity model (with 50 PCIs) you could add up to 2 zIIPs and 2 zAAPs, and all those specialty engines operate at full speed. There's one caution, though: IBM says the zBC12 and zEC12 are the last servers to sport zAAPs. So it's possible IBM is bumping up this ratio in anticipation of zAAP retirement. My recommendation would be to order only zIIPs at this point, since zIIPs can now do everything zAAPs can. If and only if you run out of zIIP allotments and you could use zAAPs, then maybe get some zAAPs. Perhaps that advice is too obvious for those familiar with zIIPs and zAAPs, but I thought I'd mention it.
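The new ratio is simple enough to express directly. A tiny sketch of the rule as described above (up to 2 zIIPs and 2 zAAPs per CP, subcapacity CPs included):

```python
def max_specialty_engines(cps, ratio=2):
    # zBC12/zEC12 rule: up to `ratio` zIIPs and `ratio` zAAPs per CP.
    return {"zIIP": cps * ratio, "zAAP": cps * ratio}

# The A01 capacity model has a single (subcapacity) CP:
print(max_specialty_engines(1))  # {'zIIP': 2, 'zAAP': 2}
```

Note this counts entitlements only; whether you should actually buy zAAPs is a different question, answered above (probably not).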

IBM has posted a series of new mainframe videos, including this walkthrough video.

More to follow, including a lot of software details as I keep looking through all these announcements. And it's software that makes the magic happen, so stay tuned.

by Timothy Sipples July 23, 2013


Happy New Mainframe Day! Introducing the zBC12 (Post #1)

We saw some clues that IBM was getting ready to announce a new mainframe today, and here it is: the new zEnterprise BC12. There are a lot of IBM announcements related to the new zBC12, and I'm sifting through all the information. Here are some of the highlights as I see them. I'll provide further updates as I read all the materials IBM has released.

First, the hardware itself: processor clock speed increased to 4.2 GHz, core count increased, and both total capacity and per-core capacity are up more than I would have expected. A single zBC12 can provide almost 5,000 PCIs for z/OS, z/VSE, and/or z/TPF with its maximum 6 CPs. That still leaves another 7 engines for any mix of specialty cores (zIIPs, IFLs, etc.). Uniprocessor performance is above 1,000 PCIs. Also, the maximum memory is now up to 496 GB of usable RAIM-protected memory, which is another very nice bump. This new zBC12 can soak up a large amount of workload.
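Those capacity figures also let us estimate multiprocessor scaling. The exact PCI ratings below are my own placeholders chosen to be consistent with the rounded numbers above ("almost 5,000" total, "above 1,000" uniprocessor), so treat the result as a rough estimate:

```python
# Placeholder ratings consistent with the rounded figures in the post.
total_pcis = 4958   # ~"almost 5,000" PCIs with all 6 CPs
uni_pcis = 1064     # ~"above 1,000" PCIs for a single CP
n_cps = 6

scaling_efficiency = total_pcis / (n_cps * uni_pcis)
print(f"~{scaling_efficiency:.0%} of perfectly linear scaling")  # ~78%
```

The shortfall from linear is the usual multiprocessor effect; PCI ratings bake it in, which is one reason per-engine capacity matters as much as the headline total.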

There's a new "LPAR absolute hardware capacity" setting on the zBC12, which presumably will also be available on the zEC12. This setting will be mainly of interest to Linux on zEnterprise customers who want to set particular IFL capacity limits, mostly for software licensing purposes.

IBM is introducing exploitation of 2 GB memory page support in the zBC12 and zEC12 starting with Java 7 for z/OS, to improve Java performance and capacity yet again.

There's a new high speed memory-to-memory adapter ("10GbE RoCE Express") which provides something analogous to HiperSocket connections but now between machines, to speed up data transmission and reduce networking overhead. This new adapter is available for both the zBC12 and zEC12.

There's another new adapter for both models called the zEDC Express which accelerates data compression.

I always wondered why IBM had 101 customer-configurable cores on the zEnterprise EC12 machine. It's an odd number, and that's unusual for mainframes. Now we know: IBM reserved one core for a new Internal Firmware Processor (IFP), which is invisible to customers. This IFP, also included as a standard feature on the zBC12, supports the new RoCE Express and zEDC Express functions. I expect IBM will use this "hidden" processor for progressively more supporting functions, much like how SAPs provide various accounting and support services for I/O. We'll never really deal with the IFP and its control programs, but they'll be there, supporting particular new functions.

Much more information and analysis will follow. Stay tuned.

by Timothy Sipples July 23, 2013 in Innovation, Systems Technology


© Copyright 2005-2011 the respective authors of The Mainframe Blog.