Data Center Pulse Blogs


Moore's Law Slowdown in CPU Performance Risks Driving Up Rack Power Density

There are a number of factors that I’ve always believed point toward the benefit of building infrastructure with high density (HD) in mind. I’ve pointed at sustainability: if you build fewer buildings to support the same gear, you are being more sustainable. I’ve suggested that trends in High Performance Computing and Big Data would lead more of us toward HD infrastructure designs. However, it’s now highly likely that we can add a slowdown in Moore’s Law to the drivers for HD.

How Moore’s Law affects infrastructure designs?

Let’s consider networking, storage (flash in this case), and CPU as three of the primary hardware technologies that support the architecture of most modern infrastructure stacks. Historically the CPU has been in the lead and pulling away from storage and networking, but that’s changing. More recently (2013 and later) flash and SSDs have been making huge improvements in capacity vs. cost and have in fact outstripped Moore’s Law as it relates to the CPU. Having the CPU in front created design strategies that relied on more disks with more distributed (striped) data, because there was always more CPU than there was I/O. With flash and SSDs now quickly surpassing and in fact accelerating past the CPU in performance, we will soon have the opposite effect. The problem going forward will be getting more CPUs closer together and closer to the storage, and therein lies the potential problem of density.

Credit for image goes to iSYS-Data.Com

Could the need for more densely packed CPUs hurt me?

Think of the CPU as a brain and storage as questions. What happens when the questions come faster than one brain can handle? The simple answer is you add another brain. As you add more brains (CPUs) you draw more power and you create more heat. As storage technology continues to accelerate past the CPU, this power draw will become increasingly problematic for older data centers that aren’t equipped to handle 12kW+ as an “average” density across all their cabinets.

Four other high-density considerations

1: Converged infrastructure (more gear in a smaller space)

Right now, converged infrastructure sales are going through the roof. Whether you’re buying from VCE, HP, Dell, NetApp, Nutanix or SimpliVity, the story is the same…more power Scotty!

UCS Chassis – 2.0 kW – 6 chassis per cabinet = 12kW

HP Matrix Chassis – 6.0 kW – 4 chassis per cabinet = 24kW

Neither of the above examples is configured at max potential power consumption.
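To make the arithmetic behind those figures explicit, here is a minimal sketch. The per-chassis draws and chassis counts are the ones quoted above; a real deployment would use measured loads rather than these rough numbers.

```python
# Estimate cabinet power density from per-chassis draw.
# Figures below are the illustrative ones quoted in the examples above.

def cabinet_kw(chassis_kw: float, chassis_per_cabinet: int) -> float:
    """Rough cabinet density: per-chassis draw times chassis count."""
    return chassis_kw * chassis_per_cabinet

examples = {
    "Cisco UCS": (2.0, 6),   # 2.0 kW per chassis, 6 chassis per cabinet
    "HP Matrix": (6.0, 4),   # 6.0 kW per chassis, 4 chassis per cabinet
}

for name, (kw, count) in examples.items():
    print(f"{name}: {cabinet_kw(kw, count):.0f} kW per cabinet")
# Cisco UCS: 12 kW per cabinet; HP Matrix: 24 kW per cabinet
```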

2: Open Compute and/or HPC-oriented infrastructure designs

Modern infrastructure is taking on more of the characteristics of High Performance Compute designs. There are a number of reasons for this shift toward HPC, but one of the main drivers is the increasing use of, and reliance on, big data as part of critical infrastructure/application environments. Simply put, there are real performance gains associated with putting more compute in smaller spaces, right up next to your disk farms.

3: Cost & Space – Sustainability

I’ve always liked density because I’m an efficiency guy. There are some obvious benefits to having your gear packed into smaller spaces: better overall performance (efficiency) and less floor space, cabling, and racks (efficiency and cost). There’s also the long-term sustainability (blog) factor. There are estimates being made today that suggest cloud and IoT will drive our global IT power consumption footprint from 3% to as much as 20% of total power generated. If we continue to populate the world with low-density data centers, you’ll soon be able to walk on top of them all the way across the country.

4: Containers

A key driver behind the use of containers, beyond ease of deployment and the like, is efficient use of infrastructure. A big part of this efficiency comes from a much higher average utilization of the CPU. This higher utilization will inevitably result in greater power use per rack unit.

Next Steps

Evaluate your current data center footprint. This includes internal data centers and those you rent or lease from partners. If these data centers aren’t designed to support 500W or more per square foot, or an average rack density of over 12kW, then you need to consider finding new space. In some cases you might be able to retrofit, but in most cases the facility would literally have to be redesigned. I’m already seeing rack densities increase significantly (many in the 30kW+ range), and more people are asking for the ability to support high density racks. This issue is coming; it will be up to you whether you let it “surprise” you or put plans in place now to mitigate the risk.
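If you want to run that check yourself, here is a rough sketch using the thresholds from the paragraph above; the capacity, floor-area, and rack-count inputs are hypothetical placeholders, not figures from any real facility.

```python
# Quick check of a facility against the high-density thresholds discussed above.
# Input values are hypothetical placeholders; substitute your own numbers.

it_power_capacity_kw = 2000.0   # usable critical IT power (assumed)
raised_floor_sqft = 10000.0     # white-space floor area (assumed)
rack_count = 250                # installed or planned racks (assumed)

watts_per_sqft = it_power_capacity_kw * 1000 / raised_floor_sqft
avg_kw_per_rack = it_power_capacity_kw / rack_count

print(f"Power density:      {watts_per_sqft:.0f} W/sq ft (target: 500+)")
print(f"Average rack power: {avg_kw_per_rack:.1f} kW/rack (target: 12+)")

if watts_per_sqft < 500 or avg_kw_per_rack < 12:
    print("Below the high-density thresholds: plan a retrofit or new space.")
```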

Why Does Cloud Bring Democracy to IT?

 

The Dictator is Ruling the House of Information Technology with an Iron Fist

The ENIAC was introduced in 1946. As the first general-purpose electronic computer, it was a huge leap forward in how humans got work done. At the time of its invention the ENIAC was the best way in the world to crunch numbers, yet only a very small and very exclusive group of people had access to it. This was truly the era of the "haves" and "have nots" as far as Information Technology is concerned. Through the subsequent 50 years we went through phases of making IT available to a wider audience. In the 70s most large companies and universities had a mainframe or access to one. However, as a percentage of the business population this was still a pretty exclusive group, who mostly had limited access to a shared resource. In the 80s we began the era of distributed computing. Now pretty much any company or individual with a little cash could have full-time access to some level of compute capability, from desktop PCs to mainframes, minis, and towers. At this point, while technology access was fairly widespread, the barrier to entry was still pretty high. The average cost of a desktop was $2,500, and software and support could add another $1,000. The average mom and pop shop or home computer user couldn't afford that kind of spend on something that would be used for writing letters or doing the occasional spreadsheet.

Fast forward from the 80's to the year 2000 and the Dictator is Now Allowing Peaceable Assembly

By 2000 the relative cost of a PC had dropped to under $1,000, compared to late-80s prices adjusted for income and inflation. As you might have guessed, this dramatically broadened the accessibility of computing. There is also the little thing called the internet that came to life between the 80s and 2000. The internet lowered the barrier to entry by making information and applications available to anyone who had a computer and could get online, or who could rent time at an access point terminal. This combination of yearly decreases in the cost of PCs and wide access to the internet continues to this day, making it a little bit easier for the average Sue/Joe to take advantage of readily available IT tools.

What's the Problem with This Form of Government?

Unfortunately, while the pool of "haves" has increased dramatically every decade since the 50s, there are still a large number of businesses and home users who are "have nots" or, at best, "have a little". There are myriad reasons for this continued disparity of ownership:

-          Not everyone has easy access to a broadband connection yet, especially in developing parts of the world. (Like a big portion of the undeveloped world called the US of A)

-          In major parts of the world there is no infrastructure to support owning and using a computer; you couldn't get the power or the broadband necessary to put it to use.

-          The big kicker is SOFTWARE! The cost of software has either held steady or increased with inflation over the years, even as every other aspect of technology has decreased markedly.

 

 

Cost of Hardware Over Time*

The above graph shows relative pricing for computers and peripherals. Generally speaking, the price of a PC has dropped roughly 15% per year over the last 12 years.
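As a quick sanity check on that rate (my arithmetic, not a figure from the original post), a 15% annual decline compounds to roughly an 85% cumulative drop over 12 years:

```python
# Compound effect of a ~15% annual price decline over 12 years (illustrative).
annual_decline = 0.15
years = 12

remaining = (1 - annual_decline) ** years       # fraction of the original price left
print(f"Price remaining after {years} years: {remaining:.1%}")   # ~14.2%
print(f"Cumulative decline: {1 - remaining:.1%}")                # ~85.8%
```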

Why is the Cost of Software Still So High, and Why is That a Continued Roadblock to Wider Adoption of Information Technology?

Today we buy software and hardware based on a 100%, 24/7/365 use model. We continue paying for the Oracle license, or for Microsoft Word, regardless of our use characteristics. The average user of any one software package only spends a few minutes, or certainly no more than a few hours, a day using it. Paying for 24/7 instead of "paying as you need" means that in most cases we're paying 5-10X what we really should be paying. How can a 50-person company that is running on a shoestring budget afford world-class IT tools if they have to pay for them even when they're not being used? Imagine if your finance person could log in to Oracle for an hour a day, and 8 hours on the last day of the month, and only pay for their use. A perfect example of where this paradigm is beginning to change is in office productivity tools. As a small business or home user you can get (unsupported) access to office productivity tools that have most of the features that the comparable Microsoft products do. While the Google option isn't perfect, it is certainly a giant leap in the right direction towards democratizing IT. Think about all those pirated copies of MS Office that get spread around. True, it's a shame that they aren't being paid for, but it's probably also true that most of those copies would never have been purchased anyway. Things are pirated for a reason, and in many cases it is because the buyer would never be able to justify ownership at full price.
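To put a rough number on the 24/7-versus-pay-as-you-need gap, here is a small sketch of the finance-user scenario above. The 2,080-hour work year and the 21-working-day month are assumptions I've added for illustration; the hour-a-day-plus-month-end usage pattern comes from the example in the text.

```python
# Rough utilization math behind the "paying 5-10X" claim.
# Work-year length and days-per-month are illustrative assumptions.

WORK_HOURS_PER_YEAR = 2080        # a standard full-time work year (40 h x 52 weeks)

# Usage pattern from the example above: one hour per working day,
# plus eight hours on the last business day of each month.
working_days_per_month = 21
hours_per_month = (working_days_per_month - 1) * 1 + 8   # 28 hours
hours_per_year = hours_per_month * 12                     # 336 hours

utilization = hours_per_year / WORK_HOURS_PER_YEAR
overpay_factor = 1 / utilization

print(f"Hours actually used per year: {hours_per_year}")
print(f"Utilization of a full-time seat: {utilization:.1%}")                      # ~16%
print(f"A full-time license buys ~{overpay_factor:.1f}x more time than is used")  # ~6x
```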

 A Small Sampling of the Per License Cost of Some Common Software Packages Over Time

 

Package            Year   Price (approx. $US)   Year   Price (approx. $US)   Percent change
AutoCAD LT         1997   500                   2010   1,000                 +100%
Red Hat Desktop    2001   60                    2010   80                    +33%
Visio              2002   500                   2010   500                   0%

Information on software prices was retrieved from a number of different online sources, and then averaged.

Cloud has the Potential to Make the Land of Information Technology a True Democracy

The advent of cloud is no less an opportunity for global business and change than the introduction of the internet was. We haven't begun to understand what new businesses will be created and what business models will change as a result of widespread access to cloud-based IT services. However, this change won't happen by accident. There are still many evildoers out there who are attempting to derail our train to democracy.

I don't want to play the name game, but I will say that many of the biggest IT vendors in the market are working very hard to make the cloud business as usual. They're looking to maintain their margins and lock in their customers. This effort is completely contrary to many of the benefits that should be assumed as part of using or implementing cloud services.

Be very wary of big IT vendors bearing gifts of "cloud" solutions, "best" hardware platforms, and long contracts.

By its very definition, cloud is a "pay as you need it" IT model. If you're getting locked into long contracts, and you're still making hardware purchases, you're heading down the wrong road. Cloud has created a model that should remove the final barriers to a truly democratized IT model. A small business owner should be able to buy the service they need, in the quantity they need. They should also be able to buy a package of IT services that allows them to mimic much larger organizations. The home user, or the user in undeveloped countries, should be able to access technologies and solutions through a phone or similar handheld device, and they should only have to pay for what they use. Maybe an engineering student could really use a tool like Visio, but only needs it for an hour a day during one semester. Instead of paying $500 for a copy that will rot on his/her computer, they can pay $10 a month for the access period that they need. The same opportunity holds the world over. Anyone with even the most basic access can now work with and experience the benefits of information technology. This "equal access" to IT creates a democratic world, which allows each of us to succeed or fail on our own terms.

Why Democratic IT Equals Amazing Opportunity

Just as the micro-loan programs employed in India brought opportunity to the unserved masses, cloud can bring tools and technology to a whole new set of customers who would otherwise have been denied them. Cloud also has the potential to provide a much richer application landscape, giving businesses and home users capabilities that have historically only been available to large enterprises. Imagine what this newfound access and capability set will mean for the generation of new business, new ideas, and new ways of solving problems. Think of eBay and its impact on small/home business, and magnify that opportunity across another two billion home users and small businesses. At the large enterprise level the benefits will come in terms of millions of dollars saved, faster time to market, and pay-as-you-grow contracts.

Within the next 2-3 years we will begin to see how greater access to IT means opportunity for everyone. The only question is whether you are going to be a voting member of this new society or a holdover from the good old days of dictatorship.

 

 

German companies ask for Internet border patrol.

In the last year, multiple companies have started serving German customers out of Germany-based datacenter locations.

There seems to be a particularly strong sentiment around security & privacy among German companies after the Edward Snowden leaks. The knee-jerk reaction is to mandate that servers sit within German borders, as if that would take any security & privacy concern away. Cloud providers are now starting to follow this customer demand.

Interestingly, this reaction is more sentiment-driven than anything else, as there is no legal ground for the request. That is especially clear as more and more German companies put this in place as a default policy, regardless of the type of data (privacy-sensitive or not…).

Looking at the Federal Data Protection Act (Bundesdatenschutzgesetz, or “BDSG”), it states that certain transfers of data (like personal data) outside of the EU need to be reported and approved, and that data controllers must take appropriate technical and organizational measures against unauthorized or unlawful processing and against accidental loss, destruction of, or damage to personal data. Nothing says servers need to be in Germany.

Looking at other EU countries, Germany seems to be the only one where organizations exhibit this behavior. The only next in line could be Switzerland.

Talking to my industry peers in Germany is a surreal experience on this front. I always like the risk-analysis approach to privacy & security, and that leads to interesting conversations about what the benefit of hosting the data in your own country would actually be. Some know there is no legal need but state they feel safer that way. If we take the NSA paranoia a few levels up, then I could ask:

- Does your German datacenter provider use any IBM/HP/Dell/Cisco/… equipment? The answer is mostly yes, so you would be vulnerable to the NSA anyway. (German article)

- Is your German datacenter provider actually a German company, or is it owned by a non-German company (or does it have offices outside of Germany)? In that case it seems the US can mandate a data handover.

There are multiple other 'scare' scenarios by which people could get their hands on the data, like accessing the data on laptops that travel across borders.

This all focuses on the NSA spying on you, but neglects the fact that the Bundesnachrichtendienst has full local authority to access the German-based data… yet that is not perceived as an issue.

The really obvious one is the internet connection itself; I would assume none of the German-hosted servers are connected to the internet, as that would be the easy road into the data for government agencies and hackers. I would also assume no travelling employees or remote offices are accessing this data…

But I seem to be assuming a lot… most companies actually do all of this, and again it isn’t perceived as a problem.

As data flows freely across borders by the nature of ‘the Internet’, my advice to the overly sensitive CIOs in Germany is that they actually need to take it up one level:

Germany should start an Internet border patrol, executed by the Government. Every packet travelling in and out of Germany should be inspected for sensitive content. Obviously, encrypted packets should be dropped and no VPNs allowed. This is the only way to address the risk you’re trying to capture by moving servers into Germany specifically. I think the Bundesnachrichtendienst would appreciate this effort.

:-)

 

We are eagerly awaiting the launch of the new data regulation from the EU that should unify regulation across Europe. See: http://ec.europa.eu/justice/data-protection/index_en.htm

Hopefully then my German peers will start treating this with some common sense. The EU at least seems to get it:

Protecting your personal data - a fundamental right!

The free flow of personal data - a common good! 

Note:

My team was recently asked whether certain server racks in the datacenter could be turned over to a specific country's embassy… so the data could be covered under the Vienna Convention on Diplomatic Relations. That would be interesting from a maintenance perspective :-)

 

Data Center Infrastructure Management - Where's the Beef?

Vendors and pundits alike have asked the question, "What's the problem with Data Center Infrastructure Management (DCIM)? Why is there no real traction in a market that should be measured in the billions?" Is the lack of traction due to poor products, or are the products too pricey and complex? The answer to the adoption problem is more complicated than you might expect, because it's yes and no to each of the aforementioned potential reasons, and more.

Schizophrenia

There's a general lack of acceptance or understanding of what a DCIM tool is supposed to be. Is it asset management, capacity planning, resource management, environmental controls, automation, or all of the above and more? When the customer hears too many voices, they tend to ignore all of them, at least I do.  To combat this issue, DCIM vendors will have to get better at highlighting and demonstrating value in a clear and simple way. I know this seems obvious, but I would argue that the majority of Data Center operators aren't listening yet, likely because they haven't "heard" the right message.  A well-executed DCIM project takes time and effort, time and effort that are not likely to be prioritized or assigned in clear goals or data center KPIs. Beyond creating a clearer and more pointed message, the DCIM vendors are going to have to build relationships with buyers as a way to help the buyer develop goals that benefit from the implementation of DCIM.  The traditional drop-in, sell something and record a win for the quarter won't cut it in the vast majority of enterprises.

What's the biggest issue?

I alluded to it in the previous paragraph, but to me what is, and what has been, the biggest issue with the adoption of DCIM is the lack of a single owner for the company data center(s). I know it sounds odd to suggest that not everyone has a data center manager, or at least a person who plays that role on TV, but it's true. Most companies have at least created the role of "Data Center Manager" (DCM), but the question is: what is this role actually responsible for, and who are they responsible to? I've been a DCM, and I've had the role report to me many times. In most cases the DCM is responsible for basic data center issues like capacity, security, and IT operations functions done within the confines of the DC. What's missing is the link between IT and Facilities, and a demand for a holistic report on the performance of the data center.

What's the problem with not having a holistic view and owner for the DC?

How do you sell a new roof to apartment dwellers? What if there wasn't a property owner/manager available? These issues and many more are what create adoption problems for what would otherwise be terrific data center products and services. If there isn't a single person responsible for all things data center (HVAC/land/emissions/PUE/power costs/network access/generator maintenance/operations staff/security/capacity planning/asset management, etc.) who is expected to report on performance to the C-suite, how can you sell a product whose best selling point is "holistic data center improvement"? Also, since companies aren't sending people out to be trained on owning and operating the company's $100 million data center(s), there are few if any vendors offering the comprehensive training required. Don't get me wrong, there are lots of great classes out there covering all the different subsystems that support a data center. Unfortunately, none of the individual subsystem training options gives you a "data center manager".

So what to do with products like DCIM?

The truth is, I think many vendors will struggle unless they can create a message that resonates with a specific IT or Facilities function; they can then focus on getting their foot in the door. Once inside, they can create value, which will allow them to make headway across functions. For the IT or Facilities buyer, if you're interested in getting DCIM, pick an area where you can get an easy win (don't boil the ocean), then work on gaining broader cross-functional acceptance. DCIM isn't the only product or service affected by this "lack of a perfect buyer" syndrome, but it is one of the better examples.

Treating the Data Center as a System is another approach

I've been an advocate for treating the data center as a system for many years now. In fact, at Data Center Pulse we created the "Data Center Stack" to give customers a visual reference to work from when communicating issues, opportunities, changes, or links across the entire holistic view of the data center. Many of today's better DCIM products have a depth and breadth of capabilities that can scare the prospective customer. Because most data center folks have a narrower set of responsibilities, helping them see the data center as a system might entice them to consider a comprehensive DCIM solution instead of point fixes.

DCIM is Dead, Long Live DCIM

How the future data center will incorporate building management systems, DCIM, infrastructure platforms, and cloud APIs is hard to foretell, but it's safe to say that more visibility and real-time responsiveness in the data center isn't a temporary demand or trend. As more functions become centralized in the data center and IT gets even more embedded in everyone's lives and places of work, the demand for improved data center performance will only increase. Show the customer where the beef is and they'll come buy it. However, keep in mind that you might need to size your pricing based on the customer's appetite.

Some additional DCIM Related blogs on Data Center Pulse:

DCIM it's not about the tool, it's about the Implementation

Before you jump in on the DCIM Hype

Real-time Data Center Inventory Management

 


My move to SDL

It’s the weekend before the holiday season, and just like last year I find myself at a US airport making my way home… just in time for Christmas.

Sitting in the airport lobby, listening to Christmas songs, I can’t help but reflect on the past year.

A lot has happened and a lot has changed. I left OCOM (LeaseWeb/EvoSwitch/Dataxenter) in September of this year, after two years. Something that some of my peers in the market didn’t expect, but it was long overdue. For too long I couldn’t identify with the way the company was run and with its strategy. No good or bad here… just a big difference of opinion on vision and execution.

Over the last two months I have been able to talk to lots of different organizations in an effort to figure out what my next career step should be. I needed some time to recuperate from my little USA adventure with LeaseWeb & EvoSwitch. It was a great project to participate in, but all the travel took a big toll on me and my personal life.

I also learned how passion for your work can be killed and what it takes to spark it again, and how people are motivated by the “why” in their jobs.

I had some good conversations with DCP board members Mark and Tim about my frustration with the lack of progress in the datacenter industry. It felt like I had been giving the same datacenter and cloud talks for the last three years, and things still weren’t moving. I wanted the opportunity to really make a difference.

In November of this year I joined SDL as VP of Global Cloud Operations. Again, something most of my peers didn’t expect.

SDL’s services are based around language translation, customer experience, and social media intelligence. Most of them are geared toward the CMO and his or her department. The company is making an aggressive play into the PaaS/SaaS market.

Gartner has identified the potential of CMOs moving into cloud computing consumption, publishing “The Top Four Impacts of Cloud Computing on Integrated Marketing Applications”. This is based on the prediction that five years from now CMOs will spend more than most CIOs.

Running this PaaS/SaaS environment requires large-scale infrastructure that is able to handle high volumes of traffic and data storage. Being able to deliver these services on a worldwide scale, with the ability to scale out, doesn’t only require technical transformation but also process and organizational change.

During my conversations with SDL’s leadership (and specifically its CTO), their vision around infrastructure and service delivery lined up perfectly with mine. It encompasses everything I have worked on over the last five years and the vision I have evangelized for many years: DevOps, Kanban, Infrastructure as a Service (and the true meaning of the ‘service’ part), Big Data, etc.

Over the next months and years I hope to execute on the company’s cloud vision and build the needed reliable & scalable infrastructure, and a world-class team to build & support it.

In the coming months I will share more information on SDL’s journey.

Have a great Christmas!

EE-IAAS – call to join IaaS energy research

The cloud market is hot. The IaaS market is seeing a lot of growth, and new IaaS providers seem to be entering the market on a daily basis. Enterprise IT shops are exploring the usage of public cloud solutions and building their own private cloud environments.

IaaS distributions like OpenStack and CloudStack seem to have thriving communities.

Organizations that are successful in deploying IaaS solutions, either for customers or for internal use, see rapid growth in demand for these services.

IaaS services still need IT equipment to run on and datacenters to be housed in. While there has been a strong focus on making the datacenter facility and IT equipment more energy efficient, not much is known about the energy efficiency of the software running these IaaS services.

A year and a half ago the Amsterdam University of Applied Sciences (HvA) launched SefLab, an energy consumption lab focused on energy-efficient software.

Several EU Data Center Pulse (DCP) members donated time and equipment to the research during the launch.

During SefLab’s start-up phase, students and researchers got familiar with the research facility and its equipment and compared web browsers on energy usage (results are on the SefLab website).

Now that the lab is operational, it’s looking for new areas of research. One of the focus areas is the development of IaaS distributions like CloudStack, OpenStack, etc. and their energy efficiency: what are the effects of architecture choices? Is there a difference in energy efficiency between the distributions? What power-management and reporting elements are missing from the distributions?

The HvA and SefLab are looking for:

  • IAAS distribution users willing to participate in the research. This is possible in several ways ranging from active participation in formulating research questions, managing a subproject, and supervising student projects, to participating in workshops and events of the project.
  • IAAS distribution communities willing to participate in the research. Future development of IAAS energy management components can be tested and validated in the SefLab facilities.
  • Vendors willing to support the effort with knowledge transfer and hardware/software donation.
  • Industry groups willing to support the effort through their knowledge-transfer networks.

The research done by SefLab is really end-user driven, so this is a natural fit with DCP’s strategy. Details can be obtained by contacting jan.wiersma@datacenterpulse.org

DCP members will be updated on the project’s progress via the LinkedIn DCP member group.

See full call for participation here.

If the Tech Doesn’t Fit, You must Convict

The usual arguments on Twitter about new technology and solutions run the gamut from "it isn't real" to "there's only one real cloud". These "discussions" seem to go on and on and every time a new solution is introduced they are reignited.  Is all tech questionable, are all services terrific, are particular services better from specific vendors?  Most often it's more about fit in a specific organization or company, so how you position yourself is the key.

The rub

Too many vendors make claims that they don't need to make. The claims are wrong, or at a minimum a reach, yet the technology underlying the claim is actually good. Why add "It even makes coffee" to an advertisement for a Swiss Army knife? Doesn't the knife already do many useful things very well? If you've got a good product, sell it on its merits. In the case of modern technology solutions we're often hearing one or more of the following three terms: "cloud", "big data", and "SDN". Really, you've added a management tool to vSphere and now you have a "cloud" product? You provide a data sync solution for Teradata arrays and that means you're in the "big data" market? Maybe you've figured out how to apply passwords to a common set of switches with a single command; well then yes, you must have an SDN product.

My biggest beef is with the use of the term cloud to support almost every new product and service release. Even the phone company thinks it's important for the customer to believe their service is running in the cloud. I mean really, who cares? But even more specifically, it's the marketing pitches of cloud-ish products to internal IT teams that are driving many of us in the Twitterverse to frothiness. Consider the debate about public vs. private cloud. The debate is mostly dead now, and private cloud exists, but you're not doing private cloud believers (like me) any favors when you sell a solution that's really just some simple scripts on top of virtualization and call it "cloud".

The deal

The deal is, scripts on virtualization can be a great solution. Just because it isn't cloud doesn't mean it's inherently bad, just that it's not cloud.  While "real" cloud may be the long term destination for virtually all infrastructure, it doesn't mean that more effective and automated use of your servers isn't still a worthy pursuit. 

It's also true that in many cases applying some simple automation and virtualization to infrastructure is all many IT organizations should be trying to do (right now). Moving legacy applications to cloud is fraught with risk and offers limited value. On the other hand, being able to quickly and uniformly apply common policies and/or patches to application environments can be really handy. Lastly, many organizations are about as ready to adopt cloudy or agile operations as a Yugo is to run the Indy 500.

Of course if your product or service sucks, then in the long run it won't matter what you're telling the customer, but you might as well start from a point of trust.

Next time

Position, position, position: it's the technology vendor equivalent of the real estate mantra of location, location, location. Determine how and where your solution actually fits in the varied landscape of IT organizations and industry verticals, and then put the positive spin on what it really does. I know this sounds like what every company does. Yet interestingly, I speak with company reps every day who don't understand the drivers that can and often do affect IT buying decisions. The hardest part is that many of these decisions aren't based on logic. In the world of sales, IT is probably more complex than almost any other product or service category.

 

 

 

 

Enterprise Legacy Environment Cloud Adoption vs Netflix

Two recent blogs, one by @benkepes and the other by @jeffsussna, covered the topic of cloud adoption strategies versus legacy and best-case environments. In Ben's blog he talked about how Netflix is an outlier in the cloud space: their applications and use characteristics don't match enterprise use cases and don't match the complicated verticals of infrastructure and applications in the typical enterprise data center.

Jeff responded to Ben, suggesting that he sees the demand for agility in IT increasing the drive of IT organizations to focus on delivering new applications, even to the extent of replacing Tier I "systems-of-record" type applications. The idea of just moving legacy applications via "forklift" isn't really viable, and as such companies are looking for ways to replace instead of simply upgrade or move.

I agree with both Ben and Jeff, but differ a little in my expectations for how the average enterprise IT transition to cloud will play out. I do believe Netflix is an outlier, but like the original NASA programs, Netflix, Open Compute, and the like are leading indicators/inventors of what's likely to happen in the more traditional enterprise space. So my response to both Ben and Jeff isn't to suggest either is wrong, but rather that both are missing the how and the timing of the transition.

There is NO one way for any company or IT solution to proceed

There isn't a single path that companies will follow. The industry in general will often try to pigeonhole the entire enterprise space into a single strategy or a common technology platform. However, if the history of IT adoption has taught us anything, it's that there are few times when everyone uses the same strategy to get to the next level.

If it were me

As a former IT guy (person), I tended to be a bit of a leading-edge type. That being said, I still used a process of weighing value versus risk to make any decision on moving forward with a major change to applications or infrastructure. In the case of the move to cloud, I wouldn't use any other approach. Once I saw value outweighing risk, I would proceed with making an investment in change.

The drivers & process for making change

There are dozens of drivers that need to be weighed before making a major investment in core applications or large infrastructure projects, and some of the more common are:

-          Immediate return on investment to the business (value)

o   Not "does it make IT better"

o   Measurement of value is difficult at best, but having the business define the value is critical

o   Will the IT organization be in a position to help the business leverage and grow new application use

-          Having an accurate and realistic measurement of risk (often missing from IT projects)

-          You should measure risk and value against a natural upgrade cycle (In other words, if you would have done a major upgrade within the next year, how does waiting impact your value/risk?)

-          Staffing models versus partners. Having the right team members and/or external partners is a foundational question for how and when to proceed

I could go on with process and driver thoughts, but I'm sure you're getting the gist of it.
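For what it's worth, here is a minimal sketch of the value-versus-risk weighing described above; the drivers, weights, and scores are entirely hypothetical and only illustrate the mechanics, not a prescribed method.

```python
# A hypothetical value-vs-risk scoring sketch for a cloud-migration decision.
# Drivers, weights, and scores below are illustrative placeholders only.

drivers = {
    # driver name: (weight, value_score 0-10, risk_score 0-10)
    "immediate business return":      (0.4, 7, 3),
    "fit with natural upgrade cycle": (0.3, 6, 4),
    "staffing / partner readiness":   (0.3, 4, 6),
}

value = sum(w * v for w, v, _ in drivers.values())
risk = sum(w * r for w, _, r in drivers.values())

print(f"Weighted value: {value:.1f}, weighted risk: {risk:.1f}")
if value > risk:
    print("Value outweighs risk: proceed with the investment in change.")
else:
    print("Risk outweighs value: wait, or reduce scope until the balance shifts.")
```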

What I believe we're likely to see over the next 1-10 years

Enterprises will approach the change in myriad ways as I've already said. However, I think one common approach will likely mirror the following:

-          New applications - Put them in the cloud or explain why they can't be (I'm not making a distinction between public and private cloud, but I will say that there are reasons for using one over the other or both)

-          Old applications that don't have an upgrade value and/or aren't likely to be maintained into the future will be left as is or, where easily done, moved to the cloud for some efficiency benefits

-          Core applications will be grandfathered over time, as the trade-offs between value, timing, and application replacement options become better understood

-          IT will segment application sets based on infrastructure requirements and for the next 1-5 years we're likely to see a cloud tier approach that has a combination of cloud performance characteristics, scale, pricing, distribution, and reliability (all drivers that can be measured against each application)

-          5-10 years out, we're more likely to see all but the most hardened criminal applications converted to cloud applications, and we're also likely to see the tiering of cloud infrastructure consolidated to 1-2 different performance models

Of course, I could be completely wrong

Predicting the future is both easy (you rarely get challenged over what you said two years ago) and hard (because most of us don't get it right). So take this blog for what it is: one person's opinion on how enterprise cloud adoption is likely to unfold over the next 10 years. If you were to challenge me, I suggest you call into question things like advances in application design, improved conversion tools, better inter-cloud operability, and so on. You also might consider helping determine how a business can easily put a value on moving to the cloud. Right now we all inherently believe in the value, but no one has really drawn a proof-of-concept picture of how a totally cloud-enabled traditional enterprise will benefit. In the meantime, keep up the good fight and stay focused on delivering value, whether that means moving to cloud or not.

 

Jeff's blog: http://blog.ingineering.it/post/62910634258/are-we-sure-netflix-is-just-an-edge-case

Ben's blog: http://www.forbes.com/sites/benkepes/2013/10/01/cloud-computing-legacy-vendors-and-the-only-reality-that-matters/

 

 

Forbes Article only tells the old story of the CIO role

In a September 27th Forbes article, Raj Sabhlok, the President of Zoho Corp., discusses the disruption of the CIO role being caused by modern IT solutions (e.g., SaaS, cloud, consumer IT). The point of the article is correct: the CIO role does need to change. However, I found the changes suggested by Mr. Sabhlok to be both retro and extremely incomplete.

The argument for change in the CIO role

Mr. Sabhlok says (and I paraphrase a little), "Most CIOs today operate in a largely tactical capacity for a relatively naïve user base. That leaves them to oversee fundamental responsibilities such as break/fix, password management, software and hardware upgrades and email." So far, no real argument from me, other than to say I think the above statement oversimplifies the role of IT. Where I start to differ more directly is in the next section of the article, "Overcoming Tactical Irrelevance."

In this section, the author goes on to say that the workforce is growing more sophisticated, and the responsibilities of the CIO will as well. Unfortunately, while I wouldn't personally have written this intro, the author really missed the point in "new areas to brush up on" for the CIO. Mr. Sabhlok suggests that CIOs focus on the following four areas:

"Enterprise Security - To protect the enterprise from security attacks and breaches. CIOs need to be well versed in IT security and compliance."

Really, this is new? Most of the CIOs I know are pretty well versed in security already. What I believe they aren't prepared for, in many cases, is the new way security risks will be introduced by consumer IT and by having dozens of SaaS applications all independently sourced and managed. This may be a semantics issue, but I think the clarity is important here.

"Project Management - Delivering large and complex technology projects will demand CIO expertise in IT project management."

Dang, all this time IT has been screwed up because CIOs don't have project managers and can't run large projects? How did knowing how to run projects become more important at a time when we're also telling CIOs to stop building so much of their own stuff and let others do it? I'm not saying the skill won't still be needed, but come on, we might as well be saying the CIO needs to know how to lead a team.

"Financials - CIOs must have corporate financial skills to understand budgeting and expense management."

Again, how is this a newly important skill? I don't see this changing in any real way, except to say that IT might be forced to figure out how to plan for some work being OpEx that used to be CapEx, as the average CFO hates inconsistency.

Last but not least...

"Legal - Legal skills will ensure CIOs understand regulations, data storage and retention policies, privacy policies, and "other" contractual issues impacting their companies."

Well, I certainly can't argue with that. In fact, I'm pretty sure I helped write that into a CIO job description about 12 years ago, and it was pretty relevant 10 years before that. How, again, is this a new requirement? My opinion is that the CIO will need to gain a better understanding of how using public cloud and working in new countries, with many more user IT endpoints, might impact data sovereignty and privacy issues. They will also need to gain a better appreciation for what the risks are or aren't if they have an issue with a public cloud provider.

My thoughts on the whole thing

I don't believe the author meant to sound like he was stating the obvious, but I do believe that some of the really important needs of the "New CIO" were lost in translation, and others were just plain missed.
Each of the four criteria for the CIO listed above will continue to be relevant in IT for some time to come. However, where I think the author really missed out is where most people seem to fail, and that's in how to build an organization that is staffed and trained appropriately. I won't rehash everything that I and many others have already written about the changing IT organization, but suffice it to say, it will likely include big changes. Here are a few of the changes/needs I think are likely to be most critical.

IT to Business Functional integration

This has been an opportunity for IT for years, but for a number of reasons it either has not been pursued or has failed due to a lack of top-down support for staff roles and reporting structure. However, the integration of some IT team members within the functional business units is more critical than ever for a number of reasons: SaaS, cloud, and putting an IT eye on potential business process and innovation opportunities.

Data and application management in an even more siloed world

The potential risks of SaaS, if it is not managed well, are many, but one of the more obvious is the potential for creating islands of data. The same is true for independent use of cloud resources outside the oversight of IT and/or IT tools. Without strong business partnerships and effective IT management tools, the risks of missed opportunities and lost money are real.

Training and leadership for a newly aligned IT team

The organizational change required might be the biggest omission in the author's article. Besides the "departmental integration" mentioned above, there will need to be programs that help staff let go of "technology allegiance" in order to focus more on "need/solution fulfillment." There must be new roles defined for negotiations of support and services relative to cloud, SaaS, and consumer-oriented IT solutions. IT will also need to develop a plan for how to put governance on the business use of IT, without treating it like "control."

This is only the beginning

What I've covered here is only a small slice of the kinds of changes IT will need to make, but I felt it necessary to point out that the changes do not fall under the category of "do more of the same old stuff." The debate over the future of IT is still raging, and that's a good thing. If it weren't raging, it would be because the business was filling out the CIO's walking papers. There's still time, as I've suggested in previous blogs, but there's no time like the present to get started on making your IT group as agile and adaptive as the business wants to be.

Professionally copy edited by Kestine Thiele (@imkestinamarie)