

Dean Nelson has led $10B in infrastructure projects in 9 countries and is now the Founder and Chairman of Infrastructure Masons.

Speaking with ADAPT’s Director of Strategic Research Matt Boon, Dean gives his predictions of the software-defined data centre future. He also shares how to set the right expectations of 5G and ways infrastructure leaders can enable transformation.

Matt Boon:

Dean Nelson, welcome. Welcome to the fabulous Gold Coast and the ADAPT Connected Cloud and Data Centre Edge.

Dean Nelson:

My very first time here in Australia.

Matt Boon:

Wow, really?

Dean Nelson:

At least on the Gold Coast.

Matt Boon:

Okay, well I hope you’ll be coming back again. I’m sure you will.

Dean Nelson:

Absolutely, this place is beautiful.

Matt Boon:

It’s a lovely place, isn’t it? I wanted to talk a little bit about some of the things that you’re really quite passionate about, Dean, that you’ve been spending a lot of time researching and talking about as well: the concept of software-defined everything, the software-defined data centre. I’ve been looking at this market for many years now, as you have; you can probably tell we’re of similar vintage. Maybe I’m a bit older than you.

We’ve been looking at software-defined networking, software-defined storage, and all of those concepts, and we’ve been talking about a software-defined data centre, at least in my mind, for a long time. But you said that you believe 2020 is the year the software-defined data centre, like, comes of age.

I would love to get your thoughts on why. Why not in the last 10 years? What’s changed this year or the last couple of years that’s really bringing it to the fore today, do you think?

Dean Nelson:

So, the software-defined data centre was defined back in 2009, I think it was. Intel and a bunch of people were trying to figure out how to optimise inside of the data centre. It was defined as managing compute, storage, and network resources with independent management software systems. The difference today is, we are missing a dimension. Compute, storage, and network are software-defined, but facilities are not.

Matt Boon:

Okay, interesting.

Dean Nelson:

So there are four elements here that have to be orchestrated, and so when you’re able to virtualise power, just like you virtualise compute, storage, and the other aspects of it, you can truly, dynamically optimise the environments. Now, the difference today is that we’ve been optimising the facility side with PUE. That’s the ratio of the total power used by the facility to the power used by the compute for actual workload, and it’s decreased rapidly over the last 10 years; it has literally flattened the slope of the curve of power consumption globally. But we’ve already picked that fruit, so now we’ve gotten to the point where data centres are very efficient in how they’re designed and operated.
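The PUE metric Dean describes can be sketched in a few lines. This is an illustrative aside, not from the interview; the example figures (2.0 for an older facility, 1.1 for a modern one) are typical industry numbers, not Dean’s.

```python
# A quick sketch of PUE (Power Usage Effectiveness): total facility
# power divided by the power that actually reaches the IT equipment.
# A value of 1.0 would mean zero facility overhead.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative figures: legacy sites often ran near 2.0, while modern
# hyperscale facilities report around 1.1 (~10% overhead for cooling,
# lighting, power conversion, etc.).
legacy = pue(total_facility_kw=2000, it_load_kw=1000)   # 2.0
modern = pue(total_facility_kw=1100, it_load_kw=1000)   # 1.1
print(legacy, modern)
```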

The problem is that you have allocated capacity, so think of it as: I’m going to do a 10-megawatt block of something. Then you’ve got buffers on that to ensure that you don’t go above those breaker positions, so imagine another 20% on top of that. Then you’ve got the redundant capacity, so when I have a failover, I need to have that capacity there. So in essence, you’ve got at least twice the amount of power that you actually use.

That’s for a 2N data centre. Now, the majority of data centres are not that today. But what you also see is that of the allocated capacity, imagine that 10 megawatts, they’re usually using around 50 to 60% of it consistently. That’s it. So that means I’ve got 50% of the contracted capacity in use and 50% unused, plus the buffer, plus the failover capacity. Now, does all that idle capacity sound familiar? We did that in compute with dedicated equipment, with dedicated storage systems, with dedicated networks, and we realised that wasn’t very efficient. So if we now define power as a pool, and use software to orchestrate that pool at the millisecond level, we can operate it just like any other resource inside a data centre. Power will be virtualised, just like compute, storage, and networking. The critical nature of software-defined is that you can orchestrate all of those elements at once. Now, you asked me why this year?
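The arithmetic Dean walks through can be made concrete. This sketch uses his example numbers: a 10 MW allocated block, a roughly 20% breaker buffer, a 2N redundant copy for failover, and 50-60% sustained utilisation; the function name and structure are illustrative.

```python
# Back-of-envelope stranded-capacity calculation for a colo power block.

def capacity_breakdown(allocated_mw: float,
                       buffer_frac: float = 0.20,   # breaker headroom
                       utilisation: float = 0.55,   # 50-60% sustained draw
                       redundancy: int = 2) -> dict: # 2N failover copy
    provisioned = allocated_mw * (1 + buffer_frac) * redundancy
    used = allocated_mw * utilisation
    return {"provisioned_mw": provisioned,
            "used_mw": used,
            "stranded_mw": provisioned - used}

result = capacity_breakdown(10)
print(result)  # ~24 MW provisioned against ~5.5 MW consistently used
```

The gap between provisioned and consistently used power is the stranded capacity that software-defined orchestration would let a colo resell while still honouring its SLA.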

There’s an economic push. The hyperscalers are the largest consumers of data centre capacity around the world. They’ve built their own facilities and they’ve already done this optimisation at various levels of maturity, meaning they’ve got self-orchestration and virtualisation of power in their own facilities.

But they also leverage all of the colos around the world, and when I say all, I mean all. They need capacity in all these different places for various levels of use. But they can’t get the same capability from the colos. The colos are not software-defined; they don’t have an API that allows that capacity to be controlled at a millisecond level. Also, the colos are defined by SLA. They have a service-level agreement that says, “I’m going to make sure you have 10 megawatts of critical power all the time, right, at least six nines.” They’re not going to compromise on that because there are penalties if they do.

So there’s fear in how it’s going to be managed with software, just like we had fear with compute, storage, and network. It took a decade to get over that. But there’s economic pressure coming into the colos: the cost per kilowatt expected by their largest consumers, the hyperscalers, needs to keep going down. They’re reaching diminishing returns on optimisation, meaning they’ve hit the floor on what a design can achieve without compromising resiliency. So then, the only way to do this is to apply software to get smarter in how you dynamically deploy and manage the power itself, just like the other resources. Does that make sense?

Matt Boon:

It does. So how do you think the colos in particular handle the different customer types? You’ve got the hyperscalers pushing demands down on them, and then, in Australia for example, organisations using colocation facilities who don’t really care much about power and cooling. They say, “No, it’s up to the colo, they can sort it out, we don’t worry about it,” kind of thing. So we’re seeing a much stronger focus on cost and optimisation, yet customers are saying, “We don’t want to worry about it, we want someone else to worry about it.” It’s not really a disconnect, but how do you get the two aligned more effectively?

Dean Nelson:

Well, it comes down to the consumer. So, you take hyperscale. They are doing more than 50% of the actual contracts globally.

Then you’ve got enterprise and retail, so wholesale and retail in the middle of this. Each of them has a different level of expectation and care. When I’m going into retail, I just care about my cost and making sure the thing stays up; everything else is handled as a service. Hyperscalers have really, really figured it out: “Look, it’s very simple. I want pass-through on energy, so no markup, and I want my cost per kilowatt, which is rent, right, for all the capacity that’s deployed, to be at this point.” So the colos are at the point where half of their actual draw is going to be at really low margins. Which is happening already.

Then, it starts to trickle into the other ones.

If they’re able to orchestrate with software, they can unstrand power.

So think about this, all this stranded power, if I could now take the capacity I’ve deployed, the capital I’ve invested, and be able to now take a piece of that and resell it, yet still maintain my SLAs, my margins go up.

And there’s a very simple economic thing in this one. They can’t keep going down in the race to zero because they won’t be able to justify actually serving the hyperscalers, which means the hyperscalers will continue to build out their own facilities. But the problem with the hyperscaler is, they are still trying to do every country in the world, and deploying new regions, and new zones, and new capacity at the edge, and they can’t do it all by themselves. They just can’t get to market that fast. They have to leverage all of these other players that have built up things, including Australia.

Right, that they can leverage. But they expect the same level of sophistication and capability in a colo that they have on their own. If not, right, they’re not getting their metrics met. So if that’s opened up at the colo, it breaks open that market: the colos can hit the price points at which the hyperscalers will continue to invest more, yet increase their margins overall across all their customers. I mean, it’s just economics at that point.

So to me, that’s why this year is when software-defined power, which enables the software-defined data centre, will become a reality.

Matt Boon:

Okay, excellent, that makes sense. And it’s interesting to think, so when we look at this even though we have different concepts about software-defined power, we have hyperscale, we have cloud, all this stuff.

Essentially a lot of companies are doing things the way they’ve always done them, right? So to get to this software-defined level, clearly we need to make some changes in how we approach technology, and so on. Then that brings in the whole concept of edge. With edge computing, some pundits are saying it’s going to be bigger than the cloud, it’s kind of going to take over the world. So how do we manage this drive towards edge computing in a software-defined world? Are there different things we need to think about? Is it just another layer or component we orchestrate within that whole framework?

Dean Nelson:

Well, let’s start at the macro level. I believe that edge is going to surpass core eventually; it just comes down to when, 5-10 years from now. I believe it because we have never had the amount of bandwidth, latency reduction, and concurrently connected devices that are going to be enabled by 5G.

So that’s going to open the floodgates, and I’ve been talking about the data tsunami coming.

IDC predicted that we’re going to have 175 zettabytes of data generated every year by 2025. To put that in perspective, people are saying we’ve got about 50 zettabytes today.
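Those two figures give the growth multiple Dean refers to a moment later; a trivial sketch of the arithmetic:

```python
# IDC figures quoted in the interview: ~50 ZB of data today versus a
# projected 175 ZB generated per year by 2025.
today_zb, projected_zb = 50, 175
growth = projected_zb / today_zb
print(f"{growth:.1f}x growth")  # 3.5x growth
```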

Matt Boon:

It’s a big number.

Dean Nelson:

So we’ve got this massive three-and-a-half-times increase in actual capacity coming, and I think that’s actually low. That means people are going to generate and consume more data, but they’re also going to have expectations about performance. The concept here is, when 5G opens up the capability, you’re going to have gaming, medical, internet of things, smart cities, all of those truly relying on services in the places where people are, the concentrated places, these cities, which means they’re going to expect latencies that we don’t deliver today. When you open up your phone and do something, it goes back through a network and hits a core data centre somewhere in a region that Google, Amazon, or somebody else is serving. But that’s in the hundreds of milliseconds. When I’m doing a fully immersive experience with Oculus glasses or the next thing, I need to do 240 frames a second, and if I don’t, my equilibrium gets thrown off and I throw up.
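The 240-frames-a-second point implies a per-frame time budget that a round trip to a core data centre cannot meet. A back-of-envelope sketch, using 200 ms as a stand-in for the “hundreds of milliseconds” round trip Dean mentions:

```python
# Per-frame latency budget for a 240 fps immersive experience, versus
# an assumed 200 ms round trip to a core regional data centre.
FPS = 240
frame_budget_ms = 1000 / FPS            # time available per frame
core_round_trip_ms = 200                # illustrative core-region latency
frames_spanned = core_round_trip_ms / frame_budget_ms

print(f"Frame budget: {frame_budget_ms:.2f} ms")
print(f"A {core_round_trip_ms} ms round trip spans ~{frames_spanned:.0f} frames")
```

At roughly 4 ms per frame, a single core-region round trip would span dozens of frames, which is why these workloads pull compute toward the edge.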

So gaming is massive, that’s going to be the leading edge of it. But so are all the orchestration of smart cities, and autonomous vehicles, and all those things that are going to require lower and lower latency. So for me, the demand is coming that we’re going to have this capacity at closer and closer to the edge. Now, people have predicted that the cloud is going to eat the data centre. Now they’re saying that edge is going to eat the core.

The problem here is that people are looking at this as ethereal. It doesn’t just go to the cloud, it goes to an infrastructure deployment somewhere. So we have the capacity that continues to grow. Cloud, enterprise, on-prem, all those elements, we’re going to have complementary capacity added at the edge. So, to me, the issue is, we have got to prepare for a massive amount of data growth coming up here in the next 5-10 years, a massive amount of expansion in infrastructure, because guess what? Consumers are going to demand it.

We all are completely addicted to our phones, and as experiences get better and better, there’s going to be more and more requirement for higher and faster performance.

So, now, to go back to your original question, that starts to push things out to the edge. Now, how do we manage these multipliers of units all across the edge? You can’t just take a data centre that you have today, and a rack deployment, a modular piece, and just push it out to the edge.

You have to rethink how it’s going to be done. So, will there be these modular ones with towers? Absolutely. Will there be ones that are going to be in server rooms and closets in our buildings? Absolutely. But you also have to think about this micro-edge. The deployments are going to be all the way back at the last thousand feet. So you’ve got places like this, or apartment buildings, etc. You’re going to have lots and lots of needs. Now, that is a security vector issue like no other.

Matt Boon:

I was going to come to security. That’s right.

Dean Nelson:

Okay. But how do you orchestrate that? We’re going to have more devices, like, by a factor of a hundred, out there in the world, that are serving things that are security holes, but now you don’t have the people to manage them as you do in highly concentrated data centres.

So software-defined everything is going to be critical for every edge deployment. You have to assume it’s lights out, you have to assume that capacity is going to be there and be managed, and you have to make sure you’re going to be able to scale up by those factors. Does that make sense?

Matt Boon:

Absolutely. And then when you think about the whole customer or human experience that the edge is going to drive, and 5G’s going to drive, with all the devices and so on that we’re using, clearly, when you look at different geographies, even within countries, the access to certain infrastructure can vary dramatically, here in Australia or in the States as well. Speaking of 5G, I was at a round table in Auckland, New Zealand last year, and one of the guys said, “I haven’t even got 4G yet.” So how do you balance that in terms of setting expectations for the consumer, particularly when we can’t necessarily guarantee they’re going to get that same experience everywhere? Do you have any thoughts on that? Because it’s a bit of a tricky one.

Dean Nelson:

It is. I do think that each city is going to be at a different pace. Think of the highly concentrated cities, the New Yorks and Sydneys, those kinds of places. They’re going to have 5G faster because there’s just a higher concentration of people and higher demand. It’s going to keep spreading out to the smaller cities and rural areas over time. But in the next decade, the amount of capacity being deployed is going to be staggering.

Because remember, we need 10 times the number of towers to serve this, yet you’re going to get 100 times more bandwidth, and then we’re going to have this appetite from all of those users that will need to be satiated, right? They have to be able to serve it, so the demand will continue to go there. I was just in India, right? I’ll give you an example. The entire data centre portfolio for all of India is 400 megawatts total. For all of India, with 1.3 billion people. Seven million square feet. That’s smaller than AT&T’s portfolio.

Matt Boon:

That’s pretty amazing, isn’t it?

Dean Nelson:

So, do they have an infrastructure challenge? Do they have fibre? Yes, a tonne of it. Do they have power? Yes, though it’s challenged in different areas. But do they have the connectivity and the ability to manage it as they roll out? There’s going to be a massive flood of demand, and it’s going to be harder for them to scale up, but they will, as will South Africa and all of LATAM; these emerging markets are going to grow pretty quickly.

Matt Boon:

And they have leapfrogged, even leapfrogged some of the mature markets as well, in some of those things.

Dean Nelson:

And that goes back to software-defined. If they were able to adopt that at the beginning, for both the core data centres, in how they build them and how they attract hyperscalers, they could be getting contracts sooner than, say, a traditional market like the US, partly because of the saturation in the US. And that includes the edge.

As they deploy it, if they have a software-defined mentality at the beginning, they’ll have a competitive advantage, and they could accelerate.

Matt Boon:

Okay, so they’re leapfrogging some of the old mindsets that we’ve kind of embedded in lots of the other parts of the world.

Dean Nelson:

Like China, right? They’ve just basically jumped right ahead. Everything is paid through WeChat and all the different applications they have. They don’t even worry about the other traditional credit card apps.

Matt Boon:

No, exactly, that’s all gone. Yup, all gone, exactly. I’m interested also, when you think about what organisations need to do globally but also in the local marketplace, and ADAPT has this framework, if you like, that we call the 12 Core Competencies for success.

What business leaders, technology leaders, need to be really skilled in and able to actively, sort of, operate in those areas, and one of the things, as we talk about these areas about software-defined, and edge, and so on, emerging technologies, is how can we, sort of, enable innovation and transformation, right? It’s one of those sorts of core competencies we believe that leaders of today really do need.

You’re a leader, you’ve been doing this for some time, you’ve led organisations, and I think the number is you’ve been responsible for about 10 billion dollars’ worth of technology investment. Big number, but I do understand it.

Dean Nelson:

It’s nice spending someone else’s money.

Matt Boon:

Exactly, it’s always good fun. So I’m interested, from your point of view, what do innovation and transformation mean to you? Not just as a concept, but as a leader of technology, a data centre leader, a cloud leader, or a CIO? What do you think they need to be thinking about or looking at when they consider innovation and transformation?

Dean Nelson:

I think as a leader you really need to check yourself. It’s very easy to follow the same patterns, “This has worked for me before, I’ve hired these people, I’ve deployed this technology, I’ve followed this architecture”, and just go do it again. It’s a proven track record.

What you probably should be thinking about, and where I think some of the best innovation that I’ve had in my career, is when I’ve stepped back, and wiped the slate clean, and I’ve given my engineering team, some of my leaders, the challenge: “I’d like you to go think about this. Could we do this differently?” And you have the time and the space to do it. Giving them the capability to truly engineer and rethink is just amazing, the stuff that comes out of that. I mean, transformational type changes. We had the very first data centre with primary power from fuel cells in Utah when I was working at eBay.

Same concept. Why? Because fuel cells were emerging and we’d been doing the same things the same way for the last two decades. Can we actually generate power and consume it a hundred feet away? Yes. And they did that, and it was just amazing what came out of it, the reliability factors and all the engineering. But it really made those people think. I think it’s the same for software, it’s the same for hardware, it’s the same for the network.

You have to give your people space to actually think differently than they do today. And it starts with you thinking differently.
