Frederico Carvalho, Intel

A few weeks ago, the 2nd Huawei Cloud Congress West Africa, organised by Huawei and Intel, was held at Oriental Hotel, Lagos. Themed “Transforming with Cloud, Setting New Benchmark,” the event featured representatives from telecoms, finance, government and energy organisations across West Africa.

TechCabal was able to corner Intel’s Regional Business Director for Middle East, Turkey and Africa, Frederico Carvalho, for a brief chat on exactly what cloud is and what Intel’s Cloud for All has in store for Nigeria.


Cloud as a concept is still vague from a general point of view. For the audience, cloud is really “cloudy.” How would you explain cloud technology to a five-year-old?

I think cloud is much easier than it sounds. It has two dimensions. The first dimension is more of the technological dimension, which is about making something available as a service. For example, if I need a car today, I go and buy a car, then the car will sit in the parking lot till I use it again – very efficient. Now, if a group of friends need a boat, instead of one person buying a boat alone, they buy it together and whoever needs the boat takes the boat. Now, imagine that to book the boat, you don’t have to drive somewhere and talk to someone or get it to a certain place. You just go online and book the boat. That’s what cloud is all about. It’s automating and making self-service possible for a service that would otherwise require a lot of configuration.

Going back to the car, if it stays in parking for a period, all that time is wasted money – it’s a resource that is wasted. That’s what traditional data centres were. You bought a server, and if you’re not using the application, the server’s idle. Nobody is using it. As you move servers into a virtualised environment, what you’re doing is allowing different applications to use the same server as virtual machines. We basically create virtual instances in the same box. So, you’re maximising the use of the box.
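As a rough illustration of that waste, here is a back-of-the-envelope calculation, using made-up utilisation figures rather than numbers from the interview, showing how consolidating lightly loaded applications onto virtual machines shrinks the server count:

```python
import math

# Illustrative (made-up) figures: ten applications that each keep a dedicated
# server about 5% busy, versus packing them as virtual machines onto shared
# hosts run at a comfortable 60% load.
apps = 10
avg_utilisation = 0.05      # assumed load each app puts on its own server
target_utilisation = 0.60   # assumed load we are happy to run a shared host at

dedicated_servers = apps    # traditional model: one physical box per application
virtualised_servers = math.ceil(apps * avg_utilisation / target_utilisation)

print(f"Dedicated servers needed:   {dedicated_servers}")
print(f"Virtualised servers needed: {virtualised_servers}")
print(f"Idle boxes eliminated:      {dedicated_servers - virtualised_servers}")
```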

Now, creating those virtual machines was not an easy task; it required a lot of coding and a lot of hands-on configuration. It could take months until you could spin up a new service on a virtual machine. Now, with cloud, that takes a few clicks and sometimes a credit card. So that’s what cloud is all about from a technical perspective.

For the majority of people, cloud means “something that I don’t know where it is”. That’s what it means, and the reason that thought started building in people’s minds was the rise of the public cloud, or services offered in the cloud. You don’t necessarily know which server in which place is hosting your virtual machine. So when you go on to Amazon Web Services with your credit card and you create a virtual machine there, you don’t really know which Amazon data centre or exact server it’s sitting on. It could be distributed among different servers around the world, depending on how they orchestrate and combine the resources to give you that. As a result, the concept of cloud became ‘this thing that is somewhere, I don’t know where it is’. And that’s actually the reality in the public cloud.
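To make “a few clicks and sometimes a credit card” concrete: launching a virtual machine on a public cloud is essentially a single API call. The sketch below uses the AWS boto3 Python SDK; the region, AMI ID and key pair name are placeholders, not values from the interview.

```python
# Minimal sketch of creating a virtual machine on AWS EC2 with boto3.
# The region, AMI ID and key pair below are placeholders to be replaced
# with values from your own account.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",
    KeyName="my-key-pair",            # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)

print("Launched instance:", response["Instances"][0]["InstanceId"])
```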

In the private cloud – the cloud that you own in your environment, either on premise or somewhere else – even for regulatory purposes, you need to know where your data is. You need to make sure that you can have control over the physical location of your virtual machines. And so, one of the things I was explaining earlier today is one of the technologies that we’re building into our products. It’s a technology that allows you to know that a physical server was not compromised or hacked when it first booted, and that the data inside those virtual machines is actually on those virtual machines and not on a fake virtual machine. We call it Trusted Execution Technology, and it allows you to do clever things like that to make sure that you have control over where your virtual machine is and that the servers were not compromised.
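Carvalho is referring to Intel Trusted Execution Technology. As a loose illustration of the underlying idea of measuring the boot chain and comparing it against a known-good baseline, here is a sketch that reads TPM platform configuration registers (PCRs) on a Linux host; the sysfs path and the “golden” values are assumptions for illustration, not Intel’s actual attestation flow.

```python
# Loose illustration of measured boot: compare the TPM's boot-time measurements
# (platform configuration registers) against known-good "golden" values taken
# from a trusted baseline boot. Assumes a Linux kernel that exposes PCRs under
# /sys/class/tpm/tpm0/pcr-sha256/; this is not Intel's actual TXT attestation flow.
from pathlib import Path

PCR_DIR = Path("/sys/class/tpm/tpm0/pcr-sha256")

# Placeholder golden values; real ones are recorded from a known-good boot.
GOLDEN_PCRS = {
    0: "replace-with-known-good-firmware-measurement",
    7: "replace-with-known-good-secure-boot-policy-measurement",
}

def boot_is_trusted() -> bool:
    trusted = True
    for index, expected in GOLDEN_PCRS.items():
        measured = (PCR_DIR / str(index)).read_text().strip().lower()
        if measured != expected:
            print(f"PCR {index} mismatch: boot chain may be compromised")
            trusted = False
    return trusted

if __name__ == "__main__":
    print("Measurements match baseline" if boot_is_trusted() else "Attestation failed")
```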

That’s beyond most people’s understanding, but one of the interesting things about cloud is that most of the public cloud is consumed as a software service, not as an infrastructure service. And most of the cloud business today is actually consumer cloud business, not enterprise cloud business. So if you look at all the cloud infrastructure and all the services that exist – Dropbox, Facebook, Gmail, Office 365 – all of those are cloud services, but consumed by people for their personal use, not for professional use. Only now are we starting to see the likes of salesforce.com and applications like that offer these cloud services, and we’re starting to see companies using cloud infrastructure as a service. But the majority today is still consumer driven.

So pretty much anyone who’s interacted with a web service or a computer has used the cloud without knowing it.

You’re using a service on a cloud infrastructure because it is automated, self-service, and you don’t really know which infrastructure is actually supporting that service. So whenever you use an application like Gmail, you’re using a cloud service.

You launched your global cloud initiative – Cloud for All – last year. What’s the trajectory?

The concept of cloud was really created by Google and Amazon, companies that have had to grow infrastructure quickly in an almost automated way. They have hundreds of thousands of servers and they add tens of thousands of servers every week. So they had to find a way to configure these servers without having lots of people. They created a very clever operating system that allows them to do that. Although it’s a very clever operating system, they still need a lot of people to make that work.

Cloud as an architecture is not really within the reach of every single company today. There are still a lot of complexities that need to be solved by technical people – by IT people – to create a cloud environment inside the company. And so cloud was not for all: unless you had the right skills to put all the pieces together to create a cloud environment, you would struggle to get one. That’s why you don’t see cloud adoption in the private sector happening as fast as in the public cloud. So what we’ve decided to do is create an initiative that has basically three aspects and is meant to help you as a company – as an institution – to put a cloud infrastructure that works in place in less than 24 hours. That’s the aim. We’re not there yet but we’re working towards it.

It requires three things. The first one is, we need to invest in getting the software pieces to be better. A lot of it is open source, so a big part of that is collaborating with the open source community to solve the bugs. Intel is one of the biggest contributors to solving bugs in OpenStack in the world. So we spend a lot of time and resources doing that – making the products better.

The second one is to make sure that whatever piece of software exists to create your cloud environment works optimally with the hardware. That means the cloud operating systems – OpenStack, Microsoft Azure, VMware – need to be able to see what hardware is underneath so that they can combine the best pieces to create your virtual machine. So we invest a lot in making sure that, first, our technology allows your operating system to see it, and that it’s leveraging those technologies that we put in the hardware.

The third piece is, you need recipe books that tell you, “if you want to create a private cloud with this type of functionality and these features, and you put all the pieces together in this way, you’ll get it right, because we’ve tested it and we’ve made sure that it works. And you have all these options.” We’ve been working with the OpenStack community, and with VMware, to create one as well. So if you want to put something together, as a system integrator or as an end user that has an IT department, you can just take this recipe book and you know it will work. That’s Cloud for All in a nutshell.

Obviously it will work for people that have fairly generic cases. But as the recipes multiply, they’ll begin to cater to more specific contexts.

The cloud architecture can support any type of application or workload. It’s the architecture itself that you need to create; on top of that, you start installing your applications. Creating that foundation is what the recipe books are all about.

In all of your travels within Africa, questions of security and access have come to the fore, and localisation is now a huge topic. We’re actually beginning to see local cloud companies spring up. In Kenya, we know there are a couple of those, and we’ve seen the likes of Main One begin to build their own data centres. Local companies are beginning to talk about hosting their data locally, and developers feel that if they can host their data locally, they’ll be able to deliver faster services to consumers. What is Intel’s stance on all of these developments?

Making sure that your data is safe is a foundational piece of any IT, and cloud is no different. So making sure that we give you the ability to know that you can trust where your data is, that your data is encrypted and protected, and that you control how people access your data is critical.

Now, that requires you to do things in a certain way. The first thing is that you need to make sure that you’re enabling safety and security at the hardware level. Software alone is more fragile, so that’s one of the areas where we invest substantially: making sure we build functionalities that help you accelerate how you encrypt and decrypt information in a way that doesn’t slow down your whole system. We give you the ability to know that when your server was booted, it was not compromised – that the firmware is not compromised, that the operating system was not compromised – so we can trust that server. And then we give you the ability to know where your virtual machines go or don’t go. Those things are absolutely critical, so we enable them at the hardware level. Then we work with the ecosystem to make sure that the software leverages those functionalities.
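One concrete example of that hardware-level support is the AES-NI instruction set Intel processors expose for fast encryption and decryption. A small sketch, assuming a Linux host, of checking whether the CPU advertises it:

```python
# Check (Linux only) whether this CPU advertises AES-NI, the instructions that
# accelerate encryption and decryption in hardware, plus the RDRAND hardware
# random number generator.
def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AES-NI available:", "aes" in flags)
print("RDRAND available:", "rdrand" in flags)
```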

The second aspect is making sure that you comply with regulations and auditing. Governments have rules that companies need to comply with, so making sure that you can set policies to comply with those rules is critical. By leveraging the same technologies, we can ensure that. We can do clever things like allowing you to store encryption keys at the hardware level – not in the software. We have a functionality called TPM – Trusted Platform Module – that allows you to store encryption keys at the hardware level. Obviously the software needs to use those functionalities, but we make them available at the hardware level. We are kind of at the foundation of security, trust and compliance in the data centre at the moment.

Are you working with any local partners?

We work with all the manufacturers, we work with the whole software ecosystem – anyone that you can think of, we probably work with them. Huawei is a very good example. We’ve worked with HP, Dell, Fujitsu, Ericsson – I’m probably forgetting some of them, and they may not be happy about that, but we work with all the hardware vendors. We work with the whole software ecosystem in the cloud specifically. We’ve worked with the open source community, with OpenStack and all the open source products around the cloud. We work very closely with VMware, with Microsoft and other leading OS vendors.

Who would be the ideal customer archetype for Intel’s cloud offerings?

Cloud will, in the future, be the standard architecture in all organisations. It’s just much more efficient. So it’s more a question of “how” and “when”, not a question of “if”. All organisations – governmental organisations, banks, manufacturers, telecom operators – will deploy cloud, either for themselves or as a service. They are probably virtualising at the moment, but on top of virtualisation, they will want to make all the infrastructure software-defined and automated self-service as much as possible.

So all organisations that have a data centre, or have a data centre need, will be using cloud. The question is how much of that cloud is going to be private and how much is going to be public, or how much is going to be on their premises and how much off their premises.

Speaking as a total layman, at what scale do you think businesses or companies get to before they can say, “we need to deploy cloud”? Because if you are a sole proprietor and all you’re doing is accounting…

If you are a sole proprietor and you need an accounting package, and you have okay connectivity, you probably want the accounting package to be offered to you as a service. You don’t want to have to build a data centre to install your accounting package and then have to run that data centre properly, make sure it’s safe, secure, trusted, compliant – that’s cost. So if I’m a supplier of an accounting or invoicing package, I would just host it with one of the local cloud service providers and offer it to you as a service. You don’t have to run it, you just have to use it.

So smaller businesses are actually the ones moving faster to the cloud, because the benefits of cloud versus having to build your own infrastructure are very clear. Cloud requires good connectivity, and depending on the service and the connectivity you have, you’ll choose public cloud, or you’ll be forced to have your own data centre if the connectivity is not great. So connectivity is a big deciding factor.

Do you have any thoughts about where connectivity is going, especially in emerging markets?

The biggest consideration around connectivity is the fact that wireless connectivity relies on the amount of spectrum that you can use. Spectrum is the limiting factor. And in countries where your fixed infrastructure, either copper or fibre, is not fully deployed, you are very reliant on the amount of spectrum that you have available. So in countries like Nigeria, where you don’t have fibre to all the companies and homes – in comparison, for instance, to a place like Dubai, where you have fibre to literally every home and every company – connectivity becomes a limiting factor for the adoption of technology.

The other dimension to that is that when we envision a world in which everything is smart and connected, that will only happen if you have enough bandwidth for all things to be smart and connected. And so, one of the big things that needs to evolve is, first, our physical connectivity – our cable connectivity – either with copper or, more commonly nowadays, with fibre.

Developing the fibre networks is absolutely critical for any country that hasn’t done so. The second aspect is the evolution of 3G and LTE to a new wireless technology that allows two things. The first is better utilisation of the existing spectrum. For instance, to make a phone call, you need a certain amount of bandwidth. So if I am a provider of phone service, I need to break my spectrum into chunks that allow enough data so that the voice can go through. But if I’m a temperature probe on top of a mast measuring the temperature every hour in Lagos, I send a few bytes once in a while. The amount of connectivity that I need is very, very small. So being able to divide the spectrum in different ways is critical.
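To put rough numbers on that contrast, with illustrative figures rather than ones Carvalho gave: a voice call needs a steady stream of kilobits per second, while an hourly sensor reading averages a tiny fraction of a bit per second.

```python
# Rough comparison (illustrative numbers, not from the interview) of the
# bandwidth a voice call needs versus an hourly temperature probe.
voice_bps = 12_200                 # assumed AMR narrowband voice codec bit rate
probe_bits_per_report = 16 * 8     # assumed ~16 bytes of payload per reading
seconds_per_report = 3600          # one reading per hour

probe_avg_bps = probe_bits_per_report / seconds_per_report

print(f"Voice call:        {voice_bps:,} bits/s sustained")
print(f"Temperature probe: {probe_avg_bps:.3f} bits/s on average")
print(f"The call needs roughly {voice_bps / probe_avg_bps:,.0f} times more capacity")
```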

The second aspect is being able to use the best spectrum at any point in time. Being able to hand over seamlessly between Bluetooth, WiFi, LTE or 5G will be critical. So the evolution of connectivity in any country – and Nigeria will not be different from others – will happen on one side by improving the physical connectivity, and on the other side by evolving wireless connectivity into smarter connectivity and smarter devices that can use the best connection at every moment.

Tola Agunbiade
