Because you only have, at the end of the day, a finite number of resources. Now, what does that mean in real terms? Well, maybe your web server can handle 100 users per minute. Maybe it can handle 1,000 users per minute.
Maybe it can handle 1,000 users per second, or even much more than that. It really depends on the specifications of your hardware– how much RAM, how much CPU and so forth that you actually have– and it also depends, to some extent, on how well-written your code is and how fast or how slow your code, your software actually runs.
So these are knobs that can ultimately be turned. And through testing, you can figure this out in advance by simulating traffic in order to estimate how many users you might be able to handle at a time.
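The simulation idea above could be sketched in just a few lines. Here, the request handler is a stand-in, a made-up 1-millisecond cost rather than any real server code, but the measurement loop is the same shape a real load test would take:

```python
import time

def handle_request() -> None:
    # Hypothetical handler: pretend each request costs roughly 1 ms of work.
    time.sleep(0.001)

def measure_throughput(duration_s: float = 0.5) -> float:
    """Fire requests back-to-back for a while and count completions per second."""
    done = 0
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        handle_request()
        done += 1
    return done / duration_s

print(measure_throughput())
```

Against a real server you would fire actual HTTP requests, ideally concurrently, but the principle is the same: hammer it, count what completes, and that count per second is your capacity estimate.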
Now, the relevance to today is that the cloud, so to speak, allows us to start to solve some of these problems and also allows us to start abstracting away the solutions to some of these problems.
Well, let’s see what this actually means. So at some point or other– especially when it’s not just my laptop, but it’s like 1,000 laptops, or 10,000 laptops and desktops and phones and more that are somehow trying to access my server here– at some point, we hit that upper limit whereby no more users can fit onto my web site per unit of time.
So what is the symptom that my users experience at that point if I’m over capacity? Well, they might see an error message of some sort. They might just experience a spinning icon because the website is super slow to respond. And maybe it does respond, but maybe it’s 10 seconds later.
So at the end of the day, they either have a bad experience or no experience whatsoever, because my server can only handle so many requests at a time. So what do you do to solve this problem?
If one server is not enough, maybe the most intuitive solution is, well, if one server is not giving me enough headroom, why don’t I just have two servers? So let’s go ahead and do that. Instead of having just one server, let’s go ahead and have two.
And let me propose that on the second server, it’s the exact same software. So whatever code I’ve written, in whatever language it’s written, I just have copies of my web site on both the original server and the second server.
Now I’ve solved the problem in the simple sense that I’ve doubled my capacity. If one server can handle 1,000 people per second, well, then surely two servers can handle 2,000 people per second, so I’ve doubled my capacity.
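To put numbers on that intuition, here's a back-of-the-envelope sketch. The 1-millisecond-per-request figure is just an assumption; plug in whatever your own measurements say:

```python
# Back-of-the-envelope capacity math. The per-request service time is a
# made-up figure, not a measurement of any particular server.
def estimated_capacity(service_time_s: float, num_servers: int = 1) -> float:
    """Rough requests per second a deployment can sustain."""
    return num_servers / service_time_s

print(estimated_capacity(0.001))     # 1000.0 -- one server, ~1,000 req/s
print(estimated_capacity(0.001, 2))  # 2000.0 -- two servers double it
```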
So that’s good. I’ve hopefully solved the problem. But it’s not quite as simple as that. At least pictorially, I’m still pointing at just one of those servers, so we’re going to have to clean up this picture and somehow figure out how to get users– or more generally, traffic– to both of these servers. I could just naively draw an arrow like this.
But what does that actually mean? We don’t want to abstract away so much of the detail that we’re ignoring this problem.
How do we implement this notion of choosing between left arrow and right arrow? Well, let’s consider what our solutions might be.
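One of the simplest answers to that question is round robin: just alternate, sending each new request to the next server in rotation. A minimal sketch, with two hypothetical backend names standing in for real servers:

```python
from itertools import cycle

# Two hypothetical backends; a real deployment would use actual addresses.
servers = ["server-a", "server-b"]
next_server = cycle(servers)

def route_request() -> str:
    """Send each incoming request to the next server in rotation."""
    return next(next_server)

# Requests simply alternate between the two backends:
print([route_request() for _ in range(4)])  # ['server-a', 'server-b', 'server-a', 'server-b']
```

Real load balancers are smarter than this, weighing in server health and current load, but round robin is the conceptual starting point.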
If a user, like me on my laptop, is trying to visit this web site– and the web site, ideally, is going to live at something like example.com, or facebook.com, or gmail.com, or whatever– I don’t want to have to broadcast different names for my servers. And you might actually notice this on the internet.
You might notice, if you start paying attention to the URLs of websites you’re visiting– especially for certain older, stodgier companies who haven’t necessarily implemented this in the most modern way– that you don’t just end up at www.something.com. If you look closely, you might find yourself occasionally at www1.something.com, www2.something.com, or even www13.something.com.
Which is to say that some companies appear to solve this problem by just giving different names– similar names, but different names– to their two servers, three servers, 13 servers, or however many they have.
And then they somehow redirect users from their main domain name, www.something.com, to any one of those two or three or 13 servers. But this isn’t very elegant. The marketing folks would surely hate this, because you’re trying to build some brand recognition around your URL.
Why would you dirty it by putting these arbitrary numbers in the URLs? Plus, fast forward a bit in this story: if, for some reason down the road, you get fancier, bigger servers that can handle more users, and therefore you don’t need 13 of them– you can get away with just six– well, what happens if some of your customers have bookmarked, very reasonably, one of those older names, like www13.something.com? Those bookmarks would now point at a server that no longer exists.
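For what it's worth, the old-school redirect trick described above amounts to very little code. The server names here are hypothetical, and a real implementation would send an actual HTTP 302 response; this just builds the Location header value:

```python
import random

# Hypothetical numbered mirrors behind the main domain.
SERVERS = ["www1.something.com", "www2.something.com", "www13.something.com"]

def redirect_location(path: str = "/") -> str:
    """Pick a mirror at random and build the Location value for a 302 redirect."""
    host = random.choice(SERVERS)
    return f"https://{host}{path}"

print(redirect_location("/index.html"))
```

And this sketch makes the drawback concrete: the moment www13 is retired, every URL it ever handed out, bookmarked or otherwise, goes dead.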