So everything stays nicely in sync. But what’s the first problem that motivated the entirety of this discussion from the outset?

Well, what if one database isn’t really enough? Well, we could take the approach of vertically scaling our architecture, which is another piece of jargon in this space.

So vertical scaling means that if your one database isn’t quite up to snuff, and you’re running low on disk space, or your capacity is limited because the number of requests per second it can handle is, of course, finite, you know what you can do? You can go ahead and disconnect this one, put in a bigger one, and therefore increase your capacity.

And vertical scaling really means paying more money to get something higher end: a more premium, more expensive model that’s got more disk space and more RAM and a faster CPU or more CPUs.

So you just throw hardware at the problem– not in the sense of multiple servers, but just one bigger and better server. But what are the challenges here? Well, if you’ve ever bought a home computer, odds are whether it’s been on Dell’s site or Microsoft’s or Apple’s or the like, you often have this good, better, best thing where, for the top of the line laptop or desktop, you’re going to be paying through the roof– through the nose, so to speak.

You’re going to be paying a premium for that top of the line model. But you might actually be able to save a decent number of dollars by going for the second best or the third best, because the marginal gains of each additional dollar really aren’t all that much.

Because for marketing reasons, they know that there might be some people out there who will always pay top dollar for the fastest one. But just because you’re paying twice as much doesn’t mean the laptop is going to be twice as good, for instance.

So this is to say that to vertically scale your database, you might end up paying through the nose for some very expensive hardware just to eke out some more performance.

But that’s not even the biggest problem. The most fundamental problem is that, at the end of the day, even the top-of-the-line server for your database can only support a finite number of database connections at a time, or a finite number of reads or writes, so to speak, saving to and reading from the database.

So at some point or other, it doesn’t matter how much money you have or how willing you are to throw hardware at the problem.

There may simply exist no server that can handle the number of users you currently have. So at some point, you actually have to put away your wallet, rely on the engineering hat alone, and figure out how not to vertically scale, but to horizontally scale your architecture.

And by this, I mean actually introducing not just one big, fancy server, but two or more maybe smaller, cheaper servers. In fact, one of the things that companies like Google were especially good at early on was using off-the-shelf, inexpensive hardware and building supercomputers out of them, but much more economically than they might have had they gone top of the line everywhere, even though that would mean fewer servers.

Better to get more, cheaper servers and somehow figure out how to interconnect them and write the software that lets them all be useful simultaneously, so that we can instead have a picture that looks a bit more like this, with maybe a pair of databases in the picture now. Of course, we’ve now created that same problem that we had earlier about where the data goes.

Where do the traffic and the users flow, especially now that we have one database on the left and one on the right? So there are a couple of solutions here, but some different problems arise with databases.

If we very simply put a load balancer in here, LB, and route traffic uniformly– say, to the left or to the right– that’s probably not the best thing. Because then you’re going to end up with a world where you’re saving some of a user’s data on one database and some of it on the other just by chance, because you’re using round robin, so to speak, or just some probabilistic heuristic where some of the traffic goes this way, some of the traffic goes that way.
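To make that concrete, here’s a minimal sketch in Python of purely round-robin routing; the database names are made up for illustration, and a real load balancer would do this in its own software or hardware, not in your application code.

```python
# A minimal sketch of round-robin routing across two hypothetical databases,
# illustrating why one user's writes can land on different databases
# just by taking turns.
from itertools import cycle

databases = cycle(["db_left", "db_right"])  # hypothetical database names

def route_round_robin() -> str:
    """Send each incoming request to the next database in turn."""
    return next(databases)

# The same user making two requests can hit two different databases,
# so their data ends up scattered between them.
for request in ["user A's first write", "user A's second write"]:
    print(request, "->", route_round_robin())
```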

And that’s not so good. OK. But we could solve that by somehow making sure that if this user, User A, visits my web site, I should always send him or her to the same database.

And you can do this in a couple of ways. You can enforce some notion of stickiness, so to speak, whereby you somehow notice that, oh, this is User A. We’ve seen him or her before. Let’s make sure we send him to this database on the left and not the one on the right. Or you can more formally use a process known as sharding.
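To illustrate one way that stickiness could work, here’s a minimal sketch in Python that hashes a user’s identifier so the same user is always routed to the same database; the database names and the use of MD5 here are illustrative assumptions, not any particular product’s mechanism.

```python
# A minimal sketch of "stickiness": map each user to the same database every
# time by hashing their identifier deterministically.
import hashlib

DATABASES = ["db_left", "db_right"]  # hypothetical pair of databases

def pick_database(user_id: str) -> str:
    """Hash the user ID and use the result to pick one database consistently."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return DATABASES[int(digest, 16) % len(DATABASES)]

# User A lands on the same database visit after visit; other users may land elsewhere.
assert pick_database("user_a") == pick_database("user_a")
print(pick_database("user_a"), pick_database("user_b"))
```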

In fact, this is very common early on in databases, and even in websites like Facebook, where you have so many users that you need to start splitting them across multiple databases. But gosh, how to do that?

Back in the earliest days of Facebook, what they might have done was put all Harvard users on one database, all MIT users on another, all BU users on another, and so forth. Because Facebook, as you may recall, started scaling out initially to disparate schools.

That was a wonderful opportunity to shard their data by putting similar users in their respective databases.

And at the time, I think you couldn’t even be friends with people at other schools, at least very early on, because those databases, presumably, were independent, or certainly could have been, topologically.

Or you might do something simpler that doesn’t create isolation problems like that. Maybe all of your users whose last names start with A go on one server, and all of your users whose last names start with B go on another server, and so forth. So you can almost hash your users, to borrow terminology from hash tables, and decide where to put their data. Of course, that does not help with backups or redundancy.
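As a concrete sketch of that last-name idea, here’s a short Python function that picks a shard based on the first letter of a last name; the two shard names and the A-to-M split are assumptions made up for illustration.

```python
# A minimal sketch of sharding users by the first letter of their last name.
def shard_for(last_name: str) -> str:
    """Pick a shard based on the first letter of the user's last name."""
    first_letter = last_name.strip().upper()[0]
    # Hypothetical split: A through M on one server, N through Z on another.
    return "shard_a_to_m" if first_letter <= "M" else "shard_n_to_z"

print(shard_for("Alvarez"))  # -> shard_a_to_m
print(shard_for("Zhang"))    # -> shard_n_to_z
```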

Because if you’re putting all of your A names here and all of your B names here, what happens, god forbid, if one of the servers goes down? You’ve lost half of your customers.

So it would seem that no matter how you balance the load, you really want to maintain duplicates of data. And so there’s a few different ways people solve this. In fact, let me go ahead and temporarily go back to that first model, where we had a really fancy, bigger database that I’ll deliberately draw as pretty big.

And this is big in the sense that it can respond to requests quickly and it can store a lot of data. This might be generally called our primary or our master database. And it’s where our data goes to live long term. It’s where data is written to, so to speak, and could also be read from.

But if we’re going to bump up against some limit of how much work this database can do at once, it would be nice to have some secondary servers or tertiary servers. So a very common paradigm would be to use this primary database for writes– we’ll abbreviate it W– and then also have maybe a couple of smaller databases, or even the same size databases, that are meant for reads, abbreviated R.

And so long as these databases are somehow talking to one another, so that anything written to the primary gets copied, or replicated, out to those read databases, this topology will just work.
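Here’s a minimal sketch in Python of that write/read split; the class and database names are made up for illustration, and in practice the database software itself handles replicating writes from the primary out to the read replicas.

```python
# A minimal sketch of routing writes to one primary database and spreading
# reads across replica databases.
import random

class Cluster:
    def __init__(self, primary: str, replicas: list[str]):
        self.primary = primary    # handles all writes ("W")
        self.replicas = replicas  # handle reads ("R")

    def write(self, query: str) -> str:
        # All writes go to the primary, which the database would then
        # replicate out to the read replicas.
        return f"{self.primary} executes: {query}"

    def read(self, query: str) -> str:
        # Reads can go to any replica, spreading the load across them.
        return f"{random.choice(self.replicas)} executes: {query}"

cluster = Cluster("primary_db", ["replica_1", "replica_2"])
print(cluster.write("INSERT INTO users ..."))
print(cluster.read("SELECT * FROM users"))
```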