Google Cloud Platform Review: Head In The Clouds
Over the last decade, cloud computing services have skyrocketed as people push them to their limits and beyond to find ever more advantageous uses for them. Providers such as Amazon Web Services (AWS), Microsoft Azure and VMware are constantly refining their services and providing users with better means to their ends. But in recent years, one service in particular has offered cloud computing that boasts both flexibility and performance to a high degree: the Google Cloud Platform (GCP).
Flexibility & Performance
The Google Compute Engine (GCE) is the component of the Google Cloud Platform that we are most interested in. This service allows the general public to create virtual machines (VMs) with an excellent range of options and prices for the general hardware required for a VM. The most common hardware choices are, of course, CPU and RAM. When creating a new VM, the Google Compute Engine offers a list of fixed-value options to choose from, but once chosen you then have the freedom to adjust both RAM and CPU as required, which is a useful benefit for everyone.

Potentially the most important hardware for a VM is the disks. Disks can be either HDDs or SSDs, both scaling in IOPS (Input/Output Operations Per Second) as you increase the size of the disk, maxing out at approximately 10,000 read IOPS and 15,000 write IOPS for a 350GB (or larger) SSD. You can also choose whether or not a disk is persistent, meaning that all of the data on it is protected and will survive system restarts and shutdowns. Should you not require your VM online at all times, you can shut down the server for a period and you will not be charged the full price of the VM during that time. Persistent disks therefore offer not only the security of your data, but also additional benefits that might not be seen with some other cloud computing services.

One other option available for disks is the local SSD. This is an SSD physically attached to a VM, which in turn offers superior performance over the other available disk options. However, this increased performance does have certain trade-offs.
- First, a local SSD can only be created when the VM itself is initially created. Thus it is not possible to add a local SSD to an existing VM.
- Second, when creating a local SSD there is currently a fixed size of 375GB. This may be more than enough for certain situations, but for others it can be very limiting indeed.
- Third, and potentially the biggest downside, a local SSD cannot be persistent. This means that any kind of system fault could result in all data held on a local SSD being lost. It also means that the VM cannot be shut down, which removes the benefit of temporarily turning off the VM when it isn't needed.
As a result of the options available, the flexibility in both performance and price suits a wide range of users. It means that you can truly specify a system to your requirements, depending on the performance and the price range that you are looking for.
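To make the disk scaling described earlier concrete, here is a back-of-the-envelope sketch. It assumes a roughly linear rate of about 30 read IOPS per provisioned GB, capped at the ~10,000 read IOPS quoted above for a 350GB SSD; the per-GB rate is our inference from those figures, not an official number.

```shell
# Rough sketch of read IOPS scaling with provisioned SSD size.
# The 30 IOPS/GB rate is our assumption, inferred from the ~10,000 IOPS
# ceiling reached at roughly 350GB; it is not an official figure.
read_iops_for_size() {
  size_gb=$1
  iops=$(( size_gb * 30 ))
  # Cap at the ceiling reached at roughly 350GB.
  if [ "$iops" -gt 10000 ]; then
    iops=10000
  fi
  echo "$iops"
}

read_iops_for_size 100   # smaller disk, proportionally fewer read IOPS
read_iops_for_size 350   # at the ceiling
read_iops_for_size 500   # larger disks gain no further read IOPS
```

The same shape of calculation applies to write IOPS, just with a different ceiling.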
Testing The Platform
With all of the options available, the Google Compute Engine sounds like a great environment in which to build your new VMs. But how easy is it to set up a VM in the Google Compute Engine? How does it fare for the tasks that you require? To try and answer these basic, but very important, questions, we installed a multi-node Greenplum cluster within the Google Compute Engine, using Pivotal's Greenplum database version 18.104.22.168 and servers provisioned with a CentOS 7.2 image. The very simple reason for using Greenplum as a test is that it ticks all of the boxes that would generally be required of a day-to-day server. Basic system processes can test the general performance of disks, CPU and RAM. By running a set of TPC-H queries on a loop over a few days, it is also possible to see how daily traffic may, or may not, affect the performance of the servers. Furthermore, a Greenplum database requires reliable, interference-free network connectivity between the multiple VMs. When we initially looked into the networking capabilities of VMs in the Google Compute Engine, various posts made it appear that running a massively parallel processing (MPP) Greenplum instance would be difficult, if possible at all. This was therefore essentially make or break for the initial testing stages.
Google Compute Engine Cluster Setup
Setting up a VM using the Google Compute Engine is relatively straightforward. Once you are logged in, simply go into the Google Compute Engine and click Create Instance in the VM instances tab. From here you can select the name of the instance (it is important to note that this is also the host name of the server), CPU and RAM values, boot disk options (including the OS image as well as the boot disk size and type), and optional extras (such as additional disks and network parameters). That’s all there is to it! Once you’re happy with the VM details, click Create and the Google Compute Engine will provision it over a respectably short period of time. Once provisioned, you can click on the VM to get a more detailed overview if required. From this view you can also choose to edit the VM and alter most settings.

For our test cluster, we provisioned three VMs, each consisting of 2 CPUs, 4GB RAM and one standard persistent boot disk with 15GB space. The master VM has an additional standard persistent disk with 200GB space (which will be used to generate and store 100GB of TPC-H data), whilst the two segment servers each have an additional SSD persistent disk with 250GB space (for the Greenplum database segments).
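The same provisioning can also be scripted rather than clicked through. The sketch below is a hypothetical gcloud equivalent of the console steps for our master VM; the instance name, zone and disk name are placeholders, and the exact flags should be checked against your installed gcloud version.

```shell
# Hypothetical gcloud equivalent of the console steps above, for the master VM.
# Instance name, zone and data-disk name are placeholders; verify flags with
# `gcloud compute instances create --help` before running.
gcloud compute instances create gp-master \
    --zone=us-central1-a \
    --custom-cpu=2 \
    --custom-memory=4GB \
    --image-family=centos-7 \
    --image-project=centos-cloud \
    --boot-disk-size=15GB \
    --boot-disk-type=pd-standard \
    --create-disk=name=gp-master-data,size=200GB,type=pd-standard
```

The two segment servers would be created the same way, swapping the additional disk for a 250GB `pd-ssd`.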
It isn’t always an initial thought that disks can often be a limiting factor in server performance, especially when it comes to databases. Shovelling on more CPUs and RAM may help in part, but there is always an upper limit, and disks often dictate that limit.

For a provisioned 250GB SSD disk in the Google Compute Engine, one would expect to achieve approximately 7,500 random-read IOPS, which would be a very welcome sight for most database servers. But using the excellent disk performance measuring tool FIO, it was a disappointment to find that the random-read IOPS seen on both of our SSDs was closer to 3,400, regardless of the range of different FIO options we used to try and increase it. Similar testing on a separate VM provisioned with a 375GB local SSD returned similarly disappointing results.
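For reference, the kind of FIO random-read test described above looks roughly like the following. The device path is an assumption (confirm the attached SSD's name with `lsblk` first), and the queue depth and block size are typical starting values rather than the exact options we swept through.

```shell
# Illustrative fio random-read benchmark of the kind described above.
# /dev/sdb is an assumed device name for the attached SSD; confirm with lsblk.
# Reading from the raw device is non-destructive, but double-check the path.
fio --name=randread-test \
    --filename=/dev/sdb \
    --ioengine=libaio \
    --direct=1 \
    --rw=randread \
    --bs=4k \
    --iodepth=64 \
    --runtime=60 \
    --time_based \
    --group_reporting
```

The `iops` figure in FIO's summary output is the number to compare against the provisioned expectation.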
The most important task is to configure the network, as this is essential for Greenplum to run correctly. By default, no user has external or internal SSH access to any server. Whilst generating SSH keys for each individual user (using PuTTYgen) and then applying them to each VM via the Google Compute Engine is relatively straightforward, this only allows SSH access to a VM from an external host. Setting up SSH access between the VMs themselves, which for Greenplum is the more important aspect, is nonetheless a relatively simple task. First, manually generate an RSA key on each server for each user with the command ssh-keygen -t rsa (the key will appear in ~/.ssh). Then share each generated key for each user between all servers (via SCP to and from an external host), and finally paste all of the keys into the ~/.ssh/authorized_keys file on every server. With this task complete, not only is the most tedious part of the server setup out of the way, but it is also a relatively straightforward procedure from here on to get Greenplum up and running.
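The key-exchange steps above can be sketched as follows. The hostname and user are placeholders, and for illustration the key is generated into a temporary directory; on a real cluster you would use ~/.ssh and repeat this for each user on each VM.

```shell
# Sketch of the inter-VM SSH setup described above; hostnames are placeholders.
# For illustration the key goes into a temporary directory; on a real cluster
# this would be ~/.ssh, and the steps are repeated per user, per VM.
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q

# Collect every VM's public key, then append them all to
# ~/.ssh/authorized_keys on every server, along the lines of:
#   scp user@gp-master:.ssh/id_rsa.pub master.pub
#   cat *.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
cat "$KEYDIR/id_rsa.pub"
```

Once every server holds every key, each VM can SSH to every other without a password, which is exactly what Greenplum needs.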
With the network set up as required, all that remains is the system configuration and the Greenplum software installation, neither of which presents any real complications. Once these additional tasks were complete, a single, simple command successfully initialised the Greenplum database across the cluster.

With the database up and running and a 100GB TPC-H data set generated, it was possible to load and query the data without any issues.
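For Greenplum, the initialisation mentioned above boils down to a single command of roughly this shape; the configuration and host file names here are placeholders for files describing the master, the segment hosts and their data directories.

```shell
# Hypothetical Greenplum initialisation; gpinitsystem_config and
# hostfile_segments are placeholder names for the cluster configuration
# file and the list of segment hosts.
gpinitsystem -c gpinitsystem_config -h hostfile_segments
```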
With the data loaded, a continuous loop of the 22 TPC-H queries was run against the data over several days. One thing we specifically looked for was the standard deviation, as a percentage of the mean, of the run times of each individual query. Impressively, this averaged 2% across all queries, with a maximum of 7%. From this we concluded that daily traffic did not noticeably interfere from one person's server to the next. Less impressively, however, the TPC-H tests once again showed that the Google Compute Engine isn't quite as performant as its specifications suggest: it returned an average time of 618 seconds per query, whilst an almost exact replica of the cluster (in terms of Greenplum setup and hardware) on a different cloud provider returned an average of 374 seconds per query.
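As an illustration of the variability metric used above (standard deviation as a percentage of the mean, over repeated runs of a single query), with made-up sample times in seconds:

```shell
# Standard deviation as a percentage of the mean, over repeated run times
# of one query. The five times below are made-up sample values in seconds.
echo "610 618 625 615 620" | awk '{
  n = NF
  for (i = 1; i <= n; i++) { sum += $i }
  mean = sum / n
  for (i = 1; i <= n; i++) { ss += ($i - mean) ^ 2 }
  sd = sqrt(ss / n)
  printf "mean=%.1f sd_pct=%.1f%%\n", mean, (sd / mean) * 100
}'
# prints: mean=617.6 sd_pct=0.8%
```

A low percentage like this means repeated runs of the same query take almost the same time, i.e. neighbouring traffic is not visibly interfering.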
Conclusion
It is easy to say that the Google Cloud Platform is a flexible and reliable cloud computing service, with options that are more than capable of handling most tasks. Upgrading and scaling out your server(s) is essentially as quick as you can click, meaning you are never truly limited with regards to performance, and an ever-growing server can easily meet ever-growing needs.

However, for all of the benefits on offer, the advertised server performance appears to be rather more boast than likely reality. This may not be a problem for the smallest of requirements, but it could well prove a downside for larger ones. Scaling out your server(s) is of course an option, as mentioned above, but how far are you willing to go?