Hadoop Research Journey from Bare Metal to Google Cloud — Episode 2
Previously, in the first episode of this trilogy, “Hadoop Research Journey from bare metal to Google Cloud — Episode 1”, we covered the challenges we faced.
In this episode, I will focus on the proof of concept (POC) we ran in order to decide whether to rebuild the Research cluster in-house or migrate it to the cloud.
As we had many open questions around migrating to the cloud, we decided to run a learning POC, focusing on three main questions:
- The learning curve that would be required from the users
- Compatibility with our in-house online Hadoop clusters
- The estimated cost of running the Research cluster in the cloud
However, before diving into the POC, we had some preliminary work to do.
As the Research cluster had already been running for over six years, it hosted many different use cases. Some were well known and familiar to users, while others were old technical debt that no one could say whether it was still needed, or what value it provided.
We started by mapping all the flows and use cases running on the cluster, identified their users, and assigned owners to the different workflows.
We also distinguished between ad-hoc queries and batch processing.
We mapped all the technologies we needed to support on the Research cluster in order to ensure full compatibility with our online clusters and in-house environment.
After collecting all the required information about the use cases and mapping the technologies, we selected representative workflows and users to take an active part in the POC, collecting their feedback on the learning curve and ease of use. This approach would also serve us well later on if we decided to move forward with the migration, giving us in-house ambassadors.
Once we had mapped all our needs, it was also easier to get high-level cost estimates from the different cloud vendors, giving us a general indication of whether it made sense to continue investing time and resources in the POC.
We wanted to complete the POC within one month: long enough to cover all types of jobs, but short enough not to drag on.
For the POC environment we built a Hadoop cluster based on standard technologies.
We decided not to leverage proprietary vendor technologies at this point, both to reduce the learning curve and to avoid vendor lock-in.
In addition, we decided to run the POC with a single vendor rather than across multiple cloud vendors, out of consideration for our internal resources and time constraints.
We did a theoretical evaluation of the technology roadmap and cost of several cloud vendors, and chose GCP, looking to also leverage BigQuery in the future (once all our data had been migrated).
Once we had decided on the vendor, technologies, and use cases, we were good to go.
For the purpose of the POC, we migrated 500TB of our data, built a Hadoop cluster based on Dataproc, and built the required endpoint machines.
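As an illustration, a POC cluster of this kind can be described with a small configuration like the one below and rendered as the equivalent `gcloud dataproc clusters create` invocation. The project, region, machine types, and cluster size here are hypothetical placeholders, not our actual setup.

```python
# Sketch of a Dataproc cluster definition for a POC environment.
# All names and sizes below are hypothetical placeholders.
POC_CLUSTER = {
    "name": "research-poc",            # hypothetical cluster name
    "project": "my-research-project",  # hypothetical GCP project
    "region": "us-central1",
    "master_machine_type": "n1-standard-8",
    "worker_machine_type": "n1-standard-16",
    "num_workers": 40,                 # sized for the POC workloads
}

def to_gcloud_command(cfg: dict) -> str:
    """Render the config as an equivalent gcloud CLI invocation."""
    return (
        f"gcloud dataproc clusters create {cfg['name']} "
        f"--project={cfg['project']} --region={cfg['region']} "
        f"--master-machine-type={cfg['master_machine_type']} "
        f"--worker-machine-type={cfg['worker_machine_type']} "
        f"--num-workers={cfg['num_workers']}"
    )

print(to_gcloud_command(POC_CLUSTER))
```

Keeping the cluster definition in data like this makes it easy to spin the POC environment up and down on demand instead of keeping it running.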
Needless to say, already at this stage we had to create the network infrastructure to support secure hybrid operation between GCP and our internal datacenters.
Now that everything was ready, we started the actual POC from the users’ perspective. For a period of one month, the participating users performed their use cases twice: once on the in-house Research cluster (the production environment), and a second time on the Research cluster built on GCP (the POC environment). The users were asked to record their experience, which was measured according to the following criteria:
- Compatibility (did the test run seamlessly, were any modifications to code or queries required, etc.)
- Performance (execution time, amount of resources used)
- Ease of use
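The criteria above lend themselves to a simple record per run, so the in-house and GCP executions of the same job can be compared side by side. This is a minimal sketch; the field names and the sample job are made up for illustration, not taken from our actual tracking.

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    job: str              # workflow name (hypothetical example below)
    environment: str      # "in-house" or "gcp"
    ran_seamlessly: bool  # compatibility: no code/query changes needed
    exec_minutes: float   # performance: wall-clock execution time
    ease_of_use: int      # subjective score, 1 (hard) to 5 (easy)

def speedup(in_house: RunRecord, gcp: RunRecord) -> float:
    """Ratio of in-house to GCP execution time (>1 means GCP was faster)."""
    assert in_house.job == gcp.job, "compare runs of the same job"
    return in_house.exec_minutes / gcp.exec_minutes

# Hypothetical sample measurements for one batch job.
a = RunRecord("daily-aggregation", "in-house", True, 90.0, 4)
b = RunRecord("daily-aggregation", "gcp", True, 60.0, 4)
print(f"{a.job}: GCP speedup x{speedup(a, b):.1f}")
```

Collecting every run in one uniform shape like this also makes it straightforward to aggregate results across all participating users at the end of the month.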
During the month of the POC we worked closely with the users and gathered their overall experience and results.
In addition, we documented the compute power needed to execute those jobs, which enabled a better estimate of how much it would cost to run the full Research cluster in the cloud.
The POC was successful
The users had a good experience, and our cost analysis showed that by leveraging cloud elasticity, which in this scenario was very significant, the cloud option would be ROI-positive compared with the investment we would need to make to rebuild the environment internally. Without getting into exact numbers, it was over 40% cheaper, which is a nice incentive!
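To illustrate why elasticity mattered so much: an in-house cluster must be sized for peak load and paid for around the clock, while a cloud cluster can scale down when idle. The numbers below are purely hypothetical, chosen only to show the shape of the calculation, and are not our actual figures.

```python
# Purely hypothetical toy cost model illustrating the elasticity effect.
HOURS_PER_MONTH = 730

# In-house: peak-sized capacity paid for 24/7 (amortized $/node-hour).
peak_nodes = 100
on_prem_node_hour = 1.00   # hypothetical amortized hardware + ops cost
on_prem_monthly = peak_nodes * on_prem_node_hour * HOURS_PER_MONTH

# Cloud: full capacity only during busy hours, scaled down the rest.
cloud_node_hour = 1.20     # hypothetical: pricier per hour, but elastic
busy_hours = 300
idle_hours = HOURS_PER_MONTH - busy_hours
idle_nodes = 15            # cluster scaled down when no heavy jobs run
cloud_monthly = cloud_node_hour * (
    peak_nodes * busy_hours + idle_nodes * idle_hours
)

savings = 1 - cloud_monthly / on_prem_monthly
print(f"cloud is {savings:.0%} cheaper in this toy model")
```

The point of the sketch is that even a higher per-hour cloud price can come out cheaper overall when the cluster spends most of the month scaled down.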
With that, we started our last phase: the actual migration, which is the focus of the final episode, “Hadoop Research Journey from Bare Metal to Google Cloud — Episode 3”. Stay tuned!