
“Monolog”

My HPC and Big Data initiation started sometime in September 2005. I was invited to the office of the SVP for R&D for a meeting that took less than five minutes. I still refer to this session as the “monolog.” At that time, I was running product-related R&D groups with a focus on Speech Technologies. Not suspecting a career change, I was hoping the meeting would reveal something good: a promotion, maybe a raise. Or had I done something wrong? Who knew.

Apparently, management had taken notice over the years that I was running a lab with a variety of operating systems and hardware. Relying on support from Corporate IT was not an option, since none of the hardware or operating systems were company standards. Shadow IT at its best! My experience with the lab somehow persuaded management to hand me the responsibility of creating an HPC group under R&D (not IT). Although the job description was high level, the business need was evident: the ability to collect a lot of data and process it as fast as possible, with the expectation that capacity would grow significantly over the coming years.

HPC and Big Data initiation

Within the company, there was no expertise to solve this problem in a scalable and cost-effective fashion. Corporate IT had originally bought some 40 desktops as a stopgap measure and hooked them up to the same network as the user desktops. It didn’t take long for them to understand that running many data transfers on the same network infrastructure as regular users was not a good idea. Another side effect was the heat dissipation from desktops running 24/7. The machines were nicely lined up in the hallway adjacent to people’s cubes, and the rise in temperature led the landlord to complain about difficulties keeping the room cool during the summer.

To be honest, I wasn’t sure if I was the right person for the job. I had, in fact, no in-depth knowledge of storage, compute or network. However, I did understand that scalability and innovation were critical to the success of the project. We needed an HW & SW stack (a Platform) where individual components could be replaced one by one if needed: for example, swapping out one storage device for another without user interruption. The concept of the Platform allowed us to drive growth and innovation and, most importantly, remain vendor independent. Every single component of the Platform has been replaced at least once over the last decade without users even knowing that we swapped out components right under their nose.
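The core of the Platform idea is that users only ever talk to a stable interface, never to a specific vendor’s device, so any component behind that interface can be swapped. A minimal sketch of that pattern follows; the names (`StorageBackend`, `Platform`, `InMemoryStorage`) are illustrative stand-ins, not the actual stack described in this post:

```python
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """Contract that any storage component must satisfy."""

    @abstractmethod
    def read(self, path: str) -> bytes: ...

    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

    @abstractmethod
    def list_paths(self) -> list: ...


class InMemoryStorage(StorageBackend):
    """Stand-in for a vendor device; a real backend would wrap NFS, Lustre, object storage, etc."""

    def __init__(self):
        self._blobs = {}

    def read(self, path):
        return self._blobs[path]

    def write(self, path, data):
        self._blobs[path] = data

    def list_paths(self):
        return list(self._blobs)


class Platform:
    """Users call the Platform API only, so the backend can change underneath them."""

    def __init__(self, storage: StorageBackend):
        self._storage = storage

    def read(self, path):
        return self._storage.read(path)

    def write(self, path, data):
        self._storage.write(path, data)

    def swap_storage(self, new_storage: StorageBackend):
        # Migrate existing data to the new device, then flip the pointer.
        # From the user's perspective, nothing changes.
        for path in self._storage.list_paths():
            new_storage.write(path, self._storage.read(path))
        self._storage = new_storage
```

In practice the migration step is of course far more involved (live data, capacity, performance), but the design point holds: because no user code depends on a concrete backend, each component can be replaced one by one.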

The end of an era – the start of a new HPC and Big Data adventure

In short, after 11 years of building multi-Petabyte HPC and Big Data clusters for Speech applications, I felt it was time to move on. Providing consultancy and services in HPC and Big Data was the perfect solution for me.

In upcoming blogs, I will go through the lessons learned and give a detailed review of the Platform. I am looking forward to sharing new technologies and concepts in future blogs. Please follow us for blog updates.
