When: January 28, 2019
Event: "Greybeards on Storage, Episode 79"
Session: "GreyBeards talk AI deep learning infrastructure with Frederic Van Haren, CTO & Founder, HighFens, Inc."

GreyBeards talk Artificial Intelligence, Deep Learning infrastructure

I was honored to join the Greybeards on Storage for another podcast, this time on Artificial Intelligence and its infrastructure. I always learn something new when I am on their podcast. If you are interested in AI, this podcast is for you!

My previous podcast with the Greybeards, about infrastructure and storage for HPC (episode 33), can be found here.


The term Artificial Intelligence (AI) has been around for a while. In the podcast, I briefly go over the evolution of AI over time, as well as the relationship between Machine Learning (ML) and Deep Learning (DL). The combination of easy access to AI techniques and methodologies and the availability of powerful technology has made AI more accessible to people.

For more detail on the relationship between AI, ML, and DL, go here.

Democratization of AI

The open source community has been the largest source and provider of AI techniques and methodologies. It wasn't that long ago that those techniques and methodologies were considered the Intellectual Property of a company. Today, it is quite easy for somebody to download and use one or more AI methodologies, all backed by a community of people supporting them.

Armed with data and AI techniques and methodologies, people can practice AI. However, you still need infrastructure to run the AI applications on. Luckily, the storage, compute, and network technologies that make up the infrastructure have become increasingly powerful; so much so that a laptop can be used for AI prototyping. These days, it is also much more affordable due to the availability of Public Clouds such as AWS (Amazon), Azure (Microsoft), or GCP (Google).

Artificial Intelligence (AI)

It is important to note that Artificial Intelligence has two components to it. First, there is the "Training" component, whose goal is to build a model that is an analytical representation of the processed data. Second, there is the "Inference" component, where the trained model is used to predict outcomes for new data. The two components have different needs and requirements.
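The split between the two components can be sketched with a toy example. This is a minimal, hypothetical illustration (a one-parameter model fit with gradient descent), not any specific framework: `train` represents the Training component, which learns a model from data, and `infer` represents the Inference component, which applies the trained model to new data.

```python
# Toy illustration of the two AI components.
# Training: learn the weight w of a model y = w * x from (x, y) pairs.
# Inference: use the trained w to predict an outcome for unseen x.

def train(data, epochs=200, lr=0.05):
    """Training component: iteratively fit w by gradient descent."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
            w -= lr * grad
    return w

def infer(w, x):
    """Inference component: apply the trained model to new data."""
    return w * x

# Example data generated by the relationship y = 2x.
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
model = train(training_data)      # training is compute-heavy and iterative
prediction = infer(model, 5.0)    # inference is a cheap, single pass
```

The example also hints at why the two components have different infrastructure needs: training loops over the full dataset many times, while inference is a single lightweight evaluation per new input.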

