{"id":52524,"date":"2021-10-14T15:00:28","date_gmt":"2021-10-14T14:00:28","guid":{"rendered":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/?p=52524"},"modified":"2022-02-10T20:45:14","modified_gmt":"2022-02-10T19:45:14","slug":"building-scalable-data-science-applications-using-containers-part-6","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/technetuk\/2021\/10\/14\/building-scalable-data-science-applications-using-containers-part-6\/","title":{"rendered":"Building Scalable Data Science Applications using Containers \u2013 Part 6"},"content":{"rendered":"

\"An<\/p>\n

Welcome to the sixth part of this blog series around using containers for Data Science. In parts one, two, three, four, and five, we provided a number of building blocks that we'll use here. If this is the first blog you've seen, it may be worth skimming the first five parts, or going back and progressing through them. We make a number of assumptions about your familiarity with Docker, storage, and multi-container applications, which were covered previously.

In this article, we convert the previous docker-compose application (part five) to one that capitalises on a Kubernetes approach: scalability, resilience, predefined configuration packages with Helm, and so on.
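To make the shape of that conversion concrete, here is a hedged sketch. The service name `web` and the image name are placeholders, not taken from the series: a single docker-compose service typically becomes a Kubernetes Deployment (which gives you replicas and self-healing) plus a Service (which gives you a stable network endpoint).

```yaml
# A hypothetical compose service such as:
#
#   services:
#     web:
#       image: myregistry/web:1.0
#       ports:
#         - "8080:80"
#
# maps roughly onto the following two Kubernetes objects.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # scalability: run several identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myregistry/web:1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                # routes traffic to the Deployment's pods
  ports:
    - port: 8080
      targetPort: 80
```

Tools such as `kompose convert` can generate a first draft of manifests like these from an existing `docker-compose.yml`, and Helm can then package them into a reusable, parameterised chart.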

Reviewing the previous Docker approach's structure, almost everything sits in a container mounting shared storage.

\"A<\/p>\n

Kubernetes brings a different dimension to how you might consider a solution, and our approach builds on this. In this article, we won't stretch the bounds of what Kubernetes can do, but we will show how to take an existing container-based application and gradually migrate that capability to cloud services, with Kubernetes used as an orchestration engine.

This is the revised architecture.

\"A<\/p>\n

Things to note about this: