DC/OS: The Incredible Scalable Container Platform | 10th Magnitude


I have some big news to share: I’m pleased to be part of announcing the GA release of DC/OS 1.7! This release marks Mesosphere’s move to make DC/OS free and 100% open source, in addition to adding a host of new features and neat add-ons. This post consists of an introduction to its capabilities and a quickstart guide to getting up and running with your own (or public) Docker containers. I expect the reader to be familiar with container concepts, to be evaluating Docker platforms, or to have experience with a Docker scheduler like Marathon or Kubernetes (which can also run on top of DC/OS!). Let’s take a look.

What is DC/OS?

This is Mesosphere’s flagship product: the Datacenter Operating System. It’s designed to be the platform your datacenter runs on top of. Combining a set of open source technologies (Mesos, ZooKeeper, Marathon, Docker) already running in production at giants like Netflix, Apple, eBay and Twitter, it provides the primitives necessary for building scalable and reliable distributed applications. It also aims to be the easiest way to run containers in production. It comes preloaded with a public package repository that forms the basis of installing and configuring your datacenter apps, but is otherwise a blank slate for you to run your apps and frameworks. If you’re familiar with package managers like apt, yum, or chocolatey, installing a framework is just as simple. For example, dcos package install jenkins will deploy a containerized Jenkins master to your cluster, preconfigured to use the cluster’s resources for its worker processes.

Anatomy of a DC/OS Installation

  • Components
    • Mesos – The kernel of your DC/OS. This manages scheduling your apps and frameworks as well as making the slave resources available to your cluster.
    • Marathon – The init system/service manager. This framework contains definitions of all of your apps with instructions on how they should be run. It will monitor their health and restart failed apps and orchestrate rolling upgrades to new versions.
    • Chronos – The scheduled task manager. This is analogous to cron or the Windows Task Scheduler and can execute complex commands on a set schedule.
  • Machines
    • Bootstrap – This node creates and serves the cluster-wide configuration and installer run by the master and slave nodes. This needs to stay online and is necessary for creating new slave nodes.
    • Master(s) – These nodes run the various DC/OS services necessary for the cluster to function. The preloaded system copy of Marathon is where new frameworks and apps are installed by default. For testing purposes, deploying to this system Marathon is fine, but in production it is a good idea to deploy a separate copy of Marathon just for your apps (https://docs.mesosphere.com/usage/services/marathon/marathon-user-instance/).
    • Slaves – These nodes are what your frameworks and Docker apps actually run on. Through the Mesos agent, they report their available resources (CPU, RAM and disk space) back to the Mesos masters, which use that information to decide where to schedule tasks.
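To make the Chronos piece concrete, a scheduled job is itself defined in JSON. This is a rough sketch for illustration only (the job name, command, and owner are made up); the schedule field uses an ISO 8601 repeating interval, here "repeat forever, starting May 1st, every 24 hours":

```json
{
  "name": "scratch-cleanup",
  "command": "rm -rf /tmp/scratch/*",
  "schedule": "R/2016-05-01T00:00:00Z/PT24H",
  "epsilon": "PT30M",
  "owner": "ops@example.com"
}
```

Chronos will run the command on any slave with spare resources, retrying within the epsilon window if the scheduled launch is missed.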

Marathon on DC/OS Quickstart

Let’s say your app, hello_world, runs inside of a Docker container. Let’s also say you have a DC/OS installation ready to go. What would getting hello_world onto that installation look like? First, you’d create a simple JSON configuration describing how you want your app to run. Tutum publishes a simple hello-world image on Docker Hub, so let’s use that for now:

{
  "id": "/hello_world",
  "instances": 1,
  "cpus": 1,
  "mem": 512,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "tutum/hello-world",
      "network": "BRIDGE",
      "portMappings": [{
        "containerPort": 80,
        "protocol": "tcp"
      }]
    }
  }
}

This describes the basics: how much CPU share and RAM your app gets, and what network configuration it should use. Then you’ll deploy it to DC/OS (after a series of rigorous tests, of course) by running:

dcos marathon app add hello_world.json

Simple, right? Browse to the endpoint exposed by Marathon, and you’ll see your app running:

Marathon Hello World
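If you’d rather not shell out to the CLI, Marathon also exposes this through its REST API: adding an app is a POST of the same JSON to /v2/apps. Here’s a rough Python sketch; the cluster URL is a hypothetical placeholder, so the request is built but not actually sent:

```python
import json
import urllib.request

# Placeholder -- substitute your DC/OS master's address.
MARATHON_URL = "http://your-dcos-master/service/marathon"

app_definition = {
    "id": "/hello_world",
    "instances": 1,
    "cpus": 1,
    "mem": 512,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "tutum/hello-world",
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        },
    },
}

# Build the POST request; `dcos marathon app add` does essentially this.
request = urllib.request.Request(
    MARATHON_URL + "/v2/apps",
    data=json.dumps(app_definition).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# On a live cluster, send it with:
# response = urllib.request.urlopen(request)
```

The same API lets you scale an app by PUTting an updated instance count, which is what tools like CI servers use to drive deployments.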

But what does it actually take to get there? Let’s look at how to install DC/OS for yourself.

Installing DC/OS

There are a few different deployment types: automated via GUI, automated via CLI, and manual via CLI. Pick whichever one makes the most sense for your environment. I’m a big fan of Chef, so I wrote a cookbook that uses the manual command line method and makes generating the DC/OS config and distributing the installer a bit easier and reproducible. You can find it here: https://github.com/ryanl-ee/dcos-cookbook/. See the cookbook readme for more details about testing and deployment.

If you’re unfamiliar with Chef, or just want a simple way to kick the tires, you can follow along yourself with the excellent Community Edition installation instructions: https://docs.mesosphere.com/administration/installing/cloud/.

For a more advanced install, and for a better look at the method the Chef cookbook uses, check out the Enterprise Edition instructions: https://docs.mesosphere.com/administration/installing/custom/

The simplicity of this install centers on a single config.yaml file. This file contains all the configuration you can pass through to the cluster installer, and lets you store that cluster definition externally and use it for automated infrastructure testing.
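As a hedged illustration (field names follow the DC/OS install docs of the time, and every name and address below is a placeholder you’d replace with your own), a minimal config.yaml might look something like:

```yaml
---
cluster_name: 'demo-cluster'
# URL the master/slave install scripts use to fetch artifacts from the bootstrap node
bootstrap_url: 'http://10.0.0.4:8080'
exhibitor_storage_backend: 'static'
master_discovery: 'static'
master_list:
  - 10.0.0.10
  - 10.0.0.11
  - 10.0.0.12
resolvers:
  - 8.8.8.8
  - 8.8.4.4
```

Because the entire cluster definition lives in this one file, it can be checked into source control alongside the Chef cookbooks or templates that build the machines themselves.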

After you’ve installed your DC/OS cluster, you’ll see something like this:

Mesosphere DCOS Dashboard

This dashboard collects the information you’ll need to get insight into what’s happening in your datacenter.

If you’ve used DC/OS before, you’ll notice that authentication is now enabled by default. Authentication to the cluster management portal integrates with Google, GitHub and Microsoft accounts.

Thanks to the Kubernetes on Mesos project, we have a native way to run the popular Docker scheduler directly on DC/OS. Kubernetes is temporarily unavailable from the DC/OS package repo, but keep an eye on the GitHub issue for more news (https://github.com/mesosphere/kubernetes-mesos/issues/799).

Automated Interactions

You can interact with your cluster programmatically via the DC/OS command line interface. Find installation instructions here: https://docs.mesosphere.com/usage/cli/install/. If you’re on Windows and don’t want to install Python/pip, it may be easier to install the dcos-cli on a VM or on the bootstrap node. Marathon (like most other frameworks) also provides an API, but the dcos-cli is generally the first place to look for interacting with your cluster. For example, if you’ve installed Kafka on your cluster, you can interactively create brokers and topics right from your command prompt.

Next Steps

Getting your cluster set up is just the first step to a truly automated stack. Use an automated tool like Azure RM templates or Terraform to deploy your machines, a configuration management tool like Chef to configure and maintain them, bootstrap a CI/CD server like Jenkins to your cluster, and preload it with jobs to install the rest of your DC/OS frameworks. Imagine being able to not only give your developers a reproducible and portable environment, but also to give your Ops team the same flexibility in their infrastructure. Ops would be able to build a nearly identical environment on demand in your chosen cloud, on on-prem hardware, or even on their laptops, with its entire functionality and behavior determined by a simple config file. By choosing config-driven tools that live around or on top of DC/OS, you can make it the centerpiece of your datacenter.

Further reading:

https://docs.mesosphere.com/usage/tutorials/

https://open.mesosphere.com/advanced-course/

Need more help? Join the DC/OS community on Slack: http://dcos-community.slack.com


Learn more about Mesosphere and Azure on Our Website
