krane

As this project approaches the v1.0 milestone, we're excited to announce that kubernetes-deploy will be officially renamed to krane. Follow the 1.0 requirement label to keep up with the progress.

kubernetes-deploy

kubernetes-deploy is a command line tool that helps you ship changes to a Kubernetes namespace and understand the result. At Shopify, we use it within our much-beloved, open-source Shipit deployment app.

Why not just use the standard kubectl apply mechanism to deploy? It is indeed a fantastic tool; kubernetes-deploy uses it under the hood! However, it leaves its users with some burning questions: What just happened? Did it work?

Especially in a CI/CD environment, we need a clear, actionable pass/fail result for each deploy. Providing this was the foundational goal of kubernetes-deploy, which has grown to support the following core features:

:eyes: Watches the changes you requested to make sure they roll out successfully.

:interrobang: Provides debug information for changes that failed.

:1234: Predeploys certain types of resources (e.g. ConfigMap, PersistentVolumeClaim) to make sure the latest version will be available when resources that might consume them (e.g. Deployment) are deployed.

:closed_lock_with_key: Creates Kubernetes secrets from encrypted EJSON, which you can safely commit to your repository.

:running: Runs tasks at the beginning of a deploy using bare pods (example use case: Rails migrations).

This repo also includes related tools for running tasks and restarting deployments.



Table of contents

kubernetes-deploy

kubernetes-restart

kubernetes-run

kubernetes-render

Contributing


Prerequisites

  • Ruby 2.4+
  • Your cluster must be running Kubernetes v1.11.0 or higher¹
  • Each app must have a deploy directory containing its Kubernetes templates (see Templates)

¹ We run integration tests against these Kubernetes versions. You can find our official compatibility chart below.

Kubernetes version    Last officially supported in gem version
1.5                   0.11.2
1.6                   0.15.2
1.7                   0.20.6
1.8                   0.21.1
1.9                   0.24.0
1.10                  0.27.0

Installation

  1. Install kubectl (requires v1.11.0 or higher) and make sure it is available in your $PATH
  2. Set up your kubeconfig file for access to your cluster(s).
  3. gem install kubernetes-deploy

Usage

kubernetes-deploy <app's namespace> <kube context>

Environment variables:

  • $REVISION: the SHA of the commit you are deploying. Will be exposed to your ERB templates as current_sha.
  • $KUBECONFIG: points to one or multiple valid kubeconfig files that include the context you want to deploy to. File names are separated by colons on Linux and macOS, and semicolons on Windows. If omitted, will use the Kubernetes default of ~/.kube/config.
  • $TASK_ID: used as the ID of the deployment for resource naming.
  • $ENVIRONMENT: used to set the deploy directory to config/deploy/$ENVIRONMENT. You can use the --template-dir=DIR option instead if you prefer (one or the other is required).
  • $GOOGLE_APPLICATION_CREDENTIALS: points to the credentials for an authenticated service account (required if your kubeconfig user's auth provider is GCP)
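For example, a CI script might export these variables before invoking the deploy; the namespace, context, and values below are illustrative:

$ export REVISION=$(git rev-parse HEAD)   # exposed to ERB templates as current_sha
$ export ENVIRONMENT=production           # templates read from config/deploy/production
$ kubernetes-deploy my-app my-k8s-cluster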

Options:

Refer to kubernetes-deploy --help for the authoritative set of options.

  • --template-dir=DIR: Used to set the deploy directory. Set $ENVIRONMENT instead to use config/deploy/$ENVIRONMENT. This flag also supports reading from STDIN. You can do this by using --template-dir=-. Example: cat templates_from_stdin/*.yml | kubernetes-deploy ns ctx --template-dir=-.
  • (alpha feature) -f [PATHS]: Accepts a comma-separated list of directories and/or filenames to specify the set of directories/files that will be deployed (use - to read from STDIN). Can be invoked multiple times. Cannot be combined with --template-dir. Example: cat templates_from_stdin/*.yml | kubernetes-deploy ns ctx -f -,path/to/dir,path/to/file.yml
  • --bindings=BINDINGS: Makes additional variables available to your ERB templates. For example, kubernetes-deploy my-app cluster1 --bindings=color=blue,size=large will expose color and size.
  • --no-prune: Skips pruning of resources that are no longer in your Kubernetes template set. Not recommended, as it allows your namespace to accumulate cruft that is not reflected in your deploy directory.
  • --max-watch-seconds=seconds: Raise a timeout error if it takes longer than seconds for any resource to deploy.
  • --selector: Instructs kubernetes-deploy to only prune resources which match the specified label selector, such as environment=staging. If you use this option, all resource templates must specify matching labels. See Sharing a namespace below.

NOTICE: Deploy Secret resources at your own risk. Although we will fix any reported leak vectors with urgency, we cannot guarantee that sensitive information will never be logged.

Sharing a namespace

By default, kubernetes-deploy will prune any resources in the target namespace which have the kubectl.kubernetes.io/last-applied-configuration annotation and are not a result of the current deployment process, on the assumption that there is a one-to-one relationship between application deployment and namespace, and that a deployment provisions all relevant resources in the namespace.

If you need to, you may specify --no-prune to disable all pruning behaviour, but this is not recommended.

If you need to share a namespace with resources which are managed by other tools or indeed other kubernetes-deploy deployments, you can supply the --selector option, such that only resources with labels matching the selector are considered for pruning.
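For example, two applications could share a namespace by labelling all of their resources and deploying with matching selectors; the label key and values here are illustrative:

# Every template in each app's deploy directory must carry the matching
# label, e.g. metadata.labels.app: app-a
$ kubernetes-deploy shared-namespace my-k8s-cluster --selector=app=app-a
$ kubernetes-deploy shared-namespace my-k8s-cluster --selector=app=app-b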

Using templates and variables

Each app's templates are expected to be stored in a single directory. If this is not the case, you can create a directory containing symlinks to the templates. The recommended location for an app's deploy directory is {app root}/config/deploy/{env}, but this is completely configurable.

All templates must be YAML formatted. You can also use ERB. The following local variables will be available to your ERB templates by default:

  • current_sha: The value of $REVISION
  • deployment_id: The value of $TASK_ID, or in its absence, a randomly generated identifier for the deploy. Useful for creating unique names for task-runner pods (e.g. a pod that runs rails migrations at the beginning of deploys).

You can add additional variables using the --bindings=BINDINGS option, which accepts a comma-separated string, a JSON string, or a path to a JSON or YAML file. Complex JSON or YAML data will be converted to a Hash for use in templates. To load a file, the argument should include the relative file path prefixed with an @ sign. An argument error will be raised if the string argument cannot be parsed, the referenced file does not include a valid extension (.json, .yaml or .yml), or the referenced file does not exist.

Bindings examples

# Comma-separated string. Exposes 'color' and 'size'
$ kubernetes-deploy my-app cluster1 --bindings=color=blue,size=large

# JSON string. Exposes 'color' and 'size'
$ kubernetes-deploy my-app cluster1 --bindings='{"color":"blue","size":"large"}'

# Load JSON file from ./config
$ kubernetes-deploy my-app cluster1 --bindings='@config/production.json'

# Load YAML file from ./config (.yaml or .yml supported)
$ kubernetes-deploy my-app cluster1 --bindings='@config/production.yaml'
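As a sketch of the file-based form, a config/production.yaml like the following (the names are illustrative) would expose replicas and ingress to your templates, the latter converted to a Hash:

replicas: 3
ingress:
  host: example.com
  tls: true

Inside an ERB template you could then reference these as, e.g., <%= replicas %> or <%= ingress["host"] %>.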

Using partials

kubernetes-deploy supports composing templates from so-called partials in order to reduce duplication in Kubernetes YAML files. Given a template directory DIR, partials are searched for in DIR/partials and in DIR/../partials, in that order. They can be embedded in other ERB templates using the helper method partial. For example, if an application needs a number of different CronJob resources, you could place a template called cron in one of those directories and then use it in the main deployment.yaml.erb like so:

<%= partial "cron", name: "cleanup",   schedule: "0 0 * * *", args: %w(cleanup),    cpu: "100m", memory: "100Mi" %>
<%= partial "cron", name: "send-mail", schedule: "0 0 * * *", args: %w(send-mails), cpu: "200m", memory: "256Mi" %>

Inside a partial, parameters can be accessed as normal variables, or via a hash called locals. Thus, the cron template could look like this:

---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cron-<%= name %>
spec:
  schedule: <%= schedule %>
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cron-<%= name %>
            image: ...
            args: <%= args %>
            resources:
              requests:
                cpu: "<%= cpu %>"
                memory: <%= memory %>
          restartPolicy: OnFailure

Both .yaml.erb and .yml.erb file extensions are supported. Templates must refer to the bare filename (e.g. use partial "cron" to reference cron.yaml.erb).

Limitations when using partials

Partials can be included almost anywhere in ERB templates. Note: when using a partial to insert additional key-value pairs into a map, you must use YAML merge keys. For example, given a partial p defining two fields 'a' and 'b',

a: 1
b: 2

you cannot do this:

x: yz
<%= partial 'p' %>

hoping to get

x: yz
a: 1
b: 2

but you can do:

<<: <%= partial 'p' %>
x: yz

This is a limitation of the current implementation.

Customizing behaviour with annotations

  • krane.shopify.io/timeout-override: Override the tool's hard timeout for one specific resource. Both full ISO8601 durations and the time portion of ISO8601 durations are valid. Value must be between 1 second and 24 hours.
    • Example values: 45s / 3m / 1h / PT0.25H
    • Compatibility: all resource types
  • krane.shopify.io/required-rollout: Modifies how much of the rollout needs to finish before the deployment is considered successful.
    • Compatibility: Deployment
    • full: The deployment is successful when all pods in the new replicaSet are ready.
    • none: The deployment is successful as soon as the new replicaSet is created for the deployment.
    • maxUnavailable: The deploy is successful when minimum availability is reached in the new replicaSet. In other words, the number of new pods that must be ready is equal to spec.replicas - strategy.RollingUpdate.maxUnavailable (converted from percentages by rounding up, if applicable). This option is only valid for deployments that use the RollingUpdate strategy.
    • Percent (e.g. 90%): The deploy is successful when the number of new pods that are ready is equal to spec.replicas * Percent.
  • krane.shopify.io/prunable: Allows a Custom Resource to be pruned during deployment.
    • Compatibility: Custom Resource Definition
    • true: The custom resource will be pruned if the resource is not in the deploy directory.
    • All other values: The custom resource will not be pruned.
  • krane.shopify.io/predeployed: Causes a Custom Resource to be deployed in the pre-deploy phase.
    • Compatibility: Custom Resource Definition
    • Default: true
    • true: The custom resource will be deployed in the pre-deploy phase.
    • All other values: The custom resource will be deployed in the main deployment phase.
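For example, the two Deployment-compatible annotations above could be combined on a single resource; the name and values here are illustrative:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web
  annotations:
    krane.shopify.io/timeout-override: 10m
    krane.shopify.io/required-rollout: maxUnavailable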

Running tasks at the beginning of a deploy

To run a task in your cluster at the beginning of every deploy, simply include a Pod template in your deploy directory. kubernetes-deploy will first deploy any ConfigMap and PersistentVolumeClaim resources in your template set, followed by any such pods. If the command run by one of these pods fails (i.e. exits with a non-zero status), the overall deploy will fail at this step (no other resources will be deployed).

Requirements:

  • The pod's name should include <%= deployment_id %> to ensure that a unique name will be used on every deploy (the deploy will fail if a pod with the same name already exists).
  • The pod's spec.restartPolicy must be set to Never so that it will be run exactly once. We'll fail the deploy if that run exits with a non-zero status.
  • The pod's spec.activeDeadlineSeconds should be set to a reasonable value for the performed task (not required, but highly recommended)

A simple example can be found in the test fixtures: test/fixtures/hello-cloud/unmanaged-pod-1.yml.erb.
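A minimal sketch of such a template, following the requirements above (the image and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: migrate-<%= deployment_id %>
spec:
  activeDeadlineSeconds: 300
  restartPolicy: Never
  containers:
  - name: migrate
    image: my-registry/my-app:<%= current_sha %>
    command: ["bundle", "exec", "rake", "db:migrate"]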

The logs of all pods run in this way will be printed inline. If there is only one pod, the logs will be streamed in real-time. If there are multiple, they will be fetched when the pod terminates.


Deploying Kubernetes secrets (from EJSON)

Note: If you're a Shopify employee using our cloud platform, this setup has already been done for you. Please consult the CloudPlatform User Guide for usage instructions.

Since their data is only base64 encoded, Kubernetes secrets should not be committed to your repository. Instead, kubernetes-deploy supports generating secrets from an encrypted ejson file in your template directory. Here's how to use this feature:

  1. Install the ejson gem: gem install ejson
  2. Generate a new keypair: ejson keygen (prints the keypair to stdout)
  3. Create a Kubernetes secret in your target namespace with the new keypair: kubectl create secret generic ejson-keys --from-literal=YOUR_PUBLIC_KEY=YOUR_PRIVATE_KEY --namespace=TARGET_NAMESPACE
     Warning: Do not use apply to create the ejson-keys secret. kubernetes-deploy will fail if ejson-keys is prunable. This safeguard protects against the accidental deletion of your private keys.
  4. (optional but highly recommended) Back up the keypair somewhere secure, such as a password manager, for disaster recovery purposes.
  5. In your template directory (alongside your Kubernetes templates), create secrets.ejson with the format shown below. The _type key should have the value "kubernetes.io/tls" for TLS secrets and "Opaque" for all others. The data key must be a JSON object, but its keys and values can be whatever you need.
{
  "_public_key": "YOUR_PUBLIC_KEY",
  "kubernetes_secrets": {
    "catphotoscom": {
      "_type": "kubernetes.io/tls",
      "data": {
        "tls.crt": "cert-data-here",
        "tls.key": "key-data-here"
      }
    },
    "monitoring-token": {
      "_type": "Opaque",
      "data": {
        "api-token": "token-value-here"
      }
    }
  }
}
  6. Encrypt the file: ejson encrypt /PATH/TO/secrets.ejson
  7. Commit the encrypted file and deploy as usual. The deploy will create secrets from the data in the kubernetes_secrets key.

Note: Since leading underscores in ejson keys are used to skip encryption of the associated value, kubernetes-deploy will strip these leading underscores when it creates the keys for the Kubernetes secret data. For example, given the ejson data below, the monitoring-token secret will have keys api-token and property (not _property):

{
  "_public_key": "YOUR_PUBLIC_KEY",
  "kubernetes_secrets": {
    "monitoring-token": {
      "_type": "kubernetes.io/tls",
      "data": {
        "api-token": "EJ[ENCRYPTED]",
        "_property": "some unencrypted value"
      }
    }
  }
}

A warning about using EJSON secrets with --selector: when using EJSON to generate Secret resources and specifying a --selector for deployment, the labels from the selector are automatically added to the Secret. If the same EJSON file is deployed to the same namespace using different selectors, the resource will thrash: even if the contents of the secret are the same, the resource has different labels on each deploy.

Deploying custom resources

By default, kubernetes-deploy does not check the status of custom resources; it simply assumes that they deployed successfully. In order to meaningfully monitor the rollout of custom resources, kubernetes-deploy supports configuring pass/fail conditions using annotations on CustomResourceDefinitions (CRDs).

Note: This feature is only available on clusters running Kubernetes 1.11+ since it relies on the metadata.generation field being updated when custom resource specs are changed.

Requirements:

  • The custom resource must expose a status subresource with an observedGeneration field.
  • The krane.shopify.io/instance-rollout-conditions annotation must be present on the CRD that defines the custom resource.
  • (optional) The krane.shopify.io/instance-timeout annotation can be added to the CRD that defines the custom resource to override the global default timeout for all instances of that resource. This annotation can use ISO8601 format or unprefixed ISO8601 time components (e.g. '1H', '60S').
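Putting these together, a CRD carrying both annotations might look like this (a sketch; the group, kind, and timeout value are illustrative):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.shopify.io
  annotations:
    krane.shopify.io/instance-rollout-conditions: "true"
    krane.shopify.io/instance-timeout: 3M
spec:
  ...
  subresources:
    status: {}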

Specifying pass/fail conditions

The presence of a valid krane.shopify.io/instance-rollout-conditions annotation on a CRD will cause kubernetes-deploy to monitor the rollout of all instances of that custom resource. Its value can either be "true" (giving you the defaults described in the next section) or a valid JSON string with the following format:

'{
  "success_conditions": [
    { "path": <JsonPath expression>, "value": <target value> }
    ... more success conditions
  ],
  "failure_conditions": [
    { "path": <JsonPath expression>, "value": <target value> }
    ... more failure conditions
  ]
}'

For all conditions, path must be a valid JsonPath expression that points to a field in the custom resource's status. value is the value that must be present at path in order to fulfill a condition. For a deployment to be successful, all success_conditions must be fulfilled. Conversely, the deploy will be marked as failed if any one of failure_conditions is fulfilled. success_conditions are mandatory, but failure_conditions can be omitted (the resource will simply time out if it never reaches a successful state).

In addition to path and value, a failure condition can also contain error_msg_path or custom_error_msg. error_msg_path is a JsonPath expression that points to a field you want to surface when a failure condition is fulfilled. For example, a status condition may expose a message field that contains a description of the problem it encountered. custom_error_msg is a string that can be used if your custom resource doesn't contain sufficient information to warrant using error_msg_path. Note that custom_error_msg has higher precedence than error_msg_path so it will be used in favor of error_msg_path when both fields are present.

Warning:

You must ensure that your custom resource controller sets .status.observedGeneration to match the observed .metadata.generation of the monitored resource once its sync is complete. If this does not happen, kubernetes-deploy will not check success or failure conditions and the deploy will time out.

Example

As an example, the following is the default configuration that will be used if you set krane.shopify.io/instance-rollout-conditions: "true" on the CRD that defines the custom resources you wish to monitor:

'{
  "success_conditions": [
    {
      "path": "$.status.conditions[?(@.type == \"Ready\")].status",
      "value": "True"
    }
  ],
  "failure_conditions": [
    {
      "path": "$.status.conditions[?(@.type == \"Failed\")].status",
      "value": "True",
      "error_msg_path": "$.status.conditions[?(@.type == \"Failed\")].message"
    }
  ]
}'

The paths defined here are based on the typical status properties defined by the Kubernetes community. This default configuration expects the status subresource to contain a conditions array whose entries minimally specify type, status, and message fields.

You can see how these conditions relate to the following resource:

apiVersion: stable.shopify.io/v1
kind: Example
metadata:
  generation: 2
  name: example
  namespace: namespace
spec:
  ...
status:
  observedGeneration: 2
  conditions:
  - type: "Ready"
    status: "False"
    reason: "exampleNotReady"
    message: "resource is not ready"
  - type: "Failed"
    status: "True"
    reason: "exampleFailed"
    message: "resource is failed"

  • observedGeneration == metadata.generation, so kubernetes-deploy will check this resource's success and failure conditions.
  • Since $.status.conditions[?(@.type == "Ready")].status == "False", the resource is not considered successful yet.
  • $.status.conditions[?(@.type == "Failed")].status == "True" means that a failure condition has been fulfilled and the resource is considered failed.
  • Since error_msg_path is specified, kubernetes-deploy will log the contents of $.status.conditions[?(@.type == "Failed")].message, which in this case is: resource is failed.

Deploy walkthrough

Let's walk through what happens when you run the deploy task against the test/fixtures/hello-cloud directory of templates. You can see this for yourself by running the following command:

krane deploy my-namespace my-k8s-cluster -f test/fixtures/hello-cloud --render-erb

As soon as you run this, you'll start seeing some output being streamed to STDERR.

Phase 1: Initializing deploy

In this phase, we:

  • Perform basic validation to ensure we can proceed with the deploy. This includes checking if we can reach the context, if the context is valid, if the namespace exists within the context, and more. We try to validate as much as we can before trying to ship something because we want to avoid having an incomplete deploy in case of a failure (this is especially important because there's no rollback support).
  • List out all the resources we want to deploy (as described in the template files we used).
  • Render ERB templates and apply partials, if enabled (which is the case for this example). If enabled, we also perform basic validation on the parsed templates.

Phase 2: Checking initial resource statuses

In this phase, we check resource statuses. For each resource listed in the previous step, we check Kubernetes for its status; on a first deploy this might show a bunch of items as "Not Found", but for a deploy of a new version of an existing app, it could look like this:

Certificate/services-foo-tls     Exists
Cloudsql/foo-production          Provisioned
Deployment/jobs                  3 replicas, 3 updatedReplicas, 3 availableReplicas
Deployment/web                   3 replicas, 3 updatedReplicas, 3 availableReplicas
Ingress/web                      Created
Memcached/foo-production         Healthy
Pod/db-migrate-856359            Unknown
Pod/upload-assets-856359         Unknown
Redis/foo-production             Healthy
Service/web                      Selects at least 1 pod

The next phase might be either "Predeploying priority resources" (if there are any) or "Deploying all resources". In this example we'll go through the former, as we do have predeployable resources.

Phase 3: Predeploying priority resources

This is the first phase that could modify the cluster.

In this phase we predeploy certain types of resources (e.g. ConfigMap, PersistentVolumeClaim, Secret, ...) to make sure the latest version will be available when resources that might consume them (e.g. Deployment) are deployed. This phase will be skipped if the templates don't include any resources that would need to be predeployed.

When this runs, we essentially run kubectl apply on those templates and periodically check the cluster for the current status of each resource so we can display error or success information. This will look different depending on the type of resource. If you're running the command described above, you should see something like this in the output:

Deploying ConfigMap/hello-cloud-configmap-data (timeout: 30s)
Successfully deployed in 0.2s: ConfigMap/hello-cloud-configmap-data

Deploying PersistentVolumeClaim/hello-cloud-redis (timeout: 300s)
Successfully deployed in 3.3s: PersistentVolumeClaim/hello-cloud-redis

Deploying Role/role (timeout: 300s)
Don't know how to monitor resources of type Role. Assuming Role/role deployed successfully.
Successfully deployed in 0.2s: Role/role

As you can see, different types of resources can have different timeout values and different success criteria; in some specific cases (such as with Role) we might not know how to confirm success or failure, so we use a higher timeout value and assume it worked.

Phase 4: Deploying all resources

In this phase, we:

  • Deploy all resources found in the templates, including resources that were predeployed in the previous step (which should be treated as a no-op by Kubernetes). We deploy everything so the pruning logic (described below) doesn't remove any predeployed resources.
  • Prune resources not found in the templates (you can disable this by using --no-prune).

Just like in the previous phase, we essentially run kubectl apply on those templates and periodically check the cluster for the current status of each resource so we can display error or success information.

If pruning is enabled (which, again, is the default), any resource whose type is listed in DeployTask.prune_whitelist and that exists in the namespace but not in the templates will be removed. A message about pruning will be printed in the next phase if any resources match these criteria.

Result

The result section will show:

  • A global status: if all resources were deployed successfully, this will show up as "SUCCESS"; if at least one resource failed to deploy (due to an error or timeout), this will show up as "FAILURE".
  • A list of resources and their individual status: this will show up as something like "Available", "Created", and "1 replica, 1 availableReplica, 1 readyReplica".

At this point the command also returns a status code:

  • If it was a success, 0
  • If there was a timeout, 70
  • If any other failure happened, 1

On timeouts: note that a single-resource timeout or a global deploy timeout doesn't necessarily mean the operation failed. Since Kubernetes updates are asynchronous, something may simply have been too slow to report its status within the configured time; in those cases, running the deploy again usually works (and should be a no-op for most, if not all, resources).
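In a CI script, you can branch on that status code; for instance, a sketch that retries once on timeout:

kubernetes-deploy my-app my-k8s-cluster
status=$?
if [ "$status" -eq 70 ]; then
  # 70 indicates a timeout; retrying is usually safe (mostly a no-op)
  kubernetes-deploy my-app my-k8s-cluster
fi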

kubernetes-restart

kubernetes-restart is a tool for restarting all of the pods in one or more deployments. It triggers the restart by updating the RESTARTED_AT environment variable in the deployment's podSpec. The rollout strategy defined for each deployment will be respected by the restart.

Usage

Option 1: Specify the deployments you want to restart

The following command will restart all pods in the web and jobs deployments:

kubernetes-restart <kube namespace> <kube context> --deployments=web,jobs

Option 2: Annotate the deployments you want to restart

Add the annotation shipit.shopify.io/restart to all the deployments you want to target, like this:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web
  annotations:
    shipit.shopify.io/restart: "true"

With this done, you can use the following command to restart all of them:

kubernetes-restart <kube namespace> <kube context>

Options:

Refer to kubernetes-restart --help for the authoritative set of options.

  • --selector: Only restarts Deployments which match the specified Kubernetes resource selector.
  • --deployments: Restart specific Deployment resources by name.

kubernetes-run

kubernetes-run is a tool for triggering a one-off job, such as a rake task, outside of a deploy.

Prerequisites

  • You've already deployed a PodTemplate object whose template field contains a Pod specification that does not include the apiVersion or kind parameters. An example is provided in this repo in test/fixtures/hello-cloud/template-runner.yml.
  • The Pod specification in that template has a container named task-runner.

Based on this specification, kubernetes-run will create a new pod with the entrypoint of the task-runner container overridden with the supplied arguments.
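A sketch of such a PodTemplate, using the default template name from the --template option below (the image is illustrative); note that the nested Pod specification carries no apiVersion or kind of its own:

apiVersion: v1
kind: PodTemplate
metadata:
  name: task-runner-template
template:
  metadata:
    name: task-runner
  spec:
    containers:
    - name: task-runner
      image: my-registry/my-app:latest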

Usage

kubernetes-run <kube namespace> <kube context> <arguments> --entrypoint=<entrypoint> --template=<template name>

Options:

  • --template=TEMPLATE: Specifies the name of the PodTemplate to use (default is task-runner-template if this option is not set).
  • --env-vars=ENV_VARS: Accepts a comma separated list of environment variables to be added to the pod template. For example, --env-vars="ENV=VAL,ENV2=VAL2" will make ENV and ENV2 available to the container.
  • --entrypoint=ENTRYPOINT: Specify the entrypoint to use to start the task runner container.
  • --skip-wait: Skip verification of pod success
  • --max-watch-seconds=seconds: Raise a timeout error if the pod runs for longer than the specified number of seconds
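For example, the following might run a one-off rake task using the default template; the task and environment variable are illustrative:

$ kubernetes-run my-namespace my-k8s-cluster rake db:migrate:status --env-vars="RAILS_ENV=production"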

kubernetes-render

kubernetes-render is a tool for rendering ERB templates to raw Kubernetes YAML. It's useful for seeing what kubernetes-deploy does before actually invoking kubectl on the rendered YAML. It's also useful for outputting YAML that can be passed to other tools, for validation or introspection purposes.

Prerequisites

  • kubernetes-render does not require a running cluster or an active kubernetes context, which is nice if you want to run it in a CI environment, potentially alongside something like https://github.com/garethr/kubeval to make sure your configuration is sound.
  • Like the other kubernetes-deploy commands, kubernetes-render requires the $REVISION environment variable to be set, and will make it available as current_sha in your ERB templates.

Usage

To render all templates in your template dir, run:

kubernetes-render --template-dir=./path/to/template/dir

To render some templates in a template dir, run kubernetes-render with the names of the templates to render:

kubernetes-render --template-dir=./path/to/template/dir this-template.yaml.erb that-template.yaml.erb

To render a template in a template dir and output it to a file, run kubernetes-render with the name of the template and redirect the output to a file:

kubernetes-render --template-dir=./path/to/template/dir template.yaml.erb > template.yaml
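Because kubernetes-render writes the rendered YAML to standard output (as the redirection above shows), you can also pipe it into a validation tool such as kubeval, assuming you have it installed and it reads manifests from STDIN:

kubernetes-render --template-dir=./path/to/template/dir template.yaml.erb | kubeval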

Options:

  • --template-dir=DIR: Used to set the directory to interpret template names relative to. This is often the same directory passed as --template-dir when running kubernetes-deploy to actually deploy templates. Set $ENVIRONMENT instead to use config/deploy/$ENVIRONMENT. This flag also supports reading from STDIN. You can do this by using --template-dir=-.
  • --bindings=BINDINGS: Makes additional variables available to your ERB templates. For example, kubernetes-render --bindings=color=blue,size=large some-template.yaml.erb will expose color and size to some-template.yaml.erb.

Contributing

We :heart: contributors! To make it easier for you and us, we've written a Contributing Guide.

You can also reach out to us on our Slack channel, #krane, at https://kubernetes.slack.com. All are welcome!

Code of Conduct

Everyone is expected to follow our Code of Conduct.

License

The gem is available as open source under the terms of the MIT License.