Kubernetes rollback entire cluster state

Kubernetes provides easy tools for rolling out and rolling back changes to Deployments and DaemonSets. However, Deployments are often tightly associated with other Kubernetes primitives such as Secrets or Services, and I'd like to know how to do the same for those, since they directly affect the running state of the app/cluster as well. For example, if I change some ports in my Service or change a Secrets-based environment variable (then restart my pods), I may break something and want to roll back the configuration to the previous version. How can I do ...
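One approach answers commonly suggest, since kubectl keeps no revision history for Secrets or Services, is to snapshot the live objects before editing them and re-apply the snapshot to roll back. A minimal sketch; the resource names are hypothetical:

    # Snapshot the current state before making changes (hypothetical names)
    kubectl get service my-service -o yaml > my-service.v1.yaml
    kubectl get secret my-secret -o yaml > my-secret.v1.yaml

    # ...edit the Service / Secret, restart pods, discover the breakage...

    # Roll back by re-applying the saved manifests
    kubectl apply -f my-service.v1.yaml
    kubectl apply -f my-secret.v1.yaml

Keeping such manifests in version control gives roughly the rollback story that kubectl rollout undo gives for Deployments.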

kubernetes - How to delete a pod in OpenShift with restart policy set to always?

Could be a basic one. I have created a pod in OpenShift Enterprise 3.2 with the configuration set as restartPolicy: Always. So ideally, when the pod is destroyed, OpenShift will make sure to re-run/re-create the pod because of the restart policy. Now I no longer need the pod, but when I try to destroy it, it gets created again. My question is: what is the ideal way to destroy a pod with restartPolicy: Always...
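A pod that keeps reappearing is usually being re-created by an owning replication controller or deployment config rather than by restartPolicy itself, so the usual advice is to delete or scale down the owner. A hedged sketch with hypothetical names:

    # Find what owns the pod
    oc describe pod mypod | grep -i controll

    # Delete the owner; its pods are removed with it
    oc delete dc mydc        # or: oc delete rc myrc

    # Or keep the owner around but scale it to zero
    oc scale dc mydc --replicas=0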

kubernetes - Incorrect reporting of container memory usage by cAdvisor

cAdvisor reports 14 GB for the memory used by Prometheus, whereas top reports 6.x GB. Can someone explain this discrepancy? The documentation for container_memory_usage_bytes says "Current memory usage in bytes, including all memory regardless of when it was accessed", but it's not clear what this refers to; I assume it's virtual memory size? As reported by cAdvisor:

    core@ip-172-20-100-148 ~ $ curl -q localhost:4194/metrics | grep container_memory_usage_bytes | grep prometheus
    container_memory_usage_bytes{container_name="prometheus",id="/docker/d37...
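The gap is typically explained by page cache: container_memory_usage_bytes counts cached file pages charged to the cgroup, which top's RES column does not, while container_memory_working_set_bytes (usage minus inactive file cache) is closer to what the OOM killer watches. A quick way to compare the two against the same endpoint:

    # Total cgroup usage, including page cache
    curl -s localhost:4194/metrics | grep 'container_memory_usage_bytes{container_name="prometheus"'
    # Working set (usage minus inactive file cache), closer to what top reports
    curl -s localhost:4194/metrics | grep 'container_memory_working_set_bytes{container_name="prometheus"'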

kubernetes - Kubelet's cAdvisor metrics endpoint does not reliably return all metrics

I am having an issue with cAdvisor where not all metrics are reliably returned when I query its metrics endpoint. Specifically, querying container_fs_limit_bytes{device=~"^/dev/.*$",id="/",kubernetes_io_hostname=~"^.*"} through Prometheus often displays results for only a fraction of the nodes in my Kubernetes cluster. This happens when the corresponding metrics have not been scraped for over 5 minutes (because the metrics become stale), but I'm not sure why all metrics are not displayed every time the endpoint is queried successfully. Curlin...
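To separate a flaky exporter from Prometheus staleness, it can help to hit the endpoint repeatedly and count how often the series actually appears in the raw response. A rough sketch; the node name and port are hypothetical:

    # Query cAdvisor 10 times and count occurrences of the metric in each response
    for i in $(seq 1 10); do
      curl -s http://node1:4194/metrics | grep -c '^container_fs_limit_bytes'
      sleep 1
    done

If the count fluctuates here too, the gap is on the cAdvisor side rather than in Prometheus.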

kubernetes - Deploying an ephemeral volume-on-demand and sharing that among pods

I would like to achieve the following functionality:

- when a given pod (let's call it the application pod) is deployed on a node, another pod providing an ephemeral volume is deployed before it, if such a "volume pod" does not yet exist on the target node
- the number of application pods can be scaled up and down, and all application pods on the same node share the single volume pod

The first requirement assumes a kind of dependency definition among pods (just like it can be done among Marathon apps in the case of Marathon). The second requirement assumes that...
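The closest built-in fit for the "volume pod" part is a DaemonSet, which runs exactly one such pod per node, with application pods on the same node sharing a hostPath directory it manages. A minimal sketch, assuming hypothetical names and paths and the extensions/v1beta1 API of that era:

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: volume-provider
    spec:
      template:
        metadata:
          labels:
            app: volume-provider
        spec:
          containers:
          - name: provider
            image: busybox
            command: ["sh", "-c", "mkdir -p /data/shared && while true; do sleep 3600; done"]
            volumeMounts:
            - name: shared
              mountPath: /data/shared
          volumes:
          - name: shared
            hostPath:
              path: /var/lib/shared-ephemeral

Application pods then mount the same hostPath, so every pod on a node sees that node's single shared volume.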

Kubernetes pod distribution amongst nodes

Is there any way to make Kubernetes distribute pods as much as possible? I have "Requests" on all deployments, and global Requests as well as HPA. All nodes are the same. I just had a situation where my ASG scaled down a node and one service became completely unavailable, because all 4 of its pods were on the same node, which was scaled down. I would like to maintain a situation where each deployment must spread its containers across at least 2 nodes....
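The standard answer is pod anti-affinity: telling the scheduler to avoid placing replicas that share a label on the same node. A sketch of the soft ("preferred") form inside a Deployment's pod template; the app label is hypothetical:

    spec:
      template:
        spec:
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app: my-service
                  topologyKey: kubernetes.io/hostname

Switching to requiredDuringSchedulingIgnoredDuringExecution makes the spread mandatory, at the cost of pods staying Pending when no second node fits.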

kubernetes - How to force Pods/Deployments to Master nodes?

I've set up a Kubernetes 1.5 cluster with the three master nodes tainted dedicated=master:NoSchedule. Now I want to deploy the Nginx Ingress Controller on the master nodes only, so I've added tolerations:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            k8s-app: nginx-ingress-lb
            name: nginx-ingress-lb
          annotations:
            scheduler.alpha.kubernetes.io/tolerations: ...
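Tolerations only allow pods onto the tainted masters; nothing yet forces them there, so the other half of the usual answer is a nodeSelector (or node affinity) matching a label that only the masters carry. A sketch in the same annotation-era format; the label is an assumption:

    spec:
      replicas: 3
      template:
        metadata:
          annotations:
            scheduler.alpha.kubernetes.io/tolerations: |
              [{"key": "dedicated", "value": "master", "effect": "NoSchedule"}]
        spec:
          nodeSelector:
            dedicated: master   # assumes the masters are labeled dedicated=master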

kubernetes - Force some pods to schedule on the same node

I have a sandbox Kubernetes cluster in which I shut down all pods at night, so it can scale down with the cluster-autoscaler add-on. The problem is that it almost always keeps the master plus 2 nodes running. Looking into the cluster-autoscaler logs, I see the problem seems to be this:

    Fast evaluation: node ip-172-16-38-51.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: dns-controller-3586597043-531v5
    Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod ...
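With the cluster-autoscaler of that period, non-mirrored kube-system pods block scale-down unless they are covered by a PodDisruptionBudget, so one commonly suggested fix is giving pods like dns-controller a permissive PDB. A hedged sketch; the label selector must be checked against the actual pod:

    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: dns-controller-pdb
      namespace: kube-system
    spec:
      minAvailable: 0              # allow eviction at any time
      selector:
        matchLabels:
          k8s-app: dns-controller  # hypothetical; verify with: kubectl get pods -n kube-system --show-labels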

google cloud platform - Kubernetes: how to change accessModes of auto scaled pod to ReadOnlyMany?

I'm trying HPA: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

PV:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: api-orientdb-pv
      labels:
        app: api-orientdb
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      gcePersistentDisk:
        pdName: api-orientdb-{{ .Values.cluster.name | default "testing" }}
        fsType: ext4

PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: api-orientdb-pv-claim
      labels:
        app: api
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
...
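For reference, a GCE persistent disk can be attached read-only by many nodes at once, so the ReadOnlyMany variant is mostly a matter of changing accessModes on both objects and mounting the claim read-only in the pod template. A sketch of just the changed fields:

    # In both the PV and the PVC
    spec:
      accessModes:
        - ReadOnlyMany

    # In the pod template that consumes the claim
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: api-orientdb-pv-claim
        readOnly: true

Note that a disk cannot be writable and ReadOnlyMany at the same time; something else has to populate it before the read-only consumers mount it.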

kubernetes - Any benefits from scaling up Deployment's Pods on same Node in Kubernetes

Consider a Deployment that runs 1 Pod containing a NodeJS container with no specified resource limits. My Kubernetes cluster consists of 3 Nodes and runs different applications; the 2 Nodes running applications other than NodeJS are experiencing a steady high load (i.e. CPU utilization > 80%), making scheduling new Pods onto those Nodes ineffective.

    | Pod:A    |  | Pod:A    |  | Pod:NodeJS    |
    | Pod:B    |  | Pod:B    |  |               |
    |----------|  |----------|  |---------------|
    | CPU 85%  |  | CPU 85%  |  | CPU 60%       |
    | Mem: 80% |  ...
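Whatever the placement decision, answers tend to start with giving the NodeJS container explicit requests (and limits), because without them the scheduler cannot judge whether a second replica still fits on the quieter node, and the pods run as best-effort. A sketch with made-up values:

    containers:
    - name: nodejs
      image: node:8            # hypothetical image/tag
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: 500m
          memory: 512Mi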

rancher - Multiple Host Kubernetes Ingress Controller

I've been studying Kubernetes for a few weeks now, and using the kube-lego NGINX examples (https://github.com/jetstack/kube-lego) I have successfully deployed services to a Kubernetes cluster using Rancher on DigitalOcean. I've deployed sample static sites, Wordpress, Laravel, Craft CMS, etc., all of which use custom Namespaces, Deployments, Secrets, Containers with external registries, Services, and Ingress definitions. Using the example (lego) NGINX Ingress Controller setup, I'm able to apply DNS to the exposed IP address of my K8s cluster, and have ...
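A single NGINX ingress controller can front many hosts: each site declares its own host rule in its Ingress, and kube-lego watches the kubernetes.io/tls-acme annotation to provision a certificate per host. A sketch with hypothetical hostnames and service names:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: site1
      annotations:
        kubernetes.io/tls-acme: "true"     # picked up by kube-lego
    spec:
      tls:
      - hosts:
        - site1.example.com
        secretName: site1-example-com-tls
      rules:
      - host: site1.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: site1-svc
              servicePort: 80

Pointing DNS for every such host at the controller's single exposed IP is enough; the controller routes by the Host header.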

kubernetes - Helm chart template evaluating expression incorrectly

I am setting some properties in a ConfigMap based on the values of some flags. To achieve this I am using if/else conditions in my configmap.yaml. These if conditions work fine when there is only one expression inside the if block, but multiple expressions combined with "or" and "and" are evaluated incorrectly. configmap.yaml:

    {{- else if (eq .Values.A "oracle") and (eq .Values.B "true") or (eq .Values.A "postgresql") }}

The above condition is being evaluated to false, however it was supposed to evaluate to true because the last con...
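The underlying issue is that Go templates (which Helm uses) have no infix and/or operators: and and or are ordinary functions called prefix-style, so an expression written infix does not mean what it appears to mean. Assuming the intent is (A == "oracle" AND B == "true") OR A == "postgresql", the condition would be written:

    {{- else if or (and (eq .Values.A "oracle") (eq .Values.B "true")) (eq .Values.A "postgresql") }}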