<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Managed (Tenant) Kubernetes on Cozystack</title><link>https://deploy-preview-533--cozystack.netlify.app/docs/next/kubernetes/</link><description>Recent content in Managed (Tenant) Kubernetes on Cozystack</description><generator>Hugo</generator><language>en</language><atom:link href="https://deploy-preview-533--cozystack.netlify.app/docs/next/kubernetes/index.xml" rel="self" type="application/rss+xml"/><item><title>GPU Sharing with HAMi</title><link>https://deploy-preview-533--cozystack.netlify.app/docs/next/kubernetes/gpu-sharing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-533--cozystack.netlify.app/docs/next/kubernetes/gpu-sharing/</guid><description>&lt;p&gt;
&lt;a href="https://github.com/Project-HAMi/HAMi" target="_blank"&gt;HAMi&lt;/a&gt; (Heterogeneous AI Computing Virtualization Middleware) is a CNCF Sandbox project that enables fractional GPU sharing in Kubernetes. Instead of dedicating an entire GPU to a single workload, HAMi lets containers request specific amounts of GPU memory and compute cores.&lt;/p&gt;
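&lt;p&gt;As a sketch, a container in a tenant cluster could request a GPU slice through HAMi's extended resource names (&lt;code&gt;nvidia.com/gpumem&lt;/code&gt; in MiB, &lt;code&gt;nvidia.com/gpucores&lt;/code&gt; as a percentage); the pod name, image, and values below are illustrative:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-shared-workload      # hypothetical name
spec:
  containers:
  - name: cuda-app
    image: nvidia/cuda:12.4.0-base-ubuntu22.04
    resources:
      limits:
        nvidia.com/gpu: 1        # number of vGPUs requested
        nvidia.com/gpumem: 4096  # GPU memory for this container, in MiB
        nvidia.com/gpucores: 30  # share of GPU compute cores, in percent
```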


&lt;div class="alert alert-info" role="alert"&gt;


 This guide covers GPU sharing for &lt;strong&gt;containers in tenant Kubernetes clusters&lt;/strong&gt;. For GPU passthrough to virtual machines on the management cluster, see 
&lt;a href="https://deploy-preview-533--cozystack.netlify.app/docs/next/virtualization/gpu/"&gt;GPU Passthrough&lt;/a&gt;.

&lt;/div&gt;

&lt;h2 id="how-it-works"&gt;How it works&lt;/h2&gt;
&lt;p&gt;HAMi sits between the Kubernetes scheduler and the NVIDIA GPU driver:&lt;/p&gt;</description></item><item><title>Backups with the Velero addon</title><link>https://deploy-preview-533--cozystack.netlify.app/docs/next/kubernetes/backups-with-velero-addon/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-533--cozystack.netlify.app/docs/next/kubernetes/backups-with-velero-addon/</guid><description>&lt;p&gt;The &lt;code&gt;velero&lt;/code&gt; addon of the 
&lt;a href="https://deploy-preview-533--cozystack.netlify.app/docs/next/kubernetes/" target="_blank"&gt;Managed Kubernetes&lt;/a&gt; application installs 
&lt;a href="https://velero.io/" target="_blank"&gt;Velero&lt;/a&gt; inside a tenant Kubernetes cluster. Combined with a tenant 
&lt;a href="https://deploy-preview-533--cozystack.netlify.app/docs/next/operations/services/object-storage/buckets/" target="_blank"&gt;Bucket&lt;/a&gt;, it lets tenant users back up workloads to S3 and restore them later.&lt;/p&gt;
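&lt;p&gt;For illustration, a tenant user could describe a backup declaratively with a Velero &lt;code&gt;Backup&lt;/code&gt; resource; the names and namespaces below are placeholders, not values from this guide:&lt;/p&gt;

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: app-backup           # hypothetical backup name
  namespace: velero          # namespace where Velero is installed
spec:
  includedNamespaces:
    - my-app                 # hypothetical workload namespace to back up
  ttl: 720h0m0s              # retain the backup for 30 days
```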


&lt;div class="alert alert-info" role="alert"&gt;


 &lt;p&gt;This guide is for the &lt;strong&gt;tenant-side&lt;/strong&gt; Velero addon, which runs inside a tenant Kubernetes cluster and is operated by the tenant user.&lt;/p&gt;
&lt;p&gt;For the platform-level Velero used by cluster administrators to back up &lt;code&gt;VMInstance&lt;/code&gt;/&lt;code&gt;VMDisk&lt;/code&gt; resources from the management cluster, see 
&lt;a href="https://deploy-preview-533--cozystack.netlify.app/docs/next/operations/services/velero-backup-configuration/" target="_blank"&gt;Velero Backup Configuration&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>How to relocate etcd replicas in tenant clusters</title><link>https://deploy-preview-533--cozystack.netlify.app/docs/next/kubernetes/relocate-etcd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-533--cozystack.netlify.app/docs/next/kubernetes/relocate-etcd/</guid><description>&lt;p&gt;Tenant Kubernetes clusters are using their own etcd clusters, not the one that is used by the management cluster.
Such etcd clusters are deployed in tenants and are available to managed Kubernetes clusters deployed in the tenant and its sub-tenants.&lt;/p&gt;
&lt;p&gt;Replicas of a tenant etcd cluster can be relocated between nodes for maintenance.
Management operations for tenant etcd clusters are not yet automated,
so this task must be performed manually.&lt;/p&gt;</description></item></channel></rss>