
Configure automatic expansion for local storage volumes

In an ACK cluster, if you need to store temporary data with pronounced peak-and-valley (bursty) usage on ECS local disks or cloud disks, you can create and configure local storage volumes. Local volumes are scheduled based on the local storage capacity of each node, and a volume is automatically expanded when its usage reaches the configured threshold.

Introduction to local storage


The local storage architecture consists of the following three components.

  • Node Resource Manager: Maintains the initialization lifecycle of local storage. Taking VolumeGroup as an example, when you declare the composition of a VolumeGroup in a ConfigMap, Node Resource Manager initializes the local storage on the current node according to that definition.

  • CSI Plugin: Maintains the lifecycle of local storage volumes. Taking LVM as an example, when you create a PVC that uses a VolumeGroup, the CSI Plugin automatically creates a Logical Volume and binds it to the PVC.

  • Storage Auto Expander: Manages automatic expansion of local storage volumes. When monitoring detects that a local volume is running low on capacity, Storage Auto Expander automatically expands the volume.

For more information about local storage volumes, see Local storage volume overview.

Prerequisites

Limits

  • Local storage can currently be mounted through HostPath, LocalVolume, LVM, memory, and similar mount methods.
  • Local storage is not a highly available storage type. It is suitable only for saving temporary data or for applications that provide their own high availability.
  • LVM local volumes do not support migrating data across nodes and are not suitable for high-availability scenarios.

Step 1: Initialize local storage

When the Node Resource Manager component initializes local storage, it reads a ConfigMap to manage the local storage of all nodes in the cluster and automatically creates the VolumeGroups and QuotaPaths for each node.

  1. Run the following command to define local storage volumes in a ConfigMap in the cluster.

    Replace cn-zhangjiakou.192.168.XX.XX with the name of an actual node as needed.


    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: node-resource-topo
      namespace: kube-system
    data:
      volumegroup: |-
        volumegroup:
        - name: volumegroup1
          key: kubernetes.io/hostname
          operator: In
          value: cn-zhangjiakou.192.168.XX.XX
          topology:
            type: device
            devices:
            - /dev/vdb
            - /dev/vdc
      quotapath: |-
        quotapath:
        - name: /mnt/path1
          key: kubernetes.io/hostname
          operator: In
          value: cn-zhangjiakou.192.168.XX.XX
          topology:
            type: device
            options: prjquota
            fstype: ext4
            devices:
            - /dev/vdd
    EOF

    The preceding ConfigMap defines the following two local storage resources on the cn-zhangjiakou.192.168.XX.XX node.

    • VolumeGroup resource: named volumegroup1 and composed of the /dev/vdb and /dev/vdc block devices on the host.

    • QuotaPath resource: backed by the /dev/vdd block device, which is formatted and mounted at /mnt/path1. Only one device can be declared here.

  2. Run the following command to deploy the Node Resource Manager component.


    cat <<EOF | kubectl apply -f -
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: node-resource-manager
      namespace: kube-system
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: node-resource-manager
    rules:
      - apiGroups: [""]
        resources: ["configmaps"]
        verbs: ["get", "watch", "list", "delete", "update", "create"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: node-resource-manager-binding
    subjects:
      - kind: ServiceAccount
        name: node-resource-manager
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: node-resource-manager
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: DaemonSet
    apiVersion: apps/v1
    metadata:
      name: node-resource-manager
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: node-resource-manager
      template:
        metadata:
          labels:
            app: node-resource-manager
        spec:
          tolerations:
            - operator: "Exists"
          priorityClassName: system-node-critical
          serviceAccountName: node-resource-manager
          hostNetwork: true
          hostPID: true
          containers:
            - name: node-resource-manager
              securityContext:
                privileged: true
                capabilities:
                  add: ["SYS_ADMIN"]
                allowPrivilegeEscalation: true
              image: registry.cn-hangzhou.aliyuncs.com/acs/node-resource-manager:v1.18.8.0-5b1bdc2-aliyun
              imagePullPolicy: "Always"
              args:
                - "--nodeid=$(KUBE_NODE_NAME)"
              env:
                - name: KUBE_NODE_NAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
              volumeMounts:
                - mountPath: /dev
                  mountPropagation: "HostToContainer"
                  name: host-dev
                - mountPath: /var/log/
                  name: host-log
                - name: etc
                  mountPath: /host/etc
                - name: config
                  mountPath: /etc/unified-config
          volumes:
            - name: host-dev
              hostPath:
                path: /dev
            - name: host-log
              hostPath:
                path: /var/log/
            - name: etc
              hostPath:
                path: /etc
            - name: config
              configMap:
                name: node-resource-topo
    EOF
  3. Run the following command to install the csi-plugin component.

    The csi-plugin component manages the full lifecycle of volumes, including creating, mounting, unmounting, deleting, and expanding them. A short verification sketch follows after this step.


    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: localplugin.csi.alibabacloud.com
    spec:
      attachRequired: false
      podInfoOnMount: true
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      labels:
        app: csi-local-plugin
      name: csi-local-plugin
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: csi-local-plugin
      template:
        metadata:
          labels:
            app: csi-local-plugin
        spec:
          containers:
          - args:
            - --v=5
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
            env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            image: registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar:v1.2.0
            imagePullPolicy: Always
            name: driver-registrar
            volumeMounts:
            - mountPath: /csi
              name: plugin-dir
            - mountPath: /registration
              name: registration-dir
          - args:
            - --endpoint=$(CSI_ENDPOINT)
            - --v=5
            - --nodeid=$(KUBE_NODE_NAME)
            - --driver=localplugin.csi.alibabacloud.com
            env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: SERVICE_PORT
              value: "11290"
            - name: CSI_ENDPOINT
              value: unix://var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
            image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.14.8-40ee9518-local
            imagePullPolicy: Always
            name: csi-localplugin
            securityContext:
              allowPrivilegeEscalation: true
              capabilities:
                add:
                - SYS_ADMIN
              privileged: true
            volumeMounts:
            - mountPath: /var/lib/kubelet
              mountPropagation: Bidirectional
              name: pods-mount-dir
            - mountPath: /dev
              mountPropagation: HostToContainer
              name: host-dev
            - mountPath: /var/log/
              name: host-log
            - mountPath: /mnt
              mountPropagation: Bidirectional
              name: quota-path-dir
          hostNetwork: true
          hostPID: true
          serviceAccount: csi-admin
          tolerations:
          - operator: Exists
          volumes:
          - hostPath:
              path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
              type: DirectoryOrCreate
            name: plugin-dir
          - hostPath:
              path: /var/lib/kubelet/plugins_registry
              type: DirectoryOrCreate
            name: registration-dir
          - hostPath:
              path: /var/lib/kubelet
              type: Directory
            name: pods-mount-dir
          - hostPath:
              path: /dev
              type: ""
            name: host-dev
          - hostPath:
              path: /var/log/
              type: ""
            name: host-log
          - hostPath:
              path: /mnt
              type: Directory
            name: quota-path-dir
      updateStrategy:
        rollingUpdate:
          maxUnavailable: 10%
        type: RollingUpdate
    EOF
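
    After both DaemonSets are deployed, you can run a quick sanity check. The following commands are only a minimal sketch: the label selectors match the manifests above, volumegroup1 and /mnt/path1 come from the node-resource-topo ConfigMap, and the last two commands assume you have SSH access to the cn-zhangjiakou.192.168.XX.XX node.

    # Confirm that both components are running on the target node.
    kubectl -n kube-system get pods -l app=node-resource-manager -o wide
    kubectl -n kube-system get pods -l app=csi-local-plugin -o wide

    # On the node itself (via SSH), confirm that the resources declared in
    # the node-resource-topo ConfigMap were created.
    vgs volumegroup1          # the VolumeGroup built from /dev/vdb and /dev/vdc
    mount | grep /mnt/path1   # the ext4 quota path backed by /dev/vdd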

Step 2: Create an application that uses a local storage volume

Create an application that uses an LVM local volume

  1. Run the following command to create a StorageClass.

    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
        name: csi-local
    provisioner: localplugin.csi.alibabacloud.com
    parameters:
        volumeType: LVM
        vgName: volumegroup1
        fsType: ext4
        lvmType: "striping"
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    EOF

    parameters.vgName is volumegroup1, the VolumeGroup name defined in the node-resource-topo ConfigMap. For more information, see LVM volumes.

  2. Run the following command to create a PVC.

    cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-pvc
      annotations:
        volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: csi-local
    EOF

    The annotation volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX indicates that Pods that later reference this PVC will be scheduled to the cn-zhangjiakou.192.168.XX.XX node, which is the node where the VolumeGroup resource defined in the ConfigMap resides.

  3. Run the following command to create a sample application.


    cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deployment-lvm
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            volumeMounts:
              - name: lvm-pvc
                mountPath: "/data"
          volumes:
            - name: lvm-pvc
              persistentVolumeClaim:
                claimName: lvm-pvc
    EOF

    After the application starts, the /data directory in the container will have the 2 GiB capacity declared in the PVC, as the verification sketch below shows.
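
    The following commands are a minimal verification sketch; they assume a recent kubectl that accepts the deploy/<name> form for exec (otherwise substitute the name of one of the Deployment's Pods).

    # The PVC should be Bound to a PV provisioned by localplugin.csi.alibabacloud.com.
    kubectl get pvc lvm-pvc

    # Check the mounted capacity of /data from inside the Pod.
    kubectl exec deploy/deployment-lvm -- df -h /data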

Create an application that uses a QuotaPath local volume

  1. Run the following command to create a StorageClass.

    cat << EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-local-quota
    parameters:
      volumeType: QuotaPath
      rootPath: /mnt/path1
    provisioner: localplugin.csi.alibabacloud.com
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    EOF

    parameters.rootPath is /mnt/path1, the name of the QuotaPath resource defined in the node-resource-topo ConfigMap. For more information, see QuotaPath volumes.

  2. Run the following command to create a PVC.

    cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-quota
      annotations:
        volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX
      labels:
        app: web-quota
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: alicloud-local-quota
    EOF

    The annotation volume.kubernetes.io/selected-node: cn-zhangjiakou.192.168.XX.XX indicates that Pods that later reference this PVC will be scheduled to the cn-zhangjiakou.192.168.XX.XX node, which is the node where the QuotaPath resource defined in the ConfigMap resides.

  3. Run the following command to create a sample application.


    cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web-quota
    spec:
      selector:
        matchLabels:
          app: nginx
      serviceName: "nginx"
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            volumeMounts:
            - name: disk-ssd
              mountPath: /data
          volumes:
            - name: "disk-ssd"
              persistentVolumeClaim:
                claimName: csi-quota
    EOF

    After the application starts, the /data directory in the container will have the 2 GiB capacity declared in the PVC; see the verification sketch below.
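
    The following commands are a minimal verification sketch; web-quota-0 assumes the StatefulSet is running with its default single replica.

    # The PVC should be Bound to a QuotaPath volume on the selected node.
    kubectl get pvc csi-quota

    # /data inside the Pod should report the 2 GiB declared in the PVC.
    kubectl exec web-quota-0 -- df -h /data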

Step 3: Automatically expand local storage volumes

  1. Install the auto-expansion plug-in. For more information, see Use storage-operator to deploy and upgrade storage components.

    1. Run the following command to edit the storage-operator configuration.

      kubectl edit cm storage-operator -n kube-system

      In the editor, set the storage-auto-expander section as follows to enable its installation:

        storage-auto-expander: |
          # deploy config
          install: true
          imageTag: "v1.18.8.1-d4301ee-aliyun"
    2. Run the following command to check whether the auto-expansion component has started.

      kubectl get pod -n kube-system |grep storage-auto-expander

      Expected output:

      storage-auto-expander-6bb575b68c-tt4hh               1/1     Running     0          2m41s
  2. Run the following command to configure an auto-expansion policy.

    cat << EOF | kubectl apply -f -
    apiVersion: storage.alibabacloud.com/v1alpha1
    kind: StorageAutoScalerPolicy
    metadata:
      name: hybrid-expand-policy
    spec:
      pvcSelector:
        matchLabels:
          app: web-quota
      namespaces:
        - default
      conditions:
        - name: condition1
          key: volume-capacity-used-percentage
          operator: Gt
          values:
            - "80"
      actions:
        - name: action
          type: volume-expand
          params:
            scale: 50%
            limits: 15Gi
    EOF

    As the preceding template shows, a volume is expanded once its usage exceeds 80%; each expansion adds 50% of the current capacity, up to a maximum of 15 GiB. The sketch below shows one way to trigger and observe an expansion.
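
    To see the policy in action, you can deliberately push usage past the 80% threshold and watch the PVC grow. This is only an illustrative sketch: it assumes the web-quota StatefulSet from Step 2 is running as web-quota-0 and that its csi-quota PVC carries the app: web-quota label matched by the policy above.

    # Write about 1.7 GiB into /data to exceed 80% of the 2 GiB volume.
    kubectl exec web-quota-0 -- dd if=/dev/zero of=/data/fill bs=1M count=1700

    # Watch the PVC; after the policy triggers, the requested capacity should
    # grow by 50% per expansion, up to the 15Gi limit.
    kubectl get pvc csi-quota -w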

Related documents