
Build a hybrid elastic container cluster (elastic ECS)


A hybrid cluster is a container cluster formed by connecting a self-managed Kubernetes cluster in an on-premises data center to an Alibaba Cloud registered cluster. It lets you scale out the self-managed Kubernetes cluster with cloud compute nodes and manage compute resources both on and off the cloud. This topic uses a self-managed IDC Kubernetes cluster that runs the Calico network plugin as an example to describe how to create a hybrid cluster.

Prerequisites

  • The network of the self-managed Kubernetes cluster in your on-premises data center is interconnected with the virtual private cloud (VPC) used by the registered cluster, covering both the compute node network and the container network. You can use Cloud Enterprise Network (CEN) to connect the networks. For more information, see Getting started overview.

  • The target cluster must be connected to the registered cluster by using the private-network cluster import agent configuration provided by the registered cluster.

  • The cloud compute nodes added by scaling out the registered cluster can reach the API server of the self-managed Kubernetes cluster in the on-premises data center.

  • The registered cluster is already connected through kubectl. For more information, see Obtain the cluster kubeconfig and use kubectl to connect to the cluster.
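To sanity-check the API server reachability prerequisite, you can probe the IDC API server's /healthz endpoint from a cloud ECS node once connectivity is in place. A minimal sketch; the host and port below are illustrative placeholders, not values from this topic:

```shell
#!/bin/bash
# Placeholder endpoint -- substitute your own IDC API server address and port.
APISERVER_HOST="10.200.1.253"
APISERVER_PORT="6443"

# /healthz returns "ok" when the API server is reachable and healthy.
CHECK="curl -sk --max-time 5 https://${APISERVER_HOST}:${APISERVER_PORT}/healthz"
echo "Run on a cloud ECS node: ${CHECK}"
```

If the probe prints ok from a cloud ECS node, that node can join the IDC control plane.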

Architecture of the hybrid elastic container cluster

Because self-managed Kubernetes clusters most commonly run Calico in routing mode, this topic uses a self-managed IDC cluster with Calico in routing mode as an example. For the container network on the cloud side, we recommend the network component tailored to the cloud platform; Alibaba Cloud container services uniformly manage the container network with the Terway component. The following figure shows how the hybrid cluster is networked.

(Figure: network topology of the hybrid cluster)

In this example, the private network CIDR block of the on-premises data center is 192.168.0.0/24 and its container network CIDR block is 10.100.0.0/16, with the Calico network plugin running in route reflector mode. On the cloud side, the VPC CIDR block is 10.0.0.0/8, the vSwitch CIDR block for compute nodes is 10.10.24.0/24, and the vSwitch CIDR block for container pods is 10.10.25.0/24, with the Terway network component running in shared ENI mode.

Use an ACK registered cluster to build a hybrid container cluster

  1. Configure the container network plugins on the cloud side and the on-premises side.

    In a hybrid cluster, the on-premises Calico network plugin must run only on on-premises nodes, and the Terway network component must run only on cloud nodes.

    ECS nodes added by scaling out the ACK registered cluster automatically receive the node label alibabacloud.com/external=true. To keep the Calico pods in the IDC running only on on-premises nodes, add a node affinity configuration for them, for example:

    cat <<EOF > calico-ds.patch
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: alibabacloud.com/external
                    operator: NotIn
                    values:
                    - "true"
                  - key: type
                    operator: NotIn
                    values:
                    - "virtual-kubelet"
    EOF
    kubectl -n kube-system patch ds calico-node -p "$(cat calico-ds.patch)"
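Before patching, you can sanity-check the generated calico-ds.patch locally; the commented kubectl commands then apply it and confirm the affinity on the live DaemonSet. A sketch that regenerates the same patch file as above:

```shell
#!/bin/bash
set -e

# Recreate the patch from the step above (identical content).
cat <<'EOF' > calico-ds.patch
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: alibabacloud.com/external
                operator: NotIn
                values:
                - "true"
              - key: type
                operator: NotIn
                values:
                - "virtual-kubelet"
EOF

# Sanity-check the patch before applying it.
grep -q 'alibabacloud.com/external' calico-ds.patch
grep -q 'virtual-kubelet' calico-ds.patch
echo "patch looks sane"

# Apply, then confirm the affinity took effect (requires cluster access):
# kubectl -n kube-system patch ds calico-node -p "$(cat calico-ds.patch)"
# kubectl -n kube-system get ds calico-node \
#   -o jsonpath='{.spec.template.spec.affinity.nodeAffinity}'
```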
  2. Grant RAM permissions to the Terway plugin.

    Configure by using onectl

    1. Install and configure onectl on your on-premises machine. For more information, see Use onectl to manage registered clusters.

    2. Run the following command to grant RAM permissions to the Terway plugin.

      onectl ram-user grant --addon terway-eniip

      Expected output:

      Ram policy ack-one-registered-cluster-policy-terway-eniip granted to ram user ack-one-user-ce313528c3 successfully.

    Configure by using the console

    Grant the RAM permissions required by the AccessKey that the Terway network component uses. The policy content is shown below. For more information, see Grant permissions to a RAM user.

    {
        "Version": "1",
        "Statement": [
            {
                "Action": [
                    "ecs:CreateNetworkInterface",
                    "ecs:DescribeNetworkInterfaces",
                    "ecs:AttachNetworkInterface",
                    "ecs:DetachNetworkInterface",
                    "ecs:DeleteNetworkInterface",
                    "ecs:DescribeInstanceAttribute",
                    "ecs:AssignPrivateIpAddresses",
                    "ecs:UnassignPrivateIpAddresses",
                    "ecs:DescribeInstances",
                    "ecs:ModifyNetworkInterfaceAttribute"
                ],
                "Resource": [
                    "*"
                ],
                "Effect": "Allow"
            },
            {
                "Action": [
                    "vpc:DescribeVSwitches"
                ],
                "Resource": [
                    "*"
                ],
                "Effect": "Allow"
            }
        ]
    }
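Before pasting the policy into the RAM console, it can be worth validating the JSON locally. A minimal sketch, assuming python3 is available; the file name terway-policy.json is arbitrary:

```shell
#!/bin/bash
set -e

# Write the Terway RAM policy from the step above to a local file.
cat <<'EOF' > terway-policy.json
{
    "Version": "1",
    "Statement": [
        {
            "Action": [
                "ecs:CreateNetworkInterface",
                "ecs:DescribeNetworkInterfaces",
                "ecs:AttachNetworkInterface",
                "ecs:DetachNetworkInterface",
                "ecs:DeleteNetworkInterface",
                "ecs:DescribeInstanceAttribute",
                "ecs:AssignPrivateIpAddresses",
                "ecs:UnassignPrivateIpAddresses",
                "ecs:DescribeInstances",
                "ecs:ModifyNetworkInterfaceAttribute"
            ],
            "Resource": ["*"],
            "Effect": "Allow"
        },
        {
            "Action": ["vpc:DescribeVSwitches"],
            "Resource": ["*"],
            "Effect": "Allow"
        }
    ]
}
EOF

# A JSON syntax error here would be rejected by the RAM console anyway;
# catching it locally saves a round trip.
python3 -m json.tool terway-policy.json > /dev/null && echo "policy is valid JSON"
```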
  3. Install the Terway plugin.

    Install by using onectl

    Run the following command to install the Terway plugin.

    onectl addon install terway-eniip

    Expected output:

    Addon terway-eniip, version **** installed.

    Install by using the console

    1. Log on to the Container Service console. In the left-side navigation pane, choose Clusters.

    2. On the Clusters page, click the name of the target cluster. In the left-side navigation pane, choose Operations > Add-ons.

    3. On the Add-ons page, search for the terway-eniip component, click Install in the lower-right corner of the component, and then click OK.

  4. After connecting to the registered cluster through kubectl, run the following command in the registered cluster to view the DaemonSet of the Terway network component.

    Until the hybrid cluster is scaled out with cloud nodes, Terway is not scheduled to any on-premises node.

    kubectl -n kube-system get ds | grep terway

    Expected output:

    terway-eniip   0         0         0       0            0           alibabacloud.com/external=true      16s

    The output shows that Terway pods run only on cloud ECS nodes that carry the alibabacloud.com/external=true label.

  5. Run the following command to edit the eni-config ConfigMap and set eni_conf.access_key and eni_conf.access_secret.

    kubectl -n kube-system edit cm eni-config

    An example eni-config is shown below:

    kind: ConfigMap
    apiVersion: v1
    metadata:
     name: eni-config
     namespace: kube-system
    data:
     eni_conf: |
      {
       "version": "1",
       "max_pool_size": 5,
       "min_pool_size": 0,
       "vswitches": {"AZoneID":["VswitchId"]}, 
       "eni_tags": {"ack.aliyun.com":"{{.ClusterID}}"},
       "service_cidr": "{{.ServiceCIDR}}",
       "security_group": "{{.SecurityGroupId}}",
       "access_key": "",
       "access_secret": "",
       "vswitch_selection_policy": "ordered"
      }
     10-terway.conf: |
      {
       "cniVersion": "0.3.0",
       "name": "terway",
       "type": "terway"
      }
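As a non-interactive alternative to kubectl edit, you can render the eni_conf value locally, verify that it is valid JSON, and only then write it into the ConfigMap. The AccessKey pair below is an illustrative placeholder, and the Go-template placeholders such as {{.ClusterID}} are kept verbatim as in the example above:

```shell
#!/bin/bash
set -e

# Illustrative credentials -- substitute the RAM user's real AccessKey pair.
ACCESS_KEY="LTAI****"
ACCESS_SECRET="****"

cat <<EOF > eni_conf.json
{
 "version": "1",
 "max_pool_size": 5,
 "min_pool_size": 0,
 "vswitches": {"AZoneID":["VswitchId"]},
 "eni_tags": {"ack.aliyun.com":"{{.ClusterID}}"},
 "service_cidr": "{{.ServiceCIDR}}",
 "security_group": "{{.SecurityGroupId}}",
 "access_key": "${ACCESS_KEY}",
 "access_secret": "${ACCESS_SECRET}",
 "vswitch_selection_policy": "ordered"
}
EOF

# Confirm the rendered value parses before touching the ConfigMap.
python3 -m json.tool eni_conf.json > /dev/null && echo "eni_conf is valid JSON"

# Then write it into the ConfigMap (requires cluster access):
# kubectl -n kube-system patch cm eni-config --type merge \
#   -p "{\"data\":{\"eni_conf\": $(python3 -c 'import json,sys; print(json.dumps(sys.stdin.read()))' < eni_conf.json)}}"
```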
  6. Configure the custom node initialization script.

    1. Adapt the original node initialization script of the self-managed Kubernetes cluster.

      Take a self-managed IDC Kubernetes cluster initialized with kubeadm as an example. The original script init-node.sh that adds a new node to the cluster in the IDC looks like this.


      #!/bin/bash
      
      export K8S_VERSION=1.24.3
      
      export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
      cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
      net.bridge.bridge-nf-call-iptables  = 1
      net.ipv4.ip_forward                 = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      sysctl --system
      yum remove -y containerd.io
      yum install -y yum-utils device-mapper-persistent-data lvm2
      yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      yum install -y containerd.io-1.4.3
      mkdir -p /etc/containerd
      containerd config default > /etc/containerd/config.toml
      sed -i "s#k8s.gcr.io#registry.aliyuncs.com/k8sxio#g"  /etc/containerd/config.toml
      sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
      sed -i "s#https://registry-1.docker.io#${REGISTRY_MIRROR}#g"  /etc/containerd/config.toml
      systemctl daemon-reload
      systemctl enable containerd
      systemctl restart containerd
      yum install -y nfs-utils
      yum install -y wget
      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
             http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      yum remove -y kubelet kubeadm kubectl
      yum install -y kubelet-$K8S_VERSION kubeadm-$K8S_VERSION kubectl-$K8S_VERSION
      crictl config runtime-endpoint /run/containerd/containerd.sock
      systemctl daemon-reload
      systemctl enable kubelet && systemctl start kubelet
      containerd --version
      kubelet --version
      
      kubeadm join 10.200.1.253:XXXX --token cqgql5.1mdcjcvhszol**** --discovery-token-unsafe-skip-ca-verification

      The custom node initialization script init-node-ecs.sh that you configure in the ACK registered cluster is simply init-node.sh extended to receive and apply the environment variables delivered by the registered cluster: ALIBABA_CLOUD_PROVIDER_ID, ALIBABA_CLOUD_NODE_NAME, ALIBABA_CLOUD_LABELS, and ALIBABA_CLOUD_TAINTS. An example init-node-ecs.sh is shown below.


      #!/bin/bash
      
      export K8S_VERSION=1.24.3
      
      export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
      cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
      net.bridge.bridge-nf-call-iptables  = 1
      net.ipv4.ip_forward                 = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      sysctl --system
      yum remove -y containerd.io
      yum install -y yum-utils device-mapper-persistent-data lvm2
      yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      yum install -y containerd.io-1.4.3
      mkdir -p /etc/containerd
      containerd config default > /etc/containerd/config.toml
      sed -i "s#k8s.gcr.io#registry.aliyuncs.com/k8sxio#g"  /etc/containerd/config.toml
      sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
      sed -i "s#https://registry-1.docker.io#${REGISTRY_MIRROR}#g"  /etc/containerd/config.toml
      systemctl daemon-reload
      systemctl enable containerd
      systemctl restart containerd
      yum install -y nfs-utils
      yum install -y wget
      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
             http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      yum remove -y kubelet kubeadm kubectl
      yum install -y kubelet-$K8S_VERSION kubeadm-$K8S_VERSION kubectl-$K8S_VERSION
      crictl config runtime-endpoint /run/containerd/containerd.sock
      systemctl daemon-reload
      systemctl enable kubelet && systemctl start kubelet
      containerd --version
      kubelet --version
      
      ####### <Begin added section
      # Configure node labels, taints, node name, and provider ID
      #KUBEADM_CONFIG_FILE="/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf"
      KUBELET_CONFIG_FILE="/etc/sysconfig/kubelet"
      #KUBELET_CONFIG_FILE="/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
      if [[ $ALIBABA_CLOUD_LABELS != "" ]];then
        option="--node-labels"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      if [[ $ALIBABA_CLOUD_TAINTS != "" ]];then
        option="--register-with-taints"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_TAINTS},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_TAINTS}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      if [[ $ALIBABA_CLOUD_NODE_NAME != "" ]];then
        option="--hostname-override"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_NODE_NAME},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_NODE_NAME}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      if [[ $ALIBABA_CLOUD_PROVIDER_ID != "" ]];then
        option="--provider-id"
        if grep -- "${option}=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_PROVIDER_ID},@g" $KUBELET_CONFIG_FILE
        elif grep "KUBELET_EXTRA_ARGS=" $KUBELET_CONFIG_FILE &> /dev/null;then
          sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID} @g" $KUBELET_CONFIG_FILE
        else
          sed -i "/^\[Service\]/a\Environment=\"KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_PROVIDER_ID}\"" $KUBELET_CONFIG_FILE
        fi
      fi
      
      # Reload systemd and restart kubelet
      systemctl daemon-reload
      systemctl enable kubelet && systemctl start kubelet
      
      ####### End added section>
      
      kubeadm join 10.200.1.253:XXXX --token cqgql5.1mdcjcvhszol**** --discovery-token-unsafe-skip-ca-verification
    2. Save and configure the custom script.

      Save the custom script on an HTTP file server, for example, in an OSS bucket. An example address is https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh.

      Set the addNodeScriptPath field to the path of the custom node addition script, https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh, and save the configuration, as shown below:

      apiVersion: v1
      data:
        addNodeScriptPath: https://kubelet-****.oss-cn-hangzhou-internal.aliyuncs.com/init-node-ecs.sh
      kind: ConfigMap
      metadata:
        name: ack-agent-config
        namespace: kube-system

    After you complete the preceding configuration, you can create node pools in the target ACK registered cluster and scale out ECS nodes.
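The sed-based flag injection used in the added section of init-node-ecs.sh can be exercised on a throwaway copy of the kubelet config file to confirm how it rewrites KUBELET_EXTRA_ARGS. A sketch with a hypothetical label value:

```shell
#!/bin/bash
set -e

# A throwaway stand-in for /etc/sysconfig/kubelet with no extra args yet.
KUBELET_CONFIG_FILE="$(mktemp)"
echo 'KUBELET_EXTRA_ARGS=' > "$KUBELET_CONFIG_FILE"

# Hypothetical value; in the real flow the registered cluster delivers it.
ALIBABA_CLOUD_LABELS="alibabacloud.com/external=true,workload=burst"

# Same branch logic as init-node-ecs.sh: append to an existing flag if one
# is already present, otherwise extend the KUBELET_EXTRA_ARGS= line.
option="--node-labels"
if grep -- "${option}=" "$KUBELET_CONFIG_FILE" &> /dev/null; then
  sed -i "s@${option}=@${option}=${ALIBABA_CLOUD_LABELS},@g" "$KUBELET_CONFIG_FILE"
elif grep "KUBELET_EXTRA_ARGS=" "$KUBELET_CONFIG_FILE" &> /dev/null; then
  sed -i "s@KUBELET_EXTRA_ARGS=@KUBELET_EXTRA_ARGS=${option}=${ALIBABA_CLOUD_LABELS} @g" "$KUBELET_CONFIG_FILE"
fi

cat "$KUBELET_CONFIG_FILE"
# -> KUBELET_EXTRA_ARGS=--node-labels=alibabacloud.com/external=true,workload=burst
#    (plus a trailing space left by the substitution)
```

The same pattern applies to --register-with-taints, --hostname-override, and --provider-id in the script above.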

  7. Create a node pool and scale out ECS nodes.

    1. Log on to the Container Service console. In the left-side navigation pane, choose Clusters.

    2. On the Clusters page, click the name of the target cluster. In the left-side navigation pane, choose Nodes > Node Pools.

    3. On the Node Pools page, create a node pool and scale out nodes as needed. For more information, see Create a node pool.

References

Plan the container network for Terway. For more information, see Plan the network of a Kubernetes cluster.

Connect the on-premises Kubernetes network to the VPC on the cloud. For more information, see Features.

Create a registered cluster and connect the self-managed Kubernetes cluster in the on-premises data center. For more information, see Create a registered cluster in the console.