For ASM instances below version 1.18.0.124, ASM only supports exporting trace data to a self-managed system that is compatible with the Zipkin protocol. For version 1.18.0.124 and above, ASM only supports exporting trace data to a self-managed OpenTelemetry system. This topic describes how to export ASM trace data to self-managed Zipkin or OpenTelemetry.
Prerequisites
The self-managed system supports the standard Zipkin protocol and listens on the standard Zipkin port 9411. If you use Jaeger, you must deploy a Zipkin Collector.
The self-managed system is deployed in a cluster on the data plane.
A Kubernetes cluster has been added to the ASM instance. For details, see Add a cluster to an ASM instance.
An ingress gateway has been deployed for the ASM instance. For details, see Create an ingress gateway.
Procedure
Choose the procedure that matches your instance version.
ASM instances of version 1.18.0.124 and above
Step 1: Deploy Zipkin
Run the following command to create the zipkin namespace, which is used to deploy Zipkin.
kubectl create namespace zipkin
Run the following command to install Zipkin with Helm.
helm install --namespace zipkin my-zipkin carlosjgp/zipkin --version 0.2.0
Run the following command to check whether Zipkin is running properly.
kubectl -n zipkin get pods
Expected output:
NAME                                   READY   STATUS    RESTARTS   AGE
my-zipkin-collector-79c6dc9cd7-jmswm   1/1     Running   0          29m
my-zipkin-ui-64c97b4d6c-f742j          1/1     Running   0          29m
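The Helm release above creates a Service named my-zipkin-collector in the zipkin namespace, and the OpenTelemetry Collector configured in Step 3 sends spans to that Service's cluster-local address. As a sketch (the Service name assumes the my-zipkin-collector release installed above), the span-ingestion URL can be assembled like this:

```shell
# Build the in-cluster Zipkin span-ingestion URL that collector.yaml in Step 3 points at.
ZIPKIN_SVC=my-zipkin-collector   # Service created by the Helm release above
ZIPKIN_NS=zipkin
ZIPKIN_URL="http://${ZIPKIN_SVC}.${ZIPKIN_NS}.svc.cluster.local:9411/api/v2/spans"
echo "$ZIPKIN_URL"
```

/api/v2/spans is Zipkin's standard HTTP ingestion endpoint; any exporter that speaks the Zipkin v2 protocol can POST spans to it.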
Step 2: Deploy the OpenTelemetry Operator
Run the following command to create the opentelemetry-operator-system namespace.
kubectl create namespace opentelemetry-operator-system
Run the following commands to install the OpenTelemetry Operator in the opentelemetry-operator-system namespace with Helm.
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install --namespace=opentelemetry-operator-system \
  --set admissionWebhooks.certManager.enabled=false \
  --set admissionWebhooks.certManager.autoGenerateCert=true \
  opentelemetry-operator open-telemetry/opentelemetry-operator
Run the following command to check whether opentelemetry-operator is running properly.
kubectl get pod -n opentelemetry-operator-system
Expected output:
NAME                                      READY   STATUS    RESTARTS   AGE
opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          1m
A STATUS of Running indicates that opentelemetry-operator is running properly.
Step 3: Create an OpenTelemetry Collector
Create a file named collector.yaml with the following content.
Replace ${ENDPOINT} in the YAML with the VPC endpoint for the gRPC protocol, and ${TOKEN} with the authentication token. For how to obtain the endpoint and authentication token of Alibaba Cloud Managed Service for OpenTelemetry, see the access and authentication documentation.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  labels:
    app.kubernetes.io/managed-by: opentelemetry-operator
  name: default
  namespace: opentelemetry-operator-system
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  config: |
    extensions:
      memory_ballast:
        size_mib: 512
      zpages:
        endpoint: 0.0.0.0:55679
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    exporters:
      debug:
      zipkin:
        endpoint: http://my-zipkin-collector.zipkin.svc.cluster.local:9411/api/v2/spans
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [zipkin, debug]
  ingress:
    route: {}
  managementState: managed
  mode: deployment
  observability:
    metrics: {}
  podDisruptionBudget:
    maxUnavailable: 1
  replicas: 1
  resources: {}
  targetAllocator:
    prometheusCR:
      scrapeInterval: 30s
    resources: {}
  upgradeStrategy: automatic
Using the kubeconfig of the ACK cluster, run the following command to deploy the collector to the cluster.
kubectl apply -f collector.yaml
Run the following command to check whether the collector has started properly.
kubectl get pod -n opentelemetry-operator-system
Expected output:
NAME                                      READY   STATUS    RESTARTS   AGE
opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          3m
default-collector-5cbb4497f4-2hjqv        1/1     Running   0          30s
The expected output indicates that the collector has started properly.
Run the following command to check whether the Services have been created.
kubectl get svc -n opentelemetry-operator-system
Expected output:
NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
opentelemetry-operator           ClusterIP   172.16.138.165   <none>        8443/TCP,8080/TCP   3m
opentelemetry-operator-webhook   ClusterIP   172.16.127.0     <none>        443/TCP             3m
default-collector                ClusterIP   172.16.145.93    <none>        4317/TCP            30s
default-collector-headless       ClusterIP   None             <none>        4317/TCP            30s
default-collector-monitoring     ClusterIP   172.16.136.5     <none>        8888/TCP            30s
The expected output indicates that the Services have been created.
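The default-collector Service in the listing above exposes the OTLP gRPC receiver on port 4317. Workloads (or the mesh's tracing configuration) export spans to the Service's cluster-local DNS name; as a sketch, with the Service and namespace names taken from this guide's defaults:

```shell
# Assemble the OTLP gRPC endpoint exposed by the default-collector Service.
SVC=default-collector
NS=opentelemetry-operator-system
OTLP_ENDPOINT="${SVC}.${NS}.svc.cluster.local:4317"
echo "$OTLP_ENDPOINT"
```

This is the address an OTLP exporter inside the cluster would be pointed at (gRPC endpoints are usually given without an http:// scheme).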
Step 4: Deploy the test applications
Deploy the bookinfo and sleep applications. For details, see Deploy an application in a cluster added to an ASM instance.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
---
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
    service: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      terminationGracePeriodSeconds: 0
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: curl:8.1.2
        command: ["/bin/sleep", "infinity"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/sleep/tls
          name: secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: sleep-secret
          optional: true
---
Step 5: Access the application and view the reported trace data
Run the following command to access the productpage application.
kubectl exec -it deploy/sleep -c sleep -- curl productpage:9080/productpage?u=normal
After the access succeeds, check the OpenTelemetry Collector logs for the output printed by the debug exporter.
2023-11-20T08:44:27.531Z info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 3}
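To confirm that spans are arriving without opening the Zipkin UI, you can pull the span count out of the debug exporter's TracesExporter lines. A sketch, using the sample log line above as embedded input (against a real cluster you would pipe in `kubectl -n opentelemetry-operator-system logs deploy/default-collector` instead; the deployment name assumes the default collector from Step 3):

```shell
# Extract the span count from a debug-exporter log line.
LOG='2023-11-20T08:44:27.531Z info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 3}'
# The greedy leading .* anchors the match on the last "spans" key,
# skipping past the "resource spans" field.
SPANS=$(printf '%s\n' "$LOG" | sed -n 's/.*"spans": \([0-9]*\).*/\1/p')
echo "$SPANS"   # → 3
```

A non-zero count here means the pipeline from sidecar to collector is working end to end.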
Step 6: Configure the ASM gateway to view the reported trace data in the Zipkin UI
Create the gateway rules.
Create a file named ingressgateway.yaml with the following content.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingressgateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ingressgateway
  namespace: istio-system
spec:
  gateways:
  - ingressgateway
  hosts:
  - '*'
  http:
  - route:
    - destination:
        host: my-zipkin-collector.zipkin.svc.cluster.local
        port:
          number: 9411
Using the kubeconfig of the ASM instance, run the following command to create a port 80 listener on the ASM gateway and a route to the Zipkin Service.
kubectl apply -f ingressgateway.yaml
Access the Zipkin Service through the gateway address to view the reported trace data.
ASM instances of versions below 1.18.0.124
Step 1: Enable tracing for the mesh instance
For ASM instances below version 1.17.2.28: Log on to the ASM console. On the Base Information page of the target instance, click Feature Settings, select Enable Tracing, configure as needed, and then click OK.
For ASM instances of version 1.17.2.28 and above: Enable tracing as described in the tracing settings documentation.
Step 2: Deploy Zipkin in the data-plane cluster
Create a file named zipkin-server.yaml with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zipkin-server
  namespace: istio-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zipkin-server
      component: zipkin
  template:
    metadata:
      labels:
        app: zipkin-server
        component: zipkin
    spec:
      containers:
      - name: zipkin-server
        image: openzipkin/zipkin
        imagePullPolicy: IfNotPresent
        readinessProbe:
          httpGet:
            path: /health
            port: 9411
          initialDelaySeconds: 5
          periodSeconds: 5
Note: If you deploy a tracing system from a YAML file that you prepared yourself, make sure the Deployment is in the istio-system namespace.
Run the following command to apply the configuration to the data-plane cluster.
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f zipkin-server.yaml
Note: In the commands in this topic, replace ${DATA_PLANE_KUBECONFIG} with the path to the kubeconfig file of the data-plane cluster, and ${ASM_KUBECONFIG} with the path to the kubeconfig file of the mesh instance. After the deployment completes, confirm that the Zipkin Server pod has started properly.
Step 3: Create a Service to expose the Zipkin Server
You need to create a Service named zipkin in the istio-system namespace to receive tracing data from ASM. The Service you create must be named zipkin.
To expose Zipkin to the Internet, use zipkin-svc-expose-public.yaml; if you do not want to expose it, use zipkin-svc.yaml. For easier viewing of trace data, this topic uses zipkin-svc-expose-public.yaml to expose the Zipkin Server on a public port.
Create a YAML file from whichever of the following matches your needs.
To expose Zipkin to the Internet, use zipkin-svc-expose-public.yaml.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tracing
    component: zipkin
  name: zipkin
  namespace: istio-system
spec:
  ports:
  - name: zipkin
    port: 9411
    protocol: TCP
    targetPort: 9411
  selector:
    app: zipkin-server
    component: zipkin
  type: LoadBalancer
If you do not want to expose Zipkin to the Internet, use zipkin-svc.yaml.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tracing
    component: zipkin
  name: zipkin
  namespace: istio-system
spec:
  ports:
  - name: zipkin
    port: 9411
    protocol: TCP
    targetPort: 9411
  selector:
    app: zipkin-server
    component: zipkin
  type: ClusterIP
Note: If you deploy the Service from a YAML file that you prepared yourself, make sure the Service is in the istio-system namespace.
Run one of the following commands to apply the Zipkin Service to the data-plane cluster.
# Deploy the internal Zipkin Service.
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f zipkin-svc.yaml
# Deploy the publicly accessible Zipkin Service.
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f zipkin-svc-expose-public.yaml
Step 4: Deploy the Bookinfo test application
Run the following command to deploy the Bookinfo application to the data-plane cluster.
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f bookinfo.yaml
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
      - name: details
        image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
      - name: ratings
        image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
      volumes:
      - name: tmp
        emptyDir: {}
---
Run the following command with kubectl to deploy the VirtualServices for the Bookinfo application.
kubectl --kubeconfig=${ASM_KUBECONFIG} apply -f virtual-service-all-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
spec:
  hosts:
  - details
  http:
  - route:
    - destination:
        host: details
        subset: v1
---
Run the following command with kubectl to deploy the DestinationRules for the Bookinfo application.
kubectl --kubeconfig=${ASM_KUBECONFIG} apply -f destination-rule-all.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v2-mysql
    labels:
      version: v2-mysql
  - name: v2-mysql-vm
    labels:
      version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: details
spec:
  host: details
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
Run the following command with kubectl to deploy the Gateway for the Bookinfo application.
kubectl --kubeconfig=${ASM_KUBECONFIG} apply -f bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
Step 5: Generate trace data
Run the following command to obtain the ingress gateway address.
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} get svc -n istio-system|grep ingressgateway|awk -F ' ' '{print $4}'
Access the Bookinfo application at <ingress gateway address>/productpage.
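Accessing the page a few times produces enough spans to inspect. A sketch with a placeholder address (replace GATEWAY_IP with the EXTERNAL-IP printed by the command above; the curl loop is commented out so the snippet also runs without a cluster):

```shell
GATEWAY_IP=203.0.113.10   # placeholder; use the EXTERNAL-IP from the command above
URL="http://${GATEWAY_IP}/productpage"
echo "$URL"
# Generate a handful of traces once GATEWAY_IP points at a real gateway:
# for i in 1 2 3 4 5; do curl -s -o /dev/null "$URL"; done
```

By default, tracing is sampled, so not every request necessarily produces a trace; repeating the request makes traces easier to find.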
Step 6: View the trace data
Run the following command to obtain the Zipkin Service address.
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} get svc -n istio-system|grep zipkin|awk -F ' ' '{print $4}'
Access the Zipkin console at <Zipkin Service address>:9411 to view the trace data.