kubebuilder Custom Resources


GitHub repo. I had been reading about K8s custom resources online for a long time, but it stopped at reading; I never actually practiced it myself. This article is mainly based on another blog post, which I have simplified: all I want is for the outside world to reach my service via nodeIP + port, and for the resources underneath to share a unified lifecycle.

1. Use kubebuilder to scaffold a custom resource

For installing kubebuilder, see my earlier blog post.

1. Create a new folder under GOPATH/src, cd into it, and generate the scaffolding for the custom resource: the controller, the types, and the webhook files.

```shell
[root@master src]# mkdir servicemanager
[root@master src]# cd servicemanager/
[root@master servicemanager]# kubebuilder init --domain servicemanager.io
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.5.0
Update go.mod:
$ go mod tidy
Running make:
$ make
/usr/local/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
Next: define a resource with:
$ kubebuilder create api
[root@master servicemanager]# kubebuilder create api --group servicemanager --version v1 --kind ServiceManager
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing scaffold for you to edit...
api/v1/servicemanager_types.go
controllers/servicemanager_controller.go
Running make:
$ make
/usr/local/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
[root@master servicemanager]# kubebuilder create webhook --group servicemanager --version v1 --kind ServiceManager --defaulting --programmatic-validation
Writing scaffold for you to edit...
api/v1/servicemanager_webhook.go
```

The generated directory structure looks like this:

```
.
├── api
│   └── v1
│       ├── groupversion_info.go        // GVK info and the scheme-builder helpers live here
│       ├── servicemanager_types.go     // the custom CRD types; the file we need to edit
│       ├── servicemanager_webhook.go   // webhook-related code
│       └── zz_generated.deepcopy.go    // generated deepcopy methods
├── bin
│   └── manager                         // the compiled Go binary
├── config       // everything that eventually gets kubectl-applied, split by function; some parts can be customized
│   ├── certmanager
│   │   ├── certificate.yaml
│   │   ├── kustomization.yaml
│   │   └── kustomizeconfig.yaml
│   ├── crd                             // the CRD configuration
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_servicemanagers.yaml
│   │       └── webhook_in_servicemanagers.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   ├── manager_webhook_patch.yaml
│   │   └── webhookcainjection_patch.yaml
│   ├── manager                         // the manager Deployment lives here
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus                      // metrics exposure
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac                            // RBAC authorization
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── servicemanager_editor_role.yaml
│   │   └── servicemanager_viewer_role.yaml
│   ├── samples                         // a minimal sample yaml for the custom resource
│   │   └── servicemanager_v1_servicemanager.yaml
│   └── webhook                         // the webhook Service that receives requests forwarded by the APIServer
│       ├── kustomization.yaml
│       ├── kustomizeconfig.yaml
│       └── service.yaml
├── controllers
│   ├── servicemanager_controller.go    // the core CRD controller logic lives here
│   └── suite_test.go
├── Dockerfile                          // Dockerfile for building the crd-controller image
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go                             // program entry point
├── Makefile                            // make targets
└── PROJECT                             // project metadata
```
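For orientation, here is roughly the wiring the scaffold puts into main.go: the reconciler and the webhook are both registered with the manager before it starts. This is an abridged sketch of the kubebuilder v2 scaffold (flag parsing and metrics options omitted), so treat the details as approximate:

```go
func main() {
	// create the manager that hosts the controller, the webhook server, caches, etc.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}
	// register the controller
	if err = (&controllers.ServiceManagerReconciler{
		Client: mgr.GetClient(),
		Log:    ctrl.Log.WithName("controllers").WithName("ServiceManager"),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "ServiceManager")
		os.Exit(1)
	}
	// register the webhook
	if err = (&servicemanagerv1.ServiceManager{}).SetupWebhookWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create webhook", "webhook", "ServiceManager")
		os.Exit(1)
	}
	setupLog.Info("starting manager")
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}
```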

2. Edit servicemanager_types.go

```go
type ServiceManagerSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Category has exactly two possible values: Deployment or Statefulset.
	// The marker below restricts the field to those values.
	// +kubebuilder:validation:Enum=Deployment;Statefulset
	Category string `json:"category,omitempty"`

	// label selector
	Selector map[string]string `json:"selector,omitempty"`

	// pod template used by the owned statefulset/deployment
	Template corev1.PodTemplateSpec `json:"template,omitempty"`

	// replica count, at most 10
	// +kubebuilder:validation:Maximum=10
	Replicas *int32 `json:"replicas,omitempty"`

	// service port, must not exceed 65535
	// +kubebuilder:validation:Maximum=65535
	Port *int32 `json:"port,omitempty"`

	// +kubebuilder:validation:Maximum=65535
	Targetport *int32 `json:"targetport,omitempty"`
}

// ServiceManagerStatus defines the observed state of ServiceManager
type ServiceManagerStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
	Replicas         int32                   `json:"replicas,omitempty"`
	LastUpdateTime   metav1.Time             `json:"last_update_time,omitempty"`
	DeploymentStatus appsv1.DeploymentStatus `json:"deployment_status,omitempty"`
	ServiceStatus    corev1.ServiceStatus    `json:"service_status,omitempty"`
}

// Here Spec and Status are both plain members of ServiceManager. Status is not
// automatically a subresource the way Pod.Status is; without the marker below,
// calling Status().Update() in the controller panics and the apiserver reports:
// "the server could not find the requested resource".
// To follow the K8s status-subresource convention, add:
// kubebuilder:subresource:status
// Then users may only set the spec of a CRD instance,
// and the status is changed only by the controller.

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:selectorpath=.spec.selector,specpath=.spec.replicas,statuspath=.status.replicas

// ServiceManager is the Schema for the servicemanagers API
type ServiceManager struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ServiceManagerSpec   `json:"spec,omitempty"`
	Status ServiceManagerStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// ServiceManagerList contains a list of ServiceManager
type ServiceManagerList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []ServiceManager `json:"items"`
}
```

`// +kubebuilder:subresource:status` must be added; otherwise updating the resource status fails with a "resource not found" error.
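To make the note concrete, here is a minimal sketch (mine, not from the original post; updateStatus is a hypothetical helper) of the status write in question, assuming the scaffolded reconciler:

```go
// updateStatus illustrates the call that requires the marker: with
// +kubebuilder:subresource:status present, Status().Update() targets the
// /status subresource and succeeds; without it the apiserver answers
// "the server could not find the requested resource".
func (r *ServiceManagerReconciler) updateStatus(ctx context.Context, sm *servicemanagerv1.ServiceManager) error {
	newSM := sm.DeepCopy()
	newSM.Status.LastUpdateTime = metav1.Now()
	if newSM.Spec.Replicas != nil {
		newSM.Status.Replicas = *newSM.Spec.Replicas
	}
	return r.Status().Update(ctx, newSM)
}
```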

3. Edit servicemanager_controller.go

Define an interface for the owned (child) resources: instantiating the concrete object, checking whether it already exists, syncing its status back into the custom resource, and applying (creating or updating) it.

```go
type OwnResource interface {
	// build the concrete owned resource object
	MakeOwnResource(instance *servicemanagerv1.ServiceManager, logger logr.Logger, scheme *runtime.Scheme) (interface{}, error)
	// check whether the owned resource already exists
	OwnResourceExist(instance *servicemanagerv1.ServiceManager, client client.Client, logger logr.Logger) (bool, interface{}, error)
	// read the owned resource's status and update the custom resource's status
	UpdateOwnerResources(instance *servicemanagerv1.ServiceManager, client client.Client, logger logr.Logger) error
	// create or update the owned resource
	ApplyOwnResource(instance *servicemanagerv1.ServiceManager, client client.Client, logger logr.Logger, scheme *runtime.Scheme) error
}
```

A struct then implements these four methods; take the Service as an example:

```go
type OwnService struct {
	Port *int32
}

// build the concrete Service object
func (ownService *OwnService) MakeOwnResource(instance *ServiceManager, logger logr.Logger, scheme *runtime.Scheme) (interface{}, error) {
	var label = map[string]string{
		"app": instance.Name,
	}
	objectMeta := metav1.ObjectMeta{
		Name:      instance.Name,
		Namespace: instance.Namespace,
	}
	servicePort := []corev1.ServicePort{
		{
			TargetPort: intstr.IntOrString{Type: intstr.Int, IntVal: *instance.Spec.Targetport},
			NodePort:   *instance.Spec.Port,
			Port:       *instance.Spec.Port,
		},
	}
	serviceSpec := corev1.ServiceSpec{
		Selector: label,
		Type:     corev1.ServiceTypeNodePort,
		Ports:    servicePort,
	}
	service := &corev1.Service{
		ObjectMeta: objectMeta,
		Spec:       serviceSpec,
	}
	// make the ServiceManager instance the owner of the Service
	if err := controllerutil.SetControllerReference(instance, service, scheme); err != nil {
		msg := fmt.Sprintf("set controllerReference for service %s/%s failed", instance.Namespace, instance.Name)
		logger.Error(err, msg)
		return nil, err
	}
	return service, nil
}

// check whether the Service already exists in the cluster
func (ownService *OwnService) OwnResourceExist(instance *ServiceManager, client client.Client, logger logr.Logger) (bool, interface{}, error) {
	service := &corev1.Service{}
	if err := client.Get(context.Background(), types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, service); err != nil {
		return false, nil, err
	}
	return true, service, nil
}

// read the Service status and write it back into the custom resource's status
func (ownService *OwnService) UpdateOwnerResources(instance *ServiceManager, client client.Client, logger logr.Logger) error {
	service := &corev1.Service{}
	if err := client.Get(context.Background(), types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, service); err != nil {
		logger.Error(err, "Service does not exist!")
		return err
	}
	instance.Status.LastUpdateTime = metav1.Now()
	instance.Status.ServiceStatus = service.Status
	return nil
}

// create or update the Service
func (ownService *OwnService) ApplyOwnResource(instance *ServiceManager, client client.Client, logger logr.Logger, scheme *runtime.Scheme) error {
	// first check whether the Service exists; a NotFound error just means we
	// have to create it, so we deliberately do not return here
	exist, found, err := ownService.OwnResourceExist(instance, client, logger)
	if err != nil {
		logger.Error(err, "Service does not exist yet!")
		// return err
	}
	service, err := ownService.MakeOwnResource(instance, logger, scheme)
	if err != nil {
		logger.Error(err, "failed to build the Service object!")
		return err
	}
	newService, ok := service.(*corev1.Service)
	if !ok {
		logger.Error(err, "failed to cast to *corev1.Service!")
		return err
	}
	if exist {
		// update
		foundService, ok := found.(*corev1.Service)
		if !ok {
			logger.Error(err, "failed to cast to *corev1.Service!")
			return err
		}
		// Pitfall: if spec.clusterIP was not set before creation, the apiserver
		// assigns one and writes it into spec.clusterIP after the Service is
		// created, so it has to be copied over here. Same for SessionAffinity.
		newService.Spec.ClusterIP = foundService.Spec.ClusterIP
		newService.Spec.SessionAffinity = foundService.Spec.SessionAffinity
		newService.ObjectMeta.ResourceVersion = foundService.ObjectMeta.ResourceVersion
		// update the Service only if the specs differ
		if foundService != nil && !reflect.DeepEqual(foundService.Spec, newService.Spec) {
			if err := client.Update(context.Background(), newService); err != nil {
				logger.Error(err, "failed to update the Service!")
				return err
			}
		}
	} else {
		// create
		if err := client.Create(context.Background(), newService); err != nil {
			logger.Error(err, "failed to create the Service!")
			return err
		}
	}
	return nil
}
```
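The Deployment variant is referenced later (getOwnResource builds an OwnDeployment), but its body is not shown in this post. As an assumption of what it looks like, here is a minimal sketch of its MakeOwnResource following the same pattern; the other three methods mirror the Service ones:

```go
type OwnDeployment struct {
	Category string
}

// build the owned Deployment from the ServiceManager spec (sketch)
func (ownDeployment *OwnDeployment) MakeOwnResource(instance *ServiceManager, logger logr.Logger, scheme *runtime.Scheme) (interface{}, error) {
	label := map[string]string{"app": instance.Name}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      instance.Name,
			Namespace: instance.Namespace,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: instance.Spec.Replicas,
			Selector: &metav1.LabelSelector{MatchLabels: label},
			Template: instance.Spec.Template,
		},
	}
	// the pod template labels must match the selector, or the apiserver rejects the Deployment
	if deployment.Spec.Template.Labels == nil {
		deployment.Spec.Template.Labels = label
	}
	// make the ServiceManager the owner, exactly as with the Service
	if err := controllerutil.SetControllerReference(instance, deployment, scheme); err != nil {
		logger.Error(err, fmt.Sprintf("set controllerReference for deployment %s/%s failed", instance.Namespace, instance.Name))
		return nil, err
	}
	return deployment, nil
}
```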

Next, modify the method in the controllers package that performs the actual reconciliation:

```go
// +kubebuilder:rbac:groups=servicemanager.servicemanager.io,resources=servicemanagers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=servicemanager.servicemanager.io,resources=servicemanagers/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=statefulsets,verbs=get;update;patch;delete
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=services,verbs=get;update;patch;delete

func (r *ServiceManagerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()
	logger := r.Log.WithValues("servicemanager", req.NamespacedName)

	serviceManager := &servicemanagerv1.ServiceManager{}
	if err := r.Get(ctx, req.NamespacedName, serviceManager); err != nil {
		logger.Error(err, "failed to fetch the ServiceManager!")
		// a deleted object shows up as NotFound; nothing to do then
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// the instance exists; collect its owned resources
	ownResources, err := r.getOwnResource(serviceManager)
	if err != nil {
		logger.Error(err, "failed to collect the owned resources!")
	}

	var success = true
	for _, ownResource := range ownResources {
		// create or update each owned resource
		if err := ownResource.ApplyOwnResource(serviceManager, r.Client, logger, r.Scheme); err != nil {
			success = false
		}
	}

	// read back the owned resources' status and update the custom resource's status
	newServiceManager := serviceManager.DeepCopy()
	for _, ownResource := range ownResources {
		if err := ownResource.UpdateOwnerResources(newServiceManager, r.Client, logger); err != nil {
			success = false
		}
	}

	// persist the status only if it actually changed
	if newServiceManager != nil && !reflect.DeepEqual(serviceManager.Status, newServiceManager.Status) {
		if err := r.Status().Update(ctx, newServiceManager); err != nil {
			// not handled further here
			r.Log.Error(err, "unable to update ServiceManager status")
		}
	}

	if !success {
		// reconciliation failed; returning the error puts the object back into the workqueue
		logger.Info("failed to sync the owned resources, requeueing the watched object")
		return ctrl.Result{}, err
	}
	logger.Info("owned resources synced successfully!")
	return ctrl.Result{}, nil
}

func (r *ServiceManagerReconciler) getOwnResource(instance *servicemanagerv1.ServiceManager) ([]OwnResource, error) {
	var ownResources []OwnResource
	if instance.Spec.Category == "Deployment" {
		ownDeployment := &servicemanagerv1.OwnDeployment{
			Category: instance.Spec.Category,
		}
		ownResources = append(ownResources, ownDeployment)
	} else {
		// statefulset support is left for later
		/*
			ownStatefulSet := &servicemanagerv1.OwnStatefulSet{
				Spec: appsv1.StatefulSetSpec{
					Replicas:    instance.Spec.Replicas,
					Selector:    instance.Spec.Selector,
					Template:    instance.Spec.Template,
					ServiceName: instance.Name,
				},
			}
			ownResources = append(ownResources, ownStatefulSet)
		*/
	}
	if instance.Spec.Port != nil {
		ownService := &servicemanagerv1.OwnService{
			Port: instance.Spec.Port,
		}
		ownResources = append(ownResources, ownService)
	}
	return ownResources, nil
}
```
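One piece the post relies on implicitly: for the reconcile loop to fire again when an owned Deployment or Service changes, the scaffolded SetupWithManager has to watch them. A sketch; the Owns calls are my addition on top of the scaffold, and they work because SetControllerReference made the ServiceManager the owner:

```go
func (r *ServiceManagerReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&servicemanagerv1.ServiceManager{}).
		Owns(&appsv1.Deployment{}). // reconcile when an owned Deployment changes
		Owns(&corev1.Service{}).    // reconcile when an owned Service changes
		Complete(r)
}
```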

4. Edit servicemanager_webhook.go. This file intercepts requests on their way into the K8s apiserver; it is where you can mutate the object (set defaults) and add validation.

```go
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1

import (
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	logf "sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
)

// log is for logging in this package.
var servicemanagerlog = logf.Log.WithName("servicemanager-resource")

func (r *ServiceManager) SetupWebhookWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(r).
		Complete()
}

// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!

// +kubebuilder:webhook:path=/mutate-servicemanager-servicemanager-io-v1-servicemanager,mutating=true,failurePolicy=fail,groups=servicemanager.servicemanager.io,resources=servicemanagers,verbs=create;update,versions=v1,name=mservicemanager.kb.io

var _ webhook.Defaulter = &ServiceManager{}

// Default implements webhook.Defaulter so a webhook will be registered for the type.
// This method can mutate the object, e.g. to fill in default values.
func (r *ServiceManager) Default() {
	servicemanagerlog.Info("default", "name", r.Name)

	// TODO(user): fill in your defaulting logic.
}

// TODO(user): change verbs to "verbs=create;update;delete" if you want to enable deletion validation.
// +kubebuilder:webhook:verbs=create;update,path=/validate-servicemanager-servicemanager-io-v1-servicemanager,mutating=false,failurePolicy=fail,groups=servicemanager.servicemanager.io,resources=servicemanagers,versions=v1,name=vservicemanager.kb.io

var _ webhook.Validator = &ServiceManager{}

// ValidateCreate implements webhook.Validator so a webhook will be registered for the type.
// The methods below are where validation happens.
func (r *ServiceManager) ValidateCreate() error {
	servicemanagerlog.Info("validate create", "name", r.Name)

	// TODO(user): fill in your validation logic upon object creation.
	return nil
}

// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *ServiceManager) ValidateUpdate(old runtime.Object) error {
	servicemanagerlog.Info("validate update", "name", r.Name)

	// TODO(user): fill in your validation logic upon object update.
	return nil
}

// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (r *ServiceManager) ValidateDelete() error {
	servicemanagerlog.Info("validate delete", "name", r.Name)

	// TODO(user): fill in your validation logic upon object deletion.
	return nil
}
```
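The scaffolded hooks above are empty. As an illustration of what might go into them for this CRD (my own sketch, not from the original post; it replaces the empty bodies and assumes "fmt" is added to the imports): default the replica count in Default(), and tighten the port check in ValidateCreate():

```go
// Default fills in a replica count when none is given (sketch).
func (r *ServiceManager) Default() {
	servicemanagerlog.Info("default", "name", r.Name)
	if r.Spec.Replicas == nil {
		one := int32(1)
		r.Spec.Replicas = &one // assume a single replica when none is given
	}
}

// ValidateCreate rejects ports outside the apiserver's default NodePort
// range; the CRD marker only enforces Maximum=65535 (sketch).
func (r *ServiceManager) ValidateCreate() error {
	servicemanagerlog.Info("validate create", "name", r.Name)
	if r.Spec.Port != nil && (*r.Spec.Port < 30000 || *r.Spec.Port > 32767) {
		return fmt.Errorf("port %d is outside the default NodePort range 30000-32767", *r.Spec.Port)
	}
	return nil
}
```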

Concepts you may bump into here include webhooks and finalizers; look them up if you're interested. As for what the various kubebuilder markers mean, the official docs explain them all very clearly.

The custom CRD is basically done. How do we run it locally?

- Edit config/default/kustomization.yaml and uncomment the indicated sections (the original post marks them with red boxes in screenshots).
- Edit config/crd/kustomization.yaml and, following the descriptions in its comments, uncomment the indicated parts as well.
- Edit the deploy target in the Makefile, replacing the registry with your own:

```shell
export IMAGE="my.registry.com:5000/unit-controller:tmp"
make deploy IMG=${IMAGE}
```

This ultimately produces an all_in_one.yaml file of more than six thousand lines. Two changes are needed: 1. Add a new field under CustomResourceDefinition.spec in the yaml: preserveUnknownFields: false. 2. What needs to change in the MutatingWebhookConfiguration and ValidatingWebhookConfiguration? Let's look at the generated configuration, taking MutatingWebhookConfiguration as the example.

The notes below are copied from the referenced blog, which covers this in detail. There are two places to modify:

- caBundle is currently empty and has to be filled in.
- clientConfig is currently CA-authorized for the Service unit-webhook-service, i.e. requests are forwarded to the deployment's pods; since we want to debug locally, it has to point at the local environment instead.

The following explains how to configure both.

CA certificate issuance. This takes several steps:

1. ca.cert. First extract the K8s CA's ca.cert:

```shell
kubectl config view --raw -o json | jq -r '.clusters[0].cluster."certificate-authority-data"' | tr -d '"' > ca.cert
```

The content of ca.cert can then be pasted into webhooks.clientConfig.caBundle of the MutatingWebhookConfiguration and ValidatingWebhookConfiguration above. (Delete the original placeholder Cg==.)

2. csr. Create the JSON config file for the certificate signing request:

Note that hosts carries two kinds of entries:

- The in-cluster DNS names of the Unit controller's Service, since the controller will eventually run inside K8s.
- The IP address of a NIC on the local development machine, used to connect to the K8s cluster for debugging; this IP must therefore be reachable from the cluster.

```shell
cat > unit-csr.json << EOF
{
  "hosts": [
    "unit-webhook-service.default.svc",
    "unit-webhook-service.default.svc.cluster.local",
    "192.168.254.1"
  ],
  "CN": "unit-webhook-service",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
```

3. Generate the csr and the pem private key:

```shell
[root@vm254011 unit]# cat unit-csr.json | cfssl genkey - | cfssljson -bare unit
2020/05/23 17:44:39 [INFO] generate received request
2020/05/23 17:44:39 [INFO] received CSR
2020/05/23 17:44:39 [INFO] generating key: rsa-2048
2020/05/23 17:44:39 [INFO] encoded CSR
[root@vm254011 unit]#
[root@vm254011 unit]# ls unit*
unit.csr  unit-csr.json  unit-key.pem
```

4. Create the CertificateSigningRequest resource:

```shell
cat > csr.yaml << EOF
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: unit
spec:
  request: $(cat unit.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF

# apply
kubectl apply -f csr.yaml
```

5. Submit this CertificateSigningRequest to the cluster and check its status:

```shell
[root@vm254011 unit]# kubectl apply -f csr.yaml
certificatesigningrequest.certificates.k8s.io/unit created
[root@vm254011 unit]# kubectl describe csr unit
Name:               unit
Labels:             <none>
...
CreationTimestamp:  Sat, 23 May 2020 17:56:14 +0800
Requesting User:    kubernetes-admin
Status:             Pending
Subject:
  Common Name:    unit-webhook-service
  Serial Number:
Subject Alternative Names:
  DNS Names:      unit-webhook-service.default.svc
                  unit-webhook-service.default.svc.cluster.local
  IP Addresses:   192.168.254.1
Events:             <none>
```

It is still Pending, so the request has to be approved:

```shell
[root@vm254011 unit]# kubectl certificate approve unit
certificatesigningrequest.certificates.k8s.io/unit approved
[root@vm254011 unit]#
[root@vm254011 unit]# kubectl get csr unit
NAME   AGE    REQUESTOR          CONDITION
unit   111s   kubernetes-admin   Approved,Issued

# save the client crt file
[root@vm254011 unit]# kubectl get csr unit -o jsonpath='{.status.certificate}' | base64 --decode > unit.crt
```

As you can see, the certificate has now been signed and issued.

To summarize:

- The ca.cert from step 1 goes into the caBundle field.
- The unit-key.pem private key from step 3 and the unit.crt from step 5 are used by the unit controller's HTTPS (webhook) server.

Update the WebhookConfigurations: substitute the certificate material generated above into the WebhookConfiguration sections of all_in_one.yaml. After the replacement:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  creationTimestamp: null
  name: unit-mutating-webhook-configuration
webhooks:
- clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXhNakEzTkRNeE0xb1hEVE13TURVeE1EQTNORE14TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG5CCmRvZVRHNTlYMkZsYXRoN1RhRnYrZ2hjbGxsV0NLbkxuT1hQLzZydE0wdE92U0RCQjV2UVJsNUF0L3BWMEJucmQKZGtyOWRnMWRKSHp1T05WamkxTml6QVdUbWtSbDBKczMrdjFMUzBCY2xLeU5XbWRQM0NNUWl2M1BDbjNISG9rcgoveDZncnFaa3RxeUo2ck5JMXFocmkzbjNLSWFQWFBtYUJIeW1zWCt1UjQyMk1kaGNhU3dBUDQwUktzcUtWcS81CkRodzdHdVZzdFZHNG5GZUZ2dlFuYU1jVm13WUpyellFQWxNRitlSyswM3IyWEFLQUZxQnBEWXBaZlg1Wi9tUEsKVXlxNlIwcEJUaG9adXlwSUhQekwwMkJGazlDbmU3eTBXd1d6L1VleDJSN2toOVJhendNeVVTNlJKYU4wT2hRaQpsTTZyM2lZcnIzVWIxSW1ieE5NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFENHVNaVZpL28zSkVhVi9UZzVKRWhQK2tQZm8KVzBLeUtaT3FNVlZzRVZsM1l2aFdYdGxOaCtwT0ZHSTlPQVFZdE5NKzZDeEJLVm9Xd1NzSUpyYkpZeVR2bGFlYgpHZnJGZWRkL2NkM0N5M2N1UDQ0ZjRPQ3VabTZWckJUVy8wUms3LzVKMHlLTmlSSDVqelRJL0szZGtKWkNERktOCjRGdWZxZ3Y0QTNxdVYwQXJaNFNOV2poVEx2SlM1VVdaOUpxUndyU3NqNlpvenRJRVhiU1d2aWhyS2FGQmtoWWwKRG5KM2N4cFljYXJ0aVZqS1g3SUNQQTJxdmw1azF4ZEMwVldTQWlLdTVFR24zZkFmdkQwN2poeVBub3lkMjVmWApQeDlkaGlzaDgwaFl4Nm9pbHpHdUppMGZDNjgxZ0VRRTQzUGhNRHRCZHNKMTBEejRQYTdrL2QvY3hETT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    url: https://192.168.254.1:9443/mutate-custom-my-crd-com-v1-unit
    # service:
    #   name: unit-webhook-service
    #   namespace: default
    #   path: /mutate-custom-my-crd-com-v1-unit
  failurePolicy: Fail
  name: munit.kb.io
  rules:
  - apiGroups:
    - custom.my.crd.com
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - units
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  creationTimestamp: null
  name: unit-validating-webhook-configuration
webhooks:
- clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXhNakEzTkRNeE0xb1hEVE13TURVeE1EQTNORE14TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG5CCmRvZVRHNTlYMkZsYXRoN1RhRnYrZ2hjbGxsV0NLbkxuT1hQLzZydE0wdE92U0RCQjV2UVJsNUF0L3BWMEJucmQKZGtyOWRnMWRKSHp1T05WamkxTml6QVdUbWtSbDBKczMrdjFMUzBCY2xLeU5XbWRQM0NNUWl2M1BDbjNISG9rcgoveDZncnFaa3RxeUo2ck5JMXFocmkzbjNLSWFQWFBtYUJIeW1zWCt1UjQyMk1kaGNhU3dBUDQwUktzcUtWcS81CkRodzdHdVZzdFZHNG5GZUZ2dlFuYU1jVm13WUpyellFQWxNRitlSyswM3IyWEFLQUZxQnBEWXBaZlg1Wi9tUEsKVXlxNlIwcEJUaG9adXlwSUhQekwwMkJGazlDbmU3eTBXd1d6L1VleDJSN2toOVJhendNeVVTNlJKYU4wT2hRaQpsTTZyM2lZcnIzVWIxSW1ieE5NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFENHVNaVZpL28zSkVhVi9UZzVKRWhQK2tQZm8KVzBLeUtaT3FNVlZzRVZsM1l2aFdYdGxOaCtwT0ZHSTlPQVFZdE5NKzZDeEJLVm9Xd1NzSUpyYkpZeVR2bGFlYgpHZnJGZWRkL2NkM0N5M2N1UDQ0ZjRPQ3VabTZWckJUVy8wUms3LzVKMHlLTmlSSDVqelRJL0szZGtKWkNERktOCjRGdWZxZ3Y0QTNxdVYwQXJaNFNOV2poVEx2SlM1VVdaOUpxUndyU3NqNlpvenRJRVhiU1d2aWhyS2FGQmtoWWwKRG5KM2N4cFljYXJ0aVZqS1g3SUNQQTJxdmw1azF4ZEMwVldTQWlLdTVFR24zZkFmdkQwN2poeVBub3lkMjVmWApQeDlkaGlzaDgwaFl4Nm9pbHpHdUppMGZDNjgxZ0VRRTQzUGhNRHRCZHNKMTBEejRQYTdrL2QvY3hETT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    url: https://192.168.254.1:9443/validate-custom-my-crd-com-v1-unit
    # service:
    #   name: unit-webhook-service
    #   namespace: default
    #   path: /validate-custom-my-crd-com-v1-unit
  failurePolicy: Fail
  name: vunit.kb.io
  rules:
  - apiGroups:
    - custom.my.crd.com
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - units
```

Note: the IP address in url must be the local development machine's IP, and that IP must be able to communicate with the K8s cluster; the URI path stays the same as the commented-out service path.
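For the locally-running controller to actually serve this URL with the issued certificate, the manager's webhook server has to be pointed at the key pair. A sketch of the relevant main.go options, assuming controller-runtime v0.5; the webhook server expects the files to be named tls.crt and tls.key inside CertDir, so copy unit.crt and unit-key.pem there under those names (the path below is hypothetical):

```go
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	Scheme:  scheme,
	Port:    9443,                   // must match the port in the webhook url above
	CertDir: "/path/to/local/certs", // hypothetical dir holding tls.crt / tls.key
})
if err != nil {
	setupLog.Error(err, "unable to start manager")
	os.Exit(1)
}
```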

Once both WebhookConfigurations are modified, the next step is to apply the all_in_one.yaml file. Since the controller will run locally for debugging at this stage, remember to comment out the Deployment resource in all_in_one.yaml first.

```shell
[root@vm254011 unit]# kubectl apply -f all_in_one.local.yaml --validate=false
namespace/unit-system created
customresourcedefinition.apiextensions.k8s.io/units.custom.my.crd.com created
mutatingwebhookconfiguration.admissionregistration.k8s.io/unit-mutating-webhook-configuration created
role.rbac.authorization.k8s.io/unit-leader-election-role created
clusterrole.rbac.authorization.k8s.io/unit-manager-role created
clusterrole.rbac.authorization.k8s.io/unit-proxy-role created
clusterrole.rbac.authorization.k8s.io/unit-metrics-reader created
rolebinding.rbac.authorization.k8s.io/unit-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/unit-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/unit-proxy-rolebinding created
service/unit-controller-manager-metrics-service created
service/unit-webhook-service created
validatingwebhookconfiguration.admissionregistration.k8s.io/unit-validating-webhook-configuration created
```

The CRD, webhook, and RBAC resources on the K8s side are all in place; the next step is to start the controller locally and debug.

Create a new ServiceManager custom resource:

```yaml
apiVersion: servicemanager.servicemanager.io/v1
kind: ServiceManager
metadata:
  name: servicemanager-sample
spec:
  # Add fields here
  category: Deployment
  #selector:
  #  app: servicemanager-sample
  replicas: 2
  port: 30027        # nodePort and service port
  targetport: 80     # container port
  template:
    metadata:
      name: servicemanager-sample
    spec:
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: servicemanager-sample
        resources:
          limits:
            cpu: 110m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 128Mi
```

The service is now reachable via nodeIP + port, as expected.
