Source version
kubernetes version: v1.3.0
Introduction
In a Kubernetes cluster, every Node runs a kubelet service process. It handles the work the Master assigns to the node and manages the Pods, and the containers inside them. Each kubelet registers the node's own information with the APIServer, periodically reports the node's resource usage to the Master, and monitors container and node resources through cAdvisor.
Key structure: KubeletConfiguration
type KubeletConfiguration struct {
// Path to the kubelet's config file
Config string `json:"config"`
// The kubelet supports three pod data sources:
// 1. ApiServer: the kubelet watches the etcd directory via the APIServer and syncs the Pod manifest list
// 2. file: files under the directory given by the kubelet's "--config" startup flag
// 3. HTTP URL: set via the "--manifest-url" flag
// Hence the three sync-frequency settings below.
// Max period between syncing running containers and config.
SyncFrequency unversioned.Duration `json:"syncFrequency"`
// How often to check config files for new data
FileCheckFrequency unversioned.Duration `json:"fileCheckFrequency"`
// How often to poll in HTTP mode
HTTPCheckFrequency unversioned.Duration `json:"httpCheckFrequency"`
// Sets the endpoint used in HTTP mode
ManifestURL string `json:"manifestURL"`
ManifestURLHeader string `json:"manifestURLHeader"`
// Whether to start the kubelet server, i.e. the 10250 port below
EnableServer bool `json:"enableServer"`
// Address the kubelet serves on
Address string `json:"address"`
// Kubelet server port, default 10250
// Ports of other components, for reference:
// --> Scheduler: 10251
// --> ControllerManager: 10252
Port uint `json:"port"`
// Read-only kubelet port without any authentication (0: disable), default 10255
// The read-only service starts whenever this port is configured
ReadOnlyPort uint `json:"readOnlyPort"`
// Certificate-related settings:
TLSCertFile string `json:"tLSCertFile"`
TLSPrivateKeyFile string `json:"tLSPrivateKeyFile"`
CertDirectory string `json:"certDirectory"`
// Hostname used to identify the kubelet, in place of the actual hostname
HostnameOverride string `json:"hostnameOverride"`
// Infra ("pause") base image used when creating Pods
PodInfraContainerImage string `json:"podInfraContainerImage"`
// Docker endpoint the kubelet talks to,
// e.g. unix:///var/run/docker.sock, the default on Linux
DockerEndpoint string `json:"dockerEndpoint"`
// Directory for the kubelet's volumes, mounts and configuration,
// default is /var/lib/kubelet
RootDirectory string `json:"rootDirectory"`
SeccompProfileRoot string `json:"seccompProfileRoot"`
// Whether privileged containers are allowed
AllowPrivileged bool `json:"allowPrivileged"`
// Pod sources allowed to use the host's Network, PID and IPC namespaces
// All default to kubetypes.AllSource, i.e. all sources "*"
HostNetworkSources string `json:"hostNetworkSources"`
HostPIDSources string `json:"hostPIDSources"`
HostIPCSources string `json:"hostIPCSources"`
// Rate limit for pulling images from the registry (0: unlimited; 5.0: default)
RegistryPullQPS float64 `json:"registryPullQPS"`
// Burst allowed when pulling images from the registry
RegistryBurst int32 `json:"registryBurst"`
// Maximum number of events generated per second
EventRecordQPS float32 `json:"eventRecordQPS"`
// Burst allowed when generating events
EventBurst int32 `json:"eventBurst"`
// Enable debugging handlers: log collection and locally running containers and commands
EnableDebuggingHandlers bool `json:"enableDebuggingHandlers"`
// Minimum age of a finished container; it may not be garbage collected before then
MinimumGCAge unversioned.Duration `json:"minimumGCAge"`
// Maximum number of old container instances to retain per container, default 2
MaxPerPodContainerCount int32 `json:"maxPerPodContainerCount"`
// Maximum number of old container instances allowed on this node, default 240
MaxContainerCount int32 `json:"maxContainerCount"`
// cAdvisor port, default 4194
CAdvisorPort uint `json:"cAdvisorPort"`
// Health check port, default 10248
HealthzPort int32 `json:"healthzPort"`
// Health check bind address, default "127.0.0.1"
HealthzBindAddress string `json:"healthzBindAddress"`
// oom-score-adj value of the kubelet process, range: [-1000, 1000]
OOMScoreAdj int32 `json:"oomScoreAdj"`
// Whether to register the node with the APIServer automatically
RegisterNode bool `json:"registerNode"`
ClusterDomain string `json:"clusterDomain"`
MasterServiceNamespace string `json:"masterServiceNamespace"`
// Cluster DNS IP; the kubelet configures all containers to use this DNS
ClusterDNS string `json:"clusterDNS"`
// Idle timeout for streaming connections
StreamingConnectionIdleTimeout unversioned.Duration `json:"streamingConnectionIdleTimeout"`
// Node status update frequency; works together with nodeMonitorGracePeriod in the
// nodeController. Sets how often the kubelet reports node status to the APIServer,
// default 10s
NodeStatusUpdateFrequency unversioned.Duration `json:"nodeStatusUpdateFrequency"`
// Minimum age of an image; it will not be garbage collected before then
ImageMinimumGCAge unversioned.Duration `json:"imageMinimumGCAge"`
// When disk usage exceeds this percentage, image garbage collection keeps running
ImageGCHighThresholdPercent int32 `json:"imageGCHighThresholdPercent"`
// When disk usage is below this percentage, image garbage collection does not run
ImageGCLowThresholdPercent int32 `json:"imageGCLowThresholdPercent"`
// Disk space to keep reserved; when free space drops below it, Pods can no longer be created
LowDiskSpaceThresholdMB int32 `json:"lowDiskSpaceThresholdMB"`
// How often to compute disk usage for all Pods and cached volumes
VolumeStatsAggPeriod unversioned.Duration `json:"volumeStatsAggPeriod"`
// Network and volume plugin settings
NetworkPluginName string `json:"networkPluginName"`
NetworkPluginDir string `json:"networkPluginDir"`
VolumePluginDir string `json:"volumePluginDir"`
CloudProvider string `json:"cloudProvider,omitempty"`
CloudConfigFile string `json:"cloudConfigFile,omitempty"`
// Name of a cgroup used to isolate the kubelet's own resources
// (author's open question: why isolate — does a single node support multiple kubelets?)
KubeletCgroups string `json:"kubeletCgroups,omitempty"`
// Cgroup used to isolate the container runtime (Docker, rkt)
RuntimeCgroups string `json:"runtimeCgroups,omitempty"`
SystemCgroups string `json:"systemContainer,omitempty"`
CgroupRoot string `json:"cgroupRoot,omitempty"`
// Container runtime to use: "docker" (default) or "rkt"
ContainerRuntime string `json:"containerRuntime"`
// Timeout for all runtime requests (e.g. pull, logs, exec, attach) except
// long-running ones
RuntimeRequestTimeout unversioned.Duration `json:"runtimeRequestTimeout,omitempty"`
// Path to the rkt binary
RktPath string `json:"rktPath,omitempty"`
// rkt communication endpoint
RktAPIEndpoint string `json:"rktAPIEndpoint,omitempty"`
RktStage1Image string `json:"rktStage1Image,omitempty"`
// Kubelet lock file, used for coordination with other kubelets
LockFilePath string `json:"lockFilePath"`
ExitOnLockContention bool `json:"exitOnLockContention"`
// Configure the cbr0 bridge based on Node.Spec.PodCIDR
ConfigureCBR0 bool `json:"configureCbr0"`
// Hairpin mode: promiscuous-bridge, hairpin-veth or none
HairpinMode string `json:"hairpinMode"`
// Indicates the node already runs a program that babysits docker and the kubelet
BabysitDaemons bool `json:"babysitDaemons"`
// Maximum number of Pods this kubelet can run
MaxPods int32 `json:"maxPods"`
NvidiaGPUs int32 `json:"nvidiaGPUs"`
// Handler used to execute commands in containers, selected by name
// Options: "native" or "nsenter", default: "native"
DockerExecHandlerName string `json:"dockerExecHandlerName"`
// CIDR used to assign Pod IP addresses, only effective in standalone mode
PodCIDR string `json:"podCIDR"`
// DNS resolver configuration file for containers, default "/etc/resolv.conf"
ResolverConfig string `json:"resolvConf"`
// Enable CPU CFS quota enforcement for containers
CPUCFSQuota bool `json:"cpuCFSQuota"`
// Set this to true if the kubelet runs inside a container.
// Running on the host differs from running in a container:
// on the host, writing file data is unrestricted — calling ioutil.WriteFile() just works;
// in a container, for the kubelet to write data into the containers it creates, it must
// use nsenter to enter the target container's namespace and write from there.
Containerized bool `json:"containerized"`
// Maximum number of files the kubelet process may open
MaxOpenFiles uint64 `json:"maxOpenFiles"`
// Let the apiserver assign the CIDR
ReconcileCIDR bool `json:"reconcileCIDR"`
// Register the kubelet's Node with the APIServer as schedulable
RegisterSchedulable bool `json:"registerSchedulable"`
// Content type of requests sent from the kubelet to the apiserver,
// default: "application/vnd.kubernetes.protobuf"
ContentType string `json:"contentType"`
// QPS configured for kubelet-to-apiserver traffic
KubeAPIQPS float32 `json:"kubeAPIQPS"`
// Burst allowed for kubelet-to-apiserver traffic
KubeAPIBurst int32 `json:"kubeAPIBurst"`
// If true, the kubelet pulls images serially
SerializeImagePulls bool `json:"serializeImagePulls"`
// Start the kubelet on a Flannel overlay network, assuming Flannel is already running
ExperimentalFlannelOverlay bool `json:"experimentalFlannelOverlay"`
// A node may enter the out-of-disk state (insufficient disk space), so the kubelet
// must poll the node state periodically; this value is the polling frequency
OutOfDiskTransitionFrequency unversioned.Duration `json:"outOfDiskTransitionFrequency,omitempty"`
// IP of the node the kubelet runs on; if set, the kubelet records it on the node
NodeIP string `json:"nodeIP,omitempty"`
// Labels of this Node
NodeLabels map[string]string `json:"nodeLabels"`
NonMasqueradeCIDR string `json:"nonMasqueradeCIDR"`
EnableCustomMetrics bool `json:"enableCustomMetrics"`
// The following fields all relate to the eviction policy; see the implementation for details.
// Comma-separated eviction threshold expressions
// Reference: https://kubernetes.io/docs/admin/out-of-resource/
EvictionHard string `json:"evictionHard,omitempty"`
EvictionSoft string `json:"evictionSoft,omitempty"`
EvictionSoftGracePeriod string `json:"evictionSoftGracePeriod,omitempty"`
EvictionPressureTransitionPeriod unversioned.Duration `json:"evictionPressureTransitionPeriod,omitempty"`
EvictionMaxPodGracePeriod int32 `json:"evictionMaxPodGracePeriod,omitempty"`
// Maximum number of Pods per core
PodsPerCore int32 `json:"podsPerCore"`
// Whether controller-managed attach/detach is enabled (instead of the kubelet doing it)
EnableControllerAttachDetach bool `json:"enableControllerAttachDetach"`
}
Kubelet startup flow
main entry
main entry point: cmd/kubelet/kubelet.go
The main source is as follows:
func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
s := options.NewKubeletServer()
s.AddFlags(pflag.CommandLine)
flag.InitFlags()
util.InitLogs()
defer util.FlushLogs()
verflag.PrintAndExitIfRequested()
if err := app.Run(s, nil); err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
os.Exit(1)
}
}
Anyone who has read the source will notice that the entry functions of all kubernetes executables follow roughly the same style.
options.NewKubeletServer(): creates a KubeletServer structure and initializes its default values.
The function is as follows:
func NewKubeletServer() *KubeletServer {
return &KubeletServer{
...
KubeletConfiguration: componentconfig.KubeletConfiguration{
Address: "0.0.0.0",
CAdvisorPort: 4194,
VolumeStatsAggPeriod: unversioned.Duration{Duration: time.Minute},
CertDirectory: "/var/run/kubernetes",
CgroupRoot: "",
CloudProvider: AutoDetectCloudProvider,
ConfigureCBR0: false,
ContainerRuntime: "docker",
RuntimeRequestTimeout: unversioned.Duration{Duration: 2 * time.Minute},
CPUCFSQuota: true,
...
},
}
}
s.AddFlags(pflag.CommandLine): reads the kubelet's command-line flags.
The function is as follows:
func (s *KubeletServer) AddFlags(fs *pflag.FlagSet) {
fs.StringVar(&s.Config, "config", s.Config, "Path to the config file or directory of files")
fs.DurationVar(&s.SyncFrequency.Duration, "sync-frequency", s.SyncFrequency.Duration, "Max period between synchronizing running containers and config")
fs.DurationVar(&s.FileCheckFrequency.Duration, "file-check-frequency", s.FileCheckFrequency.Duration, "Duration between checking config files for new data")
...
}
After the command-line flags are parsed, logging and related facilities are initialized.
verflag.PrintAndExitIfRequested(): checks whether the version flag was given; if so, it prints the version information and exits.
Finally we enter the key function, app.Run(s, nil).
Run entry point: cmd/kubelet/app/server.go
This function is quite long, but it mostly does preparatory work. Let's first look at how the configuration is assembled.
The code is as follows:
func run(s *options.KubeletServer, kcfg *KubeletConfig) (err error) {
...
// Note that when we arrive from app.Run(), kcfg == nil
if kcfg == nil {
// UnsecuredKubeletConfig() returns a valid KubeletConfig
cfg, err := UnsecuredKubeletConfig(s)
if err != nil {
return err
}
kcfg = cfg
// Initialize a Config used to talk to the APIServer
clientConfig, err := CreateAPIServerClientConfig(s)
if err == nil {
// Used to create the various clients: core, authentication, authorization...
kcfg.KubeClient, err = clientset.NewForConfig(clientConfig)
// make a separate client for events
eventClientConfig := *clientConfig
eventClientConfig.QPS = s.EventRecordQPS
eventClientConfig.Burst = int(s.EventBurst)
kcfg.EventClient, err = clientset.NewForConfig(&eventClientConfig)
}
...
}
// Create a cAdvisor object used to obtain all kinds of resource information
// (some of its interfaces are not supported yet)
if kcfg.CAdvisorInterface == nil {
kcfg.CAdvisorInterface, err = cadvisor.New(s.CAdvisorPort, kcfg.ContainerRuntime)
if err != nil {
return err
}
}
// The kubelet's container management module
if kcfg.ContainerManager == nil {
if kcfg.SystemCgroups != "" && kcfg.CgroupRoot == "" {
return fmt.Errorf("invalid configuration: system container was specified and cgroup root was not specified")
}
kcfg.ContainerManager, err = cm.NewContainerManager(kcfg.Mounter, kcfg.CAdvisorInterface, cm.NodeConfig{
RuntimeCgroupsName: kcfg.RuntimeCgroups,
SystemCgroupsName: kcfg.SystemCgroups,
KubeletCgroupsName: kcfg.KubeletCgroups,
ContainerRuntime: kcfg.ContainerRuntime,
})
if err != nil {
return err
}
}
...
// Configure the system OOM score
// TODO(vmarmol): Do this through container config.
oomAdjuster := kcfg.OOMAdjuster
if err := oomAdjuster.ApplyOOMScoreAdj(0, int(s.OOMScoreAdj)); err != nil {
glog.Warning(err)
}
// Continue with the remaining kubelet startup steps
if err := RunKubelet(kcfg); err != nil {
return err
}
// The kubelet health check service
if s.HealthzPort > 0 {
healthz.DefaultHealthz()
go wait.Until(func() {
err := http.ListenAndServe(net.JoinHostPort(s.HealthzBindAddress, strconv.Itoa(int(s.HealthzPort))), nil)
if err != nil {
glog.Errorf("Starting health server failed: %v", err)
}
}, 5*time.Second, wait.NeverStop)
}
if s.RunOnce {
return nil
}
<-done
return nil
}
This function mainly prepares a KubeletConfig structure, created via UnsecuredKubeletConfig().
It also creates several of the objects that structure holds: KubeClient, EventClient, CAdvisorInterface, ContainerManager, oomAdjuster and so on.
It then calls RunKubelet() to carry on with the service startup flow.
Finally it runs the health check service.
Below, the key functions are examined one by one:
The UnsecuredKubeletConfig() function
func UnsecuredKubeletConfig(s *options.KubeletServer) (*KubeletConfig, error) {
...
// The kubelet may be deployed in a container, so the mounter and standard output writer must be configured accordingly
mounter := mount.New()
var writer io.Writer = &io.StdWriter{}
if s.Containerized {
glog.V(2).Info("Running kubelet in containerized mode (experimental)")
mounter = mount.NewNsenterMounter()
writer = &io.NsenterWriter{}
}
// Configure the kubelet's TLS
tlsOptions, err := InitializeTLS(s)
if err != nil {
return nil, err
}
// The kubelet can be deployed in two ways: directly on the physical host, or inside
// a container. When deployed in a container, namespace isolation prevents the kubelet
// from reaching a docker container's namespace and running commands via docker exec.
// So a check is made here: when running in a container, nsenter is used; it helps the
// kubelet run commands inside a specified namespace.
// nsenter reference: https://github.com/jpetazzo/nsenter
var dockerExecHandler dockertools.ExecHandler
switch s.DockerExecHandlerName {
case "native":
dockerExecHandler = &dockertools.NativeExecHandler{}
case "nsenter":
dockerExecHandler = &dockertools.NsenterExecHandler{}
default:
glog.Warningf("Unknown Docker exec handler %q; defaulting to native", s.DockerExecHandlerName)
dockerExecHandler = &dockertools.NativeExecHandler{}
}
// k8s image garbage collection policy
// MinAge: minimum age of an image; only after this can the image be reclaimed
// HighThresholdPercent: above this disk usage, GC stays on
// LowThresholdPercent: below this disk usage, GC does not run
imageGCPolicy := kubelet.ImageGCPolicy{
MinAge: s.ImageMinimumGCAge.Duration,
HighThresholdPercent: int(s.ImageGCHighThresholdPercent),
LowThresholdPercent: int(s.ImageGCLowThresholdPercent),
}
// k8s disk space policy
// DockerFreeDiskMB: when free disk space falls below this value, pods can no longer be
// created on this node; it is effectively the disk space to keep reserved
diskSpacePolicy := kubelet.DiskSpacePolicy{
DockerFreeDiskMB: int(s.LowDiskSpaceThresholdMB),
RootFreeDiskMB: int(s.LowDiskSpaceThresholdMB),
}
...
// Introduced in k8s v1.3. Eviction lets the cluster sense node memory/disk pressure ahead of time and schedule resources accordingly.
thresholds, err := eviction.ParseThresholdConfig(s.EvictionHard, s.EvictionSoft, s.EvictionSoftGracePeriod)
if err != nil {
return nil, err
}
evictionConfig := eviction.Config{
PressureTransitionPeriod: s.EvictionPressureTransitionPeriod.Duration,
MaxPodGracePeriodSeconds: int64(s.EvictionMaxPodGracePeriod),
Thresholds: thresholds,
}
// Initialize the KubeletConfig structure
return &KubeletConfig{
Address: net.ParseIP(s.Address),
AllowPrivileged: s.AllowPrivileged,
Auth: nil, // default does not enforce auth[nz]
...
}, nil
}
A few points in this code are worth understanding:
The function deals with whether the kubelet runs on a physical host or in a container.
When running in a container, namespace permissions become an issue, and nsenter is needed to operate on docker containers.
The kubelet provides the "--docker-exec-handler" flag (i.e. DockerExecHandlerName) to configure whether nsenter is used.
nsenter itself is worth learning about; a sketch of the idea follows.
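To make nsenter concrete, here is a minimal, hypothetical standalone sketch (not kubelet code) of what an nsenter-based exec boils down to: join the namespaces of the container's init process by PID and run a command there. The PID 12345 and the presence of an nsenter binary on the host are illustration-only assumptions.
package main

import (
	"fmt"
	"os/exec"
)

// nsenterExec runs cmd inside the namespaces of the process with the given PID,
// conceptually what the "nsenter" docker exec handler does.
func nsenterExec(pid int, cmd ...string) ([]byte, error) {
	args := []string{"--target", fmt.Sprintf("%d", pid), "--mount", "--uts", "--ipc", "--net", "--pid", "--"}
	args = append(args, cmd...)
	return exec.Command("nsenter", args...).CombinedOutput()
}

func main() {
	// 12345 is a hypothetical container init PID; this needs root and an nsenter binary on $PATH.
	out, err := nsenterExec(12345, "hostname")
	fmt.Printf("output: %s, err: %v\n", out, err)
}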
There is also the kubelet eviction feature, newly introduced in k8s v1.3.0: before a node becomes overloaded, eviction preemptively stops Pods from being created there, mainly guarding memory and disk.
Earlier versions could not sense node load in advance: when memory ran low, k8s relied only on the kernel's OOM killer, plus periodic garbage collection of images and containers on disk, which left Pods with some uncertainty. Eviction solves this well: memory/disk thresholds given at kubelet startup keep the node working stably and let the cluster sense node pressure ahead of time.
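To get a feel for the threshold expressions that ParseThresholdConfig() consumes below (for example an --eviction-hard value such as "memory.available<100Mi"), here is a simplified, hypothetical parser. The real one in pkg/kubelet/eviction additionally validates signal names and handles soft thresholds with grace periods.
package main

import (
	"fmt"
	"strings"
)

// threshold is a simplified stand-in for the eviction package's internal type.
type threshold struct {
	Signal   string // e.g. "memory.available"
	Operator string // these expressions use "<"
	Quantity string // e.g. "100Mi"
}

// parseThresholds splits "signal<quantity[,signal<quantity...]" expressions.
func parseThresholds(expr string) ([]threshold, error) {
	if expr == "" {
		return nil, nil
	}
	var out []threshold
	for _, part := range strings.Split(expr, ",") {
		i := strings.Index(part, "<")
		if i <= 0 || i == len(part)-1 {
			return nil, fmt.Errorf("invalid threshold expression %q", part)
		}
		out = append(out, threshold{
			Signal:   strings.TrimSpace(part[:i]),
			Operator: "<",
			Quantity: strings.TrimSpace(part[i+1:]),
		})
	}
	return out, nil
}

func main() {
	ts, err := parseThresholds("memory.available<100Mi")
	fmt.Println(ts, err) // [{memory.available < 100Mi}] <nil>
}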
Creating the clients takes two steps:
Call CreateAPIServerClientConfig() to initialize the Config.
Call clientset.NewForConfig() to create the various clients from that initialized Config.
The CreateAPIServerClientConfig() function is as follows:
func CreateAPIServerClientConfig(s *options.KubeletServer) (*restclient.Config, error) {
// Check that at least one APIServer is configured
if len(s.APIServerList) < 1 {
return nil, fmt.Errorf("no api servers specified")
}
// Check whether multiple APIServers were specified; load-balancing across several
// servers is not supported here yet (see the TODO), so the first one is used
// TODO: adapt Kube client to support LB over several servers
if len(s.APIServerList) > 1 {
glog.Infof("Multiple api servers specified. Picking first one")
}
clientConfig, err := createClientConfig(s)
if err != nil {
return nil, err
}
clientConfig.ContentType = s.ContentType
// Override kubeconfig qps/burst settings from flags
clientConfig.QPS = s.KubeAPIQPS
clientConfig.Burst = int(s.KubeAPIBurst)
addChaosToClientConfig(s, clientConfig)
return clientConfig, nil
}
func createClientConfig(s *options.KubeletServer) (*restclient.Config, error) {
if s.KubeConfig.Provided() && s.AuthPath.Provided() {
return nil, fmt.Errorf("cannot specify both --kubeconfig and --auth-path")
}
if s.KubeConfig.Provided() {
return kubeconfigClientConfig(s)
}
if s.AuthPath.Provided() {
return authPathClientConfig(s, false)
}
// Try the kubeconfig default first, falling back to the auth path default.
clientConfig, err := kubeconfigClientConfig(s)
if err != nil {
glog.Warningf("Could not load kubeconfig file %s: %v. Trying auth path instead.", s.KubeConfig, err)
return authPathClientConfig(s, true)
}
return clientConfig, nil
}
// This is where the first APIServer is picked by default
func kubeconfigClientConfig(s *options.KubeletServer) (*restclient.Config, error) {
return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
&clientcmd.ClientConfigLoadingRules{ExplicitPath: s.KubeConfig.Value()},
&clientcmd.ConfigOverrides{ClusterInfo: clientcmdapi.Cluster{Server: s.APIServerList[0]}}).ClientConfig()
}
Once the Config is created, clientset.NewForConfig() is called to create the various clients:
func NewForConfig(c *restclient.Config) (*Clientset, error) {
// Configure client rate limiting
configShallowCopy := *c
if configShallowCopy.RateLimiter == nil && configShallowCopy.QPS > 0 {
configShallowCopy.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(configShallowCopy.QPS, configShallowCopy.Burst)
}
var clientset Clientset
var err error
// Create the core client
clientset.CoreClient, err = unversionedcore.NewForConfig(&configShallowCopy)
if err != nil {
return nil, err
}
// Create the extensions client
clientset.ExtensionsClient, err = unversionedextensions.NewForConfig(&configShallowCopy)
if err != nil {
return nil, err
}
// Create the autoscaling client
clientset.AutoscalingClient, err = unversionedautoscaling.NewForConfig(&configShallowCopy)
if err != nil {
return nil, err
}
// Create the batch client
clientset.BatchClient, err = unversionedbatch.NewForConfig(&configShallowCopy)
if err != nil {
return nil, err
}
// Create the RBAC client (RBAC: role-based access control), related to k8s
// authentication/authorization; see: https://kubernetes.io/docs/admin/authorization/
clientset.RbacClient, err = unversionedrbac.NewForConfig(&configShallowCopy)
if err != nil {
return nil, err
}
// Create the discovery client
clientset.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(&configShallowCopy)
if err != nil {
glog.Errorf("failed to create the DiscoveryClient: %v", err)
return nil, err
}
return &clientset, nil
}
The clients above are, in essence, clients for the apiserver's REST API; a usage sketch follows.
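As a rough, standalone illustration (not code from the kubelet), the same NewForConfig() path can be used to issue REST calls directly. The apiserver address is hypothetical, and the import paths are the v1.3-era internal ones, which moved in later releases:
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
	clientset "k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset"
	"k8s.io/kubernetes/pkg/client/restclient"
)

func main() {
	// Hypothetical insecure local apiserver address, for illustration only.
	cfg := &restclient.Config{Host: "http://127.0.0.1:8080"}
	cs, err := clientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Behind the scenes this is a plain REST request like GET /api/v1/namespaces/default/pods.
	pods, err := cs.Core().Pods(api.NamespaceDefault).List(api.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}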
RunKubelet
With all the creation and initialization above complete, the flow moves on to RunKubelet:
func RunKubelet(kcfg *KubeletConfig) error {
...
// Create the k8s event broadcaster, which the kubelet uses to send container-management
// events to the APIServer. k8s events will be covered separately later, not expanded here.
eventBroadcaster := record.NewBroadcaster()
kcfg.Recorder = eventBroadcaster.NewRecorder(api.EventSource{Component: "kubelet", Host: kcfg.NodeName})
eventBroadcaster.StartLogging(glog.V(3).Infof)
if kcfg.EventClient != nil {
glog.V(4).Infof("Sending events to api server.")
eventBroadcaster.StartRecordingToSink(&unversionedcore.EventSinkImpl{Interface: kcfg.EventClient.Events("")})
} else {
glog.Warning("No api server defined - no events will be sent to API server.")
}
// Configure capabilities
privilegedSources := capabilities.PrivilegedSources{
HostNetworkSources: kcfg.HostNetworkSources,
HostPIDSources: kcfg.HostPIDSources,
HostIPCSources: kcfg.HostIPCSources,
}
capabilities.Setup(kcfg.AllowPrivileged, privilegedSources, 0)
credentialprovider.SetPreferredDockercfgPath(kcfg.RootDirectory)
// Call CreateAndInitKubelet() to perform the various initializations
builder := kcfg.Builder
if builder == nil {
builder = CreateAndInitKubelet
}
if kcfg.OSInterface == nil {
kcfg.OSInterface = kubecontainer.RealOS{}
}
k, podCfg, err := builder(kcfg)
if err != nil {
return fmt.Errorf("failed to create kubelet: %v", err)
}
// Set the maximum number of file handles the kubelet process itself may open
util.ApplyRLimitForSelf(kcfg.MaxOpenFiles)
// TODO(dawnchen): remove this once we deprecated old debian containervm images.
// This is a workaround for issue: https://github.com/opencontainers/runc/issues/726
// The current chosen number is consistent with most of other os dist.
const maxkeysPath = "/proc/sys/kernel/keys/root_maxkeys"
const minKeys uint64 = 1000000
key, err := ioutil.ReadFile(maxkeysPath)
if err != nil {
glog.Errorf("Cannot read keys quota in %s", maxkeysPath)
} else {
fields := strings.Fields(string(key))
nkey, _ := strconv.ParseUint(fields[0], 10, 64)
if nkey < minKeys {
glog.Infof("Setting keys quota in %s to %d", maxkeysPath, minKeys)
err = ioutil.WriteFile(maxkeysPath, []byte(fmt.Sprintf("%d", uint64(minKeys))), 0644)
if err != nil {
glog.Warningf("Failed to update %s: %v", maxkeysPath, err)
}
}
}
const maxbytesPath = "/proc/sys/kernel/keys/root_maxbytes"
const minBytes uint64 = 25000000
bytes, err := ioutil.ReadFile(maxbytesPath)
if err != nil {
glog.Errorf("Cannot read keys bytes in %s", maxbytesPath)
} else {
fields := strings.Fields(string(bytes))
nbyte, _ := strconv.ParseUint(fields[0], 10, 64)
if nbyte < minBytes {
glog.Infof("Setting keys bytes in %s to %d", maxbytesPath, minBytes)
err = ioutil.WriteFile(maxbytesPath, []byte(fmt.Sprintf("%d", uint64(minBytes))), 0644)
if err != nil {
glog.Warningf("Failed to update %s: %v", maxbytesPath, err)
}
}
}
// The kubelet can run just once, or keep running as a background daemon.
// Run-once means RunOnce: handle the pod events once, then exit.
// Running continuously means startKubelet().
// process pods and exit.
if kcfg.Runonce {
if _, err := k.RunOnce(podCfg.Updates()); err != nil {
return fmt.Errorf("runonce failed: %v", err)
}
glog.Infof("Started kubelet %s as runonce", version.Get().String())
} else {
// Enter the key function startKubelet()
startKubelet(k, podCfg, kcfg)
glog.Infof("Started kubelet %s", version.Get().String())
}
return nil
}
This function calls CreateAndInitKubelet() for further initialization, which in turn calls kubelet.NewMainKubelet().
The kubelet can run only once, or keep running in the background; for the latter, startKubelet() is called.
Let's first see what the initialization function does:
func CreateAndInitKubelet(kc *KubeletConfig) (k KubeletBootstrap, pc *config.PodConfig, err error) {
// TODO: block until all sources have delivered at least one update to the channel, or break the sync loop
// up into "per source" synchronizations
// TODO: KubeletConfig.KubeClient should be a client interface, but client interface misses certain methods
// used by kubelet. Since NewMainKubelet expects a client interface, we need to make sure we are not passing
// a nil pointer to it when what we really want is a nil interface.
var kubeClient clientset.Interface
if kc.KubeClient != nil {
kubeClient = kc.KubeClient
// TODO: remove this when we've refactored kubelet to only use clientset.
}
// Initialize the container GC parameters
gcPolicy := kubecontainer.ContainerGCPolicy{
MinAge: kc.MinimumGCAge,
MaxPerPodContainer: kc.MaxPerPodContainerCount,
MaxContainers: kc.MaxContainerCount,
}
// Configure the kubelet server port, default: 10250
daemonEndpoints := &api.NodeDaemonEndpoints{
KubeletEndpoint: api.DaemonEndpoint{Port: int32(kc.Port)},
}
// Create the PodConfig
pc = kc.PodConfig
if pc == nil {
// The kubelet supports three pod data sources: file, HTTP URL and the k8s APIServer.
// The default is the k8s APIServer. A cache is also involved here; the implementation
// is worth a deeper look.
pc = makePodSourceConfig(kc)
}
k, err = kubelet.NewMainKubelet(
kc.Hostname,
kc.NodeName,
kc.DockerClient,
kubeClient,
...
)
if err != nil {
return nil, nil, err
}
k.BirthCry()
k.StartGarbageCollection()
return k, pc, nil
}
The initialization goes one level deeper: kubelet.NewMainKubelet(). In 1.3 this function takes a huge number of parameters and its body is very, very long — frankly unfriendly code — although it has already been rewritten in newer versions. Still, let's use this long, bloated function to keep exploring:
func NewMainKubelet(
hostname string,
nodeName string,
...
) (*Kubelet, error) {
...
// Create the service cache.NewStore, set up the service listWatch functions, create the matching NewReflector, and set up the serviceLister
serviceStore := cache.NewStore(cache.MetaNamespaceKeyFunc)
if kubeClient != nil {
// TODO: cache.NewListWatchFromClient is limited as it takes a client implementation rather
// than an interface. There is no way to construct a list+watcher using resource name.
listWatch := &cache.ListWatch{
ListFunc: func(options api.ListOptions) (runtime.Object, error) {
return kubeClient.Core().Services(api.NamespaceAll).List(options)
},
WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
return kubeClient.Core().Services(api.NamespaceAll).Watch(options)
},
}
cache.NewReflector(listWatch, &api.Service{}, serviceStore, 0).Run()
}
serviceLister := &cache.StoreToServiceLister{Store: serviceStore}
// Create the node cache.NewStore, set the fieldSelector, set up the listWatch functions, create the matching NewReflector, and set up nodeLister, nodeInfo and nodeRef
nodeStore := cache.NewStore(cache.MetaNamespaceKeyFunc)
if kubeClient != nil {
// TODO: cache.NewListWatchFromClient is limited as it takes a client implementation rather
// than an interface. There is no way to construct a list+watcher using resource name.
fieldSelector := fields.Set{api.ObjectNameField: nodeName}.AsSelector()
listWatch := &cache.ListWatch{
ListFunc: func(options api.ListOptions) (runtime.Object, error) {
options.FieldSelector = fieldSelector
return kubeClient.Core().Nodes().List(options)
},
WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
options.FieldSelector = fieldSelector
return kubeClient.Core().Nodes().Watch(options)
},
}
cache.NewReflector(listWatch, &api.Node{}, nodeStore, 0).Run()
}
nodeLister := &cache.StoreToNodeLister{Store: nodeStore}
nodeInfo := &predicates.CachedNodeInfo{StoreToNodeLister: nodeLister}
// TODO: get the real node object of ourself,
// and use the real node name and UID.
// TODO: what is namespace for node?
nodeRef := &api.ObjectReference{
Kind: "Node",
Name: nodeName,
UID: types.UID(nodeName),
Namespace: "",
}
// Create the disk space manager, which uses cAdvisor interfaces to obtain disk
// information; the last parameter is the disk management policy configured above
diskSpaceManager, err := newDiskSpaceManager(cadvisorInterface, diskSpacePolicy)
if err != nil {
return nil, fmt.Errorf("failed to initialize disk manager: %v", err)
}
// Create an empty container reference manager
containerRefManager := kubecontainer.NewRefManager()
// Create the OOM watcher, which monitors memory via cAdvisor and reports OOM events through the event recorder
oomWatcher := NewOOMWatcher(cadvisorInterface, recorder)
// TODO: remove when internal cbr0 implementation gets removed in favor
// of the kubenet network plugin
if networkPluginName == "kubenet" {
configureCBR0 = false
flannelExperimentalOverlay = false
}
// Initialize the Kubelet structure
klet := &Kubelet{
hostname: hostname,
nodeName: nodeName,
...
}
...
procFs := procfs.NewProcFS()
imageBackOff := flowcontrol.NewBackOff(backOffPeriod, MaxContainerBackOff)
klet.livenessManager = proberesults.NewManager()
// Initialize the pod cache and pod manager objects
klet.podCache = kubecontainer.NewCache()
klet.podManager = kubepod.NewBasicPodManager(kubepod.NewBasicMirrorClient(klet.kubeClient))
// Initialize the Docker container runtime
switch containerRuntime {
case "docker":
// dockerClient is the client the kubelet uses to drive docker (introduced later)
// recorder: the event recorder created earlier
// plus machine info, image-pull QPS and various other parameters;
// see the DockerManager structure for details
// Only supported one for now, continue.
klet.containerRuntime = dockertools.NewDockerManager(
dockerClient,
kubecontainer.FilterEventRecorder(recorder),
klet.livenessManager,
containerRefManager,
klet.podManager,
machineInfo,
podInfraContainerImage,
pullQPS,
pullBurst,
containerLogsDir,
osInterface,
klet.networkPlugin,
klet,
klet.httpClient,
dockerExecHandler,
oomAdjuster,
procFs,
klet.cpuCFSQuota,
imageBackOff,
serializeImagePulls,
enableCustomMetrics,
klet.hairpinMode == componentconfig.HairpinVeth,
seccompProfileRoot,
containerRuntimeOptions...,
)
case "rkt":
...
default:
return nil, fmt.Errorf("unsupported container runtime %q specified", containerRuntime)
}
...
// Set up the containerGC
containerGC, err := kubecontainer.NewContainerGC(klet.containerRuntime, containerGCPolicy)
if err != nil {
return nil, err
}
klet.containerGC = containerGC
// Set up the imageManager
imageManager, err := newImageManager(klet.containerRuntime, cadvisorInterface, recorder, nodeRef, imageGCPolicy)
if err != nil {
return nil, fmt.Errorf("failed to initialize image manager: %v", err)
}
klet.imageManager = imageManager
klet.runner = klet.containerRuntime
// Set up the statusManager
klet.statusManager = status.NewManager(kubeClient, klet.podManager)
// Set up the probeManager
klet.probeManager = prober.NewManager(
klet.statusManager,
klet.livenessManager,
klet.runner,
containerRefManager,
recorder)
klet.volumePluginMgr, err =
NewInitializedVolumePluginMgr(klet, volumePlugins)
if err != nil {
return nil, err
}
// Set up the volumeManager
klet.volumeManager, err = kubeletvolume.NewVolumeManager(
enableControllerAttachDetach,
hostname,
klet.podManager,
klet.kubeClient,
klet.volumePluginMgr,
klet.containerRuntime)
// Create the runtime cache object
runtimeCache, err := kubecontainer.NewRuntimeCache(klet.containerRuntime)
if err != nil {
return nil, err
}
klet.runtimeCache = runtimeCache
klet.reasonCache = NewReasonCache()
klet.workQueue = queue.NewBasicWorkQueue(klet.clock)
// Create the podWorkers object — an important one, covered separately later
klet.podWorkers = newPodWorkers(klet.syncPod, recorder, klet.workQueue, klet.resyncInterval, backOffPeriod, klet.podCache)
klet.backOff = flowcontrol.NewBackOff(backOffPeriod, MaxContainerBackOff)
klet.podKillingCh = make(chan *kubecontainer.PodPair, podKillingChannelCapacity)
klet.setNodeStatusFuncs = klet.defaultNodeStatusFuncs()
// Set up the eviction manager
evictionManager, evictionAdmitHandler, err := eviction.NewManager(klet.resourceAnalyzer, evictionConfig, killPodNow(klet.podWorkers), recorder, nodeRef, klet.clock)
if err != nil {
return nil, fmt.Errorf("failed to initialize eviction manager: %v", err)
}
klet.evictionManager = evictionManager
klet.AddPodAdmitHandler(evictionAdmitHandler)
// apply functional Options
for _, opt := range kubeOptions {
opt(klet)
}
return klet, nil
}
This function creates podWorkers, an important object tied to the actual pod operations. It will get a dedicated write-up later, so it is only touched on here; a simplified sketch of the core idea follows.
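As a preview before that dedicated write-up: podWorkers serializes updates per pod — roughly one worker goroutine and channel per pod UID — so operations on one pod run in order while different pods proceed in parallel. The following is a deliberately simplified, hypothetical sketch, not the real implementation (which also involves the work queue, caching and backoff):
package main

import (
	"fmt"
	"sync"
)

type UID string

type work struct {
	podUID UID
	op     string
}

// podWorkers: one channel and one goroutine per pod, so updates for the
// same pod are processed strictly in order.
type podWorkers struct {
	mu      sync.Mutex
	chans   map[UID]chan work
	syncPod func(w work)
	wg      sync.WaitGroup
}

func (p *podWorkers) enqueue(w work) {
	p.mu.Lock()
	ch, ok := p.chans[w.podUID]
	if !ok {
		ch = make(chan work, 16)
		p.chans[w.podUID] = ch
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			for item := range ch {
				p.syncPod(item) // one pod's updates never overlap
			}
		}()
	}
	p.mu.Unlock()
	ch <- w
}

func (p *podWorkers) shutdown() {
	p.mu.Lock()
	for _, ch := range p.chans {
		close(ch)
	}
	p.mu.Unlock()
	p.wg.Wait()
}

func main() {
	pw := &podWorkers{
		chans:   map[UID]chan work{},
		syncPod: func(w work) { fmt.Println(w.podUID, w.op) },
	}
	pw.enqueue(work{"pod-a", "create"})
	pw.enqueue(work{"pod-a", "update"}) // runs after "create" for pod-a
	pw.enqueue(work{"pod-b", "create"})
	pw.shutdown()
}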
Looking back over the whole flow, cmd/kubelet/app mostly does simple parameter handling; the real initialization happens in pkg/kubelet.
With initialization covered, let's enter the function that actually runs things, startKubelet():
func startKubelet(k KubeletBootstrap, podCfg *config.PodConfig, kc *KubeletConfig) {
// This is where the kubelet truly starts
go wait.Until(func() { k.Run(podCfg.Updates()) }, 0, wait.NeverStop)
// This starts the kubelet server, so the kubelet API can be called
if kc.EnableServer {
go wait.Until(func() {
k.ListenAndServe(kc.Address, kc.Port, kc.TLSOptions, kc.Auth, kc.EnableDebuggingHandlers)
}, 0, wait.NeverStop)
}
// This starts the kubelet's read-only server, on port 10255
if kc.ReadOnlyPort > 0 {
go wait.Until(func() {
k.ListenAndServeReadOnly(kc.Address, kc.ReadOnlyPort)
}, 0, wait.NeverStop)
}
}
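startKubelet() leans heavily on the wait.Until helper: run a function, wait for the period, and repeat until the stop channel closes (wait.NeverStop never does). Here is a simplified re-implementation to illustrate the semantics — this is not the actual k8s util:
package main

import (
	"fmt"
	"time"
)

// until mimics wait.Until: invoke f, sleep for period, repeat until stop closes.
func until(f func(), period time.Duration, stop <-chan struct{}) {
	for {
		select {
		case <-stop:
			return
		default:
		}
		f()
		select {
		case <-stop:
			return
		case <-time.After(period):
		}
	}
}

func main() {
	stop := make(chan struct{})
	go until(func() { fmt.Println("tick") }, 100*time.Millisecond, stop)
	time.Sleep(350 * time.Millisecond)
	close(stop) // a closed channel plays the role of wait.NeverStop's opposite
}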
Digging further, we reach k.Run(), the function that actually starts the kubelet. Here k is an interface of type KubeletBootstrap; the concrete object is the Kubelet returned by CreateAndInitKubelet(), so the Run() implementation can be found on that type.
It lives in pkg/kubelet/kubelet.go:
func (kl *Kubelet) Run(updates <-chan kubetypes.PodUpdate) {
// Start the log server
if kl.logServer == nil {
kl.logServer = http.StripPrefix("/logs/", http.FileServer(http.Dir("/var/log/")))
}
if kl.kubeClient == nil {
glog.Warning("No api server defined - no node status update will be sent.")
}
// Init modules such as imageManager, containerManager, oomWatcher and resourceAnalyzer
if err := kl.initializeModules(); err != nil {
kl.recorder.Eventf(kl.nodeRef, api.EventTypeWarning, kubecontainer.KubeletSetupFailed, err.Error())
glog.Error(err)
kl.runtimeState.setInitError(err)
}
// Start volume manager
go kl.volumeManager.Run(wait.NeverStop)
// Start a goroutine that periodically updates the node status to the APIServer
if kl.kubeClient != nil {
// Start syncing node status immediately, this may set up things the runtime needs to run.
go wait.Until(kl.syncNodeStatus, kl.nodeStatusUpdateFrequency, wait.NeverStop)
}
// Start a goroutine that periodically syncs the network status
go wait.Until(kl.syncNetworkStatus, 30*time.Second, wait.NeverStop)
go wait.Until(kl.updateRuntimeUp, 5*time.Second, wait.NeverStop)
// Start a goroutine responsible for killing pods (that are not properly
// handled by pod workers).
// Start a goroutine that periodically kills pods marked for killing
go wait.Until(kl.podKiller, 1*time.Second, wait.NeverStop)
// Start component sync loops.
kl.statusManager.Start()
kl.probeManager.Start()
// Start the evictionManager
kl.evictionManager.Start(kl.getActivePods, evictionMonitoringPeriod)
// Start the pod lifecycle event generator.
kl.pleg.Start()
// Start processing pod events, i.e. work handed down by the APIServer; updates is a channel
kl.syncLoop(updates, kl)
}
func (kl *Kubelet) initializeModules() error {
// Step 1: Prometheus metrics.
metrics.Register(kl.runtimeCache)
// Step 2: Setup filesystem directories.
if err := kl.setupDataDirs(); err != nil {
return err
}
// Step 3: If the container logs directory does not exist, create it.
if _, err := os.Stat(containerLogsDir); err != nil {
if err := kl.os.MkdirAll(containerLogsDir, 0755); err != nil {
glog.Errorf("Failed to create directory %q: %v", containerLogsDir, err)
}
}
// Step 4: Start the image manager.
if err := kl.imageManager.Start(); err != nil {
return fmt.Errorf("Failed to start ImageManager, images may not be garbage collected: %v", err)
}
// Step 5: Start container manager.
if err := kl.containerManager.Start(); err != nil {
return fmt.Errorf("Failed to start ContainerManager %v", err)
}
// Step 6: Start out of memory watcher.
if err := kl.oomWatcher.Start(kl.nodeRef); err != nil {
return fmt.Errorf("Failed to start OOM watcher %v", err)
}
// Step 7: Start resource analyzer
kl.resourceAnalyzer.Start()
return nil
}
That basically wraps it up. While studying the source you will find many topics worth digging into, for example:
dockerclient
podWorkers
podManager
cAdvisor
containerGC
imageManager
diskSpaceManager
statusManager
volumeManager
containerRuntime
kubelet cache
events recorder
Eviction Manager
how the kubelet receives tasks from the APIServer, and the pod creation flow
and so on...
Follow-up posts will pick out more of these key points for analysis.