Posted: 2022-01-01 14:12:18 | RSS feed digest for 2022-01-01 14:00 (16 items)

Category | Site | Article title / trending word | Link URL | Frequent words / summary / search volume | Date registered
TECH | Engadget Japanese | AirTag stalking abuse: report says the real problem is that "police don't take it seriously" | https://japanese.engadget.com/apple-airtag-stalking-problem-law-enforcement-045009870.html | airtag | 2022-01-01 04:50:09
TECH | Engadget Japanese | Amazon Prime Video's January additions: "The Promised Neverland", "From Today, It's My Turn!! The Movie", "Gintama THE FINAL" and more | https://japanese.engadget.com/prime-video-new-041019095.html | Amazon Prime Video, known for its broad lineup, adds newly streaming titles every month. | 2022-01-01 04:10:19
python | New posts tagged Python - Qiita | [Python] Creating dummy data [Faker] | https://qiita.com/SyogoSuganoya/items/7da3648e4dee309a2346 | Gender (male/female), programming languages (python, php, java, html); by setting ext_word_list you can restrict generation to values from that list. | 2022-01-01 13:43:46
python | New posts tagged Python - Qiita | [Python Flask & SQLAlchemy] A beginner programmer's web app #5: DB create/edit/delete | https://qiita.com/Bashi50/items/e3459ca2a4661ce5dac6 | Performing CRUD (create, read, update, delete) against the DB from Flask; with the DB and tables created, it's finally time to build a web app that uses the DB. | 2022-01-01 13:28:36
Program | New questions [all tags] - teratail | PyCharm's editor scrolling is sluggish | https://teratail.com/questions/376245?rss=all | pycharm | 2022-01-01 13:37:25
Program | New questions [all tags] - teratail | Python error | https://teratail.com/questions/376244?rss=all | Python error: I'm a Python beginner writing code while following a book. | 2022-01-01 13:16:44
Program | New questions [all tags] - teratail | npm install fails no matter what I try | https://teratail.com/questions/376243?rss=all | npm install fails no matter what I try. | 2022-01-01 13:07:22
AWS | New posts tagged AWS - Qiita | Creating a CDK sample project | https://qiita.com/ShiroUz/items/3c6b3fc1dc8dc47febb9 | Creating a CDK sample project: install the CDK (see the linked article), set up credentials (see the linked article), then generate a sample app with the CDK. The generated project tree includes README.md, bin/, cdk.json, jest.config.js, lib/, package-lock.json, package.json, test/ and tsconfig.json; the sample stack creates an S3 bucket. | 2022-01-01 13:00:34
Ruby | New posts tagged Rails - Qiita | Created a repository for checking how the code generated by rails new differs depending on the flags you pass | https://qiita.com/snaka/items/d60fbf6e26ce42237de8 | From the above, you can see that specifying --skip-hotwire produces the following differences in the output. | 2022-01-01 13:18:10
Overseas TECH | DEV Community | Introduction to Kubernetes with Amazon EKS | https://dev.to/donaldsebleung/introduction-to-kubernetes-with-amazon-eks-1nj6 |

Introduction to Kubernetes with Amazon EKS

You can access key assets in this article on GitHub. In this article we will introduce basic concepts around Kubernetes, followed by a hands-on session where we spin up a Kubernetes cluster on Amazon EKS and deploy a simple website to the cluster.

Prerequisites

It is assumed that you:

- Are comfortable with the Linux command line
- Are familiar with the concept of containers; practical experience with Docker would be beneficial
- Have an AWS account
- Possess basic experience in configuring and using the AWS CLI
- Are aware that following this hands-on session may incur monetary costs, and that you are solely responsible for any such costs

With the prerequisites addressed, let's get started.

Background

Modern applications are often composed of microservices communicating with each other over the network, instead of running as a single monolithic entity. Each microservice typically runs in its own container, which calls for an efficient and standardized approach to scheduling and managing these containers. Kubernetes is the industry-standard solution that addresses these needs.

Kubernetes history (Reference: Kubernetes, Wikipedia)

Kubernetes is a container orchestration tool originally developed by Google. It was released under the open-source Apache license and donated to the Cloud Native Computing Foundation (CNCF), a non-profit organization jointly founded by Google and the Linux Foundation at the same time for this purpose. Kubernetes allows one or more physical or virtual machines, otherwise known as nodes, to form a cluster on which containerized workloads can be scheduled, deployed and managed with relative ease, providing benefits such as scalability, reliability and high availability.

Architecture (Reference: Kubernetes)

A Kubernetes cluster consists of one or more nodes, which are further subdivided into control and worker nodes. Control nodes reside in the control plane and are responsible for scheduling and managing where containerized workloads should run, while worker nodes reside in the data plane and are responsible for actually running the containerized workloads, as well as reporting the status of those workloads, and of the nodes themselves, to the control plane.

Kubernetes primarily relies on a declarative configuration model, where the Kubernetes administrator specifies one or more YAML files to be applied to the cluster. Each YAML file describes one or more objects: logical entities that represent part of a desired state. The Kubernetes cluster then constantly works towards ensuring that each object exists and that the actual state matches the desired state, taking corrective action whenever necessary to maintain or re-converge to the desired state. We say that the state of the Kubernetes cluster is eventually consistent.

The most basic type of object in a Kubernetes cluster is the Pod. A pod is essentially a wrapper around a container, though it is possible for a pod to hold more than one container. Pods are ephemeral, meaning that they can be created and destroyed at will, either directly by the Kubernetes administrator or through higher-level objects such as Deployments that automatically manage such pods. Therefore, one should not expect any particular pod to exist over a prolonged period of time.

Next we have ReplicaSets, where each ReplicaSet manages a fixed number of identical pods (replicas) and works to ensure that the specified number of replicas exists at all times. If, for some reason, a replica in a ReplicaSet terminates, the ReplicaSet will automatically spawn a new replica. Conversely, if an extra replica was somehow created, e.g. by the Kubernetes administrator, the ReplicaSet will automatically terminate a replica.
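To make the ReplicaSet behavior concrete, a minimal manifest might look like the sketch below. It is a generic illustration rather than anything taken from this article: the name web-rs, the app=web label and the nginx image are all invented for the example.

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs               # hypothetical name, not from the article
spec:
  replicas: 3                # keep exactly 3 matching pods alive at all times
  selector:
    matchLabels:
      app: web               # the ReplicaSet "owns" pods carrying this label
  template:                  # pod template used whenever a replacement must be spawned
    metadata:
      labels:
        app: web             # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.21    # any container image works; nginx is just an example
        ports:
        - containerPort: 80
```

Deleting one of the three pods by hand (kubectl delete pod <name>) would cause the ReplicaSet to create a replacement almost immediately, which is exactly the self-healing behavior described above.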
Deployments are another type of object, very similar to ReplicaSets, except that a Deployment also supports rolling updates and rollbacks with zero downtime. When a new version of the containerized workload is deployed to the cluster, the deployment performs a rolling update: each pod running the old workload is incrementally replaced by a pod running the new one, until all pods have been replaced and are running the new workload. As long as the deployment consists of two or more replicas, this guarantees that the application will always be available.

In order to expose a Deployment to other workloads within the cluster, or to the outside world, a Service needs to be created for it. A Service is essentially an abstraction specifying how a particular workload can be contacted through the network, e.g. which IP address and/or port should be used.

Within a cluster, workloads can be further segregated into Namespaces, which can be thought of as virtual clusters within the real, physical cluster. This allows related workloads to be grouped together and unrelated workloads to be logically separated from one another.

Finally, Kubernetes allows for CustomResourceDefinitions, where a Kubernetes developer and/or administrator can define their own objects to extend the functionality of Kubernetes. Needless to say, this is an advanced topic that casual Kubernetes users should not need to worry too much about.

With most of the basic concepts cleared, let's get our hands dirty.

Spinning up a Kubernetes cluster with Amazon EKS (Reference: Getting started with Amazon EKS)

Amazon EKS stands for Elastic Kubernetes Service and is a managed Kubernetes offering provided by AWS. The term "managed" roughly means that certain aspects of Kubernetes, such as provisioning each node and connecting the nodes to form the cluster, are handled by the cloud provider, so you do not have to worry about them yourself. There are, of course, options for provisioning and configuring a Kubernetes cluster from scratch, which will not be covered in this article:

- Minikube and kind, for spinning up a single-node cluster intended primarily for development and testing purposes
- kubeadm, for provisioning a multi-node cluster

To create a cluster with Amazon EKS and communicate with it, we need two tools:

- eksctl: a command-line tool specific to EKS, used for creating EKS Kubernetes clusters
- kubectl: the Kubernetes command-line client, used for communicating with a Kubernetes server (cluster)

We also need the AWS CLI (version 2), configured with an IAM user with sufficient permissions to create the cluster and its associated resources. If you have not set this up, you may wish to first go through an introductory hands-on session on the AWS CLI.

Technically, eksctl is not strictly required: an EKS cluster can be created manually using a combination of the AWS web console and the AWS CLI, and quite possibly with the CLI alone, but the process is rather complex and requires a detailed understanding of the underlying AWS services, roles, permissions and so on. eksctl manages these complexities for us under the hood, so we can create a cluster with a single command and focus on deploying our apps to Kubernetes.
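Before going further, it is worth confirming that the AWS CLI really is installed and configured with the intended identity. The commands below are standard AWS CLI calls; the expectation of an aws-cli/2.x version string simply follows the version assumed above.

```bash
# Confirm the AWS CLI is present and which identity it will act as.
aws --version                   # should report something like aws-cli/2.x
aws sts get-caller-identity     # shows the account ID and the IAM user/role ARN in use
aws configure list              # shows which profile, region and credential source are in effect
```

If get-caller-identity reports an unexpected user or account, fix your credentials before creating the cluster; everything eksctl provisions will be billed to that account.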
Now, assuming you have the AWS CLI installed and an IAM user with sufficient permissions (such as an IAM administrator), let's download kubectl from AWS, using curl -o kubectl against the download URL AWS provides for your platform. (It is also possible to download kubectl from upstream; the binaries should be the same, so it shouldn't make a difference.) Download the associated checksum file in the same way and use it to verify the kubectl download with shasum -c; if the download was not corrupted, the check should output:

kubectl: OK

Finally, make kubectl executable and install it somewhere on your PATH:

```bash
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```

Next, download and install eksctl: fetch the release tarball for your platform (curl --silent --location against the eksctl release URL, with $(uname -s) and the amd64 architecture in the file name), pipe it into tar xz -C /tmp, then move the extracted binary into place:

```bash
sudo mv /tmp/eksctl /usr/local/bin
```

If the installation was successful, running eksctl version should print an appropriate version number and exit.

With the tools installed, let's spin up a cluster:

```bash
eksctl create cluster
```

You will see some blue informational text printed to the console, and the command may take a while to complete, so do be patient; in my case it took a good number of minutes before the cluster was ready. On success, the last line of output should be similar to the following:

EKS cluster "beautiful-unicorn" in "us-east-1" region is ready

The name of your cluster (beautiful-unicorn in my case) and the region in which it is deployed (us-east-1 here) may vary. Note that you could also have explicitly specified a name and region when creating the cluster by passing the --name and --region flags, respectively.

Now check the Kubernetes version. You should see two versions: one for the kubectl client we downloaded and one for the server (cluster). If kubectl is not correctly configured to connect to the cluster, you may receive an error when attempting to read the version from the server, so checking the Kubernetes version also serves to confirm that the cluster is up and running and that we can connect to it with kubectl:

```bash
kubectl version --short
```

Omitting the --short flag prints detailed version information in JSON format. Here I get a client version and a server version carrying an -eks- suffix.

Another thing to note is that the Kubernetes client and server versions should be at most one minor version apart. So if you downloaded the absolute latest kubectl and the server's minor version lags further behind, you will see a warning about incompatible client/server versions; in that case you will have to downgrade your version of kubectl accordingly.

Now that our cluster is ready and we can connect to it, let's fetch some information about it. Get a list of all clusters we have created with eksctl:

```bash
eksctl get cluster
```

The output lists the beautiful-unicorn cluster, its region and the fact that it was created by eksctl. Get a list of all nodes in our cluster:

```bash
kubectl get nodes
```

The output lists two ip-….ec2.internal worker nodes, both in the Ready state. Get a list of all namespaces in our cluster:

```bash
kubectl get namespaces
```

This shows the built-in default, kube-node-lease, kube-public and kube-system namespaces, all Active. Get a list of services in the default namespace:

```bash
kubectl get services
```

This shows only the built-in kubernetes ClusterIP service. Get a list of services in the kube-system namespace:

```bash
kubectl get services --namespace kube-system
```

This shows the kube-dns ClusterIP service, which serves cluster DNS over both UDP and TCP. Finally, get a list of pods in the default namespace:

```bash
kubectl get pods
```

No resources found in default namespace.

There are no pods in the default namespace because we haven't deployed any apps yet. However, a number of pods were created in other namespaces, such as kube-system for the control plane.
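If you want an extra sanity check that kubectl is really talking to the new EKS cluster (and not, say, a leftover local cluster in your kubeconfig), the following standard kubectl commands are a quick way to confirm it:

```bash
# Confirm which cluster kubectl is pointed at before going further.
kubectl config current-context   # should name the cluster eksctl just created
kubectl cluster-info             # prints the API server endpoint kubectl will talk to
kubectl get nodes -o wide        # node details, including internal IPs and kubelet versions
```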
To see them, fetch pods from all namespaces:

```bash
kubectl get pods --all-namespaces
```

Here we see that the control-plane components (the aws-node, coredns and kube-proxy pods) were automatically deployed to the kube-system namespace. Now that everything is working correctly, let's deploy our first Pod.

Deploying a single Pod to our EKS cluster (Reference: Deploy a sample application)

Let's first create a namespace for our website, to separate it from other workloads. Recall that Kubernetes favors a declarative approach, whereby we describe objects with YAML configuration files and apply them to the cluster. Save the following in a file namespace.yaml:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app: donaldsebleung-com
  name: donaldsebleung-com
```

Let's look at some of the fields:

- kind describes the kind of object we are defining, in this case a Namespace
- labels is a set of key-value pairs used to identify and keep track of objects; here we define a single label app with value donaldsebleung-com
- name is the name of our object; here we call it donaldsebleung-com

Now let's apply it to our cluster:

```bash
kubectl apply -f namespace.yaml
```

namespace/donaldsebleung-com created

As seen above, we apply a YAML configuration file with kubectl apply; the -f <FILE> option specifies a single file to apply. The output indicates that the namespace was successfully created, but let's list all namespaces again to be safe:

```bash
kubectl get namespaces
```

The new donaldsebleung-com namespace now appears alongside default, kube-node-lease, kube-public and kube-system. We can also get info for a single object, like so: kubectl get <OBJECT KIND> <OBJECT NAME>:

```bash
kubectl get namespace donaldsebleung-com
```

Since a namespace primarily exists to partition objects within a cluster, namespaces aren't very interesting on their own, so we don't see much being printed out. But we can get more information on our namespace by specifying the -o yaml option, which outputs the information about the object in YAML format:

```bash
kubectl get namespace donaldsebleung-com -o yaml
```

Notice that this contains all the information we specified in our namespace.yaml file, plus fields added by the cluster (annotations, creation timestamp, resource version, UID, spec and status). Tip: if you're not sure where to start when writing a YAML file for a particular kind of object, printing the YAML configuration of an existing object of the same kind can serve as a reference.
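Labels are not just decoration: they are queryable. The short sketch below, which assumes the donaldsebleung-com name reconstructed above, shows two standard ways of narrowing down to the objects we just created.

```bash
# List only namespaces carrying the app=donaldsebleung-com label (-l is a label selector).
kubectl get namespaces -l app=donaldsebleung-com

# A human-readable summary of the same object, including its labels and status.
kubectl describe namespace donaldsebleung-com
```

The same -l selector works for pods, services and most other object kinds, which becomes handy once many workloads share a cluster.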
Now let's define a pod in our newly created namespace. Save the following configuration in a pod.yaml file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: donaldsebleung-com
  name: donaldsebleung-com
  namespace: donaldsebleung-com
spec:
  containers:
  - name: donaldsebleung-com
    image: donaldsebleung/donaldsebleung-com
    ports:
    - containerPort: 8443   # HTTPS port the container listens on (8443 assumed here)
```

Here we see that the kind of object is a Pod instead of a Namespace. Let's look at some new and existing fields as well:

- We added an app label to our pod, again with value donaldsebleung-com. Unlike the namespace, where the label was purely declarative, a label (or set of labels) on a pod has practical uses, as we'll see shortly
- The name of our pod is donaldsebleung-com, which is identical to that of our namespace. In general, distinct objects only require distinct names if they belong to the same namespace and are the same kind of object
- The namespace field under metadata specifies that this pod should be created in the donaldsebleung-com namespace; if omitted, the pod is created in the default namespace instead
- The spec top-level field describes what our pod is actually made of
- Under containers, the name field specifies the name of a container inside our pod. Recall that a pod can have multiple containers, hence the dash before the name field, indicating that we are specifying a list of containers, though in this case the list has length one
- The image field specifies which image our container should be based on. Here we use a tagged donaldsebleung/donaldsebleung-com image, built from a Dockerfile that starts from ubuntu:focal, installs wget and an OpenJDK, downloads and unpacks a release tarball of the website's source into /app, builds it with ./mvnw package and runs the resulting personal-website SNAPSHOT jar with java -jar
- The containerPort field under ports exposes the container's HTTPS port, since our web server serves an HTTPS web page on that port

Now let's apply the config:

```bash
kubectl apply -f pod.yaml
```

pod/donaldsebleung-com created

View pods in our namespace (-n is short for --namespace):

```bash
kubectl get pods -n donaldsebleung-com
```

Here we see it's up and running. If not (e.g. you see 0/1 under READY), wait for a short while and run the same command again until the pod is ready. Congratulations, you've successfully deployed your first pod to a Kubernetes cluster!

But wait: how do we know the web server is actually up and running? Recall that pods are ephemeral; they can be created and destroyed at will, either by the cluster administrator or by higher-level objects such as deployments, so we cannot rely on a particular pod always being available. To access the web server inside our pod, we'll need to expose it to the rest of the cluster, and later to the outside world, via a Service.
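As a quick aside, a single pod can also be reached directly from your workstation with kubectl port-forward, which is handy for a smoke test before any Service exists. This is a sketch, not part of the article's flow, and it assumes the same 8443 container port as above.

```bash
# Forward local port 8443 to port 8443 inside the pod (runs until interrupted).
kubectl port-forward -n donaldsebleung-com pod/donaldsebleung-com 8443:8443 &

# From the same machine, fetch the page; -k skips verification of the self-signed certificate.
curl -k https://localhost:8443/

# Stop the forwarder when done.
kill %1
```

Port-forwarding only targets this one pod, so it does not replace a Service; it is purely a debugging convenience.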
Exposing our pod to the cluster via a Service (Reference: Deploy a sample application)

Recall that a Service is an abstraction that exposes running workloads, in our case a single pod. There are four types of services at the time of writing:

- ClusterIP: exposes the workload to the rest of the cluster through an internal IP, but not to the outside world
- NodePort: exposes the workload through a specified port on every node in the cluster. This is the simplest way to expose a workload to the outside world, but is rarely the best choice
- LoadBalancer: exposes the workload through a dedicated load balancer; the exact details of how the load balancer is provisioned depend on the cloud provider
- ExternalName: yet another type of service, which we won't cover in this article

More details about these types of services and how they differ can be found in the excellent writeup "Kubernetes NodePort vs LoadBalancer vs Ingress: When should I use what?". Here we'll use a ClusterIP service and explore our website from within the cluster shortly. Save the following in a clusterip-service.yaml file:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: donaldsebleung-com
  name: donaldsebleung-com
  namespace: donaldsebleung-com
spec:
  selector:
    app: donaldsebleung-com
  type: ClusterIP
  ports:
  - name: https
    port: 443
    targetPort: 8443   # assumed to match the containerPort above
    protocol: TCP
```

Again, a brief overview of the fields used:

- Under spec we have a selector field. This selects the pods, based on their labels, that the service should target. Here we specify that our service should target pods with an app label of value donaldsebleung-com. That's why I told you labels on pods are important and not purely decorative, see?
- The type is ClusterIP. This produces an internal IP we can use within the cluster to access the service
- Under ports, the first and only item has port and targetPort fields. The targetPort field specifies the port within the pod(s) that network requests should be forwarded to, since that's where our web server is listening; the port field specifies the port through which the service itself is accessed
- The protocol field specifies the transport-layer protocol used (e.g. TCP or UDP); TCP in our case

So our service provides an IP within the cluster that forwards HTTPS requests from TCP port 443, the standard HTTPS port, to the container's HTTPS port. The net result is that, within the cluster, instead of having to access the website as https://<cluster IP>:<container port>, we can access it as https://<cluster IP>, like we would a normal website. Apply the config:

```bash
kubectl apply -f clusterip-service.yaml
```

service/donaldsebleung-com created

Confirm the service is created:

```bash
kubectl get services -n donaldsebleung-com
```

Here we see an IP accessible from within the cluster. Notice that the EXTERNAL-IP column shows <none>, i.e. we still cannot access the service from outside the cluster. To access the service from within the cluster, spawn a shell inside our pod:

```bash
kubectl exec -n donaldsebleung-com -it donaldsebleung-com -- /bin/bash
```

A breakdown of the command used:

- kubectl exec: similar to docker exec, but here we execute a command within the specified pod
- -n donaldsebleung-com: in the donaldsebleung-com namespace
- -it donaldsebleung-com: allocate an interactive TTY (same as in Docker) for the pod named donaldsebleung-com
- -- /bin/bash: pass the remaining arguments to the pod; here we pass /bin/bash to execute the Bash shell

If successful, you should see a root shell whose working directory is /app. In the root shell, fetch the webpage with wget and print it to stdout, replacing the address with your cluster IP:

```bash
wget -qO- --no-check-certificate https://<cluster IP>
```

The --no-check-certificate option is required since the web server uses a self-signed certificate by default. If successful, you should see an HTML page being printed, beginning with <!DOCTYPE HTML> and crediting the "Hyperspace by HTML5 UP" template (free for personal and commercial use under the CCA license).
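For comparison only, the same ClusterIP service could also have been created imperatively with kubectl expose instead of a YAML file; the declarative file is what the article actually applies, and the 8443 target port is the assumption carried over from the pod spec.

```bash
# Imperative equivalent of clusterip-service.yaml (shown for comparison, not applied here).
kubectl expose pod donaldsebleung-com -n donaldsebleung-com \
  --name donaldsebleung-com --type ClusterIP --port 443 --target-port 8443
```

The declarative route is usually preferred because the YAML file can be version-controlled and re-applied, which is exactly the workflow this article follows.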
Now that we've seen how to expose a service within the cluster, let's see how to do the same to the outside world. But before that, let's cover Deployments, a higher-level object that manages a ReplicaSet of pods and takes care of rolling updates and rollbacks. Clean up our existing pod and service with the following commands:

```bash
kubectl delete -f clusterip-service.yaml
kubectl delete -f pod.yaml
```

kubectl delete deletes the object(s) specified in the provided YAML file.

Creating our first Deployment (Reference: Deploy a sample application)

A Deployment manages a ReplicaSet, which in turn manages a fixed number of pod replicas. While a ReplicaSet only ensures that the number of pods remains at the desired number of replicas, a Deployment offers rolling-update and rollback functionality as well, by replacing pods in the deployment incrementally so there is no downtime. Save the following in a file deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: donaldsebleung-com
  labels:
    app: donaldsebleung-com
  namespace: donaldsebleung-com
spec:
  replicas: 3               # number of pod replicas (3 assumed here)
  selector:
    matchLabels:
      app: donaldsebleung-com
  template:
    metadata:
      labels:
        app: donaldsebleung-com
      namespace: donaldsebleung-com
    spec:
      containers:
      - name: donaldsebleung-com
        image: donaldsebleung/donaldsebleung-com
        ports:
        - containerPort: 8443   # assumed, as before
```

Here, in the spec:

- replicas indicates how many pod replicas should be created
- selector indicates how the Deployment keeps track of its pods; here we use matchLabels, requiring the app label on the pod to equal donaldsebleung-com
- template is the template to use for each pod in the deployment; notice it is identical to our pod.yaml except that it leaves out apiVersion, kind and metadata.name

Apply the config:

```bash
kubectl apply -f deployment.yaml
```

deployment.apps/donaldsebleung-com created

Check deployments in our namespace:

```bash
kubectl get deployments -n donaldsebleung-com
```

We see here that all replicas of the donaldsebleung-com deployment are up and running; if not, wait a few seconds and try again until all replicas are up.

Let's expose our deployment to the outside world using a load balancer. But, as per "Network load balancing on Amazon EKS", we first need to deploy the AWS Load Balancer Controller to our EKS cluster.

Deploying AWS Load Balancer Controller to our EKS cluster (Reference: AWS Load Balancer Controller)

Note: this section is highly specific to Amazon EKS and contains a lot of AWS-related details that do not apply to other managed Kubernetes offerings or to Kubernetes in general. Don't feel too bad if you find yourself blindly copy-pasting commands in this section without fully understanding what is going on.

First, we need to create an IAM OIDC identity provider for our cluster (replace beautiful-unicorn with the actual name of your cluster):

```bash
eksctl utils associate-iam-oidc-provider --cluster beautiful-unicorn --approve
```

Now download the IAM policy for the AWS Load Balancer Controller that allows it to make calls to AWS APIs on your behalf (curl -o iam_policy.json against the URL in the reference article) and use it to create an IAM policy:

```bash
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json
```

Now get our AWS account ID and make a note of it:

```bash
aws sts get-caller-identity
```

Now create an IAM role for use with a Kubernetes service account (see the linked reference article for details), replacing the cluster name and account ID as appropriate:

```bash
eksctl create iamserviceaccount \
  --cluster beautiful-unicorn \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::<account ID>:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve
```

Finally, we are ready to install the AWS Load Balancer Controller itself. First install cert-manager to our cluster, for managing certificate-related matters, by applying its manifest with kubectl apply --validate=false -f against the URL in the reference article. This may take a little while to execute (no more than a few dozen seconds) as Kubernetes creates a large number of objects.

Now download the controller specification (a versioned *_full.yaml file) with curl -Lo, open the downloaded file in your favorite text editor (mine is Vim) and make the following changes:

- Delete the ServiceAccount section of the file. It looks like this:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  namespace: kube-system
```

- Replace the cluster name in the Deployment spec with the actual name of your cluster

Now apply the edited file with kubectl apply -f; again, this may take a few seconds. For peace of mind, verify the controller is properly installed:

```bash
kubectl get deployment -n kube-system aws-load-balancer-controller
```

The output should show the aws-load-balancer-controller deployment with all of its replicas ready.
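If the controller deployment does not become Ready, its own logs are usually the quickest way to find out why (missing IAM permissions are a common cause). These are standard kubectl commands, sketched here as a troubleshooting aid rather than a step from the article.

```bash
# Inspect the controller: recent logs and the deployment's events/conditions.
kubectl logs -n kube-system deployment/aws-load-balancer-controller
kubectl describe deployment aws-load-balancer-controller -n kube-system
```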
Phew, that was complicated. Back to the interesting stuff.

Exposing our deployment to the outside world using a load balancer (Reference: Deploy a sample application)

Let's make a copy of our clusterip-service.yaml; we'll name the copy loadbalancer-service.yaml:

```bash
cp clusterip-service.yaml loadbalancer-service.yaml
```

Open loadbalancer-service.yaml and change the line

type: ClusterIP

to

type: LoadBalancer

Yup, that's it. Now apply the config:

```bash
kubectl apply -f loadbalancer-service.yaml
```

service/donaldsebleung-com created

Get the service details. Notice how the type is now LoadBalancer and we have an external IP, or rather an external DNS name of the form *.elb.amazonaws.com:

```bash
kubectl get services -n donaldsebleung-com
```

Now visit https://<external DNS name> in your browser, replacing the external DNS name as appropriate for your scenario. The browser may display a scary warning about a self-signed certificate; ignore it and proceed with viewing the website content. Feel free to poke around the website and learn more about me (shameless promotion here :P). When you're done, continue with the rest of this article.

You did it! You exposed your deployment to the outside world using a load balancer and saw the results in your browser.

Rolling updates

A major advantage of deployments over standalone pods, or even ReplicaSets, is the ability to perform a rolling update without application downtime. A deployment does this by replacing its pods one by one (or as configured in the YAML file) until all the old pods have been replaced by new ones. Let's see this in action. Copy our existing deployment.yaml to deployment-patched.yaml:

```bash
cp deployment.yaml deployment-patched.yaml
```

Now update the container image used in each of the pods to the newer tag of the donaldsebleung/donaldsebleung-com image, which simply replaces the slogan "IT consultant by day, software developer by night" on the homepage with "Cloud, virtualization and open source enthusiast", because I've recently realized that the old slogan doesn't fit well with the rest of the content on my website. Apart from the image tag, the modified deployment-patched.yaml is identical to deployment.yaml.

Interested learners may refer to the Dockerfile for the new image, which simply applies a patch to the downloaded source code before building the project: it is the same Dockerfile as before, except that it also installs patch, copies an index.patch file into the build and pipes it through patch before running ./mvnw package. The patch itself just swaps the old slogan for the new one in the site's index template.

Now apply the config:

```bash
kubectl apply -f deployment-patched.yaml
```

deployment.apps/donaldsebleung-com configured

And refresh your browser multiple times. Notice that the website should be up the whole time; there should be no moment where it is unavailable. Furthermore, for a while you should see the slogan on the homepage alternate between the old and new versions. Eventually, though, it should converge to the new slogan, which indicates that all pods in the deployment have been replaced and are using the new container image.
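If you would like to watch the rollout as it happens, or undo it should the new image misbehave, kubectl's rollout subcommands cover both; this is a sketch of standard commands rather than a step the article itself takes.

```bash
# Watch the rolling update until it completes, inspect its history, or roll back.
kubectl rollout status  deployment/donaldsebleung-com -n donaldsebleung-com
kubectl rollout history deployment/donaldsebleung-com -n donaldsebleung-com
kubectl rollout undo    deployment/donaldsebleung-com -n donaldsebleung-com   # revert to the previous revision
```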
Before we conclude, let's look at one more feature: autoscaling deployments with a HorizontalPodAutoscaler. But first we need to install the Kubernetes metrics server, which provides the aggregate resource-usage data needed for autoscaling.

Installing the Kubernetes metrics server (Reference: Installing the Kubernetes metrics server)

Just apply the appropriate YAML config (the components manifest linked in the reference article) with kubectl apply -f, then confirm it is properly deployed:

```bash
kubectl get deployment metrics-server -n kube-system
```

We're good to go.

Autoscaling deployments with HorizontalPodAutoscaler (Reference: HorizontalPodAutoscaler Walkthrough)

Currently our deployment has a fixed number of pods, which is good enough for testing and demonstration purposes. But what if web traffic is low during a particular time interval? Would maintaining that many pods be a waste of resources? On the other hand, what if we experience a sudden surge in traffic? Is a fixed number of pods enough to handle the surge gracefully, without degradation in performance?

Fortunately, through the HorizontalPodAutoscaler (HPA), Kubernetes can automatically scale the number of replicas in a Deployment depending on one or more metrics, such as CPU utilization per pod. But first we need to define some resource limits for the pods in our deployment. Save a copy of deployment-patched.yaml as deployment-patched-with-limit.yaml and add a resources section to the container, with a CPU request and a CPU limit expressed in millicores; "m" here represents one thousandth of a CPU core, so a limit of, say, 500m means a pod is not allowed to use more than half a core, while the request is the amount of CPU the pod asks the scheduler to set aside for it. Apply the config:

```bash
kubectl apply -f deployment-patched-with-limit.yaml
```

deployment.apps/donaldsebleung-com configured

Since Kubernetes favors a declarative approach, notice we did not need to delete the deployment and redeploy: we just applied the new YAML config, and Kubernetes reconfigures the deployment to converge towards the new spec. Now save an HPA definition in hpa.yaml, with the following fields:

- maxReplicas: the maximum number of replicas we should ever have, regardless of load
- minReplicas: the counterpart of maxReplicas, setting the minimum number of replicas we always keep
- scaleTargetRef: which object our HPA targets; here we target the deployment donaldsebleung-com by name
- targetCPUUtilizationPercentage: what percentage of CPU utilization, relative to the CPU requested for each pod, we should aim for
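The exact numbers used in the article's resources block and HPA are not reproduced here, so the sketch below uses placeholder values chosen purely for illustration: a 250m CPU request and 500m limit on the container, and an HPA that scales between 1 and 10 replicas targeting 50% CPU utilization.

```yaml
# Illustrative HorizontalPodAutoscaler; min/max replicas and the 50% target are placeholders.
# It assumes the container spec was given, e.g., requests.cpu: 250m and limits.cpu: 500m.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: donaldsebleung-com
  name: donaldsebleung-com
  namespace: donaldsebleung-com
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: donaldsebleung-com        # the deployment this HPA scales
  targetCPUUtilizationPercentage: 50
```

With a 250m request and a 50% target, the HPA would aim for roughly 125m of CPU actually used per pod, adding or removing replicas to stay near that figure.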
Apply the config:

```bash
kubectl apply -f hpa.yaml
```

horizontalpodautoscaler.autoscaling/donaldsebleung-com created

Wait a few seconds, then query our HPA:

```bash
kubectl get hpa -n donaldsebleung-com
```

Notice how the deployment now has a single replica instead of the original count. This is because there is no traffic to be handled, so the pods aren't using any noticeable amount of CPU; in response, the HPA scaled our deployment down to the minimum number of replicas in order to conserve resources.

Another thing: hpa is short for horizontalpodautoscaler. Our command might as well have been kubectl get horizontalpodautoscalers -n donaldsebleung-com, but hpa is easier to type. In fact, this isn't the only abbreviation we can use:

Object       Shorthand
Pod          po
ReplicaSet   rs
Deployment   deploy
Service      svc

I personally find the full names more descriptive and readable, but if you're lazy, or the full name is insanely long (e.g. horizontalpodautoscaler), feel free to use the shorthands instead.

Now try to generate some load by querying the site repeatedly. You might want to run this in a new terminal window or tab, to keep it running while we make our observations in the original window (replace the external DNS name accordingly):

```bash
while true; do wget -qO- --no-check-certificate https://<external DNS name> > /dev/null; done
```

You might even want to run it in multiple terminal windows; I opened a few dozen of them myself. Wait a short while, maybe a minute or two, and query our HPA again:

```bash
kubectl get hpa -n donaldsebleung-com
```

Look: the CPU utilization has gone up, and our HPA responded by scaling the deployment up to more replicas. Once you're done exploring, stop bombarding the endpoint with requests by closing the appropriate terminal windows or tabs. Wait a few minutes and you should see the CPU utilization go down again and the number of replicas scaled back down to the minimum.

Cleanup

Let's delete our namespace. Since we put every other object we've created inside this namespace, this should delete those objects as well:

```bash
kubectl delete -f namespace.yaml
```

namespace "donaldsebleung-com" deleted

Now delete our cluster (replace the name accordingly):

```bash
eksctl delete cluster -n beautiful-unicorn
```

This may take a few minutes. At the end you should see something like "all cluster resources were deleted".

Conclusion

In this hands-on session we learned:

- What Kubernetes is
- How it relates to the microservice architecture used in many modern applications
- Key concepts and objects in Kubernetes
- How Kubernetes works, in particular the declarative approach it favors
- How to actually apply these concepts to a real multi-node Kubernetes cluster by leveraging a managed Kubernetes offering from AWS known as Amazon EKS

Of course, this is just the tip of the iceberg; there are many more features in Kubernetes that we have yet to explore. If this article piqued your interest in Kubernetes, consider learning more about it through the following resources or otherwise: the official Kubernetes website, Introduction to Kubernetes, and Introduction to Containers, Kubernetes and OpenShift.

Until then, happy new year!
References: DonaldKellett, ks-intro-eks (GitHub); Kubernetes (Wikipedia); Kubernetes; Getting started with Amazon EKS; Minikube; kind; kubeadm; eksctl; Deploy a sample application; Service (Kubernetes); Kubernetes NodePort vs LoadBalancer vs Ingress: When should I use what?; Network load balancing on Amazon EKS; AWS Load Balancer Controller; Create an IAM OIDC provider for your cluster; Installing the Kubernetes metrics server; HorizontalPodAutoscaler Walkthrough; Introduction to Kubernetes; Introduction to Containers, Kubernetes and OpenShift. | 2022-01-01 04:43:26
News | BBC News - Home | Fireworks and Big Ben mark subdued UK new year amid Covid spread | https://www.bbc.co.uk/news/uk-59844031?at_medium=RSS&at_campaign=KARANGA | omicron | 2022-01-01 04:24:11
Hokkaido | Hokkaido Shimbun | New Year's countdown event in Pyongyang: many citizens turn out for performances and fireworks | https://www.hokkaido-np.co.jp/article/629400/ | New Year's Eve | 2022-01-01 13:12:00
Hokkaido | Hokkaido Shimbun | Yuta Watanabe comes off the bench, goes scoreless in the NBA | https://www.hokkaido-np.co.jp/article/629399/ | Yuta Watanabe | 2022-01-01 13:12:00
Hokkaido | Hokkaido Shimbun | Our No. 10, Consadole's pride: Hiroki Miyazawa talks about his football life | https://www.hokkaido-np.co.jp/article/628498/ | Hokkaido Consadole Sapporo | 2022-01-01 13:04:20
Overseas TECH | reddit | [Postgame Thread] Georgia Defeats Michigan 34-11 | https://www.reddit.com/r/CFB/comments/rtb0wn/postgame_thread_georgia_defeats_michigan_3411/ | Postgame Thread: Georgia defeats Michigan. Box score provided by ESPN. Made with the r/CFB Game Thread Generator; submitted by u/CFB Referee to r/CFB [link] [comments] | 2022-01-01 04:02:04
Overseas TECH | reddit | Opening up my eroge lucky bags (エロゲ福袋開けていくわ) | https://www.reddit.com/r/newsokunomoral/comments/rtbd3f/エロゲ福袋開けていくわ/ | r/newsokunomoral [link] [comments] | 2022-01-01 04:22:50
