Posted 2022-12-31 03:20:36 — RSS feed digest for 2022-12-31 03:00 (23 items)

Category / Site / Article title or trending word / Link URL / Frequent words, summary, or search volume / Date registered
AWS AWS How do I set up Amazon Inspector Classic to run security assessments on my Amazon EC2 instances? https://www.youtube.com/watch?v=DmrMlCZvKYI Nicholas shows you how to set up Amazon Inspector Classic to run security assessments on your Amazon EC2 instances (you can skip directly to the demo; for more details, see the Knowledge Center article linked with this video). Chapters: Introduction, Additional information, Tag your EC2 instances, Define the assessment target, Define the assessment template and run the assessment, Run the assessment, Closing. 2022-12-30 17:11:43
python New posts tagged Python - Qiita I built a time-clock system using MAC addresses! https://qiita.com/paupau/items/d82bd26fb722b7c23b7e raspberrypi 2022-12-31 02:37:47
Docker New posts tagged docker - Qiita Sending over a socket with TypeScript https://qiita.com/naozo-se/items/60bda485058ecb3d6cde socket 2022-12-31 02:13:36
Docker New posts tagged docker - Qiita For anyone who wants to see 'Hello from Docker' again after docker run hello-world https://qiita.com/Jinta/items/dd46ea9c6117771d52cf docker 2022-12-31 02:03:25
golang New posts tagged Go - Qiita Server implementation with Go and GraphQL [Implementation, Part 1] https://qiita.com/shion0625/items/be454f94a99cac461d5e clipping 2022-12-31 02:12:48
GCP New posts tagged gcp - Qiita Google Cloud Certified Professional Cloud Architect exam pass report https://qiita.com/tkuribayashi/items/a450edea1312dff89828 professionalcloudarchitect 2022-12-31 02:36:00
Tech blog Developers.IO Trying out IP address restrictions with Azure AD Conditional Access https://dev.classmethod.jp/articles/azure-ad-ip-restriction-access/ microsoftentra 2022-12-30 17:36:40
Overseas TECH MakeUseOf How to Open the Phone Dialer in Windows 11 https://www.makeuseof.com/windows-11-phone-dialer/ dialer 2022-12-30 17:46:15
Overseas TECH MakeUseOf How to Build a Successful Personal Brand on LinkedIn https://www.makeuseof.com/build-personal-brand-linkedin/ linkedin 2022-12-30 17:31:16
Overseas TECH MakeUseOf How to Fix Windows Update Error 0xCA00A009 https://www.makeuseof.com/windows-update-error-0xca00a009-fix/ quick 2022-12-30 17:15:16
Overseas TECH MakeUseOf 6 Reasons Why Many Linux Distros Don't Ship KDE by Default https://www.makeuseof.com/reasons-why-linux-distros-ship-kde-by-default/ When it comes to customizability, there's no other desktop that even comes close to KDE Plasma. So why don't more distributions ship KDE by default? 2022-12-30 17:01:14
Overseas TECH DEV Community EKS Cluster Autoscaler: 6 Best Practices For Effective Autoscaling https://dev.to/castai/eks-cluster-autoscaler-6-best-practices-for-effective-autoscaling-4fnf

We all love Kubernetes for its autoscaling capabilities and enjoy them when running clusters in a managed Kubernetes service like Amazon EKS. Many of you have already set up VPA and/or HPA for pod autoscaling to ensure that your application scales to meet load demand. But at some point you're bound to face a new challenge, and this is where the EKS Cluster Autoscaler can help. You might not get enough capacity in your cluster during peak times, or you might experience the opposite: wasted hardware capacity during off-peak moments. The Cluster Autoscaler comes to the rescue. In this guide we will explore the EKS Cluster Autoscaler, show you how it works, and share some best practices to help you always adjust capacity to demand.

When to use the Cluster Autoscaler

The Cluster Autoscaler is one of the three Kubernetes autoscaling dimensions. It automatically adds or removes nodes in a cluster based on pod resource requests. Contrary to the Horizontal and Vertical Autoscalers, the Cluster Autoscaler doesn't measure CPU and RAM usage values for its decisions. Instead, it checks the cluster every N seconds for pods in a "pending" state. That state indicates that the Kubernetes scheduler wasn't able to assign these pods to a node because of insufficient cluster capacity or other conditions. Teams use the Cluster Autoscaler to automate the process of scaling the number of nodes up or down in line with their application's demand. The best part is that it does the scaling job for you automatically. That's what makes the Cluster Autoscaler a great cost management tool: by using it you can eliminate overprovisioning and cloud waste, paying only for as many cloud resources as your application really utilizes.

EKS Cluster Autoscaler: autoscaling on AWS

Even managed Kubernetes usually doesn't have built-in autoscaling out of the box. For a long time the official EKS documentation has recommended using the official Kubernetes Cluster Autoscaler. Not long ago, Karpenter appeared on the scene: an open-source project that attempts to address some of the issues of the original Kubernetes Cluster Autoscaler. Even though it faces new competition, the official Cluster Autoscaler remains a popular choice. Being vendor-neutral, widely adopted, and battle-tested, it's an attractive option for many teams.

Setting the EKS Cluster Autoscaler up

Usually the Cluster Autoscaler is installed as a Kubernetes Deployment in the cluster, in the kube-system namespace. You can set up the Autoscaler to run several replicas and use the leader election mechanism for high availability. However, note that only one replica (the elected leader) is responsible for scaling at a time. It's important to understand that multiple replicas won't provide horizontal scalability, which means you need to size it vertically so it can handle your cluster load. The Cluster Autoscaler is a core component of the Kubernetes control plane; it's there to help you make decisions around scaling and scheduling.

Where does Amazon EKS come in? It might be confusing to differentiate between the official Kubernetes Cluster Autoscaler and the EKS Cluster Autoscaler. When you're running your cluster in an AWS managed service, the cloud provider offers an extension that makes it all work. This extension of the Kubernetes Cluster Autoscaler communicates its decisions to the AWS infrastructure using APIs, for example to manage the EC2 instances where your cluster is running. Before I show you how to set up the EKS Cluster Autoscaler, let's review how it works.

Glossary

Before diving into the how, let's define some terms used in this guide and the official documentation.
- Cluster Autoscaler: a piece of software that automatically performs cluster scale-up or scale-down when needed, adding or removing nodes in your cluster.
- Official Kubernetes Cluster Autoscaler: the cluster autoscaler provided by the Kubernetes community (SIG Autoscaling).
- EKS Cluster Autoscaler: an extension that bridges the official Kubernetes Cluster Autoscaler to integrate with AWS infrastructure. Check out the project's GitHub page to learn more about the Cluster Autoscaler on AWS.
- Node Group: node groups are groups of nodes within a cluster. They're not actual resources, but you can find them as an abstraction in the Cluster Autoscaler, Cluster API, and other Kubernetes components. Grouped nodes may share several common properties, like labels and taints, but still run on a different instance type or in a different Availability Zone.
- EKS Auto Scaling Group: Auto Scaling groups are an AWS EC2 feature to scale the number of instances up or down. We could say they are the implementation of node groups in EKS.

How does the EKS Cluster Autoscaler work?

The Cluster Autoscaler loops through two tasks: checking the cluster for unschedulable pods, and calculating whether it's possible to consolidate all currently deployed pods onto a smaller number of nodes. Here's how it works, step by step:
1. It scans the cluster to detect any pods that can't be scheduled on any existing nodes. This might result from inadequate CPU or memory resources; another common reason is that a pod's node taint tolerations or affinity rules don't match any existing nodes.
2. If the Cluster Autoscaler finds unschedulable pods, it checks its managed node pools to understand whether adding a node would unblock the pods or have no effect at all, and adds one node to the node pool if it's the former.
3. It also scans the nodes in the node pools it manages, and if it identifies a node whose pods could be rescheduled onto other nodes in the cluster, it evicts them and removes the spare node. When deciding to move a pod, the Cluster Autoscaler considers factors such as pod priority and PodDisruptionBudgets.

Since the Autoscaler controller works at the infrastructure level, it needs permissions to view and manage node groups. That's why security best practices like the principle of least privilege are key here: you must do your best to securely manage these credentials.

A hands-on guide to the EKS Cluster Autoscaler

Here is a quick lab so you can see the EKS Cluster Autoscaler in action. We will use the most straightforward way to set it up, starting with the cluster itself, using the eksctl command to make things easier. There are a few prerequisites before the Cluster Autoscaler can be installed:
- a working environment with the aws, eksctl, and kubectl command-line tools
- an EKS cluster
- an OIDC provider
- an Auto Scaling group with tags
- an IAM policy and service account
Only the first one must be prepared before starting; the other prerequisites can be created by following the steps below.

Create the cluster

If you don't have an EKS cluster running, or want to experiment on a temporary cluster, let's create one. Note: if you create a cluster following this guide, don't forget to delete it to stop expenses; the command is provided in the last step.

> eksctl create cluster --name ca-demo-cluster --instance-types t3.medium --nodegroup-name ng-tm --nodes 2 --nodes-max 4 --spot --asg-access --vpc-nat-mode Disable

This will create a cluster called ca-demo-cluster with an Auto Scaling node group "ng-tm", with two nodes initially and a maximum capacity of four nodes. The --spot parameter is specified to create cheaper instances, and --asg-access prepares the node group for autoscaling. Note: in some regions, certain instance types or spot instances might not be available. Try another instance type if creation fails with instance capacity errors.

Set up the OIDC provider

If you are setting up the autoscaler in an existing cluster, check the documentation to see whether you already have an OIDC provider. If you've just created a new cluster, you can enable the OIDC provider like this:

> eksctl utils associate-iam-oidc-provider --cluster ca-demo-cluster --approve

Auto Scaling groups and tags

If you used the eksctl command in the previous step to create your node groups, these tags should already be there. For an existing cluster, you should check them and add them if they do not exist. Required tags (adjust the cluster name in the second tag):

k8s.io/cluster-autoscaler/enabled = true
k8s.io/cluster-autoscaler/ca-demo-cluster = owned

IAM policy

Store this content in a file named policy.json (note: adjust the cluster name if you used a different one):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/k8s.io/cluster-autoscaler/ca-demo-cluster": "owned"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeAutoScalingGroups",
                "ec2:DescribeLaunchTemplateVersions",
                "autoscaling:DescribeTags",
                "autoscaling:DescribeLaunchConfigurations"
            ],
            "Resource": "*"
        }
    ]
}

Now run this command to create the IAM policy (you can adjust the policy name or file name):

> aws iam create-policy --policy-name DemoClusterAutoscalerPolicy --policy-document file://policy.json

Take note of the created policy ARN in the command output; you'll need to specify it in the next step. In case you missed it, you can check it again using:

> aws iam list-policies --query 'Policies[].[PolicyName,Arn]' --output text | grep DemoClusterAutoscalerPolicy

IAM service account

Now let's create a service account and attach our newly created policy. Make sure to set the correct ARN from the previous step:

> eksctl create iamserviceaccount \
    --cluster ca-demo-cluster \
    --namespace kube-system \
    --name cluster-autoscaler \
    --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/DemoClusterAutoscalerPolicy \
    --override-existing-serviceaccounts \
    --approve

Deploy the Cluster Autoscaler

We now have all the prerequisites and are ready to deploy the Autoscaler itself. Download the Kubernetes deployment file:

> curl -o cluster-autoscaler-autodiscover.yaml <URL>

We need to make a few small adjustments in the file:
- Replace <YOUR CLUSTER NAME> with the correct name of the cluster (ca-demo-cluster if you created it by following the previous steps).
- Verify and adjust the container image version so it is compatible with the Kubernetes version in your cluster (see the version compatibility list).
- I recommend adding the command-line argument --skip-nodes-with-system-pods=false to the container command for more flexible scaling down.

Now let's deploy it to our cluster:

> kubectl apply -f cluster-autoscaler-autodiscover.yaml

You should be able to see the cluster-autoscaler deployment and pod in the kube-system namespace:

> kubectl get deploy cluster-autoscaler -n kube-system
> kubectl get pods -n kube-system -l app=cluster-autoscaler

In the pod logs you should see messages about cluster state and scaling decisions (replace <POD_NAME> with the correct pod name):

> kubectl logs cluster-autoscaler-<POD_NAME> -n kube-system

If there are any errors related to authorization, one of the steps involving the OIDC provider or the IAM policy and service account was probably not completed correctly.

Testing cluster autoscaling

Let's create a simple deployment (test-app.yaml) to trigger a cluster scale-up:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  selector:
    matchLabels:
      app: test-app
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m     # value elided in the source; placeholder
            memory: 128Mi  # value elided in the source; placeholder

> kubectl apply -f test-app.yaml

This should create the deployment "test-app" with a single pod. Now let's scale it to have more replicas:

> kubectl scale --replicas=<N> -f test-app.yaml

Check the pod status:

> kubectl get pods -l app=test-app

Initially some pods should be pending; then autoscaling should be triggered and more nodes should be added to the cluster. Some pods could be left unscheduled (pending) if the Auto Scaling group reaches its maximum number of nodes; try adjusting the replica count to your needs. Now let's reduce the replica count to see the Cluster Autoscaler's downscaling in action:

> kubectl scale --replicas=<N> -f test-app.yaml

The number of nodes will go down after a few minutes. You can recheck the Autoscaler pod logs to see which decisions were made.

Destroy the created resources

Clusters and VMs incur costs in your cloud provider account. If you created a cluster by following the steps in this guide, destroy it as soon as you're done playing around:

> eksctl delete cluster --name ca-demo-cluster

6 best practices for the EKS Cluster Autoscaler

1. Set least-privileged access on the IAM role

If you use auto-discovery, it's smart to apply least-privilege access by limiting the actions autoscaling:SetDesiredCapacity and autoscaling:TerminateInstanceInAutoScalingGroup to the Auto Scaling groups scoped to the current cluster. Why is this important? It prevents a Cluster Autoscaler running in one cluster from modifying node groups in a different cluster, even if you didn't scope the --node-group-auto-discovery argument down to the cluster's node groups using tags (for example, k8s.io/cluster-autoscaler/<cluster-name>):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "autoscaling:ResourceTag/k8s.io/cluster-autoscaler/enabled": "true",
                    "aws:ResourceTag/k8s.io/cluster-autoscaler/<my-cluster>": "owned"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeAutoScalingGroups",
                "ec2:DescribeLaunchTemplateVersions",
                "autoscaling:DescribeTags",
                "autoscaling:DescribeLaunchConfigurations"
            ],
            "Resource": "*"
        }
    ]
}

2. Configure node groups well

To make your autoscaling effort worth the time, start by configuring a good set of node groups for your cluster. If you pick the right set, you'll maximize availability and reduce cloud costs across all of your workloads. In AWS, node groups are implemented with EC2 Auto Scaling groups, which offer flexibility for a broad range of use cases. Still, the Cluster Autoscaler needs to make some assumptions about your node groups, so it pays to keep the configuration consistent with them. For example, each node needs to have identical scheduling properties (labels, taints, resources). Instead of creating many node groups containing few nodes, try creating fewer node groups with many nodes; this will have the greatest impact on scalability.

3. Use the correct Kubernetes version

Kubernetes is evolving fast, and its control plane API changes often. The maintainers of the Cluster Autoscaler do not guarantee compatibility with versions other than the one it was released for. When deploying the EKS Cluster Autoscaler, ensure you use a matching version; you can find a compatibility list on the project page.

4. Check that node group instances have the same capacity

If you don't, the Cluster Autoscaler won't work as expected. Why? Because it assumes that every instance in your node group has the same amount of CPU and memory. The Cluster Autoscaler takes the first instance type in the node group for its scheduling simulation. If your group contains instance types with more resources, they won't be fully utilized, which means wasted resources and higher costs. And vice versa: on an instance type with fewer resources, pods won't fit to be scheduled. That's why you must double-check that the node group that will undergo autoscaling contains instances or nodes of the same type, and if you're managing mixed instance types, ensure they have the same resource footprint.

5. Define resource requests for each pod

The Cluster Autoscaler makes scaling decisions based on pods' scheduling status as well as individual node utilization. If you fail to specify resource requests for every pod, the autoscaler won't work as it should. When scaling up, the Cluster Autoscaler chooses instance types according to pod resources; when scaling down, it looks for nodes with utilization lower than a specified threshold. To calculate utilization, it sums up the requested resources and compares them to node capacity. If there are any pods or containers without resource requests, the Autoscaler's decisions will definitely be affected and you'll be facing an issue. Make your life easier and double-check that all pods scheduled to run in an autoscaled node or instance group have their resource requests specified.

6. Set the PodDisruptionBudget wisely

A PodDisruptionBudget (PDB) helps in two ways. Its main mission is to protect your applications from disruption: a PDB prevents evicting all, or a significant share, of the pods of a single Deployment or StatefulSet. The Cluster Autoscaler respects PDB rules and downscales nodes safely by moving only the allowed number of pods. On the other hand, a PDB can help downscaling not only in a restrictive way but also in a permissive one: by default, the Cluster Autoscaler won't evict any kube-system pods unless a PDB is specified, so by specifying a reasonable PDB you enable the Cluster Autoscaler to evict even kube-system pods and remove underutilized nodes. Note: before an eviction, the Cluster Autoscaler ensures that evicted pods will be scheduled on a different node with enough free capacity. When specifying the PodDisruptionBudget, consider the minimum necessary number of pod replicas. Many system pods (aside from kube-dns) run as single-instance pods, and restarting them might cause disruptions, so don't add a disruption budget for single-instance pods like metrics-server; you'll sleep better at night.

Curious to see a modern autoscaler in action? By constantly monitoring cloud provider inventory, pricing, and availability in supported cloud provider regions and zones, we have collected data and knowledge on which instance families provide the best value and which should be avoided. That's how the CAST AI Cluster Autoscaler can select the best-matching instance types on its own or according to your preferences. Given that it's a managed service, you don't need to worry about upgrades, scalability, and availability; the CAST AI platform monitors clusters and is always ready to act promptly. Here's an example showing how closely the CAST AI autoscaler follows the actual resource requests in the cluster. Check how well your cluster is doing in terms of autoscaling and cost efficiency: see instance recommendations by connecting your cluster to our free, read-only Kubernetes cost monitoring module; it works with Amazon EKS and Kops as well as GKE and AKS. CAST AI clients save an average of … on their Kubernetes bills. Connect your cluster and see your costs in minutes, no credit card required. Get started.

2022-12-30 17:24:42
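The PDB advice above can be made concrete with a minimal manifest for the test-app Deployment used in the walkthrough. This is a sketch: the object name and the minAvailable value are illustrative choices, not taken from the article.

```yaml
# Minimal PodDisruptionBudget sketch for the walkthrough's test-app
# Deployment; minAvailable: 1 is an illustrative choice.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: test-app-pdb
spec:
  minAvailable: 1            # always keep at least one test-app pod running
  selector:
    matchLabels:
      app: test-app          # must match the Deployment's pod labels
```

With this in place, the Cluster Autoscaler may still drain nodes running test-app pods, but only as long as at least one replica stays available during the eviction.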
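The scale-down logic described in the article (sum pod *requests*, compare against node capacity, flag nodes below a utilization threshold) can be sketched in a few lines. This is an illustrative model, not the Cluster Autoscaler's real code; the class names, the field names, and the 0.5 default threshold are assumptions made for the sketch.

```python
# Sketch of the Cluster Autoscaler's scale-down utilization check.
# All names and the 0.5 threshold are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pod:
    cpu_request_m: int   # CPU request in millicores
    mem_request_mi: int  # memory request in MiB

@dataclass
class Node:
    cpu_capacity_m: int
    mem_capacity_mi: int
    pods: List[Pod] = field(default_factory=list)

def is_scale_down_candidate(node: Node, threshold: float = 0.5) -> bool:
    """A node is a candidate when utilization computed from pod *requests*
    (not live usage) stays below the threshold for both CPU and memory."""
    cpu_util = sum(p.cpu_request_m for p in node.pods) / node.cpu_capacity_m
    mem_util = sum(p.mem_request_mi for p in node.pods) / node.mem_capacity_mi
    return cpu_util < threshold and mem_util < threshold

# A 2-core / 4 GiB node running one small pod is underutilized...
quiet = Node(2000, 4096, [Pod(100, 128)])
# ...while a node whose pods request most of its CPU is not.
busy = Node(2000, 4096, [Pod(1500, 512), Pod(400, 256)])
print(is_scale_down_candidate(quiet))  # True
print(is_scale_down_candidate(busy))   # False
```

Note how a pod with no requests contributes zero to the sums, which is exactly why the article insists on setting requests everywhere: such pods make a busy node look idle to this calculation.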
Apple AppleInsider - Frontpage News New iPads, mostly new Apple TV, and old problems -- October 2022 in review https://appleinsider.com/articles/22/12/30/new-ipads-mostly-new-apple-tv-and-old-problems----october-2022-in-review?utm_medium=rss Apple revamped its iPad lineup, lost one designer and briefly regained a previous one, plus Musk bought Twitter. (Image caption: Apple Store Australia staff on strike. Source: Cameron Atfield, Sydney Morning Herald.) If Apple had its way, the real news of October would be the launch of the new tenth-generation iPad, the refreshed iPad Pro, and the lower-cost Apple TV 4K. Read more. 2022-12-30 17:45:05
News BBC News - Home Returns show Trump paid no taxes in 2020 https://www.bbc.co.uk/news/world-us-canada-64127825?at_medium=RSS&at_campaign=KARANGA losses 2022-12-30 17:09:04
News BBC News - Home Severe flooding causes road and rail disruption in Scotland https://www.bbc.co.uk/news/uk-scotland-64118732?at_medium=RSS&at_campaign=KARANGA scotland 2022-12-30 17:28:25
News BBC News - Home Ukraine hit by fresh wave of Iran drones - officials https://www.bbc.co.uk/news/world-europe-64125257?at_medium=RSS&at_campaign=KARANGA ukrainian 2022-12-30 17:29:56
News BBC News - Home Cody Fisher stabbing: Birmingham nightclub's licence suspended https://www.bbc.co.uk/news/uk-england-birmingham-64124926?at_medium=RSS&at_campaign=KARANGA fisher 2022-12-30 17:31:28
News BBC News - Home Chinese fighter jet flies 20 feet from US military plane https://www.bbc.co.uk/news/world-us-canada-64129801?at_medium=RSS&at_campaign=KARANGA china 2022-12-30 17:25:12
Business Diamond Online - New articles [A psychiatrist explains] Can't bring yourself to like yourself… a "reversal in thinking" that instantly lightens your heart - How to let go of mental attachments, by psychiatrist Tomy https://diamond.jp/articles/-/314847 Psychiatrist Tomy, who had kept his face hidden until now, appeared on NHK's "Asaichi" and revealed it for the first time. 2022-12-31 02:50:00
Business Diamond Online - New articles [Top marks on the Common Test] The "ultimate study schedule" starting January 1 - The 90-day comeback-pass program https://diamond.jp/articles/-/315271 comeback 2022-12-31 02:45:00
Business Diamond Online - New articles [Working hard but not earning] The "fatal mistakes" blog beginners tend to make - How I earned 500 million yen from blogging https://diamond.jp/articles/-/315286 fatal 2022-12-31 02:40:00
Business Diamond Online - New articles The one mindset that rests your mind at year's end - Nobuyuki Sakuma's crafty work techniques https://diamond.jp/articles/-/315017 Nobuyuki Sakuma 2022-12-31 02:35:00
Business Diamond Online - New articles [The gods] are watching. Really?! The top 2 surprising things lucky and wealthy people do in the new year - Seasonal calendar https://diamond.jp/articles/-/315367 [The gods] are watching. 2022-12-31 02:30:00
