Posted: 2023-03-22 20:39:16 RSS feed digest for 2023-03-22 20:00 (42 items)

Category / Site / Article title or trend word / Link URL / Frequent words, summary, or search volume / Registration date
ROBOT ロボスタ What is "iCADs", a new AI-based advertising method? Fuji TV's first use of it inside a mini-program video https://robotstart.info/2023/03/22/my-routine-icads.html 2023-03-22 10:47:53
IT ITmedia (all articles) [ITmedia Business Online] WBC championship trophy made by US jeweler Tiffany, celebrating Samurai Japan https://www.itmedia.co.jp/business/articles/2303/22/news185.html itmedia 2023-03-22 19:46:00
IT ITmedia (all articles) [ITmedia News] Long-running camera news site "DPReview" to shut down after 25 years amid Amazon layoffs https://www.itmedia.co.jp/news/articles/2303/22/news194.html amazon 2023-03-22 19:45:00
IT ITmedia (all articles) [ITmedia News] Twitter account claiming to be Shohei Ohtani appears; authenticity unclear but it already has over 150,000 followers https://www.itmedia.co.jp/news/articles/2303/22/news192.html itmedia 2023-03-22 19:20:00
IT ITmedia (all articles) [ITmedia News] Japanese users make heavy use of the "new Bing": Microsoft Japan publishes usage trends showing the world's highest number of searches per user https://www.itmedia.co.jp/news/articles/2303/22/news193.html itmedia 2023-03-22 19:17:00
IT ITmedia (all articles) [ITmedia News] NVIDIA offers enterprises "DGX Cloud", a cloud version of its AI-development supercomputer, integrated with the Oracle, Microsoft, and Google clouds https://www.itmedia.co.jp/news/articles/2303/22/news191.html dgxcloud 2023-03-22 19:11:00
TECH Techable "Callback Drop", which lets companies easily create their own NFT projects, launches https://techable.jp/archives/200723 callback 2023-03-22 10:00:08
Ruby New posts tagged Ruby - Qiita How to use the Date class https://qiita.com/ryuuya0921/items/998a5d2aaa64a5b14698 datene 2023-03-22 19:17:37
AWS New posts tagged AWS - Qiita AWS S3 https://qiita.com/valenciahiroki/items/55fb57180ea028e827d3 simple 2023-03-22 19:46:29
Azure New posts tagged Azure - Qiita A simple Terraform setup that routes from an Azure Load Balancer to a VM (HTTP server) https://qiita.com/tkhs1121/items/4211702265afc8757edd azureloadbalancer 2023-03-22 19:58:03
Tech blog Mercari Engineering Blog Our efforts to empower our colleagues continue #WomenCareerTalk https://engineering.mercari.com/blog/entry/20230320-a4d3d57996/ 2023-03-22 12:00:03
Tech blog Developers.IO Splitting DocumentRoot in Apache and accessing it through an ALB https://dev.classmethod.jp/articles/jw-divide-documentroot-from-apache-and-try-to-access-it-as-alb/ Hello, this is Kim Jaewook from Classmethod. In this post I have summarized how to split DocumentRoot in Apache and access it through an ALB. 2023-03-22 10:38:44
Tech blog Developers.IO Points to note when estimating by tag with the Amazon DevOps Guru cost estimation tool https://dev.classmethod.jp/articles/amazon-devops-guru-cost-estimate-tag/ amazondevopsguru 2023-03-22 10:33:25
Tech blog Developers.IO What to check on the S3 bucket when you get Access Denied while enabling ALB access logs https://dev.classmethod.jp/articles/alb-access-log-permission-error-s3-bucket/ accessdenied 2023-03-22 10:06:36
Tech blog Developers.IO A look back at how to neatly save the displayed contract terms as a PDF when completing a contract in Safari on iPhone #NotionAI https://dev.classmethod.jp/articles/descript-with-notion-ai/ iphone 2023-03-22 10:03:50
Overseas TECH MakeUseOf Can You Close All the Open Apps on Your iPhone at Once? https://www.makeuseof.com/how-to-close-multiple-apps-simultaneously-iphone/ close 2023-03-22 10:46:16
Overseas TECH MakeUseOf How to Install Multiple Mac Apps at Once https://www.makeuseof.com/how-to-install-multiple-mac-apps-at-once/ terminal 2023-03-22 10:30:16
Overseas TECH DEV Community Create a Pull Request from Visual Studio Code https://dev.to/this-is-learning/create-a-pull-request-from-visual-studio-code-18nh Did anyone say the word "productivity"? We're all used to the GitHub user interface, so we usually navigate to GitHub, search for our repository, and click the button to create a Pull Request. But did you know you can do all of this from Visual Studio Code? In this article we'll see how to create a Pull Request from Visual Studio Code in literally two clicks. This article is also the first of a trilogy about the GitHub + VS Code workflow, so stay tuned to see what else you can do. As usual the article is paired with a YouTube video; fun fact, while recording it I noticed the Visual Studio Code extension had a little bug, which was the perfect opportunity to contribute to the project and to use the extension to create a Pull Request fixing a bug of the extension itself. If you don't like watching videos, everything is explained below with screenshots. Install the official GitHub extension: the first thing to do is install the official "GitHub Pull Requests and Issues" extension for Visual Studio Code, which you can find in the marketplace by searching for "GitHub". Note: don't get confused, the extension called simply "GitHub" is an old, deprecated one; the new one is called "GitHub Pull Requests and Issues". As soon as the extension is installed you'll see a new icon in the Activity Bar on the left side of Visual Studio Code. Opening it the first time will ask you to log in to GitHub: just click the button and a browser tab opens where you can sign in to your GitHub account. Create a Pull Request: now that you're logged in, you can create a Pull Request from Visual Studio Code by clicking the icon on the top bar. If you're already on a pushed branch, a panel opens with the following fields. "Merge changes from": lets you select the origin branch, the one containing the changes you want to merge; by default it selects the branch you're currently on, and you can also pick the remote if you have more than one. In the most common open-source situation you'll have your fork and the original repository, and here you'll usually want to select your fork. "Into": similar to the previous field, it lets you select a remote and a branch, this time the destination. If you're working on your own project the remote will likely be the same as before, but if you're contributing to someone else's project you'll want the original repository; the branch is usually main, but read the project's contribution guidelines in case a specific branch should be used. "Title": your PR's title; by default it uses the message of the last commit, but you can change it to whatever you want. "Description": your PR's description; if the project has a template in .github/PULL_REQUEST_TEMPLATE.md, this box comes pre-filled with it, and as with the title you're free to change it. "Create as draft": the final option before the Create button is a checkbox that creates the PR as a draft, useful if you want to open the PR but aren't ready to have it merged yet. "Create": once you're happy with the options you selected, click the Create button and the PR will be created on GitHub. "Compare changes": wait a moment, before clicking Create you can also open the Compare changes panel right below. From it you can see all the changes that will be included in the Pull Request, in the diff format you're used to in VS Code: green files are added, red are deleted, and yellow are modified. "Add labels": labels are also supported, but the button only shows up if you hover the mouse over the top bar; to be honest I'm not sure whether that's a feature or a bug, and I might open an issue on the extension's repository to ask. If you click the button, a quick-pick menu opens in VS Code letting you select the labels to add to the PR from those available in the repository. And as mentioned in the video, this is exactly where I found the bug (PR by Balastrong: "Allow empty labels array to be pushed to set labels, to remove all of them"; if there are no checked labels in the quick-pick menu, the set-labels command is not posted, which prevents removing the last selected labels; the if was kept because labelsToAdd can be undefined if the menu is closed with Esc, but the length check is what causes the bug and hence should be removed). Creating the Pull Request: at this point we've seen pretty much everything we needed; we created the Pull Request with the Create button, and on GitHub you can see it's there, ready to be reviewed. Speaking of reviewing Pull Requests, wouldn't it be cool if that were also possible from Visual Studio Code? That's exactly what the next article in this three-part series covers. The post closes with the author's usual links (Discord server, YouTube channel "DevLeonardo", and the profile of Leonardo Montini). 2023-03-22 10:46:18
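As a terminal-friendly aside to the entry above, the extension it describes can also be installed non-interactively with the VS Code CLI; this is a minimal sketch, assuming the code command is on your PATH and that the marketplace ID of the GitHub Pull Requests and Issues extension is GitHub.vscode-pull-request-github (worth double-checking on its marketplace page):
# Install the GitHub Pull Requests and Issues extension from the command line
code --install-extension GitHub.vscode-pull-request-github
# Confirm the extension is now installed
code --list-extensions | grep -i pull-request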
Overseas TECH DEV Community SearQ: A RESTful search engine https://dev.to/daviducolo/searq-a-restful-search-engine-2b58 Hello everyone! Today I want to tell you about my personal Rails project, SearQ, a search engine based on RSS feeds that I developed myself as an open-source REST API. SearQ was born from the idea of creating a fast and scalable search engine that can handle a vast amount of information from different sources. Using RSS feeds, SearQ extracts the information needed for searches, making it an attractive option for those looking to integrate a search engine into their projects. SearQ also offers two additional tools, Export and Flow: Export allows users to extract a query's results in CSV format, while Flow enables the creation of a custom JSON endpoint based on a query. These tools provide even more flexibility and customization for developers working with SearQ. The most interesting thing about SearQ is the REST API I developed to allow developers to integrate the search engine into their applications; thanks to this API, SearQ becomes extremely flexible and customizable, adapting to the needs of individual projects. To develop SearQ I used Ruby with Meilisearch (an open-source, lightning-fast, and hyper-relevant search engine that fits effortlessly into your apps, websites, and workflow) and several open-source libraries. The real strength of SearQ is the ability to customize the search experience: through RSS feeds you can choose the sources from which to extract information, making it possible to build thematic, specialized search engines. SearQ is an evolving project that I am constantly updating and improving, and I am always open to suggestions and collaborations. The post embeds the searq.org GitHub repository card: SearQ, the RSS search engine that is both speedy and free, offers a RESTful API that simplifies searching for data from RSS (RDF Site Summary) feeds. Instead of relying on traditional web crawling, it leverages the information contained in RSS feeds to present the most relevant and up-to-date results to the user. The engine works by aggregating information from multiple RSS feeds, indexing the content, and letting users search through it with keywords or phrases; because the information is pulled from a variety of sources, users may also be exposed to a wider range of perspectives and opinions on a given topic. Example request: curl -G -H "Authorization: Token TOKEN" -d "q=…" … If you are a developer looking for a REST API for search, take a look at SearQ: you will find an open-source project with many customization options and a fast, scalable search experience. 2023-03-22 10:40:38
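For anyone who wants to try a request against SearQ, the sketch below expands the truncated curl command above into a runnable form; the endpoint URL and the query value are assumptions made purely for illustration (the entry elides the real URL), so check searq.org for the actual API reference and token setup:
# Hypothetical SearQ query: search the index for articles mentioning "kubernetes".
# The https://www.searq.org/search endpoint path is an assumed placeholder, not taken from the docs.
curl -G "https://www.searq.org/search" \
  -H "Authorization: Token $TOKEN" \
  --data-urlencode "q=kubernetes"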
Overseas TECH DEV Community EKS cluster Monitoring for AWS Fargate with Prometheus and managed Grafana https://dev.to/monirul87/eks-cluster-monitoring-for-aws-fargate-with-prometheus-and-managed-grafana-1h2f First, we need to create a node group in our existing EKS cluster, because host-level metrics are inaccessible on Fargate. Node group for Prometheus: I used IaC (Terraform) to create an EKS managed node group of EC2 worker nodes for Prometheus: an aws_eks_node_group resource attached to the existing cluster, spread over two private subnets, with an on-demand capacity type, a small scaling config, and a node IAM role (an aws_iam_role with an EC2 assume-role policy) to which the AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly managed policies are attached. Alternatively, the node group can be created manually in the AWS Management Console under EKS > Your cluster > Compute > Add node group. We have to use EC2 for Prometheus since it needs volumes mounted to it, and while creating the node group we have to attach an IAM role to the EC2 worker nodes; for easy demonstration I created a new IAM role and attached the policies above. Run the following command to confirm that the aws-node pods on the EC2 worker nodes are in the Running state:
kubectl get pods -n kube-system | grep aws-node
Note: node-exporter runs as a DaemonSet and is responsible for collecting metrics of the host it runs on. Most of these are low-level operating system metrics such as vCPU, memory, network, and disk of the host machine (not containers), plus hardware statistics. These metrics are inaccessible to Fargate customers since AWS is responsible for the health of the host machine.
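Alongside the aws-node check above, it can help to confirm that the new EC2 worker nodes have actually joined the cluster and are Ready before scheduling Prometheus on them; a minimal sketch, assuming a managed node group named monirul-ec2-prometheus-node (the name here is illustrative):
# List only the nodes belonging to the new managed node group; EKS labels them with eks.amazonaws.com/nodegroup
kubectl get nodes -l eks.amazonaws.com/nodegroup=monirul-ec2-prometheus-node
# Or list everything and check that the EC2 instances appear alongside the Fargate nodes
kubectl get nodes -o wide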
Install the EBS CSI driver: Prometheus and Grafana need persistent storage attached to them, i.e. a PV (Persistent Volume) in Kubernetes terms. For stateful workloads to use Amazon EBS volumes as PVs, we have to add the aws-ebs-csi-driver to the cluster. Associating an IAM role with a service account: before adding the aws-ebs-csi-driver we need to create an IAM policy and associate it with a Kubernetes service account. Download the example policy file (ebs-csi-policy.json) with curl, create a new IAM policy from it with aws iam create-policy (the article sets EBS_CSI_POLICY_NAME=AmazonEBSCSIPolicy and an eu-west AWS_REGION), and capture the new policy's ARN into EBS_CSI_POLICY_ARN using aws iam list-policies with a --query filter. Then attach the new policy to a Kubernetes service account:
eksctl create iamserviceaccount --cluster $EKS_CLUSTER_NAME --name ebs-csi-controller-irsa --namespace kube-system --attach-policy-arn $EBS_CSI_POLICY_ARN --override-existing-serviceaccounts --approve
And now we're ready to install the aws-ebs-csi-driver. Setting up the aws-ebs-csi-driver Helm repo: assuming Helm is installed, add the aws-ebs-csi-driver repository with helm repo add, run helm repo update, and install the driver:
helm upgrade --install aws-ebs-csi-driver --namespace kube-system --set serviceAccount.controller.create=false --set serviceAccount.snapshot.create=false --set enableVolumeScheduling=true --set enableVolumeResizing=true --set enableVolumeSnapshot=true --set serviceAccount.snapshot.name=ebs-csi-controller-irsa --set serviceAccount.controller.name=ebs-csi-controller-irsa aws-ebs-csi-driver/aws-ebs-csi-driver
Creating the prometheus namespace:
kubectl create namespace prometheus --dry-run=client -o yaml | kubectl apply -f -
Setting up the Prometheus Helm repositories: add the kube-state-metrics and prometheus-community repositories with helm repo add (repository URLs as in the article). Setting up Prometheus:
helm upgrade --install --wait prometheus prometheus-community/prometheus --namespace prometheus --create-namespace --version $CHART_VERSION -f prometheus/prometheus_values.yaml --debug
additionally setting alertmanager.persistentVolume.storageClass and server.persistentVolume.storageClass to a gp-type EBS storage class. Verify that the Prometheus pods are running with kubectl get pods --namespace prometheus; you should see the alertmanager, kube-state-metrics, node-exporter (one per EC2 node), pushgateway, and server pods in the Running state. The article then reproduces the full prometheus_values.yaml used for the install. It is essentially the upstream chart's default values with the relevant overrides: persistent volumes enabled for the server and alertmanager, the standard Kubernetes scrape configs (API servers, nodes, cAdvisor, service endpoints, pods, pushgateway, and blackbox-style probes), the usual alerting and recording rule file wiring, and the alertmanager, kube-state-metrics, node-exporter, and pushgateway subcharts enabled. Note: you have to add nodeAffinity for the node exporter (and the Prometheus server) so they avoid Fargate, i.e. requiredDuringSchedulingIgnoredDuringExecution node selector terms matching kubernetes.io/os=linux, kubernetes.io/arch in (amd64, arm64), and eks.amazonaws.com/compute-type NotIn (fargate).
Ingress for the Prometheus URL: add an Ingress for Prometheus along the following lines: an Ingress in the prometheus namespace with ingressClassName: alb and the AWS Load Balancer Controller annotations used in the article (an ssl-redirect action, an ACM certificate ARN, HTTP/HTTPS listen ports, an internet-facing scheme, success codes, and target-type ip), with rules that route the ssl-redirect path and the default path prefix to the prometheus-server service. AWS Managed Grafana: add this Prometheus URL as a data source in Grafana. That's it, it's now ready for creating dashboards. 2023-03-22 10:33:34
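Before (or instead of) exposing Prometheus through the ALB ingress described above, a quick way to confirm the server is up and scraping targets is to port-forward to it locally; a minimal sketch, assuming the Helm release is named prometheus in the prometheus namespace, which yields a service called prometheus-server consistent with the ingress backend above:
# Forward the Prometheus server service to localhost
kubectl -n prometheus port-forward svc/prometheus-server 9090:80
# Then open http://localhost:9090/targets in a browser and check that the scrape targets are up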
Apple AppleInsider - Frontpage News Keyport Pivot 2.0 & Popl Review: Streamlining everyday carry https://appleinsider.com/articles/23/03/22/keyport-pivot-20-popl-review-streamlining-everyday-carry?utm_medium=rss Keyport's Pivot is a great way to build your own keychain multi-tool, and its new collaboration with Popl adds a small yet useful e-business card. Keyport integrates daily-carry items like keys, fobs, and multi-tools into a customizable package that can fit on your keychain or carabiner. The Pivot can hold up to eight keys and tool inserts, plus two additional modules on the sides of the chassis. The Pivot requires some setup, but Keyport's YouTube channel provides a tutorial as well as overviews of specific tools. To summarize: you unscrew the bolt holding the Pivot together, insert your desired tools and keys, and put it back on; a coin or flathead screwdriver does fine for this. 2023-03-22 10:41:00
Overseas TECH Engadget Ubisoft's new 'Ghostwriter' AI tool can automatically generate video game dialogue https://www.engadget.com/ubisofts-ghostwriter-ai-tool--automatically-generate-video-game-dialogue-103510366.html?src=rss A good open-world game is filled with little details that add to a player's sense of immersion, and one of the key elements is background chatter. Each piece of dialog you hear is known as a "bark" and must be individually written by the game's creators, a time-consuming, detailed task. Ubisoft, maker of popular open-world series like Assassin's Creed and Watch Dogs, hopes to shorten this process with Ghostwriter, a machine-learning tool that generates first drafts of barks. To use Ghostwriter, narrative writers input the character and the type of interaction they are looking to create; the tool then produces variations, each with two slightly different options for writers to review. As the writers edit the drafts, Ghostwriter updates, ideally producing more tailored options going forward. The idea is to save game writers time to focus on the big stuff. "Ghostwriter was created hand in hand with narrative teams to help them complete a repetitive task more quickly and effectively, giving them more time and freedom to work on games' narrative, characters and cutscenes," Ubisoft states in a video release. Ubisoft touts Ghostwriter as an "AI" tool, the big thing at the moment, with seemingly every company from Google to Microsoft hopping onboard the AI train. Like similar tools, though, the question is how to get people, namely staff, to actually use it. According to Ben Swanson, the R&D scientist at Ubisoft who created Ghostwriter, the biggest challenge now is integrating the tool into production; to facilitate this, the production team created Ernestine, a back-end tool that lets anyone create new machine-learning models in Ghostwriter. If Ghostwriter proves effective, writers should be able to spend their time and energy building more detailed and engaging gaming worlds to explore. This article originally appeared on Engadget. 2023-03-22 10:35:10
Overseas science NYT > Science 8 Dolphins Dead After Washing Ashore in New Jersey https://www.nytimes.com/2023/03/22/nyregion/dolphins-dead-new-jersey-shore.html beaches 2023-03-22 10:05:54
Medical CBnews (healthcare/nursing) Basic policy to add a provision on "improving individual behavior and health status": MHLW publishes draft revision, to apply from April 2024 https://www.cbnews.jp/news/entry/20230322193814 Health Promotion Act 2023-03-22 19:55:00
Medical CBnews (healthcare/nursing) No additional safety measures for Xocova tablets at this time: safety measures committee of the Pharmaceutical Affairs and Food Sanitation Council; MHLW will keep watching case accumulation https://www.cbnews.jp/news/entry/20230322192508 safety measures 2023-03-22 19:40:00
Finance Financial Services Agency website Published the agenda of the 6th meeting of the "Study Group on Impact Investment" https://www.fsa.go.jp/singi/impact/siryou/20230322.html agenda 2023-03-22 11:15:00
Finance Financial Services Agency website Posted a summary of the post-cabinet-meeting press conference (March 17, 2023) by Minister of Finance and Minister of State for Special Missions Suzuki https://www.fsa.go.jp/common/conference/minister/2023a/20230317-1.html Minister of State for Special Missions 2023-03-22 11:15:00
News @Nikkei digital edition Daikin will acquire two US companies related to large air-conditioning systems for data centers, investing 30 billion yen. It aims to capture demand from data centers growing with the spread of 5G and to take the top share of the US air-conditioning market. #NikkeiEveningScoop https://t.co/tbBigRFlUQ https://twitter.com/nikkei/statuses/1638495788090159104 2023-03-22 11:00:12
News @Nikkei digital edition Hirogin Holdings swings to a net profit decline for the fiscal year ending March 2023 after cutting losses on foreign bonds https://t.co/7W5Ikx1Rpt https://twitter.com/nikkei/statuses/1638493700421742593 net profit decline 2023-03-22 10:51:54
News @Nikkei digital edition Prime Minister Kishida, pressed into his first visit to a war zone, shows Japan's stance as G7 chair https://t.co/fP2jHh6Ikl https://twitter.com/nikkei/statuses/1638493699331207168 prime minister 2023-03-22 10:51:54
News @Nikkei digital edition Hay fever may lower the risk of developing cancer; immune activity possibly involved https://t.co/mBn85i3MST https://twitter.com/nikkei/statuses/1638487936282009602 hay fever 2023-03-22 10:29:00
News @Nikkei digital edition [Editorial] China should refrain from "supporting Russia" https://t.co/dENCIRXZ5P https://twitter.com/nikkei/statuses/1638485168494346240 editorial 2023-03-22 10:18:00
News @Nikkei digital edition Xi Jinping casts himself as mediator; China-Russia joint statement says "dialogue is the best way" https://t.co/WTqZl4P9Lg https://twitter.com/nikkei/statuses/1638483382073192449 joint statement 2023-03-22 10:10:54
News @Nikkei digital edition [Nikkei exclusive] Kyoto City's new vacant-home tax heads for government approval, to be introduced as early as 2026 https://t.co/vtgEj8efn9 https://twitter.com/nikkei/statuses/1638480882985222146 vacant homes 2023-03-22 10:00:58
News BBC News - Home Bafta TV Awards 2023: This is Going to Hurt and The Responder lead nominations https://www.bbc.co.uk/news/entertainment-arts-65026718?at_medium=RSS&at_campaign=KARANGA nominations 2023-03-22 10:20:05
News BBC News - Home Suspect questioned after man set alight near Birmingham mosque https://www.bbc.co.uk/news/uk-england-birmingham-65036283?at_medium=RSS&at_campaign=KARANGA birmingham 2023-03-22 10:47:58
News BBC News - Home Ukraine war: Four dead as Russia launches new attack on cities https://www.bbc.co.uk/news/world-europe-65036208?at_medium=RSS&at_campaign=KARANGA crimea 2023-03-22 10:09:43
News BBC News - Home Usyk walks away from Fury fight talks https://www.bbc.co.uk/sport/boxing/65037833?at_medium=RSS&at_campaign=KARANGA oleksandr 2023-03-22 10:16:59
News BBC News - Home Six Nations 2023: Freddie Steward's red card against Ireland rescinded https://www.bbc.co.uk/sport/rugby-union/65035837?at_medium=RSS&at_campaign=KARANGA ireland 2023-03-22 10:40:01
News Newsweek Female guest faints and collapses without responding to the host: tense footage from a live US TV broadcast https://www.newsweekjapan.jp/stories/culture/2023/03/post-101167.php "In the meantime our thoughts are with Alyssa and we are praying she recovers soon." Schwartz, who was taken to hospital, herself later wrote in a Facebook post, "Thank you for the emails, calls, and get-well messages." 2023-03-22 19:33:00
IT Weekly ASCII Yahoo! MAP (Android version) starts offering a "tab feature" that lets users open places, routes, and more in multiple tabs https://weekly.ascii.jp/elem/000/004/129/4129577/ yahoo 2023-03-22 19:40:00
IT Weekly ASCII The producer of "FFXI" is changing: current director Fujito will take on both roles https://weekly.ascii.jp/elem/000/004/129/4129591/ mmorpg 2023-03-22 19:30:00
