python |
New posts tagged Python - Qiita |
Practicing linear regression with PyTorch |
https://qiita.com/Takeshi_Sue/items/9fb8a9256f1ff1aaf8ce
|
targets = [2 * x + 3 for x in inputs]. Randomly create the values of a and c before they are learned. |
2021-08-17 01:36:31 |
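The excerpt above only hints at the article's code: it fits targets generated as y = 2x + 3, starting from randomly initialized parameters (called a and c in the excerpt). Below is a minimal, hypothetical PyTorch sketch of the same idea using `nn.Linear`, whose randomly initialized weight and bias play the roles of a and c; the data range, learning rate, and iteration count are assumptions, not taken from the article.

```python
import torch

# Toy data following the excerpt: targets = [2*x + 3 for x in inputs]
inputs = torch.arange(10, dtype=torch.float32).unsqueeze(1)
targets = 2 * inputs + 3

# nn.Linear starts with a random weight and bias, i.e. random
# predictions before any training happens.
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```

After training, the learned weight should approach 2 and the bias should approach 3, recovering the coefficients used to generate the targets.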
python |
New posts tagged Python - Qiita |
My AI learning log at Aidemy |
https://qiita.com/murao99999/items/4a0c14e45fb81c61495a
|
I have worked through the whole curriculum, but I keenly feel that repeated study is necessary to truly understand it. |
2021-08-17 01:22:35 |
python |
New posts tagged Python - Qiita |
Reading a baby's facial expressions with a Raspberry Pi and Azure's Face API |
https://qiita.com/akahira/items/8a227d7409a7f751ee7c
|
Rather than just watching over the baby, I thought that if I could read the baby's expressions with Azure's Face API, I could infer the feelings of a baby who cannot yet talk, so I built a device that sends the baby's mood to LINE based on the Face API results. |
2021-08-17 01:05:59 |
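As a rough illustration of the approach described above (not the article's actual code), the sketch below sends an image to the Face API detect endpoint asking for emotion attributes, then picks the strongest emotion from one face result. The endpoint and key values are placeholder assumptions; the emotion field names follow the Face API's documented response shape.

```python
import json
import urllib.request

# Placeholder assumptions: substitute your own Azure region and key.
ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com"
SUBSCRIPTION_KEY = "YOUR_FACE_API_KEY"


def detect_faces(image_bytes):
    """Send one camera frame to the Face API, asking for emotion scores."""
    req = urllib.request.Request(
        ENDPOINT + "/face/v1.0/detect?returnFaceAttributes=emotion",
        data=image_bytes,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def dominant_emotion(face):
    """Return the highest-scoring emotion for one detected face."""
    emotions = face["faceAttributes"]["emotion"]
    return max(emotions, key=emotions.get)
```

The string returned by `dominant_emotion` (for example "happiness" or "sadness") could then be forwarded to a messaging service such as LINE, as the article describes.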
js |
New posts tagged JavaScript - Qiita |
[Hands-on] Remote-debugging a web app on a server with VS Code (Part 1: JavaScript) |
https://qiita.com/_jiji/items/436115db740140f7dae8
|
Connecting VS Code to the web server: (1) click the Remote Explorer icon in the Activity Bar on the far left, (2) select "SSH Targets", (3) click the "Add New" icon. In "Enter SSH Connection Command", enter the SSH login user name and the web server's IP address (ssh user@<web server IP address>), and in "Select SSH configuration file to update", choose C:\Users\xxx\.ssh\config. Select the web server's IP address shown in the sidebar and click the "Connect to Host in New Window" icon, choose Continue at "Are you sure you want to continue?", and enter the SSH login password. Opening the content folder on the web server: once the status bar shows the connected state, click the Explorer icon in the Activity Bar, click "Open Folder", specify the web content folder the sample source was copied to (/var/www/html/sample), and click "OK". Enter the SSH login password again, and when the "Do you trust the authors of the files in this folder?" dialog appears, choose "Trust". The sidebar then shows the content folder's file tree, and the source code appears in the editor area. |
2021-08-17 01:35:04 |
Program |
New questions for [all tags] - teratail |
The Arduino's behavior changes drastically depending on where analogRead is placed |
https://teratail.com/questions/354685?rss=all
|
The other one is a system that outputs whitecount to the serial port when it detects an object. |
2021-08-17 01:05:51 |
Linux |
New posts tagged Ubuntu - Qiita |
Upgrading GitLab CE to a specified version |
https://qiita.com/TAKANEKOMACHI/items/e31164796e8320101f12
|
Upgrade steps for GitLab CE from one version to the next. Prerequisites: Ubuntu Bionic, GitLab CE. About the error shown by apt-get upgrade: in short, GitLab appears in the apt-get upgrade list, but from the currently installed version this would be a "major version upgrade". |
2021-08-17 01:18:17 |
GCP |
New posts tagged gcp - Qiita |
Using Google OAuth2 for GitLab authentication |
https://qiita.com/motoori/items/a1d85ad1f9fea6b2922d
|
Using Google OAuth2 for GitLab authentication. Overview: GCE, Ubuntu LTS, GitLab EE. When I tried to create a GKE cluster from GitLab, a message told me to complete GCP authentication first, so here is how to set up the credentials. Creating the credentials (prerequisite: a GCP project has already been created): go to Credentials > Create credentials > OAuth client ID and enter the following information. Application type: select Web application. Name: GitLab (anything easy to recognize is fine). Authorized JavaScript origins and authorized redirect URIs: the authorized redirect URI is fixed by GitLab; replace the domain part with the domain of your own environment. Also, Google does not accept raw IP addresses, so you must specify a domain name. |
2021-08-17 01:11:46 |
Azure |
New posts tagged Azure - Qiita |
Reading a baby's facial expressions with a Raspberry Pi and Azure's Face API |
https://qiita.com/akahira/items/8a227d7409a7f751ee7c
|
Rather than just watching over the baby, I thought that if I could read the baby's expressions with Azure's Face API, I could infer the feelings of a baby who cannot yet talk, so I built a device that sends the baby's mood to LINE based on the Face API results. |
2021-08-17 01:05:59 |
Ruby |
New posts tagged Rails - Qiita |
Rails Tutorial, Chapter 3: test-driven development |
https://qiita.com/masatom86650860/items/75f8002c3c55fc01cc6a
|
The error says "about is missing a template for request formats: text/html". Presumably creating the about page will make it pass, so I create a new file for the about page. |
2021-08-17 01:38:08 |
Overseas TECH |
DEV Community |
#100daysofcode [Day - 06 ] |
https://dev.to/alsiamworld/100daysofcode-day-06-305a
|
#100daysofcode [Day 06]. Hello everyone! Today I created a simple bank calculation page using Tailwind CSS & JavaScript DOM. You can deposit, withdraw, and check the balance after login. I'm a beginner at JS and just learning the JS DOM, and with this simple knowledge I have made this bank calculation page; the next day I will post my weekly project. Live Preview / Code Link. javascript programming beginner |
2021-08-16 16:47:27 |
Overseas TECH |
DEV Community |
Emacs package management with straight.el and use-package |
https://dev.to/jkreeftmeijer/emacs-package-management-with-straight-el-and-use-package-3oc8
|
Emacs includes a package manager named package.el, which installs packages from the official Emacs Lisp Package Archive, named GNU ELPA. GNU ELPA hosts a selection of packages, but most are available on MELPA, which is an unofficial package archive that implements the ELPA specification. To use MELPA, it has to be installed by adding it to the list of package.el package archives.

The built-in package manager installs packages through the package-install function. For example, to install the evil-commentary package from MELPA, call package-install inside Emacs:

    M-x package-install <RET> evil-commentary <RET>

Straight.el

Straight.el is an alternative package manager that installs packages through Git checkouts instead of downloading tarballs from one of the package archives. Doing so allows installing forked packages, altering local package checkouts, and locking packages to exact versions for reproducible setups.

Installation

The "Getting started" section in the straight.el README provides the bootstrap code to place inside ~/.emacs.d/init.el in order to install it:

    ;; Install straight.el
    (defvar bootstrap-version)
    (let ((bootstrap-file
           (expand-file-name "straight/repos/straight.el/bootstrap.el" user-emacs-directory))
          (bootstrap-version 5))
      (unless (file-exists-p bootstrap-file)
        (with-current-buffer
            (url-retrieve-synchronously
             "https://raw.githubusercontent.com/raxod502/straight.el/develop/install.el"
             'silent 'inhibit-cookies)
          (goto-char (point-max))
          (eval-print-last-sexp)))
      (load bootstrap-file nil 'nomessage))

Straight.el uses package archives like GNU ELPA as registries to find the linked repositories to clone from. Since these are checked automatically, there's no need to add them to the list of package archives.

While package.el loads all installed packages on startup, straight.el only loads packages that are referenced in the init file. This allows for installing packages temporarily without slowing down Emacs startup time on subsequent startups.

To create a truly reproducible setup, disable package.el in favor of straight.el by turning off package-enable-at-startup. Because this step needs to happen before package.el gets a chance to load packages, this configuration needs to be set in the early init file:

    ;; Disable package.el in favor of straight.el
    (setq package-enable-at-startup nil)

With this configuration set, Emacs will only load the packages installed through straight.el.

Usage

To use straight.el to install a package for the current session, execute the straight-use-package command:

    M-x straight-use-package <RET> evil-commentary <RET>

To continue using the package in future sessions, add the straight-use-package call to ~/.emacs.d/init.el:

    (straight-use-package 'evil-commentary)

To update an installed package, execute the straight-pull-package command:

    M-x straight-pull-package <RET> evil-commentary <RET>

To update the version lockfile, which is used to target the exact version to check out when installing, run straight-freeze-versions:

    M-x straight-freeze-versions <RET>

Use-package

Use-package is a macro to configure and load packages in Emacs configurations. It interfaces with package managers like package.el or straight.el to install packages, but is not a package manager by itself. For example, when using straight.el without use-package, installing and starting evil-commentary requires installing the package and starting it as two separate steps:

    (straight-use-package 'evil-commentary)
    (evil-commentary-mode)

Combined with use-package, the installation and configuration are unified into a single call to use-package:

    (use-package evil-commentary
      :config (evil-commentary-mode))

Aside from keeping configuration files tidy, having package configuration contained within a single call allows for more advanced package setups. For example, packages can be lazy-loaded, keeping their configuration code from executing until the package they configure is needed.

Installation

To install use-package with straight.el, use straight-use-package:

    ;; Install use-package
    (straight-use-package 'use-package)

Using straight.el with use-package

By default, use-package uses package.el to install packages. To use straight.el instead of package.el, pass the :straight option:

    (use-package evil-commentary
      :straight t)

To configure use-package to always use straight.el, use use-package to configure straight.el to turn on straight-use-package-by-default:

    ;; Configure use-package to use straight.el by default
    (use-package straight
      :custom (straight-use-package-by-default t))

Now installing any package using use-package uses straight.el, even when omitting the :straight option. Having both straight.el and use-package installed and configured to work together, the straight-use-package function isn't used anymore. Instead, all packages are installed and configured through use-package.

Usage

Use the use-package macro to load a package. If the package is not installed yet, it is installed automatically:

    (use-package evil-commentary)

Use-package provides keywords to add configuration, key bindings, and variables. Although there are many more options, some examples include :config, :init, :bind, and :custom.

:config and :init. The :config and :init keywords define code that's run right after or right before a package is loaded, respectively. For example, call evil-mode from the :config keyword to start Evil after loading its package. To turn off evil-want-C-i-jump right before Evil is loaded, instead of adding it to the early init file, configure it in the :init keyword:

    (use-package evil
      :init (setq evil-want-C-i-jump nil)
      :config (evil-mode))

:bind. Adds key bindings after a module is loaded. For example, to use consult-buffer instead of the built-in switch-to-buffer after loading the consult package, add a binding through the :bind keyword:

    (use-package consult
      :bind ("C-x b" . consult-buffer))

:custom. Sets customizable variables. The variables set through use-package are not saved in Emacs' custom file. Instead, all custom variables are expected to be set through use-package. In an example from before, the :custom keyword is used to set the straight-use-package-by-default configuration option after loading straight.el:

    (use-package straight
      :custom (straight-use-package-by-default t))

Summary

The resulting ~/.emacs.d/init.el file installs straight.el and use-package, and configures straight.el as the package manager for use-package to use:

    ;; Install straight.el
    (defvar bootstrap-version)
    (let ((bootstrap-file
           (expand-file-name "straight/repos/straight.el/bootstrap.el" user-emacs-directory))
          (bootstrap-version 5))
      (unless (file-exists-p bootstrap-file)
        (with-current-buffer
            (url-retrieve-synchronously
             "https://raw.githubusercontent.com/raxod502/straight.el/develop/install.el"
             'silent 'inhibit-cookies)
          (goto-char (point-max))
          (eval-print-last-sexp)))
      (load bootstrap-file nil 'nomessage))

    ;; Install use-package
    (straight-use-package 'use-package)

    ;; Configure use-package to use straight.el by default
    (use-package straight
      :custom (straight-use-package-by-default t))

The ~/.emacs.d/early-init.el file disables package.el to disable its auto-loading, causing all packages to be loaded through straight.el in the init file:

    ;; Disable package.el in favor of straight.el
    (setq package-enable-at-startup nil)

This is the only configuration set in the early init file. All other packages are installed and configured through use-package, which makes sure to load configuration options before packages are loaded if configured with the :init keyword.

Footnotes: Calling use-package would normally install straight.el, but since it's already installed, the installation is skipped and the configuration is set. Here, the call to use-package is only used to configure straight.el by setting the straight-use-package-by-default option. |
2021-08-16 16:37:12 |
Overseas TECH |
DEV Community |
Powering Kubernetes in the Cloud with Kuma Service Mesh |
https://dev.to/mbogan/powering-kubernetes-in-the-cloud-with-kuma-service-mesh-1cni
|
I recently decided I wanted to start cutting third-party cloud services out of my life. I purchased a shiny Raspberry Pi, which reminded me of the Amiga computers of my youth, and decided to try Nextcloud on it as a personal cloud. It was a far quicker process than I expected thanks to the awesome NextCloudPi project: within twenty minutes I had a running Nextcloud instance. However, I could only access it locally on my internal network, and accessing it externally is complicated if you don't have a static IP address or use dynamic DNS on a router that supports it. There are, of course, myriad ways to solve these problems, and NextCloudPi offers convenient solutions to many of them, but I was also interested in how Kubernetes might handle some of the work for me. Of course, this can mean I am using cloud providers of a different kind, but I would have portability, and with a combination of an Ingress and a service mesh I could move my hosting around as I wanted. In this post I walk through the steps I took to use Kong Ingress Controller and Kuma Service Mesh to accomplish at least some of what I was aiming for.

Prerequisites

To follow along, you need the following:

- A running Kubernetes cluster. I used Google Kubernetes Engine (GKE) so I wouldn't spend most of my time setting up a cluster, but most options should work for you. If you do use GKE, make sure you don't use the "autopilot" option; I initially did and hit issues later with the certification manager for creating SSL connections. Another important change to make is that when you create the cluster, change the Nodes in the Default pool to use the COS, not COS_CONTAINERD, image type. There are some underlying issues when using Kuma with GKE, as noted in this GitHub issue, and this is the currently recommended workaround; otherwise you will hit pod-initializing issues that affect certificate provisioning.
- The gcloud CLI tool makes interacting with clusters much easier; I recommend you install it and run gcloud init before continuing. When the cluster is ready, make sure you are connected to it by clicking on the cluster, then the connect icon, then copying the command under "Command line access" and pasting and running it in your terminal.
- I used Helm to roll out the resources that Nextcloud needed because it seemed easiest to me, but again, there are other options available.
- To install the Kubernetes certification manager for managing the cluster's certificates, needed to make it publicly accessible, I followed these installation instructions (kubectl apply -f the cert-manager manifest) and no changes were needed. If you use GKE, note the installation step about elevating permissions:

    kubectl create clusterrolebinding cluster-admin-binding \
      --clusterrole=cluster-admin \
      --user=$(gcloud config get-value core/account)

Install Kuma

A service mesh takes your cluster a step further and is useful for long-running or well-used clusters. The features between service meshes differ slightly, but most provide security, routing, and observability as a minimum. For this post I used Kuma, but other options are available. To add Kuma, follow steps one and two of the Kuma installation guide with Helm, which are the following:

    helm repo add kuma https://kumahq.github.io/charts
    kubectl create namespace kuma-system
    helm install --namespace kuma-system kuma kuma/kuma

Install and Set Up Nextcloud

Create a Namespace for Nextcloud. This does mean that you need to namespace some of your commands throughout the rest of this walkthrough; you do this by adding the -n nextcloud argument. The namespace adds Kuma as a sidecar annotation, meaning that Kuma connects to any resources that are part of the namespace. Save the following manifest as a file called namespace.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: nextcloud
      annotations:
        kuma.io/sidecar-injection: enabled

Send it to the cluster:

    kubectl apply -f namespace.yaml

Create a persistent volume claim. In my case it uses a pre-defined GKE storage class. Save the following as gke-pvc.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-nc
      namespace: nextcloud
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: <size>Gi

Send it to the cluster:

    kubectl apply -f gke-pvc.yaml

Add the Nextcloud Helm repository:

    helm repo add nextcloud https://nextcloud.github.io/helm/

Copy the default configuration values for Nextcloud to make it easier to make any future changes. You can also do this with command-line arguments, but I found using the configuration file tidier and easier to follow:

    helm show values nextcloud/nextcloud >> nextcloud-values.yaml

Change the values to match your setup and suit your preferences. For this example I changed the following to match my domain and persistent volume claim; you can find a full list of the configuration in the GitHub repository for the chart:

    nextcloud:
      host: nextcloud.chrischinchilla.com
    mariadb:
      master:
        persistence:
          enabled: true
          existingClaim: pvc-nc
          accessMode: ReadWriteOnce
          size: <size>Gi

Install the Nextcloud chart with Helm:

    helm install nextcloud nextcloud/nextcloud --namespace nextcloud --values nextcloud-values.yaml

The Nextcloud setup tells you what steps to take to access the service. Ignore those for now, as the next steps expose the cluster to the wide web.

Add Kong Ingress Controller

With kubectl or any Kubernetes dashboard, wait until all the needed containers are running and initialized. You need to set up the ingress in a couple of phases, and the order is important to get a certificate for secure connections. I followed the Kong ingress instructions with a couple of changes to suit my use case: I installed cert-manager earlier (read the prerequisites section), I used my personal DNS host (Netlify) to create an A record and match it to the external IP address, and as I used GKE, I updated cluster permissions following the steps here. Install the ingress controller with kubectl create -f and the Kong manifest.

You need to apply the ingress manifest twice, with and without a certificate. This is because you cannot generate a certificate without a domain that responds. The first time I applied the ingress to the cluster, I used the following manifest, saved as ingress.yaml. If you want to use the same, change the name and host values to match your domain:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nextcloud.chrischinchilla.com
      namespace: nextcloud
      annotations:
        kubernetes.io/ingress.class: kong
    spec:
      rules:
        - host: nextcloud.chrischinchilla.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: nextcloud
                    port:
                      number: <port>

Apply with:

    kubectl apply -f ingress.yaml

Next you need the cluster's external IP address, which you can find with kubectl get service -n kong kong-proxy or from your provider dashboard. Create a DNS record with your domain registrar or local router. Once the DNS changes propagate (probably the slowest part of this whole blog), open the URL for the Nextcloud instance. The server only responds on a non-secure HTTP connection; if you switch to a secure connection (HTTPS), you see a warning. To fix this you need to take some further steps.

First, request a TLS certificate from Let's Encrypt. I used the following ClusterIssuer definition, saved as issuer.yaml. I used the email address that matches my GKE account, just to be sure:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
      namespace: cert-manager
    spec:
      acme:
        email: EMAIL_ADDRESS
        privateKeySecretRef:
          name: letsencrypt-prod
        server: https://acme-v02.api.letsencrypt.org/directory
        solvers:
          - http01:
              ingress:
                class: kong

Apply with:

    kubectl apply -f issuer.yaml

Update ingress.yaml to include new annotations for the cert-manager and a tls section. Again, make sure to change the name, secretName, and host values to match your domain:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nextcloud.chrischinchilla.com
      namespace: nextcloud
      annotations:
        kubernetes.io/tls-acme: "true"
        cert-manager.io/cluster-issuer: letsencrypt-prod
        kubernetes.io/ingress.class: kong
    spec:
      tls:
        - secretName: nextcloud.chrischinchilla.com
          hosts:
            - nextcloud.chrischinchilla.com
      rules:
        - host: nextcloud.chrischinchilla.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: nextcloud
                    port:
                      number: <port>

Apply with:

    kubectl apply -f ingress.yaml

Now the secure connection and certificate work, and you can log in to use the Nextcloud instance. You can also confirm the certificate exists with the following command:

    kubectl -n nextcloud get certificates

Add Features to Service Mesh

The Ingress works as intended. While the service mesh also works, it's not adding much, so I decided to leverage the built-in metrics monitoring. I followed the steps in the Kubernetes Quickstart, including updating and reapplying the mesh. You need to install the kumactl tool to manage some Kuma features; read the Kuma CLI guide for more details. Enable Kuma metrics on the cluster with:

    kumactl install metrics | kubectl apply -f -

This command provisions a new kuma-metrics namespace with all the services required to run the metric collection and visualization. This can take a while as Kubernetes downloads all the required resources. Create a file called mesh.yaml that contains the following:

    apiVersion: kuma.io/v1alpha1
    kind: Mesh
    metadata:
      name: default
    spec:
      mtls:
        enabledBackend: ca
        backends:
          - name: ca
            type: builtin
      metrics:
        enabledBackend: prometheus
        backends:
          - name: prometheus
            type: prometheus

Apply the manifest:

    kubectl apply -n kuma-system -f mesh.yaml

Ignore the artificial metrics step and instead enable Nextcloud metrics by updating the metrics section of nextcloud-values.yaml to enable metrics, set the metrics exporter image to use, and add annotations to the pods. Some of these lines are already in the nextcloud-values.yaml file, and you need to uncomment them:

    metrics:
      enabled: true
      replicaCount: 1
      https: false
      timeout: 5s
      image:
        repository: xperimental/nextcloud-exporter
        tag: <version>
        pullPolicy: IfNotPresent
      service:
        type: ClusterIP
        annotations:
          prometheus.io/scrape: "true"
          prometheus.io/port: "<port>"
        labels: {}

You then need to delete and reinstall Nextcloud with the new values. Do so using the following commands:

    helm delete nextcloud --namespace nextcloud
    helm install nextcloud nextcloud/nextcloud --namespace nextcloud --values nextcloud-values.yaml

Open the Grafana dashboard with:

    kubectl port-forward svc/grafana -n kuma-metrics <port>

And then start watching and analyzing metrics by opening the port on the domain set up above.

Next Cloud Steps

In this post I looked at creating a Nextcloud instance with Kubernetes, enabling web access to the cluster with an ingress, and enabling some features specific to a service mesh. There's a lot more mesh-relevant functionality that you could add to something like Nextcloud. For example, you could create and enforce regional data sovereignty by hosting Kubernetes instances in different regional data centers. Or you could use the multi-zone feature to manage routing, or the DNS feature to manage domain resolution instead of an external provider. Finally, at the moment the cluster uses account details for various services as defined in the nextcloud-values.yaml file. This is partially secure, as you don't need to check this file into version control. Instead, I could use the secrets feature to rotate access details at run time, enabling maximum security. |
2021-08-16 16:32:20 |
Apple |
AppleInsider - Frontpage News |
How to use window management in macOS Monterey |
https://appleinsider.com/articles/21/08/16/how-to-use-window-management-in-macos-monterey?utm_medium=rss
|
Apple hasn't introduced new features to window management in macOS Monterey, but it's made the existing ones easier to find, and it has made some refinements. The forthcoming macOS Monterey has taken on some new iPad-like options and icons for Full Screen and Split View. The best features are the ones you can find. With the forthcoming macOS Monterey, more users are going to be benefitting from full-screen and Split View apps, simply because how to use them is clearer. Read more |
2021-08-16 16:36:13 |
Apple |
AppleInsider - Frontpage News |
Best Deals August 16 - Satechi 20% off, $229.99 2TB Solid State Drive, and more! |
https://appleinsider.com/articles/21/08/16/best-deals-august-16---satechi-20-off-22999-2tb-solid-state-drive-and-more?utm_medium=rss
|
Monday's best deals include 20% off in Satechi's Back to School sale, a Back to School supplies sale on Amazon, a discount on an Asus gaming router, and more. Shopping online for the best discounts and deals can be an annoying and challenging task, so rather than sifting through miles of advertisements, check out this list of sales we've hand-picked just for the AppleInsider audience. Read more |
2021-08-16 16:29:20 |
Apple |
AppleInsider - Frontpage News |
Amazon's hidden AirPods Pro deal drives price down to $179.99 |
https://appleinsider.com/articles/21/08/16/amazons-hidden-airpods-pro-deal-drives-price-down-to-17999?utm_medium=rss
|
Monday's best AirPods deals feature additional savings at checkout on AirPods Pro, driving the price down to $179.99 at Amazon. This is the cheapest price anywhere, with units shipping this week. Readers can save on AirPods models at Amazon this week, with AirPods Pro currently on sale for $179.99. Read more |
2021-08-16 16:52:18 |
Overseas TECH |
Engadget |
Blue Origin takes NASA to court over SpaceX lunar lander contract |
https://www.engadget.com/blue-origin-federal-claims-court-nasa-challenge-163927526.html?src=rss
|
Following a $2 billion Hail Mary, Jeff Bezos' Blue Origin has filed a complaint with the US Court of Federal Claims over NASA's handling of the Human Landing System program. The court challenge comes less than a month after the US Government Accountability Office (GAO) dismissed a protest the company filed in response to NASA's decision to award a single contract for the Artemis lunar lander. The agency went with a $2.9 billion bid from Elon Musk's SpaceX, opting not to fund a $5.9 billion proposal from Blue Origin. NASA's original intention was to sign two separate contracts, but limited funding from Congress made that difficult to do. Blue Origin alleged the decision was "fundamentally unfair" because NASA allowed SpaceX to modify its bid, something the company says it didn't get the opportunity to do as well. However, the GAO concluded NASA's "evaluation of all three proposals was reasonable and consistent with applicable procurement law, regulation and the announcement's terms."

"The sad thing is that even if Santa Claus suddenly made their hardware real for free, the first thing you'd want to do is cancel it," Elon Musk (@elonmusk) tweeted in August.

At the time, Blue Origin hinted it would escalate the situation. "We stand firm in our belief that there were fundamental issues with NASA's decision, but the GAO wasn't able to address them due to their limited jurisdiction," the company said following the announcement. "Blue Origin filed suit in the US Court of Federal Claims in an attempt to remedy the flaws in the acquisition process found in NASA's Human Landing System," a spokesperson for Blue Origin told Engadget. "We firmly believe that the issues identified in this procurement and its outcomes must be addressed to restore fairness, create competition, and ensure a safe return to the Moon for America."

What this means for the Human Landing System program, and Project Artemis more broadly, is likely another delay. Following Blue Origin's GAO protest, NASA ordered SpaceX to stop work on the lunar lander contract while the watchdog investigated the matter. While this latest complaint is sealed, a source told The Verge that Blue Origin asked a judge to order a temporary pause on SpaceX's contract while the case is resolved in court. NASA and SpaceX lost about three months waiting for the GAO to investigate Blue Origin's protest; if a judge approves the company's request, this latest pause could be even longer. Ultimately, any further delays will make NASA's goal of returning to the Moon by 2024 difficult. |
2021-08-16 16:39:27 |
Overseas TECH |
Engadget |
Amazon's Kindle Paperwhite is on sale for a record low of $80 right now |
https://www.engadget.com/amazon-kindle-paperwhite-regular-kindle-sale-161429099.html?src=rss
|
While nothing can replicate holding a new hardcover book in your hands, you can't beat the convenience of an e-reader. Devices like Amazon's Kindles let you take your whole library with you, so you're never without options when picking your next read. They also make good student devices if you have a lot of digital textbooks available to you. Those looking to pick up a new e-reader for work or play can get a Kindle for less right now: Amazon just knocked the price of its Kindle Paperwhite down to $80, with its standard Kindle also discounted. While not a record low for the Kindle, the discount on the Paperwhite brings it back down to its Prime Day price. Buy Kindle Paperwhite at Amazon. Buy Kindle at Amazon.

The Paperwhite may be three years old at this point, but it remains one of the best e-readers you can buy. Not only is it compact, but it has a waterproof design that will protect it against accidental splashes. Amazon updated it a few years ago with a higher-contrast display plus Audible support, which means you can listen to audiobooks when you have a pair of Bluetooth headphones connected to the device. While we recommend the Paperwhite to those that can afford it, Amazon's standard Kindle is a much better buy now than it was a couple of years ago. When last updated, the Kindle received a higher-contrast display, a new front light that makes reading in dark places much easier, and a smaller, sleeker design. It may not have the bells and whistles that the Paperwhite has, but it does its one and only job of displaying ebooks well.

That said, if you're still happy with your old yet trusty e-reader, it's probably not necessary to upgrade. Kindles have received some convenient new features over the past few years, but none fundamentally change the experience of reading an ebook. But for those that haven't yet taken the plunge, Amazon's latest sale is a great one to consider if you want to bring an e-reader into your life. The biggest caveat to keep in mind is that these discounts are on the ad-supported Kindles, so you'll have to deal with Amazon's "special offers" and lock-screen ads on your e-reader. Follow @EngadgetDeals on Twitter for the latest tech deals and buying advice. |
2021-08-16 16:14:29 |
Cisco |
Cisco Blog |
Cisco Catalyst 8000V, the Cloud-Smart Router, Powers Secure SD-WAN for Multicloud and SaaS |
https://blogs.cisco.com/networking/cisco-catalyst-8000v-the-cloud-smart-router-powers-secure-sd-wan-for-multicloud-and-saas
|
Powering secure multicloud networking and purpose-built for the cloud, the Catalyst 8000V provides a smart, enterprise-ready, and simplified experience for easy deployment. |
2021-08-16 16:00:48 |
News |
BBC News - Home |
Hundreds more Britons and Afghans to arrive in UK |
https://www.bbc.co.uk/news/uk-58235707
|
afghanistan |
2021-08-16 16:45:35 |
News |
BBC News - Home |
Malala: 'Futures of Afghan child refugees aren't lost' |
https://www.bbc.co.uk/news/world-asia-58236327
|
taliban |
2021-08-16 16:06:52 |
News |
BBC News - Home |
Polish law on property stolen by Nazis angers Israel |
https://www.bbc.co.uk/news/world-europe-58218750
|
world |
2021-08-16 16:08:31 |
News |
BBC News - Home |
UK military commander: 'We've betrayed' Afghans who helped British troops |
https://www.bbc.co.uk/news/uk-58231760
|
Major General Charlie Herbert, a military commander from the UK's campaign in Afghanistan, has accused the government of betraying Afghans who supported British troops. |
2021-08-16 16:15:41 |
News |
BBC News - Home |
Hakainde Hichilema: The Zambian 'cattle boy' who became president |
https://www.bbc.co.uk/news/world-africa-58229710
|
zambia |
2021-08-16 16:29:39 |
News |
BBC News - Home |
England v India: Mohammed Siraj dismisses Moeen Ali and Sam Curran in two balls |
https://www.bbc.co.uk/sport/av/cricket/58236895
|
Watch as India's Mohammed Siraj removes England's Moeen Ali and Sam Curran, the latter for a golden duck, in back-to-back deliveries on the final day of the second Test at Lord's. |
2021-08-16 16:37:50 |
News |
BBC News - Home |
Covid-19 in the UK: How many coronavirus cases are there in my area? |
https://www.bbc.co.uk/news/uk-51768274
|
cases |
2021-08-16 16:05:59 |
Hokkaido |
Hokkaido Shimbun |
Figure skating's Cup of China canceled; international federation seeks alternative host |
https://www.hokkaido-np.co.jp/article/578842/
|
International Skating Union |
2021-08-17 01:03:00 |
GCP |
Cloud Blog |
Foundational best practices for securing your cloud deployment |
https://cloud.google.com/blog/topics/developers-practitioners/foundational-best-practices-securing-your-cloud-deployment/
|
Foundational best practices for securing your cloud deployment

As covered in our recent blog posts, the security foundations blueprint is here to curate best practices for creating a secured Google Cloud deployment and to provide a Terraform automation repo for adapting, adopting, and deploying those best practices in your environment. In today's blog post, we're diving a little deeper into the security foundations guide to highlight several best practices for security practitioners and platform teams to use when setting up, configuring, deploying, and operating a security-centric infrastructure for their organization.

The best practices described in the blueprint are a combination of both preventative controls and detective controls, and are organized as such in the step-by-step guide. The first topical sections cover preventative controls, which are implemented through architecture and policy decisions. The next set of topical sections cover detective controls, which use monitoring capabilities to look for drift and anomalous or malicious behavior as it happens. If you want to follow along in the full security foundations guide as you read this post, we are covering sections of the "Step-by-step guide" chapter.

Preventative controls

The first several topics cover how to protect your organization and prevent potential breaches using both programmatic constraints (policies) and architecture design.

Organization structure

One of the benefits of moving to Google Cloud is your ability to manage resources, their organization, and their hierarchy in one place. The best practices in this section give you a resource hierarchy strategy that does just that. As implemented, it provides isolation and allows for segregation of policies, privileges, and access, which helps reduce the risk of malicious activity or error. And while this sounds like you might be doing more work, the capabilities in GCP make this possible while easing administrative overhead.

[Image: The step-by-step guide's recommended organization structure]

The best practices include:
- using a single organization for top-level ownership of resources;
- implementing a folder hierarchy to group projects into related groups (prod, non-prod, dev, common, bootstrap) where you can create segmentation and isolation, and subsequently apply security policies and grant access permissions; and
- establishing organizational policies that define resource configuration constraints across folders and projects.

Resource deployment

Whether you are rolling out foundational or infrastructure resources or deploying an application, the way you manage your deployment pipeline can provide extra security or create extra risk. The best practices in this section show you how to set up review, approval, and rollback processes that are automated and standardized. They limit the amount of manual configuration and therefore reduce the possibility of human error, drive consistency, allow revision control, and enable scale. This allows for governance and policy controls to help you avoid exposing your organization to security or compliance risks.

The best practices described include:
- codifying the Google Cloud infrastructure into Terraform modules, which provides an automated way of deploying resources;
- using private Git repositories for the Terraform modules;
- initiating deployment pipeline actions with policy validation and approval stages built into the pipeline; and
- deploying foundations, infrastructure, and workloads through separate pipelines and access patterns.

[Image: Access patterns outlined in the security foundations blueprint]

Authentication and authorization

Many data breaches come from incorrectly scoped or over-granted privileges. Controlling access precisely allows you to keep your deployments secure by permitting only certain users access to your protected resources. This section delivers best practices for authentication (validating a user's identity) and authorization (determining what that user can do) in your cloud deployment. Recommendations include managing user credentials in one place, for example either Google Cloud Identity or Active Directory, and enabling syncs so that the removal of access and privileges for suspended or deleted user accounts is propagated appropriately.

This section also reinforces the importance of using multi-factor authentication (MFA) and phishing-resistant security keys, covered more in depth in the Organization structure chapter. Privileged identities especially should use multi-factor authentication, and should consider adding multi-party authorization as well, since due to their access they are frequently targets and thus at higher risk. Throughout all the best practices in this section, the overarching theme is the principle of least privilege: only necessary permissions are to be granted. No more, no less.

A few more of the best practices include:
- maintaining user identities automatically with Cloud Identity, federated to your on-prem Active Directory (if applicable) as the single source of truth;
- using single sign-on (SSO) for authentication;
- establishing privileged identities to provide elevated access in emergency situations; and
- using Groups with a defined naming convention, rather than individual identities, to assign permissions with IAM.

[Video: Additional resource on how to use Groups with IAM]

Networking

As your network is the communication layer between your resources and to the internet, making sure it is secure is critical in preventing external (also known as north-south) and internal (east-west) attacks. This section of the step-by-step guide goes into how to secure and segment your network so that services that store highly sensitive data are protected. It also includes architecture alternatives based on your deployment patterns. The guide goes deeper to show how best to configure the networking of your cloud deployment so that resources can communicate with each other, with your on-prem environment, and with the public internet, all while maintaining security and reliability. By keeping network policy and control centralized, implementing these best practices is easier to manage. This section is robust in providing detailed, opinionated guidance, so if you would like to dive further into this topic, head to the corresponding section of the full step-by-step guide to learn more.

A few of the high-level best practices in this section are:
- centralizing network policies and control through use of Shared VPC, or a hub-and-spoke architecture if this fits your use case;
- separating services that contain sensitive data into separate Shared VPC networks (base and restricted), and using separate projects, IAM, and a VPC Service Controls perimeter to limit data transfers in or out of the restricted network;
- using Dedicated Interconnect (or alternatives) to connect on-prem with Google Cloud, and using Cloud DNS to communicate with on-prem DNS servers;
- accessing Google Cloud APIs from the cloud and from on-premises through private IP addresses; and
- establishing tag-based firewall rules to control network traffic flows.

Key and secret management

When you are trying to figure out where to store keys and credentials, it is often a trade-off between level of security and convenience. This section outlines a secure and convenient method for storing keys, passwords, certificates, and other sensitive data required for your cloud applications, using Cloud Key Management Service and Secret Manager. Following these best practices ensures that storing secrets in code is avoided, that the lifecycles of your keys and secrets are managed properly, and that the principles of least privilege and separation of duties are adhered to.

The best practices described include:
- creating, managing, and using cryptographic keys with Cloud Key Management Service;
- storing and retrieving all other general-purpose secrets using Secret Manager; and
- using prescribed hierarchies to separate keys and secrets between the organization and folder levels.

Logging

Logs are used by diverse teams across an organization. Developers use them to understand what is happening as they write code, security teams use them for investigations and root-cause analysis, administrators use them to debug problems in production, and compliance teams use them to support regulatory requirements. The best practices in this section keep all those use cases in mind to ensure the diverse set of users is supported with the logs they need.

The guide recommends a few best practices around logs, including:
- centralizing your collection of logs in an organization-level log sink project;
- unifying monitoring data at the folder level;
- ingesting, aggregating, and processing logs with the Cloud Logging API and Cloud Log Router; and
- exporting logs from sinks to Cloud Storage for audit purposes, to BigQuery for analysis, and/or to a SIEM through Cloud Pub/Sub.

[Image: Logging structure described in the step-by-step guide]

Detective controls

The terminology "detective controls" might evoke the sense of catching drift and malicious actions as they take place, or just after. But in fact these latter sections of the step-by-step guide cover how to prevent attacks as well, using monitoring capabilities to detect vulnerabilities and misconfigurations before they have an opportunity to be exploited.

Much like a detective trying to solve a crime may whiteboard a map of clues, suspects, and their connections, this section covers how to detect and bring together possible infrastructure misconfigurations, vulnerabilities, and active threat behavior into one pane of glass. This can be achieved through a few different options: using Google Cloud's Security Command Center Premium, using native capabilities in security analytics (leveraging BigQuery and Chronicle), as well as integrating with third-party SIEM tools, if applicable for your deployment.

The guide lists several best practices, including:
- aggregating and managing security findings with Security Command Center Premium to detect and alert on infrastructure misconfigurations, vulnerabilities, and active threat behavior;
- using logs in BigQuery to augment detection of anomalous behavior by Security Command Center Premium; and
- integrating your enterprise SIEM product with Google Cloud Logging.

[Image: Security Command Center in the Cloud Console]

Billing setup

Since your organization's cloud usage flows through billing, setting up billing alerts and monitoring your billing records can work as an additional mechanism for enhancing governance and security by detecting unexpected consumption.

The supporting best practices described include:
- setting up billing alerts on a per-project basis to warn at key thresholds; and
- exporting billing records to a BigQuery dataset in a billing-specific project.

If you want to learn more about how to set up billing alerts, export your billing records to BigQuery, and more, you can also check out the "Beyond Your Bill" video series.

Bringing it all together and next steps

This post focused on the best practices provided in the blueprint for building the foundational infrastructure for your cloud deployment, including preventative and detective controls. While the best practices are many, they can be adopted, adapted, and deployed efficiently using templates provided in the Terraform automation repository. And of course, the non-abbreviated details of implementing these best practices are available in the security foundations guide itself. Go forth, deploy, and stay safe out there.

Related Article: Build security into Google Cloud deployments with our updated security foundations blueprint. Get step-by-step guidance for creating a secured environment with Google Cloud with the security foundations guide and Terraform blueprint. Read Article |
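The organization-structure practices above (a single organization, an environment folder hierarchy, and org-level policy constraints) can be sketched in Terraform. This is a minimal illustration, not the blueprint's actual module: the organization ID, folder names, and the choice of the `compute.vmExternalIpAccess` constraint are assumptions for the example.

```hcl
# Hypothetical organization ID used throughout this sketch.
locals {
  org_id = "123456789012"
}

# Folder hierarchy grouping projects into related environments,
# where segmentation, policies, and access grants can be applied.
resource "google_folder" "prod" {
  display_name = "prod"
  parent       = "organizations/${local.org_id}"
}

resource "google_folder" "non_prod" {
  display_name = "non-prod"
  parent       = "organizations/${local.org_id}"
}

# An organization policy constraining resource configuration across
# all folders and projects (here: deny external IPs on all VMs).
resource "google_organization_policy" "no_external_ip" {
  org_id     = local.org_id
  constraint = "compute.vmExternalIpAccess"

  list_policy {
    deny {
      all = true
    }
  }
}
```

Because the folders and policy live in code, changes to the hierarchy go through the same review-and-approval pipeline as any other infrastructure change.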
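The key- and secret-management practices above can likewise be expressed in Terraform. The names, location, and rotation period below are illustrative assumptions; the point is that keys come from Cloud KMS and general-purpose secrets from Secret Manager, never from code.

```hcl
# A KMS key ring and a key with automatic rotation,
# for cryptographic operations.
resource "google_kms_key_ring" "app" {
  name     = "app-keyring"   # hypothetical name
  location = "us-central1"
}

resource "google_kms_crypto_key" "app" {
  name            = "app-key"
  key_ring        = google_kms_key_ring.app.id
  rotation_period = "7776000s" # 90 days, an example policy
}

# A Secret Manager secret for a general-purpose credential,
# such as a database password.
resource "google_secret_manager_secret" "db_password" {
  secret_id = "db-password"  # hypothetical name

  replication {
    automatic = true
  }
}
```

Secret *values* are then added as secret versions (and accessed via IAM-scoped reads) rather than being committed to the repository, which keeps the separation-of-duties principle intact.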
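The centralized-logging practices above can be sketched as an organization-level sink exporting to BigQuery for analysis. The project and dataset names are hypothetical, and the audit-log filter is one example of many; the sink's generated writer identity must be granted write access on the destination dataset.

```hcl
# Destination dataset in a dedicated logging project (hypothetical names).
resource "google_bigquery_dataset" "org_logs" {
  project    = "central-logging-project"
  dataset_id = "org_logs"
  location   = "US"
}

# Organization-level sink routing audit logs from all child
# folders and projects into the dataset.
resource "google_logging_organization_sink" "audit_to_bq" {
  name             = "org-audit-sink"
  org_id           = "123456789012" # hypothetical org ID
  destination      = "bigquery.googleapis.com/projects/central-logging-project/datasets/${google_bigquery_dataset.org_logs.dataset_id}"
  include_children = true
  filter           = "logName:\"cloudaudit.googleapis.com\""
}

# Allow the sink's writer identity to write into the dataset.
resource "google_bigquery_dataset_iam_member" "sink_writer" {
  project    = "central-logging-project"
  dataset_id = google_bigquery_dataset.org_logs.dataset_id
  role       = "roles/bigquery.dataEditor"
  member     = google_logging_organization_sink.audit_to_bq.writer_identity
}
```

Parallel sinks to Cloud Storage (for audit retention) or Pub/Sub (for a SIEM) follow the same pattern with a different `destination`.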
2021-08-16 16:30:00 |
GCP |
Cloud Blog |
Deploy Anthos on GKE with Terraform part 1: GitOps with Config Sync |
https://cloud.google.com/blog/topics/anthos/using-terraform-to-enable-config-sync-on-a-gke-cluster/
|
Deploy Anthos on GKE with Terraform, part 1: GitOps with Config Sync

Anthos Config Management (ACM) offers cloud platform administrators a variety of techniques to streamline cluster configuration. One ACM feature, Config Sync, allows them to use a Git repository to create common configurations that are automatically applied on Kubernetes clusters in their fleet, bringing a familiar code-review collaboration process to config management. Another ACM feature, Policy Controller, enforces security guardrails in compliance with their organization's requirements. This blog series explores these offerings and how to get started using them with Terraform.

Many platform administrators prefer Infrastructure as Code to achieve repeatable and predictable deployments. This also applies to configuring ACM features on Kubernetes clusters. In the past, platform administrators who used Terraform lacked a smooth transition from HCL to modeling cluster configuration. They had to resort to manual processes that required additional temporary permissions granted to operators to complete provisioning.

The new GKE Hub API and the new resources enabled in the Terraform Provider for Google Cloud Platform (google_gke_hub_feature, google_gke_hub_feature_membership, and google_gke_hub_membership) make it possible to automate last-mile cluster configuration, including pointing it at a Git repository and turning on the Policy Controller. For platform administrators, this solves previous challenges of modeling cluster configuration (such as namespaces, service accounts, and RBAC) in a Kubernetes-idiomatic way, i.e., without the awkward Terraform HCL counterparts. Better still, this natural IaC approach improves auditability and transparency and reduces the risk of misconfigurations or security gaps.

In this blog series, we'll show how you can enable Anthos features on GKE. We'll start with Config Sync, to reconcile the cluster state with the specified Git repository. Based on a GKE cluster resource in your Terraform configuration, you can then enable GKE Hub membership and the configmanagement feature. Additional settings can then be configured for each of the features: sync_repo to point at the repo storing your cluster configurations, policy_dir to point at the root of the repo to reconcile, and the specific sync_branch in the repo.

Applying this configuration with Terraform will enable Config Sync and will automatically synchronize the state of the cluster with the repo, immediately creating the Kubernetes config objects on the cluster. Your pods, deployments, services, and other native K8s objects will automatically be created. See this article for more details on how to organize configs in a repo. The cluster is now fully provisioned and requires no "last mile" configuration steps. This repo provides a complete example of provisioning a cluster that is synchronized with a repo containing a popular WordPress configuration.

In the next part of the series, we'll show you how you can use Terraform to configure another ACM feature: Policy Controller.

Related Article: Get in sync: consistent Kubernetes with new Anthos Config Management features. Anthos Config Management and Config Controller bring Kubernetes-style declarative policy and config management to GKE environments. Read Article |
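The sequence above (cluster, Hub membership, configmanagement feature, Config Sync settings) can be sketched in Terraform. The cluster name, repo URL, branch, and policy directory below are hypothetical placeholders; at the time of writing these GKE Hub resources required the google-beta provider.

```hcl
# A minimal GKE cluster (hypothetical name and location).
resource "google_container_cluster" "primary" {
  name               = "acm-cluster"
  location           = "us-central1-a"
  initial_node_count = 1
}

# Register the cluster with GKE Hub.
resource "google_gke_hub_membership" "membership" {
  provider      = google-beta
  membership_id = "acm-cluster-membership"

  endpoint {
    gke_cluster {
      resource_link = "//container.googleapis.com/${google_container_cluster.primary.id}"
    }
  }
}

# Enable the configmanagement feature for the fleet.
resource "google_gke_hub_feature" "configmanagement" {
  provider = google-beta
  name     = "configmanagement"
  location = "global"
}

# Turn on Config Sync for this membership, pointing at the Git repo.
resource "google_gke_hub_feature_membership" "feature_member" {
  provider   = google-beta
  location   = "global"
  feature    = google_gke_hub_feature.configmanagement.name
  membership = google_gke_hub_membership.membership.membership_id

  configmanagement {
    config_sync {
      git {
        sync_repo   = "https://github.com/example/config-repo" # placeholder
        sync_branch = "main"
        policy_dir  = "config-root"
        secret_type = "none" # public repo assumed for this sketch
      }
    }
  }
}
```

After `terraform apply`, Config Sync reconciles the cluster against the contents of `policy_dir` on `sync_branch`, with no manual kubectl steps required.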
2021-08-16 16:30:00 |