Posted 2021-04-09 22:48:27 / RSS feed digest as of 2021-04-09 22:00 (54 items)

Category / Site / Article title or trending word / Link URL / Frequent words & summary (or search volume) / Date registered
AWS AWS Cloud Enterprise Strategy Blog CISO Insight: Every AWS Service Is a Security Service https://aws.amazon.com/blogs/enterprise-strategy/ciso-insight-every-aws-service-is-a-security-service/ Amazon Web Services customers have many services to contemplate and perhaps integrate into their cloud footprint, irrespective of where they are in their cloud journey. The relentless pace of innovation continues to be one of the main attractions for customers with AWS as their cloud provider, knowing that new services and features are always coming… 2021-04-09 12:48:41
AWS AWS Japan Blog Using lifecycle events to track AWS Control Tower actions and trigger automated workflows https://aws.amazon.com/jp/blogs/news/using-lifecycle-events-to-track-aws-control-tower-actions-and-trigger-automated-workflows/ This is one of the Control Tower features that lets you track the completion of actions such as creating a new account with Control Tower's Account Factory. 2021-04-09 12:19:14
AWS AWS Japan Blog Standardizing infrastructure delivery in distributed environments using AWS Service Catalog https://aws.amazon.com/jp/blogs/news/standardizing-infrastructure-delivery-in-distributed-environments-using-aws-service-catalog/ Many customers see the benefit of AWS Service Catalog as being able to provision infrastructure through a "single pane of glass", but it also includes features for automating product deployments. 2021-04-09 12:18:15
AWS AWS Japan Blog Sharing automated blueprints for Amazon ECS continuous delivery using AWS Service Catalog https://aws.amazon.com/jp/blogs/news/managing-aws-resources-across-multiple-accounts-and-regions-using-aws-systems-manager-automation/ This post was written by Mahmoud ElZayet, a DevTech specialist SA at AWS. Modern application development processes enable organizations to continuously improve speed and quality. 2021-04-09 12:17:10
AWS AWS Japan Blog Managing AWS resources across multiple accounts and Regions using AWS Systems Manager Automation https://aws.amazon.com/jp/blogs/news/managing-aws-resources-across-multiple-accounts-and-regions-using-aws-systems-manager-automation-2/ In addition, you can use Amazon CloudWatch Events to trigger documents based on changes to AWS resources, or run them directly through the AWS Management Console, the AWS CLI, or the AWS SDKs. 2021-04-09 12:16:08
AWS AWS Japan Blog How to set up a multi-Region, multi-account catalog of company-standard AWS Service Catalog products https://aws.amazon.com/jp/blogs/news/how-to-set-up-a-multi-region-multi-account-catalog-of-company-standard-aws-service-catalog-products/ Many AWS customers adopt AWS Service Catalog to create and manage catalogs of IT services that are approved for use on AWS. 2021-04-09 12:14:59
python New posts tagged Python - Qiita Basic file operations in Python https://qiita.com/k8m/items/74b05784e34bcf3b0a2d 2021-04-09 21:20:44
python New posts tagged Python - Qiita [Python] Read a CSV file with the pandas library and create a graph https://qiita.com/rihu-do/items/d8f232565445be93dd70 [Python] Read a CSV file with the pandas library and create a graph. 2021-04-09 21:09:44
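The post above uses pandas (pd.read_csv plus DataFrame plotting). As a hedged, stdlib-only sketch of the same read-a-CSV-and-summarize idea for readers without pandas installed (the file contents and column names here are made up):

```python
import csv
import io

# Hypothetical CSV contents, standing in for a file on disk.
CSV_TEXT = """month,sales
Jan,120
Feb,95
Mar,140
"""

# csv.DictReader maps each row to a dict keyed by the header line,
# loosely analogous to the columns of a pandas DataFrame.
rows = list(csv.DictReader(io.StringIO(CSV_TEXT)))
sales = [int(r["sales"]) for r in rows]
# With pandas this would be pd.read_csv(...) followed by df.plot().
```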
js New posts tagged JavaScript - Qiita You should not use window.innerHeight or window.innerWidth in JS aimed at smartphones https://qiita.com/YuHima03/items/2780d6725d9699e4cd62 You should not use window.innerHeight or window.innerWidth in JS aimed at smartphones. Why? Run console.log(window.innerHeight) to read window.innerHeight, and the result differs greatly between PC and smartphone. 2021-04-09 21:58:57
js New posts tagged JavaScript - Qiita [JavaScript] A rough understanding of the class syntax https://qiita.com/tkyngnm/items/47b749d406990212ef22 If you prepare a class as a blueprint for your objects, you can code efficiently: create as many instances as you need with the new operator. 2021-04-09 21:31:05
Program New questions for all tags | teratail python error https://teratail.com/questions/332481?rss=all 2021-04-09 21:50:28
Program New questions for all tags | teratail I want to sort after filtering in Nuxt.js https://teratail.com/questions/332480?rss=all After filtering, sorting by lowest price displays the pre-sort values (entries like "grapes, … yen per bunch") together with the sorted ones. 2021-04-09 21:45:01
Program New questions for all tags | teratail "This page isn't working", HTTP 500 error, when visiting http://[configured Elastic IP address]; I want to deploy a Laravel portfolio https://teratail.com/questions/332479?rss=all When visiting http://[configured Elastic IP address] the page shows "This page isn't working" with an HTTP 500 error. Goal: deploy a Laravel portfolio on Ubuntu on AWS. 2021-04-09 21:32:48
Program New questions for all tags | teratail About changing variable names inside a for loop https://teratail.com/questions/332478?rss=all About changing variable names inside a for loop. Background: I am displaying a line graph in Python with code like the following. 2021-04-09 21:28:25
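Answers to this kind of question usually point away from generating new variable names on each loop pass. A minimal sketch (the data is made up and matplotlib is omitted) stores each series under a dict key instead:

```python
# Rather than trying to create variables named line0, line1, line2 inside
# the loop, keep each series in a dict keyed by the loop index.
series = {}
for i in range(3):
    # Hypothetical y-values for one line of the plot.
    series[f"line{i}"] = [i, i + 1, i + 2]

# Each series can then be looked up by name, e.g. to pass to plt.plot().
```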
Program New questions for all tags | teratail I want to scrape XHR files; or, please recommend books or sites where I can learn how https://teratail.com/questions/332477?rss=all I want to scrape XHR files. 2021-04-09 21:19:38
Program New questions for all tags | teratail I want to debug iOS in VS Code!! (using React Native) https://teratail.com/questions/332476?rss=all I want to debug iOS in VS Code, using React Native. Background: I have been programming for … months, and having gradually gotten used to it I am trying to build a mobile app with React Native. 2021-04-09 21:17:56
Program New questions for all tags | teratail How to get data from an array inside an array https://teratail.com/questions/332475?rss=all How to get data from an array inside an array: a question about array handling. 2021-04-09 21:17:47
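For the nested-array question, a hedged Python sketch (the data here is made up) showing the two-index access the asker is after:

```python
# An array inside an array: hypothetical rows of (name, price) pairs.
rows = [["apple", 100], ["grape", 300]]

# The outer index selects the inner list; the inner index selects an element.
name = rows[1][0]   # "grape"
price = rows[1][1]  # 300

# Unpacking inside a loop or comprehension is often clearer than raw indexes:
pairs = [(item, cost) for item, cost in rows]
```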
Program New questions for all tags | teratail I want to get an Excel file's worksheets with openpyxl using a variable https://teratail.com/questions/332474?rss=all Background: I am trying to get the sheets of an Excel file using openpyxl. 2021-04-09 21:17:14
Ruby New posts tagged Ruby - Qiita Resolving FactoryBot's [KeyError: Factory not registered: "user"] error https://qiita.com/yun02850266/items/4d391c3ccdfa488d028e Resolving FactoryBot's KeyError: Factory not registered: "user" error. 2021-04-09 21:25:22
海外TECH DEV Community 10 reasons why Twitter is better than LinkedIn for finding work https://dev.to/andrewbaisden/10-reasons-why-twitter-is-better-than-linkedin-for-finding-work-gbk LinkedIn is a great social network for finding work. Many companies use it for hiring, and it is very easy to find a recruiter. However, it is not the only social network you can use to find your next job. Here are 10 reasons why Twitter is better than LinkedIn for finding work.

1. It is more casual. One of the main differences between the two platforms is that Twitter feels more fun and casual, whereas LinkedIn is more formal and business-oriented. People tend to be themselves in a casual setting; it is easier to express yourself and meet people. LinkedIn has a stricter, more formal feel, and many job seekers get harassing and spam messages from recruiters and some companies, which makes the experience feel negative.

2. Developers are more open to connecting with their peers. On Twitter it feels far more natural to follow other developers, who usually follow you back, whether or not you know them in real life. The same rules don't always apply on LinkedIn: you first have to send a connection request, which the person you are trying to connect with could easily refuse. In this sense the platform is more closed than Twitter.

3. Fewer recruiters, more companies. There are recruiters on Twitter, but their main playground is LinkedIn; that is where they do most of their work. On Twitter you are more likely to engage with companies who see your work, or with people who work for those companies. That's an easy way to make a new connection, because Twitter can feel like a public journal of your current developer journey, whereas on LinkedIn it feels like you are trying to force your CV into someone's face in the hope of getting an interview.

4. Engagement is much higher. You are more likely to get noticed on Twitter than on LinkedIn. Tweets generally get significantly more likes because the platform is designed to be more personal and fun, and people spend more time on Twitter because the experience is more enjoyable and their friends are there. LinkedIn, by comparison, feels like a tool specifically designed for work; how many people feel excited to spend hours a day on it? Most conversations are business-focused, and there is a lot of spam from recruiters hitting "send all" with a generic job description, or from companies that want you to sign up to their platform.

5. Twitter Spaces is awesome. Twitter Spaces is an audio-only chat room where you are in a room with other users. You can talk about anything you want, and it is one of the best ways to engage and connect with your followers. I have been using it for a few months now and the experience is extremely positive; it is like live radio and a conference call in one. I have been in Spaces rooms discussing the best ways to find work, different technical stacks, and better ways to promote yourself. It is like getting insider secrets you can use to boost your career prospects, and the casual feel of the rooms is very inviting. This is a feature LinkedIn currently does not have.

6. It is a good way to gauge personality and culture fit. On Twitter you are free to express yourself any way you want, so your tweets can be a good indication of your personality. In contrast, people on LinkedIn tend to be more serious and business-minded, saying things in the formal manner appropriate for a cover letter or resume. A LinkedIn profile mostly shows someone's past experience; on Twitter you can see what projects they are currently working on and what kind of learning they are doing.

7. It is a better learning tool. Yes, I know LinkedIn Learning exists, but hear me out: it is only free as a month-long trial, after which you need to pay. Compared to what is on offer on Twitter, it can't really compete. Thousands of developers use Twitter, and many of them share free resources every day along with their own insights into learning strategies.

8. Blogging opens doors to new opportunities. This is another area where Twitter wins. Because the platform feels more casual, people use it for writing articles and blogging. I have been doing this for over a year, have grown my network on different platforms, and it has even opened doors to a few freelance jobs as a technical writer. The content you put on Twitter can really help you grow as a developer. Sure, you can post the same content on LinkedIn, but we are back to the topic of engagement: the audience is usually smaller, and most people have more recruiters than developers as connections, so what you write may not be as applicable to them.

9. Less ghosting. Connections on Twitter feel more real, in my opinion, because it is easier to connect over a common interest and socially acceptable to talk about any topic. The same can't be said of LinkedIn, where everyone you connect with feels like a business partner: typically a recruiter who uses the platform for work rather than socialising, so the topics of conversation are stricter and business-focused. Because of this it is hard to form strong connections, and if a few interviews go badly you are forgotten as the recruiter moves on to someone else. Twitter feels more supportive, since other developers are going through the same problems; instead of losing connections you gain them by being authentic.

10. Twitter has a more active community. Twitter has more monthly active users than LinkedIn; while the difference is not huge, it can make a difference. In general people spend far more time on Twitter than on LinkedIn, which means a much higher chance of someone responding to or noticing what you said. You are also likely to see many more posts and get more profile views, with the added bonus of growing your developer network at a much faster rate.

Final thoughts. I really hope you enjoyed reading this article and learned something from it. As a content creator and technical writer I am passionate about sharing my knowledge and helping other people reach their goals. Let's connect on Twitter, LinkedIn, and GitHub; you can also check out my website. I will leave you one last easter egg. These are lyrics from a song, let's see if you can find it: "If we'll be united / We're stronger together / We always have the high hope / Not all for one but one for all." Peace ✌️ 2021-04-09 12:29:25
海外TECH DEV Community Gestion des erreurs RXJS - NGRX https://dev.to/stack-labs/gestion-des-erreurs-rxjs-ngrx-3kci RxJS / NgRx error handling. In a frontend project, when you make an HTTP call you must not forget to handle the error cases. An HTTP call can fail for various reasons: the server is unreachable; the backend went down, for example because of an internal error; a timeout, if the request takes longer than a certain time to respond; or an error returned by the backend with a specific message, for example because the user is not allowed to access that resource. In each case, if the frontend does not handle these errors you end up with an application that works badly or, in the worst case, not at all. In this article I present how to handle errors from an HTTP call in an Angular project: first inside a subscribe, then inside an effect.

Take the example of a HobbitsService with a findHobbits method that makes an HTTP call and returns an observable of a list of Hobbits:

```typescript
@Injectable()
export class HobbitsService {
  constructor(private http: HttpClient) {}

  findHobbits(): Observable<Hobbit[]> {
    return this.http.get<Hobbit[]>('/api/hobbits');
  }
}
```

We want to display the list of Hobbits, showing a loader to the user while the HTTP request is in flight.

Handling errors in a subscribe: example of an unhandled error. In HobbitsComponent, the list of Hobbits is fetched when the component initializes, and a loader is displayed while the boolean isLoading is true:

```typescript
export class HobbitsComponent implements OnInit {
  isLoading = true;
  hobbits: Hobbit[] = [];

  constructor(private hobbitsService: HobbitsService) {}

  ngOnInit() {
    this.hobbitsService.findHobbits().subscribe((hobbits: Hobbit[]) => {
      this.hobbits = hobbits;
      this.isLoading = false;
    });
  }
}
```

What happens if the findHobbits call fails? The loader is displayed and never stops, even though the call is over. Why? The loader state is only managed in the NEXT callback of the subscribe; when an error occurs we do not go through NEXT but through the ERROR callback.

NEXT, ERROR, COMPLETE: the callbacks of a subscribe. subscribe takes three optional callbacks, NEXT, ERROR and COMPLETE:

```typescript
this.hobbitsService.findHobbits().subscribe(
  () => console.log('Next'),
  () => console.log('Error'),
  () => console.log('Completed')
);
```

If the HTTP call succeeds, the logs are: Next, Completed. On success the value is emitted in the NEXT callback; then the observable closes and goes through COMPLETE, the end of its lifecycle, with no error emitted. If the HTTP call fails, the only log is: Error. On error no value is emitted in NEXT; we go through ERROR, which ends the observable's lifecycle. Worth knowing: an HTTP call is an observable that completes after emitting one value, so there are two possible paths. An observable cannot go through both COMPLETE and ERROR in its lifecycle; it is one or the other.

Fixing the problem: to handle the loader on error as well, update its state in both the NEXT and the ERROR callback:

```typescript
this.hobbitsService.findHobbits().subscribe(
  (hobbits: Hobbit[]) => {
    this.hobbits = hobbits;
    this.isLoading = false;
  },
  () => (this.isLoading = false)
);
```

Whether the HTTP call succeeds or fails, isLoading ends up false, so the loader no longer spins forever.

Handling or logging the error: when you want to use the error, for debugging or to show the user a specific message, the returned error is available in the ERROR callback:

```typescript
this.hobbitsService.findHobbits().subscribe(
  () => console.log('Next'),
  (error) => console.log('Error', error),
  () => console.log('Completed')
);
```

Handling errors in an effect. To manage your side effects, for example your backend calls, you can also use the NgRx library and its effects. Personally, that is how I handle these calls; I do not give the component the responsibility of fetching data. The loadHobbits action sets a boolean isLoading to true in the store; the loadHobbitsSuccess action sets it back to false and stores the list of Hobbits. The loader is displayed while isLoading is true. Example without error handling:

```typescript
@Injectable()
export class HobbitsEffects {
  loadHobbits$ = createEffect(() =>
    this.actions$.pipe(
      ofType(loadHobbits),
      concatMap(() =>
        this.hobbitsService.findHobbits().pipe(
          map((hobbits: Hobbit[]) => loadHobbitsSuccess({ hobbits }))
        )
      )
    )
  );

  constructor(private actions$: Actions, private hobbitsService: HobbitsService) {}
}
```

What happens if the findHobbits call fails? The loader never stops, even though the call is over. Why? Only the loadHobbitsSuccess action sets isLoading back to false, and on error we never reach the map that follows the HTTP call. The error has to be caught with the catchError operator.

catchError: as its name indicates, the catchError operator catches the error and returns a new observable:

```typescript
this.hobbitsService.findHobbits().pipe(
  map(() => SUCCESS),
  catchError(() => of(ERROR))
);
```

Fixing the problem: create a new action, loadHobbitsError, which in our example sets isLoading to false and therefore stops displaying the loader on error:

```typescript
@Injectable()
export class HobbitsEffects {
  loadHobbits$ = createEffect(() =>
    this.actions$.pipe(
      ofType(loadHobbits),
      concatMap(() =>
        this.hobbitsService.findHobbits().pipe(
          map((hobbits: Hobbit[]) => loadHobbitsSuccess({ hobbits })),
          catchError(() => of(loadHobbitsError()))
        )
      )
    )
  );

  constructor(private actions$: Actions, private hobbitsService: HobbitsService) {}
}
```

Worth knowing: on older versions of NgRx, an error not caught inside the main observable completes the effect. Since then, if no error is caught in the main observable, the effect resubscribes itself, up to a maximum number of errors.

Multiple calls: with several parallel calls you can choose to return an observable carrying default data, to handle the calls that failed. In the example below, the loadHobbitsBeers action provides a list of Hobbit ids. For each id we make an HTTP call via favoriteBeersByHobbitId, which returns a list of strings naming that Hobbit's favorite beers. The calls run in parallel, and if one of them fails we record that Hobbit's id with the default beer "Prancing Pony's Ale". Failed calls are thus handled with default data:

```typescript
@Injectable()
export class HobbitsEffects {
  loadHobbitsDetails$ = createEffect(() =>
    this.actions$.pipe(
      ofType(loadHobbitsBeers),
      mergeMap(({ hobbitsIds }) =>
        forkJoin(
          hobbitsIds.map((hobbitId) =>
            this.hobbitsService.favoriteBeersByHobbitId(hobbitId).pipe(
              map((beers: string[]) => ({ id: hobbitId, beers })),
              catchError(() => of({ id: hobbitId, beers: ["Prancing Pony's Ale"] }))
            )
          )
        )
      ),
      map((hobbitsBeers: HobbitsBeers[]) => loadHobbitsBeersSuccess({ hobbitsBeers }))
    )
  );

  constructor(private actions$: Actions, private hobbitsService: HobbitsService) {}
}
```

Handling or logging the error: when you want to use the error, for debugging or to show the user a specific message, it is available in catchError:

```typescript
this.hobbitsService.findHobbits().pipe(
  map((hobbits: Hobbit[]) => SUCCESS),
  catchError((error) => {
    console.log('ERROR', error);
    return of(ERROR);
  })
);
```

2021-04-09 12:28:42
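The forkJoin-plus-catchError pattern above (parallel calls where each failure is replaced by a per-item default instead of failing the whole batch) can be sketched outside RxJS too. Here is a hedged Python analogy using concurrent.futures, with a made-up fetch_beers lookup standing in for the HTTP call:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-id lookup that may fail, standing in for the HTTP call.
FAVORITE_BEERS = {1: ["Old Winyards"], 2: ["Southfarthing Ale"]}

def fetch_beers(hobbit_id):
    return FAVORITE_BEERS[hobbit_id]  # raises KeyError for unknown ids

def fetch_with_default(hobbit_id, default=("Prancing Pony's Ale",)):
    # Equivalent of catchError inside forkJoin: a failed call yields a
    # per-item default instead of failing the whole batch.
    try:
        return {"id": hobbit_id, "beers": list(fetch_beers(hobbit_id))}
    except Exception:
        return {"id": hobbit_id, "beers": list(default)}

def load_hobbits_beers(ids):
    # Run the lookups in parallel and collect the results in order,
    # the way forkJoin gathers its inner observables.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fetch_with_default, ids))
```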
海外TECH DEV Community Git au quotidien : les seules commandes que j'utilise vraiment https://dev.to/stack-labs/git-au-quotidien-les-seules-commandes-que-j-utilise-vraiment-3g74 Everyday Git: the only commands I really use. I am not a Git expert and I do not know every command, but for a few years now I have had a routine and a handful of commands I use daily, and they cover most of my needs. With just these seven commands I can manage my project day to day. Here is the list of commands I will present through my development workflow: git fetch, git checkout, git branch, git add, git commit, git push, git rebase.

Starting my development. I present two ways of starting a piece of work. I will begin with GitLab, an extremely widespread tool that I have always encountered through my various engagements, at small clients and large ones alike. I have also sometimes started development without going through GitLab, so I present that second option as well. The main development branch of your project may be called master, dev, main... That depends on your project, and whatever its name, it is the branch the whole team branches off from to do its development. In my example the main development branch is called main.

Case 1: I start my development via GitLab. When I start a task, I create my branch from the GitLab issue: there is a button to create a branch, and it attaches the branch to your issue. Here my issue is called "Add new feature", and the branch is automatically named after it (e.g. add-new-feature). Once the branch is created via GitLab, I simply fetch it and switch to it. Fetching my branch: I get the branch locally with git fetch, and I can see its name appear in the console. Switching to my branch: to start development I move onto my branch with git checkout add-new-feature.

Case 2: I start my development by hand. Fetching the project's changes: git fetch. Switching to the main branch: I move onto the project's main development branch with git checkout; here it is called main, so I run git checkout main. Overwriting the main branch to bring it up to date: I overwrite my local main with the remote main using git reset --hard origin/main. From that moment I know my local main is identical to the remote main, which ensures I create my branch from an up-to-date main. Creating my branch and switching to it: I want to create my branch from main and move onto it; to call it add-new-feature I run git checkout -b add-new-feature.

During development. I work on my branch, and once all or part of my development is done I run the following commands. Adding my changes: I add all my modifications with git add . or git add -A. Case 1: I commit for the first time. I create my commit with the message "feat(example): add new feature", so I run git commit -m "feat(example): add new feature", or plain git commit, which opens the editor of my choice (vim by default). If Conventional Changelog means nothing to you, there are articles introducing it. Case 2: I add changes to my previous commit. If I want to add further work on my branch (I generally work with a single commit per development), I use git commit --amend, or git commit --amend --no-edit. I do not work with intermediate commits such as "feat(example): add something", "feat(example): delete something", "feat(example): update something else" that are later squashed into a single "feat(example): add new feature". I make one final commit and amend it, so only one commit gets merged into the development branch. It is a matter of habit and preference.

At the end of my development. Prerequisite: if you create your branches locally, configure your Git so you do not have to specify the upstream every time: git config --global push.default current. Pushing my changes: I push my development with git push, or git push --force-with-lease. If you are not familiar with the --force-with-lease option, I recommend reading up on it. Updating my branch against the main branch: if the main development branch has moved on compared to my branch, I have to do a git rebase. I fetch all the changes made by the other people on the project with git fetch, then bring my branch up to date against main, the main branch, with git rebase origin/main. I resolve any conflicts through my IDE for simplicity (there are also Git tools that make conflict resolution easier), and when my rebase is finished I push with git push --force-with-lease. I also use interactive rebase (-i, --interactive) in certain situations; it is an option I recommend looking at even though I only use it occasionally (see the rebase documentation).

The extra commands. Used occasionally: when I want to see the commit history on a branch: git log. When I want to see my uncommitted changes in progress: git status. When I want to save a draft of my work in progress on a branch: git stash (some stash examples are listed elsewhere). Used rarely: when I start on a new project: git clone. When I set my author name and the associated email: git config --global user.name "Clara Belair" and git config --global user.email clara.belair@example.com.

Why I prefer git fetch to git pull for retrieving remote changes. I am often asked why I do a git fetch instead of a git pull. Careful: they are not equivalent commands. If you run git pull, by default a merge is performed right after the fetch. Here is the description of git pull: it incorporates changes from a remote repository into the current branch; in its default mode, git pull is shorthand for git fetch followed by git merge FETCH_HEAD. To avoid any bad surprises, I therefore advise doing a git fetch to retrieve remote changes; or else be aware of the commands executed behind a default git pull and of their consequences on your current branch. 2021-04-09 12:28:20
海外TECH DEV Community Memory Consumption In Linux https://dev.to/vumdao/memory-consumption-in-linux-3b55 Memory Consumption In Linux. In the world of devops there are many tools that help us monitor memory usage with metrics, such as Datadog and Grafana, but traditionalists still ask for the knowledge and skill of monitoring memory consumption manually. This article explains a reasonable method to measure the memory consumption of a process on Linux. Linux is equipped with virtual memory management, and therefore measuring the memory consumption of a single process is not as simple as most users think. This article explains what information you can get from each indicator related to memory consumption. The indicators most commonly used on Linux are VSZ (Virtual Memory Size) and RSS (Resident Set Size); a newer one is PSS (Proportional Set Size).

What's in this document: technical terms; VSZ (Virtual Memory Size) and demand paging; RSS (Resident Set Size) and shared libraries; PSS (Proportional Set Size); a Python script to get the PSS.

Technical terms. Page: a block of memory used in memory management on Linux; one page is typically 4096 bytes on Linux systems. Physical memory: the actual memory, typically the RAM that is in the computer. Virtual memory: a memory space given to a process that lets the process think it has its own continuous memory, isolated from other processes, regardless of the actual memory amount on the computer or the memory consumption of other processes. A virtual memory page can be mapped to a physical memory page, and hence processes only need to think about their virtual memory.

VSZ (Virtual Memory Size) and demand paging. Considering the VSZ to measure the memory consumption of a process does not make much sense. This is due to the feature called demand paging, which suppresses unnecessary memory consumption. For example, the text editor emacs has functions that can handle XML files. These functions, however, are not used all the time, and loading them into physical memory is not necessary when the user just wants to edit plain text. The demand-paging feature does not load pages unless the process uses them. This is how it works: when the program starts, Linux gives a virtual memory space to the process but does not actually load the pages holding those functions into physical memory. When the program actually calls a function in virtual memory, the MMU in the CPU tells Linux that the page is not loaded. Linux then pauses the process, loads the page into physical memory, maps it into the process's virtual memory, and lets the process run again from where it was paused. The process therefore does not need to know that it was paused; it simply assumes the function was loaded in its virtual memory and uses it. VSZ describes the entire virtual memory size of the process, regardless of whether pages are loaded in actual memory or not. It is therefore not a realistic indicator of memory consumption, since it includes pages that are not actually consumed.

RSS (Resident Set Size) and shared libraries. RSS describes the total amount of the process's pages that are actually loaded in physical memory. This may sound like the real amount of memory being consumed by the process, and it is better than VSZ, but it is not that simple, due to the feature called shared libraries, or dynamic linking. A library is a module that programs can use to handle a certain feature: for example, libpng.so takes care of compressing and decompressing PNG image files, and libxml.so takes care of handling XML files. Instead of each programmer writing these functions, they can use libraries developed by others and achieve the result they want. A shared object is a library that can be shared by multiple programs or processes. Say two processes running at the same time both want to use the XML-handling functions in the shared library libxml.so. Instead of loading pages holding the exact same functions multiple times, Linux loads them once into physical memory and maps them into both processes' virtual memory. Neither process needs to care that it is sharing the functions with somebody else, because each can access and use them inside its own virtual memory. Through this feature, Linux suppresses unnecessary duplication of memory pages. Now back to the example above: emacs handles XML files through the shared library libxml.so. This time the user running emacs is actually working with XML files, so emacs is using the functions in libxml.so; meanwhile, two more processes running in the background are using libxml.so too. Since libxml.so is a shared library, Linux loads it only once into physical memory and maps it into all three processes' virtual memory. When you look at the RSS of emacs, it will include the pages of libxml.so. That is not wrong, because emacs really is using them; but what about the other two processes? It is not just emacs that is using those functions. If you sum the RSS of all three processes, libxml.so is counted three times even though it is loaded into physical memory only once. RSS is therefore an indicator of the memory a process would consume if it ran by itself, sharing nothing with other processes. In practical situations where libraries are shared, RSS over-estimates the amount of memory consumed by the process. Using RSS to measure memory consumption is not wrong, but you may want to keep this behaviour in mind.

PSS (Proportional Set Size). PSS is a relatively new indicator that can be used to measure the memory consumption of a single process. It is not yet available on all Linux systems, but where it is, it can come in handy. The concept is to split the memory of shared pages evenly among the processes using them. This is how PSS calculates memory consumption: if there are N processes using a shared library, each process is considered to consume one N-th of the shared library's pages. In the example above, emacs and two other processes were sharing the pages of libxml.so; since there are three processes, PSS considers each process to consume one third of libxml.so's pages. I consider PSS a more realistic indicator than RSS. It works especially well when you want to consider the memory consumption of an entire system rather than each process individually: for example, when you are developing a system with multiple processes and daemons and want to estimate how much memory to install on the device, PSS works better than RSS.

Python script to get the PSS (Proportional Set Size):

```python
#!/usr/bin/env python
# coding: utf-8
#
# pss.py - print the PSS (Proportional Set Size) of accessible processes.

import os
import pwd


def owner_of_process(pid):
    """Return the name of the user owning /proc/<pid>."""
    return pwd.getpwuid(os.stat('/proc/%s' % pid).st_uid).pw_name


def pss_of_process(pid):
    """Sum the Pss: lines of /proc/<pid>/smaps (values in kB)."""
    with open('/proc/%s/smaps' % pid) as fp:
        return sum(int(line.split()[1])
                   for line in fp if line.startswith('Pss:'))


def cmdline_of_process(pid):
    """Return the command line of /proc/<pid>."""
    with open('/proc/%s/cmdline' % pid) as fp:
        return fp.read().replace('\0', ' ').strip()


def pss_main():
    """Print the user name, pid, pss and the command line for all
    accessible processes, in pss-descending order."""
    # Collect the information, ignoring processes where permission is denied.
    ls = []
    for pid in filter(lambda x: x.isdigit(), os.listdir('/proc')):
        try:
            ls.append((owner_of_process(pid), pid, pss_of_process(pid),
                       cmdline_of_process(pid)))
        except (IOError, OSError):
            pass

    # Column widths so that user, pid and pss print in aligned columns.
    userlen = max(len(user) for user, _, _, _ in ls)
    pidlen = max(len(pid) for _, pid, _, _ in ls)
    psslen = max(len(str(pss)) for _, _, pss, _ in ls)

    # Width of the terminal, used to truncate over-long lines.
    try:
        with os.popen('tput cols') as fp:
            term_width = int(fp.read().strip())
    except ValueError:
        term_width = 80

    for user, pid, pss, cmd in sorted(ls, key=lambda x: x[2], reverse=True):
        line = '%-*s %*s %*s %s' % (userlen, user, pidlen, pid,
                                    psslen, pss, cmd)
        print(line[:term_width])


if __name__ == '__main__':
    pss_main()
```
kernel modules since they allocate memory from the kernel land not the user land and PSS is the memory consumption of processes in user land fmt ds ds ds s userlen pidlen psslen print fmt USER PID PSS COMMAND for user pid pss cmd in sorted ls key cmp lambda x y y x if cmd print fmt user pid pss cmd term width def pss of process pid Return the PSS of the process specified by pid in KiB bytes unit param pid process ID return PSS value with open proc s smaps pid as fp return sum int x for x in re findall Pss s d fp read re M def cmdline of process pid Return the command line of the process specified by pid param pid process ID return command line with open proc s cmdline pid as fp return fp read replace strip def owner of process pid Return the owner of the process specified by pid param pid process ID return owner try owner pid pwd getpwuid os stat proc s pid st uid pw name except Exception return docker return owner pid if name main pss main How to runsudo python pss pyIf the server instance running docker then it would show userID refRefs Blog · Github · Web · Linkedin · Group · Page · Twitter 2021-04-09 12:26:01
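As a companion to the script above, VSZ and RSS for a single process can be read straight from /proc without ps. A minimal sketch (an addition of mine, not from the original article; Linux-only, and it assumes a mounted /proc filesystem — VmSize and VmRSS are the status-file names for VSZ and RSS):

```python
import re

def vsz_rss_kib(pid="self"):
    """Return (VSZ, RSS) in KiB, parsed from /proc/<pid>/status.

    VmSize counts every mapped page, loaded or not; VmRSS counts only
    the pages actually resident on the physical memory.
    """
    with open("/proc/%s/status" % pid) as fp:
        text = fp.read()
    vsz = int(re.search(r"^VmSize:\s+(\d+)\s+kB", text, re.M).group(1))
    rss = int(re.search(r"^VmRSS:\s+(\d+)\s+kB", text, re.M).group(1))
    return vsz, rss

vsz, rss = vsz_rss_kib()
print("VSZ=%d KiB, RSS=%d KiB" % (vsz, rss))
```

Because RSS only counts loaded pages, it can never exceed VSZ for the same process.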
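The "one N-th" rule also has a useful property: summing PSS across all processes recovers the real physical total, which is exactly what summing RSS fails to do. A toy illustration with made-up page counts (not real measurements):

```python
def pss(private_kib, shared_kib, sharers):
    # Private pages are charged in full; shared pages are split
    # evenly among the processes that map them.
    return private_kib + shared_kib // sharers

# Three processes (Emacs and two daemons) each map 3000 KiB of shared
# libxml.so pages plus 10000 KiB of their own private pages.
procs = [pss(10000, 3000, 3) for _ in range(3)]
print(procs)               # [11000, 11000, 11000]
print(sum(procs))          # 33000 KiB: the true physical total
print(3 * (10000 + 3000))  # 39000 KiB: what summing RSS would report
```

The RSS sum counts the 3000 KiB of libxml.so pages three times; the PSS sum counts them once.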
海外TECH DEV Community Get your streak on...! https://dev.to/r4nkt/get-your-streak-on-3fc6 Get your streak on...! While this feature wasn't considered necessary for rnkt's initial release, it was always considered a must-have. Well, over the last week, initial support for streaks has been deployed to production. This makes it very easy for you to define achievement criteria in terms of streaks. We are very excited to have rolled this functionality out, and we hope that you and your users will enjoy the new possibilities that are now available.

What are streaks? In terms of rnkt achievement criteria, a streak is when a player has performed some sort of action(s) over a consecutive series of time intervals. Some examples: a salesperson has closed at least one sale for days in a row; a student has passed the daily quiz days in a row; a user has walked at least steps hours in a row.

How does it work? Rnkt's streaks are defined on the individual criteria. The criterion resource has a new property, streak. Like similar properties, it takes a string that matches the pattern "interval amount". The interval can be one of the following: days, hours. Both require an amount, which indicates the number of days or hours that the streak should be. So if we take some of the examples from above, we can see how easy it is to define streaks: days (days in a row); days (days in a row); hours (hours in a row). Very simple and straightforward.

Take it to the next level. For many, the basic use of streak criteria will be sufficient, but when you combine them with criteria conditions and custom data references, you can really start to come up with very complex and interesting achievements. There are so many possibilities. Imagine that you develop an e-learning application or service, and your students are allowed to take a daily quiz. You could define an achievement for students that pass the daily quiz by or more for three straight days. To do so, you would need your achievement (which we'll call "The Triple"), your primary criteria group, and a single criterion. You'll also have an action, which might be called "Pass Quiz", with a custom ID of pass quiz; you will report it whenever a player passes a quiz. And you'll want to pass the result in the activity's custom data, like so: custom data result. Then your criterion will have the following properties: custom action id (pass quiz), rule (gte), streak (days). These properties let rnkt know that the criterion is met when the player has passed at least one quiz each day for three or more consecutive days. But what if they don't get a result of or more? Well, that's where criteria conditions come into play. For this scenario, you'll have a criteria condition on either your criteria group or your criterion, with the following definition: conditions, groups, conditions, activityData result gte. So, by defining the activityData criteria condition, you make sure that only activities that have a custom data result of or greater will be considered. This is applied first, and then the criterion's rule and streak are considered. That is, once all Pass Quiz activities for the player that did not meet the criteria conditions (result gte) have been filtered out, rnkt will look to see if at least one such activity exists per day for three or more consecutive days. That's pretty great, don't you think?

I want to learn more. You can check out our ever-improving documentation, or you can join our Discord server and ask for help or make suggestions.

Coming soon. There are still more streak-related features in the pipeline. Obviously we're going to expand the available time intervals, and you will soon be able to define streaks in terms of the following time intervals: weekly, monthly, yearly, minutely. More may be added, but we'll allow your needs to drive those efforts. If you want to be notified when these new features are added, if you'd like to suggest one, or if you'd just like to be kept abreast of what we're doing, follow us on twitter. We'll keep you posted. 2021-04-09 12:24:58
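Under the hood, a day-based streak check boils down to "does at least one qualifying activity exist on each of the last N days". A rough sketch in Python (a hypothetical helper of mine, not rnkt's actual implementation; the function name and arguments are made up):

```python
from datetime import date, timedelta

def has_streak(activity_days, n, today):
    """True if at least one activity happened on each of the n
    consecutive days ending with `today`."""
    days = set(activity_days)
    return all(today - timedelta(days=i) in days for i in range(n))

# A student passed the daily quiz three days running:
passed = [date(2021, 4, 7), date(2021, 4, 8), date(2021, 4, 9)]
print(has_streak(passed, 3, date(2021, 4, 9)))  # True
print(has_streak(passed, 4, date(2021, 4, 9)))  # False: no pass on April 6
```

The criteria-conditions step described above would simply filter `activity_days` down to qualifying activities before this check runs.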
海外TECH DEV Community Spring Native: Spring Boot but faster https://dev.to/antmordel/spring-native-spring-boot-but-faster-4o5h Spring Native: Spring Boot but faster. While you are reading these lines, there are a few hundred Spring Boot applications struggling to load a JVM and start. Yet others are running but consuming a huge amount of memory, since a JVM needs to be running to support the application on top of it. And adding this latency and resource consumption to a more cloud-native world is just not a match. Some developers simply abandoned the idea of using Spring Boot in a world conquered by containers, and moved to those battles with Golang. Others stayed with Java, but in another flavour like Quarkus or Micronaut (i.e. GraalVM-powered). But in March something happened in the community of Spring Boot users: the release of the beta program of Spring Native. What was slow and resource-consuming is now just awesome. Why? Spring Boot applications can now be compiled using GraalVM, i.e. compiled to native code. The image size for an example project can go from MB to MB; RAM consumption can go from MB to MB; start-up times can go from seconds to milliseconds. Seems awesome. I've written a small example that you can take a look at and comment on: antmordel/bookend, a Spring Native starter app (Spring Boot, just faster). Bookend is an example Spring Boot REST application using Spring Native (Spring Boot, Java, Lombok), with a comparison between the JVM image build and the GraalVM image build: using Spring Native, seconds; using the JVM, seconds. Get started: create the image locally with ./mvnw clean spring-boot:build-image, then run the just-created local image with docker run --rm -p bookend-native-SNAPSHOT, or just docker compose up. View on GitHub. And if you are Spanish, I've explained all this in a more multimedia format. Thanks for reading, I appreciate your comments. 2021-04-09 12:16:43
Apple AppleInsider - Frontpage News Six of top ten smartphones sold in Jan. were Apple's, iPhone 12 sales king https://appleinsider.com/articles/21/04/09/six-of-top-ten-smartphones-sold-in-january-were-iphones-iphone-12-sales-king Six of top ten smartphones sold in Jan. were Apple's, iPhone sales king. The iPhone was the best-selling smartphone globally during January, with Apple snagging six of the top ten spots. All four members of the iPhone series made the top ten. According to Counterpoint Research, the iPhone led the way, followed by iPhone Pro and iPhone Pro Max. Together those three phones captured of Apple's monthly sales. Read more 2021-04-09 12:46:21
Apple AppleInsider - Frontpage News Advertisers 'staring into the abyss' as Apple limits ad tracking https://appleinsider.com/articles/21/04/09/advertisers-staring-into-the-abyss-as-apple-limits-ad-tracking Advertisers 'staring into the abyss' as Apple limits ad tracking. Apple's forthcoming App Tracking Transparency privacy feature in iOS leaves advertisers guessing, but sure that revenues will be hit. Users will be asked whether they want to allow ad tracking or not. As Apple reminds developers to prepare for App Tracking Transparency (ATT) in iOS, marketing firms expect their billion US mobile ad industry to take a hit. Read more 2021-04-09 12:36:15
海外TECH Engadget Engadget Podcast: Michio Kaku on 'The God Equation,' LG gives up on phones https://www.engadget.com/engadget-podcast-michio-kaku-god-equation-lg-mobile-123029717.html Engadget Podcast: Michio Kaku on 'The God Equation', LG gives up on phones. This week we explore why the Biden administration is bringing on Big Tech critics like Lina Khan and Tim Wu. It seems like they're gearing up to regulate the tech world even more. 2021-04-09 12:30:29
海外TECH Engadget GE is working to put COVID-19 virus-detecting sensors in phones https://www.engadget.com/ge-covid-19-virus-detecting-sensor-phones-122240128.html GE is working to put COVID virus detecting sensors in phonesScientists at GE Research have been awarded a grant to develop tiny sensors that can be embedded inside phones to identify COVID particles on surfaces 2021-04-09 12:22:40
海外ニュース Japan Times latest articles ‘Full capacity everywhere’: Manila hospitals struggle as virus surges https://www.japantimes.co.jp/news/2021/04/09/asia-pacific/philippines-virus-surge-hospitals/ Full capacity everywhere Manila hospitals struggle as virus surgesMore contagious variants of the coronavirus have been blamed for a record surge in infections in Metro Manila that has overstretched hospitals 2021-04-09 22:52:46
海外ニュース Japan Times latest articles Likely legal, ‘vaccine passports’ emerge as the next coronavirus divide in U.S. https://www.japantimes.co.jp/news/2021/04/09/world/politics-diplomacy-world/covid-vaccines-passport/ Likely legal vaccine passports emerge as the next coronavirus divide in U S Around the U S businesses schools and politicians are considering vaccine passports as a path to reviving the economy and getting Americans back to work and 2021-04-09 22:01:57
海外ニュース Japan Times latest articles Panel of Japan’s ruling LDP to seek early passage of law on LGBT understanding https://www.japantimes.co.jp/news/2021/04/09/national/lgbt-ldp-same-sex-marriage-discrimination/ Panel of Japan s ruling LDP to seek early passage of law on LGBT understandingA draft outline of the bill requires the government to set a basic plan for promoting understanding of sexual and gender minorities and review its 2021-04-09 21:42:43
海外ニュース Japan Times latest articles Biden sets out first attempt at tackling U.S. gun violence ‘epidemic’ https://www.japantimes.co.jp/news/2021/04/09/world/biden-guns-executive-order/ Biden sets out first attempt at tackling U S gun violence epidemic With Congress unable to agree on broad new gun regulations Biden has announced six executive measures that he said will help tamp down the crisis 2021-04-09 21:39:33
海外ニュース Japan Times latest articles Netflix gets ‘Spider-Man’ and ‘Jumanji’ franchises in multiyear Sony deal https://www.japantimes.co.jp/news/2021/04/09/business/corporate-business/netflix-sony-television-film/ dealfinancial 2021-04-09 21:15:51
海外ニュース Japan Times latest articles In world first, COVID-19 patient in Japan undergoes living donor lung transplant https://www.japantimes.co.jp/news/2021/04/09/national/science-health/covid-19-japan-kyoto-hospitals-transplants-health-kyoto-university-hospital/ In world first COVID patient in Japan undergoes living donor lung transplantThe operation which took around hours to perform transplanted part of healthy lungs from the patient s husband and son 2021-04-09 21:03:38
海外ニュース Japan Times latest articles Yasunobu Okugawa earns first win as Swallows beat Carp https://www.japantimes.co.jp/sports/2021/04/09/baseball/japanese-baseball/okugawa-swallows-beat-carp/ victory 2021-04-09 22:12:00
海外ニュース Japan Times latest articles Nadeshiko Japan routs Paraguay 7-0 in friendly https://www.japantimes.co.jp/sports/2021/04/09/soccer/japan-routs-paraguay/ march 2021-04-09 21:10:08
ニュース BBC News - Home Prince Philip has died aged 99, Buckingham Palace announces https://www.bbc.co.uk/news/uk-11437314 husband 2021-04-09 12:53:40
ニュース BBC News - Home Prince Philip: World leaders react to death of the Duke of Edinburgh https://www.bbc.co.uk/news/world-56687736 family 2021-04-09 12:25:26
GCP Google Cloud Platform Japan 公式ブログ Cloud SQL for SQL Server - Active Directory 認証が利用可能に https://cloud.google.com/blog/ja/products/databases/windows-authentication-now-supported-by-googles-cloud-sql-database/ インスタンス名は外部から閲覧可能です。 2021-04-09 14:00:00
GCP Google Cloud Platform Japan 公式ブログ Dataproc Metastore の一般提供開始により、データレイク管理がさらに簡単に https://cloud.google.com/blog/ja/products/data-analytics/data-lake-management-just-got-easier-dataproc-metastore-ga/ GoogleCloudのデータレイクにDataprocMetastoreを使用するDataprocMetastoreはつ以上のDataprocクラスタやセルフマネージドクラスタに簡単に接続して、さまざまなオープンソース処理エンジンでHiveテーブルを共有できます。 2021-04-09 14:00:00
GCP Google Cloud Platform Japan 公式ブログ 株式会社プレイド:Anthos clusters on AWS を活用し、マルチクラウドでも GKE 由来の高度なマネージド環境を享受 https://cloud.google.com/blog/ja/topics/customers/plaid-anthos-clusters-on-aws/ 対象となったパートはそれまでVMに構築されていたのですが、これをコンテナへ移行するに際しては、GKEの使いやすさを評価し、新たにGoogleCloud上に構築することになりました。 2021-04-09 14:00:00
GCP Google Cloud Platform Japan 公式ブログ 政府との連携による気候問題への取り組み https://cloud.google.com/blog/ja/topics/public-sector/working-with-governments-on-climate-goals/ Googleでは、年までに以上の都市に支援を提供し、年間炭素排出量を億トン減らすことを目標として設定していますこれは、日本の年分の炭素排出量に相当します。 2021-04-09 13:00:00
GCP Google Cloud Platform Japan 公式ブログ Auto Trader: Oracle から PostgreSQL への道のり https://cloud.google.com/blog/ja/products/databases/how-auto-trader-migrated-its-on-prem-databases-to-cloud-sql/ それ以来、このサービスのCloudSQLインスタンスのリソースを分以内で簡単にスケールできています。 2021-04-09 13:00:00
北海道 北海道新聞 ロ4―7西(9日) 西武が打ち勝つ https://www.hokkaido-np.co.jp/article/531595/ 連敗 2021-04-09 21:16:00
北海道 北海道新聞 オ1―2日(9日) 日本ハムが連敗7で止める https://www.hokkaido-np.co.jp/article/531589/ 日本ハム 2021-04-09 21:13:00
北海道 北海道新聞 胆振管内でで2人感染 新型コロナ https://www.hokkaido-np.co.jp/article/531594/ 胆振管内 2021-04-09 21:15:00
北海道 北海道新聞 観光回復、期待しぼむ 「まん延防止」大阪、東京適用で小樽 連休も見込み薄 https://www.hokkaido-np.co.jp/article/531586/ 特別措置法 2021-04-09 21:04:00
GCP Cloud Blog Recovering global wildlife populations using ML https://cloud.google.com/blog/topics/developers-practitioners/recovering-global-wildlife-populations-using-ml/ Recovering global wildlife populations using ML. Wildlife provides critical benefits that support nature and people. Unfortunately, wildlife is slowly but surely disappearing from our planet, and we lack reliable, up-to-date information to understand and prevent this loss. By harnessing the power of technology and science, we can unite millions of photos from motion-sensored cameras around the world, reveal how wildlife is faring in near real time, and make better decisions (wildlifeinsights.org/about).

Case study background. Google partnered with several leading conservation organizations to build a project known as Wildlife Insights, a web app that enables people to upload, manage, and identify images of wildlife from camera traps. The intention is for anyone in the world who wishes to protect wildlife populations and take inventory of their health to do so in a non-invasive way. The tricky part, however, is reviewing each of the millions of photos and identifying every species, and this is where machine learning is of great help with this big-data problem. The models built by the inter-organizational collaboration presently classify up to species, and include region-based logic such as preventing a camera trap in Asia, for example, from classifying an African elephant, using geo-fencing. These models have been in development for several years and are continuously being evolved to serve animals all over the globe. You can learn more about it here. This worldwide collaboration took a lot of work, but much of the basic technology used is available to you at WildlifeInsights.org. And for those interested in learning how to build a basic image classifier inspired by this wildlife project, please continue reading. You can also go deeper by trying out our sample tutorial at the end, which contains the code we used and lets you run it interactively in a step-by-step notebook (you can click the "play" icon at each step to run each process).

How to build an image classification model to protect wildlife. We're launching a Google Cloud series called "People and Planet AI" to empower users to build amazing apps that can help solve complex social and environmental challenges, inspired by real case studies such as the project above. In this first episode, we show you how to use Google Cloud's big data & ML capabilities to automatically classify images of animals from camera traps. You can check out the video here.

Requirements to get started. Hardware: you will need two hardware components. Camera trap(s), to take photos, which we also strongly recommend you share by uploading to Wildlife Insights to help build a global observation network of wildlife. Microcontroller(s), like a Raspberry Pi or Arduino, to serve as a small Linux computer for each camera; each hosts the ML model locally and does the heavy lifting of labeling images by species, as well as omitting blank images that aren't helpful. With these two tools, the goal is to have the labeled images uploaded via an internet connection. This can be done over a cellular network; however, in remote areas you can periodically carry the microcontroller to a wifi-enabled area to do the transfer. Friendly tips: in order for the microcontroller to send the images it has classified over the internet, we recommend using Cloud Pub/Sub, which publishes the images as messages to an endpoint in your cloud infrastructure (Pub/Sub helps send and receive messages between independent applications over the internet); and when managing dozens or hundreds of camera traps and their microcontrollers, you can leverage Cloud IoT Core to upload your ML classification model to all these devices simultaneously, especially as you update the model with new data from the field.

Software: the model is trained and output via Google Cloud, using a free camera trap dataset from lila.science. The cost of each training run is small as of the publishing of this article; the granular breakdown is listed at the bottom of this article. TIP: you can retrain once or twice a year, depending on how many images of new species you collect and/or how frequently you want to upgrade the image classification model.

Image selection for training and validation of an ML model. The two products that perform most of the heavy lifting are Dataflow and AI Platform (Unified). Dataflow is a serverless data-processing service that can process the very large amounts of data needed for machine-learning activities. In this scenario, we use it to run jobs. The first Dataflow job creates a database in BigQuery from image metadata. The metadata columns collected are taken from the Camera Traps database mentioned above (this is a one-time setup): category, the species we want to predict (this is our label), and file name, the path where the image file is located. We will also do some very basic data cleaning, like discarding rows with categories that are not useful, like: ref, empty, unidentifiable, unidentified, unknown. The second Dataflow job makes a list of images that would be great for creating a balanced dataset. This is informed by our requirements for selecting images that have a minimum and maximum amount per species category. This is to ensure we train a bias-free model that doesn't include a species it has too little information about (and would later be unable to classify correctly). In emoji examples, the data looks skewed like this. When this dataset balancing act is completed, Dataflow proceeds to download the actual images into Cloud Storage from lila.science. This enables us to store and process only the images that are relevant, not the entire dataset, keeping computation time and costs to a minimum.

Building the ML model. We build a model with AI Platform (Unified), which uses AutoML, and the images we stored in Cloud Storage. This is the part where, historically speaking, training a model could take days; now it takes hours. Once the model is ready, you can download it and import it into each microcontroller to begin classifying the images locally. To quickly show you how this looks, we will enable the model to live in the cloud; let's see what the model thinks about some of these images.

Export model for a microcontroller. The prior example exported the model as an online endpoint; however, in order to export your model and then download it into a microcontroller, here are the instructions.

Try it out. You can use WildLifeInsights.org's free webapp to upload and classify images today. However, if you would like to build your own classification model, try out our sample or check out the code in GitHub. It requires no prior knowledge, and it contains all the code you need to train an image classification model and then run several predictions in an online notebook format (simply scroll to the bottom and click "open in Colab" per this screenshot). As mentioned earlier, what used to take days can now compute in hours. So all you will need for this project is to: set aside some hours (you click run for each step, and the longest part just runs in the background; you simply need to check back after some hours to move on to the next and last step); and create a free Google Cloud project (you will then delete that project when you are finished, to ensure you do not incur additional costs, since the online model has an hourly cost).

Optional information, pricing breakdown. The total cost of running this sample in your own Cloud project will vary slightly on each run. Please note rates are based on the date of the publishing of this article and could vary. Cloud Storage is in the free tier (below GB). The breakdown does not include Cloud IoT Core, nor the hourly charge if you wanted a cloud server to be an endpoint that constantly listens so devices can speak to it online anytime (that option is for when you don't use a microcontroller; it's a prediction web service). Create images database (minutes of wall time), Dataflow: total vCPU (vCPU hr), total memory (GB hr), total HDD PD (GB hr), shuffle data processed (unknown but probably negligible). Train AutoML model (hour of wall time), Dataflow: total vCPU (vCPU hr), total memory (GB hr), total HDD PD (GB hr), shuffle data processed (unknown but probably negligible); AutoML training: training node hrs (node hr). Getting predictions (assuming an hour of model deployed time; you must undeploy the model to stop further charges): AutoML predictions, deployment per hr. 2021-04-09 12:35:00
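The balancing step in the second Dataflow job can be pictured as a simple filter-and-cap over (species, image) rows. A standalone sketch, not the actual pipeline code (the `balance` helper, the thresholds, and the row format are all made up for illustration):

```python
import random
from collections import Counter

# Categories the first job already flags as not useful.
DROP = {"ref", "empty", "unidentifiable", "unidentified", "unknown"}

def balance(rows, min_count, max_count, seed=0):
    """Keep only species with at least `min_count` images, and cap
    each species at `max_count`, so that no class dominates training
    and no class has too little data to learn from."""
    rows = [(s, f) for s, f in rows if s not in DROP]
    counts = Counter(s for s, _ in rows)
    rng = random.Random(seed)
    out = []
    for species, n in counts.items():
        if n < min_count:
            continue  # too little data to classify this species reliably
        imgs = [f for s, f in rows if s == species]
        rng.shuffle(imgs)
        out.extend((species, f) for f in imgs[:max_count])
    return out

rows = ([("elephant", "e%d.jpg" % i) for i in range(50)] +
        [("pangolin", "p%d.jpg" % i) for i in range(2)] +
        [("unknown", "u0.jpg")])
balanced = balance(rows, min_count=5, max_count=20)
print(Counter(s for s, _ in balanced))  # elephant capped at 20; others dropped
```

Only the surviving rows would then have their images downloaded into Cloud Storage for AutoML training.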
GCP Cloud Blog JA Cloud SQL for SQL Server - Active Directory 認証が利用可能に https://cloud.google.com/blog/ja/products/databases/windows-authentication-now-supported-by-googles-cloud-sql-database/ インスタンス名は外部から閲覧可能です。 2021-04-09 14:00:00
GCP Cloud Blog JA Dataproc Metastore の一般提供開始により、データレイク管理がさらに簡単に https://cloud.google.com/blog/ja/products/data-analytics/data-lake-management-just-got-easier-dataproc-metastore-ga/ GoogleCloudのデータレイクにDataprocMetastoreを使用するDataprocMetastoreはつ以上のDataprocクラスタやセルフマネージドクラスタに簡単に接続して、さまざまなオープンソース処理エンジンでHiveテーブルを共有できます。 2021-04-09 14:00:00
GCP Cloud Blog JA 株式会社プレイド:Anthos clusters on AWS を活用し、マルチクラウドでも GKE 由来の高度なマネージド環境を享受 https://cloud.google.com/blog/ja/topics/customers/plaid-anthos-clusters-on-aws/ 対象となったパートはそれまでVMに構築されていたのですが、これをコンテナへ移行するに際しては、GKEの使いやすさを評価し、新たにGoogleCloud上に構築することになりました。 2021-04-09 14:00:00
GCP Cloud Blog JA 政府との連携による気候問題への取り組み https://cloud.google.com/blog/ja/topics/public-sector/working-with-governments-on-climate-goals/ Googleでは、年までに以上の都市に支援を提供し、年間炭素排出量を億トン減らすことを目標として設定していますこれは、日本の年分の炭素排出量に相当します。 2021-04-09 13:00:00
GCP Cloud Blog JA Auto Trader: Oracle から PostgreSQL への道のり https://cloud.google.com/blog/ja/products/databases/how-auto-trader-migrated-its-on-prem-databases-to-cloud-sql/ それ以来、このサービスのCloudSQLインスタンスのリソースを分以内で簡単にスケールできています。 2021-04-09 13:00:00
