Posted: 2022-02-11 01:36:25 RSS feed digest for 2022-02-11 01:00 (44 items)

Category Site name Article title / trend words Link URL Frequent words / summary / search volume Date registered
Google Official Google Blog Be ready for what’s next: growing your business in 2022 https://blog.google/products/ads-commerce/growing-your-business-in-2022/ Be ready for what s next growing your business in Like many of you I ve spent the first month of ramping back up at work and making progress on plans for this year And while there still isn t a playbook for navigating a pandemic that s upended daily life my team is continuing to focus on ways Google can help you respond and deliver on your greatest business needs Buying behavior will continue to change and people will use technology in new ways to discover products and brands That s why my team and I are more committed than ever to connecting consumers with the businesses around them while continuing to power a free and open internet Buying behavior will continue to change and people will use technology in new ways to discover products and brands This is an intentionally ambitious goal and today I want to share the three priorities that are guiding our product roadmap unlocking new opportunities for business growth preparing for the future of measurement and ensuring we exceed consumer expectations for privacy Unlocking new opportunities for growth with automationShifts in consumer behavior continue to present challenges and opportunities for businesses around the globe And despite some parts of the world reopening it appears many of these shifts will not only stay but accelerate Take food delivery for example Searches for “takeout restaurants surged last year compared to the start of the pandemic ac In meeting with many advertisers I ve heard how readiness speed and agility have been critical for managing complexity and driving growth in these uncertain times That s why advertisers are turning to automation more than ever before In fact over of Google advertisers are now using automated bidding to free up time and improve ad performance af Over of Google advertisers are now using automated bidding to free up time and improve ad performance It s also important to build on innovations like Performance Max campaigns ーand make them easy to use This single campaign enables marketers to find incremental high value customers across Google s full range of advertising channels and inventory By simply providing conversion goals audience signals and a number of creative assets advertisers that use Performance Max campaigns in their accounts have seen an average increase of total incremental conversions at a similar cost per action e Performance Max campaigns help you increase conversions acrossGoogle s full range of advertising channels and inventory Similarly Discovery campaigns allow you to reach up to three billion users across Google feeds like YouTube and Discover ーall from a single campaign You can deliver highly visual inspiring and personalized ad experiences to people who are ready to discover your brand Across all Google Ads campaigns ーincluding Video action campaigns and Smart Display campaigns ーour data shows that automation is unlocking growth for businesses around the world This is especially true for Search campaigns As we continue improving our Search products we re seeing the multiplicative effect of using automated targeting creative and bidding together Automation is unlocking growth for businesses around the world One of my favorite examples comes from tails com which is based in the UK The tailor made dog food brand took a test learn and scale approach as it expanded into new markets across Europe Using the combination of 
broad match Smart Bidding and responsive search ads tails com increased sign ups in Germany from its generic Search campaigns by Watch how tails com expanded its business in Germany Preparing for the future of measurementWhether it s Google Ads or Google Analytics the products you use should help you solve the unique challenges facing your business They also need to deliver meaningful results and performance especially during times of change and uncertainty For example we know that new approaches to measurement are critical as cookies and other identifiers are phased out The future of measurement is combining consented first party data with insights from new privacy safe technology like browser APIs and using modeling to close data gaps Solutions like enhanced conversions consent mode conversion modeling and data driven attribution allow you to respect your customers privacy preferences while confidently measuring the impact of your ads The future of measurement is combining consented first party data with insights from new privacy safe technology First party data is not only critical for measuring your media it s also essential in understanding your customers Our research shows that companies that link their first party data sources can generate times the incremental revenue from single ad placement communication or outreach Assigning value to your conversions and using first party data solutions like Customer Match enable you to express what s most valuable to your business and find opportunities for growth Exceeding consumers expectations for digital privacyThere s been a massive acceleration in the way people use technology to connect with businesses during the pandemic Meanwhile there are rising expectations for user privacy and control You have to meet your customers where they are and build meaningful relationships in a privacy safe way Empowering best in class marketingWhether a global brand like PepsiCo or an online business like tails com your stories of resilience and ingenuity continue to inspire my team to build for the future There s never been a more exciting time to be a marketer and we re here to be your partner along the way As you make progress on your plans for the year ahead continue to share your stories and feedback within the product and at events like Think Retail and Google Marketing Live We ll continue listening sharing insights and building products to help you come back stronger in 2022-02-10 16:00:00
python New posts tagged Python - Qiita [Python beginners] Drawing box plots with matplotlib and seaborn https://qiita.com/yumi1123/items/8269697fdd70c4839ef0 df_iris.boxplot(patch_artist=False); to lay the boxes out horizontally, pass vert=False, i.e. df_iris.boxplot(vert=False). With seaborn you get a similar result from sns.boxplot(data=df_iris). To draw grouped box plots, pick a specific column and build the box plot grouped by the values in that column. 2022-02-11 00:39:22
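To make the snippet above easier to follow, here is a minimal runnable sketch of the calls it refers to. It assumes seaborn's bundled iris sample stands in for the author's df_iris, so the column names species and sepal_length are assumptions rather than details from the original post.

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Stand-in for the article's df_iris (seaborn's bundled iris sample; fetched over the network on first load).
df_iris = sns.load_dataset("iris")

# pandas: one vertical box per numeric column.
df_iris.boxplot(patch_artist=False)
plt.show()

# vert=False lays the boxes out horizontally.
df_iris.boxplot(vert=False)
plt.show()

# seaborn gives a similar figure from the same DataFrame.
sns.boxplot(data=df_iris)
plt.show()

# Grouped box plot: pick a column and group the values by it.
sns.boxplot(data=df_iris, x="species", y="sepal_length")
plt.show()
```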
Ruby New posts tagged Ruby - Qiita Rails assert_template and assert_select https://qiita.com/d_takahashi/items/65d9c2c0527a604a7d4c assert_select(element/selector, condition, message) asserts that every selected element matches the condition. 2022-02-11 00:14:17
Ruby New posts tagged Ruby - Qiita The select method and related methods explained https://qiita.com/wangqijiangjun/items/52c1a8d1db072085180c If no element matches the condition, an empty array is returned. 2022-02-11 00:00:49
Linux New posts tagged Ubuntu - Qiita Building Ubuntu Server 20.04.3 LTS #12: security settings 5-5 https://qiita.com/art22/items/e3dc823966be390b2f3d Finally, configure the entry under /etc/cron.d to run the shell script we created, and confirm that it executes automatically. 2022-02-11 00:37:56
GCP New posts tagged gcp - Qiita How to decrypt the Remote Desktop password of a GCE instance (Windows) created with the Google Cloud API https://qiita.com/tacitusxo/items/750c4eca47b42348147e Background: I had occasion to create a Windows GCE instance using the .NET Cloud Client Libraries, and I struggled with decrypting the password, so I am sharing the decryption steps. 2022-02-11 00:04:27
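The entry above only describes the problem, so as a complement here is a hedged sketch of the final decryption step using the cryptography package. It assumes you already have the base64-encoded encryptedPassword returned by GCE and the PEM private key whose public half was written to the instance's windows-keys metadata; the file name is a placeholder, and the OAEP/SHA-1 padding follows Google's published samples and should be checked against the current docs.

```python
import base64

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Placeholder inputs: the encrypted password string from the API/guest output
# and the private key matching the public key added to windows-keys metadata.
encrypted_b64 = "...base64 encryptedPassword from GCE..."
with open("windows-keys-private.pem", "rb") as f:  # hypothetical path
    private_key = serialization.load_pem_private_key(f.read(), password=None)

# GCE encrypts the generated password with the supplied RSA public key;
# Google's samples decrypt it with OAEP padding (SHA-1 shown here as an assumption).
password = private_key.decrypt(
    base64.b64decode(encrypted_b64),
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA1()),
        algorithm=hashes.SHA1(),
        label=None,
    ),
)
print(password.decode("utf-8"))
```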
Git New posts tagged Git - Qiita git push origin HEAD https://qiita.com/yuya417/items/605421d8f5f5d20aa038 So why does git push origin HEAD push to the remote branch? Why can git push origin HEAD push the current branch? First of all, HEAD is the key. 2022-02-11 00:45:52
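To make the point about HEAD concrete, here is a small illustrative Python sketch; the git() helper below is hypothetical, but the git commands it runs are standard. HEAD is a symbolic ref to the branch currently checked out, which is why git push origin HEAD always pushes the current branch.

```python
import subprocess

def git(*args: str) -> str:
    """Run a git command and return its trimmed stdout."""
    result = subprocess.run(["git", *args], check=True, capture_output=True, text=True)
    return result.stdout.strip()

# HEAD resolves to whatever branch is currently checked out.
current_branch = git("symbolic-ref", "--short", "HEAD")
print(f"HEAD -> {current_branch}")

# These two pushes therefore target the same branch; the HEAD form needs no
# editing when you switch branches (left commented out to avoid side effects):
# git("push", "origin", current_branch)
# git("push", "origin", "HEAD")
```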
Ruby New posts tagged Rails - Qiita Rails assert_template and assert_select https://qiita.com/d_takahashi/items/65d9c2c0527a604a7d4c assert_select(element/selector, condition, message) asserts that every selected element matches the condition. 2022-02-11 00:14:17
Ruby New posts tagged Rails - Qiita The select method and related methods explained https://qiita.com/wangqijiangjun/items/52c1a8d1db072085180c If no element matches the condition, an empty array is returned. 2022-02-11 00:00:49
Tech blog Mercari Engineering Blog The problems Istio solves for us https://engineering.mercari.com/blog/entry/20220210-how-istio-solved-our-problems/ networ… 2022-02-10 16:36:26
Tech blog Developers.IO Breaking down the question "Does hard work make people grow?" https://dev.classmethod.jp/articles/fake-hardwork-myth/ oversight 2022-02-10 15:26:20
海外TECH MakeUseOf 8 Great Apps for Exercising With Personal Trainers https://www.makeuseof.com/apps-exercising-personal-trainers/ trainer 2022-02-10 15:30:22
海外TECH MakeUseOf The 8 Best Fight Sticks to KO Your Opponents https://www.makeuseof.com/tag/best-arcade-stick/ knockout 2022-02-10 15:24:13
海外TECH MakeUseOf Samsung Galaxy S22: Which Countries Get the Snapdragon Version and Which Get Exynos? https://www.makeuseof.com/galaxy-s22-series-where-snapdragon-exynos/ Samsung Galaxy S Which Countries Get the Snapdragon Version and Which Get Exynos The Galaxy S series comes in versions powered by Qualcomm s Snapdragon processor and Samsung s own Exynos chip Which can you get where you are 2022-02-10 15:17:12
海外TECH DEV Community What are the qualities of a Principal Engineer (or equivalent title)? https://dev.to/ben/what-are-the-qualities-of-a-principal-engineer-or-equivalent-title-50aj What are the qualities of a Principal Engineer or equivalent title What in your mind make for an effective principal level engineer ーthat is say one level above senior I put this in quotes because the titles are going to be flexible from place to place but what in your mind does it mean to be that level above senior I think about this because we are hiring for a principal role and we have our own criteria but I think at the end of the day it is a matter of organizational expectations and personal expectations So I m wondering what you think allie p allie p Come work with me We re hiring a principal engineer at forem Great team Everyone is remote Focused on a product that builds community Check it out jobs forem com o principal fu… PM Feb 2022-02-10 15:48:43
海外TECH DEV Community Contributing to the Apache Airflow project https://dev.to/aws/contributing-to-the-apache-airflow-project-37mf Contributing to the Apache Airflow project Contributing to Apache Airflow IntroductionIn this series of posts I am going to share what I learn as embark on my first upstream contribution to the Apache Airflow project The purpose is to show you how typical open source projects like Apache Airflow work how you engage with the community to orchestrate change and hopefully inspire more people to contribute to this open source project I will post regular updates as a series of posts as the journey unfolds But as always we need to set the stage and start with our reason for doing so The problemIn a previous post Creating a multi architecture CI CD solution with Amazon ECS and ECS Anywhere I set up a typical environment that you might come across with customers that are looking at a Hybrid approach to managing and deploying their workloads This allows you to run container images anywhere where you can deploy the ECS Anywhere agent and uses a specific configuration parameter launchtype EXTERNAL in order to know where to run your container Tasks More of this in a moment In this environment I have various data silos that reside in both my AWS environment and on my local network In this instance it is a MySQL database and the MySQL database contains different data across the two environments As part of building out my data lake on Amazon S I am pulling data from both these environments However in my particular use case I want to be able to control what data is moved to the data lake My solution was to use Apache Airflow and create a new workflow to orchestrate this I planned to create an ETL script and ensure the script can take parameters to maximise reuse and flexibility FROM public ecr aws docker library python latestWORKDIR appCOPY requirements txt requirements txtRUN pip install r requirements txtCOPY ENTRYPOINT python app read data q py Before testing this in Apache Airflow I package up the container image push it up to Amazon ECR and then test that it runs from the command You can find the code here Now that I have my ETL script I can use an Apache Airflow operator that integrates with Amazon ECS to orchestrate this That operator is called the ECS Operator The ECS Operator takes a number of parameters One of the key ones is launchtype as mentioned before this is how the ECS control plane knows where to run your tasks Reading the docs the two supported launchtypes for this Apache Airflow operator are EC and FARGATE We know that there is a third launchtype of EXTERNAL but that does not appear to be listed in the docs This is the simple workflow we create just to test how it works from airflow import DAGfrom datetime import datetime timedeltafrom airflow providers amazon aws operators ecs import ECSOperatordefault args owner ubuntu start date datetime retry delay timedelta seconds with DAG airflow dag test catchup False default args default args schedule interval None as dag query db ECSOperator task id airflow hybrid ecs task query dag dag cluster test hybrid task definition airflow hybrid ecs task launch type EXTERNAL overrides containerOverrides name public ecr aws abhu beachgeek latest command ricsue airflow hybrid temp csv select from customers rds airflow hybrid eu west awslogs group ecs hyrid airflow awslogs stream prefix ecs test ECSOperator task id test dag dag cluster test hybrid task definition test launch type EXTERNAL overrides containerOverrides awslogs 
group ecs test awslogs stream prefix ecs testI first create an ECS Cluster running on EC set the launchtype to EC and the trigger the DAG As expected the ECS task takes the parameters and runs the script exporting a file to my Amazon S bucket All good I know this works Next I change the DAG setting the launchtype of EXTERNAL results in an error when triggering the workflow as follows UTC taskinstance py INFO Marking task as FAILED dag id airflow dag test task id test execution date T start date T end date T UTC standard task runner py ERROR Failed to execute job for task test airflow providers amazon aws exceptions ECSOperatorError tasks failures arn arn aws ecs eu west container instance ccfaccadfdbc reason LAUNCH TYPE ResponseMetadata RequestId decdb fd f bfd afcdda HTTPStatusCode HTTPHeaders x amzn requestid decdb fd f bfd afcdda content type application x amz json content length date Tue Feb GMT RetryAttempts UTC local task job py INFO Task exited with return code We can see that the LAUNCH TYPE is the reason for the failure So it looks like the ECS Operator currently does not support the launchtype value of EXTERNAL Oh well at least we tried WorkaroundUsing Apache Airflow operators is my preferred way of interacting with downstream applications and services when building my workflows They remove a lot of the differentiated heavy lifting but they can also improve the performance and stability of your workflows Given we cannot use the ECS Operator I fall back to creating a simple Python operator which uses boto to do the same thing from airflow import DAGfrom datetime import datetime timedeltafrom airflow operators python import PythonOperatorimport botoimport jsondefault args owner ubuntu start date datetime retry delay timedelta seconds client boto client ecs region name eu west Function that will take variables and create our new ECS Task Definitiondef create task ti response client register task definition containerDefinitions name airflow hybrid boto image public ecr aws abhu beachgeek latest cpu portMappings essential True environment mountPoints volumesFrom command ricsue airflow hybrid period temp csv select from customers WHERE location China rds airflow hybrid eu west logConfiguration logDriver awslogs options awslogs group ecs test external awslogs region eu west awslogs stream prefix ecs taskRoleArn arn aws iam role ecsTaskExecutionRole executionRoleArn arn aws iam role ecsTaskExecutionRole family test external networkMode bridge requiresCompatibilities EXTERNAL cpu memory we now need to store the version of the new task so we can ensure idemopotency new taskdef json dumps response taskDefinition revision indent default str print TaskDef is now at str new taskdef return new taskdef ti xcom push key new taskdef value new taskdef Function that will run our ECS Taskdef run task ti new taskdef ti xcom pull key new taskdef task ids create taskdef new taskdef ti xcom pull task ids create taskdef print TaskDef passed is str new taskdef response client run task cluster test hybrid count launchType EXTERNAL taskDefinition test external taskdef format taskdef new taskdef with DAG airflow ecsanywhere boto catchup False default args default args schedule interval None as dag first task PythonOperator task id create taskdef python callable create task provide context True dag dag second task PythonOperator task id run task python callable run task provide context True dag dag first task gt gt second taskOwnershipWhilst this works I begin to think that maybe I can fix the ECS Operator and 
add the new launchtype of EXTERNAL I mean how hard can it be One of the underpinning principals that makes open source tick is that when you identify a problem like the one I have just described that you don t just look for someone else to fix it If you are getting value out of that software then you need to take ownership of the issue and look for a resolution It maybe that you are not a coder and perhaps do not have the technical skills needed but that is not an excuse Whether it is in helping to describe and help triage the issue providing support and resources for those more technical that can do a fix through to your involvement in the end resolution you need to take ownership Why mention this I want sure that any my contribution is anchored in a strong foundation In this instance improving how the current ECS operator works will optimise how I can use and integrate ETL tasks that I want to run via Amazon ECS PlanI work best when I have a plan and I am not sure if this is the best or right plan please let me know but the plan I have is to initially get the latest stable code of Apache Airflow up and running in my development environment Once that is working the next step is to understand in more detail how the ECS Operator code works and then make some minor changes to make sure I understand the flow from making a change to then how that changes flows into the final build Once I have got there I will hopefully have a better understanding of how to make changes and how to incorporate those within the parameters of this project Am I allowed to contribute Organisational policies I am assuming you work for a company but if you are an individual contributor I guess this does not apply so you can skip this section Before jumping into looking at how I fix this issue the first step was for me to understand how my organisation feels about me doing so Many organisations now have an open source usage and contribution policy and these are typically created by and owned managed by the Open Source Program Office or OSPO In my case we have clearly defined policies and guidance that helps our technical communities understand how and what they can do You should use this as an opportunity to understand more about your own organisations policies If you find these hard to find or perhaps you do not yet have an OSPO then maybe you should explore whether the time is right for you do look into this when you look at the various data points it is very likely you are using open source software within your business Make sure that you have the right setup to make open source work for you something an OSPO will absolutely help you with Anyway a small diversion but an important one If you work for a business being able and supported in how you contribute to the open source software you use is critical to the long term sustainability of the open source commons Contribution the first stepReviewing the Contribution fileWith permission granted the first step is to head over to the Apache Airflow project and look for information on how to contribute It is a good practice to create a CONTRIBUTING document in your project to help guide new contributors as well as ensure consistency for those who might already be familiar Apache Airflow has a detailed CONTRIBUTING guide as well as a guide for new contributors This is actually great as there is a lot of stuff in here and it looks like it is the product of lots of lessons learned so I did take some time to review this Reaching out to the projectThe contributors page also 
mentions about getting support from the existing Apache Airflow community I am already part of the Apache Airflow slack channel if you have not joined this then please do it is a really great community of like minded builders helping each other out and so reached out to Jarek Potiuk one of the maintainers He was quick to respond and encourage me on this adventure ensuring that if I run into any problems he would be happy to help Internally I reached out within our own Apache Airflow community and had another builder Mark Richman reach out to join in this adventure We were provided some useful guidance from existing internal contributors to the project and this is even before we have made any code changes Creating our issueThe first step as with many open source projects is to create an issue within the GitHub repository Here is the issue I created ECS Operator does not support launch type EXTERNAL Some observations from this When writing the issue I tried to put as much context about the problem as possible I spend a lot of time reading other issues and I find it helpful when people provide the problem as well as the context The issue is kind of an audit log of all the conversations and discussions around the problem you have raised Often the CONTRIBUTING guides will provide additional info on what they expect you to include here Once the issue was created I reached out to Jarek who assigned the issue to Mark and myself We are now owners of this issue The project provides an overview of the contribution workflow which makes it very easy to understand how you are expected to work Having created the issue the next step is to fork the project and build locally I did skip ahead a little and joined the devlist as I already was part of the Slack channel To do this incase it is not obvious it was not to me you just send a blank email to that address and then when they send you a reply you just need to confirm with a final response Once done you will now be part of the Apache Airflow dev mailing list You can find out more here Forking the projectForkingForking Apache Airflow is simple enough as GitHub makes this as simple as a couple of clicks I now have my own Apache Airflow fork for for this work The only thing I needed to make sure was that it was the right branch latest main and not any of the other branches I have made that mistake before Getting my local development environment readyThe next step is to setup my local developer environment so I can actually build the project from source The documentation provides some guidance here and it looks like there are two approaches The doc provides some nice guidance here with some of the differences to be aware of This is where I found out having the discussion with existing contributors to the project was super helpful as they provided more opinionated guidance to what is in the docs I am going to go with setting up the Breeze environment This also got me thinking more practically where am I actually going to do the development I have my trusty Macbook which certainly has enough grunt to do the job But do I really want to do that there What other options do I have I try and do all my work in my AWS Cloud environment as I find it a really nice way to have everything I need centred around a specific activity I could also potentially spin up a virtual desktop I have an Amazon Linux virtual desktop which provides a much more flexible environment and would allow me to deploy a lot of the standard tools I use on my Macbook It is probably worth thinking 
before you proceed about what will work best for your workflow In the end I decided that I would experiment with the AWS Cloud environment so am going to try that out I have set myself up a custom instance type as I notice that in the doc it says that the Breeze setup uses uses GBs of disk and many CPUs I am going to start off with m xlarge which will give me GB ram and vCPUs I also noticed that a lot of the documentation and the docker files are using the debian package format so will go with Ubuntu for the OS rather than the default of Amazon Linux The only thing I need to do is increase the disk space for my Cloud environment which is easy enough with this script which will increase the volume size to GB pip install user upgrade botoexport instance id curl s python c import botoimport osfrom botocore exceptions import ClientError ec boto client ec volume info ec describe volumes Filters Name attachment instance id Values os getenv instance id volume id volume info Volumes VolumeId try resize ec modify volume VolumeId volume id Size print resize except ClientError as e if e response Error Code InvalidParameterValue print ERROR MESSAGE format e if eq then sudo rebootfiThis only took a few minutes to provision and now I have a blank canvas on which to proceed I took a look at the pre req s and needed to install jq docker compose coreutils which for some reason didn t create the sym link for gstat and I was ready to go To install docker compose I originally used apt get but this installed an older version so ran the following command sudo curl L uname s uname m o usr local bin docker composeand then ensured that it was earlier in the path so this newer version was picked upThe final thing I did to complete my AWS Cloud setup was to update the AWS cli to v using the followingcurl o awscliv zip unzip awscliv zipsudo aws installAnd then update the credentials that AWS Cloud uses and then using my standard aws credentials either by copying your aws config files or running aws configure Virtual DesktopWhilst I was troubleshooting an issue with running the tests above I tried using Amazon Workspaces to spin up a virtual desktop Amazon Linux and tried the same process as above It worked pretty much the same except that it uses yum vs apt Also a lot of the stuff is already installed so the only thing I needed to do wasInstall the aws cli v same as above Install docker compse same as above although I needed to set the executable bit chmod x I needed to start docker by changing to root and then using service docker start Still as root I could then start Breeze as normalRunning Breeze for the first timeThe next step was to actually clone the repo my forked one into my Cloud IDE and then run the breeze command I got an error as followserror failed to solve failed to solve with frontend dockerfile v failed to create LLB definition rpc error code Unknown desc error getting credentials err exit status out GDBus Error org freedesktop DBus Error ServiceUnknown The name org freedesktop secrets was not provided by any service files It turns out thatI needed to install a new package and this then fixed the error sudo apt install gnome keyringWhen I run breeze again it all works and I am now sitting within the Breeze container sitting at the bash shell I exit and then add the autocomplete option which is recommended within the doc just running breeze setup autocomplete and looks like we are good to go There is a lot of details in the doc and I will admit that I was not sure where to begin I found this video 
from Jarek was much more helpful and recommend folk view it I actually had it running and tried to follow along once I had everything up and running Get local builds runningReviewing the quick start guide it advices to run the following commands to get Apache Airflow up and running within Breeze breeze python backend mysql breeze start airflowAfter a few minutes of downloading various images it launches a tmux terminal and I could see the Apache Airflow processes start As I was using AWS Cloud I could not access the preview URLs it was listening out for so I needed to modify the inbound security rule for my AWS Cloud rule so that my specific IP could access this instance on this port Once I did that using the public IP of the AWS Cloud instance I could now access this Apache Airflow instance and login using the default admin admin user The quick start also suggests setting up MySQL Workbench and connecting to the local MySQL database As I am using JetBrains DataGrip I open another port on my AWS Cloud instance create a new connection profile for this instance connecting to port and validate I can access this too So far looking good From the active terminal I can stop this by running which I can validate has killed all the processes as I can no longer access it via the browser stop airflowAnd I can exit from Breeze by entering exit from the container shell and then stop Breeze by running breeze stop TestsYou run tests from within the Breeze environment Having gone through the video above I wanted to run some just to sanity check the setup Aside from some wonky paths running the following worked fine pytest tests core lt hidden output gt passed skipped warnings in s You can also run the tests from the AWS Cloud terminal It took me a while trying to make sense of this but by running breeze tests tests coreI actually went through the Testing doc which provides a lot of good instructions on the various scenarios and ran lots of these to sanity check my setup Given that the ECS Operator is part of the providers package and there is some specific guidance around the testing Apache Airflow providers packages We know that the ECS Operator is in the Amazon provider package so we run the following from within Breeze pytest tests providers amazonor from the host we can run breeze tests tests providers amazonIf you want to run all the provider tests you can run pytest tests providers TroubleshootingWhile running these the tests did not complete and I got an error as follows TestAwsSHookNoMock test check for bucket raises error with invalid conn id self lt tests providers amazon aws hooks test s TestAwsSHookNoMock object at xfeaace gt monkeypatch lt pytest monkeypatch MonkeyPatch object at xfeadad gt def test check for bucket raises error with invalid conn id self monkeypatch monkeypatch delenv AWS PROFILE raising False monkeypatch delenv AWS ACCESS KEY ID raising False monkeypatch delenv AWS SECRET ACCESS KEY raising False hook SHook aws conn id does not exist with pytest raises NoCredentialsError hook check for bucket test non existing bucket Failed DID NOT RAISE lt class botocore exceptions NoCredentialsError gt tests providers amazon aws hooks test s py Failed Captured stderr call WARNI airflow providers amazon aws hooks s SHook Unable to use Airflow Connection for credentials ERROR airflow providers amazon aws hooks s SHook Not Found Captured log call WARNING airflow providers amazon aws hooks s SHook base aws py Unable to use Airflow Connection for credentials ERROR airflow providers amazon aws 
hooks s SHook s py Not FoundIf you see this then it is likely the issue is caused by an IAM role that is attached to your EC instance At least that is what it was in my case I could run these tests locally on my Macbook on my Virtual Desktop without issue but it was always this test that failed When you review the test code it kind of makes sense as the test case is actually removing AWS credentials so expects to fail The attached IAM role at the EC instance is kind of invisible to the test and so whilst the test case expects to fail with no credentials the IAM role helpfully provides credentials therefore failing the test Lost about a day with this issue so I hope this help some who sees a similar problem It will take several minutes to complete but then you should get something like passed skipped xfailed warnings in s Summary and next stepsNow that we have our developer environment setup we have the Apache Airflow source code forked and integrated with our developer tooling made sure that everything runs and that the tests all complete successfully we are good to go In the next blog post I will take the next step of trying to explore the packages around the ECS Operator to try and understand how it works I will look at the tests for this operator a look at the end to end flow for changes which will set up us for being able to start working on a fix for our problem PostscriptAs I was writing this I stumbled upon fellow Developer Advocate Damon Cortesi who also blogged about his journey contributing to Apache Airflow This is a great read and I wish I had know about this before starting Check out the post Building and Testing a new Apache Airflow Plugin 2022-02-10 15:28:00
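The DAG source quoted in the entry above lost its formatting in this digest, so here is a minimal reconstruction of the shape of the author's first test DAG rather than the exact file. It uses the provider import path shown in the post (airflow.providers.amazon.aws.operators.ecs.ECSOperator) and placeholder cluster, task definition, container, command and log names; as the post explains, switching launch_type to "EXTERNAL" was rejected by the operator at the time, which is the gap the author set out to fix.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.amazon.aws.operators.ecs import ECSOperator

default_args = {
    "owner": "ubuntu",
    "start_date": datetime(2022, 2, 1),
    "retry_delay": timedelta(seconds=60),
}

with DAG(
    "airflow_dag_test",
    catchup=False,
    default_args=default_args,
    schedule_interval=None,
) as dag:
    query_db = ECSOperator(
        task_id="airflow-hybrid-ecs-task",
        cluster="test-hybrid",                       # placeholder ECS cluster
        task_definition="airflow-hybrid-ecs-task",   # placeholder task definition
        # "EC2" and "FARGATE" were supported; "EXTERNAL" (ECS Anywhere) failed here.
        launch_type="EC2",
        overrides={
            "containerOverrides": [
                {
                    "name": "etl-container",  # placeholder container name
                    "command": ["my-bucket", "temp.csv", "select * from customers"],
                }
            ]
        },
        awslogs_group="/ecs/hybrid-airflow",         # placeholder log group
        awslogs_stream_prefix="ecs",
    )
```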
海外TECH DEV Community Configuration and Storage in Kubernetes https://dev.to/okteto/configuration-and-storage-in-kubernetes-7ae Configuration and Storage in KubernetesHello fellow developers By this point in the series I hope you re now much less scared of Kubernetes So far we ve explored its architecture some essential Ks objects and even deployed an app on a Kubernetes cluster using Okteto Pat yourselves on the back for making it till here In this article we ll first look at two somewhat similar Kubernetes objects which help with providing configuration to our application running in the cluster ConfigMaps and Secrets Then we ll move on to look at how storage is handled in Kubernetes using Volumes and Persistent Volumes So strap on and let s get started ConfigMaps and SecretsTo inject configuration data into a container Kubernetes provides us with two objects ConfigMaps and Secrets It is highly recommended to separate configuration data from your application code so you ll see these two objects being used in almost all Kubernetes clusters If you see the YAML for a ConfigMap or a Secret you ll notice that both of them are almost similar They both have a data key under which the configuration data is provided in key value pairs For example data key value key valueThe difference is in the fact that secrets are meant for holding sensitive data When writing the YAML for a secret we wouldn t specify value and value directly like we would for ConfigMaps Instead we would specify the base encoded versions of these values Pods can refer to a particular ConfigMap and or Secret and specify a key and then they would have an environment variable in their container with the corresponding value This would enable you to refer to these environment variables in your application code For the movies app which we deployed our api pod needed the credentials for the MongoDB database in order to connect to it We provided these using a secret If you see the YAML for the api deployment you ll see that we re getting an environment variable called MONGODB PASSWORD for our container from a secret called mongodb If you want to see how the YAML for this secret looks head over to the terminal and run kubectl get secret mongodb o yamlThe o yaml flag gets the YAML for a particular Kubernetes object You ll see that under the data key for the returned object we have the mongodb password key which we were referring to in our api deployment To see the actual value for this key you ll have to decode the base encoded value shown Now that you have an idea of how configuration data is handled in Kubernetes clusters let s move on and take a look at how data is shared between Ks objects using Volumes and Persistent Volumes Storage in KubernetesIf you recall in the second article I mentioned that Pods are meant to be ephemeral This means that any data generated by containers running in the Pod also gets destroyed when the pod is destroyed In Kubernetes Volumes and Persistent Volumes help us solve this problem of data loss Apart from this they also solve another problem sharing of data between different containers VolumesThere are a lot of volume types offered by Kubernetes But thankfully as developers we mostly never have to worry about all this stuff In this section we ll just cover a common volume type which you might run into emptyDir The emptyDir type has two important use cases The first is that it allows us to share data between two containers running in the same pod The second is that if our container ever crashes it enables us 
to still retain all the data created previously Do note that this volume only exists as long as the Pod is running So if your pod is destroyed for any reason you WILL lose all the data Let s look at how volumes are configured for Pods apiVersion vkind Podmetadata name my podspec volumes name cache volume emptyDir containers image nginx name my container volumeMounts mountPath cache name cache volumeUnder the Pod spec we first define our volumes in a list in the above example we have one volume called cache volume which is of type emptyDir Then while specifying our containers we refer to this volume under volumeMounts mountPath is where in the filesystem of the container the data from the volume should be loaded into Persistent VolumesWe just learned that emptyDir volumes won t save our data if our Pod goes down So you must be wondering how does one store data which persists regardless of any changes to the Pod This is where Persistent Volumes come to save the day A Persistent Volume PV is a cluster level storage object What this means is that just like nodes it too is a resource present in the cluster It is an administrator s job to provision this so don t worry too much about how it s created However what we as developers should be familiar with is how to use this provisioned storage To use a Persistent Volume we create a Persistent Volume Claim PVC object and then refer to this claim under the volumes key of the Pod YAML like we saw above A PVC is nothing but a request for storage This is how a basic PVC object would look like apiVersion vkind PersistentVolumeClaimmetadata name pv claimspec accessModes ReadWriteOnce resources requests storage GiOnce you create this PVC the Kubernetes control plane will bind this claim to a suitable persistent volume for you The above YAML should be pretty simple to understand we re requesting Gigabyte of storage with the access mode of ReadWriteOnce What this means is that the volume can be mounted as a read write storage but only by a single node After you create the PVC all that you need to do is refer to it in your Pod s YAML as we did for emptyDir above spec volumes name pv storage persistentVolumeClaim claimName pv claimAnd voilà You now have a persistent storage solution for your application Okteto gives you the ability to do a lot more with volumes including creating a data clone of your application s database and using it with your development environment You can read up more on how to do that here This concludes our discussion on configuration and storage in Kubernetes We started by taking a look at ConfigMaps and Secrets and saw how they help provide us configuration data to our application Then we looked at how we can leverage Volumes to safeguard our application data in case our container restarts Finally we looked at Persistent Volumes which provide us a way to persist data in a separate resource present in the cluster thus ensuring that even Pod deletion doesn t lead to a loss of data All of this does look intimidating at first but remember that you don t have to fight these battles alone as a developer If you re using a managed Kubernetes environment like Okteto most of this is already taken care of for you If not even then you should be receiving support from the infra team But like I said earlier even as a developer it is good to have an idea of things so you re not totally lost In the final article of this series we ll be taking a look at how networking works in Kubernetes so make sure to keep an eye out for that one DThe Kubernetes for 
Developers blog series has now concluded and you can find all the posts in this series here 2022-02-10 15:11:09
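The article above gives the YAML forms; as a complementary sketch, the same kind of ConfigMap and Secret can be created programmatically with the official kubernetes Python client. The object names, keys and values below are placeholders (only mongodb-password echoes the article), and the snippet assumes a kubeconfig with access to a cluster.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at your cluster
v1 = client.CoreV1Api()

# ConfigMap: plain key/value configuration, like the article's data: block.
config_map = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="app-config"),   # placeholder name
    data={"key1": "value1", "key2": "value2"},
)
v1.create_namespaced_config_map(namespace="default", body=config_map)

# Secret: same shape, for sensitive values. string_data lets the API server do
# the base64 encoding that the article describes doing by hand in Secret YAML.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="mongodb"),      # placeholder name
    string_data={"mongodb-password": "s3cr3t"},        # placeholder value
)
v1.create_namespaced_secret(namespace="default", body=secret)
```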
海外TECH DEV Community JHipster Community Survey Results https://dev.to/jhipster/jhipster-community-survey-results-1gp9 JHipster Community Survey ResultsSeveral weeks ago we launched the JHipster Community Survey The goal was to get feedback from the community about the most useful features and components things that are missing and where we should focus our attention to make the project even better for everyone We got over responses thank you all for taking the time to contribute and share the survey with your community Now let s take a look at the results How do you use JHipster The first section of questions was dedicated to understanding the most common scenarios in which JHipster is used and how exactly so we can focus our efforts accordingly Which client side frameworks do you use with JHipster The most commonly used framework in the JHipster community is Angular with of respondents making it their choice React is in second place with votes and the rest of the votes are distributed between Vue js Svelte and using none What kind of applications do you create with JHipster It was very interesting to understand what kind of applications developers are building with JHipster SPA Monolith is a clear winner with Microservices being a close second Gateways and Server only projects were equally popular Outside Java Spring Boot do you use other languages frameworks supported by JHipster Many developers use JHipster for Java and Spring Boot projects but do they use it for other platforms and frameworks Yes Node js Kotlin and Flutter are quite popular as well as such frameworks as Quarkus Ionic React Native and Micronaut What is one thing you love about JHipster This was an open question where everyone could share what they like about JHipster in any form Some of the most popular answers were the following Easy bootstrapping of web applicationsUse of best practices and standardsEliminating routine work and getting to production fastEnabling unit and integration testsUsing latest technologies and open source stackFlexibility and customizations out of the boxContinuous support and community…and many other things Thanks everyone for this great feedback What can be improved We also asked to identify one thing people hate about JHipster so we can improve it Many users said there is no such thing Others pointed out that improving some of the documentation the upgrade process entities update and adding more tutorials in particular for various cloud platforms would be great How would you rate the quality of generated code of users rated the quality of the code generated by JHipster as out of or higher JHipster s RoadmapSome of the most interesting and at the same time challenging questions to answer for each project are about the roadmap Which features should we prioritize What is missing Are our current plans and assumptions right or do our users feel differently To get night into that we asked our users several questions about the roadmap What features would you like to see in JHipster All of the options that we put up for voting micro frontends GraphQL support and GraalVM Native Image support are highly anticipated by the community Some of the features suggested by the community were support for TypeScript DynamoDB Flyway Redis and Vert x extended NeoJ support and Python support along with introducing app templates and entity based application dashboards and others If you could improve the getting started experience for JHipster what would you change A few users suggested having a minimalistic version 
of Jhipster which is already addressed with the recent announcement of JHipster Lite ーgo and give it a try if you haven t yet Other suggestions were to have more project samples a simple JDL sample like “Book and “Author outlining best practices for upgrading creating sample projects with various integrations and adding more short explainer videos and step by step tutorials for beginners JHipster User ProfileWe also asked a few questions to better understand who our users are and in what context they are using JHipster First a few short facts Most of Hipster users define their programming level experience as “Experienced or “Expert Only of them are beginners Most of them also learned about JHipster from a colleague or via GitHub For the majority of users JHipster is an integral part of their workflow of respondents companies approve using JHipster And a few more details about developers using JHipster Which programming languages do you regularly use Another question we asked was about programming languages Almost all our respondents use Java which is not surprising since a lot of JHipster s backend options are Java based JavaScript is also popular probably primarily for the front end part Python and Kotlin are next which also corresponds to feature request data we ve seen before of the users also chose C and Golang Have you ever made contributions to the project JHipster is lucky to have a very active and engaged community Every third community member helped the project by providing feedback via GitHub and every fourth created content about it such as blog posts event talks etc Other contributions such as sending code PRs contributing to docs and helping other developers are extremely valuable and improve the project for everyone ConclusionThanks again to everyone for taking the time to provide this feedback We will go through the suggestions with the core team and will use the results to adjust the project plans Some of the most requested features are already on our roadmap with ongoing work happening and bounties on them ーfeel to contribute or share your thoughts in corresponding GitHub tickets Implement a micro frontends prototype with JHipster GraphQL support Add native image support with GraalVM and Spring NativeIf you didn t have a chance to participate in the survey or have other suggestions your feedback is always welcome on GitHub and Twitter 2022-02-10 15:04:27
Apple AppleInsider - Frontpage News Best deals Feb. 10: discounted refurbished Apple computers, $50 off iPad mini 6, $40 off 8TB internal HDD, more! https://appleinsider.com/articles/22/02/10/best-deals-feb-10-discounted-refurbished-apple-computers-50-off-ipad-mini-6-40-off-8tb-internal-hdd-more?utm_medium=rss Best deals Feb discounted refurbished Apple computers off iPad mini off TB internal HDD more In addition to refurbished Apple computers like the inch MacBook Thursday s best deals include off AirPods Pro off first generation Apple Pencil and off select Amazon Basics microphones Best deals February As we do every day we ve collected some of the best deals we could find on Apple products tech accessories and other items for the AppleInsider audience If an item is out of stock it may still be able to be ordered for delivery at a later date Read more 2022-02-10 15:34:18
Apple AppleInsider - Frontpage News Factory contamination ruins 6.5 million terabytes of flash storage, prices may rise https://appleinsider.com/articles/22/02/10/factory-contamination-ruins-65-million-terabytes-of-flash-storage-prices-may-rise?utm_medium=rss Factory contamination ruins million terabytes of flash storage prices may riseWestern Digital and Kioxia Corp reduced production at two plants in Japan after contaminated materials were discovered This could lead to further price inflation and delays in consumer electronics Western Digital and Kioxia flash memory production impacted by contaminated materialsWestern Digital and Kioxia are some of the largest producers of flash memory in the industry Contaminated materials in plants at Yokkaichi and Kitakami have led to limited production of flash memory which is expected to impact the rest of the industry Read more 2022-02-10 15:22:37
Apple AppleInsider - Frontpage News Samsung debuts new Galaxy S22 lineup, trio of Galaxy Tab S8 tablets https://appleinsider.com/articles/22/02/09/samsung-debuts-new-galaxy-s22-lineup-trio-of-galaxy-tab-s8-tablets?utm_medium=rss Samsung debuts new Galaxy S lineup trio of Galaxy Tab S tabletsSamsung on Wednesday announced its new Galaxy S lineup of smartphones as well as a new Galaxy Tab S series of tablets that features an Ultra model for the first time Galaxy S Lineup Credit SamsungThe Galaxy S and Galaxy S Plus devices don t feature many significant updates They pack an improved primary sensor and better optical zoom as well as a new Snapdragon Gen chipset in the U S and a Exynos SoC in other countries Read more 2022-02-10 15:24:41
海外TECH Engadget The Nintendo Switch is $20 off at Woot for Prime members https://www.engadget.com/nintendo-switch-woot-amazon-prime-good-deal-153745517.html?src=rss The Nintendo Switch is off at Woot for Prime membersWednesday s Nintendo Direct was packed full of eye catching games that are coming to Nintendo Switch If a new version of Wii Sports or the option to play Portal or No Man s Sky on the go intrigues you but you haven t snagged the console yet now might be a good time to pick one up Prime members can get off a Nintendo Switch at Woot That lowers the price to Buy Nintendo Switch at Woot This is a return of a deal that we saw in late January You ll be able to save on a version of the console with a better battery that Nintendo released in ーyou won t be able to get a discount on a Switch Lite or the OLED model this time around The Switch is a great console that we gave a score of to in our most recent review Nintendo has rolled out Bluetooth headphones support since then It s an excellent way to play games on both your TV and pretty much anywhere else The library of titles in the eShop is stellar ranging from blockbustergames and killerindies to Nintendo and Sega classics that are included in Switch Online plans Of course the Switch is the only official way to play Nintendo s latestfirst party console games too You ll need to sign in to Woot with your Prime account to see the deal which is limited to one unit per person It s worth noting that Woot s return policy is different from parent company Amazon s The offer runs until February th nbsp Follow EngadgetDeals on Twitter for the latest tech deals and buying advice 2022-02-10 15:37:45
海外TECH Engadget Most Android 12 phones will soon receive the Material You makeover https://www.engadget.com/google-material-you-android-12-rollout-150036940.html?src=rss Most Android phones will soon receive the Material You makeoverDynamic colors and a wave of new Google centric design changes are on their way to most Android phones First unveiled at I O last year Google s Material You appeared to be Android s most dramatic redesign in years and offered users a wave of new customization and accessibility features Users could tweak their phone s color palette adjust the placement of widgets and make other adjustments for aesthetic and accessibility purposes But Google s Material You was only available for Pixel phones and a few Samsung devices Soon though Material You will be available for a much wider swath of new Android phones including those by Samsung OnePlus Oppo Vivo realme Xiaomi and Tecno The exact dates haven t been announced just yet and will depend on the manufacturer but the release windows are likely to occur within the next few months The new Galaxy S and S Ultra both unveiled by Samsung this week will include Material You While cosmetic enhancements are nice Material You also makes it far easier for Android phones to be integrated into Google s ecosystem Color palette selections and other visual changes to the phone s themselves appear across Google s apps including Gmail Meet and Drive Much of Google s product library has already gotten a Material You makeover In Engadget s review of Material You and Android we noted that the redesign was easy on the eyes and helped declutter Android s interface It also includes a host of quality of life changes including a Privacy Dashboard That feature breaks down which apps have been granted specific permissions as well as what kind of data they re able to access 2022-02-10 15:00:36
Cisco Cisco Blog Hybrid work: what’s possible when 3 key domains come together https://blogs.cisco.com/ciscoit/hybrid-work-whats-possible-when-3-key-domains-come-together Hybrid work what s possible when key domains come togetherCisco s hybrid work evolution has generated tangible benefits in several key areas These results prove that when three domains HR IT and Facilities come together they enable the hybrid working model to reach its full potential 2022-02-10 15:41:18
海外TECH CodeProject Latest Articles Creating a Teams Conversation Bot with SSO https://www.codeproject.com/Articles/5324026/Creating-a-Teams-Conversation-Bot-with-SSO github 2022-02-10 15:54:00
海外科学 NYT > Science Neanderthals and Humans Swapped Places in This French Cave https://www.nytimes.com/2022/02/09/science/neanderthals-cave-france.html cavea 2022-02-10 15:11:29
Finance RSS FILE - Japan Securities Dealers Association Stock lending and borrowing transactions (weekly) https://www.jsda.or.jp/shiryoshitsu/toukei/kabu-taiw/index.html lending and borrowing 2022-02-10 15:30:00
Finance RSS FILE - Japan Securities Dealers Association We are recruiting finance and securities instructors! https://www.jsda.or.jp/jikan/instructor/index.html Detail Nothing 2022-02-10 15:30:00
Finance Financial Services Agency website Published the main points raised by the FSA at the January 2022 exchange-of-views meetings with industry associations. https://www.fsa.go.jp/common/ronten/index.html exchange of views 2022-02-10 17:00:00
Finance Financial Services Agency website Published the summary of proceedings of the 144th Compulsory Automobile Liability Insurance Council. https://www.fsa.go.jp/singi/singi_zidousya/gijiyosi/20220124.html compulsory automobile liability insurance 2022-02-10 17:00:00
Finance Financial Services Agency website The Financial System Council's Disclosure Working Group (6th meeting) will be held. https://www.fsa.go.jp/news/r3/singi/20220218.html Financial System Council 2022-02-10 17:00:00
Finance Financial Services Agency website Published the final report of the commissioned study on impact indicators for social bonds (indicators of social effects). https://www.fsa.go.jp/common/about/research/20211221/20211221-1.html study 2022-02-10 17:00:00
Finance Financial Services Agency website Published the minutes of the Financial System Council's Disclosure Working Group (5th meeting). https://www.fsa.go.jp/singi/singi_kinyu/disclose_wg/gijiroku/20220119.html Financial System Council 2022-02-10 17:00:00
Finance Financial Services Agency website Posted the press release "Group of Central Bank Governors and Heads of Supervision unanimously reaffirms its agreement to implement the Basel III framework and reappoints Pablo Hernández de Cos as Chair of the Basel Committee on Banking Supervision". https://www.fsa.go.jp/inter/bis/20220210/20220210.html central bank 2022-02-10 17:00:00
Finance News - 保険市場TIMES Sompo Japan supports credit card cyberattack countermeasures https://www.hokende.com/news/blog/entry/2022/02/11/010000 2022-02-11 01:00:00
ニュース BBC News - Home Ukraine-Russia crisis: Stakes are very high, Boris Johnson says https://www.bbc.co.uk/news/uk-60326142?at_medium=RSS&at_campaign=KARANGA diplomacy 2022-02-10 15:52:39
ニュース BBC News - Home Johnson broke law over No 10 parties, says ex-PM Sir John Major https://www.bbc.co.uk/news/uk-politics-60331189?at_medium=RSS&at_campaign=KARANGA johnson 2022-02-10 15:33:15
ニュース BBC News - Home Prince Charles tests positive for Covid, Clarence House says https://www.bbc.co.uk/news/uk-60334842?at_medium=RSS&at_campaign=KARANGA clarence 2022-02-10 15:15:37
ニュース BBC News - Home £150 cost of living payments for Scottish households https://www.bbc.co.uk/news/uk-scotland-scotland-politics-60336046?at_medium=RSS&at_campaign=KARANGA extra 2022-02-10 15:42:53
ニュース BBC News - Home Winter Olympics: GB women bounce back to beat Sweden https://www.bbc.co.uk/sport/av/winter-olympics/60336470?at_medium=RSS&at_campaign=KARANGA sweden 2022-02-10 15:13:53
ニュース BBC News - Home Winter Olympics schedule: Day-by-day guide to key events and British medal hopes https://www.bbc.co.uk/sport/winter-olympics/60111409?at_medium=RSS&at_campaign=KARANGA beijing 2022-02-10 15:18:44
Hokkaido Hokkaido Shimbun Germany wins third straight title, luge, Feb. 10 https://www.hokkaido-np.co.jp/article/644532/ consecutive titles 2022-02-11 00:19:00
Hokkaido Hokkaido Shimbun Oshikiri hangs on to finish 8th in the women's 5,000m https://www.hokkaido-np.co.jp/article/644524/ consecutive 2022-02-11 00:16:41
IT Weekly ASCII Next-generation smartwatch "TAG Heuer Connected Calibre E4" announced https://weekly.ascii.jp/elem/000/004/083/4083220/ Wear OS by Google 2022-02-11 00:45:00
