python |
New posts tagged Python - Qiita |
Hard parts of building a bot with heroku + tweepy (a memo) |
https://qiita.com/st17086ts/items/9abe92b93a8632cde79b
|
You list opencv-python (module name and version) in requirements.txt, but that alone is not enough to use OpenCV on Heroku: besides requirements.txt, you also have to create an Aptfile (no file extension). In my case, writing contents like the following into the Aptfile and keeping it in the project folder made OpenCV work. |
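The summary does not preserve the article's actual Aptfile contents. As a hypothetical sketch, the entries commonly added for opencv-python on Heroku are the shared system libraries it links against:

```
# Hypothetical Aptfile: apt packages opencv-python commonly needs on Heroku
libsm6
libxrender1
libxext6
libfontconfig1
```

Heroku's apt buildpack reads this file and installs each listed package before the Python dependencies are installed. |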
2022-04-04 23:59:55 |
python |
New posts tagged Python - Qiita |
pipenv: "Error: the command get_cpu.py could not be found ..." |
https://qiita.com/escapade/items/418e12f3a1cbc21c3383
|
Trying to run a script through pipenv from the shell with pipenv run get_cpu.py produced this error: "Error: the command get_cpu.py could not be found within PATH or Pipfile's [scripts]". The fix: I had simply forgotten to tell it to run the file with python. The correct invocation is below. |
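A minimal before-and-after, assuming the script lives at the project root:

```
# fails: pipenv looks for an executable or [scripts] entry named get_cpu.py
pipenv run get_cpu.py

# works: hand the script to the python interpreter inside the virtualenv
pipenv run python get_cpu.py
``` |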
2022-04-04 23:56:20 |
python |
New posts tagged Python - Qiita |
( ^ω^) seems to have been forced into rush-job scraping |
https://qiita.com/oneseekyes/items/965ff54b2d9242508557
|
( ^ω^) seems to have been forced into rush-job scraping. This is a continuation of an earlier post. Note: apart from ( ^ω^) being a beginner, nothing written here relates to any real company, person, or organization. |
2022-04-04 23:32:41 |
python |
New posts tagged Python - Qiita |
Trying out a Kaggle clinical-test dataset (5): handling imbalanced data |
https://qiita.com/tuk19/items/ade9a23ce80988aa6a30
|
This is part 5 of the series. The other parts: (1) comparing model performance, (2) selecting features and visualizing their importance, (3) ensemble learning, (4) handling outliers. Dataset used: Patient Treatment Classification (Electronic Health Record Dataset), blood test results collected at a hospital in Indonesia, used to build a model that predicts whether a patient needs treatment. Models used this time: XGBoost, random forest, logistic regression, decision tree, and k-nearest neighbors. Data check: the data are blood test results related to anemia. |
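The summary does not say which rebalancing technique the article settles on; a common baseline, sketched here with scikit-learn (an assumption: the article may instead use over- or undersampling), is to weight classes inversely to their frequency:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy imbalanced dataset: ~10% positives, standing in for "needs treatment"
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights samples by inverse class frequency,
# so the minority class is not drowned out during training
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
``` |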
2022-04-04 23:05:06 |
python |
New posts tagged Python - Qiita |
I built a Milter that rejects PPAP |
https://qiita.com/hirachan/items/898e3554c007ae1026ef
|
Installing NoPPAP Milter: running pip install noppapmilter installs a command called noppapmilter. |
2022-04-04 23:04:51 |
js |
New posts tagged JavaScript - Qiita |
Building a destination sign simulator for OBS |
https://qiita.com/CIB-MC/items/116b6dddbfd7ebbadc84
|
The next panel below it is stretched downward according to the degree of progress, by transforming the canvas. Roughly: while progress is below the midpoint, the top context draws the current panel with a progress-based transform and the bottom context draws the next panel; past that, the top draws the current panel as-is and the bottom draws the next panel with the progress-based transform. Stopping the flip rotation: when stopping, you have it specify in posdest the array index of the item you want displayed. |
2022-04-04 23:51:08 |
js |
New posts tagged JavaScript - Qiita |
Performing a weighted random draw with a streaming algorithm |
https://qiita.com/Arihi/items/49a731f21e7b1ddab217
|
Performing a weighted random draw with a streaming algorithm. Consider the kind of logic common for item drops in games: item A appears at one rate, B at another, C at another, and we want to draw one of A, B, C according to those probabilities. |
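The summary does not preserve the article's implementation (the article itself is JavaScript); a standard single-pass streaming approach, sketched in Python, is to keep a running weight total and replace the current pick with item i with probability w_i divided by the running total:

```python
import random

def weighted_pick(stream):
    """Single-pass weighted selection: each (item, weight) pair is seen once,
    and the total weight does not need to be known up front."""
    total = 0.0
    pick = None
    for item, weight in stream:
        total += weight
        # Replace the current pick with probability weight / running_total;
        # by induction, each item ends up chosen with probability weight / total.
        if random.random() < weight / total:
            pick = item
    return pick

# Example draw: A 50%, B 30%, C 20% (hypothetical weights)
print(weighted_pick([("A", 50), ("B", 30), ("C", 20)]))
``` |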
2022-04-04 23:41:57 |
js |
New posts tagged JavaScript - Qiita |
Where to write JavaScript |
https://qiita.com/wwww_xxx12345/items/6a4769d1500e502ef481
|
|
2022-04-04 23:22:49 |
Docker |
New posts tagged docker - Qiita |
Investigating Alpine Linux's default users and permissions using Docker |
https://qiita.com/isosa_yama/items/37392ddd94a3f8580b96
|
Command: chown xfs:xfs /tmp/entrypoint.sh && sh /tmp/entrypoint.sh. In the end, this turned out to be a problem with file permissions rather than with the initial-user setting, but along the way I found that xfs was among the initial users, learned what the other default users are, and was ultimately able to run the script as the xfs user. |
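A quick way to reproduce the kind of inspection the article describes, listing the users an off-the-shelf Alpine image ships with (a sketch; the article's exact commands are not preserved in this summary):

```
# List the default users baked into the Alpine image
docker run --rm alpine cat /etc/passwd

# Confirm the xfs user is among them
docker run --rm alpine grep xfs /etc/passwd
``` |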
2022-04-04 23:39:47 |
Git |
New posts tagged Git - Qiita |
A magic spell for when you want to add a file that is "deleted by them" |
https://qiita.com/tanaka350/items/ab9ee41e8475ca0e6516
|
hoge_file > new_file hoge_file |
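Only a fragment of the article's command survives above; for context, the standard ways to resolve a "deleted by them" merge conflict, using the article's hoge_file placeholder name (a generic sketch, not necessarily the article's exact magic), are:

```
# after a merge, git status reports: deleted by them: hoge_file
git add hoge_file   # keep the file: stage our version and resolve
git rm hoge_file    # or accept the other branch's deletion instead
``` |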
2022-04-04 23:59:01 |
Tech blog |
Developers.IO |
Trying out MUI's Dashboard Template in a React app |
https://dev.classmethod.jp/articles/using-the-management-screen-template-of-mui/
|
free react templates |
2022-04-04 14:55:08 |
Overseas TECH |
MakeUseOf |
Raspberry Pi CEO Addresses Shortages; Recommends Raspberry Pi 400 and Pico |
https://www.makeuseof.com/eben-upton-raspberry-pi-shortages-400-pico/
|
makers |
2022-04-04 14:53:08 |
Overseas TECH |
MakeUseOf |
How to Add Fonts to Google Docs |
https://www.makeuseof.com/add-fonts-to-google-docs/
|
google |
2022-04-04 14:30:14 |
Overseas TECH |
MakeUseOf |
6 Ways to Fix the "Bad Sectors" Error on Windows |
https://www.makeuseof.com/windows-bad-sectors-error-fix/
|
6 Ways to Fix the "Bad Sectors" Error on Windows. When your hard drive encounters a bad sector, Windows will let you know about it. Don't panic just yet, though: here are some ways to fix it. |
2022-04-04 14:15:15 |
Overseas TECH |
MakeUseOf |
7 Things to Do Before Selling Your Old Android Phone |
https://www.makeuseof.com/what-to-do-when-selling-old-android-phone/
|
android |
2022-04-04 14:01:13 |
Overseas TECH |
DEV Community |
Load external data into OPA: The Good, The Bad, and The Ugly |
https://dev.to/permit_io/load-external-data-into-opa-the-good-the-bad-and-the-ugly-26lc
|
Load external data into OPA: The Good, The Bad, and The Ugly.

There are several ways to create a data fetching mechanism for OPA, and each of them has its pros and cons. To make sense of these different methods, I've decided to create this guide, which will help you figure out which data fetching method would be best for you, with full knowledge of each method's good, bad, and ugly aspects.

TL;DR: the methods we are going to review are: including data in JWT tokens; overloading input for OPA within the query; polling for data using Bundles; pushing data into OPA using the API; pulling data using OPA during policy evaluation; and OPAL (Open Policy Administration Layer). Before we dive into details, let's first cover some basics.

What is OPA: authorization is becoming increasingly complicated. Applications are getting bigger and require handling more users than ever before, and policies are becoming more complex and dependent on multiple factors, like a client's location, the time of the action, and user roles and relations to resources. This is where OPA (Open Policy Agent) comes in. OPA is a great open source tool that allows us to evaluate complicated policies. It's fast, it's part of the CNCF (which means it adheres to CNCF's guidelines and standards), and it is used for handling permissions in some of the largest companies in the world (e.g. Netflix, Pinterest, and others). You can check out an introduction to OPA here.

How OPA works with data: managing policies with OPA often requires relevant contextual data: information about the user, the resource they are trying to access, etc. Without this information, OPA will not be able to make the right decisions when it comes to deciding on policies. For example, a policy that states "only paying users can access this feature" requires OPA to have information on who my users are, and which of them is a paying user and which isn't. A policy that states "users in the application can only access their own photos or those of their children" requires OPA to know who the application's users are; which user is a parent, which user is a child, and which user relates to whom; and which photo belongs to each user. Having access to this contextual data is thus critical for OPA to be able to make its decisions. The bottom-line question is how we can bring this data into OPA, and which way is the most effective to do so.

The data fetching mechanism, basic requirements: before we dive into the different methods of fetching data for OPA, let's agree on a couple of basic guidelines for how this data fetching mechanism should work. It's necessary to be able to handle data about policies on a large scale. Because data can come from many sources, thus getting very complex very quickly, we want this mechanism to be as easily manageable as possible. The data fetching mechanism needs to be operational in real time; this is a crucial component that will allow us to avoid a "new enemy attack", a situation where a user with revoked permission can still access sensitive data because the permissions have not been updated in time between the different parts of the system. It should be easy to maintain, because the need for access control is here to stay and is likely to evolve in the future. Now that we've established some basic requirements, let's dive into the various data fetching mechanisms we can utilize to solve our issue in the most efficient way. Let's dive in.

Including data in JWT tokens: JSON Web Tokens (JWT) allow you to securely transmit signed JSON data between software systems, and are usually produced during the authentication process. JWTs can be sent to OPA as inputs, thus enabling OPA to make decisions about a policy query. For example, in a JWT with authorization data, the first part is the algorithm for the secure signing, and in the middle we can see the roles and related images for our authorization. The good: JWTs are an easy-to-use, well-known technology that you probably already utilize in your system as part of the authentication layer. The bad: JWTs have a size limit, and not everything can be encoded into a JWT. While it looks OK in the example presented above, if a user has more files the JWT length grows considerably, and that's considering a simple file name; with the full path it's even longer. Additionally, a JWT created during the authentication phase doesn't include all the necessary information required to make the policy decision, especially if you are using a vendor like Auth0 to authenticate. In addition, storing data in JWTs means we have to refresh the token (read as: login/logout) every time we want to update the data. The ugly: you might think it's a good idea to start with JWTs because you don't have a lot of data; as time goes by, the amount of data grows exponentially, and the situation easily spirals out of control, with an enormous amount of JWTs floating around in each request. Bottom line: JWTs are ideal for simple identity-related data, and in general it's best to think of the claims and data in the JWT as hints about identity given by the Identity Management and Authentication layers, rather than verbatim data for authorization.

Overload input for OPA within the query: another option is to attach input to every policy query sent to OPA, adding the relevant data to it. It will look something like this in Python pseudocode wrapping OPA:

def delete_image(user_id, image_id):
    policy_json_data = {}
    policy_json_data["user_roles"] = get_user_roles(user_id)    # returns a list of roles, like ["editor"]
    policy_json_data["user_images"] = get_user_images(user_id)  # returns a list of images, like ["img.png"]
    # sends a request that looks like this:
    #   localhost ... '{"roles": ["pro"], "related_images": ["image.png", "image.png"], "image_id": "image.png"}' -H "Content-Type: application/json"
    # and returns true/false
    permitted = check_opa_policy(policy_json_data, "delete_image", image_id)
    if not permitted:
        raise AuthorizationError

The good: using this method is simple, and it ensures that only the relevant data is cherry-picked for each query sent, thus avoiding loading and storing a lot of data in OPA. The bad: this method prevents us from following one of the most important best practices in building authorization, decoupling policy and code, as our code now has to take on the responsibility of tailoring the data for OPA. Having policy and code mixed together in one layer creates a situation where we struggle to upgrade, add capabilities, and monitor the code overall, as it is replicated between different microservices. Each change would require us to refactor large areas of code that only drift further from one another as these microservices develop. The ugly: having so much code repetition is an antithesis to the DRY principle, creating a multitude of complications and difficulties as our application evolves. Considering the example code above, for instance, very similar code will be written for delete_image, update_image, and get_image. Bottom line: in general it is best to leave this method for simple cases, or to augment more advanced cases with cherry-picking.
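The pseudocode above is partially garbled in this summary; a minimal runnable version of the "overload input" pattern, assuming an OPA server on localhost:8181 and a policy package named authz (both assumptions, not stated in the article), could look like:

```python
import requests

OPA_URL = "http://localhost:8181/v1/data/authz/allow"  # assumed policy path

def check_opa_policy(policy_input: dict) -> bool:
    # OPA's Data API expects the query input under an "input" key
    resp = requests.post(OPA_URL, json={"input": policy_input})
    resp.raise_for_status()
    # "result" is absent when the rule is undefined; treat that as a deny
    return resp.json().get("result", False)

def delete_image(user_id: str, image_id: str) -> None:
    policy_input = {
        "action": "delete_image",
        "image_id": image_id,
        "user_roles": ["editor"],     # stand-in for get_user_roles(user_id)
        "user_images": ["img.png"],   # stand-in for get_user_images(user_id)
    }
    if not check_opa_policy(policy_input):
        raise PermissionError("not permitted")
```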
Polling for data using Bundles: the bundle feature periodically checks and downloads policy bundles from a centralized server, which can include both data and policies. An example of a simple way to implement this solution would be running an Nginx container that serves the bundle files and configuring OPA to fetch data from it (using S3 buckets is also a common pattern). The configuration for OPA will be as follows:

services:
  nginx:
    url: ...
    credentials:
      bearer:
        token: "dGVzdGluZzpZXNaWn..."
        scheme: Basic
bundles:
  authz:
    service: nginx
    resource: bundle.tar.gz

The good: it allows you to load large amounts of data, much larger than the previous methods; it has a delta bundle feature that lets you sync only new data (but not policy); it lets you have one source of truth; and it is more readable than JWTs. The bad: using bundles doesn't cut it when we have data that changes rapidly, as it requires triggering a full policy update for every small change, making this a very inefficient process. The ugly: even with the new delta bundle feature, you still need to manage and create the bundles on your own, and it works with polling, which isn't real time. In addition, being dependent on a polling interval means you have to choose between rapid polling, which can result in high costs, and slow polling, which can lead to delays and a risk of inconsistency. The bottom line: for cases where updates to data mainly come as part of the CI/CD cycle, bundles are a great option. Bundles can also work well for static (or rather static) applications. For modern dynamic applications, this option might be too slow and inefficient on its own.

Pushing data into OPA using the API: you can also push policy data into OPA with an API request. This approach is similar in most aspects to the bundle API, except it allows you to optimize for update latency and network traffic. It will look something like this in Python pseudocode:

def send_user_update_to_opa(user):
    requests.put(f"{opa_url}/users", params={"users": user})

def callback_on_new_user():
    all_users = get_all_users()
    send_update_to_opa(all_users)

In this example we are updating OPA's user list on each new-user-creation callback. The good: this way you don't need to load the entire bundle at every update; you can also update just part of it, which is much more performant in terms of memory and network usage, as well as giving you more control over how you manage distributed data into OPA. The bad: applying this method to import new kinds of data from different data sources is going to require a continuous effort of writing enormous amounts of code. The ugly: this method requires continuous maintenance; you can't just set it up and forget about it. If left abandoned, this code will very quickly become obsolete. The bottom line: a great way to load data into OPA in a dynamic fashion, but it requires a lot of development and administration in all but very simple cases.

Pulling data using OPA during policy evaluation: OPA includes a function, http.send, that allows it to reach out to external HTTP servers during evaluation and request additional data. It will look something like this in Rego pseudocode:

default allow = false
allow = true {
    input.method == "GET"
    input.path == ["getSalary", user]
    managers := http.send({"method": "get", "url": managers_url}).managers
    contains(managers, input.user)
}

You can see the call to http.send, with the managers URL, that returns the list of the managers to help evaluate the policy. Similarly, you can embed more functions into OPA as a plugin, to fetch data from other sources as part of a query. The good: this is a solid option to use when you have a very large volume of data that is required to make permission decisions and you cannot load it all into OPA. The bad: using this method puts a strain on OPA, as it always comes with network latency that slows all of your policy evaluations. Additionally, this method is prone to network errors. The ugly: error handling with Rego isn't simple at all, and relying on this feature can lead to some frustrating results. While OPA and Rego can be used to evaluate policies very quickly, you may want to avoid adding more logic than you need. The bottom line: this is a great way to load data into OPA in a highly dynamic way without writing a lot of code. That being said, this solution is not applicable when the relevant data requires parsing or edge-case handling, which Rego lacks.

OPAL (Open Policy Administration Layer): OPAL is an open source project for administering authorization and access control for OPA. OPAL responds to policy and data changes, pushes live updates to OPA agents, and thus brings open policy up to the speed needed by live applications. To run OPAL with OPA, you can simply use the Docker example: send an update to OPAL on every change in your data, or connect your data source's webhook with OPAL and let OPAL stream the updates to OPA. The good: OPAL includes live updates and Git tracking (GitOps), and saves you the hassle of having to write all the code by yourself, as in the "pushing data with the API" option. The bad: OPAL is a fairly new library; it might take some time to learn and some work to integrate into your project. The ugly: first of all, OPAL is beautiful (but being one of the contributors to this open source project, I might be biased). That being said, the architecture can be a bit more complicated than a bundle server or JWTs, so you might need to take your time and make sure you understand it. The bottom line: OPAL is inspired by the way companies like Netflix work with OPA, but it requires some work to set up. Simple applications will do better with one of the other methods, but for full modern applications, OPAL is probably the more robust, reliable option.

Conclusion: as we have seen, there are various methods to build data fetching mechanisms, each of them having their own pros and cons. Some of these methods (including data in JWT tokens, and overloading input for OPA within the query) only prove useful in simple cases; some (polling for data using Bundles) lack effectiveness in dynamic applications. Pushing data with the API is a good solution for loading data into OPA in a dynamic fashion, while requiring a lot of development and administration, and pulling data using OPA during policy evaluation is not applicable when the relevant data requires parsing or edge-case handling. OPAL has the advantage of being a more robust, reliable solution, but it requires you to adopt new open-source-based technology. The most important thing to take from this review is understanding the complexities and challenges of building data fetching mechanisms correctly, and understanding that every method has its pros and cons. Still not sure which method is the right one for you and your architecture? Need someone to brainstorm with? Don't be shy: reach out to us on our Slack community. |
2022-04-04 14:48:28 |
Overseas TECH |
DEV Community |
Vauld Referral Code FREE100 Get Free Token |
https://dev.to/husainiaamer/vauld-referral-code-free100-get-free-token-2l4i
|
Vauld Referral Code FREE100: Get Free Token. Vauld Referral Code: get a free cash bonus. FREE100 is the most recent Vauld referral code, which you can use to earn referral fees by introducing friends to this marketplace. Crypto is a volatile asset growing in popularity nowadays, especially because it is decentralized. Vauld is one such Singapore-based application, newly launched in India, for investing in cryptocurrency. It provides several things, such as information, tools, and guidance, to help you become a successful investor. |
2022-04-04 14:45:37 |
Overseas TECH |
DEV Community |
Be Careful About Timezones In Backend and Frontend Development |
https://dev.to/aqeelzeid/be-careful-about-timezones-in-backend-and-frontend-development-47c6
|
Be Careful About Timezones in Backend and Frontend Development. Dear devs, today I ran into an issue regarding timezones. When I was working on the local host, the backend, frontend, browser, and database all ran on the same machine (my laptop) in a single timezone. But when I pushed to production, the dates in the backend and the dates in the frontend were hours apart. This is because the server and the frontend resided in different time zones. I checked the backend code and the database, and they were fine; the frontend code also didn't show any issues. When I checked my JavaScript code, it was rendering new Date(startTime).toLocaleTimeString() inside a heading element, which uses local time, and that added hours to the start time. So instead I used UTC time (the backend code was already using UTC time). So make sure you pay attention to timezone configurations in your database, backend, and frontend according to your use case and your execution environments. Here are some articles that helped me. Photo credit: Donald Wu on Unsplash. |
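A minimal illustration of the pitfall in the article's own terms (the variable name comes from the summary; the sample timestamp is assumed):

```javascript
const startTime = "2022-04-04T14:00:00Z"; // backend sends UTC

// Renders in the *viewer's* timezone: differs between regions and machines
new Date(startTime).toLocaleTimeString();

// Timezone-stable alternatives: keep UTC, or pin an explicit zone
new Date(startTime).toUTCString();
new Date(startTime).toLocaleTimeString("en-US", { timeZone: "UTC" });
``` |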
2022-04-04 14:33:18 |
Overseas TECH |
DEV Community |
I'm looking for a Developer Advocate Role |
https://dev.to/danielhe4rt/im-looking-for-a-developer-advocate-role-115o
|
I'm looking for a Developer Advocate Role. Hi everyone! My name is Daniel Reis, aka danielhe4rt. I'm Brazilian and I'm looking for a Developer Advocate / Evangelist role. I'm passionate about connecting people to the most diverse technologies, and I see a greater way to do it by raising inclusive communities. Some important information about me: I've been studying programming for years and have worked in web development as a dev (PHP / Laravel). I've been leading a technical community, He4rt, with many members, for years now. I have been doing live coding for years, and I got a Twitch Partnership with a steady average of viewers and followers. Four technical articles submitted to dev.to over the past year, with many views. Content produced for a YouTube audience, on a channel with thousands of subscribers currently. Joined the Microsoft MVP Program. I've participated in two talks at important events (PHP Community Summit, GitHub Presente), and I'm willing to get into Brazilian universities to talk more about the importance of communities. My goal is to make a diverse and inclusive environment for everyone who wants to become a developer. I found a passion like no other participating in communities and helping other people in the area. Being able to be the difference and inspire other people took me here. I want to do more, and I'm looking for a new home to allow me to continue this journey. He4rt Developers / LinkedIn / Portfolio / MVP Award Contribution Page / DevTo / Twitch Channel / YouTube Channel / Twitter (mostly in PT-BR) / Personal Developer Roadmap. |
2022-04-04 14:32:25 |
Overseas TECH |
DEV Community |
From First-Touch to Multi-Touch Attribution With RudderStack, Dbt, and SageMaker |
https://dev.to/rudderstack/from-first-touch-to-multi-touch-attribution-with-rudderstack-dbt-and-sagemaker-2pcg
|
From First-Touch to Multi-Touch Attribution With RudderStack, dbt, and SageMaker: an overview of the architecture, data, and modeling you need to assess contribution to conversion in multi-touch customer journeys.

Where should we spend more marketing budget? The answer to this age-old budget allocation question boils down to determining which campaigns are working and which aren't, but with the amount of data we collect on the customer journey today, it's not always crystal clear which campaigns are which. At first glance, it's fairly straightforward to review metrics in ad platforms like Google Ads to examine ROI on paid ad campaigns, and you may even be sending conversion data back to that platform, but you're still restricted to evaluating only a fraction of the user interactions (one or two steps of the conversion path at best) that led up to and influenced a sale. These limitations are due to the fact that many ad platforms were built when basic marketing attribution was enough for most companies. Today, though, data teams and marketers have more data than ever, increasing their appetite for more advanced attribution solutions. Even so, advanced attribution is still hard. But why?

In short, moving beyond basic single-touch attribution introduces a number of complexities. Instead of binary outcomes (this digital marketing campaign brought someone to our site or it didn't), a user can have many marketing touchpoints, which introduces the idea of influence along a buyer's journey, and that's where things start to get tricky. To understand which marketing efforts are contributing more to a successful objective (conversion event), we have to evaluate each relative to all of the other campaigns, and this is very complicated. First, it involves collecting and normalizing an enormous amount of data from a lot of different sources, and second, it requires the application of statistical modeling techniques that are typically outside the skillsets of many marketing departments. Neither of these marketing measurement challenges should be addressed by the marketing team. The first, comprehensive data collection, is a data engineering problem, and the second, statistical modeling, is a data science problem. Because those are not core skills in marketing, most marketers fall back on last-touchpoint models or outsource complex attribution to third-party attribution tools that don't have a complete picture of the attribution data. The problem is that these types of attribution cannot deliver the deep insights necessary for a holistic, cross-channel understanding of marketing performance across the whole consumer journey, from the first touchpoint to the last click. Thankfully, modern tooling can solve this problem across teams.

The challenge: when a user is involved in multiple campaigns across multiple sessions on different devices, how do you know which of these different touchpoints actually influenced the sale? The RudderStack approach involves building a data set in your warehouse that combines the events (user touches) as well as the metadata, such as the marketing campaign, associated with them. In addition to analyzing campaign performance, we can also use this same data for a variety of machine learning models, including lead scoring and likelihood to repeat a purchase. In this article, we will walk through how we recently helped an e-commerce customer realign their marketing budget through the use of a combination of different attribution models. We will start with a high-level architecture review and how they use RudderStack to collect all of the data, including marketing spend, to create a complete story of the user journey in their own data warehouse. Next, we will show you how they used dbt and RudderStack Reverse ETL to prepare the data for modeling. In this example, AWS SageMaker was used to run the Jupyter Notebook, and we will walk you through how the results of multiple models are sent back to their warehouse for reporting.

Architecture, from comprehensive data collection to using multi-touch attribution models. Here is the basic end-to-end flow: stream behavioral events from various platforms (web, mobile, etc.) into the warehouse; ETL additional data sets to complete the user journey data set in the warehouse (sales, emails, app usage volume, inventory, etc.); create enriched user journeys via RudderStack identity stitching; define conversion and user features using dbt; Reverse-ETL user features to S3; run Python models on the S3 data from a Jupyter Notebook in SageMaker and output results back to S3; a Lambda function streams new result records from S3 to RudderStack and routes them to the warehouse and downstream tools. The end result is that the enriched user journey data produces a data flow and feature set that can feed multiple different attribution models as outputs, from simple to more complex. This is important because downstream teams often need different views of attribution to answer different questions. On the simple end of the spectrum, knowing how people initially enter the journey (their first touch) is very helpful for understanding which channels drive initial interest, while a last-touch model shows which conversions finally turn visitors into users or customers. The most important data, however, often lives in between first touch and last touch. In fact, even in our own research on the journey of RudderStack users, we commonly see numerous touchpoints before conversion. Understanding the touchpoints that happen after the first touch and that influence the last touch can reveal really powerful insights for marketing and product teams, especially if those touchpoints cost money in the form of paid campaigns.

Let's dig into an overview of the workflow for this use case. Here's what we'll cover: a quick explanation of how data engineers can collect every touchpoint from the stack without the pain; an overview of how to build basic first-touch and last-touch attribution; and an explanation of why it's valuable to apply additional statistical models for multi-touch attribution, with an overview of how feature building in dbt fits into the architecture.

The data engineering challenge: capturing every touchpoint. Capturing the entire user journey is such a common use case, both for us and our customers, that our teams often take the term for granted. When we talk about user journeys, what we really mean is: in chronological order, tell me every single touchpoint where a particular user was exposed to our business, whether that be on our website, mobile app, email, etc., and also include metadata (such as UTM params, referring URLs, etc.) that might provide context or insight about that particular touchpoint. But where does all of that data come from? The answer is that it comes from all over your stack, which explains why it's a data engineering challenge. For behavioral event data, our customer uses RudderStack Event Stream SDKs to capture a raw data feed of how users are interacting with their website and mobile app (we have SDKs from JavaScript to mobile and server-side, and even gaming frameworks). Behavioral data is only one piece of the puzzle, though. This customer also captured touchpoints from cloud apps in their stack. For that, they leverage RudderStack ETL sources to ingest application data from their CRM and marketing automation tools. Lastly, they use RudderStack's HTTP and Webhook sources for ingesting data from proprietary internal systems (those sources accept data from anything that will send a payload). It's worth noting that RudderStack's SDKs handle identity resolution for anonymous and known users across devices and subdomains. This allowed our customer to use dbt to combine data from cloud apps, legacy data, and user event data as part of their identity stitching framework, to achieve the coveted view of the customer. Solving the data engineering challenge is really powerful stuff when it comes to attribution, in large part because data is the entire foundation of good modeling. This is also why we believe in building your customer data stack in the warehouse, where you are collecting all of your data anyways. Our customer told us that one major advantage of having a flexible stack is that, unlike traditional first-touch and last-touch analysis in GA or Adobe Analytics, building a solution on the warehouse allowed them to factor in the effect and cost of coupons and other discounts applied at checkout via server-side events, and treat them as alternative forms of paid marketing spend. Additionally, having data from sales and marketing automation tools truly completed the picture for them, because they could evaluate the contribution of offline activity, such as emails opened, even if the recipient didn't click on any links that directed them back to the website. Both of these use cases were impossible for them with third-party analytics tools and siloed data.

So at this point, our customer had all of the data in their warehouse and had stitched together customer profiles and journeys using dbt. Then what? After building user journeys with RudderStack and dbt, they had the foundation for creating robust data models for statistical analysis and ML modeling. For their particular e-commerce use case, we helped them create a model that combined both the touchpoints and the marketing campaigns associated with those touchpoints, to create a multipurpose dataset for use in SageMaker. Here is a sampling of some of the event types and sources used. List of channels and RudderStack event types (paid channel: event name, RudderStack event type): Google Paid Display: site visit, Page; Google Paid Search: site visit, Page; Email Nurture Newsletter: email opened, ETL; Email Abandoned Cart: email opened, ETL; Twitter Post Organic: site visit, Page; Facebook Display Image Carousel: site visit, Page; Email Retargeting: email opened, ETL; Braze SMS Abandoned Cart: sms sent, Track; TikTok Display: site visit, Page; Youtube Video: site visit, Page; In-App Messaging Coupon Offer: coupon applied, Server-Side Track; Instagram Shopping: offline purchase, Existing Data In Warehouse; Google Shopping: site visit, Page.

Once we had established the touchpoints we needed in the dataset, the next step was defining the conversion, a key component of attribution models. This customer's direct-to-consumer e-commerce use case defined conversion as a customer making any total purchases over a certain dollar threshold (many companies would consider these high-value or loyal customers). It's important to note that this can comprise multiple purchases over any time period on a user level, which is impossible to derive in traditional analytics tools, because it requires aggregating one or more transactions and one or more behavioral events and tying that to a known user. In the data warehouse, though, the customer had all of that data and could easily manage timelines, because each touchpoint had a timestamp. Using RudderStack and dbt, we helped them build a dbt model that outputs a single table of user touches with associated campaigns and flags each user with a timestamp of whether or not the user eventually converted. UTM parameters were extracted from Page calls, woven together with applicable Track calls (such as Abandoned Cart Email Coupon Sent), and then again combined with other data from ETL sources in the warehouse, as outlined in the table below (name: type): USERID: VARCHAR; EVENT_CATEGORY: VARCHAR; EVENT: VARCHAR; CHANNEL: VARCHAR; TIMESTAMP: TIMESTAMP_LTZ; CONVERSION_TIME: TIMESTAMP_LTZ; ROW_ID: VARCHAR.

The output table was designed to serve a variety of different statistical and ML models beyond this use case, and includes the following columns. userId: the RudderStack user identifier from the various events tables; in our case we will use the Rudder ID created from the identity stitching model (for B2B applications this could be AccountID, Org ID, etc.). event_category: the type of event being sourced; not used in this analysis, but may be useful for filtering or other ML modeling. event: the name of the specific event used to generate the touch; again, this field is not used in our attribution modeling, but will be used in other models. channel: the marketing channel attributed to this particular event; as we will see in our dbt, this could be driven by a UTM parameter on a page event, or it may be extrapolated from the source itself, i.e. Braze SMS messages, email opens from customer.io, server-side events, or shipping data already in the warehouse. timestamp: this will typically be the timestamp value on the event itself, but could be any timestamp to indicate when this particular touch occurred. conversion_time: this represents the timestamp of when the user had their first qualifying order total; it is computed in a different step within the dbt and applied to all of the events for that particular userId, and if the user has not completed the checkout process it will be null (it is important to note that we do not want any events for a particular user after the time the user converts). row_id: the sequence identifier for each userId; this is used by the RudderStack Reverse ETL to support daily incremental loads for new events each day.

With the data set created in the warehouse, the customer connected RudderStack's Reverse ETL pipeline to send the table to S3, where the attribution modeling was executed in SageMaker and Jupyter Notebooks. They then used a Lambda function to send the results back through RudderStack and into the warehouse, where the team could begin interpreting results. Keep your eyes peeled for a deep dive into that workflow in an upcoming post. Here's a visual diagram of the architecture.

Starting with first- and last-touch attribution: as we said above, this customer wanted to evaluate various attribution models to answer different questions about their marketing spend and the customer journey. They started simple, with basic first-touch and last-touch models. As we said above, every touchpoint is timestamped, so it was fairly simple to extract first/last touch and first/last campaign attribution. This customer in particular was interested in comparing that attribution across first- and last-touch models, which was simple to achieve within the same report, SQL query, etc. Interestingly, they said this was incredibly valuable, because a similar comparative analysis couldn't be performed in Google Analytics or by using a spreadsheet to export last-touch attribution from a CRM.

The problem with last-touch attribution: last-touch attribution is the most common way to assign credit for conversion. As simple as it sounds, this is often the case because it's the easiest kind of attribution to track, especially in tools that were never designed for attribution (custom fields in Salesforce, anyone?). For the sake of clarity, a last-touch attribution model assigns all of the credit to the last touch. So if a conversion is valued at x and the user interacted with four different campaigns before conversion, only the final campaign gets the whole credit of x, while the previous campaigns get zero credit. This becomes a major problem when the campaigns' goals are different. For example, some campaigns may aim for brand awareness, which almost always means lower conversion rates. When brand campaigns do convert, it usually happens over a much longer period of time, even after the campaign has ended, or as an assist that brings the user back in prior to conversion. So even if certain brand campaigns are extremely influential on eventual conversion, last-touch attribution models don't afford them the credit they deserve. This is particularly important when marketing teams are trying to optimize the balance between spend on brand campaigns vs conversion campaigns. We see this scenario across all of our customers, be they B2B or B2C, and the larger the sale, typically the flatter the tail. The chart below shows a typical days-to-conversion chart and highlights how last touch can grossly overstate a last-touch campaign's significance.

Better options through statistical modeling: with the complexity of today's marketing environments and the limitations of last-touch modeling, we must consider more complex alternatives for assigning the appropriate credit to the appropriate campaign, and consider the full path up to the point of conversion, which is exactly what our customer set out to do. This problem of attributing user conversion to all touches throughout the journey is called Multi-Touch Attribution (MTA). This itself can again be done with various rule-based approaches. Some examples of these rules are: the Linear Attribution Model, an approach that gives equal credit to all touches; the Time Decay Model, where more recent touches are weighted more, and the longer ago a touch occurred, the less weight it receives; and U-Shape Attribution, similar to Time Decay except that the first and last touches get higher credit and intermediate touches get less. These are all heuristic-based rules and can be arbitrary. At RudderStack we recommend a more data-driven approach, and we routinely employ these two established methods, as compared below. Shapley values are derived from game theory, and they essentially capture the marginal contribution from each touch towards the ultimate conversion. Markov chain based values capture the probabilistic nature of user journeys and the removal effect of each touch point; they also highlight the existing critical touches in the journey, points where, if something goes wrong, the conversion probability is negatively impacted.
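The article's modeling code lives in the customer's SageMaker notebooks and is not included here, but a toy illustration, sketched in Python with hypothetical journeys, shows how last-touch and linear multi-touch attribution diverge on the same data, which is why the comparison matters:

```python
from collections import defaultdict

# Hypothetical converted journeys: ordered lists of channel touches
journeys = [
    ["Google Paid Search", "Email Nurture", "Email Abandoned Cart"],
    ["Instagram Shopping", "Email Abandoned Cart"],
    ["Google Paid Search", "Instagram Shopping", "Email Abandoned Cart"],
]

last_touch = defaultdict(float)
linear = defaultdict(float)
for touches in journeys:
    last_touch[touches[-1]] += 1.0   # all credit to the final touch
    for ch in touches:               # equal credit to every touch
        linear[ch] += 1.0 / len(touches)

print(dict(last_touch))  # Email Abandoned Cart gets everything
print(dict(linear))      # the influence of earlier touches becomes visible
```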
Here's how the results look using these three models, per paid channel and event name, with Last Touch, Markov, and Shapley value columns: Google Paid Display (site visit), Google Paid Search (site visit), Email Nurture Newsletter (email opened), Email Abandoned Cart (email opened), Twitter Post Organic (site visit), Facebook Display Image Carousel (site visit), Email Retargeting (email opened), Braze SMS Abandoned Cart (sms sent), TikTok Display (site visit), Youtube Video (site visit), In-App Messaging Coupon Offer (coupon applied), Instagram Shopping (offline purchase), Google Shopping (site visit), and a Total row.

Helpful insights in comparing models: when our customer evaluated the returned results in their warehouse, they uncovered some pretty interesting and helpful insights. Last-touch-based attribution gives a very high weight to abandoned cart emails. Anecdotally this makes sense, as users are likely enticed by a coupon for something they've already considered purchasing, and this is the last activity they engage with prior to purchasing. On the other hand, both the Markov and Shapley values suggest that while this may occur just before a conversion, its marginal contribution is not as significant as the last-touch model would suggest (remember, the key conversion is total purchases above some value). Instead of continuing to invest in complex abandoned-cart follow-up email flows, the customer focused on A/B testing abandoned cart email messaging, as well as testing recommendations for related products. In the last-touch model, Instagram purchases don't look like a compelling touchpoint. This alone was valuable data, because Instagram purchase data is siloed and connecting activity across marketplaces is complicated; again, using the warehouse helped break those silos down for our customer. Interestingly, even though last-touch contribution was very low, it was clear from the Shapley values that Instagram purchases were a major influence on the journey for high-value customers. So, in what would have previously been a counter-intuitive decision, the customer increased marketing spend to drive purchases on Instagram and included content that drove customers back to their primary e-commerce site. Markov values for Twitter organic posts are much higher compared to the Shapley values. This showed our customer that not many people actually make a significant purchase based on Twitter posts, but when they do, they have very high intent. The customer reallocated spend from Google, which was overrated in the last-touch model, and invested in promoting organic Twitter posts and creating new kinds of organic content. The Facebook Display campaign has a high Shapley value but a low Markov value, which indicates a high dropoff rate after people see ads on Facebook. Based on this insight, the customer moved budget from Facebook to TikTok and YouTube, both of which had far less dropoff.

Conclusion: the only way to truly understand which campaigns are working is to have full insight into every touchpoint of a user's journey. RudderStack eliminates the engineering headaches involved in collecting and unifying all of your data feeds, and reduces the build time for your data scientists with its uniform schema design and identity stitching. If you would like to learn more about how RudderStack can help address your company's engineering or data science needs, sign up free today and join us on Slack. |
2022-04-04 14:32:00 |
Overseas TECH |
DEV Community |
How to use Chart.js in your Angular 13+ project |
https://dev.to/chadwinjdeysel/how-to-use-chartjs-your-angular-13-project-1ccc
|
How to use Chart.js in your Angular project. Charts are one of the best, if not the best, tools to use to visualize data, and every developer should be able to use charts in their project. In this tutorial, I'm going to show you how to add charts to your project using the library Chart.js. Note: this tutorial was made using the latest versions of Angular and Chart.js at the time of writing.

Getting started: first, we'll need to create a new Angular project:

ng new angular-chart-js-tutorial

We'll select no routing and CSS. Then we'll create a new component for the chart itself, so navigate into your project file and use the following command:

ng g c components/chart

I'm using the shorthand for "generate component" and creating a new file for our components. Once completed, open the project in your code editor and navigate to the app.component.html file. Once there, replace all the existing code with the following:

<h1>Chart.js Example</h1>
<app-chart></app-chart>

Creating the chart: now let's go to the chart.component.html file in the components file and add the following code:

<div class="chart-container" style="width: ...px; height: ...px">
  <canvas id="my-chart"></canvas>
</div>

Chart.js uses the canvas element to draw charts. Now let's switch to the chart.component.ts file and add the following imports:

import { Chart, ChartConfiguration, ChartItem, registerables } from 'node_modules/chart.js';

Then we'll create a method and call it in the ngOnInit method. This method is going to be responsible for creating our chart. Your code should look something like this:

ngOnInit(): void {
  this.createChart();
}

createChart(): void {
}

In the createChart(): void method, we'll follow along some steps. Start by registering the chart:

Chart.register(...registerables);

Now we'll set up the data our chart is going to be using:

const data = {
  labels: ['January', 'February', 'March', 'April', 'May'],
  datasets: [{
    label: 'My First dataset',
    backgroundColor: 'rgb(...)',
    borderColor: 'rgb(...)',
    data: [...]
  }]
};

Chart.js also allows us to customise the chart itself by configuring the options:

const options = {
  scales: {
    y: {
      beginAtZero: true,
      display: false
    }
  }
};

All these options do is start the y axis from zero and hide the y axis, to give a cleaner feel. Once that's completed, we'll configure the chart:

const config: ChartConfiguration = {
  type: 'line',
  data: data,
  options: options
};

Note: the type of chart we'll be creating will be a line chart. You can use other options, such as pie, doughnut, bar, bubble, etc. For a list of all the types, you can get started here. Now we'll grab the chart item, the canvas on which the chart will be displayed:

const chartItem: ChartItem = document.getElementById('my-chart') as ChartItem;

Finally, we'll create the chart with this final line of code:

new Chart(chartItem, config);

End result: to view the end result, open up the terminal, type ng serve, and navigate to localhost in your browser once your app startup is completed. The end result should look something like this.

Conclusion: for more details about Chart.js, be sure to check out their website and official repository, and be sure to give them a star. I've also created a repository for you to follow along with in case you get stuck. If you found this post useful, please follow me on Twitter for more Angular and development tips, and check me out on GitHub. Thanks for reading and have a great day! |
2022-04-04 14:31:16 |
Overseas TECH |
DEV Community |
How I Create And Repurpose Content To Have The Most Impact Online |
https://dev.to/mishacreatrix/how-i-create-and-repurpose-content-to-have-the-most-impact-online-gn6
|
How I Create And Repurpose Content To Have The Most Impact Online. There are lots of ways to create and distribute content online these days; there are so many social media and content platforms I've lost count. While this is a huge plus for online creators like me, it can also be challenging to know where to start. This is why learning where to focus your efforts to have the most impact is so important. You can try to be everywhere, sure, but without a proper system or guiding star you'll burn out pretty quickly. This is why I spent the last few weeks brainstorming my Content Creation Conveyor Belt. I wanted to make sure I was focusing my content creation efforts in the right places online, without sinking too much time into it each week. The saying "work smarter, not harder" comes to mind here. This article is a walkthrough of my content creation process, how I repurpose my content, and the tools I use to make it happen.

How It Works: An Overview. Let's start with a bird's-eye view of the whole system. Here's a mind map I created to summarize everything. As you can see, the system starts with the smallest unit of content, the idea, then works its way up to a larger, more fleshed-out idea. Each stage of the system requires repurposing, adding and removing content to make it suitable for the platform it will live on. It's important to say here that not every idea will work well on every platform. I've had some ideas that made great tweets and great articles but aren't suited for threads. I've also had ideas that immediately turned into articles, like this one, without testing them out as essays or threads. Keep this in mind if you are looking to try this out for yourself. The way I think about this is twofold: which format I feel would have the most impact, and how fleshed out I think the idea is. Take the idea for this article as an example: how I create and repurpose content. I feel pretty strongly that it is more helpful as an article than as a super concise essay or thread. I also know a lot about this topic, so it's something I can almost write stream-of-consciously, without thinking about and refining the idea. This part of the system is definitely a personal preference, but it's how I operate right now. I should also note here that this idea isn't mine. I took great inspiration from Ev Chapman's article "The Bottom's Up Approach To Writing That Guarantees A Successful Article". I highly recommend you read it to get a sense of how this approach works. If you're a content creator and don't know who Ev Chapman is, fix that now.

Tweet: initial ideas start life as Tweets. If you think about it, a Tweet is the best way to test an idea: it costs nothing, is very low effort, and the results can be amazing. I've often heard it said that a Tweet is like a lottery ticket. Not sure where this idea originally came from, but Alex Llull captures it really well in this Tweet. To manage the ideas for my Tweets, I use inboxes like Todoist and a notebook to capture them as they come to me during the day. I manage my Tweets in a Notion dashboard I call the Tweet HUD. It allows me to refine the Tweet and repurpose it in different ways. The best Tweets each week are repurposed to my Instagram using Poet.so: I use Poet.so to turn the Tweet into an image, then manually post it on Instagram when I think of it. I'm not very regular about this process, but it has been a proven way to increase my Instagram audience without much extra effort.

Essay / Thread: I use Twitter's Analytics to see my top-performing Tweets from the last week or so. These best Tweets are perfect candidates to become atomic essays or threads. Sometimes I'll create both, sometimes I'll pick one; it depends on the idea and how well I think it would work. Atomic essays are published on my website and on Typeshare. Threads, of course, get published on Twitter; I use Zlappo to schedule my tweets and threads. I manage writing my atomic essays and threads in Obsidian, with dedicated folders for each type of content under a primary Content Creation folder.

Article: essays or threads that did well are repurposed and expanded into longer-form articles. Not all articles work like this, though. This article, for example, I'm writing from scratch: I did share the mind map on Twitter, but never turned the idea into a thread or an atomic essay. I write my articles in Typora and manage them in Obsidian, in an Articles folder. Articles are published to my website and then cross-posted to Medium, Dev.to, and Hashnode. I also include a link to the article on my Changelog site.

Video: now we come to the part of the process I don't actually follow yet. Currently I don't have any video presence online, but it's something I'm working on this year. Here's how I plan to repurpose my content for video: articles will be turned into videos for YouTube. I'll either record myself speaking the article, or talk to the camera off the cuff or with a rough outline. These longer YouTube videos will be cut down into short snippets for Instagram, Twitter, and potentially TikTok. Ship 30 for 30 does this to great effect. Of course, I'm sure this process will evolve as I start figuring everything out, but for now this is the rough idea.

Podcast: again, as with video, this isn't something I do yet, but I am hoping to get started with it this year. For the videos I create above, I can pull the audio out and create a podcast version of the content. Lots of popular podcasts and YouTube channels already do this, so I'm not reinventing the wheel. I plan to use Anchor to upload the podcasts, as this service distributes content across all major podcasting services. I could also pull out particular audio snippets and repurpose those on Twitter and Instagram; Design Details is a super podcast that does this.

My Obsidian Setup For Content Creation: a little side tangent here, but I'm sure lots of you are wondering how this is all managed in Obsidian. Honestly, I could write a whole article about how I use Obsidian for content creation, but for the purposes of this article I'll keep it brief. I have a Content Creation directory which is subdivided into the following directories: atomic essays, threads, articles, videos, podcasts. Here's a screenshot. Each type of content also has its own template. This means that each time I create a new article, for example, I can add the template and avoid writing things out from scratch. Here's what my article template looks like, for example. Let me know if you'd like a more in-depth article on my Obsidian workflow for managing content creation.

My Key Takeaways: I approach content creation with this mindset: create once, use many times. What works on one platform may not work on another; consider how you can repurpose that idea in a way that will be most suited to the platform. To avoid being overwhelmed, pick one to two platforms and start with those until you have the routine down, then start adding more platforms whenever you like. See what other ways people are repurposing their content across different platforms and try them out for yourself; there's no harm in experimenting. If you enjoyed this, please consider sharing it with someone else who might find it useful. This article was originally published over on my website: How I Create And Repurpose Content To Have The Most Impact Online. |
2022-04-04 14:29:51 |
Overseas TECH |
DEV Community |
writing command line scripts in php: part 2, reading STDIN |
https://dev.to/gbhorwood/writing-command-line-scripts-in-php-part-2-reading-stdin-2enf
|
writing command line scripts in php: part 2, reading STDIN. although not a popular choice, php can be a very effective language for writing command line scripts. it is feature-rich, and if you are longer on php skills than, say, bash, or if you have already-existing php code you would like to incorporate into a command line script, it can be an excellent choice. in this series of articles we will be going through the various techniques and constructs that will help us build quality scripts in php. this installment focuses on reading from standard input.

previous installments: this is the second installment of the series. the first installment covered parsing command line arguments, preflighting our scripts to ensure they can run in the current environment, and handling some niceties of script design. the articles that compose this series so far are: pt. 1, arguments, preflights and more; pt. 2, handling STDIN input.

the flyover: we will be designing our example script here to read data piped into it from the STDIN stream. this feature requires us to do two things: test if there is content in STDIN waiting to be read by our script, and actually read the STDIN input. if you are not familiar with linux data streams such as STDIN or STDOUT, it is a good idea to spend a few minutes reading up on them first.

reading piped input: command line scripts often take their input as piped in from STDIN. consider this short little pipeline that finds all the users on the system that use the fish shell instead of bash:

cat /etc/passwd | grep "/bin/fish"
gbhorwood:x:...:...:grant horwood:/home/gbhorwood:/usr/bin/fish

the cat command dumps the contents of the file to STDOUT. normally, STDOUT goes to our terminal so we can read it; however, in this example the | operator (pipe) traps the contents of STDOUT and uses it as input for the next command to the right. the pipe operator essentially pipes output from the command on the left to the input of the command on the right (that's why it's called a pipe). this is a handy feature, and one we want to implement in our php script, so let's do that with this function, which we will add to ourfancyscript.php:

#!/usr/bin/env php
<?php

/**
 * Read contents piped in from STDIN stream
 *
 * @return String
 */
function read_piped_input() {
    $piped_input = null;
    while ($line = fgets(STDIN)) { // note: STDIN here is not a string
        $piped_input .= $line;
    }
    return (string)$piped_input;
}

/**
 * Entry point
 */
$my_piped_in_content = read_piped_input();
print "piped input is:".PHP_EOL;
print $my_piped_in_content;

let's look at that read_piped_input function. the core functionality here is using fgets to read from the STDIN pointer on a loop, line by line, until the content is exhausted. those lines are concatenated together and returned. mission accomplished. let's see how it runs:

echo "this is our piped input" | ./ourfancyscript.php
piped input is:
this is our piped input

exactly what we expect.

testing for piped input: except there's a problem. if we run ourfancyscript.php without any input on STDIN, it hangs. why? because it's patiently waiting for input that never comes. to solve this, we are going to write a function that tests whether or not there is any input on STDIN, and only read from the pipe if it returns true:

/**
 * Test if there is input waiting on STDIN
 *
 * @return bool
 */
function test_piped_input() {
    $streams = [STDIN]; // note: STDIN here is not a string
    $write_array = [];
    $except_array = [];
    $seconds = 0; // zero seconds on timeout since this is just for testing stream change
    $streamCount = stream_select($streams, $write_array, $except_array, $seconds);
    return (boolean)$streamCount;
}

the key to this function is the stream_select command. stream_select basically waits for the state of a stream to change, timing out after $seconds seconds have passed. we pass to it STDIN as the only element of an array, since that's the stream we're interested in, and set the timeout seconds to 0. we use zero seconds because STDIN input is present, or not, before we even run our script; there's no sense waiting around for it. it's either there or it isn't. if there is data piped in to our command, STDIN has by definition changed, and stream_select returns a non-zero number: we know we have data waiting for us. if there is no data, the stream is unchanged and the return is 0.

putting it together: now that we have test_piped_input and read_piped_input, we can put them together in our script:

/**
 * Entry point
 */
if (test_piped_input()) {
    $my_stdin_content = read_piped_input();
    print "piped input is:".PHP_EOL;
    print $my_stdin_content;
}

if we now run ourfancyscript.php without a piped-in stream, it proceeds. if we do pipe in data, it handles it. let's look at the full script now:

#!/usr/bin/env php
<?php

/**
 * Test if there is input waiting on STDIN
 *
 * @return bool
 */
function test_piped_input() {
    $streams = [STDIN]; // note: STDIN here is not a string
    $write_array = [];
    $except_array = [];
    $seconds = 0; // zero seconds on timeout since this is just for testing stream change
    $streamCount = stream_select($streams, $write_array, $except_array, $seconds);
    return (boolean)$streamCount;
}

/**
 * Read contents piped in from STDIN stream
 *
 * @return String
 */
function read_piped_input() {
    $piped_input = null;
    while ($line = fgets(STDIN)) {
        $piped_input .= $line;
    }
    return (string)$piped_input;
}

/**
 * Entry point
 */
if (test_piped_input()) {
    $my_stdin_content = read_piped_input();
    print "piped input is:".PHP_EOL;
    print $my_stdin_content;
}

next steps: there's still a lot more ground to cover in effectively getting input to our script. in future installments, we will be looking at interactive input. |
2022-04-04 14:29:05 |
Overseas TECH |
DEV Community |
Why we use Ember.js at OTA Insight |
https://dev.to/otainsight/why-we-use-emberjs-at-ota-insight-4oai
|
Why we use Ember.js at OTA Insight

We always choose the stack that is right for the job. We often get asked why we use Ember.js. It's not the most popular framework, nor the one with the largest community, but it is the right choice for the products we are building at OTA Insight. From our first product, Rate Insight, to the four after that: all of it is built with Ember.js. And for good reasons. We were able to create new features quickly, have a codebase that's scalable, and have a good developer experience. All of these are reasons we choose Ember.js. But let's go into a bit more detail.

Batteries are included
It's a phrase you'll see floating around the Ember.js community, and it's one of the reasons why we were able to move so fast here at OTA Insight. The strong focus on convention over configuration enables you to move quickly, as you don't need to reinvent the wheel every time. It also gives developers new to your codebase a good guideline on how to write code: there aren't multiple competing ways of doing things. There were times we had to make something custom, going against the conventions put forth in the framework. This can become difficult, requiring some research into the inner workings of Ember.js, but this hasn't occurred all that much. The benefits of having strong conventions and guidelines on how code should be structured are more than worth this tradeoff.

What steep learning curve?
The thing I hear developers fear most is the steep learning curve of Ember.js. Not sure where this comes from, especially with the latest move to Ember Octane. After this milestone, Ember.js is making use of the latest JavaScript features, and it feels pretty close to working with standard JavaScript. Gone are the days of having to call custom get/set functions, using EmberObjects, etc. Now everything is done with class syntax, decorators, etc. If you know JavaScript, you'll have no problem getting started in Ember.js.
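To make that Octane point concrete, here is a minimal sketch of what a modern component looks like. CounterComponent is an invented example for illustration, not code from our codebase:

```javascript
// A minimal Ember Octane component (illustrative example only).
import Component from '@glimmer/component';
import { tracked } from '@glimmer/tracking';
import { action } from '@ember/object';

export default class CounterComponent extends Component {
  @tracked count = 0; // a plain class field; no EmberObject get/set needed

  @action
  increment() {
    this.count += 1; // a plain assignment is enough to trigger a re-render
  }
}
```

In the template this is just `<button {{on "click" this.increment}}>{{this.count}}</button>`, which is close to plain HTML; that is exactly the point.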
The templates used in Ember.js are close to basic HTML, with the ease of Handlebars added on top. You separate your concerns, having the template and styles separated from the logic. All of this provides a clear overview for developers starting with the framework. Of course, there are still Ember.js-specific features you need to know. How is the app structured? What are the lifecycle hooks? What is the Ember.js way of doing things? But this is what you'll see learning any framework. Bonus is that here at OTA Insight we have a whole team of people with experience in Ember.js who will be happy to help you out. All you have to do is ask.

Modern frameworks and Ember.js
We regularly check our tech stack and see if we can improve it. Our use of Ember.js is no exception. We recently built a Vue.js prototype of our application to see how it compares. We focused on four factors to determine how they compared: developer experience, performance, scalability, and community. I won't go into too much detail here (that may be a post for later, or if you want to know more, feel free to reach out). Regarding developer experience, it was clear the conventions defined in Ember.js really allowed us to develop things quickly. Speed was comparable to Vue.js, thanks to the Glimmer components recently introduced in Ember.js. Even now, with five products, the codebase is still scalable as well, although using a framework like Nuxt you could probably come to a similar result. Even though there is a larger community around Vue.js, we found Ember.js had a clearer roadmap for the future. After our research was done, we had to decide: is it worth changing our framework to Vue.js? What was clear is that the two were comparable: Ember.js outperformed in some regards and Vue.js in others. We decided that if Ember.js can compete with one of the top three frameworks out there right now, it's clear this was the right choice back in the day, and we are still proud to be developing in Ember.js.

Paving the way
Ember.js is the right choice for our main application. But it isn't always the right choice; our design system, for example. Frameworks come and go in the frontend development world, but web components are forever. That's why we have built, and are expanding, a design system using Stencil.js. Web components can be used in any framework, and Stencil.js makes it easy to create new ones. We did not build this in Ember.js on purpose. While Ember.js has its use, and while we still believe in it now, who knows what the frontend landscape will look like years from now. Having our design system, and so our most generic and most used components, be framework-agnostic is a huge benefit. It opens the door to have them used with other frameworks, or with no framework at all. We have several smaller apps; some use Ember.js and some are just plain HTML, CSS and JavaScript. In the future we might even have one running in React or Svelte. All of these can use our design system. We don't cling to Ember.js just because of a choice we made all those years ago. We do it because we still believe it's the right choice for our main app. For the design system, it wasn't. We always choose the stack that is right for the job. |
2022-04-04 14:26:12 |
Overseas TECH |
DEV Community |
First impressions of the new AWS Cloud Quest: Cloud Practitioner adventure |
https://dev.to/aws-builders/first-impressions-of-the-new-aws-cloud-quest-cloud-practitioner-adventure-4bco
|
First impressions of the new AWS Cloud Quest: Cloud Practitioner adventure

What is AWS Cloud Quest, and who is it for?
In the middle of last month (March), AWS announced two new free initiatives for upskilling yourself in building foundational cloud skills. AWS Cloud Quest: Cloud Practitioner is one of the two, and it is a game-based role-playing experience. Its target audience is new-to-cloud and early-career learners, and it aims to teach you cloud computing concepts through quest completion, interaction with NPCs (Non-Player Characters), and collecting gems after completing the challenges. It comprises challenges ranging from cloud essentials to highly available apps, and covers the foundational elements anyone needs to know when starting out on their learning journey with AWS and cloud.

In my role I need to be up to speed with the offering of cloud learning courses and experiences, so I thought I'd give it a try and see how it performs. In the next sections I'll cover four main areas: interface and world (it is a game, after all), content and quests (how does the knowledge align with the game?), user interaction and performance (how do the system and environment perform?), and learning experience (how effective is it in teaching cloud concepts and practice?). That is followed by an overall summary of the review and a recommendation compared to some other AWS free education programs.

Interface and world
To get started, let's look at the interface and world. The overall look and feel reminds me of Sims and those retro-feel games, but it brings quite a few surprise elements, like a giant gorilla the first time you land in the city. The character creation interface and process is a little bit clunky, and the choice of customisation is decent but a little limited for my liking. For an educational game, though, I'd say it performs decently in this category. An example of the name selection interface post character creation, and my new learner badge, below.

Content and quests
Once your character is ready and you've picked an available name for your badge, you get dropped into the virtual game world and the adventure begins. The city map is pretty vast and has quite a few different elements, including a gorilla and a giraffe that you see as soon as you arrive; you have a hoverboard to move around, and there are plenty of NPCs around to give you work to do. The dialogue keeps it casual but also technical, especially when getting your quests. Some example screens below, with the city map and the dialogue when receiving my first quest.

User interaction and performance
In terms of the interaction with the characters around the city, that was actually one of the better parts of the gameplay, and it seemed to go a bit smoother than the actual quest elements' interaction and performance. The dialogue to get my first quest was easy to navigate, the interface clear, pointing you to exactly what to do to advance, what you would need to do to complete the challenge successfully, and what services and learning you will need in the process.

Unfortunately, after this is where it all went downhill for me a little. Once I collected the quest, it was time to get down to work and complete it, and for that I had to go to the Solution Center. What is the Solution Center, you ask? Well, it is the hands-on lab and learning environment where you do most of the building of your foundational cloud skills. Having previous experience with AWS, I still went through the steps and motions to see what the learning experience would be like for my less experienced peers, and unfortunately this is where it didn't quite make the cut for me performance-, interaction- and time-wise when compared with the more traditional AWS digital training. When you come to the Solution Center to build the solution and complete the quest, there are four steps you need to go through: Learn, Plan, Practice, DIY.

Learn
In the learn section you can interact with a diagram of the solution you build, and you can watch videos on the concepts and services that you will use. I must admit the diagram was really helpful: it gets new-to-cloud learners used to the concept of solution or architecture diagrams, as well as explaining what the services are. The less great part of this step was the fact that the videos were completely frozen for me, and I could only listen to the voice of the presenter without the visual help. This wasn't that great for me, as I am a visual learner and prefer to have graphical support when studying new concepts. It might have been a glitch, or potentially due to lag.

Plan
In the plan section you use the architecture diagram to creatively come up with the solution for the challenge. I must admit that, it being a static website on S3, I didn't really put my heart into this step as much as a new-to-cloud person might.

Practice
Now this, alongside DIY, was my favorite part of the learning in terms of concept, but it had major difficulties with the lag and performance aspect. Something that would have taken me minutes to do on my own took far longer, due to the glitchy interface: buttons not registering clicks or scrolls, or the screen loading time being very slow. One of my co-workers gave it a go as well, as I wanted to see if there was something wrong with my account or connection, but it seemed to be just as bad for them. Now, the lab itself was great: they supply the files and steps to help you go through it, as well as a time-bound lab account to use, so you have everything set for your practice. The instructions and explanations were really clear, and I think probably the most valuable part of the whole quest is in the Solution Center. If it weren't for the slowness and glitchy controls, I would have scored this exceptional for learners. I did manage to finish it in close to half an hour with all the delays, though, and could move on to the DIY section.

DIY
The last section of the Solution Center experience is the DIY: this is where you get a challenge to solve based on the practice lab you just completed. For the first quest it is to change the name of the index file, which is not too complex, even for beginners. I also like that it allows you to stretch yourself a bit from the start and gives you more confidence in solving issues going forward.

Learning experience
Once you have completed your solution steps and finalized the quest, you can go back to the NPC who gave you the task to complete it. I really like the achievement-building part of it, as well as the integration of resources in the Solution Center to allow for a blended learning model: from reading architecture diagrams, to video concept lessons, and finally hands-on lab practice using AWS. It was really well thought out. Now, are the slowness and delays worth it? That's debatable: as someone with more experience you might feel it is a bit cumbersome, but as a new-to-cloud learner it might be totally worth it, as you feel you can take your time. The performance might also improve now that the initiative is live and they can improve on it with user input and feedback, so it could be worth coming back to it in a few weeks.

I think I would definitely appreciate the Solution Center approach even without the gamified experience, but if the goal is to have you go through a series of quests, and different quests from practitioner to specialist and more (like the certifications), I can see why it is cool to have an integrated game-like world to do this in. The fun little easter eggs are also great: I got to pick my own personalized type of lighthouse to add to the city after completing the quest, so who am I to refuse that?

Overall thoughts and review
I think AWS Cloud Quest: Cloud Practitioner is a good start for AWS in exploring diversified and gamified cloud learning experiences. I know AWS has committed to providing free cloud computing skills training to 29 million people by 2025, so I think this might be a good avenue for them to use when reaching out to early-career and skills learners interested in cloud computing. I have submitted my feedback on the experience, especially in terms of the performance, control response and lag overall affecting users' enjoyment and learning journey, so I hope to see some improvements in the following months. I think if you have the time and an interest in game-like learning, this is definitely worth it, although for more time-conscious and traditional-learning fans I would still recommend the AWS Digital Training and Skill Builder resources as a more straightforward and concise experience. |
2022-04-04 14:26:09 |
Overseas TECH |
DEV Community |
A galactic guide to building a blog with Next.js and Contentful |
https://dev.to/stahlwalker/a-galactic-guide-to-building-a-blog-with-nextjs-and-contentful-12o1
|
A galactic guide to building a blog with Next.js and Contentful

In a galaxy far, far away, there lived an individual who was known as the grill-and-breakfast guy. So he set out on an epic quest to become not just a Padawan Chef, but a Master Chef like those who came before him. This is a guide to building a blog with Contentful while using the popular JavaScript framework Next.js. If you haven't caught on already, it's a Star Wars-themed cookbook filled with recipes handed down from multiple family members and friends. I'm hopeful that you'll enjoy this journey and the build-out of this app. May the FOOD be with you.

Episode I: The Phantom Starter
To get your project started, you need to follow along with this amazing Next.js and Contentful starter guide crafted by Developer Advocate Brittany Walker. This walkthrough will help create your project, connect your Contentful account, and deploy with Vercel. Following this starter, you'll be able to implement any of the additional features detailed below.

Episode II: Attack of the Apps
Now that you have your project created, running locally, and connected to Contentful, let's add a couple of Contentful apps. The first thing I highly recommend is adding webhooks and connecting them to your Vercel project. This will notify Vercel to rebuild every time you make changes to your content in Contentful. To add a webhook, go to settings in the Contentful web app and simply select "Add Webhook", or you can use the Vercel template, which we can do because this project is already deployed there. If you are taking the manual approach, here is the Webhooks documentation to help you along.

Second, we want to install the GraphQL playground app. You'll find this by navigating to "Apps" and clicking on "Manage Apps" in the top menu, then scrolling down to all the available apps. Simply click install, and this feature will then be available to reference later when querying data within your project. Note: you'll need to configure this app, so have your Contentful Preview API token available.

Episode III: Revenge of the Disqus
With any good blog, having the ability for readers to interact is a must. On multiple occasions I've used Disqus when adding a comments section. However, being new to Next.js, I wasn't certain where to add it, and Disqus doesn't have an integration with Next.js within its tool. So here are the steps to adding Disqus. First, sign up for an account at Disqus. They have a free plan which will get the job done. After signing up, you'll need to click "I want to install Disqus on my site". Follow the instructions by adding your website name and category. Again, you won't see Next.js listed in the platforms section, so you will need to choose a manual universal code install. I will be providing an example of my code below. Now, you are in luck, because there is Disqus React, an npm package for Disqus. In your terminal, run:

```
npm install disqus-react
```

Next, we are going to add a comment.js file to the components folder and add the following code:

```jsx
// components/comment.js
import { DiscussionEmbed } from "disqus-react"

const DisqusComments = ({ post }) => {
  const disqusShortname = "stahlwalkercookbook"
  const disqusConfig = {
    url: post.slug,
    identifier: post.slug, // Single post slug
    title: post.title, // Single post title
  }
  return (
    <div>
      <DiscussionEmbed shortname={disqusShortname} config={disqusConfig} />
    </div>
  )
}

export default DisqusComments
```

Next, we need to import the Disqus component into the [slug].js file located in the posts folder in your pages directory. Then it's just a matter of finding where you'd like the comments to populate; I placed it at the end of my blog posts.

Episode IV: A New Social Share
I'm a fan of the ability to share your blog across whatever social networks you prefer. We're going to use another npm package to add social sharing. In the terminal of your project, run:

```
npm i next-share
```

From there, you can add the following code to your [slug].js file. I added this component after my article, before the comments. You'll need to update the URL to your site's URL to grab all the individual posts. From here, people will be able to share not just your site, but the blog post you've written. Below is an example:

```jsx
// pages/posts/[slug].js
<div className="social">
  <h3>Looks tasty? Share with friends</h3>
  <FacebookShareButton url={post.slug}>
    <FacebookIcon size={32} round />
  </FacebookShareButton>
  <TwitterShareButton url={post.slug}>
    <TwitterIcon size={32} round />
  </TwitterShareButton>
</div>
```

Episode V: The Open Graphs Strike Back
Now, to make sure everything looks great when links from your blog are shared, we're going to work on updating the open graph tags. Open graph tags hold metadata that is used by search engines and social media platforms. This section focuses on updating your open graph tags, including Twitter and Facebook. The file we are working with is located in the components directory and named meta.js. Make sure to customize yours to fit your project. For example, you'll have to create a project in Facebook to obtain your fb:app_id. With Twitter, I created a "summary_large_image" Twitter card; alternatively, you can just put "summary" if you prefer the smaller content display. One problem I ran into was that images were not properly displaying. This was because the image meta properties require a full image URL, so keep that in mind while updating. Here is an example of what my meta tags look like:

```jsx
// components/meta.js
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@LucasStahl" />
<meta name="twitter:creator" content="@LucasStahl" />
<meta name="twitter:title" content="Stahlwalker Cookbook" />
<meta name="twitter:description" content="A blog dedicated to cooking up recipes for all those far far and away" />
<meta property="twitter:image" content="" />
<meta property="og:title" content="Stahlwalker Cookbook" />
<meta property="og:type" content="website" />
<meta property="og:url" content="" />
<meta property="og:image" content="" />
<meta property="fb:app_id" content="" /> {/* add Facebook id */}
<meta property="og:description" content="A blog dedicated to cooking up recipes for all those far far and away" />
```

To validate these are working correctly, you can check Twitter's validator and Facebook's debugger. Regarding your favicon images, those can be located in the public directory. Here is a link to a favicon generator to create the correct image sizes you need to replace the current Next.js image.

Episode VI: Return of the Categories
Having your blog posts is one thing, but wouldn't it be cool if you could sort them by a category tag? To do this, we need to add another field to the content model, which we can do in the "Content Model" section in the Contentful web app. In the "Post" content type, I added a new text field and selected the "short" and "list" options. This new field means you can start tagging your blog posts so that they can later be searched. In my cookbook, I categorized the recipes by cuisine type. Once you have your content model updated, you need to update your GraphQL query in your code. In the api.js file located in your "lib" directory, add the name of the new field to the query variable; I called my field foodCategory.
If you are unsure how your query should look, this is where you can use the GraphQL playground app to make sure your queries are correct. I wanted the category to be listed under the header on the blog post pages, so I added foodCategory as a prop for my PostHeader component, so that I could access the category within the PostHeader component:

```jsx
// pages/posts/[slug].js
<PostHeader
  title={post.title}
  coverImage={post.coverImage}
  foodCategory={post.foodCategory}
  date={post.date}
  author={post.author}
/>
```

If you would like users to be able to search by category, you could add an API search tool such as Algolia or Elasticsearch. You may also want to filter all blog posts by category and build that out; this is the first step in the process.

Episode VII: The CSS Awakens
I have zero experience with Tailwind, and it comes pre-installed with this project. If you are like me and want to get back to the basics, you can style with regular CSS. The styles folder is located in the public directory; you'll just have to apply a class, and remember, since we are using Next.js, you will need to use "className" on the containers you are looking to style. Since this is a Star Wars-themed cookbook, I used Google Fonts to get a font with a look and feel similar to what has been used in the films. I also used Font Awesome for social icons. I added both libraries to my _app.js Head tag.

Episode VIII: The Last Load More Button
At this point, my homepage was displaying all of my blog posts, and I wanted to have more control over how many posts were listed. You have the option to add pagination, and if that interests you, I suggest taking a look at the following blog on Next.js pagination with Contentful. If you want something simple and sweet, you can add a load more button. In this instance, I went with the latter. I modified the existing more-stories component, which allowed me to control how many posts to display and how many should appear when a user clicks to load more. I also used a prop to determine if the component should display a load more button, since it is used in multiple places. Here is an example of what that looks like:

```jsx
// components/more-stories.js
import React, { useState } from "react"
import PostPreview from "../components/post-preview"

export default function MoreStories({ posts, showMore }) {
  const [postNum, setPostNum] = useState(4) // Default number of posts displayed

  function handleClick() {
    setPostNum((prevPostNum) => prevPostNum + 4) // reveal more posts on each click
  }

  // only offer the button when there are still hidden posts
  const showLoadMore = showMore && posts.length > postNum

  return (
    <section>
      <h2 className="mb-8 text-6xl md:text-7xl font-bold tracking-tighter leading-tight">
        More Recipes
      </h2>
      <div className="grid grid-cols-1 md:grid-cols-2 md:col-gap-16 lg:col-gap-32 row-gap-20 md:row-gap-32 mb-32">
        {posts.slice(0, postNum).map((post) => (
          <PostPreview
            key={post.slug}
            title={post.title}
            coverImage={post.coverImage}
            date={post.date}
            author={post.author}
            slug={post.slug}
            excerpt={post.excerpt}
          />
        ))}
      </div>
      {showLoadMore && (
        <button className="load-more" onClick={handleClick}>
          Load More
        </button>
      )}
    </section>
  )
}
```

Once you have the code updated in your component, make sure to add the new prop, showMore, to the component in the index.js file:

```jsx
// pages/index.js
{morePosts.length > 0 && <MoreStories posts={morePosts} showMore={true} />}
```

And there you have it: your load more should be in effect.

Episode IX: Rise of the Stahlwalker Cookbook
And that is how I created my first Next.js project. We took a starter and built it out by adding apps, a commenting section, and the ability to share; styled it with CSS rather than Tailwind; and added a load more option for viewers to dive further into your blog posts. To see all of the final code, you can access it here. The Next.js and Contentful starter guide really inspired me, so I'd love to see what you've built with it as well. I'm planning on continuing to add features to my new blog, but if you'd like to check it out, here is the live version of the Stahlwalker Cookbook project. |
2022-04-04 14:25:58 |
Overseas TECH |
DEV Community |
Mobile Device Management Guide 2022: AOSP for custom MDM |
https://dev.to/antonlogvinenko/mobile-device-management-guide-2022-aosp-for-custom-mdm-3cin
|
Mobile Device Management Guide 2022: AOSP for custom MDM

Mobile Device Management, or MDM, is an administrative area that deals with the security and monitoring of corporate mobile devices. MDM software offers methods to quickly deploy, integrate, and monitor a network of certain smartphones or tablets. One of the primary concerns of MDM is security: since corporate mobile devices can access critical business data, they can threaten enterprise databases. Administrating devices through MDM provides the means to distribute software packages, set permissions, and optimize device functionality. But ideally, MDM has to provide a way to oversee mobile devices as easily as desktop computers.

There are MDM systems for all mobile operating systems, including cross-platform ones. But today we'll focus on Android, as the only operating system that offers fully fledged customization in terms of MDM. Given its biggest market share and open-source nature, let's look at how we can approach building mobile device management for Android devices.

Mobile device management features and capabilities
Before analyzing the actual Android solutions, we need to clearly understand what MDM is capable of. Mobile device management systems provide quite similar capabilities across the market. Some of the most common functions are:

Enrollment: a procedure that entails installing MDM on a mobile device. The majority of MDM software supports bulk enrollment to install software packages on multiple devices at once. Over-the-air (OTA) enrollment means distribution and installation of MDM through a dedicated web page or app.
Profile management: once we install the MDM package, we can assign working profiles for the device users.
Policy management: a policy is a set of rules or permissions for a given device. By providing policies, we may lock or unlock some of the software or hardware functions, define rules for accessing corporate data, etc.
Device administration and troubleshooting: further, all the policies can be updated remotely. This part of MDM functionality implements monitoring of the device and troubleshooting.
Device location tracking: detecting device location via GPS.
Remote wiping: once we enroll MDM on the device, all the corporate data accessed through it will be stored on a protected profile. This data, and the MDM itself, can be wiped to factory settings remotely to prevent any data leakage from a stolen, lost, or compromised device.

MDM is a complex solution. However, it consists of small, modular parts that, on closer inspection, are actually not that hard to implement, as long as a skillful team is involved. Generally, these parts can be divided into an admin panel, a device policy controller (DPC), and middleware to orchestrate enrolled devices. By middleware we mean an API interface that performs all the policy updates and transfers data between the MDM server and smartphones. So MDM solutions basically provide management of such features as WiFi, Bluetooth, NFC, USB file transfer, location tracking, and phone/SMS, or simply allow users to access settings on devices that otherwise wouldn't be available. If any additional functionality is required, the best thing to do is to extend the existing API by adding methods, managers, and services. It is done with certain policies on the backend, which will later be pushed to devices. Then a system service is created as a simple way of sharing these settings across devices. Based on our experience, such systems might scale to thousands of devices and require numerous custom policies.
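As an illustration of that middleware layer, a tiny Node.js/Express version might look like the sketch below. The endpoint paths, the policy shape, and the port are invented for this example; they are not taken from any particular product:

```javascript
// Illustrative sketch of the MDM "middleware" idea: the admin panel stores a
// policy per device, and the DPC on each device fetches it on its next sync.
const express = require("express")

const app = express()
app.use(express.json())

const devicePolicies = new Map() // deviceId -> policy object

// The admin panel pushes a policy update for one device
app.put("/devices/:id/policy", (req, res) => {
  devicePolicies.set(req.params.id, req.body) // e.g. { wifiAllowed: false, usbFileTransfer: false }
  res.sendStatus(204)
})

// The on-device DPC polls this endpoint and applies what it receives
app.get("/devices/:id/policy", (req, res) => {
  const policy = devicePolicies.get(req.params.id)
  policy ? res.json(policy) : res.sendStatus(404)
})

app.listen(8080)
```

A real system would add authentication, device registration, and push-based change notification, but the split shown here (policy store, write API for the admin panel, read API for the DPC) is the core of it.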
The majority of the security requirements are covered by the MDM solutions available on the market. However, some systems may require enhanced security and independence from operating-system providers. This imposes a challenge: building a custom Android MDM where any policy can be deployed on the device. So first, let's look at the standard solutions for Android MDM, and then talk about the customizable options.

Android Management API: out-of-the-box MDM
Android Management API is a managed solution for building mobile device management systems for Android. Google ships the whole package of MDM software out of the box, including a backend based on their Cloud Platform, a device policy controller, and a user interface to administrate corporate devices. All of these components become available after a few steps of registration we'll describe later. The Android API itself is required to build your own MDM solution and create custom policies. Currently, registration for new solutions is open, except for creating a custom DPC. This means it is possible to develop an MDM platform based on the Android Management API, but you won't be able to apply custom policies.

To use the managed MDM, you'll have to create a dedicated Google account to log into Google Workspaces; the account cannot be associated with an existing enterprise account. Google provides device policies for a long list of Android smartphone types (Samsung KNOX excepted). There are three ways we can implement MDM with the Android Management API:

Work profile: a dedicated account on a device that stores and transfers corporate data without affecting personal data. MDM policies are applied to the profile data only.
Managed device: a smartphone or tablet enrolled with MDM.
Dedicated device: a separate device used with restricted functionality. For example, this can be a tablet used as a bulletin board.

The system allows enrolling target devices over the air, which means no cables are required: you can use a wireless connection to enroll and manage mobile devices on Android. The list of available policies includes nearly all of the native functionality for Android devices. While the existing policies are not customizable, the ease of deployment and the lack of any need for Android development outweigh this flaw. So now, let's look at how to approach Google MDM.

HOW TO SET UP ANDROID MANAGEMENT API MDM
The registration procedure for Google's MDM platform imposes several requirements: an Android device on a recent enough OS version, access to Google Cloud Platform, and the Android Management API enabled in Google Cloud Platform. After all the requirements are met, there are two ways to enroll your devices. The easiest one is the quickstart procedure suggested by Google. This entails a few steps: creating a Google Cloud project, generating a QR code, and enrolling the device with a default policy. Note that a device can only have a single policy at a time. The more advanced way to complete setup is meant for creating your own MDM solution based on Google's API. For the full setup procedure, please check the article (PDF) where all the corresponding steps are described.
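To give a feel for what managing devices through the API looks like once an enterprise is set up, here is a minimal sketch using the googleapis Node.js client. The enterprise name, policy ID, and the specific restrictions below are placeholders, not values from this article:

```javascript
// Sketch: updating a policy via the Android Management API (googleapis client).
// Devices whose policyName points at "policy1" pick the change up automatically.
const { google } = require("googleapis")

async function pushPolicy() {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/androidmanagement"],
  })
  const androidmanagement = google.androidmanagement({ version: "v1", auth })

  await androidmanagement.enterprises.policies.patch({
    // "LC012abc34" and "policy1" are placeholder identifiers
    name: "enterprises/LC012abc34/policies/policy1",
    requestBody: {
      cameraDisabled: true,
      bluetoothConfigDisabled: true,
      applications: [
        { packageName: "com.example.kiosk", installType: "FORCE_INSTALLED" },
      ],
    },
  })
}

pushPolicy().catch(console.error)
```

The policy itself is just a JSON document of restrictions; the fixed set of fields Google accepts is exactly the "no custom policies" limitation discussed below.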
Now let's move on and sum up the strengths and weaknesses of Google's MDM.

BENEFITS AND LIMITATIONS OF ANDROID MANAGEMENT API
In terms of a mobile device management platform, Google offers a solid base, either for deploying a ready-made ecosystem or for using its API to build your own. So here we'll highlight some significant factors in choosing the solution type.

Out-of-the-box solution. Google's MDM requires just a few steps to enroll your first devices. After that, the full range of device management capabilities becomes available to users. It doesn't require any additional Android development, so we can say it's a low-code solution, since minimal manipulations on the server side are required.

Large list of supported devices. Usually, different types of Android devices are used in an organization. Integrating different devices and writing custom enrollment methods might be a serious pain, unless your company uses a single type of Android device. The Android Management API solves this problem by supporting a wide range of Android devices by default.

DPC is updated with Android. A device policy controller is a module that sits on your target device and implements policies sent from a server. This bit of software is operating-system dependent, meaning that updates to the OS require the DPC to be rewritten. The default DPC from Google is updated with each Android version, since it's a managed solution. So the only possible flaw here is that Google may abandon some of the older Android versions in the future.

Building your own solution becomes beneficial if you lack flexibility within Google's solution or want another set of policies; you can use the Android Management API as groundwork for your own solution. All solutions developed using Google's DPC have to be registered and certified by Google. Here you'll need an experienced engineering team with expertise in mobile application development, and in mobile device management systems specifically.

Besides these advantages, there are also two important factors you want to keep in mind.

No customization options. With Google's managed platform, there is no way to customize the existing software. Building your own solution based on the Android Management API allows you to customize the backend part, but the set of policies remains unchanged.

Vendor lock-in. Another problem is security. Since the backend part of the system runs on Google Cloud servers, all of your corporate data will pass through it. This doesn't necessarily mean your sensitive data is compromised; however, some organizations want to protect their information and keep it in-house.

To overcome these limitations, another option for Android mobile device management can be approached.

Android Open Source Project (AOSP): custom mobile device management
Android Open Source Project, or AOSP, is an open-source software stack for a wide range of mobile devices, and a corresponding open-source project led by Google. It can be used to create your own custom variants of the Android OS and mobile applications, and to connect them with your custom back-end device management platform. AOSP provides a number of benefits for custom Android development:

Active open-source solution. Some features that might be necessary are either already implemented, or might be available in the near future. As soon as a security patch is committed to AOSP, it can be applied and pushed to users.

Full control over the product life cycle. The product owner decides when to deliver a new feature or security update. Product needs can be prioritized, instead of waiting for something that might never be rolled out by vendors, who naturally prioritize their own needs.

Customization at any level. The Linux kernel contains all the essential hardware drivers, like camera, keypad, display, and others. Above it, there is a set of libraries, including an open-source web browser engine (WebKit), the libc library, an SQLite database (a useful repository for storage and sharing of application data), libraries to play and record audio and video, SSL libraries responsible for Internet security, and others.
Then the Android Framework layer provides many higher-level services to applications in the form of Java classes. The application level is where Android developers usually work. Android applications extend the core Android operating system. There are two primary sources for applications:

Pre-installed applications: Android has a set of pre-installed applications, including phone, email, calendar, web browser, and contacts. These function as user applications, as well as providers of key device capabilities that can be accessed by other applications. Pre-installed applications may be a part of the open-source Android platform, or they may be developed by an OEM for a specific device.

User-installed applications: Android provides an open development environment supporting any third-party applications. Our device administration app would fall into this category, and our main work will be done primarily at the framework and application levels. Applications at these layers are written in Java, so a regular Android team will feel comfortable working with them.

As a solution for businesses that use mobile device management, AOSP helps resolve concerns about security, functionality limitations, and ownership of the product, as it makes it possible to create an Android stack that provides any features and means of device management that might be required. So let's quickly list the pros and cons of this approach.

BENEFITS AND LIMITATIONS OF CUSTOM MDM
Flexibility. As we mentioned, custom development allows you to choose any architecture and solution type you want for mobile device management. This means you can deploy a custom backend with policies of your choice. You can provide any enrollment methods, security management, and reporting.

Independence from data collection. While AOSP is an open-source project led by Google, a solution based on it doesn't depend on their infrastructure. This means your mobile device management system will be secured from data collection by third-party organizations.

Operating system customization. Additionally, the operating system itself can be customized to bring extra functionality and enhance the security of the system. The only limitation here is the hardware capabilities of the target device.

Complexity. The approach of customizing the OS, creating your own applications, and building a back-end management platform is a complex and time-consuming project. Moreover, customizing the Android OS core requires not only an experienced and skilled development team, but extensive quality assurance as well.

How to choose the best solution?
Given the described options, how do you choose? Here is a quick run through the key points. Google's mobile device management platform is capable of covering the majority of needs for mobile device management systems. The only tangible limitation here is the closed registration for developing custom device policy controllers, so no custom policies can be implemented. Another consideration is using Google Cloud Platform as your backend, which can be a security or architectural concern for some organizations. Using the Android Management API as a basis for your system allows some customization, but still relies on Google's infrastructure. So approaching AOSP-based MDM might be the best choice if you require a high degree of customization. Based on our experience, working with AOSP can be a time-consuming task, but at the end of the day it fulfills project objectives at the same level as native Android MDM solutions.

Mobile Device Management solutions for iOS
It's a rare case for an enterprise to use devices only on Android.
So the first thing that pops up is the question: what about iOS? Apple devices have a built-in framework for enrolling and managing iOS smartphones and tablets. The capabilities are similar to the Android Management API, including enrollment options and management flexibility. Because it's a huge topic, we're going to describe MDM for iOS in a dedicated article. Please stay tuned to learn about the implementation of the iOS framework in future material, or contact us if you are interested in an Android MDM solution. |
2022-04-04 14:25:20 |
Apple |
AppleInsider - Frontpage News |
Daily deals April 4: $99 AirPods, $69 off Apple Watch Series 7, unlocked iPhones from $389, more |
https://appleinsider.com/articles/22/04/04/daily-deals-april-4-99-airpods-69-off-apple-watch-series-7-unlocked-iphones-from-389-more?utm_medium=rss
|
Daily deals April 4: $99 AirPods, $69 off Apple Watch Series 7, unlocked iPhones from $389, more

Monday's top deals include $69 off Apple Watch Series 7, discounts on Apple AirPods Pro and Apple AirPods 2nd Generation, and much more. Apple Watch Series 7, Apple AirPods Pro, and Apple AirPods 2nd Generation, side by side. Each day, we set off on a search around the internet to find the best tech deals available, including discounts on Apple products, tech accessories, and a variety of other items, all to help you save some cash. If an item is out of stock, you may still be able to order it for delivery at a later date. Many of the discounts are likely to expire soon, though, so act fast. |
2022-04-04 14:12:59 |
Overseas TECH |
CodeProject Latest Articles |
Simple Fast Adaptive Grid to Accelerate Collision Detection between AABB of Particles |
https://www.codeproject.com/Articles/5327631/Simple-Fast-Adaptive-Grid-to-Accelerate-Collision
|
Simple Fast Adaptive Grid to Accelerate Collision Detection between AABB of Particles: a walkthrough of a grid implementation for the particle-in-cell problem, to improve the performance of axis-aligned bounding box (AABB) collision checking in various scenarios. |
2022-04-04 14:45:00 |
Overseas Science |
NYT > Science |
What to Know About the Bird Flu Outbreak |
https://www.nytimes.com/article/bird-flu.html
|
backyard |
2022-04-04 14:22:05 |
Finance |
RSS FILE - Japan Securities Dealers Association |
PSJ Forecast Statistics |
https://www.jsda.or.jp/shiryoshitsu/toukei/psj/psj_toukei.html
|
statistics |
2022-04-04 16:00:00 |
Finance |
RSS FILE - Japan Securities Dealers Association |
J-IRISS |
https://www.jsda.or.jp/anshin/j-iriss/index.html
|
iriss |
2022-04-04 15:39:00 |
Finance |
Financial Services Agency (FSA) website |
Updated the schedule of hearing dates. |
https://www.fsa.go.jp/policy/kachoukin/06.html
|
hearing dates |
2022-04-04 16:00:00 |
Finance |
Financial Services Agency (FSA) website |
Updated the list of issuers of gift certificates currently undergoing refund procedures under the Payment Services Act. |
https://www.fsa.go.jp/policy/prepaid/index.html
|
Payment Services Act |
2022-04-04 15:10:00 |
News |
BBC News - Home |
Easter travel disruption as flights cancelled |
https://www.bbc.co.uk/news/business-60976958?at_medium=RSS&at_campaign=KARANGA
|
delays |
2022-04-04 14:32:35 |
News |
BBC News - Home |
Dan Walker: Presenter to leave BBC Breakfast for Channel 5 |
https://www.bbc.co.uk/news/entertainment-arts-60986736?at_medium=RSS&at_campaign=KARANGA
|
announces |
2022-04-04 14:38:54 |
News |
BBC News - Home |
Hungary election: PM Viktor Orban criticises Ukraine's Zelensky as he wins vote |
https://www.bbc.co.uk/news/world-europe-60977917?at_medium=RSS&at_campaign=KARANGA
|
election |
2022-04-04 14:27:32 |
News |
BBC News - Home |
Human rights watchdog publishes single-sex spaces guide |
https://www.bbc.co.uk/news/uk-60983982?at_medium=RSS&at_campaign=KARANGA
|
guide transgender |
2022-04-04 14:26:21 |
Business |
Diamond Online - New Articles |
Procrea HD introduces a new shareholder perk, lifting its combined dividend + perk yield above 4%! The company, born from the merger of Aomori Bank and Michinoku Bank, will offer a catalog gift of local specialties! - Shareholder perks (new, changed, discontinued) latest news |
https://diamond.jp/articles/-/301023
|
|
2022-04-04 23:10:00 |
Hokkaido |
Hokkaido Shimbun |
Muroran infections surge 1.6-fold: 278 cases last week, half in their teens or younger; Nishi-Iburi up 1.2-fold to 496 |
https://www.hokkaido-np.co.jp/article/665434/
|
Novel coronavirus |
2022-04-04 23:19:56 |
Hokkaido |
Hokkaido Shimbun |
155 infected in Iburi; above the previous week for the 6th consecutive day |
https://www.hokkaido-np.co.jp/article/665432/
|
Novel coronavirus |
2022-04-04 23:14:00 |
Hokkaido |
Hokkaido Shimbun |
Furukawa, reinstated as director of Asahikawa Medical University Hospital, holds press conference, repeating that his dismissal was unjust |
https://www.hokkaido-np.co.jp/article/665389/
|
Asahikawa Medical University |
2022-04-04 23:08:56 |
Hokkaido |
Hokkaido Shimbun |
"Respond sincerely, so we can return to the Asahikawa Medical University we used to be": main remarks from the press conference of Hiroyuki Furukawa, reinstated as hospital director |
https://www.hokkaido-np.co.jp/article/665331/
|
Hiroyuki Furukawa |
2022-04-04 23:09:50 |