Posted: 2021-04-28 12:36:10 | RSS feed roundup for 2021-04-28 12:00 (42 items)

Category | Site | Article title / trend word | Link URL | Frequent words / summary / search volume | Date registered
TECH Engadget Japanese Apple's "M2" (tentative) chip reportedly entering mass production, rumored for the new MacBook Pro https://japanese.engadget.com/apple-m2chip-massproduction-025029849.html ipadpro 2021-04-28 02:50:29
TECH Engadget Japanese A frame as a creative tool: the "Moment Frame" is deep enough (12mm) to hold a magazine https://japanese.engadget.com/moment-frame-024510574.html "Shown hung from a picture rail; the spacing from the wall looks beautiful." 2021-04-28 02:45:10
TECH Engadget Japanese Garmin wearables gain blood-oxygen tracking https://japanese.engadget.com/garmin-023357630.html oxygen 2021-04-28 02:33:57
IT ITmedia (all articles) [ITmedia News] The 24-inch iMac's white bezels, the reason for the front "chin," and the allegedly loose power connector: Apple executives answer https://www.itmedia.co.jp/news/articles/2104/28/news074.html apple 2021-04-28 11:29:00
TECH Techable "GameVketZERO" opens soon! Exhibitor catalog and in-event program published https://techable.jp/archives/153769 gamevket 2021-04-28 02:00:32
python New posts tagged Python - Qiita I built a site that shows trending words from VTuber stream chat https://qiita.com/4thapp/items/6d1322e3e99accb4d2ca "Tweeting via the Twitter API is probably the usual approach, but I wanted something that just works quickly, so I used Azure Logic Apps alongside it to handle the tweeting." 2021-04-28 11:30:50
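The Qiita post above surfaces trending words from live-stream chat. As a rough illustration of just the tallying step (the post's actual pipeline with the Twitter API and Azure Logic Apps is not shown, and all names here are hypothetical), a minimal sketch in Python; real Japanese chat would additionally need proper tokenization:

```python
from collections import Counter

def trending_words(messages, top_n=3, min_len=2):
    """Count word frequencies across chat messages and return the most common.

    A stand-in for the counting step only; whitespace splitting is a toy
    tokenizer and would not work for untokenized Japanese text.
    """
    counts = Counter()
    for msg in messages:
        for word in msg.split():
            if len(word) >= min_len:  # skip very short tokens
                counts[word] += 1
    return counts.most_common(top_n)

chat = ["gg that clip", "clip it", "nice clip wow", "wow wow"]
print(trending_words(chat, top_n=2))  # [('clip', 3), ('wow', 3)]
```

In practice the counts would be kept per time window so that "trending" reflects recency rather than all-time frequency.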
js New posts tagged JavaScript - Qiita Getting the referrer with vue-router https://qiita.com/mgmgmogumi/items/327ac99133d9b123481d "How to use vue-router to get the referrer (the previous page's information)." 2021-04-28 11:16:31
Program New questions (all tags) | teratail How can I express "summer" in CSS? https://teratail.com/questions/335535?rss=all expression 2021-04-28 11:55:06
Program New questions (all tags) | teratail I think this is a subquery question https://teratail.com/questions/335534?rss=all "I think it involves a subquery; I am a complete beginner." 2021-04-28 11:53:47
Program New questions (all tags) | teratail @media screen and (max-width: 768px) not applied on iPad (screen width 768px) https://teratail.com/questions/335533?rss=all "The @media screen and (max-width: 768px) query is not reflected on iPad (768px wide)." 2021-04-28 11:51:33
Program New questions (all tags) | teratail Cannot add a host to a VMware vCenter Server cluster https://teratail.com/questions/335532?rss=all "Host B (ESXi), added earlier, went in without problems." 2021-04-28 11:41:13
Program New questions (all tags) | teratail CentOS 7: cannot install the "GhettoForge" repository for the latest Postfix https://teratail.com/questions/335531?rss=all "Building a mail server on CentOS and want to install the latest Postfix, but cannot install the GhettoForge repository." 2021-04-28 11:39:33
Program New questions (all tags) | teratail How can I make a specific element appear near a click? https://teratail.com/questions/335530?rss=all "When text or an image on a page is clicked, I want to display another element near the clicked target, but I don't know how." 2021-04-28 11:37:26
Program New questions (all tags) | teratail I want to base64_encode an image and save it in Laravel https://teratail.com/questions/335529?rss=all "In an existing controller's store action (DB::beginTransaction / Product::create / DB::commit with a rollback on Throwable), I don't know where to add the base64_encode step for the image." 2021-04-28 11:35:38
Program New questions (all tags) | teratail Returning to the original page after login https://teratail.com/questions/335528?rss=all "I want to implement returning to the original page after login, in PHP." 2021-04-28 11:25:04
Program New questions (all tags) | teratail About variables in C https://teratail.com/questions/335527?rss=all "A question about a C program (#include <stdio.h>, scanf into a year variable, then a switch that prints whether the Olympics are held)." 2021-04-28 11:23:09
Program New questions (all tags) | teratail Linux: behavior of the locate command with "*" as an argument https://teratail.com/questions/335526?rss=all "I am a complete Linux beginner." 2021-04-28 11:18:15
Program New questions (all tags) | teratail I don't understand this C program https://teratail.com/questions/335525?rss=all "A program in which arrays a and b represent vectors a and b." 2021-04-28 11:17:02
Program New questions (all tags) | teratail [WordPress] "Error establishing a database connection" https://teratail.com/questions/335524?rss=all "WordPress suddenly shows 'Error establishing a database connection' and the content is unreachable; how do I trace the cause?" 2021-04-28 11:14:09
Program New questions (all tags) | teratail Placing and adjusting a background image in the page header of WordPress Lightning https://teratail.com/questions/335523?rss=all "On individual (non-top) pages of the WordPress Lightning theme, I want to place a background image in the header and adjust its height, but with background-image: url(...) in custom CSS the image fills only about half the header and the original background-color shows on the rest." 2021-04-28 11:13:10
Program New questions (all tags) | teratail Storing a string in C https://teratail.com/questions/335522?rss=all "Copying a string literal into a pre-allocated buffer temp raises an error." 2021-04-28 11:10:21
Program New questions (all tags) | teratail Checking PostgreSQL SSL encryption in Wireshark https://teratail.com/questions/335521?rss=all "Can I confirm in Wireshark that PostgreSQL SSL encryption is working? I'd like to inspect the SSLRequest request and response packets of the SSL session setup." 2021-04-28 11:08:01
GCP New posts tagged gcp - Qiita Repository connection error with GitHub mirroring in GCP Cloud Source Repositories https://qiita.com/naoki_koreeda/items/8bd2b9514c28c6fc1690 "Cloud Source Repositories can mirror private GitHub repositories, so I tried it, figuring it would make deploying to App Engine easy." 2021-04-28 11:23:54
Azure New posts tagged Azure - Qiita I built a site that shows trending words from VTuber stream chat https://qiita.com/4thapp/items/6d1322e3e99accb4d2ca "Tweeting via the Twitter API is probably the usual approach, but I wanted something that just works quickly, so I used Azure Logic Apps alongside it to handle the tweeting." 2021-04-28 11:30:50
Git New posts tagged Git - Qiita git | command https://qiita.com/tzhong518/items/4e942159348bf610032c command 2021-04-28 11:05:50
Overseas TECH DEV Community Job Opportunities - Freelancing Developer https://dev.to/krowser/job-opportunities-freelancing-developer-o7i Job Opportunities: Freelancing Developer. About Us: founded by Kayinajah Inyang, Krowser Web Services (KWS) creates hybrid applications built with creativity for more speed and data saving while protecting your privacy. Krowser Web Services has raised funding over several rounds; the latest was a Series A raised in March. (LinkedIn / Website / CrunchBase.) About The Opportunity: as a way of helping upcoming developers, we are recruiting freelancing developers; they will find their clients, complete the job on behalf of Krowser Web Services, and keep all the revenue. We will also provide them with some of the necessary tools they need and funding to obtain the core software development certifications. Location: remote; you can be based in any country of the world. Skills & Qualifications: basic programming skills, as this is an entry-level job. How To Apply: apply on LinkedIn, or send your CV to krowser@aol.com. Please feel free to leave questions in the comments section. 2021-04-28 02:26:23
Overseas TECH DEV Community Managing loading status for React is much easier with loadio https://dev.to/hepter/managing-loading-status-for-react-is-much-easier-with-loadio-1n8n Managing loading status for React is much easier with loadio. Introduction: many projects contain asynchronous calls. These may run without the user noticing, or the user may need to know their status; to inform the user, a loading component is shown on the screen so they know something is running. At that point the asynchronous methods have to be managed somehow and the component shown accordingly. Today I will show you how to handle this easily with loadio, a simple-to-use package that manages status information via promises. Install it with Yarn (yarn add loadio) or NPM (npm install loadio). Wrap the method you want to track: import { withPool, useStatus } from "loadio"; const fetch = withPool(global.fetch); or append a promise directly with PoolManager.append(fetch("/get/data")). And that's all: call the wrapped method in place of the old one and read the status in your page component via useStatus(), rendering "Loading" while isLoading is true and "Loaded" otherwise, with a button whose onClick calls the wrapped fetch. It also derives percentage information from the number of tasks: { isLoading: boolean; percentage: number; runningTasks: number }. A complete example with a React class component wraps fetch with withPool, reads isLoading and percentage from this.props, and exports the page wrapped in withStatus(HomePage). Conclusion: by wrapping promise methods, or adding them directly, we have made it easy to show the loading status with percentage information. You can view the details of the package by clicking here. Thanks! 2021-04-28 02:21:52
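The bookkeeping behind a tool like loadio, counting tasks entering and leaving a pool and deriving isLoading and a completion percentage, is framework-independent. Below is a minimal sketch of that idea in Python; it is not loadio's actual API, and every name in it is made up:

```python
class StatusPool:
    """Track how many submitted tasks are still running and expose a percentage.

    Mirrors the *idea* of a loadio-style pool, not its real API.
    """
    def __init__(self):
        self.total = 0    # tasks submitted in the current batch
        self.running = 0  # tasks not yet finished

    def start(self):
        self.total += 1
        self.running += 1

    def finish(self):
        self.running -= 1
        if self.running == 0:
            self.total = 0  # batch complete; next batch starts fresh

    @property
    def is_loading(self):
        return self.running > 0

    @property
    def percentage(self):
        if self.total == 0:
            return 100.0  # nothing pending
        return 100.0 * (self.total - self.running) / self.total

pool = StatusPool()
pool.start(); pool.start()               # two "requests" begin
print(pool.is_loading, pool.percentage)  # True 0.0
pool.finish()
print(pool.percentage)                   # 50.0
pool.finish()
print(pool.is_loading)                   # False
```

In a real UI the start/finish calls would be hooked around each promise or awaitable, and the percentage fed to a progress bar.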
Overseas TECH DEV Community How to Build Better Machine Learning Models https://dev.to/rishitdagli/how-to-build-better-machine-learning-models-3ff0 How to Build Better Machine Learning Models. Hello developers! If you have built deep neural networks before, you might know that doing so can involve a lot of experimentation. In this article I will share some useful tips and guidelines you can use to build better deep learning models; they should make it a lot easier for you to develop a good network. Pick and choose which tips you use, as some will be more helpful for the projects you are working on, and not everything mentioned here will straight up improve your model's performance.

A high-level approach to hyperparameter tuning. One of the more painful things about training deep neural networks is the large number of hyperparameters you have to deal with: the learning rate α, the discounting factor ρ and epsilon ε if you are using the RMSprop optimizer (Hinton et al.), or the exponential decay rates β₁ and β₂ if you are using the Adam optimizer (Kingma et al.). You also need to choose the number of layers in the network and the number of hidden units per layer, you might be using learning-rate schedulers and want to configure those, and so on. We definitely need ways to organize the hyperparameter search, and a common algorithm I tend to use is random search; there are algorithms that might do better, but I usually end up using it anyway. Say you want to tune two hyperparameters and you suspect the optimal values for both lie somewhere between one and five: instead of systematically trying twenty-five value pairs, it is more effective to sample twenty-five points at random (based on lecture notes of Andrew Ng). Here is a simple example with TensorFlow and Keras Tuner, running random search on the Fashion-MNIST dataset over the learning rate and the number of units in the first Dense layer. Instead of hardcoding the number of units, specify a range to try: hp_units = hp.Int("units", min_value=..., max_value=..., step=...), then model.add(tf.keras.layers.Dense(units=hp_units, activation="relu")). The learning rate is a choice from a small set rather than a range: hp_learning_rate = hp.Choice("learning_rate", values=[...]), then optimizer = tf.keras.optimizers.Adam(learning_rate=hp_learning_rate). Finally, run random search, declaring that the model with the highest validation accuracy counts as the best: tuner = kt.RandomSearch(model_builder, objective="val_accuracy", max_trials=..., directory="random_search_starter", project_name="intro_to_kt"), then tuner.search(img_train, label_train, epochs=..., validation_data=(img_test, label_test)). Afterwards, tuner.get_best_models() returns the best model and tuner.get_best_hyperparameters() the best hyperparameter choices, though get_best_models is usually considered a shortcut: for best performance you should retrain the model on the full dataset with the best hyperparameters you found. I won't walk through this code in detail here, but you can read about it in an article I wrote some time back.

Use mixed-precision training for large networks. The bigger your neural network is, the more accurate your results, in general; but as model sizes grow, the memory and compute requirements for training them also increase. The idea of mixed-precision training (NVIDIA; Micikevicius et al.) is to train deep neural networks using half-precision floating-point numbers, which lets you train large networks a lot faster with no, or negligible, loss in network performance. Note that this technique should only be used for large models, and while mixed precision runs on most hardware, it only speeds up models on recent NVIDIA GPUs (for example the Tesla V100 and Tesla T4) and Cloud TPUs. To give an idea of the gains: training a ResNet model on my GCP notebook instance with a Tesla V100 was almost three times faster, better still on a Cloud TPU instance, with almost no difference in accuracy (the code to measure these speed-ups was taken from an official example). To further increase training throughput, you could also use a larger batch size; since the activations are float16 tensors, you should not run out of memory. Mixed precision is easy to implement in TensorFlow: the tf.keras.mixed_precision module lets you set a dtype policy that makes layers compute in float16 and applies loss scaling to prevent underflow. In a minimal example you first set the dtype policy so all model layers automatically use float16, then build the model but override the data type of the last (output) layer to float32 to avoid numeric issues; ideally your output layers should be float32. (The example model uses a large number of units so the difference in training time is visible, since mixed precision works well for large models.) For more motivation, Google Cloud has published a chart demonstrating the speedups for multiple models on a Cloud TPU.

Use grad check for backpropagation. In multiple scenarios I have had to implement a neural network from scratch, and implementing backpropagation is typically the part that is prone to mistakes and difficult to debug: with incorrect backpropagation your model can still learn something that looks reasonable, which makes it even harder to debug. So how cool would it be to have something that lets us debug our nets easily? I often use gradient checking when implementing backpropagation. The idea is to approximate the gradients numerically: if the approximation is close to the gradients calculated by backpropagation, you can be more confident that backpropagation was implemented correctly. The standard centered-difference expression gives a vector we will call dθ_approx (if you are looking for the reasoning behind it, you can find more in an article I wrote). We now have two vectors, dθ_approx and the dθ calculated by backprop, and these should be almost equal: compute the Euclidean distance between them and use a reference table of thresholds to help you debug your nets.

Cache datasets. Caching datasets is a simple idea, but not one I have seen used much. The idea is to go over the dataset once in its entirety and cache it, either in a file or, for a small dataset, in memory. This saves you from repeating expensive CPU operations, such as file opening and data reading, on every single epoch. It does mean the first epoch takes comparatively more time, since ideally all the file opening and data reading happens in the first epoch and the results are cached, but subsequent epochs should be a lot faster because they use the cached data. This is very simple to implement: with TensorFlow you can cache a dataset in a single call, and the complete code for an example showing the speedup is in a gist of mine.

How to tackle overfitting. When you're working with neural networks, overfitting and underfitting might be the two most common problems you face. You may know this already, but high bias makes you miss the relationship between features and labels (underfitting), while high variance makes the model capture the noise and overfit the training data. I believe the most effective cure for overfitting is more data, though you can also augment the data you have; a benefit of deep neural networks is that their performance keeps improving as they are fed more data. In many situations, however, getting more data is too expensive or simply impossible, so let's look at the alternatives: changing the architecture of the network, or applying modifications to the network's weights.

Changing the model architecture: a simple approach is to use random search to stumble upon an architecture that doesn't overfit, or to prune nodes from your model, essentially lowering its capacity. We already covered random search; for an example of pruning, see the TensorFlow Model Optimization pruning guide.

Modifying network weights. Weight regularization: iterating on what we discussed, simpler models are less likely to overfit than complex ones. We keep a lid on the network's complexity by forcing its weights to take only small values, adding to the loss function a term that penalizes the model for large weights. Often L₁ and L₂ regularization are used, the difference being that L₁ adds a penalty proportional to the absolute values of the weight coefficients, while L₂ adds a penalty proportional to their squares. Because of the square term, L₁ may push weights to exactly zero, whereas with L₂ the weights tend toward zero without reaching it, and this is exactly why I tend to use L₂ more than L₁. With TensorFlow, a Dense layer with L₂ regularization is as simple as tf.keras.layers.Dense(units, kernel_regularizer=tf.keras.regularizers.L2(...)); this adds a term (regularization factor × weight coefficient²) to the loss, acting as a penalty on very big weights, and swapping L2 for L1 in that code gives you L₁ regularization.

Dropout: the first thing I try when I face overfitting is dropout (Srivastava et al.). The idea is to randomly drop out (set to zero, i.e. ignore) a fraction of a layer's output features during training, to stop individual nodes from relying on the output of other nodes and to prevent them from co-adapting too much. Dropout is easy to use in TensorFlow since it is available as a layer: in a cats-versus-dogs image classifier, for example, you can interleave tf.keras.layers.Dropout(...) layers between the Conv2D/MaxPooling2D blocks of a Sequential model, passing the fraction of output features to ignore.

Early stopping: another regularization method I use often. The idea is to monitor the model's performance on a validation set at every epoch and terminate training when a specified condition on the validation performance is met (say, stop when the loss falls below a threshold). A basic condition like that works like a charm when your training and validation error curves have the usual shape: early stopping halts training at the right point and straightforwardly prevents overfitting. It is such a simple and efficient regularization technique that Geoffrey Hinton called it a "beautiful free lunch" (Hands-On Machine Learning with Scikit-Learn and TensorFlow, Aurélien Géron; adapted from Lutz Prechelt). In some cases, though, the stopping criterion is not so straightforward to choose; that is beyond the scope of this article, but I recommend "Early Stopping - But When?" by Lutz Prechelt, which I rely on when deciding criteria. In TensorFlow: callback = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=...), then pass callbacks=[callback] to model.fit(). This creates an early-stopping callback, tells it to monitor the loss, and stops training if no noticeable improvement in the loss is seen for the given number of epochs. The example uses a Sequential model, but it works in exactly the same manner with models created with the functional API or with subclassed models.

Thank you for reading, and for sticking with me until the end. I hope you benefit from this article and incorporate these tips into your own experiments; I am excited to see if they help you improve the performance of your neural nets too. If you have any feedback or suggestions, feel free to reach out to me on Twitter. 2021-04-28 02:19:57
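The gradient-check recipe from the article above (compare backprop's dθ against a centered-difference dθ_approx, then look at their relative Euclidean distance) can be sketched in a few lines of framework-free Python. The toy loss and hand-written gradient below are illustrative stand-ins, and the thresholds in the comment reflect common practice rather than the article's exact table:

```python
def grad_check(loss, grad, theta, eps=1e-5):
    """Compare an analytic gradient with a centered-difference approximation.

    Returns the relative Euclidean distance
        ||g_approx - g|| / (||g_approx|| + ||g||);
    as a rough rule of thumb, values below ~1e-7 are excellent and values
    above ~1e-3 usually indicate a backprop bug.
    """
    g = grad(theta)
    g_approx = []
    for i in range(len(theta)):
        plus = theta[:]; plus[i] += eps     # nudge one coordinate up
        minus = theta[:]; minus[i] -= eps   # and down
        g_approx.append((loss(plus) - loss(minus)) / (2 * eps))
    num = sum((a - b) ** 2 for a, b in zip(g_approx, g)) ** 0.5
    den = sum(a * a for a in g_approx) ** 0.5 + sum(b * b for b in g) ** 0.5
    return num / den if den else 0.0

# Toy check: loss(θ) = θ0² + 3·θ1, so the true gradient is (2·θ0, 3).
loss = lambda t: t[0] ** 2 + 3 * t[1]
grad = lambda t: [2 * t[0], 3.0]
print(grad_check(loss, grad, [1.5, -2.0]))  # prints a tiny value, well below 1e-7
```

Feeding it a deliberately wrong gradient (say, returning 2.0 instead of 3.0 for the second component) pushes the distance well above 1e-3, which is exactly the signal the technique is meant to give.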
Apple AppleInsider - Frontpage News Apple got Adobe Flash to work on iOS but performance was 'abysmal,' says Scott Forstall https://appleinsider.com/articles/21/04/28/apple-got-adobe-flash-to-work-on-ios-but-performance-was-abysmal-says-scott-forstall?utm_medium=rss Apple got Adobe Flash to work on iOS, but performance was 'abysmal,' says Scott Forstall. Contrary to its very public stance against the adoption of Adobe Flash, Apple at one point in the development of the iPhone and its underlying iOS operating system attempted to build in support for the once-ubiquitous software. The tidbit, revealed by former iOS chief Scott Forstall during a taped deposition for the upcoming Epic Games v. Apple trial, is salacious news for longtime Apple followers. According to Forstall, Apple attempted to work with Adobe to get Flash working on iOS; the topic came up when the former executive was asked about integrating cross-platform capabilities in iOS, a potential avenue of inquiry Epic could explore in the upcoming trial. Read more. 2021-04-28 02:56:07
Overseas science NYT > Science C.D.C. Eases Outdoor Mask Guidance for Vaccinated Americans https://www.nytimes.com/2021/04/27/us/politics/coronavirus-masks-outdoors.html "Go get the shot," President Biden declared Tuesday, hailing an easing of federal guidance on outdoor mask-wearing as a step toward post-pandemic normalcy. 2021-04-28 02:32:52
Business Diamond Online - New articles The European Super League plan: the hidden hand was U.S. banking money - from the WSJ https://diamond.jp/articles/-/269951 mastermind 2021-04-28 11:21:00
LifeHuck Lifehacker [Japan] The lure of "safe, sure, high-yield" creeping up on people afraid of investment risk https://www.lifehacker.jp/2021/04/233848moneyhack2104.html Shunsuke Yamasaki 2021-04-28 12:00:00
LifeHuck Lifehacker [Japan] A hit that topped the 20-million mark! We tried the tent that makes the luxury of lounging comfortably in a hammock easy to get https://www.lifehacker.jp/2021/04/machi-ya-haventent2-review3.html haventent 2021-04-28 11:05:00
Hokkaido Hokkaido Shimbun Around 220 new coronavirus cases expected in Hokkaido, first time above 200 since January 15 https://www.hokkaido-np.co.jp/article/538357/ novel coronavirus 2021-04-28 11:13:00
Hokkaido Hokkaido Shimbun Maeda pulled during the 6th with 5 runs allowed, takes 2nd loss against the Indians https://www.hokkaido-np.co.jp/article/538356/ mid-game 2021-04-28 11:12:00
Hokkaido Hokkaido Shimbun Philippines temporarily closes its embassy in India; COVID measures include an entry ban https://www.hokkaido-np.co.jp/article/538353/ entry ban 2021-04-28 11:07:00
Hokkaido Hokkaido Shimbun Former president Mujica, the "world's poorest president," hospitalized urgently with a bone lodged in his esophagus https://www.hokkaido-np.co.jp/article/538352/ former president 2021-04-28 11:07:00
Hokkaido Hokkaido Shimbun RCEP approved by the Diet; procedures complete, could take effect within the year https://www.hokkaido-np.co.jp/article/538351/ asean 2021-04-28 11:07:00
News Newsweek [Myanmar report] Local reporting and protests alike are being erased at the military's discretion https://www.newsweekjapan.jp/stories/world/2021/04/post-96173.php "Still, many people in Myanmar are grateful that he, a Japanese national, has kept reporting in detail, for Myanmar's sake, on what is happening there." 2021-04-28 11:30:00
IT Weekly ASCII "toaster team," a tool for easily creating work manuals, releases an iOS app https://weekly.ascii.jp/elem/000/004/053/4053186/ toasterteam 2021-04-28 11:30:00
Marketing AdverTimes Dentsu and Vector form a business alliance, using ESG scoring to promote corporate SX https://www.advertimes.com/20210428/article348658/ corporate value 2021-04-28 03:00:57
Marketing AdverTimes For the new Biohazard (Resident Evil) release, Yoshi Ikuzo belts out a parody of his classic: "Ora Konna Mura Iyada Lv.100" https://www.advertimes.com/20210428/article348629/ release 2021-04-28 02:30:44
