Posted: 2022-05-07 07:18:20 RSS feed digest for 2022-05-07 07:00 (19 items)

Category Site Article title / trend word Link URL Frequent words / summary / search volume Registered
Google Kagua! (Google Analytics tips: examples and how-tos) How to rename a podcast show, and the drawbacks https://www.kagua.biz/marke/podcast/20220507a1.html amazon 2022-05-06 21:00:52
AWS AWS Security Blog How to let builders create IAM resources while improving security and agility for your organization https://aws.amazon.com/blogs/security/how-to-let-builders-create-iam-resources-while-improving-security-and-agility-for-your-organization/ Many organizations restrict permissions to create and manage AWS Identity and Access Management (IAM) resources to a group of privileged users or a central team. This post explains how also granting these permissions to builders, that is, the people who are developing, testing, launching, and managing cloud infrastructure, can speed up your development, increase your … 2022-05-06 21:07:44
Overseas TECH Ars Technica Puzzling cases of hepatitis in kids leap to 109 in 25 states, CDC reports https://arstechnica.com/?p=1852878 liver 2022-05-06 21:05:29
Overseas TECH MakeUseOf How Reliable Are Reverse Phone Lookup Sites? https://www.makeuseof.com/are-reverse-phone-lookup-sites-reliable/ lookup 2022-05-06 21:30:14
Overseas TECH MakeUseOf Is Your Virtual Memory Too Low? Here's How to Fix It! https://www.makeuseof.com/tag/virtual-memory-low-heres-fix/ virtual 2022-05-06 21:05:14
Overseas TECH MakeUseOf Microsoft Excel Is Teaming Up... With EVE Online https://www.makeuseof.com/microsoft-excel-eve-online-addon/ office 2022-05-06 21:03:22
Overseas TECH DEV Community Explorations in Knowledge Distillation https://dev.to/mage_ai/explorations-in-knowledge-distillation-25c1

Knowledge distillation is a common way to train compressed models by transferring the knowledge learned from a large model into a smaller model. Today we'll be taking a look at using knowledge distillation to train a model that screens for pneumonia in chest x-rays.

What is the point?

Let's say you're working in an area where privacy is extremely important, like healthcare. It may be the case that we cannot send patient data to the cloud where all our fancy GPUs live. This creates the need to train a model that can be downloaded and run on a low-power machine.

So… what are we working with?

Let's use the Chest X-Ray Images dataset from Kaggle. The task is to identify pneumonia in chest x-rays. Pneumonia is a lung infection that causes coughing, fever, chills, and breathing difficulties. It is caused by an immune response to some kind of infection of the lungs by a virus or bacteria; the ongoing COVID virus can cause pneumonia. Basically, your lungs have air sacs, which are where oxygen and carbon dioxide are exchanged for you to breathe. When these air sacs are infected by a virus or bacteria, your body produces an immune response by inflaming the area with fluid. Most people recover from this, but in some people it causes death due to respiratory failure. Pneumonia kills many more people in developing countries: while … people died in the US from pneumonia in …, it caused … million deaths worldwide. Access to high-quality healthcare is a major factor in the lethality of pneumonia. Limited access to healthcare infrastructure motivates the need for technology to bring down costs and increase efficiency.

Let's take a look at the data

With this in mind, given a chest x-ray, we would be looking for cloudy regions that indicate fluids in the lungs. [Image: a normal chest x-ray] [Image: a chest x-ray of a patient with pneumonia] Not so easy to tell the difference, is it? It's not obvious why one scan is healthy while the other is infected. From what I've researched, doctors look for white clumps around the peripherals of the lungs.

So let's model it

If you want to skip ahead to the code, go here. The easiest thing we can do is simply throw this into a pre-trained convolutional ResNet model and see how far we can get. We'll be using PyTorch and PyTorch Lightning to build and train the models. PyTorch Lightning is a library that lets us modularize our code, so we can separate the bits that are common to basically all image classification tasks from the bits that are specific to distillation. Let's start by building a generic BaseImageClassificationTask class to take care of all the boring stuff in image classification tasks, like configuring optimizers and loading datasets. See the code here and dataset loading here.
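The linked base-class code isn't included in this digest; a minimal sketch of what such a base class needs to provide, assuming standard PyTorch Lightning conventions (the Adam optimizer and the batch size of 32 below are illustrative assumptions, not the article's actual choices), might look like this:

```python
import pytorch_lightning as pl
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

class BaseImageClassificationTask(pl.LightningModule):
    """Shared plumbing for image classification tasks: optimizers, loaders, validation."""

    def configure_optimizers(self):
        # Adam is an assumption; the article only says the base class configures optimizers.
        return torch.optim.Adam(self.parameters(), lr=self.learning_rate)

    def train_dataloader(self):
        return DataLoader(self.train_dataset, batch_size=32, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_dataset, batch_size=32)

    def test_dataloader(self):
        return DataLoader(self.test_dataset, batch_size=32)

    def validation_step(self, batch, batch_idx):
        # Log val_loss so a ModelCheckpoint callback can monitor it.
        x, y = batch
        loss = F.cross_entropy(self.net(x), y)
        self.log("val_loss", loss)
        return loss
```

Subclasses are expected to set self.net, self.learning_rate, and the three datasets in their __init__, which is exactly what the tasks below do.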
Now let's create a simple ImageClassificationTask, which can consume any PyTorch image classification model and compute the cross-entropy loss. This sets us up to plug in any PyTorch model that can consume an image and output a prediction.

```python
class ImageClassificationTask(BaseImageClassificationTask):
    def __init__(self, net, train_dataset, test_dataset, val_dataset, classes=2, learning_rate=1e-4):
        # The numeric defaults were lost in extraction; 2 classes (normal / pneumonia)
        # and a learning rate of 1e-4 are assumptions.
        super().__init__()
        self.train_dataset = train_dataset
        self.test_dataset = test_dataset
        self.val_dataset = val_dataset
        self.learning_rate = learning_rate
        self.net = net

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop. It is independent of forward.
        x, y = batch
        prediction = self.net(x)
        loss = F.cross_entropy(prediction, y)
        self.log("train_loss", loss)
        return loss
```

Magically (not really), we can now kick off a training loop. PyTorch Lightning will take care of sampling from data loaders and back-propagating the loss.

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# The epoch count, GPU count, and filename precision were lost in extraction;
# the values below are assumptions.
checkpoint_callback = ModelCheckpoint(
    monitor="val_loss",
    dirpath="checkpoints",
    filename="chest_xray-{epoch:02d}-{val_loss:.2f}-{val_acc:.2f}",
    mode="min",
)
trainer = pl.Trainer(max_epochs=10, gpus=1, callbacks=[checkpoint_callback])
model = ImageClassificationTask(ResNet(num_classes=2), train_dataset, test_dataset, val_dataset)
trainer.fit(model)
```

Here are the results training with ResNet after … epochs: [validation accuracy plot; final test-set accuracy]

How "small" can we make this model?

Remember, the original goal was to build models that can be downloaded and run on low-power machines. In this case, let's build a simple …-layer CNN as the student model. We can measure the size of this model in two ways: model size, which translates to the number of parameters, and model speed, which typically translates to the number of layers.

Size: The ResNet model has …M parameters, while the …-layer CNN has … parameters. This is a … reduction in model parameters.

Speed: CPU inference with ResNet takes … ms, while the …-layer CNN takes … ms. This is a …x speed-up in inference speed.

Do we actually need a teacher?

The first question we should ask is: do we actually need a teacher model? Let's naively take our student model and train it with the ImageClassificationTask, as we did with the ResNet model. Here are the results after … epochs: [test-set accuracy]

Distillation

Now let's build our ImageClassificationDistillationTask class. The only meaningful difference between the ImageClassificationTask and the ImageClassificationDistillationTask is how the final training loss is computed, along with some hyper-parameters that configure the loss:

1. Start with a trained teacher network and an untrained student network. We already did this with the ResNet above.
2. Forward pass through the teacher model and get logits. Make sure you put the teacher model into a test mode so we don't needlessly collect gradients.
3. Compute the final loss as distillation loss + classification loss.
4. Backpropagate the loss through the student model.

```python
class ImageClassificationDistillationTask(BaseImageClassificationTask):
    def __init__(self, teacher_model, student_model, train_dataset, test_dataset, val_dataset,
                 learning_rate, temperature, alpha):
        super().__init__()
        self.learning_rate = learning_rate
        self.teacher_model = teacher_model
        self.train_dataset = train_dataset
        self.test_dataset = test_dataset
        self.val_dataset = val_dataset
        self.net = student_model
        self.temperature = temperature
        self.alpha = alpha

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop. It is independent of forward.
        x, y = batch
        student_logits = self.net(x)
        student_target_loss = F.cross_entropy(student_logits, y)
        with torch.no_grad():
            teacher_logits = self.teacher_model(x)
        distillation_loss = nn.KLDivLoss()(
            F.log_softmax(student_logits / self.temperature, dim=1),
            F.softmax(teacher_logits / self.temperature, dim=1),
        )
        # The "(1 - ...)" factor was lost in extraction; this weighting matches the
        # description of alpha given below.
        loss = (1 - self.alpha) * student_target_loss + self.alpha * distillation_loss
        self.log("train_loss", loss)
        return loss
```

How does the loss function work?

The loss function is a weighted sum of two things:

1. The normal classification loss, referred to as student_target_loss in the gist.
2. The cross-entropy loss between student logits and teacher logits, referred to as the distillation_loss in the gist.

The loss is typically expressed in the literature as

L = (1 - α) · CE(y, softmax(z_student)) + α · KL( softmax(z_teacher / T) ‖ softmax(z_student / T) )

where the first part is the classification loss and the second is the distillation loss. The cross-entropy loss between the student and the teacher is the main innovation; intuitively, this trains the student on the teacher's uncertainty. This is also commonly referred to as the distillation loss. Intuitively, the purpose of this is to teach the student how the teacher "thinks": in addition to training the student on the ground-truth label, we also train the student on the uncertainty of the label that the teacher learned. If the teacher outputs a prediction of, say, 70% pneumonia and 30% not pneumonia, we also want the student to be equally uncertain. [Image: an intuitive visualization of distillation loss]

This motivates the need for two parameters to adjust the behavior of this loss:

Alpha: how much weight we put on the student-teacher loss relative to the normal classification loss.
Temperature: how much we scale the uncertainty of the teacher model.

Alpha

The alpha parameter controls the weight that is put on the distillation loss. An alpha of 1 means we only consider the distillation loss, while an alpha of 0 means we completely ignore the distillation loss.

Temperature

The temperature is a more interesting parameter, which scales how "uncertain" the teacher predictions are. Here's an example for a model that outputs … classes, and how the predictions scale with various values of temperature: [Table: predictions at various temperatures] T < 1 makes the model more certain of its predictions, while T > 1 makes the model less certain. At a large T, the model is very uncertain compared to the original predictions. The purpose of the temperature parameter is to control how uncertain the teacher predictions are.
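Since the post's table of temperature-scaled predictions didn't survive extraction, here is a minimal, self-contained sketch of the scaling itself; the three logits are made-up illustrative values, not from the article:

```python
import torch
import torch.nn.functional as F

# Hypothetical logits for a model that outputs 3 classes.
logits = torch.tensor([2.0, 1.0, 0.1])

for temperature in [0.5, 1.0, 2.0, 10.0]:
    # Dividing logits by T before softmax sharpens (T < 1) or flattens (T > 1) the distribution.
    probs = F.softmax(logits / temperature, dim=0)
    print(f"T={temperature:>4}: {[round(p, 3) for p in probs.tolist()]}")
```

At T = 1 this is the ordinary softmax; as T grows, the probabilities approach the uniform distribution, which is exactly the "very uncertain" regime described above.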
Which hyper-parameters work best?

Here are the final results for the student CNN model with different hyper-parameter settings: [Table: test-set accuracy by alpha and temperature] Something weird happened at alpha = …, temperature = …. Better performance seems to skew to the upper left of this table. The best-performing setting by far was alpha = …, temperature = …, which achieves … on the test set. This is an improvement over the original … we got when we just trained the student model from scratch, without distillation. Here are the final results: [Table: final results]

In summary

We were able to train a model that is … smaller and … times faster than ResNet, and is about … worse than the teacher model. 2022-05-06 21:44:29
Apple AppleInsider - Frontpage News Apple users report 'phantom' AirTag stalking alerts, likely because of a bug https://appleinsider.com/articles/22/05/06/apple-users-report-phantom-airtag-stalking-alerts-likely-because-of-a-bug?utm_medium=rss Some Apple users are seeing what appear to be phantom AirTag anti-stalking alerts that are likely the result of a bug in the company's safety mechanisms, according to a new report. The bug-related alerts have a few characteristics that distinguish them from actual reports of stalking, including the fact that their pathway on a map appears erratic and nonsensical, The Wall Street Journal reported Friday. 2022-05-06 21:03:17
Overseas TECH CodeProject Latest Articles Masking Texture in OpenGL using Class CImage https://www.codeproject.com/Articles/1157044/Masking-Texture-in-OpenGL-using-Class-CImage bitmap 2022-05-06 21:48:00
Overseas TECH CodeProject Latest Articles number_cast&lt;To, From&gt;() and named smart casts https://www.codeproject.com/Articles/5331023/number-cast-To-From-and-named-smart-casts A more descriptive and tightly scoped cast for numeric conversions. Includes rounding options, overflow checks, a high resilience to coding errors, and some special syntactical conveniences. 2022-05-06 21:08:00
Finance News - Hoken Ichiba TIMES Taiju Life launches the "Taiju Family Second Opinion Service" https://www.hokende.com/news/blog/entry/2022/05/07/070000 Taiju Life is launching the "Taiju Family Second Opinion Service", expanding the range of people eligible for its second-opinion service. 2022-05-07 07:00:00
News BBC News - Home Sunderland 1-0 Sheffield Wednesday: Ross Stewart gives Black Cats narrow play-off lead https://www.bbc.co.uk/sport/football/61286452?at_medium=RSS&at_campaign=KARANGA Sunderland take a narrow 1-0 lead to Hillsborough after a first-leg win over Sheffield Wednesday in their League One play-off semi-final. 2022-05-06 21:40:46
News BBC News - Home Scottish election results 2022: How did the SNP get so good at winning? https://www.bbc.co.uk/news/uk-scotland-scotland-politics-61358666?at_medium=RSS&at_campaign=KARANGA council 2022-05-06 21:31:02
Hokkaido Hokkaido Shimbun Giant statue of the manager unveiled in Miyazaki; SoftBank to play an official game there on the 10th https://www.hokkaido-np.co.jp/article/677837/ staging 2022-05-07 06:33:06
Hokkaido Hokkaido Shimbun Learning comprehensive sex education through manga: Saitama University professor takes varied approaches https://www.hokkaido-np.co.jp/article/677838/ contraception 2022-05-07 06:33:06
Hokkaido Hokkaido Shimbun US Department of Transportation building named "Mineta", honoring the Japanese-American former secretary https://www.hokkaido-np.co.jp/article/677840/ US president 2022-05-07 06:33:00
Hokkaido Hokkaido Shimbun NY stocks fall again, down 98 dollars, on wariness of aggressive Fed tightening https://www.hokkaido-np.co.jp/article/677836/ tightening 2022-05-07 06:02:00
Business Toyo Keizai Online Don't rejoice too soon over the Fed's 0.5% rate hike: is there anything to learn from the previous round of "quantitative tightening"? | The market deep-reading theater of a new horse-racing-loving economist | Toyo Keizai Online https://toyokeizai.net/articles/-/587128?utm_source=rss&utm_medium=http&utm_campaign=link_back maytheforcebewithyou 2022-05-07 06:30:00
