python |
New posts tagged Python - Qiita |
Select a machine-learning method in just two lines! |
https://qiita.com/shimokame777/items/b874c0f91205da11c42a
|
lazypredict |
2022-09-17 20:54:48 |
python |
New posts tagged Python - Qiita |
Auto-answering a tedious daily Microsoft Forms form and going to bed |
https://qiita.com/trimscash/items/26ddf360d4c285dbd058
|
forms |
2022-09-17 20:48:37 |
js |
New posts tagged JavaScript - Qiita |
info There appears to be trouble with your network connection. Retrying... |
https://qiita.com/gaia003/items/560bcbbfdd4b05873d47
|
ppearstobetroublewithyou |
2022-09-17 20:30:39 |
Linux |
New posts tagged Ubuntu - Qiita |
Command for creating a folder on Ubuntu |
https://qiita.com/waokitsune/items/b684da0408dc3790f138
|
mkdir |
2022-09-17 20:38:49 |
AWS |
New posts tagged AWS - Qiita |
[AWS] A story about when I managed to connect to EC2 over SSH |
https://qiita.com/MPrg2011/items/ea952e27e308dbfbb257
|
development |
2022-09-17 20:29:54 |
AWS |
New posts tagged AWS - Qiita |
The story of how I passed the AWS Solutions Architect Associate (SAA-C03) exam |
https://qiita.com/A-nkt/items/ed30810d489b43547e3b
|
sarchitectassociatesaac |
2022-09-17 20:29:09 |
Git |
New posts tagged Git - Qiita |
Git commands I use often [summary] |
https://qiita.com/MPrg2011/items/885e2c070736724434af
|
github |
2022-09-17 20:29:26 |
Overseas TECH |
Ars Technica |
Punishment, puppies, and science: Bringing dog training to heel |
https://arstechnica.com/?p=1881676
|
heeldog |
2022-09-17 11:05:31 |
Overseas TECH |
DEV Community |
Deep Learning Library From Scratch 6: Integrating new autodiff module and MNIST digit classifier |
https://dev.to/ashwinscode/deep-learning-library-from-scratch-6-integrating-new-autodiff-module-and-mnist-digit-classifier-co9
|
Deep Learning Library From Scratch 6: Integrating new autodiff module and MNIST digit classifier

Hello and welcome to part 6 of this series on building a deep learning library from scratch. The GitHub repo for this series is ashwins-code / Zen Deep Learning Library: a deep learning library written in Python, containing the code for my blog series where we build this library from scratch. mnist.py contains an example of an MNIST digit recogniser using the library, and rnn.py contains an example of a recurrent neural network which learns to fit the graph of sin(x) + cos(x). View on GitHub.

What are we doing?

If you recall from the previous post, we finished the code for our automatic differentiation module (for now, at least). Deep learning libraries rely on an automatic differentiation module to handle the backpropagation process during model training. However, our library currently calculates weight derivatives by hand. Now that we have our own autodiff module, let's have our library use it to carry out backpropagation. We are also going to build a digit classifier to test whether everything works.

What was wrong with doing it without the module?

Doing it without the module was not wrong as such. After all, it did work perfectly fine. However, when we start to implement more complex types of layers and activation functions in our library, hard-coding the derivative calculations may begin to become difficult to get your head around. An autodiff module provides that layer of abstraction for us, calculating the derivatives so we don't have to.

nn.py

Let's create a file called nn.py. This file will contain all the components that make up a neural network, such as layers, activations etc.

Linear layer:

```python
import autodiff as ad
import numpy as np
import loss
import optim

np.random.seed(0)  # seed value assumed; the original value was lost

class Layer:
    def __init__(self):
        pass

class Linear(Layer):
    def __init__(self, units):
        self.units = units
        self.w = None
        self.b = None

    def __call__(self, x):
        if self.w is None:
            # uniform init scaled by the input width; exact shape indices/bounds assumed
            self.w = ad.Tensor(np.random.uniform(
                size=(x.shape[-1], self.units),
                low=-1 / np.sqrt(x.shape[-1]),
                high=1 / np.sqrt(x.shape[-1])))
            self.b = ad.Tensor(np.zeros((1, self.units)))
        return x @ self.w + self.b
```

Quite simple so far. __call__ simply carries out the forward pass when an instance of this class is called as a function. It also initialises the layer's parameters if it's being called for the first time. The weights and biases are now instances of the Tensor class, which means they will become part of the computation graph when operations begin. This means that our autodiff module will be able to calculate their derivatives. Note how there is no backward method like we had previously. We don't need it anymore, since the autodiff module will calculate the derivatives for us.

Activations:

```python
# standard sigmoid/softmax/tanh; operators reconstructed, relying on the
# Tensor operator overloads built in the earlier posts
class Sigmoid:
    def __call__(self, x):
        return 1 / (1 + np.e ** -x)

class Softmax:
    def __call__(self, x):
        e_x = np.e ** (x - np.max(x.value))
        s_x = e_x / ad.reduce_sum(e_x, axis=1, keepdims=True)
        return s_x

class Tanh:
    def __call__(self, x):
        return (np.e ** x - np.e ** -x) / (np.e ** x + np.e ** -x)
```

These stay pretty much the same as before, just without the backward method of course.

Model class:

```python
from tqdm import tqdm

class Model:
    def __init__(self, layers):
        self.layers = layers

    def __call__(self, x):
        output = x
        for layer in self.layers:
            output = layer(output)
        return output

    def train(self, x, y, epochs=10, loss_fn=loss.MSE,
              optimizer=optim.SGD(lr=0.01), batch_size=32):  # default values assumed
        for epoch in range(epochs):
            _loss = None
            print(f"EPOCH {epoch + 1}")
            for batch in tqdm(range(0, len(x), batch_size)):
                output = self(x[batch:batch + batch_size])
                l = loss_fn(output, y[batch:batch + batch_size])
                optimizer(self, l)
                _loss = l
            print("LOSS", _loss.value)
```

The Model class stays similar to how it was before, but can now train on the dataset in batches. Training in batches, rather than using the whole dataset at once, enables the model to better understand the data it's given.

loss.py

loss.py will contain the different loss functions we implement in the library:

```python
import autodiff as ad

# standard forms; operators reconstructed
def MSE(pred, real):
    loss = ad.reduce_mean((pred - real) ** 2)
    return loss

def CategoricalCrossentropy(pred, real):
    loss = -ad.reduce_mean(real * ad.log(pred))
    return loss
```

Again, the same as before, just without the backward methods.

New autodiff functions

Before we get onto optimisers, you may have noticed that the code so far uses some new functions from the autodiff module. Here are the new functions:

```python
def reduce_sum(tensor, axis=None, keepdims=False):
    var = Tensor(np.sum(tensor.value, axis=axis, keepdims=keepdims))
    var.dependencies.append(tensor)
    var.grads.append(np.ones(tensor.value.shape))
    return var

def reduce_mean(tensor, axis=None, keepdims=False):
    return reduce_sum(tensor, axis, keepdims) / tensor.value.size

def log(tensor):
    var = Tensor(np.log(tensor.value))
    var.dependencies.append(tensor)
    var.grads.append(1 / tensor.value)  # d/dx log(x) = 1/x
    return var
```
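These functions only record each input tensor in dependencies and the matching local derivative in grads; the get_gradients method that consumes this bookkeeping lives in the autodiff.py built in the previous post and isn't reproduced in this article. As a rough reference point, here is a minimal, hypothetical sketch of that kind of Tensor machinery. It is not the series' actual autodiff.py; only the names dependencies, grads, gradient and id are taken from the article, everything else is an assumption for illustration:

```python
# Minimal sketch (NOT the series' real autodiff.py): each op records its inputs in
# `dependencies` and the local derivative w.r.t. each input in `grads`;
# get_gradients() walks the graph and accumulates each tensor's `gradient`
# by the chain rule.
import numpy as np

class Tensor:
    _next_id = 0

    def __init__(self, value):
        self.value = np.asarray(value, dtype=float)
        self.dependencies = []   # input Tensors of the op that produced this Tensor
        self.grads = []          # local derivative w.r.t. each dependency
        self.gradient = 0.0      # filled in by get_gradients()
        self.id = Tensor._next_id
        Tensor._next_id += 1

    def __mul__(self, other):
        out = Tensor(self.value * other.value)
        out.dependencies += [self, other]
        out.grads += [other.value, self.value]   # d(xy)/dx = y, d(xy)/dy = x
        return out

    def __add__(self, other):
        out = Tensor(self.value + other.value)
        out.dependencies += [self, other]
        out.grads += [np.ones_like(self.value), np.ones_like(other.value)]
        return out

    def get_gradients(self, upstream=None):
        upstream = np.ones_like(self.value) if upstream is None else upstream
        self.gradient = self.gradient + upstream
        for dep, local in zip(self.dependencies, self.grads):
            dep.get_gradients(upstream * local)

# usage: y = w * x + b, so dy/dw = x, dy/dx = w, dy/db = 1
w, x, b = Tensor(3.0), Tensor(2.0), Tensor(1.0)
y = w * x + b
y.get_gradients()
print(w.gradient, x.gradient, b.gradient)   # 2.0 3.0 1.0
```

Running the small example at the bottom prints 2.0, 3.0 and 1.0: the derivatives of y with respect to each input, accumulated by the chain rule, which is exactly the gradient property the optimisers below read.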
optim.py

optim.py will contain the different optimisers we implement in this library.

SGD:

```python
from nn import Layer

class SGD:
    def __init__(self, lr=0.01):  # default assumed
        self.lr = lr

    def delta(self, param):
        return param.gradient * self.lr

    def __call__(self, model, loss):
        loss.get_gradients()
        for layer in model.layers:
            if isinstance(layer, Layer):
                layer.update(self)
```

Momentum:

```python
class Momentum:
    def __init__(self, lr=0.01, beta=0.9):  # defaults assumed
        self.lr = lr
        self.beta = beta
        self.averages = {}

    def momentum_average(self, prev, grad):
        return (self.beta * prev) + (self.lr * grad)

    def delta(self, param):
        param_id = param.id
        if param_id not in self.averages:
            self.averages[param_id] = 0
        self.averages[param_id] = self.momentum_average(self.averages[param_id], param.gradient)
        return self.averages[param_id]

    def __call__(self, model, loss):
        loss.get_gradients()
        for layer in model.layers:
            if isinstance(layer, Layer):
                layer.update(self)
```

RMSProp:

```python
class RMSProp:
    def __init__(self, lr=0.01, beta=0.9, epsilon=1e-8):  # defaults assumed
        self.lr = lr
        self.beta = beta
        self.epsilon = epsilon
        self.averages = {}

    def rms_average(self, prev, grad):
        return self.beta * prev + (1 - self.beta) * (grad ** 2)

    def delta(self, param):
        param_id = param.id
        if param_id not in self.averages:
            self.averages[param_id] = 0
        self.averages[param_id] = self.rms_average(self.averages[param_id], param.gradient)
        return (self.lr / (self.averages[param_id] ** 0.5 + self.epsilon)) * param.gradient

    def __call__(self, model, loss):
        loss.get_gradients()
        for layer in model.layers:
            if isinstance(layer, Layer):
                layer.update(self)
```

Adam:

```python
class Adam:
    def __init__(self, lr=0.01, beta1=0.9, beta2=0.999, epsilon=1e-8):  # defaults assumed
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.epsilon = epsilon
        self.averages = {}    # first-moment (momentum) averages
        self.averages2 = {}   # second-moment (RMS) averages

    def rms_average(self, prev, grad):
        return self.beta2 * prev + (1 - self.beta2) * (grad ** 2)

    def momentum_average(self, prev, grad):
        return (self.beta1 * prev) + ((1 - self.beta1) * grad)

    def delta(self, param):
        param_id = param.id
        if param_id not in self.averages:
            self.averages[param_id] = 0
            self.averages2[param_id] = 0
        self.averages[param_id] = self.momentum_average(self.averages[param_id], param.gradient)
        self.averages2[param_id] = self.rms_average(self.averages2[param_id], param.gradient)
        adjust1 = self.averages[param_id] / (1 - self.beta1)
        adjust2 = self.averages2[param_id] / (1 - self.beta2)
        return self.lr * (adjust1 / (adjust2 ** 0.5 + self.epsilon))

    def __call__(self, model, loss):
        loss.get_gradients()
        for layer in model.layers:
            if isinstance(layer, Layer):
                layer.update(self)
```

The code here has changed quite a bit from what it was before. Let's have a closer look at what's going on.

__call__:

```python
def __call__(self, model, loss):
    loss.get_gradients()
    for layer in model.layers:
        if isinstance(layer, Layer):
            layer.update(self)
```

When an instance of an optimiser class is called, it takes in the model it's training and the loss value.

`loss.get_gradients()` is where we utilise our autodiff module. If you can remember, the get_gradients method is part of the Tensor class and computes the derivatives of all the variables involved in the calculation of this tensor. This means all the weights and biases in the network now have their derivatives computed, all stored in their gradient property.

`for layer in model.layers: if isinstance(layer, Layer): layer.update(self)` then comes in: now that the derivatives have been computed, the optimiser iterates through each layer of the network and updates its parameters by calling the layer's update method, passing itself as a parameter to it.

The update method in our Linear layer class is as such:

```python
# nn.py
class Linear(Layer):
    ...
    def update(self, optim):
        self.w.value -= optim.delta(self.w)
        self.b.value -= optim.delta(self.b)

        # clear the recorded graph, ready for the next forward pass
        self.w.grads = []
        self.w.dependencies = []
        self.b.grads = []
        self.b.dependencies = []
```

This method takes an instance of an optimiser and updates the layer's parameters by a delta value calculated by the optimiser: `self.w.value -= optim.delta(self.w)` and `self.b.value -= optim.delta(self.b)`.

delta is a method of the optimiser's class. It takes in a tensor and uses its derivative to return the value by which this tensor should be adjusted. The delta method varies depending on the optimiser being used. Let's have a look at one of the delta methods:

```python
class RMSProp:
    def rms_average(self, prev, grad):
        return self.beta * prev + (1 - self.beta) * (grad ** 2)

    def delta(self, param):
        param_id = param.id
        if param_id not in self.averages:
            self.averages[param_id] = 0
        self.averages[param_id] = self.rms_average(self.averages[param_id], param.gradient)
        return (self.lr / (self.averages[param_id] ** 0.5 + self.epsilon)) * param.gradient
```

Remember that most optimisers keep track of a type of running average of each parameter's gradients to help locate the global minimum. This is why we assigned an id to each tensor: so that an optimiser can keep track of each parameter's gradient average.

If necessary, the parameter's gradient average is recalculated (remember, SGD does not maintain an average). The method then computes how much the parameter should be adjusted by and returns this value. Have a look at the other optimisers to help you figure out how it all works.
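For reference, the four optimisers above follow the standard update rules below, where g is a parameter's gradient and eta the learning rate; Adam's bias correction is shown in the simplified per-call form the code above uses (the textbook version divides by 1 minus beta raised to the step count):

$$
\begin{aligned}
\textbf{SGD:}\quad & \Delta\theta = \eta\,g \\
\textbf{Momentum:}\quad & v \leftarrow \beta v + \eta\,g, \qquad \Delta\theta = v \\
\textbf{RMSProp:}\quad & s \leftarrow \beta s + (1-\beta)\,g^2, \qquad \Delta\theta = \frac{\eta}{\sqrt{s}+\epsilon}\,g \\
\textbf{Adam:}\quad & v \leftarrow \beta_1 v + (1-\beta_1)\,g, \quad s \leftarrow \beta_2 s + (1-\beta_2)\,g^2, \\
& \hat{v} = \frac{v}{1-\beta_1}, \quad \hat{s} = \frac{s}{1-\beta_2}, \qquad \Delta\theta = \eta\,\frac{\hat{v}}{\sqrt{\hat{s}}+\epsilon}
\end{aligned}
$$

In every case Linear.update then applies the parameter update as theta ← theta − Δtheta.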
MNIST digit classifier

To see whether all our new changes work as expected, let's build a neural network to classify images of handwritten digits.

Import modules:

```python
from sklearn.datasets import load_digits
import numpy as np
import nn
import optim
import loss
from autodiff import *
from matplotlib import pyplot as plt
```

Prepare the dataset:

```python
def one_hot(n, max):
    arr = [0] * max
    arr[n] = 1
    return arr

mnist = load_digits()
images = np.array([image.flatten() for image in mnist.images])
targets = np.array([one_hot(n, 10) for n in mnist.target])  # 10 digit classes
```

The digits dataset contains images as 2D arrays. However, our library does not have layers that accept 2D inputs yet, so we have to flatten them. one_hot takes in a number and returns the one-hot array for it, of length max.

Building the model:

```python
# hidden layer widths assumed; the final layer needs 10 units, one per digit class
model = nn.Model([
    nn.Linear(64),
    nn.Tanh(),
    nn.Linear(32),
    nn.Sigmoid(),
    nn.Linear(10),
    nn.Softmax()
])
```

This is a simple feed-forward network which uses the softmax function to output a probability distribution. This distribution specifies the probability of each class (each digit, in this case) being true, given the input image.

Training the model:

```python
TRAIN_SPLIT = 1000  # split size assumed; the value used in the post was not preserved

model.train(images[:TRAIN_SPLIT], targets[:TRAIN_SPLIT],
            epochs=4, loss_fn=loss.CategoricalCrossentropy,
            optimizer=optim.RMSProp(), batch_size=32)  # batch size assumed
```

All we need to train our model is this one line. I've decided to train on only the first slice of the images (the sklearn digits dataset holds around 1,800 of them), keeping the rest back for testing. Feel free to see how the model reacts when you change the training configuration: maybe try changing the optimiser, loss function or the learning rate and see how that affects training.

Testing the model:

```python
test_images = images[TRAIN_SPLIT:]
np.random.shuffle(test_images)

for image in test_images:
    plt.imshow(image.reshape((8, 8)), cmap="gray")   # digits images are 8x8 pixels
    plt.show()
    pred = np.argmax(model(np.array([image])).value, axis=1)
    print(pred)
```

Here we shuffle the images that the model didn't train on into a random order. We then go through each image, display it, and get our model to predict which digit the image shows. Let's run it!

Here is the output just after the model trains:

(training log: four epochs, each printing a tqdm progress bar that completes in under a second, followed by that epoch's loss)

Here you can see the model finishes with a low loss. Not bad at all! Note how each epoch doesn't even take a second to complete; it seems our autodiff module is able to compute several derivatives at an acceptable speed.

Now let's see whether the model can really classify digits. These were the first random digits when I ran the code (the digit images are not reproduced here); they look like recognisable digits to me, though the last one is hard to tell, haha. Let's see what the model predicted. Looks good to me!

Just to make sure, let's make it go through all the test images and calculate its accuracy:

```python
correct = 0
cnt = 0

# iterate the unshuffled test slice so images and targets stay aligned
for i, image in enumerate(images[TRAIN_SPLIT:]):
    pred = np.argmax(model(np.array([image])).value, axis=1)
    real = np.argmax(targets[TRAIN_SPLIT:][i])
    cnt += 1
    if pred == real:
        correct += 1

print("accuracy:", correct / cnt)
```

Running this prints the model's accuracy over the held-out images. Wow! It is safe to say that our model has successfully trained for this specific task.

Thank you for reading

Thank you for reading through this post. The next few posts will involve implementing more layer types, such as recurrent layers and convolutional layers, so I hope you stay tuned for that! |
2022-09-17 11:51:22 |
Overseas TECH |
DEV Community |
Why our websites are slow — Importance of bundle size |
https://dev.to/shreyvijayvargiya/why-our-websites-are-slow-importance-of-bundle-size-1le4
|
Why our websites are slow - the importance of bundle size

It's not React.js, not the framework, not the servers or the API, nor the internet that is the reason behind it.

Under the hood

The story begins way back, exactly one year ago. When I was covering stories on React and React Native app optimisation, I covered a long list of ways to improve React app performance in a single story. This is the beauty of software development: it's never stagnant or monotonous. Today, even after one year, I read a new story about the impact of bundle size on application performance, a mind-boggling one for understanding why websites can be slow.

Right under the nose

For one year I have covered so many stories on React application optimisation, but today I got a new perspective on React applications.

Bird's-eye perspective: find the root cause of the problem

See, it's not rocket science, but let me explain with a story. We first create an application in React.js. Then we pick a framework and packages and add code. If our senior developer asks us to improve our React app's performance, we always proceed in the so-called universal direction:

- Check the image sizes
- Check the internet
- Check the API
- Check the code structure, architecture or size of the repository
- Check the JavaScript framework's and language's twists and turns, such as re-rendering in React.js, and so on

But what if I told you to follow a bottom-to-top approach instead? Let's start with the question: how does the browser load a website?

- The server sends the app bundle or a response, such as HTML, to the browser
- The browser loads the HTML file, and if JavaScript is present it is fetched, parsed, compiled and executed
- That is followed by the loading of CSS
- Lastly, images are loaded (or lazy-loaded)

Cool, so what can really be the reason for the entire website being slow? Of course, the things that take time. So out of HTML, CSS, JS and images (excluding the API response here), the one thing that occupies most of the time on the browser is JavaScript. Understand that images and large files are not the culprits every time. For example, the graph below shows that when an image and a JavaScript file of the same size load in the browser, the JavaScript file takes noticeably longer to process than the image.

Bottom-to-top approach

Check the JavaScript file size and try to decrease it; even if the JS bundle includes an image, try to optimise the image. Understand the core point: improving how the bundle is parsed, compiled and executed covers most of the reasons applications are slow. If that is not it, then it's time to move up the list of root causes, checking image optimisation, re-rendering, sharding and API responses, in that order.

Impact of bundle size

Most websites are now server-side rendered and ship more JavaScript than HTML and CSS. The bundle sent to the browser, whether compressed or not, is parsed, compiled and then executed. Imagine sending the browser a large app bundle versus a small one: which one do you think will take more time?

Webpack bundle analyzer

Out of curiosity, the webpack-bundle-analyzer npm package will help you analyse bundle size by rendering a map of it locally, as shown below. The smaller the bundle, the less load on the browser's CPU and memory, and the faster the website runs. Simple, not rocket science. We have already suspected that JavaScript can be the culprit of a slow app. Adding to it: the more JavaScript the app bundle includes, the more time it takes to run completely in the browser, since the entire JavaScript file has to be loaded by the browser. So we should aim to keep our bundle size low in the end.

Alternative to JavaScript

When I understood this, I had one more question: then why are we backing server-side, JavaScript-based applications on the frontend? Why can't we move back to writing plain HTML and CSS? The answer I got is no: it is not correct or useful in every case to rely mostly on HTML and CSS on the frontend. Frameworks like React make building products like Figma, Gmail, e-commerce sites and games much easier, because you deal with JavaScript directly on the front end.

Real-world example

We got a new concept called the island model. Astro is a new multi-page-application framework that serves only HTML files to the browser. But Astro can't be used for every product; it really depends on the product type, and if you want to know the reason, see my earlier story, "Is Astro the fastest framework?". Astro doesn't fit applications with heavy state management, animations, e-commerce or very dynamic pages. In the end, we can't easily build larger, more complex products every time with HTML and CSS alone; we need JavaScript to make development much easier. So this is not a switch I recommend.

Improving bundle size

Let's get back to the bundle-size impact. How do we improve it?

- Look into code splitting: load only what is required, and try to reuse it
- Minify the bundle using loaders or webpack plugins
- Remove unused dependencies
- Lazy-load dependencies: load them only when asked for
- Cache the output

Still, improving bundle size will not be an easy task. You can check a Next.js bundle size using the command below; it prints the bundle-size output, and you can make changes and measure the improvements:

yarn run build

The most effective way is to check the entire app bundle in the browser's inspect tab, in the Performance section. You can google how; it's not hard to understand, so I'm not covering it in detail in this story. Try introducing changes such as:

- Code splitting
- Caching using React Query or a service worker
- Minifying the final output using loaders

You can keep measuring the bundle size locally, and then check the final performance in the inspect tab. The score measured by Google Lighthouse will also give you a percentage figure for performance.

Root cause

The root cause is JavaScript, and what carries the JavaScript is the app bundle. The app bundle really matters because the bigger it is, the more CPU power and memory it consumes and the more time it takes to execute. Understand this too: sometimes apps are slow even though the bundle size is okay. In that case, either the browser's CPU is really slow (this usually happens on old devices) or the network connectivity is poor. Nothing more than that, nothing less than that.

Ultimate plan of action

- Which phone is the app being tested on? If that phone is slow, check the app on a newer phone
- Make sure internet connectivity is good
- If the app is still slow on all phones with good connectivity, then look at bundle size
- If bundle size is not the issue, check image sizes and the other usual suspects, such as re-rendering, HTML parsing, fonts and CSS, and so on

With the bottom-to-top approach we have a new way of addressing an application's performance issues.

Final verdict

I am hoping to see more tools in the future that help with this approach, because nobody is talking about it. I was able to find the root cause because I read a story written by Addy Osmani, an engineering manager at Google.

Conclusion

I am reading more about bundle size and app performance, and the root cause behind our apps being really slow, because I am done with image optimisation, React rendering issues and the usual ways of optimising applications. It's time to go beyond that, find the root cause and play with it. If you find more reading around it, please do share; I would love to read it. Until next time, have a good day.

Shrey
iHateReading |
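To make the "keep measuring the bundle size locally" step concrete, here is a small, hypothetical helper in Python (the language used elsewhere on this page), not part of the original story: it walks a build output directory and reports raw and gzipped size per JS file. The ".next/static" path is only an assumption for a Next.js build produced by `yarn run build`; point it at whatever directory your bundler emits.

```python
# Rough local bundle-size check: report raw vs. gzipped size per JS file, largest first.
# base_dir=".next/static" is an assumed Next.js output path; adjust for your bundler.
import gzip
import os

def bundle_report(base_dir=".next/static"):
    rows = []
    for root, _dirs, files in os.walk(base_dir):
        for name in files:
            if not name.endswith(".js"):
                continue
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                data = f.read()
            rows.append((len(data), len(gzip.compress(data)), path))
    rows.sort(reverse=True)
    for raw, gz, path in rows:
        print(f"{raw / 1024:8.1f} KB  {gz / 1024:8.1f} KB gz  {path}")
    total_raw = sum(r[0] for r in rows)
    total_gz = sum(r[1] for r in rows)
    print(f"total: {total_raw / 1024:.1f} KB raw, {total_gz / 1024:.1f} KB gzipped")

if __name__ == "__main__":
    bundle_report()
```

The gzipped figure is reported alongside the raw size because it is closer to what actually travels over the network; the raw size is closer to what the browser has to parse and compile.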
2022-09-17 11:31:19 |
News |
BBC News - Home |
King Charles and Prince William meet people in lying-in-state queue |
https://www.bbc.co.uk/news/uk-62940182?at_medium=RSS&at_campaign=KARANGA
|
lambeth |
2022-09-17 11:50:08 |
News |
BBC News - Home |
Prince Harry to stand vigil at Queen's coffin in military uniform |
https://www.bbc.co.uk/news/uk-62938463?at_medium=RSS&at_campaign=KARANGA
|
coffin |
2022-09-17 11:37:51 |
Hokkaido |
Hokkaido Shimbun |
70,975 new coronavirus infections and 138 deaths reported nationwide |
https://www.hokkaido-np.co.jp/article/732783/
|
novel coronavirus |
2022-09-17 20:24:00 |
Hokkaido |
Hokkaido Shimbun |
Milk and dairy prices to rise in November; fears of consumers cutting back, with makers, retailers and dairy farmers all calling it "tough" |
https://www.hokkaido-np.co.jp/article/732741/
|
Morinaga Milk Industry |
2022-09-17 20:23:19 |
Hokkaido |
Hokkaido Shimbun |
Swimming: Seto wins the 200m individual medley in pre-session events at the Tochigi National Sports Festival |
https://www.hokkaido-np.co.jp/article/732759/
|
Utsunomiya |
2022-09-17 20:06:51 |
Hokkaido |
Hokkaido Shimbun |
Experiencing Ainu culture through dance: "sword dance" performed at the Toyoura music festival |
https://www.hokkaido-np.co.jp/article/732782/
|
music festival |
2022-09-17 20:17:00 |
Hokkaido |
Hokkaido Shimbun |
Muroran Institute of Technology festival back after three years, running through today; "lamb dogs" prove popular |
https://www.hokkaido-np.co.jp/article/732779/
|
institute of technology |
2022-09-17 20:12:00 |
Hokkaido |
Hokkaido Shimbun |
Japanese pygmy squid, just 1-2 cm long, in breeding season and popular with divers in Hakodate |
https://www.hokkaido-np.co.jp/article/732778/
|
Usujiri, Hakodate |
2022-09-17 20:12:00 |
Hokkaido |
Hokkaido Shimbun |
Putin hints at "retaliatory" shelling, claiming Ukraine attacked Russian civilian facilities |
https://www.hokkaido-np.co.jp/article/732771/
|
suggestion |
2022-09-17 20:06:12 |
Hokkaido |
Hokkaido Shimbun |
Niiya to run a marathon in the US next January after missing the world championships due to COVID-19 |
https://www.hokkaido-np.co.jp/article/732774/
|
world championships |
2022-09-17 20:02:00 |
Hokkaido |
Hokkaido Shimbun |
168 coronavirus infections in the Iburi region and 22 in the Hidaka region |
https://www.hokkaido-np.co.jp/article/732770/
|
novel coronavirus |
2022-09-17 20:01:49 |
IT |
Weekly ASCII |
GigaCrysta 360Hz-capable gaming displays and more on show at the Hawaiian-themed I-O DATA booth [TGS2022] |
https://weekly.ascii.jp/elem/000/004/106/4106004/
|
gigacrysta |
2022-09-17 20:30:00 |