Posted: 2021-09-09 04:20:39 | RSS feed digest for 2021-09-09 04:00 (28 items)

Category | Site | Article title / trending word | Link URL | Frequent words, summary / search volume | Registered
AWS | AWS News Blog | Amazon EKS Anywhere – Now Generally Available to Create and Manage Kubernetes Clusters on Premises | https://aws.amazon.com/blogs/aws/amazon-eks-anywhere-now-generally-available-to-create-and-manage-kubernetes-clusters-on-premises/ | At AWS re:Invent we preannounced new deployment options of Amazon Elastic Container Service (Amazon ECS) Anywhere and Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere in your own data center. Today, I am happy to announce the general availability of Amazon EKS Anywhere, a deployment option for Amazon EKS that enables you to easily create … | 2021-09-08 18:08:39
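For a sense of the workflow behind the announcement, EKS Anywhere clusters are driven from the eksctl CLI with the Anywhere plugin; a minimal sketch, assuming the plugin is installed (the cluster name and the local Docker provider are illustrative):

```bash
# Generate a cluster spec, then create the on-premises cluster from it
eksctl anywhere generate clusterconfig dev-cluster \
  --provider docker > dev-cluster.yaml
eksctl anywhere create cluster -f dev-cluster.yaml
```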
python | Qiita - New posts tagged "Python" | Python basics (continue, break, with) | https://qiita.com/okateru/items/9686f72d460967d4e7e0 | In PyTorch, for example, there are cases where training is skipped when the epoch reaches a certain value. | 2021-09-09 03:13:00
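To make the row's keywords concrete, a minimal Python sketch using continue, break, and with together (the epoch threshold and file name are illustrative):

```python
# Skip work for certain epochs with `continue`, stop early with `break`,
# and manage a resource with `with`.
with open("metrics.log", "w") as log:   # `with` closes the file automatically
    for epoch in range(10):
        if epoch == 0:
            continue                    # skip this epoch, move to the next
        if epoch == 5:
            break                       # leave the loop entirely
        log.write(f"epoch {epoch}\n")
```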
Program | teratail - New questions (all tags) | JavaScript: How to display Hangul characters | https://teratail.com/questions/358490?rss=all | alert | 2021-09-09 03:28:40
Program | teratail - New questions (all tags) | A project including React Route does not work after building | https://teratail.com/questions/358489?rss=all | "A project including React Route does not work after building." Background and goal: the asker is building a project that uses React Route. | 2021-09-09 03:12:38
Overseas TECH | DEV Community | ❌AMAZON updates its Terms of Service: PROHIBITS using AWS in case of a Zombie Apocalypse🧟‍♂️ | https://dev.to/dotnetsafer/amazon-updates-its-terms-of-service-prohibits-using-aws-in-case-of-a-zombie-apocalypse-3ga7 | Who reads the terms of service? Yes, those hundreds of pages in which they explain the conditions of use of a service. Well, let's start with someone who has read them; if not, this article would not have existed. Just recently AWS updated its terms of service and, well, the usual: what you can and cannot do. But the funny thing is one clause which, in a nutshell, says that they prohibit the use of their Lumberyard services in case of a zombie apocalypse. You do not believe it? See for yourself by reading the AWS Terms of Service. The official information specifies: "Lumberyard is a cross-platform game engine where you can create games for most modern platforms for free: PC, Mac, iOS, Android, all consoles, including VR glasses." Sure, for many of us it recalls typical nonsense like "don't wash your cat in the washing machine" instructions, but here… Just read it again and think about it: "…of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue, and is likely to result in the fall of organized civilization." And why, if this happens, will the restriction "not apply"? As I said before, it is difficult to understand what is written in this paragraph of the Amazon document, but it looks really creepy and raises a lot of questions… Internet users have already begun to debate this, and the comments are very weird: people ironically claiming that it is a joke; some already proposing new zombie tax rates; others congratulating the intern; others that give a lot to think about what happened "the last time"; and others noting that we will never know what he had written. To finish, I leave this important information: "When zombies are hungry they won't stop until they find food, which means you need to disappear from the city as quickly as possible." | 2021-09-08 18:32:40
Overseas TECH | DEV Community | Integrating Firebase with React Native | https://dev.to/jscrambler/integrating-firebase-with-react-native-jpa | Firebase is a Backend-as-a-Service (BaaS) that provides an advantage to mobile developers who use React Native for developing mobile applications. As a React Native developer, by using Firebase you can start building an MVP (minimum viable product), keeping costs low and prototyping the application pretty fast. In this tutorial we will learn how to get started by integrating Firebase in a React Native application. We will also create a small application from scratch with the help of Firebase & React Native to see how they work together.

Getting started: Firebase is a platform that was acquired by Google and has a healthy and active community. Most users in this community are web and mobile developers, as Firebase can help with mobile analytics, push notifications and crash reporting, and out of the box it also provides email as well as social authentication. To get started, you will need a target mobile OS, whether you choose to go with iOS or Android or both. Please refer to the React Native official documentation if you are setting up a React Native development environment for the first time. You will need the SDK tools and Android Studio to set up a developer environment for Android; for iOS, you need Xcode installed on your macOS. You will also need: Node.js (>= x.x) and npm/yarn installed, and react-native-cli (or use npx). React Native is distributed as two npm packages, react-native-cli and react-native. We are going to use react-native-cli to generate an app. Begin by installing it:

```
npm install -g react-native-cli
```

Now let's create a new React Native project called "rnFirebaseDemo":

```
react-native init rnFirebaseDemo
```

When the above command is done running, traverse into the project directory using cd rnFirebaseDemo. Now let's check that everything is working correctly and our React Native application has been properly initialized by running one of the following commands:

```
# For iOS
yarn run ios
# For Windows/Unix users
yarn run android
```

This command will run the default screen in an iOS simulator or Android emulator, but it will take a few moments since we are running it for the first time.

Configuring a Firebase project: To start a new Firebase app with a frontend framework or a library, you need the API keys, and to obtain them you need access to a Firebase project. A new Firebase project is created from the Firebase console. Initially, you can create a free-tier Firebase project, known as the Spark Plan; to learn about pricing and more information on Firebase plans, take a look here. Now click on the Add project button, enter the name of the Firebase project, and click Continue on the next screen. On the screen after that you can leave everything as default and press the Create project button. When the loading finishes, press the button and you'll be welcomed by the main dashboard screen of the Firebase project.

Adding Firebase to a React Native project: The react-native-firebase library is the officially recommended collection of packages that brings React Native support for all Firebase services on both Android and iOS apps. To add it to our existing React Native application, we need to install the following dependency:

```
yarn add @react-native-firebase/app
```

To connect the iOS app with your Firebase project's configuration, you need to generate, download, and add a GoogleService-Info.plist file to the iOS bundle.
From the Firebase dashboard screen, click on Project Overview > Settings and, in the General tab, go to the Your Apps section. Click on the Add app button; a modal appears. Select iOS in the modal, enter your app details, and click the Register app button, then download the GoogleService-Info.plist file. Then, using Xcode, open ios/projectName.xcworkspace, right-click on the project name and choose Add Files to the project, select the downloaded GoogleService-Info.plist file from your computer, and ensure the "Copy items if needed" checkbox is enabled. To allow Firebase on iOS to use the credentials, the Firebase iOS SDK must be configured during the bootstrap phase of your application. Open the ios/projectName/AppDelegate.m file and, at the top of the file, add:

```objc
#import <Firebase.h>
```

Within the same file, add the following at the top of the didFinishLaunchingWithOptions function:

```objc
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
  if ([FIRApp defaultApp] == nil) {
    [FIRApp configure];
  }
  // ...the rest of the function body remains the same
}
```

Next, rebuild the iOS app by executing the following commands:

```
cd ios
pod install --repo-update
cd ..
npx react-native run-ios
```

To connect the Android app to your Firebase project's configuration, you need to generate, download, and add a google-services.json file to your Android project. From the Firebase dashboard screen, click on Project Overview > Settings and, in the General tab, go to the "Your Apps" section. Click on the Add app button and then click the button with the Android icon in the modal. Enter the details of your app and then click the "Register app" button. The Android package name must match your local project's package name, which can be found inside the manifest tag within the android/app/src/main/AndroidManifest.xml file in your project. In the next step, download the google-services.json file and place it inside your React Native project at the following location: android/app/google-services.json. To allow Firebase on Android to use the credentials, the google-services plugin must be enabled on the project. This requires modifications to two files in the android directory. First, add the google-services plugin as a dependency inside your android/build.gradle file:

```groovy
buildscript {
  dependencies {
    // ... other dependencies
    classpath 'com.google.gms:google-services:…'  // add this line
  }
}
```

Lastly, execute the plugin by adding the following to your android/app/build.gradle file:

```groovy
apply plugin: 'com.android.application'
apply plugin: 'com.google.gms.google-services' // <-- Add this line
```

Next, rebuild the Android app:

```
npx react-native run-android
```

The @react-native-firebase/app package is used to configure and install the Firebase SDK in a React Native project. To use any of the Firebase features, such as Auth, Firestore, Storage or the Realtime Database, you have to install the individual packages from the React Native Firebase library. In this tutorial, let's install and configure the Realtime Database. Open the terminal window and execute the following series of commands:

```
yarn add @react-native-firebase/database
# after a successful installation, for iOS:
cd ios && pod install && cd ..
npx react-native run-ios
# for Android, just rebuild the app:
npx react-native run-android
```
Building app screens: When we open the project in a code editor, we need to make some modifications before we can really start building our app. Create an src directory inside the root folder; this is where our app components and screens will live. Within the src directory, we will create two folders: screens and components. The screens directory will contain all the UI-related components that we need to display to the end user, whereas the components folder will contain any other component that will be used or re-used to display the user interface.

Let us create our first screen, the Home screen, inside screens with a new file, Home.js:

```javascript
import React from 'react';
import { View, Text } from 'react-native';

export default function Home() {
  return (
    <View>
      <Text>Home Screen</Text>
    </View>
  );
}
```

Our next screen is going to be Add Item. Create a new file called AddItem.js:

```javascript
import React from 'react';
import { View, Text } from 'react-native';

export default function AddItem() {
  return (
    <View>
      <Text>Add Item</Text>
    </View>
  );
}
```

Our last screen is going to be a list of items that we need to display. In the same directory, create a new file called List.js:

```javascript
import React from 'react';
import { View, Text } from 'react-native';

export default function List() {
  return (
    <View>
      <Text>List</Text>
    </View>
  );
}
```

Adding react-navigation: To navigate between different screens, we need to add the react-navigation library; we are going to use the @react-navigation packages shown below:

```
yarn add @react-navigation/native react-native-reanimated react-native-gesture-handler react-native-screens react-native-safe-area-context @react-native-community/masked-view @react-navigation/stack
```

Then add the following line at the top of the index.js file:

```javascript
import 'react-native-gesture-handler';
```

The next step is to run the command below and link the libraries we just installed:

```
cd ios && pod install && cd ..
```

After adding this package, let us run the build process again:

```
npx react-native run-ios
# OR
npx react-native run-android
```

Now, to see it in action, let us add the Home component as our first screen. Add the following code in App.js:

```javascript
import * as React from 'react';
import { NavigationContainer } from '@react-navigation/native';
import { createStackNavigator } from '@react-navigation/stack';
import Home from './src/screens/Home';
// we will use these two screens later in the Navigator
import AddItem from './src/screens/AddItem';
import List from './src/screens/List';

const Stack = createStackNavigator();

function App() {
  return (
    <NavigationContainer>
      <Stack.Navigator>
        <Stack.Screen name="Home" component={Home} />
      </Stack.Navigator>
    </NavigationContainer>
  );
}

export default App;
```

At this stage, if we go to the simulator, we will see the Home screen showing up. We will add the two other screens as routes to the navigator, in order to navigate to them through the Home screen:

```javascript
function App() {
  return (
    <NavigationContainer>
      <Stack.Navigator>
        <Stack.Screen name="Home" component={Home} />
        <Stack.Screen name="AddItem" component={AddItem} />
        <Stack.Screen name="List" component={List} />
      </Stack.Navigator>
    </NavigationContainer>
  );
}
```

Now our stack has three routes: a Home route, an AddItem route, and a List route. The Home route corresponds to the Home screen component, AddItem corresponds to the AddItem screen, and the List route corresponds to the List component.

Navigating between the screens: Previously, we defined a stack navigator with three routes, but we didn't hook them up in order to navigate between them. Well, this is an easy task too: the react-navigation library provides us with a way to manage navigation from one screen to another and back. To make this work, we will modify Home.js:

```javascript
import React from 'react';
import { Button, View, Text } from 'react-native';

export default function Home({ navigation }) {
  return (
    <View>
      <Text>Home Screen</Text>
      <Button
        title="Add an Item"
        onPress={() => navigation.navigate('AddItem')}
      />
      <Button
        title="List of Items"
        color="green"
        onPress={() => navigation.navigate('List')}
      />
    </View>
  );
}
```

In the code above, we are adding a Button component from the react-native API. react-navigation passes a navigation prop (providing navigation.navigate) to every screen in the stack navigator; we have to use the same screen names in the onPress handlers that we defined in App.js. You can also customize the back button manually with your own styling on both the AddItem and List screens, but for our demonstration we are going to use the default styling.

Creating a database with Firebase: Go to the Firebase console and click Realtime Database in the menu bar. If you are creating a realtime database for the first time in your Firebase project, click the Create Database button and, when asked for rules, enable test mode. For the example app we're building in this demo, we will enable the database in test mode.

Adding data from the app to Firebase: In this section we will edit AddItem.js, which represents an input field and a button. The user can add an item to the list and it will get saved to Firebase:

```javascript
import React from 'react';
import {
  View, Text, TouchableHighlight, StyleSheet, TextInput, Alert,
} from 'react-native';
import database from '@react-native-firebase/database';

let addItem = item => {
  database().ref('/items').push({ name: item });
};

export default function AddItem() {
  const [name, onChangeText] = React.useState('');

  const handleSubmit = () => {
    addItem(name);
    Alert.alert('Item saved successfully');
  };

  return (
    <View style={styles.main}>
      <Text style={styles.title}>Add Item</Text>
      <TextInput
        style={styles.itemInput}
        onChangeText={text => onChangeText(text)}
      />
      <TouchableHighlight
        style={styles.button}
        underlayColor="white"
        onPress={handleSubmit}
      >
        <Text style={styles.buttonText}>Add</Text>
      </TouchableHighlight>
    </View>
  );
}

const styles = StyleSheet.create({
  // layout values here are representative; see the original post for the exact styles
  main: { flex: 1, padding: 30, flexDirection: 'column', justifyContent: 'center' },
  title: { marginBottom: 20, fontSize: 25, textAlign: 'center' },
  itemInput: { height: 50, padding: 4, fontSize: 23, borderWidth: 1, borderColor: 'white', borderRadius: 8, color: 'white' },
  buttonText: { fontSize: 18, alignSelf: 'center' },
  button: { height: 45, flexDirection: 'row', backgroundColor: 'white', borderColor: 'white', borderWidth: 1, borderRadius: 8, marginBottom: 10, marginTop: 10, alignSelf: 'stretch', justifyContent: 'center' },
});
```

In the code above, we add a Firebase database instance and then push any item that the user adds, through addItem and handleSubmit. You will get an alert message when you press the Add button, confirming that the item from the input value was saved. To verify that the data is there in the database, go to your Firebase console.
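A hedged refinement of the addItem helper above: the push is fire-and-forget, so the success alert can show even if the write later fails. A variant that waits for completion (the function name and error handling are our additions, not part of the original post):

```javascript
import database from '@react-native-firebase/database';

// Generate the key locally, then await the write before reporting success.
export async function addItemSafely(name) {
  try {
    const newRef = database().ref('/items').push(); // new child reference + key
    await newRef.set({ name });                     // resolves once the write lands
    return newRef.key;                              // Firebase-generated key
  } catch (err) {
    console.warn('Failed to save item', err);
    return null;
  }
}
```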
Fetching items from the database: To fetch data from the Firebase database, we are going to use the same reference to the database in List.js:

```javascript
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';
import ItemComponent from '../components/ItemComponent';
import database from '@react-native-firebase/database';

let itemsRef = database().ref('/items');

export default function List() {
  const [itemsArray, setItemsArray] = React.useState([]);

  React.useEffect(() => {
    itemsRef.on('value', snapshot => {
      let data = snapshot.val();
      const items = Object.values(data);
      setItemsArray(items);
    });
  }, []);

  return (
    <View style={styles.container}>
      {itemsArray.length > 0 ? (
        <ItemComponent items={itemsArray} />
      ) : (
        <Text>No items</Text>
      )}
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, justifyContent: 'center', backgroundColor: '#ebebeb' },
});
```

For the ItemComponent, we create a new file inside components, ItemComponent.js. This is a non-screen component; only the List will use it to map and display each item:

```javascript
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';

export default function ItemComponent({ items }) {
  return (
    <View style={styles.itemsList}>
      {items.map((item, index) => {
        return (
          <View key={index}>
            <Text style={styles.itemtext}>{item.name}</Text>
          </View>
        );
      })}
    </View>
  );
}

const styles = StyleSheet.create({
  itemsList: { flex: 1, flexDirection: 'column', justifyContent: 'space-around' },
  itemtext: { fontSize: 24, fontWeight: 'bold', textAlign: 'center' }, // sizes representative
});
```
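One caveat about the List screen above: the value listener stays attached after the screen unmounts, and snapshot.val() is null while the list is empty, which would crash Object.values. A hedged variant of the same fetch, written as a reusable hook (the hook name is ours; the /items path matches the tutorial):

```javascript
import React from 'react';
import database from '@react-native-firebase/database';

export function useItems() {
  const [items, setItems] = React.useState([]);

  React.useEffect(() => {
    const ref = database().ref('/items');
    const onValue = ref.on('value', snapshot => {
      const data = snapshot.val() || {};     // val() is null while the list is empty
      setItems(Object.values(data));
    });
    return () => ref.off('value', onValue);  // detach the listener on unmount
  }, []);

  return items;
}
```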
This step concludes the integration of a Firebase database with our React Native app. You can now add new data items and fetch them from the database as shown below.

Conclusion: In this tutorial we've shown you how to integrate Firebase with a React Native application. You don't need a complete server that exposes an API and a database just to prototype or build an MVP of your application. You can find the complete code inside this GitHub repo. Finally, pay special attention if you're developing commercial React Native apps that contain sensitive logic: you can protect them against code theft, tampering and reverse engineering by following our guide. | 2021-09-08 18:21:15
Apple | AppleInsider - Frontpage News | How to watch Apple's 'iPhone 13' launch event | https://appleinsider.com/articles/21/09/08/how-to-watch-apples-iphone-13-launch-event?utm_medium=rss | Given that Apple's new event is called "California Streaming," it's no surprise that you'll be able to see the iPhone announcement live, but there are many different ways. Here's how to do it. California Streaming will be streamed from California. Apple hasn't even confirmed that its next event will feature the new iPhone, but it has announced a date and time, and some of the ways to watch. California Streaming will be on Tuesday, September 14, from 10 a.m. PT. Read more… | 2021-09-08 18:29:13
Overseas TECH | Engadget | Moog's Model 15 app now works with Ableton Live and other DAWs | https://www.engadget.com/moog-model-15-macos-vst-wrapper-184009610.html?src=rss | Back in January, Moog updated its Model 15 app to support Macs running Big Sur, marking the first time one of the company's soft synths had come to desktop. It was a significant step forward in terms of accessibility; however, the synth ran as an Audio Unit v3 plugin, meaning you couldn't use it in conjunction with non-Apple digital audio workstations like Ableton. That's changing today, with the Model 15 app now available within a VST wrapper. Short for Virtual Studio Technology, VST is the most widely supported standard for DAW synthesizer and effect-unit plugins. As such, you're no longer limited to GarageBand, Logic and MainStage if you want to dabble with the modular synth. Unfortunately, if you're a Windows user, you still can't install the software on your computer; you will have to look to either Moog's iOS app or one of the many other modular synths you can download online. | 2021-09-08 18:40:09
Overseas TECH | Engadget | Google Photos will deliver as many prints as you like to your home | https://www.engadget.com/google-photos-prints-home-delivery-canvas-sizes-181529516.html?src=rss | Google is expanding its printing options for Photos, including more flexibility for ordering prints to your home. Until now, the only way to get prints of your images directly from Google was to use the AI-driven premium print service, which can automatically select a batch of your best images each month and send them to you; otherwise, you'd have to pick them up from a CVS, Walgreens or Walmart. Now, though, you can order and receive as many prints as you want at your casa. In addition, there are more size options for Google Photos prints and canvases: along with the previous formats, Google can now print and send your photos in several new sizes, and it will add more canvas sizes in the next few weeks as well. | 2021-09-08 18:15:29
Overseas TECH | Network World | Sleeping and waiting on Linux | https://www.networkworld.com/article/3632395/sleeping-and-waiting-on-linux.html#tk.rss_all | The Linux sleep and wait commands allow you to run commands at a chosen pace, or to capture and display the exit status of a task after waiting for it to finish. sleep simply inserts a timed pause between commands; wait, on the other hand, waits until a process completes before notifying you that it has finished.

sleep: The sleep command pauses for a specified time. It's generally used in a script, but works on the command line as well. In the example below, sleep pauses a minute between the two date commands:

```
$ date; sleep 60; date
Wed Sep … PM EDT
Wed Sep … PM EDT
```

The sleep command takes the numeric argument as a number of seconds. You can, however, ask it to sleep for various other amounts of time by adding another character to the argument: m (minutes), h (hours), d (days).

```
$ date; sleep 1m; date
```

In fact, you can sleep for less than a second if you need to. To read this article in full, please click here. | 2021-09-08 18:30:00
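The excerpt demonstrates sleep but not wait; a short sketch of wait capturing a background job's exit status (the durations are arbitrary):

```bash
# Start a job in the background, then block until it finishes.
sleep 3 &
pid=$!                 # PID of the background job
echo "waiting for $pid"
wait "$pid"            # returns the job's exit status
echo "job $pid exited with status $?"
```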
Overseas TECH | Network World | AWS, NetApp team up for a cloud-native file system | https://www.networkworld.com/article/3632628/aws-netapp-team-up-for-a-cloud-native-file-system.html#tk.rss_all | Amazon Web Services and NetApp have teamed up to tie NetApp's on-prem storage and its proprietary OS for storage disk arrays to AWS's managed file storage service, FSx. Called Amazon FSx for NetApp ONTAP, the service provides things like capacity scaling, maintenance and updates, so on-prem staff doesn't have to; performance management, with automatic tiering between local storage and fully elastic AWS storage, is provided by AWS as well. This is not a new area for AWS, which offers two similar services, for Windows File Server and the Lustre HPC file storage system. FSx for Windows File Server is a native Windows file system that offers Windows file storage in the cloud, while FSx for Lustre offers scalable, high-performance storage for HPC applications. To read this article in full, please click here. | 2021-09-08 18:19:00
Overseas Science | NYT > Science | How California's Recall Vote Could Affect the State's Climate Policies | https://www.nytimes.com/2021/09/08/climate/california-recall-newsom-climate.html | Many Republicans vying to replace Newsom as governor want to roll back the state's ambitious plans to cut planet-warming emissions, a change with nationwide implications. | 2021-09-08 18:53:54
Overseas TECH | WIRED | A Texas Abortion 'Whistleblower' Site Still Can't Find a Host | https://www.wired.com/story/texas-abortion-law-whistleblower-site | extreme | 2021-09-08 18:12:51
Overseas News | Japan Times latest articles | Japan plans to extend coronavirus emergency in 19 prefectures | https://www.japantimes.co.jp/news/2021/09/09/national/coronavirus-emergency-extension-tokyo/ | The state of emergency in place in 21 of Japan's prefectures will be lifted in Miyagi and Okayama, where less strict quasi-emergency measures will… | 2021-09-09 03:21:59
News | BBC News - Home | Social care tax rise: Boris Johnson wins Commons vote | https://www.bbc.co.uk/news/uk-politics-58492169?at_medium=RSS&at_campaign=KARANGA | insurance | 2021-09-08 18:43:07
News | BBC News - Home | British teenager Raducanu reaches US Open semis | https://www.bbc.co.uk/sport/tennis/58493663?at_medium=RSS&at_campaign=KARANGA | belinda | 2021-09-08 18:43:02
News | BBC News - Home | Covid: Boris Johnson concerned over unvaccinated hospital patients | https://www.bbc.co.uk/news/uk-58494842?at_medium=RSS&at_campaign=KARANGA | johnson | 2021-09-08 18:46:56
Business | Diamond Online - New articles | U.S. male students dropping out of college: "I lost sight of my goals" - WSJ PickUp | https://diamond.jp/articles/-/281670 | wsjpickup | 2021-09-09 03:45:00
Business | Diamond Online - New articles | Why crude oil is expected to rise over a longer horizon even after falling about 20% from its high - Market Focus | https://diamond.jp/articles/-/281613 | uncertainty | 2021-09-09 03:40:00
Business | Diamond Online - New articles | Toyota and BMW: the auto industry's contrarian investors - WSJ PickUp | https://diamond.jp/articles/-/281671 | wsjpickup | 2021-09-09 03:35:00
Business | Diamond Online - New articles | U.S. tech executives eye the defense industry amid smoldering wariness - WSJ PickUp | https://diamond.jp/articles/-/281672 | Department of Defense | 2021-09-09 03:30:00
Business | Diamond Online - New articles | Subway experiments: what AI technology solves the "too many sandwich choices to pick from" problem? - DESIGN SIGHT | https://diamond.jp/articles/-/281669 | 2021-09-09 03:25:00
Business | Diamond Online - New articles | Where does "bullying" begin, and how does the law define it? Lawyers give special classes at junior and senior high schools - Education in the 2020s | https://diamond.jp/articles/-/281281 | Looking around the world, unfortunately there is probably no school or society entirely free of bullying. | 2021-09-09 03:20:00
Business | Diamond Online - New articles | Three Harvard professors on "managing through the pandemic": guidelines for acting under uncertainty - Decision-making in a crisis, led by a unified management team | https://diamond.jp/articles/-/281592 | uncertainty | 2021-09-09 03:17:00
Business | Diamond Online - New articles | Hiroyuki on the No. 1 trait of the "embarrassing people" who have multiplied on social media - 1% Effort | https://diamond.jp/articles/-/281023 | youtube | 2021-09-09 03:15:00
Business | Diamond Online - New articles | A frighteningly convincing answer to "Isn't 'failure became a growth opportunity!' just self-justification?" - The Encyclopedia of Self-Study | https://diamond.jp/articles/-/281419 | reading | 2021-09-09 03:10:00
Business | Diamond Online - New articles | At 30: the difference between "people still dissatisfied after changing jobs" and "people who succeed without leaving their first company" - Manga: The Mindset of Changing Jobs | https://diamond.jp/articles/-/277232 | 2021-09-09 03:05:00
GCP | Cloud Blog | PyTorch on Google Cloud: How To train and tune PyTorch models on Vertex AI | https://cloud.google.com/blog/topics/developers-practitioners/pytorch-google-cloud-how-train-and-tune-pytorch-models-vertex-ai/ | Since the publishing of the inaugural post of the PyTorch on Google Cloud blog series, we announced Vertex AI, Google Cloud's end-to-end ML platform, at Google I/O. Vertex AI unifies Google Cloud's existing ML offerings into a single platform for efficiently building and managing the lifecycle of ML projects. It provides tools for every step of the machine learning workflow, across various model types, for varying levels of machine learning expertise. We will continue the blog series with Vertex AI to share how to build, train and deploy PyTorch models at scale, and how to create reproducible machine learning pipelines on Google Cloud. (Figure: What's included in Vertex AI.) In this post, we will show how to use Vertex AI Training to build and train a sentiment text-classification model using PyTorch, and Vertex AI Hyperparameter Tuning to tune the hyperparameters of PyTorch models. You can find the accompanying code for this blog post in the GitHub repository and the Jupyter Notebook. Let's get started!

Use case and dataset: In this article we will fine-tune a transformer model (BERT-base) from the Hugging Face Transformers library for a sentiment analysis task, using PyTorch. BERT (Bidirectional Encoder Representations from Transformers) is a Transformer model pre-trained on a large corpus of unlabeled text in a self-supervised fashion. We will begin experimentation with the IMDB sentiment classification dataset on Notebooks. We recommend using a Notebook instance with limited compute for development and experimentation purposes. Once we are satisfied with the local experiment on the notebook, we show how you can submit a training job from the same Jupyter notebook to the Vertex Training service to scale the training with bigger GPU shapes. The Vertex Training service optimizes the training pipeline by spinning up infrastructure for the training job and spinning it down after the training is complete, without you having to manage the infrastructure. (Figure: ML workflow on Vertex AI.) In upcoming posts we will show how you can deploy and serve these PyTorch models on the Vertex Prediction service, followed by Vertex Pipelines to automate, monitor and govern your ML systems by orchestrating the ML workflow in a serverless manner, storing the workflow's artifacts using Vertex ML Metadata.

Creating a development environment on Notebooks: To set up a PyTorch development environment on JupyterLab notebooks with Notebooks, follow the setup section in the earlier post. To interact with the new notebook instance, go to the Notebooks page in the Google Cloud Console and click the "OPEN JUPYTERLAB" link next to the new instance, which becomes active when the instance is ready to use. (Figure: Notebook instance.)

Training a PyTorch model on Vertex Training: After creating a Notebooks instance, you can start with your experiments. Let's look into the model specifics for the use case.

The model specifics: For analyzing sentiments of the movie reviews in the IMDB dataset, we will fine-tune a pre-trained BERT model from Hugging Face. The pre-trained BERT model already encodes a lot of information about the language, as the model was trained on a large corpus of English data in a self-supervised fashion. Now we only need to slightly tune it, using its outputs as features for the sentiment classification task. This means quicker development iteration on a much smaller dataset, instead of training a specific Natural Language Processing (NLP) model with a larger training dataset. (Figure: Pre-trained model with classification layer. The blue box indicates the pre-trained BERT encoder module; the output of the encoder is pooled into a linear layer with the number of outputs equal to the number of target labels, i.e. classes.) For training the sentiment classification model, we will: preprocess and transform (tokenize) the reviews data; load the pre-trained BERT model and add a sequence classification head for sentiment analysis; and fine-tune the BERT model for sentence classification. The following code snippet shows how to preprocess the data and fine-tune a pre-trained BERT model (please refer to the Jupyter Notebook for the complete code and a detailed explanation).
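The snippet itself is not reproduced in this excerpt; as rough orientation, fine-tuning with the Hugging Face Trainer generally has this shape. A minimal sketch (the checkpoint name, subset sizes, and hyperparameter values are illustrative rather than the post's exact choices):

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    # Tokenize review text into fixed-length input IDs
    return tokenizer(batch["text"], padding="max_length", truncation=True)

encoded = dataset.map(tokenize, batched=True)

# BERT encoder with a fresh sequence-classification head (2 classes)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

args = TrainingArguments(
    output_dir="./results",
    learning_rate=2e-5,              # small LR: the encoder is not frozen
    per_device_train_batch_size=16,
    num_train_epochs=2,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=encoded["test"].select(range(500)),
    compute_metrics=compute_metrics,
)
trainer.train()
```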
In the snippet above, notice that the encoder (also referred to as the base model) weights are not frozen. This is why a very small learning rate is chosen, to avoid losing the pre-trained representations. The learning rate and other hyperparameters are captured under the TrainingArguments object. During the training we are only capturing accuracy metrics; you can modify the compute_metrics function to capture and report other metrics.
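For example, a hedged variant of the sketch's compute_metrics that reports error rate alongside accuracy (the extra metric is illustrative):

```python
import numpy as np

def compute_metrics(eval_pred):
    # eval_pred is (logits, labels) for the evaluation set
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    accuracy = float((preds == labels).mean())
    # Any extra keys returned here are logged as additional metrics
    return {"accuracy": accuracy, "error_rate": 1.0 - accuracy}
```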
Training the model on Vertex AI: While you can do local experimentation on your Notebooks instance, for larger datasets or large models a vertically scaled compute resource or horizontally distributed training is often required. The most effective way to perform this task is the Vertex Training service, for the following reasons. Automatic provisioning and deprovisioning of resources: a training job on Vertex AI will automatically provision computing resources, perform the training task, and ensure deletion of compute resources once the training job is finished. Reusability and portability: you can package training code with its parameters and dependencies into a container and create a portable component; this container can then be run with different scenarios, such as hyperparameter tuning, various data sources, and more. Training at scale: you can run a distributed training job on Vertex Training to train models in a cluster across multiple nodes in parallel, resulting in faster training time. Logging and monitoring: the training service logs messages from the job to Cloud Logging, and the job can be monitored while it is running. In this post we show how to scale a training job with Vertex Training by packaging the code and creating a training pipeline to orchestrate the training job. There are three steps to run a training job using the Vertex AI custom training service. (Figure: Custom training on Vertex AI.)

STEP 1: Determine the training code structure: package the training application code as a Python source distribution or as a custom container image (Docker).

STEP 2: Choose a custom training method. You can run a training job on Vertex Training as a custom job, a hyperparameter tuning job, or a training pipeline. Custom jobs: with a custom job you configure the settings to run your training code on Vertex AI, such as worker pool specs (machine types, accelerators) and the Python training spec or custom container spec. Hyperparameter tuning jobs: these automate tuning of your model's hyperparameters based on the criteria you configure, such as the goal or metric to optimize, the hyperparameter values, and the number of trials to run. Training pipelines: these orchestrate custom training jobs or hyperparameter tuning jobs with additional steps after the training job is successfully completed.

STEP 3: Run the training job. You can submit the training job to run on Vertex Training using the gcloud CLI or any of the client SDK libraries, such as the Vertex SDK for Python. Refer to the documentation for further details on custom training methods.

Packaging the training application: Before running the training application on Vertex Training, the training application code with its required dependencies must be packaged and uploaded to a Cloud Storage bucket that your Google Cloud project can access. There are two ways to package the application and run it on Vertex Training: create a Python source distribution with the training code and dependencies, to use with the pre-built containers on Vertex AI, or use custom containers to package dependencies using Docker. You can structure your training code in any way you prefer; refer to the GitHub repository or Jupyter Notebook for our recommended approach to structuring training code.

Run a Custom Job on Vertex Training with a pre-built container: Vertex AI provides Docker container images that can be run as pre-built containers for custom training. These containers include common dependencies used in training code, based on the machine learning framework and framework version. For the sentiment analysis task, we are using Hugging Face Datasets and fine-tuning a transformer model from the Hugging Face Transformers library using PyTorch. We use the pre-built container for PyTorch and package the training application code as a Python source distribution, adding the standard Python dependencies required by the training algorithm (transformers, datasets and tqdm) in the setup.py file. (Figure: Custom training with pre-built containers on Vertex Training.) The find_packages() function inside setup.py includes the training code in the package as dependencies. We use the Vertex SDK for Python to create and submit the training job to the Vertex Training service, configuring a CustomJob resource with the pre-built container image for PyTorch and specifying the training code packaged as a Python source distribution. We attach an NVIDIA Tesla T4 GPU to the training job to accelerate the training. Alternatively, you can also submit the training job to the Vertex AI training service using the gcloud beta ai custom-jobs create command; the gcloud command stages your training application on a GCS bucket and submits the training job. The worker-pool-spec parameter in the command defines the worker pool configuration used by the custom job. Within worker-pool-spec: set executor-image-uri to the pre-built PyTorch GPU training image (under us-docker.pkg.dev/vertex-ai/training/); set local-package-path to the path of the training code; set python-module to trainer.task, the main module that starts the training application; and set accelerator-type and machine-type to choose the compute used to run the application. Refer to the documentation for the gcloud beta ai custom-jobs create command for details.
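As a sketch of that SDK-based submission (project, bucket, package URI and machine shape are placeholders; the T4 accelerator follows the post, and the image tag shown may differ from the current one):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-bucket")

worker_pool_specs = [{
    "machine_spec": {
        "machine_type": "n1-standard-8",
        "accelerator_type": "NVIDIA_TESLA_T4",
        "accelerator_count": 1,
    },
    "replica_count": 1,
    "python_package_spec": {
        "executor_image_uri": "us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-7:latest",
        "package_uris": ["gs://my-bucket/trainer-0.1.tar.gz"],  # the source distribution
        "python_module": "trainer.task",
    },
}]

job = aiplatform.CustomJob(display_name="finetune-bert",
                           worker_pool_specs=worker_pool_specs)
job.run()  # blocks until completion; logs stream to Cloud Logging
```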
Run a Custom Job on Vertex Training with a custom container: To create a training job with a custom container, you define a Dockerfile to install or add the dependencies required for the training job. You then build and test your Docker image locally to verify it, push the image to Container Registry, and submit a CustomJob to the Vertex Training service. (Figure: Custom training with custom containers on Vertex Training.) We create a Dockerfile with a pre-built PyTorch container image provided by Vertex AI as the base image, install the dependencies (transformers, datasets, tqdm and cloudml-hypertune), and copy the training application code. Now build and push the image to the Google Cloud Container Registry, and submit the custom training job to Vertex Training using the Vertex SDK for Python. Alternatively, you can also submit the training job using the gcloud beta ai custom-jobs create command with a custom container spec; the gcloud command submits the training job and launches a worker pool with the specified custom container image. The worker-pool-spec parameter defines the worker pool configuration used by the custom job: set container-image-uri to the custom container image pushed to the Google Cloud Container Registry, and set accelerator-type and machine-type to choose the compute used to run the application. Once the job is submitted, you can monitor its status and progress either in the Google Cloud Console or with the gcloud CLI command gcloud beta ai custom-jobs stream-logs. (Figure: Monitor progress and logs of custom training jobs from the Google Cloud Console.)

Hyperparameter tuning on Vertex AI: The training application code for fine-tuning a transformer model uses hyperparameters such as the learning rate and weight decay. These hyperparameters control the behavior of the training algorithm and can have a substantial effect on the performance of the resulting model. In this section we show how you can automate tuning these hyperparameters with Vertex Training. We submit a hyperparameter tuning job to the Vertex Training service by packaging the training application code and dependencies in a Docker container and pushing the container to the Google Container Registry, similar to running a CustomJob on Vertex AI with a custom container as shown in the earlier section. (Figure: Hyperparameter tuning on Vertex Training.)

How does hyperparameter tuning work in Vertex AI? The following are the high-level steps involved in running a hyperparameter tuning job on the Vertex Training service. Define the hyperparameters to tune, along with the metric to optimize. The Vertex Training service then runs multiple trials of the training application with the hyperparameters and limits you specify (the maximum number of trials to run and the number of parallel trials). Vertex AI keeps track of the results from each trial and makes adjustments for subsequent trials; this requires your training application to report the metrics to Vertex AI using the Python package cloudml-hypertune. When the job is finished, you get a summary of all the trials, with the most effective configuration of values based on the criteria you configured. Refer to the Vertex AI documentation to understand how to configure and select hyperparameters for tuning, how to configure the tuning strategy, and how Vertex AI optimizes hyperparameter tuning jobs. The default tuning strategy uses results from previous trials to inform the assignment of values in subsequent trials.

Changes to the training application code for hyperparameter tuning: There are a few requirements specific to hyperparameter tuning in Vertex AI. To pass the hyperparameter values to the training code, you must define a command-line argument in the main training module for each tuned hyperparameter, and use the value passed in those arguments to set the corresponding hyperparameter in the training application's code. You must also pass metrics from the training application to Vertex AI to evaluate the efficacy of a trial; you can use the cloudml-hypertune Python package to report metrics.
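The reporting call itself is small; a minimal sketch with the cloudml-hypertune package (the metric tag is illustrative and must match the tuning job's metric spec):

```python
import hypertune  # installed via the cloudml-hypertune package

def report_metric(accuracy, step):
    hpt = hypertune.HyperTune()
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag="accuracy",  # the name Vertex AI matches against
        metric_value=accuracy,
        global_step=step,
    )
```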
Previously, in the training application code, we instantiated the Trainer with hyperparameters passed as training arguments (training_args). These hyperparameters are passed as command-line arguments to the training module trainer.task, which then feeds them into training_args; refer to the Python package's trainer module for the training application code. To report metrics to Vertex AI when hyperparameter tuning is enabled, we call the cloudml-hypertune Python package after the evaluation phase, as a callback on the trainer object. The trainer object passes the metrics computed in the last evaluation phase to the callback, and the hypertune library reports them to Vertex AI for evaluating trials.

Run a Hyperparameter Tuning Job on Vertex AI: Before submitting the hyperparameter tuning job to Vertex AI, push the custom container image with the training application to the Cloud Container Registry repository, and then submit the job to Vertex AI using the Vertex SDK for Python. We use the same image as before, when running the Custom Job on the Vertex Training service. Define the training arguments with the hp-tune argument set to "y", so that the training application code reports metrics to the Vertex Training service. Create a CustomJob with worker pool specs to define machine types and accelerators, and a custom container spec with the training application code. Next, define the parameter and metric specifications. parameter_spec defines the search space, i.e. the parameters to search and optimize; the spec requires you to specify each hyperparameter's data type as an instance of a parameter value specification (refer to the documentation on selecting the hyperparameters to tune and how to define them). metric_spec defines the goal of the metric to optimize; the goal specifies whether you want to tune your model to maximize or minimize the value of this metric. Configure and submit a HyperparameterTuningJob with the CustomJob, metric_spec, parameter_spec and trial limits. Trial limits define how many trials the service is allowed to run: max_trial_count, the maximum number of trials run by the service (start with a smaller value to understand the impact of the chosen hyperparameters before scaling up); parallel_trial_count, the number of trials to run in parallel (start with a smaller value, as Vertex AI uses results from previous trials to inform the assignment of values in subsequent trials, and a higher number of parallel trials means those trials start without the benefit of the results of any trials still running); and search_algorithm, the search algorithm for the study (when not specified, Vertex AI by default applies Bayesian optimization to search over the parameter space). Refer to the documentation to understand the hyperparameter tuning job configuration.
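Putting those pieces together with the Vertex SDK, a sketch of the tuning job configuration (parameter ranges and trial counts are illustrative; custom_job is a CustomJob like the one sketched earlier):

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

tuning_job = aiplatform.HyperparameterTuningJob(
    display_name="finetune-bert-hpt",
    custom_job=custom_job,                 # defined as in the earlier sketch
    metric_spec={"accuracy": "maximize"},  # must match the reported metric tag
    parameter_spec={
        "learning-rate": hpt.DoubleParameterSpec(min=1e-5, max=1e-3, scale="log"),
        "weight-decay": hpt.DoubleParameterSpec(min=1e-3, max=1e-1, scale="log"),
    },
    max_trial_count=8,        # total trials; start small
    parallel_trial_count=2,   # trials running at once
)
tuning_job.run()
```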
Alternatively, you can submit a hyperparameter tuning job to the Vertex AI training service using gcloud beta ai hp-tuning-jobs create. The gcloud command submits the hyperparameter tuning job and launches multiple trials with a worker pool, based on the custom container image specified, the number of trials, and the criteria set. The command requires the hyperparameter tuning job configuration, provided as a configuration file in YAML format, along with a job name; refer to the Jupyter notebook for creating the YAML configuration and submitting the job via the gcloud command. You can monitor the hyperparameter tuning job from the Cloud Console or with the gcloud CLI command gcloud beta ai custom-jobs stream-logs. (Figure: Monitor progress and logs of hyperparameter tuning jobs from the Google Cloud Console.) After the job is finished, you can view and format the results of the hyperparameter tuning trials run by the Vertex Training service and pick the best-performing trial to deploy to the Vertex Prediction service.

Run predictions locally: Let's run prediction calls on the trained model locally with a few examples (refer to the notebook for the complete code). The next post in this series will show you how to deploy this model on the Vertex Prediction service.

Cleaning up the Notebook environment: After you are done experimenting, you can either stop or delete the Notebooks instance. Delete the Notebooks instance to prevent any further charges; if you want to save your work, you can choose to stop the instance instead.

What's next? In this article we explored Notebooks for PyTorch model development. We then trained and tuned the model on the Vertex Training service, a fully managed service for training machine learning models at scale. We looked at how you can submit training jobs as a Custom Job and as a Hyperparameter Tuning Job to Vertex Training, using the Vertex SDK for Python and gcloud CLI commands, with both pre-built and custom containers for PyTorch. In the next installments of this series we will show how to deploy PyTorch models on the Vertex Prediction service and orchestrate a machine learning workflow using Vertex Pipelines. We encourage you to explore the Vertex AI features and read the reference guide on best practices for implementing machine learning on Google Cloud.

References: Introduction to Notebooks; Custom training on Vertex Training; Configuring distributed training on Vertex Training; GitHub repository with code and accompanying notebook. Stay tuned, and thank you for reading! Have a question or want to chat? Find the authors here: Rajesh (Twitter, LinkedIn) and Vaibhav (LinkedIn). Thanks to Karl Weinmeister and Jordan Totten for helping with and reviewing the post. Related article: "PyTorch on Google Cloud: How to train PyTorch models on AI Platform." | 2021-09-08 18:15:00
