TECH |
Engadget Japanese |
UK authorities seize NFTs for the first time by any law enforcement agency, in an investigation into a tax evasion case worth roughly ¥220 million |
https://japanese.engadget.com/british-authorities-seize-more-than-19-million-nft-for-the-first-time-in-a-fraud-case-215041297.html
|
law enforcement agency |
2022-02-14 21:50:41 |
AWS |
AWS News Blog |
New – Amazon EC2 C6a Instances Powered By 3rd Gen AMD EPYC Processors for Compute-Intensive Workloads |
https://aws.amazon.com/blogs/aws/new-amazon-ec2-c6a-instances-powered-by-3rd-gen-amd-epyc-processors-for-compute-intensive-workloads/
|
At AWS re:Invent, we launched Amazon EC2 M6a instances, powered by 3rd Gen AMD EPYC processors running at frequencies up to 3.6 GHz, which offer customers up to 35 percent improvement in price performance compared to M5a instances. Many customers are looking for ways to optimize their cloud utilization, and they are taking advantage … |
2022-02-14 21:41:37 |
AWS |
AWS Machine Learning Blog |
Automate a shared bikes and scooters classification model with Amazon SageMaker Autopilot |
https://aws.amazon.com/blogs/machine-learning/automate-a-shared-bikes-and-scooters-classification-model-with-amazon-sagemaker-autopilot/
|
Amazon SageMaker Autopilot makes it possible for organizations to quickly build and deploy an end-to-end machine learning (ML) model and inference pipeline with just a few lines of code, or even without any code at all, using Amazon SageMaker Studio. Autopilot offloads the heavy lifting of configuring infrastructure and the time it takes to build … |
2022-02-14 21:03:19 |
AWS |
AWS |
The Power of Partnership: Brandon Pulsipher, VP of Cloud Operations at Adobe | Amazon Web Services |
https://www.youtube.com/watch?v=uYbh1-VWOL8
|
Brandon Pulsipher, Vice President of Cloud Operations at Adobe, sits down with Enterprise Strategist Jake Burns to talk about their organization's transformation. Listen in to hear about Adobe's cloud transformation, how they have leveraged failure, and why partnerships are important to success. ABOUT AWS: Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, are using AWS to lower costs, become more agile, and innovate faster. #AWS #AmazonWebServices #CloudComputing |
2022-02-14 21:34:56 |
Overseas TECH |
Ars Technica |
This Nintendo “insider” fooled thousands of followers with fake predictions |
https://arstechnica.com/?p=1834176
|
twitter |
2022-02-14 21:32:54 |
Overseas TECH |
MakeUseOf |
7 Things You Can Do With Location-Based Reminders |
https://www.makeuseof.com/things-to-do-location-based-reminders/
|
particular |
2022-02-14 21:30:22 |
Apple |
AppleInsider - Frontpage News |
Texas sues Meta over Facebook's past facial recognition practices |
https://appleinsider.com/articles/22/02/14/texas-sues-meta-over-facebooks-past-facial-recognition-practices?utm_medium=rss
|
Texas's attorney general has filed a lawsuit against Meta that claims Facebook's facial recognition policies resulted in tens of millions of state privacy violations. The lawsuit, which was filed in a Texas district court on Monday by Attorney General Ken Paxton, focuses on the company's capture of biometric data in user-uploaded photos. Facebook carried out the practice from 2010 until November 2021, when it shut down the program. Read more |
2022-02-14 21:24:56 |
Overseas TECH |
Engadget |
Microsoft will fully reopen its headquarters on February 28th |
https://www.engadget.com/microsoft-return-to-work-date-offices-reopen-213354036.html?src=rss
|
Microsoft is finally ready to reopen key offices after two years of pandemic-related closures and numerous delays. The company now plans to enter the "final stage" of its Washington state return-to-work plan starting Feb. 28th, at which point facilities (including the company's Redmond headquarters) and services will be completely open to workers and visitors alike. From that day forward, staff will have 30 days to adjust to whatever work routine they and their managers have chosen, whether it's in-person, remote or hybrid. Offices in California's San Francisco Bay Area will also open starting February 28th. Other US offices would follow "as conditions allow," according to Microsoft. The Windows creator justified the move by pointing to high vaccination rates in its home county, as well as falling hospitalizations and deaths. Local testing and compliance with government guidance were also part of the strategy, Microsoft said. The schedule is more aggressive than the timelines seen at some of Microsoft's peers. Meta is currently aiming for March 28th, while Apple and others have indefinite delays. Amazon is dropping mask mandates for fully vaccinated warehouse workers, but it's also ending paid leave for unvaccinated workers who develop COVID-19; the company also loosened its in-person work requirements for office employees. Microsoft's decision signals confidence that the worst of COVID-19 and the Omicron variant surge are behind the company. However, it also reflects changed expectations: remote work is more practical, in part through Microsoft tools like Teams and Viva, and the firm is also preparing for a future where Mesh enables mixed-reality collaboration. There just isn't as much pressure to return to the office as there once was, and those who do return may see more sparsely populated buildings, at least for now |
2022-02-14 21:33:54 |
Overseas Science |
NYT > Science |
Fact-Checking Joe Rogan’s Interview With Robert Malone That Caused an Uproar |
https://www.nytimes.com/2022/02/08/arts/music/fact-check-joe-rogan-robert-malone.html
|
Mr. Rogan, a wildly popular podcast host, and his guest Dr. Malone, a controversial infectious disease researcher, offered a litany of falsehoods over three hours |
2022-02-14 21:32:41 |
News |
BBC News - Home |
Ukraine crisis: Biden and Johnson say still hope for diplomatic agreement |
https://www.bbc.co.uk/news/world-europe-60382694?at_medium=RSS&at_campaign=KARANGA
|
solution |
2022-02-14 21:24:02 |
News |
BBC News - Home |
New powers proposed to end unsafe cladding scandal |
https://www.bbc.co.uk/news/business-60380468?at_medium=RSS&at_campaign=KARANGA
|
powers |
2022-02-14 21:09:31 |
News |
BBC News - Home |
Dan Evans beats Egor Gerasimov to reach Qatar Open last 16 |
https://www.bbc.co.uk/sport/tennis/60382085?at_medium=RSS&at_campaign=KARANGA
|
gerasimov |
2022-02-14 21:18:22 |
Business |
Toyo Keizai Online |
Why investors are targeting hospital real estate after logistics: rebuilding demand from aging facilities and changing attitudes among medical corporations | Real Estate | Toyo Keizai Online |
https://toyokeizai.net/articles/-/510760?utm_source=rss&utm_medium=http&utm_campaign=link_back
|
real estate company |
2022-02-15 06:30:00 |
News |
THE BRIDGE |
Smartphone giant Xiaomi (小米) leads ¥1.8 billion funding round for high-voltage EV battery developer Chilye (智緑) |
https://thebridge.jp/2022/02/xiaomi-leads-funding-round-in-high-voltage-ev-battery-startup-chilye
|
Chilye (智緑), a Chinese startup developing high-voltage battery systems for electric vehicles (EVs), has raised approximately ¥1.8 billion from a group of investors led by Xiaomi (小米). |
2022-02-14 21:45:31 |
News |
THE BRIDGE |
Chinese metaverse social app Zheli (啫喱) halts downloads three days after topping the app store |
https://thebridge.jp/2022/02/chinas-viral-metaverse-social-app-zheli-halts-downloads
|
Zheli (啫喱), a Chinese metaverse-style social app, pulled off an astonishing climb to the top of China's app stores for a newcomer, but has now suspended downloads. |
2022-02-14 21:15:58 |
News |
THE BRIDGE |
Operator of Vietnam-born NFT game Summoners Arena raises US$3 million in seed funding from backers including the CEO of Korean game company Krafton |
https://thebridge.jp/2022/02/krafton-ceo-backs-3m-vietnamese-nft-game-summoners-arena
|
The operator of Summoners Arena, an NFT game from Vietnam, has raised US$3 million in seed funding from backers including the CEO of Korean game company Krafton. (Tech in Asia offers a paid subscription service.) |
2022-02-14 21:00:43 |
GCP |
Cloud Blog |
Orchestrating PyTorch ML Workflows on Vertex AI Pipelines |
https://cloud.google.com/blog/topics/developers-practitioners/orchestrating-pytorch-ml-workflows-vertex-ai-pipelines/
|
Previously in the PyTorch on Google Cloud series, we trained, tuned, and deployed a PyTorch text classification model using Training and Prediction services on Vertex AI. In this post, we will show how to automate and monitor a PyTorch-based ML workflow by orchestrating the pipeline in a serverless manner using Vertex AI Pipelines. Let's get started!

Why Pipelines?

Before we dive in, let's first understand why pipelines are needed for ML workflows. As seen previously, training and deploying a PyTorch-based model encapsulates a sequence of tasks, such as processing data, training a model, hyperparameter tuning, evaluation, packaging the model artifacts, model deployment, and the retraining cycle. Each of these steps has different dependencies, and if the entire workflow is treated as a monolith, it can quickly become unwieldy. [Figure: Machine learning pipelines]

As ML systems and processes begin to scale, you might want to share your ML workflow with others on your team so they can execute the workflows or contribute to the code. Without a reliable, reproducible process, this can become difficult. With pipelines, each step in the ML process runs in its own container. This lets you develop steps independently and track the input and output of each step in a reproducible way, allowing you to iterate experiments effectively. Automating these tasks and orchestrating them across multiple services enables repeatable and reproducible ML workflows that can be shared between different teams, such as data scientists and data engineers. Pipelines are also a key component of MLOps when formalizing training and deployment operationalization to automatically retrain, deploy, and monitor models: for example, triggering a pipeline run when new training data is available, retraining a model when its performance starts decaying, and more such scenarios.

Orchestrating PyTorch-based ML workflows with Vertex AI Pipelines

PyTorch-based ML workflows can be orchestrated on Vertex AI Pipelines, a fully managed and serverless way to automate, monitor, and orchestrate an ML workflow on the Vertex AI platform. Vertex AI Pipelines is the most effective way to orchestrate, automate, and share ML workflows on Google Cloud for the following reasons:

- Reproducible and shareable workflows: A Vertex AI pipeline can be defined with the Kubeflow Pipelines (KFP) v2 SDK, an easy-to-use, open-source, Python-based library. The compiled pipelines can be version-controlled with the git tool of choice and shared among teams, giving ML workflows reproducibility and reliability while automating the pipeline. Pipelines can also be authored with the TensorFlow Extended (TFX) SDK.
- Streamlined operationalization of ML models: Vertex AI Pipelines automatically logs metadata using the Vertex ML Metadata service to track artifacts, lineage, metrics, visualizations, and pipeline executions across ML workflows. This enables data scientists to track experiments as they try new models or new features. By storing references to the artifacts in Vertex ML Metadata, the lineage of ML artifacts, such as datasets and models, can be analyzed to understand how an artifact was created and consumed by downstream tasks, and what parameters and hyperparameters were used to create the model.
- Serverless, scalable, and cost-effective: Vertex AI Pipelines is entirely serverless, allowing ML engineers to focus on ML solutions rather than on infrastructure tasks (provisioning, maintaining, and deploying a cluster, etc.). When a pipeline is uploaded and submitted, the service handles provisioning and scaling of the infrastructure required to run the pipeline. This means you pay only for the resources used to run the pipeline.
- Integration with other Google Cloud services: A pipeline step may import data from BigQuery, Cloud Storage, or other sources; transform datasets using Cloud Dataflow or Dataproc; train models with Vertex AI; store pipeline artifacts in Cloud Storage; get model evaluation metrics; and deploy models to Vertex AI endpoints. The pre-built pipeline components for Vertex AI Pipelines make it easy to call these steps in a pipeline.

NOTE: You can also orchestrate PyTorch-based ML workflows on Google Cloud using open-source Kubeflow Pipelines (KFP), a core component of the OSS Kubeflow project for building and deploying portable, scalable ML workflows based on Docker containers. The OSS KFP backend runs on a Kubernetes cluster, such as Google Kubernetes Engine (GKE), and includes pre-built PyTorch KFP components for different ML tasks such as data loading, model training, model profiling, and many more. Refer to this blog post for exploring orchestration of a PyTorch-based ML workflow on OSS KFP.

In this post, we will define a pipeline using the KFP SDK v2 to automate and orchestrate the model training, tuning, and deployment workflow of the PyTorch text classification model covered previously. With the KFP SDK v2, component and pipeline authoring is simplified, and first-class support for metadata logging and tracking makes it easier to track the metadata and artifacts produced by pipelines.

Following is the high-level flow to define and submit a pipeline on Vertex AI Pipelines: [Figure: High-level flow to define and submit a pipeline]

1. Define the pipeline components involved in training and deploying a PyTorch model.
2. Define a pipeline by stitching the components into a workflow, including pre-built Google Cloud components and custom components.
3. Compile and submit the pipeline to the Vertex AI Pipelines service to run the workflow.
4. Monitor the pipeline and analyze the metrics and artifacts generated.

This post builds on the training and serving code from the previous posts. You can find the accompanying code and notebook for this blog post on the GitHub repository.
Concepts of a Pipeline

Let's look at the terminology and concepts used in the KFP SDK v2. If you are familiar with the KFP SDK, skip to the next section, "Defining the Pipeline for a PyTorch-based ML workflow". [Figure: Concepts of a pipeline]

- Component: A component is a self-contained set of code performing a single task in an ML workflow, for example training a model. A component interface is composed of inputs, outputs, and the container image that the component's code runs in, including the executable code and the environment definition.
- Pipeline: A pipeline is composed of modular tasks, defined as components, that are chained together via inputs and outputs. A pipeline definition includes configuration, such as the parameters required to run the pipeline. Each component in a pipeline executes independently, and the data (inputs and outputs) is passed between the components in a serialized format.
- Inputs & Outputs: A component's inputs and outputs must be annotated with a data type, which makes each input or output either a parameter or an artifact.
  - Parameters: Parameters are inputs or outputs of simple data types, such as str, int, float, bool, dict, and list. Input parameters are always passed by value between components and are stored in the Vertex ML Metadata service.
  - Artifacts: Artifacts are references to objects or files produced by pipeline runs that are passed as inputs or outputs. Artifacts support rich or larger data types, such as datasets, models, metrics, and visualizations, that are written as files or objects. An artifact is defined by name, uri, and metadata, which is stored automatically in the Vertex ML Metadata service, while the actual content of the artifact refers to a path in a Cloud Storage bucket. Input artifacts are always passed by reference.

Learn more about KFP SDK v2 concepts here.
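To make the parameter/artifact distinction concrete, here is a minimal sketch (not code from the original post; the component name and values are illustrative) of a KFP v2 component whose type annotations declare one input as a parameter and one output as an artifact:

```python
# Minimal sketch: annotations decide whether an input/output is a
# parameter (simple value) or an artifact (file passed by reference).
from kfp.v2.dsl import Metrics, Output, component

@component(base_image="python:3.7")
def evaluate_model(accuracy_threshold: float,   # parameter: passed by value
                   metrics: Output[Metrics]):   # artifact: passed by reference
    # A real component would compute metrics from model predictions here.
    accuracy = 0.92  # placeholder value for illustration
    metrics.log_metric("accuracy", accuracy)
    metrics.log_metric("meets_threshold", float(accuracy >= accuracy_threshold))
```

The logged metrics end up in Vertex ML Metadata, which is what enables the run comparison and lineage features described later in the post.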
Defining the Pipeline for a PyTorch-based ML workflow

Now that the pipeline concepts are familiar, let's look at building a pipeline for the PyTorch-based text classification model. The following pipeline schematic shows the high-level steps involved, including inputs and outputs. [Figure: PyTorch training and deployment pipeline schematic]

Following are the steps in the pipeline:

1. Build the custom training image: This step builds a custom training container image from the training application code and the associated Dockerfile with the dependencies. The output from this step is the Container Registry (or Artifact Registry) URI of the custom training container.
2. Run the custom training job to train and evaluate the model: This step downloads and preprocesses training data from the IMDB sentiment classification dataset on Hugging Face, then trains and evaluates a model on the custom training container from the previous step. The step outputs the Cloud Storage path to the trained model artifacts and the model performance metrics.
3. Package the model artifacts: This step packages the trained model artifacts, including the custom prediction handler, to create a model archive (.mar) file using the Torch Model Archiver tool. The output from this step is the location of the model archive (.mar) file on GCS.
4. Build the custom serving image: This step builds a custom serving container running the TorchServe HTTP server to serve prediction requests for the mounted models. The output from this step is the Container Registry (or Artifact Registry) URI of the custom serving container.
5. Upload the model with the custom serving container: This step creates a model resource using the custom serving image and the model archive (.mar) file from the previous step.
6. Create an endpoint: This step creates a Vertex AI Endpoint to provide a service URL where prediction requests are sent.
7. Deploy the model to the endpoint for serving: This step deploys the model to the endpoint created, which creates the necessary compute resources (based on the machine spec configured) to serve online prediction requests.
8. Validate the deployment: This step sends test requests to the endpoint and validates the deployment.

There are a couple of things to note about this pipeline. The pipeline starts with building a training container image because the text classification model we are working with has its data preparation and pre-processing steps in the training code itself; when working with your own datasets, you can include data preparation or pre-processing tasks as a separate component from the model training. Building custom training and serving containers can be done either as part of the ML pipeline or within an existing CI/CD (Continuous Integration/Continuous Delivery) pipeline. In this post we chose to include building the custom containers as part of the ML pipeline; in a future post we will go in depth on CI/CD for ML pipelines and models. Please refer to the accompanying notebook for the complete definition of the pipeline and the component specs.

Component specification

With this pipeline schematic, the next step is to define the individual components that perform the steps in the pipeline, using the KFP SDK v2 component spec. We use a mix of pre-built components from the Google Cloud Pipeline Components SDK and custom components in the pipeline.

Let's look into the component spec for one of the steps: building the custom training container image. Here we define a Python function-based component, where the component code is written as a standalone Python function. The function accepts the Cloud Storage path to the training application code, along with the project and model display name, as input parameters, and outputs the Container Registry (GCR) URI of the custom training container. The function runs a Cloud Build job that pulls the training application code and the Dockerfile, builds the custom training image, and pushes it to Container Registry (or an Artifact Registry repository). In the previous post this step was performed in the notebook using docker commands; now the task is automated by self-containing the step within a component and including it in the pipeline. There are a few things to notice about the component spec (a hedged sketch follows this list):

- The standalone function is converted into a pipeline component using the kfp.v2.dsl.component decorator.
- All the arguments of the standalone function must have data type annotations, because KFP uses the function's inputs and outputs to define the component's interface.
- By default, Python 3.7 is used as the base image to run the code. You can configure the component decorator to override the default image by specifying base_image, install additional Python packages using the packages_to_install parameter, and write the compiled component to a YAML file using output_component_file, to share or reuse the component.
- The inputs and outputs of the function above are defined as Parameters, which are simple data types representing values. Inputs and outputs can also be Artifacts, representing files or objects generated during component execution; such arguments are annotated as kfp.v2.dsl.Input or kfp.v2.dsl.Output artifacts. For example, the component specification for creating the model archive file refers to the model artifacts generated by the training job as an input (Input[Model] in the sketch below).

Refer here for the artifact types in the KFP v2 SDK, and here for the artifact types in Google Cloud Pipeline Components. For the component specs of the other steps in the pipeline, refer to the accompanying notebook.
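The post's actual component code lives in the linked notebook; the following is a hedged sketch of the pattern it describes, assuming a source tarball in GCS and illustrative names (build_custom_train_image, generate_mar_file, custom_handler.py are stand-ins, not confirmed identifiers):

```python
# Hedged sketch of the two kinds of custom components described above.
from kfp.v2.dsl import Artifact, Input, Model, Output, component

@component(base_image="python:3.7",
           packages_to_install=["google-cloud-build"])
def build_custom_train_image(project: str, gcs_code_path: str,
                             image_uri: str) -> str:
    """Runs a Cloud Build job that builds and pushes the training image."""
    from google.cloud.devtools import cloudbuild_v1

    # gcs_code_path is assumed to point at a source tarball containing the
    # training code and Dockerfile, e.g. gs://<bucket>/<path>/source.tar.gz.
    bucket, _, obj = gcs_code_path.replace("gs://", "", 1).partition("/")
    client = cloudbuild_v1.CloudBuildClient()
    build = cloudbuild_v1.Build(
        source=cloudbuild_v1.Source(
            storage_source=cloudbuild_v1.StorageSource(bucket=bucket, object_=obj)),
        steps=[cloudbuild_v1.BuildStep(
            name="gcr.io/cloud-builders/docker",
            args=["build", "-t", image_uri, "."])],
        images=[image_uri],
    )
    client.create_build(project_id=project, build=build).result()  # wait for build
    return image_uri  # parameter output: the pushed image URI

@component(base_image="python:3.7",
           packages_to_install=["torch-model-archiver"])
def generate_mar_file(model: Input[Model], mar_archive: Output[Artifact]):
    """Packages trained model artifacts into a TorchServe .mar archive."""
    import os
    import subprocess

    os.makedirs(mar_archive.path, exist_ok=True)
    subprocess.run(
        ["torch-model-archiver", "--model-name", "text_classifier",
         "--version", "1.0",
         "--serialized-file", f"{model.path}/model.pth",   # assumed file name
         "--handler", "custom_handler.py",                 # assumed handler
         "--export-path", mar_archive.path],
        check=True,
    )
```

Note how the first component's output is a plain str parameter, while the second consumes the training job's Model artifact by reference, matching the Parameters-versus-Artifacts distinction above.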
Pipeline definition

After defining the components, the next step is to build the pipeline definition, describing how input and output parameters and artifacts are passed between the steps. The sketch after this list shows a condensed version of the components chained together. Let's unpack it and understand a few things:

- The pipeline is defined as a standalone Python function annotated with the kfp.v2.dsl.pipeline decorator, specifying the pipeline's name and the root path where the pipeline's artifacts are stored.
- The pipeline definition consists of both pre-built and custom-defined components. Pre-built components from the Google Cloud Pipeline Components SDK are used for steps calling Vertex AI services, such as submitting a custom training job (CustomTrainingJobOp), uploading a model (ModelUploadOp), creating an endpoint (EndpointCreateOp), and deploying a model to the endpoint (ModelDeployOp). Custom components are defined for the steps that build the custom training container (build_custom_train_image), get training job details (get_training_job_details), create the .mar file (generate_mar_file), build the serving container (build_custom_serving_image), and validate the model deployment (make_prediction_request). Refer to the notebook for the custom component specifications of these steps.
- A component's inputs can be set from the pipeline's inputs (passed as arguments), or they can depend on the output of other components within the pipeline. For example, ModelUploadOp depends on the custom serving container image URI from the build_custom_serving_image task, along with the pipeline's inputs, such as the project ID and the serving container routes and ports.
- kfp.v2.dsl.Condition is a control structure grouping steps that run only when a condition is met. In this pipeline, the model deployment steps run only when the trained model's performance exceeds the set threshold; otherwise those steps are skipped.
- Each component in the pipeline runs within its own container image. You can specify the machine type for each pipeline step, such as CPU, GPU, and memory limits. By default, each component runs as a Vertex AI CustomJob using an e2-standard-4 machine.
- By default, pipeline execution caching is enabled. The Vertex AI Pipelines service checks whether an execution of each pipeline step already exists in Vertex ML Metadata, using a combination of the pipeline name, the step's inputs and outputs, and the component specification. When a matching execution already exists, the step is skipped, thereby reducing costs. Execution caching can be turned off at the task level or at the pipeline level.

To learn more about building pipelines, refer to the building Kubeflow pipelines section and follow the samples and tutorials.
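Here is a self-contained sketch of the chaining pattern just described (task outputs feeding downstream inputs, with dsl.Condition gating deployment). The train() and deploy() components are stand-ins for the post's real components, and the bucket path is an assumption:

```python
# Condensed sketch of the pipeline-definition pattern; not the post's code.
from kfp.v2 import dsl
from kfp.v2.dsl import component

@component(base_image="python:3.7")
def train() -> float:
    # Stand-in for the custom training step; returns an evaluation metric.
    return 0.92

@component(base_image="python:3.7")
def deploy(accuracy: float):
    # Stand-in for the upload-model / create-endpoint / deploy steps.
    print(f"deploying model with accuracy={accuracy}")

@dsl.pipeline(
    name="pytorch-text-classifier",
    pipeline_root="gs://your-bucket/pipeline-root",  # assumed bucket
)
def pipeline():
    train_task = train()
    # Deployment steps run only when the trained model beats the threshold.
    with dsl.Condition(train_task.output > 0.8, name="deploy-check"):
        deploy_task = deploy(accuracy=train_task.output)
        deploy_task.set_caching_options(False)  # task-level cache opt-out
```

In the real pipeline, the stand-in deploy() step would be replaced by the chained ModelUploadOp, EndpointCreateOp, and ModelDeployOp pre-built components inside the same Condition block.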
Compiling and submitting the Pipeline

A pipeline must be compiled before it can be executed on the Vertex AI Pipelines service. When a pipeline is compiled, the KFP SDK analyzes the data dependencies between the components to create a directed acyclic graph. The compiled pipeline is in JSON format, with all the information required to run the pipeline.

The pipeline is submitted to Vertex AI Pipelines by defining a PipelineJob using the Vertex AI SDK for Python client and passing the necessary pipeline inputs. When the pipeline is submitted, the logs show a link to view the pipeline run on the Google Cloud Console, or you can access the run by opening the Pipelines dashboard on Vertex AI. [Figure: Accessing the Pipelines dashboard] Here is the runtime graph of the pipeline for the PyTorch text classification model. [Figure: Pipeline runtime graph]

A pipeline execution can be scheduled to run at a specific frequency using Cloud Scheduler, or triggered based on an event. You can view the compiled JSON from the pipeline run's summary tab on the Vertex AI Pipelines dashboard, which can be useful for debugging. [Figure: Compiled pipeline proto]

Monitoring the Pipeline

The pipeline run page shows the run summary as well as details about individual steps, including step inputs and the outputs generated, such as model artifacts, metrics, and visualizations. [Figure: Pipeline artifacts]

Vertex AI Pipelines automatically tracks pipeline execution information in Vertex ML Metadata, including metadata and artifacts, thereby enabling comparison across pipeline runs and analysis of the lineage of ML artifacts. [Figure: Pipeline lineage]

You can compare pipeline runs, including their inputs, parameters, metrics, and visualizations, from the Vertex AI Pipelines dashboard. You can also use the aiplatform.get_pipeline_df method from the Vertex AI SDK to fetch the execution metadata for a pipeline as a Pandas dataframe.
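A minimal sketch of these last two steps together, compiling and submitting the sketch pipeline above and then pulling run metadata; the project and region values are placeholders:

```python
# Compile the pipeline to JSON, submit it as a PipelineJob, then fetch
# execution metadata as a Pandas dataframe.
from google.cloud import aiplatform
from kfp.v2 import compiler

compiler.Compiler().compile(
    pipeline_func=pipeline,
    package_path="pipeline_spec.json",  # compiled JSON pipeline definition
)

aiplatform.init(project="your-project", location="us-central1")  # placeholders
job = aiplatform.PipelineJob(
    display_name="pytorch-text-classifier",
    template_path="pipeline_spec.json",
    enable_caching=True,  # default; set False to disable caching for this run
)
job.submit()

# Execution metadata for all runs of this pipeline, as a dataframe.
df = aiplatform.get_pipeline_df(pipeline="pytorch-text-classifier")
print(df.head())
```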
Cleaning up resources

After you are done experimenting, you can either stop or delete the Notebooks instance. If you want to save your work, you can choose to stop the instance; when you stop an instance, you are charged only for the persistent disk storage. To clean up all the Google Cloud resources created in this post, delete the individual resources created: training jobs, the model, the endpoint, the Cloud Storage bucket, the container images, and the pipeline runs. Follow the Cleaning Up section in the Jupyter Notebook to delete the individual resources (a hedged SDK sketch appears at the end of this entry).

What's next?

This post continues from the training and deployment of the PyTorch-based text classification model on Vertex AI, and shows how to automate a PyTorch-based ML workflow using Vertex AI Pipelines and the Kubeflow Pipelines v2 SDK. As a next step, you can work through this pipeline example on Vertex AI, or perhaps orchestrate one of your own PyTorch models.

References
- Introduction to Vertex AI Pipelines
- Samples and tutorials to learn more about Vertex AI Pipelines
- Kubeflow Pipelines SDK v2
- Train and tune PyTorch models on Vertex AI
- Deploy PyTorch models on Vertex AI
- GitHub repository with code and accompanying notebook

Stay tuned. Thank you for reading! Have a question or want to chat? Find Rajesh on Twitter or LinkedIn.

Thanks to Vaibhav Singh, Karl Weinmeister, and Jordan Totten for helping and reviewing the post.

Related Article: Announcing Vertex Pipelines general availability. Scalably run ML pipelines built with Kubeflow Pipelines or TFX without worrying about spinning up infrastructure. Read Article |
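As referenced in the cleanup section above, a hedged sketch of deleting the deployed model and endpoint with the Vertex AI SDK; the display-name filter is an assumption matching the sketch pipeline:

```python
# Hedged cleanup sketch: undeploy and delete the endpoint, then the model.
from google.cloud import aiplatform

aiplatform.init(project="your-project", location="us-central1")  # placeholders

for endpoint in aiplatform.Endpoint.list(
        filter='display_name="pytorch-text-classifier"'):
    endpoint.undeploy_all()  # deployed models must be undeployed first
    endpoint.delete()

for model in aiplatform.Model.list(
        filter='display_name="pytorch-text-classifier"'):
    model.delete()
```

Buckets, container images, and pipeline runs can be removed from their respective consoles or CLIs, as the notebook's Cleaning Up section describes.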
2022-02-14 21:30:00 |