Posted: 2022-12-14 02:32:51 RSS feed 2022-12-14 02:00 digest (36 items)

Category / Site / Article title, trend word / Link URL / Frequent words, summary, search volume / Registered
AWS AWS Machine Learning Blog Chronomics detects COVID-19 test results with Amazon Rekognition Custom Labels https://aws.amazon.com/blogs/machine-learning/chronomics-detects-covid-19-test-results-with-amazon-rekognition-custom-labels/ Chronomics is a tech-bio company that uses biomarkers (quantifiable information taken from the analysis of molecules) alongside technology to democratize the use of science and data to improve people's lives. Their goal is to analyze biological samples and give actionable information to help you make decisions about anything where knowing more about the unseen is important. … 2022-12-13 16:55:47
AWS AWS Media Blog Rooter enhances live stream experiences with Amazon IVS https://aws.amazon.com/blogs/media/rooter-enhances-live-stream-experiences-with-amazon-ivs/ India's largest game streaming and esports platform adds HD capabilities and lowers latency to deliver stand-out content and improve monetization. As the platform of choice for India's most popular gaming and esports live streamers, Rooter has experienced significant growth in the past two years, increasing daily active users from to nearly. At any given … 2022-12-13 16:54:15
AWS AWS Mobile Blog Meet Our MLH Fellows of Summer 2022 https://aws.amazon.com/blogs/mobile/meet-our-mlh-fellows-of-summer-2022/ AWS Amplify is a complete solution for quickly and easily building full-stack applications on AWS, and is dedicated to building open-source libraries and fostering a community for front-end developers. As part of our effort in fostering an open-source community, we collaborate with Major League Hacking, who offers the MLH Fellowship, which brings together talented … 2022-12-13 16:05:52
python New posts tagged Python - Qiita [Environment setup] Building the strongest and fastest debugging environment with Flask + VSCode https://qiita.com/yamazaki_yuto/items/45522732ad4c861a3ec3 flask 2022-12-14 01:14:24
python New posts tagged Python - Qiita Computers get calculations wrong too!? https://qiita.com/tratra/items/ce94aa4dde6964ca7cde numpy 2022-12-14 01:01:24
js New posts tagged JavaScript - Qiita Making an embedded iframe go fullscreen https://qiita.com/manzoku_bukuro/items/08d260103defaf070e05 iframe 2022-12-14 01:06:49
AWS New posts tagged AWS - Qiita Notes on rewriting CloudFrontWebDistribution to Distribution https://qiita.com/tainakanchu/items/5a2e04eedd4340afd36b awscdk 2022-12-14 01:30:58
AWS New posts tagged AWS - Qiita Trying out AWS Glue Data Quality https://qiita.com/zumax/items/6e6bfc3f0f8914836ed6 awsgluedataquality 2022-12-14 01:15:12
AWS New posts tagged AWS - Qiita Setting up a GitHub Self-Hosted Runner on EKS https://qiita.com/ay_goma/items/62386926c622d68fd81f selfhostedrunner 2022-12-14 01:07:17
Overseas TECH DEV Community Announcing Monokle 1.13, now with cluster management https://dev.to/kubeshop/announcing-monokle-113-now-with-cluster-management-5ggk It's a pleasure to share the latest release of our open-source project Monokle Desktop, a unified visual tool for authoring, analyzing, and deploying Kubernetes configurations, from pre-deployment to cluster. The most exciting features of our 1.13 release include Cluster Mode for easy cluster management and an addition to our Compare & Sync feature that eliminates the stress of working on projects containing lots of subfolders. Cluster Mode offers real-time visibility of resources actively deployed in the cluster, a clear view of cluster updates, and in-cluster resource validation. Read about all new features in the announcement blog post. Monokle Desktop can be downloaded and used on Windows, Mac OS X, and Linux. Feedback is of course appreciated, either here or on our Discord server. 2022-12-13 16:28:59
Apple AppleInsider - Frontpage News Best Buy extends Upgrade+ program to iMac and Mac Studio https://appleinsider.com/articles/22/12/13/best-buy-extends-upgrade-program-to-imac-and-mac-studio?utm_medium=rss Best Buy is expanding its Upgrade+ program to include more Apple products, with customers now able to get select iMac and Mac Studio models under the program. Best Buy launched its Mac equivalent of Apple's iPhone upgrade program in October, in collaboration with Apple and with Citizens Pay handling financing. In a December update, Best Buy is now adding more models to the roster. 2022-12-13 16:17:15
Apple AppleInsider - Frontpage News First look: New Dexcom G7 glucose monitor https://appleinsider.com/articles/22/12/13/first-look-at-the-new-dexcom-g7-glucose-monitor?utm_medium=rss After months of international availability, the Dexcom G7 continuous glucose monitor received its FDA clearance to distribute in the US. Ahead of the wide rollout, we got to go hands-on with a demo model to see how it looks and fits. Initially, Dexcom had hoped to launch the G7 much earlier here stateside, but a massive rewrite to the mobile app and sensor algorithms prompted increased scrutiny from the FDA before it could begin sales. 2022-12-13 16:14:05
Overseas TECH Engadget You can soon snag 'Dishonored 2' for free with an Amazon Prime subscription https://www.engadget.com/prime-gaming-free-games-december-dishonored-2-quake-brothers-a-tale-of-two-sons-162051297.html?src=rss Amazon will offer Prime Gaming members an extra batch of PC games later this month at no extra cost. Along with a few Metal Slug titles, SNK th Anniversary Collection, and a few others, you can snap up Arkane Studios' Dishonored 2 between December th and January rd. A few years before it unleashed Deathloop, Arkane's Lyon studio developed another sneaky action-adventure game in Dishonored 2. As with the first game in the series, it enables players to be creative in how they tackle missions, depending on their preferred playstyle. You can, for instance, take a non-lethal, stealthy approach or battle enemies head-on. This time around, you can play as two characters, each of which has their own supernatural abilities; Deathloop fans will certainly see some of that game's DNA here. It's not yet clear whether Amazon will offer Dishonored 2 through Steam, the Epic Games Store, or the Amazon Games app. However, it's worth noting this game is unsupported on Steam Deck. That's a bit odd considering its decade-old predecessor should run on the handheld without any hitches. Still, a free-ish game that's as good as Dishonored 2 is nothing to sniff at, especially if you end up looking for something to play during the holidays. Before Dishonored 2 and the other games hit the Prime Gaming lineup later this month, there are a few other notable titles that members can snag at no extra cost right now as part of the regular monthly drop. Classic first-person shooter Quake and Brothers: A Tale of Two Sons, from A Way Out and It Takes Two director Josef Fares, are up for grabs for another few weeks. 2022-12-13 16:20:51
Overseas TECH Engadget Arturia's Pigments 4 adds new effects and a simplified interface https://www.engadget.com/arturia-pigments-4-soft-synth-vst-free-update-160036317.html?src=rss At this point it's easier to list the features that Arturia's Pigments doesn't have than the ones it does. The company has been rolling out regular updates for a few years now, and each new version adds something worth getting excited about: new synth engines, new effects, whole new utility sections. It's both a place for Arturia to experiment with new ideas and show off some of the highlights from its lineup of vintage emulations. At first blush, Pigments 4 seems like a comparatively minor update. There are some new effects and a handful of enhancements to the various oscillators, but there's not much to reach out and demand your attention from a simple headline. This seemingly subtle upgrade hides a lot of quality-of-life enhancements, though. For one, now you can simply drag and drop modulation sources to their target. Granted, clicking LFO one and then clicking on the filter cutoff wasn't particularly difficult, but now you can just drag the little tab for an LFO over whatever you want to modulate. It should make Pigments a little less intimidating to those just learning how to navigate the synth. There are also now left and right arrows on multimode modules like the filter that let you quickly change types, where previously you had to click and open a drop-down menu to select a new option. The most dramatic UI changes are thankfully optional. First is the new Play tab, which strips away most of the sound design options and focuses on a core set of tweakable parameters. You can't change effects here or set modulation preferences; it's designed to just get you making noise without too many distractions. It's fine for live performance, or if you prefer to stick with the presets, but it hides a lot of the depth that makes Pigments so compelling. The other major interface change is a bit of a head-scratcher. See, while most companies are rushing to implement dark mode to save your eyes, Arturia has decided that Pigments is too dark and added a light mode. I'm sure there are those out there who will enjoy its bright gray panels, but I'm personally not a fan. Not only do I think the original theme is easier to stare at for prolonged periods of time, but it's also just more consistent; there are many elements of the interface that remain dark even in light mode, and it looks a bit cobbled together. That's a pretty minor nitpick, though, considering that once you get past the UX there is a handful of new toys to play with in Pigments 4. There's a new filter borrowed from the company's MS-20 emulation, ring mod in the wavetable oscillator, super unison in the analog oscillator, a dedicated mod oscillator, an improved bit crusher and, wait for it, shimmer reverb. The MS-20 filter might seem unnecessary considering Pigments already had different filter options, many with multiple modes, but it does bring something special to the table. Rather than using it as you would any other lowpass filter, the MS-20 begs to be run at extremes. Crank the resonance and turn on keyboard tracking and you can transform simple white noise into a delicate, plucky key sound with a subtle crackle. Or just crank the volume on it and hit it with the hottest signal you can (though make sure to turn down the master volume or you'll risk blowing out your ears) to get some crunchy saturation. Ring mod delivers some nice icy timbres, though this is not something that Pigments was exactly lacking before. It's a welcome addition, but definitely not something that you'd miss too dearly if it quietly disappeared. The more exciting changes are to the Bit Crusher effect, as opposed to the one built into the sample engine. The addition of jitter, scale, and new decimator options really lets you fine-tune the exact flavor of digital destruction you're looking for. Oh, then of course there's the shimmer reverb. It does what it says on the tin. Personally, I think it's an essential effect and I'm shocked Arturia hasn't added one until now. There's nothing about it that stands out particularly, but if you're using Pigments to create granular soundscapes or ambient plucks, then you'll be thankful it's here. Arturia also added new wavetables, new samples, and new noise types, plus a pile of new presets. And if the new library of included patches isn't enough for you, there are also three new sound packs (Wavelengths Lo-fi, Wavelengths Neuro Bass, and Wavelengths Cinematic) to broaden your sonic palette. Pigments 4 is available now as a free upgrade for existing owners. New customers have a chance to pick it up at an introductory price until January th, after which the price will go back up. 2022-12-13 16:00:36
Overseas Science NYT > Science Nuclear Fusion Energy Breakthrough: Video and How to Watch https://www.nytimes.com/2022/12/13/science/nuclear-fusion-energy-breakthrough.html research 2022-12-13 16:43:02
Overseas Science NYT > Science When Black Psychiatrists Reach Out to Teens of Color https://www.nytimes.com/2022/12/13/health/adolescents-mental-health-psychiatry.html experts 2022-12-13 16:42:53
Overseas Science NYT > Science Snow and Ice Expected to Batter Plains and Upper Midwest https://www.nytimes.com/article/winter-storm-snow-west-northern-plains.html A weather system that clobbered the Sierra Nevada over the weekend with snowfall was expected to affect travel from the Central Plains to the Upper Midwest, forecasters said. 2022-12-13 16:01:41
Overseas TECH WIRED The Real Fusion Energy Breakthrough Is Still Decades Away https://www.wired.com/story/the-real-fusion-energy-breakthrough-is-still-decades-away/ clean 2022-12-13 16:11:18
Overseas TECH WIRED Urbanista Phoenix Review: Clever Charging, Decent Sound https://www.wired.com/review/review-urbanista-phoenix/ endless 2022-12-13 16:03:33
Overseas Science BBC News - Science & Environment Breakthrough in nuclear fusion energy announced https://www.bbc.co.uk/news/science-environment-63950962?at_medium=RSS&at_campaign=KARANGA fusion 2022-12-13 16:16:12
Finance ◇◇ Insurance Daily News ◇◇ (a must for non-life insurance staff!) Insurance Daily News (12/14) http://www.yanaharu.com/ins/?p=5100 initiatives 2022-12-13 16:04:03
Finance Financial Services Agency (FSA) website The 20th meeting of the Advisory Council on the Multiple Debt Problem and Consumer Finance will be held. https://www.fsa.go.jp/policy/kashikin/tajusaimukondankai/20221213.html multiple debt 2022-12-13 17:00:00
Finance Financial Services Agency (FSA) website Posted the summary of proceedings of the Financial Stability Board plenary meeting. https://www.fsa.go.jp/inter/fsf/20221213/20221213.html Financial Stability Board 2022-12-13 17:00:00
Finance Financial Services Agency (FSA) website Posted the outline of the post-Cabinet-meeting press conference by Minister of Finance and Minister of State for Special Missions Suzuki (December 9, 2022). https://www.fsa.go.jp/common/conference/minister/2022b/20221209-1.html Minister of State for Special Missions (Cabinet Office) 2022-12-13 17:00:00
News BBC News - Home Breakthrough in nuclear fusion energy announced https://www.bbc.co.uk/news/science-environment-63950962?at_medium=RSS&at_campaign=KARANGA fusion 2022-12-13 16:16:12
News BBC News - Home Rishi Sunak promises end to asylum seeker backlog by 2023 https://www.bbc.co.uk/news/uk-politics-63959729?at_medium=RSS&at_campaign=KARANGA minister 2022-12-13 16:34:00
News BBC News - Home Unimaginable pain over boys' Solihull lake deaths - aunt https://www.bbc.co.uk/news/uk-england-birmingham-63954733?at_medium=RSS&at_campaign=KARANGA community 2022-12-13 16:45:07
News BBC News - Home Reality TV star Stephen Bear guilty of sex tape offences https://www.bbc.co.uk/news/uk-england-essex-63911965?at_medium=RSS&at_campaign=KARANGA onlyfans 2022-12-13 16:39:39
News BBC News - Home Jersey explosion: Final two people feared dead named https://www.bbc.co.uk/news/world-europe-jersey-63957055?at_medium=RSS&at_campaign=KARANGA named nine 2022-12-13 16:56:18
News BBC News - Home What is nuclear fusion and how does it work? https://www.bbc.co.uk/news/science-environment-63957085?at_medium=RSS&at_campaign=KARANGA limitless 2022-12-13 16:14:38
News BBC News - Home Cancer mRNA vaccine completes pivotal trial https://www.bbc.co.uk/news/health-63959843?at_medium=RSS&at_campaign=KARANGA covid 2022-12-13 16:09:09
News BBC News - Home Randolph Ross: USA sprinter banned until 2025 over faked email https://www.bbc.co.uk/sport/athletics/63955888?at_medium=RSS&at_campaign=KARANGA Randolph Ross, part of the United States' gold-winning 4x400m squad at the Tokyo Olympics, is banned for three years for faking an email to doping officials. 2022-12-13 16:44:56
GCP Cloud Blog IT prediction: The era of workload-optimized, ultra-reliable infrastructure is upon us https://cloud.google.com/blog/products/infrastructure/aiml-to-automate-infrastructure-configuration/ Editor's note: This post is part of an ongoing series on IT predictions from Google Cloud experts. Check out the full list of our predictions on how IT will change in the coming years. Prediction: By over half of cloud infrastructure decisions will be automated by AI and ML. Google's infrastructure is designed with scale-out capabilities to support billions of people, powering services like Search, YouTube, and Gmail every day. To do that, we've had to pioneer global-scale computing and storage systems and shorten network latency and distance limitations with new innovations. Along the way, we've come to see cloud infrastructure as more than a simple commodity: it's a source of inspiration and new capabilities. But even as the demand on the industry's cloud infrastructure continues to increase, there are simultaneously plateaus in the efficiency available from the underlying hardware. In the past we saw annual performance gains at levels that often enabled a single infrastructure configuration to meet the needs of the vast majority of workloads. As these improvements have slowed and new workloads such as AI/ML and analytics have emerged, we have seen a corresponding explosion in the variety and capability of infrastructure. While empowering, the burden of picking the right combination of infrastructure components for a given workload still falls on an organization's cloud architects. But by we predict that the burden and complexity of infrastructure decision-making will disappear through the power of AI and ML automation, which will automatically combine purpose-built infrastructure, prescriptive architectures, and an ecosystem to deliver a workload-optimized, ultra-reliable infrastructure. The focus for cloud architects will therefore be on enabling business logic and innovation rather than how that logic maps to underlying infrastructure. Already we are making investments to turn this vision into reality, building custom silicon like the Infrastructure Processing Unit (IPU) for our new C3 VMs, or a liquid-cooled board for the new tensor processing unit. The latter, the TPU v4 platform, is likely the world's fastest, largest, and most efficient machine learning supercomputer. It can train large-scale workloads up to faster and cheaper than alternatives. Put another way, TPU v4 will nearly double the performance of critical ML and AI services at half the cost, unlocking new possibilities for what organizations can achieve when leveraging large-scale learning and inference for business services. These same IPUs and TPUs represent the foundation that will make it possible to automate cloud infrastructure decisions. They'll be able to support the telemetry data and ML-based analytics for proactive infrastructure recommendations that will increase the performance and reliability of workloads. Instead of determining hardware specifications and building the right infrastructure, you'll only need to specify a workload; AI and ML will take over the burden and recommend, configure, and identify the best options based on your budgetary, performance, and scaling requirements. What is most exciting for us is how this will enable a much more rapid pace of service innovation, which is the primary end goal of great cloud infrastructure. 2022-12-13 17:00:00
GCP Cloud Blog Harness the power of data and AI in your life science supply chain https://cloud.google.com/blog/topics/healthcare-life-sciences/data-driven-intelligent-visible-life-sciences-supply-chain/ Global life science supply chains are lengthy and complex, with many moving parts. One small disruption can create serious delays and affect your ability to deliver therapeutics for patients. Supply chain disruptors: Over the last few years, healthcare organizations have encountered a range of obstacles, from both internal and external factors, that have resulted in supply networks failing to get drugs and medical devices to where they need to be on time. These obstacles include labor and supply shortages, rising material costs, raw material constraints, geopolitical events, and unpredictable weather. How do you overcome supply chain disruptors that are out of your control? The intelligent healthcare supply chain: While many organizations have already implemented data-driven supply chains, organizations are still faced with the challenges of static, siloed, and differing functional supply chain applications; limited data exchange with key trading partners across upstream and downstream operations; and the inability to effectively leverage relevant external data. At Google Cloud, we believe the key to meaningful and effective change is a data-driven supply chain that allows you to achieve visibility, flexibility, and innovation. Our solutions help you prepare for the unpredictable and enhance the value of your data. By unlocking AI-driven insights, you can strengthen distribution networks and optimize your workflows and supply chains to become more reliable, intelligent, and sustainable. Some of the business challenges we address include predicting demand with Vertex AI Forecast; visual inspection for quality and predictive maintenance with pre-built ML models; automating and optimizing pickup and delivery operations with the Cloud Fleet Routing API; and real-time, holistic inventory visibility with Supply Chain Twin. Make sure you're prepared for the unpredictable with real-time visibility over your distribution networks. Learn how you can harness the power of AI and analytics and gain actionable insights that enhance your supply chain. 2022-12-13 17:00:00
GCP Cloud Blog How we validated the security controls of our new Confidential Space https://cloud.google.com/blog/products/identity-security/how-to-build-a-secure-confidential-space/ We're pleased to announce that Confidential Space, our new solution that allows you to control access to your sensitive data and securely collaborate in ways not previously possible, is now available in public Preview. First announced at Google Cloud Next, Confidential Space can offer many benefits for securely managing data from financial institutions, healthcare and pharmaceutical companies, and Web assets. Today we will explore some security properties of the Confidential Space system that make these solutions possible. Confidential Space uses a trusted execution environment (TEE), which allows data contributors to have control over how their data is used and which workloads are authorized to act on the data. An attestation process and hardened operating system image help to protect the workload, and the data that the workload processes, from an untrusted operator. The Confidential Space system has three core components: the workload, a containerized image with a hardened OS that runs in a cloud-based TEE (you can use Confidential Computing as the TEE, which offers hardware isolation and remote attestation capabilities); the attestation service, an OpenID Connect (OIDC) token provider that verifies the attestation quotes for the TEE and releases authentication tokens containing identification attributes for the workload; and a managed, cloud-protected resource, such as a Cloud Key Management Service key or Cloud Storage bucket, protected by an allow policy that grants access to authorized federated identity tokens (a rough sketch of this allow-policy idea follows this entry). The system can help ensure that access to protected resources is granted only to authorized workloads. Confidential Space also can help protect the workload from inspection and tampering before and after attestation. In our published Confidential Space Security Overview research paper, we explore several potential attack vectors against a Confidential Space system and how it can mitigate those threats. Notably, the research notes how Confidential Space can protect against malicious workload operators and administrators, and against malicious outside adversaries who are attempting to create rogue workload attestations. Through these protections, Confidential Space establishes confidence that only the agreed-upon workloads will be able to access sensitive data. The research also highlights some of the extensive security reviews and tests executed to identify potential weak points in the system, including domain expert reviews, meticulous security audits, and functional and fuzz testing. We asked the NCC Group for an independent security assessment of Confidential Space to analyze its architecture and implementation. NCC Group leveraged their experience reviewing other Google Cloud products to dig deep into Confidential Space. The NCC Group's extensive review, which included penetration testing and automated security scanning, found zero security vulnerabilities. In their report, the architecture review highlights how the security properties are achieved through the coordination of measured boot with vTPM attestation; a reduced attack surface with constricted administrator controls and access; workload measurement and an enforced launch policy; and a resource protection policy based on attested workload runtime properties. The combination of these attributes creates powerful security properties, gating release of data on runtime measurements of the actual workload code and environment instead of just user and service account credentials. Confidential Space provides a platform that includes: dependable workload attestation, including workload code measurement, arguments and environment, and operating environment claims; a fully managed attestation verification service that validates expected environmental attestation claims; a policy engine allowing arbitrarily complex or extremely simple policies to be created around those claims; and a mechanism to attach those policies to Google Cloud resources. Together, the platform provides a mechanism to ensure that data is only ever released into trusted workloads that will not abuse that data. Take a look at our documentation and codelab and take it for a spin. We hope that Confidential Space can inspire organizations to solve their use cases around multi-party collaboration with sensitive data; please contact your Google Cloud sales representative if you have any questions. 2022-12-13 17:00:00
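The "allow policy on a protected resource" component described above can be illustrated with a short, hedged sketch: granting a Cloud Storage role to identities federated through a workload identity pool, which in a Confidential Space setup would be backed by the attestation verifier. This is only a minimal illustration of the concept under assumptions, not the setup from Google's codelab; the project number, pool ID, and bucket name are placeholders, and the provider-side attribute conditions on attestation claims are only described in comments.

```python
# Hedged sketch: protect a Cloud Storage bucket with an allow policy that
# grants access only to identities federated through a workload identity pool.
# In a Confidential Space setup, the pool's provider is the attestation
# verifier, so only attested workloads can obtain these federated tokens.
# Project number, pool ID, and bucket name below are placeholders.
from google.cloud import storage

PROJECT_NUMBER = "123456789012"          # placeholder
POOL_ID = "confidential-space-pool"      # placeholder
BUCKET_NAME = "protected-collab-data"    # placeholder

# All identities in the pool. Which workloads may join the pool is decided by
# the attribute condition configured on the pool's provider (for example, a
# check on the workload's container image digest in the attestation token).
member = (
    f"principalSet://iam.googleapis.com/projects/{PROJECT_NUMBER}"
    f"/locations/global/workloadIdentityPools/{POOL_ID}/*"
)

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# Version 3 policies are required when bindings may carry conditions.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.version = 3
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {member}}
)
bucket.set_iam_policy(policy)
print(f"Granted objectViewer on gs://{BUCKET_NAME} to {member}")
```

The design point is that the data owner never grants access to a long-lived service account key; access hinges on the short-lived federated token, which the attestation service only issues to a workload whose measurements satisfy the provider's conditions.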
GCP Cloud Blog How a steel distributor reinvents its data science & ML workflows with Vertex AI https://cloud.google.com/blog/products/ai-machine-learning/how-steel-distributor-reinvents-its-data-science-ml-workflows-vertex-ai/ Nowadays, many companies have to ask themselves: do we want to wait until a startup disrupts our business model, or will we just do it on our own? Klöckner, a German steel and metal distributor, chose the second option, leveraging the power of Google Cloud to bring a more customer-focused, agile approach to an industry often mired in costly, antiquated, and time-consuming processes. In this blog post you will learn how Klöckner used Google Cloud Vertex AI services such as AutoML and Vertex AI Pipelines to improve internal processes through machine learning (ML), including increasing velocity in ML model building, reducing time to production, and providing solutions for production-level ML challenges such as versioning, lineage, and reproducibility. To put it in the words of the customer: "Vertex provided solutions for problems we were not aware of yet," said Matthias Berkenkamp, the engineer who spearheaded Klöckner's use of the AI service. Klöckner implemented a webshop offering a solution for their customers to purchase products through a digital channel. Although the shop was very well received, Klöckner discovered that a significant portion of their customers still ordered through phone calls, emails, and submitted PDFs. The order then required manual processing to get the data into their order systems, taking up valuable time and creating both inefficiencies and frequent errors. Klöckner recognized that ML could help improve and automate the purchase entry process. From the first ML model to the complexity of ML in production: Klöckner's data science team came up with different ideas and developed models for various use cases, such as automated extraction of text from PDF files attached to purchase order emails. However, they didn't have experience in implementing the model in their first ML application, which was called IEPO (Information Extraction from Purchase Orders). Moreover, Klöckner experienced communication challenges between the data scientists and the DevOps team. There was neither a software build pipeline nor documentation about the model training, which was an absolute no-go for a team living the GitOps approach: if it's not in Git, it cannot go to production. After many delays the model was finally deployed, but couldn't handle heavy loads such as orders with PDFs. No one had tested these edge cases before, as there were so many other problems to solve first. This neglected part between creating a model and finally running it in production was an underestimated beast. How Vertex AI provided the right tools: AutoML was the eye-opening entry point. Klöckner identified a new use case to be tested: a model that matches mail content (e.g., product numbers, descriptions, or specific parameters such as " mm Alu Rechteckrohr") to specific internal products. They adopted Vertex AI AutoML for this task. Because AutoML enabled faster model development and easier collaboration, it was simple for everyone to start: no internal processes to get a budget (the initial credit is often enough to train first models), no staffing of data scientists or anything like that was needed for a first try, and considerably less data was sufficient to test it out. Surprisingly, for the little effort it took, the results were decent and accessible for everyone. This experience turned the switch for many people involved, from data science through product owners to DevOps, which led to assessing Vertex AI to solve the challenges of bringing ML models into production. ML in production requires special tooling: After struggling with ML in production initially, the DevOps team started to look for and compare different tools that could help in handling the complexity. That was when they came across the open-source software Kubeflow Pipelines (KFP), an ML workload orchestrator. At first glance it seemed to be a good fit, as Klöckner already used Kubernetes clusters. There was still too much time-consuming overhead operating clusters manually, however. Additionally, knowledge about Docker containers and Kubernetes is required to use Kubeflow, which is overkill for most data scientists and an area in which the company's team was inexperienced. After having a closer look at Vertex AI, Klöckner encountered Kubeflow Pipelines again, but this time they were able to run their ML workloads on the fully managed service Vertex AI Pipelines. The software development kit (SDK) provided by Google made it easy to use Kubeflow Pipelines even for data scientists with little experience. This let the data science team and the DevOps team focus on the pipelines themselves instead of managing Kubernetes clusters or the overall infrastructure underneath. Further, the DevOps team felt better prepared this time, as they had collectively finished Coursera's ML Engineering for Production course. Hence, they better understood ML and MLOps concepts and practices. This helped them realize what went wrong the first time and what they could do to improve operating a project of this size across teams. It was critical not to reproduce issues from prior projects, and Vertex AI offered everything needed in an integrated bundle: from managed datasets over pipelining to metadata, versioning, governance, and much more. Not to mention the helping hand of other Google Cloud services such as Cloud Build and Artifact Registry for building, storing, and retrieving Docker container images. [Figure: Initial draft idea of the Vertex AI integration] The initial workflow: a good start, but imperfect. In their first attempt at the IEPO solution, each GitLab build pipeline started with the commit of a fully trained model. Data scientists manually did all training steps beforehand on their manually managed, long-running VM instances in Google Cloud. There were some scripts, e.g., for creating the train/test/validation data split or downloading PDFs to the respective machine, but each scientist needed to keep the workflow in mind or consult the different places where it was hopefully correctly written down. The contributors to the software build pipeline in GitLab cared about building a Docker image and the serving infrastructure around the model. For fear of breaking the model build and training, the data scientists did not upgrade their machines in the cloud: a simple change in environment, e.g., a simple library update, might have broken their VMs and the model training for several days. They felt that, having mostly mathematical backgrounds and more interest in preparing data and improving machine learning models, the maintenance of a VM should be none of their duties. They realized that most packages on the VMs in use were outdated by two years or more and partly publicly accessible, a real security nightmare for every administrator. Hence, the goal was to move away from everyone developing and hosting on their own, which resulted in different software versions, etc. There was a clear need for some harmonization across all users and environments. [Figure: The initial workflow of the IEPO solution] The result: more standardized and automated workflows for end-to-end ML in production. Trust and patience paid off. The new workflow using Vertex AI Pipelines addresses their previous problems. Everything that goes to production needs to go via Git, otherwise it won't be deployed. Data scientists needed to understand and learn how to work more like DevOps. A handful of workshops and trainings later, the data science team was convinced and understood the benefits of working through Git-based processes, such as collaborating better, faster, and more consistently. Now each commit to the Git repository triggers a GitLab pipeline. There, the model code gets dockerized and uploaded to the Google Artifact Registry, to which the Kubeflow Pipelines components have easy access. The subsequent build step defines the ML pipeline and creates it in Vertex AI. Data cleanup and splits, model training, and validation are all written in Python code with the help of the Vertex AI SDK for Python. Each Kubeflow Pipelines component can request access to a GPU for faster training or more memory for preprocessing of the data. When the model training is finished, the GitLab pipeline continues and deploys the model onto a Kubernetes cluster for serving (a minimal pipeline sketch in this spirit follows this entry). [Figure: Final workflow with GitLab CI and Google Cloud services integration] The data science team learned to use Vertex AI Pipelines with the KFP SDK. It took a bit of training to use it most efficiently, such as having several pipelines in parallel instead of one big one. This is done by splitting pipelines automatically depending on which language the data has, i.e., splitting by labels that represent language/country, as seen in the figure. [Figure: Training of language models (German and French) in parallel; on the right side you can see the labels, added due to the customer's feature request] To overcome the security issues with virtual machines, data scientists now only use short-lived JupyterLab instances on Vertex AI Workbench. By convention, the lifetime of such an instance is limited to one ticket or task: whenever someone creates a new ticket, they will receive the latest version. Google manages the machine images as well as security updates, and data scientists can concentrate on their actual work again. The improvements that Vertex AI brought to Klöckner's ML workflows have been significant. Experiments that are formalized into components are easier to share and reproduce. Also, parallel training is straightforward to set up using the predefined components and pipelines that they built with the help of their partner dida. Moreover, the impact gets even more tangible when looking at concrete process improvements. With Vertex AI, the time required for developing and training models was significantly reduced: whereas the initial IEPO model, developed on machines of individual ML experts, could take many days or weeks, it now takes hours to get through first iterations. And with a defined workflow and standard components that have transparent inputs and outputs, human errors are significantly reduced. There are fewer hidden black boxes and more sharing and collaboration. All of this helped get a purchase order fully processed into their ERP system in several minutes instead of half a day on average. What's next? The teams at Klöckner don't see their work as done yet. Now that they have declared workflows including Vertex AI as their "gold standard", the plan is to port other ML workloads and previously trained models to Vertex AI. And of course there is plenty of room for further improvement and optimization in their ML pipelines, such as automatically triggering continuous training pipelines when new data arrives. Similarly, on the serving side, model registration and live deployment of an improved model (if committed to master, after metrics comparison and an acceptance test) could be brought fully onto Vertex AI and automated as well, e.g., via Model Registry and Vertex AI Endpoints. Additionally, runtime metrics can be tracked to know how a model behaves, and data drift detection can provide a better understanding of when it's time for retraining. And last but not least, Klöckner already has many ideas for new ML use cases to simplify and accelerate their processes, e.g., a model that understands when a customer says "Hey, please execute the same order as last time." Whatever comes next, one thing is for sure: with Google Cloud and its Vertex AI platform, they have powerful technologies and resources to continue being a front-runner in their industry. Further links and material: Coursera course "ML Engineering for Production"; more MLOps-related articles and best practices: Best Practices for Managing Vertex AI Pipelines Code; Building Reusable ML Workflows with Pipeline Templates; Digitec Galaxus: Reinforcement Learning Using TFX; The MLOps Playbook: Best Practices for Ensuring Reliability of ML Systems; MLOps in Glassdoor: An à la Carte Approach; MLOps Whitepaper. We would like to thank Matthias Berkenkamp (former Sr. DevOps employee of Klöckner), Martin Schneider (Klöckner), Kenny Casagrande (Klöckner), and Kai Hachenberg (Google Cloud) for their help in creating this blog post. 2022-12-13 17:00:00
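The workflow described in the entry above (components packaged as containers, a pipeline defined in Python, and a run submitted to the managed Vertex AI Pipelines service) can be sketched with the Kubeflow Pipelines (KFP) SDK and the Vertex AI SDK for Python. The sketch below is a generic illustration under assumptions, not Klöckner's actual code: the project, region, bucket, pipeline name, and the toy component bodies are placeholders.

```python
# Minimal sketch of a Vertex AI Pipelines workflow with the KFP v2 SDK.
# Names (project, region, bucket, pipeline) are placeholders, not Klöckner's setup.
from kfp import dsl, compiler
from google.cloud import aiplatform


@dsl.component(base_image="python:3.10")
def split_data(raw_path: str, train_out: dsl.Output[dsl.Dataset]):
    """Toy data-preparation step: in a real pipeline this would read
    purchase-order data and write a train/test/validation split."""
    with open(train_out.path, "w") as f:
        f.write(f"train split derived from {raw_path}\n")


@dsl.component(base_image="python:3.10")
def train_model(train_data: dsl.Input[dsl.Dataset], model: dsl.Output[dsl.Model]):
    """Toy training step; resources for it are requested on the task below."""
    with open(model.path, "w") as f:
        f.write(f"model trained on {train_data.path}\n")


@dsl.pipeline(name="iepo-style-pipeline")
def pipeline(raw_path: str = "gs://my-bucket/purchase-orders"):
    split = split_data(raw_path=raw_path)
    train = train_model(train_data=split.outputs["train_out"])
    # Per-step resources; a GPU could also be requested here, e.g.
    # train.set_accelerator_type("NVIDIA_TESLA_T4").set_accelerator_limit(1)
    train.set_cpu_limit("4").set_memory_limit("16G")


if __name__ == "__main__":
    # Compile the pipeline to a job spec, then submit it to Vertex AI Pipelines.
    compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json")
    aiplatform.init(project="my-project", location="europe-west1",
                    staging_bucket="gs://my-bucket")
    job = aiplatform.PipelineJob(display_name="iepo-style-pipeline",
                                 template_path="pipeline.json")
    job.run()  # In CI, job.submit() returns without blocking on training.
```

In a GitLab CI setup of the kind described above, the compile-and-submit step would typically run after the Docker image build, with job.submit() used so the CI job does not block while the training pipeline runs on Vertex AI.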
