Posted 2021-05-19 05:32:08. RSS feed digest for 2021-05-19 05:00 (40 items)

Category | Site | Article title / trend word | Link URL | Frequent words & summary / search volume | Date registered
IT ITmedia all articles [ITmedia Mobile] 「Wear OS by Google」+「Tizen」=「Wear」 https://www.itmedia.co.jp/mobile/articles/2105/19/news059.html galaxywatch 2021-05-19 04:49:00
AWS AWS The Internet of Things Blog Part 2/2: Building Reliable IoT Device Software Using AWS IoT Core Device Advisor https://aws.amazon.com/blogs/iot/part-2-2-building-reliable-iot-device-software-using-aws-iot-core-device-advisor/ Part 2/2: Building Reliable IoT Device Software Using AWS IoT Core Device Advisor. This post was co-written by David Walters, Sr. Partner Solutions Architect, AWS IoT, and Pavan Kumar Bhat, Sr. Technical Product Manager, AWS IoT Device Ecosystem. Introduction: this is the second blog in a two-part series. In the first blog, I explained the importance of testing IoT devices and how AWS IoT Core Device Advisor works… 2021-05-18 19:45:35
AWS AWS The Internet of Things Blog Part 1/2: Building Reliable IoT Device Software Using AWS IoT Core Device Advisor https://aws.amazon.com/blogs/iot/part-1-2-building-reliable-iot-device-software-using-aws-iot-core-device-advisor/ Part 1/2: Building Reliable IoT Device Software Using AWS IoT Core Device Advisor. This post was co-written by David Walters, Sr. Partner Solutions Architect, AWS IoT, and Pavan Kumar Bhat, Sr. Technical Product Manager, AWS IoT Device Ecosystem. Introduction: Internet of Things (IoT) devices that fail to connect to the internet reliably, or are vulnerable to security threats, can be catastrophic to IoT device makers. An unreliable IoT… 2021-05-18 19:43:28
AWS AWS Government, Education, and Nonprofits Blog Paris-Saclay University uses AWS to advance data science through collaborative challenges https://aws.amazon.com/blogs/publicsector/paris-saclay-university-uses-aws-advance-data-science-collaborative-challenges/ Paris-Saclay University uses AWS to advance data science through collaborative challenges. This is a guest post by Maria Teleńczuk, research engineer at the Paris-Saclay Center for Data Science (CDS), and Alexandre Gramfort, senior research scientist at INRIA, the French National Institute for Research in Digital Science and Technology. Maria and Alexandre explain how they adapted their open source data challenge platform, RAMP, to train the models submitted by student challenge participants using Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, and how they leveraged AWS to support three student challenges. 2021-05-18 19:33:06
Program New questions (all tags) | teratail Using classes from project Parent in project Child in Visual Studio 2019 (C++/CLI on .NET Core 3.1) https://teratail.com/questions/339080?rss=all Using classes from project Parent in project Child in Visual Studio (C++/CLI on .NET Core). What I want to do: use the classes of project Parent from project Child. 2021-05-19 04:26:54
Program New questions (all tags) | teratail How to remove a specific element from an object (associative array) inside an array in JavaScript https://teratail.com/questions/339079?rss=all How to remove a specific element from an object (associative array) inside an array in JavaScript. For some reason I couldn't find anything even after searching, so I'm asking here. 2021-05-19 04:06:36
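The teratail question above asks how to remove a specific element from an array of objects in JavaScript. A minimal sketch of one common approach, `Array.prototype.filter`; the data and the `id` criterion are illustrative, not taken from the question:

```javascript
// filter() returns a NEW array containing only the elements that pass
// the test; the original array is left untouched.
// The data below is illustrative, not from the question itself.
const users = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
  { id: 3, name: 'Carol' },
];

// Keep every object except the one whose id is 2.
const removed = users.filter((user) => user.id !== 2);

console.log(removed); // objects with id 1 and 3 remain; `users` is unchanged
```

Because `filter` does not mutate the source array, this is usually the safest choice; if in-place removal is required, `findIndex` plus `splice` is the usual alternative.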
Overseas TECH Ars Technica Today’s best tech deals: Roku media streamers, Apple iPad Air, and more https://arstechnica.com/?p=1765506 apple 2021-05-18 19:35:47
Overseas TECH Ars Technica Florida water plant compromise came hours after worker visited malicious site https://arstechnica.com/?p=1765767 utilities 2021-05-18 19:31:59
Overseas TECH Ars Technica All fossil fuel exploration needs to end this year, IEA says https://arstechnica.com/?p=1765722 exploration 2021-05-18 19:13:22
Overseas TECH DEV Community WAO: How do you test software? https://dev.to/kallmanation/wao-how-do-you-test-software-l9f WAO: How do you test software? Cover photo by Michal Matlon on Unsplash. Wrong Answers Only: how do you test your software? Comment your wrong answer. 2021-05-18 19:12:08
Overseas TECH DEV Community How to configure Nginx configuration file in ubuntu for localhost port forwarding https://dev.to/avi9360/how-to-configure-nginx-configuration-file-in-ubuntu-for-localhost-port-forwarding-1hlj How to configure the Nginx configuration file in Ubuntu for localhost port forwarding. First: cd /etc/nginx. This takes you to the root dir of the Nginx server. Then: cd sites-available; vim YourSiteName. Now: server { listen 80 default_server; listen [::]:80 default_server; root /var/www/html; index index.html index.htm index.nginx-debian.html; server_name _; location / { try_files $uri $uri/ =404; } }. To save and exit from vim, use the command :wq. Now we want to update the sites-enabled dir: cd ../sites-enabled; vim YourSiteName, with the same server block as above. Now run your project/application and forward the port. In the given example the default port is used: cd yourProject; npx http-server -p …. Check out an in-depth guide to Nginx for more. Nginx is a reverse proxy that enables the user to host static and dynamic websites. 2021-05-18 19:12:07
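For the "localhost port forwarding" the title promises, a proxying server block is the usual approach. A minimal sketch; the listen port, the upstream port 8080, and the server_name are illustrative assumptions, not values from the article:

```nginx
# Illustrative sketch: forward requests arriving on port 80 to an app
# listening on localhost:8080. Ports and names here are assumptions.
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;    # the locally running app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

After editing, the usual flow is to validate with `nginx -t` and reload with `nginx -s reload` (or `systemctl reload nginx`) so the change takes effect without dropping connections.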
Overseas TECH DEV Community Load-balancing a gRPC service using Docker https://dev.to/useanvil/load-balancing-a-grpc-service-using-docker-2dfe Load-balancing a gRPC service using Docker. Night sweats: it's sometime after midnight and you toss and turn. In your slumber you are dreaming about getting a Slack alert that your production app is on fire from a random burst of traffic. After further inspection, you notice that one of your services seems to be having issues. You suspect this is due to some backpressure being created by read/write contentions in a shared queue, or any of a million other things. Every second spent trying to get your staging environment or PR deployment running with repro scenarios is a potential second of downtime for your service. Gasp! You wake up. Now you get to thinking: wouldn't it be nice if you could quickly bring up a few instances of your microservice locally and try some suspect edge cases out? Luckily, there is a quick and easy way to get set up to extend your docker-compose.yml with minimal impact to your workflow, allowing you to scale your services and load-balance gRPC requests. In this post we will cover: how to use docker-compose to scale a gRPC service; how to use NGINX as a gRPC proxy and load balancer; and how to inspect your running containers. Introduction: while using RESTful APIs is a great way to expose services externally in a human-readable way, there are a number of reasons why this may not be the best option for your internal services. One alternative is to use Remote Procedure Calls (gRPC) for this inter-service communication. Some advantages of this are: you define your message format and service calls using Protocol Buffers, which serve as contracts between clients and servers; a binary message format optimized to reduce bandwidth; it leverages modern HTTP/2 for communication; it supports bi-directional streaming connections; and both clients and servers have the perk of interoperability across languages. If this seems like something that would suit your needs, here's a helpful resource which provides great walkthroughs of setting up a client and server in several languages. For this post we'll be using Node.js, extending a starter example from the gRPC repo. Is this for me? So let's say you already have a microservice using gRPC, or maybe you don't and want to learn how to make one. You run a containerized workflow using Docker Compose for your dev environment. Maybe you are running many instances of your microservice in production already, through Docker Swarm, Kubernetes, or some other orchestration tool. How would you go about replicating this configuration locally? Well, ideally you could try to match up your local setup with what you have in production by using something like minikube or Docker Desktop with Kubernetes support (or others), but what if this is not an option, or you need to get something up and running quickly to test out a new feature or hotfix? The rest of this post will cover how to get set up to do just that, providing examples along the way. The sample project, make a gRPC service: if you already have a service that uses gRPC, you can follow along on how to change your docker-compose.yml to get up and running. If you don't, you can use our provided example for inspiration. Either way, you can go ahead and clone the repo to follow along: git clone … Running the code: everything you need is in our example repo and is run with three commands. Open three separate terminal windows. In one, start the server (this will build the images for you as well): docker compose up --scale grpc=… In another, monitor the container metrics: docker stats. Once the servers and proxy are up, run the client in another terminal: docker compose run --rm grpc src/client.js --target nginx:… --iterations … --batchSize … That's it! Did you notice in the container metrics that all your servers were being used? That seems easy, but let's take a look at how we did this. Reviewing the project, directory structure: the project directory structure breaks out a few things: src contains both the client and the server code; protos, the protocol buffer files used to define the gRPC messages and services; conf, the NGINX configuration file needed to proxy and LB the gRPC requests; docker, the Dockerfile used to run both the client and the server apps; docker-compose.yml, which defines the docker services we will need; and package.json, which defines the project dependencies for the client and the server. The dependencies for this project are in the package.json; these allow us to ingest the service and message definition in the protobuf and run the server and the client: @grpc/grpc-js, @grpc/proto-loader, async, google-protobuf, minimist. We are using a node image to install the dependencies and run the server or client code in a container. The Dockerfile for this looks like: FROM node:…; COPY . /home/node; WORKDIR /home/node; RUN yarn install; USER node; ENTRYPOINT ["node"]. For the client and server, we use the gRPC project's Node.js example with some modifications to suit us; we will get into details on these later. The NGINX proxy config looks like: user nginx; events { worker_connections …; } http { upstream grpc_server { server grpc:…; } server { listen … http2; location / { grpc_pass grpc://grpc_server; } } }. The main things happening here are that we are defining NGINX to listen on a port and proxy this HTTP/2 traffic to our gRPC server, defined as grpc_server. NGINX figures out that this serviceName:port combo resolves to more than one instance through Docker DNS. By default, NGINX will round-robin over these servers as the requests come in. (There is a way to set the load-balancing behavior to do other things, which you can learn more about in the comments of the repo.) We create three services through our docker-compose.yml: grpc runs the server; nginx runs the proxy to our grpc service; and cAdvisor gives us a GUI in the browser to inspect our containers. version: …; services: grpc: image: grpc-lb; build: { context: ., dockerfile: docker/Dockerfile }; volumes: ./src:/home/node/src:ro; ports: …; command: src/server.js. nginx: image: nginx; container_name: nginx; ports: …; depends_on: grpc; volumes: ./conf/nginx.conf:/etc/nginx/nginx.conf:ro. cAdvisor: <leaving out for brevity>. Scaling your service: this section is especially important if you already have a gRPC service and are trying to replicate the functionality from this example repo. There are a few notable things that need to happen in your docker-compose.yml file. Let your containers grow: make sure you remove any container_name from a service you want to scale, otherwise you will get a warning. This is important because docker will need to name your containers individually when you want to have more than one of them running. Don't port-clash: we need to make sure that if you are mapping ports, you use the correct format. The standard host port mapping in short syntax is HOST:CONTAINER, which will lead to port clashes when you attempt to spin up more than one container. We will use ephemeral host ports instead: instead of ports: - "HOST:CONTAINER", do this: ports: - "CONTAINER". Doing it this way, Docker will auto-magic'ly grab unused ports from the host to map to the container, and you won't know what these are ahead of time. You can see what they ended up being after you bring your service up. Get the proxy hooked up: using the nginx service in docker-compose.yml plus the nginx.conf should be all you need here. Just make sure that you replace the grpc service name and port with your service's name and port if it is different from the example. Bring it up: after working through the things outlined above, to start your proxy and service up with a certain number of instances, you just need to pass an additional argument, --scale <serviceName>=<number of instances>: docker compose up --scale grpc=… Normally this would require us to first spin up the scaled instances, check what ports get used, and add those ports to a connection pool list for our client; but we can take advantage of both the NGINX proxy and Docker's built-in DNS to reference the serviceName:port, getting both DNS and load balancing to all the containers for that service. Yay! If all is working, you will see logs from the nginx service when you run the client. Some highlights about the example code: let's call out some things we did in the example code that may be important for you. A good bit of syntax was changed to align with our own preferences, so here we mention the actual functionality changes. server.js: this is mostly the same as the original example, except that we added a random ID to attach to each server so we could see it in the responses. We also added an additional service call: // Create a random ID for each server: const id = crypto.randomBytes(…).toString('hex'); // New service call: function sayGoodbye(call, callback) { callback(null, { message: `See you next time ${call.request.name} from ${id}` }); }. helloworld.proto: here we added another service and renamed the messages slightly. The service definitions: service Greeter { rpc SayHello (Request) returns (Reply); rpc SayGoodbye (Request) returns (Reply); }. client.js: this is where we changed a lot of things. In broad strokes, we: collect the unique server IDs that respond to us, to log after all requests (const serversVisited = new Set(); … serversVisited.add(message.split(' ').pop()); … console.log({ serversVisited: Array.from(serversVisited) });); promisify the client function calls to let us await them and avoid callback hell (const sayHello = promisify(client.sayHello.bind(client)); const sayGoodbye = promisify(client.sayGoodbye.bind(client));); and perform batching, so we send off a chunk of requests at a time, delay for some time, then send another chunk off, until we burn through all our desired iterations. Here you can play with the batchSize and iterations arguments to test out where your service blows up in latency, throughput, or anything else you are monitoring, like CPU or memory utilization. // Handles the batching behavior we want: const numberOfBatchesToRun = Math.round(iterations / batchSize); timesSeries(numberOfBatchesToRun, /* function to run numberOfBatchesToRun times in series */ (next) => times(batchSize, fnToRunInBatches, next), /* function to run after all our requests are done */ () => console.log({ serversVisited: Array.from(serversVisited) })). Inspecting containers: you can use the handy command docker stats to get a view in your terminal of your containers. This is a nice and quick way to see the running containers' CPU, memory, and network utilization, but it shows you these live, with no history view. Alternatively, we provide a service in the docker-compose.yml that spins up a container running cAdvisor, which offers a GUI around these same useful metrics with user-friendly graphs. If you would rather run this as a one-off container instead of a service, remove the cAdvisor service and run this command in another terminal session instead (tested on macOS): docker run --rm --volume=/:/rootfs:ro --volume=/var/run/docker.sock:/var/run/docker.sock:ro --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --volume=/dev/disk/:/dev/disk:ro --publish=… --detach=true --name=cadvisor --privileged --device=/dev/kmsg --userns=host gcr.io/cadvisor/cadvisor:latest. Now open a browser and go to http://localhost:…/docker to see the list of containers. Here is a view of all four of the instances of my grpc service in action; you can see they all share the load during the client requests. Without load balancing, only a single instance would get all the traffic. Bummer. Watching for errors: now may be a good time for you to start tweaking the arguments to your client and seeing how this impacts your service. If you end up overwhelming it, you will start to see errors; this is when you know to start honing in on problem areas, depending on what types of errors you are seeing. Summary: in this post we have covered how to use Docker Compose to scale a service locally. This allows us to leverage NGINX as a proxy with load-balancing capabilities, and Docker's own DNS, to run multiple instances of a gRPC service. We also looked at how to inspect our running containers using docker stats and cAdvisor. No more night sweats for you! If you enjoyed this post and want to read more about a particular topic, like using Traefik instead of NGINX, we'd love to hear from you. Let us know at developers@useanvil.com. 2021-05-18 19:01:34
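The client-side batching the article describes (promisified calls fired in chunks, collecting which server answered each request) can be sketched without any gRPC dependency. Here `sayHello` is a stub standing in for the promisified gRPC client call, and the server-ID format is an illustrative assumption:

```javascript
// Sketch of the batching pattern from the article, self-contained.
// `sayHello` stands in for a promisified gRPC client call; a real client
// would be wrapped with util.promisify as the article describes.
const sayHello = (name) =>
  Promise.resolve({ message: `Hello ${name} from server-abc123` });

// Collect the unique server IDs seen across all requests.
const serversVisited = new Set();

async function runBatches(iterations, batchSize) {
  const numberOfBatches = Math.round(iterations / batchSize);
  for (let batch = 0; batch < numberOfBatches; batch++) {
    // Fire one batch of requests concurrently, then await them all.
    const replies = await Promise.all(
      Array.from({ length: batchSize }, (_, i) => sayHello(`client-${i}`))
    );
    // The responding server's ID is the last token of the message.
    for (const { message } of replies) {
      serversVisited.add(message.split(' ').pop());
    }
  }
  return Array.from(serversVisited);
}

runBatches(10, 5).then((servers) => console.log({ serversVisited: servers }));
```

With a real load-balanced client, the returned set would contain one ID per backend instance that handled traffic, which is exactly the signal the article uses to confirm NGINX is spreading requests across containers.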
Apple AppleInsider - Frontpage News Microsoft confirms Windows 10X isn't coming in 2021, may never launch https://appleinsider.com/articles/21/05/18/microsoft-confirms-windows-10x-isnt-coming-in-2021-may-never-launch?utm_medium=rss Microsoft confirms Windows 10X isn't coming in 2021, may never launch. Microsoft has confirmed that it won't release Windows 10X in 2021, and has suggested that the update, in the form it was announced, may never see the light of day. Credit: Microsoft. John Cable, Microsoft's head of Windows servicing and delivery, confirmed on Tuesday that Windows 10X won't arrive in 2021. Instead, Cable shared plans that Microsoft will incorporate features from the operating system into other parts of Windows. Read more 2021-05-18 19:41:45
Apple AppleInsider - Frontpage News Apple expands self-driving car fleet, reduces number of test drivers https://appleinsider.com/articles/21/05/18/apple-expands-self-driving-car-fleet-reduces-number-of-test-drivers?utm_medium=rss Apple expands self-driving car fleet, reduces number of test drivers. Apple has reportedly increased the number of self-driving cars it is testing on California roads, but has halved the number of drivers licensed to operate them. Credit: AppleInsider. According to data from the California Department of Motor Vehicles, the number of Apple's self-driving test vehicles on the road has grown; this marks the first increase in Apple's fleet since August. Read more 2021-05-18 19:07:05
Apple AppleInsider - Frontpage News Android 12's 'Material You' UI focuses on customizable colors https://appleinsider.com/articles/21/05/18/android-12s-material-you-ui-focuses-on-customizable-colors?utm_medium=rss Android 12's 'Material You' UI focuses on customizable colors. Android 12 will provide users with a more personalized appearance as part of its new interface, with the 'Material You' concept offering dynamic color selections as well as easier summoning of the Google Assistant. Marked as a fundamental change in how users can interact with their devices, Material You is a new user interface that effectively brings themes to Android. Rather than sticking to a single color palette or theme, colors used throughout the operating system and in apps can be altered in various ways. Changeable on a whim, the selections travel with the user via their Google account and are applied to all connected devices and services. Read more 2021-05-18 19:05:33
Overseas TECH Engadget President Biden reveals Ford's electric F-150 a day early in factory speech https://www.engadget.com/f-150-lightning-ev-ford-biden-reveal-early-193001820.html lightning 2021-05-18 19:30:01
Overseas TECH Engadget Google’s Project Starline is a ‘magic window’ for 3D telepresence https://www.engadget.com/google-project-starline-191228699.html Google's Project Starline is a 'magic window' for 3D telepresence. Google's Project Starline uses a combination of specialized hardware and computer vision technology to create a "magic window" for immersive video chat without a headset. 2021-05-18 19:12:28
Overseas TECH Engadget Google's latest AI tool claims to identify common skin conditions https://www.engadget.com/google-ai-powered-dermatology-assist-tool-helps-identify-skin-hair-nail-conditions-190955234.html Google's latest AI tool claims to identify common skin conditions. Google previewed a new AI-powered tool that helps anyone with a smartphone get more information about skin, hair, and nail conditions. 2021-05-18 19:09:55
Overseas TECH Engadget Google is building a more racially inclusive Android camera https://www.engadget.com/google-android-camera-pixel-inclusive-skin-tones-190340677.html Google is building a more racially inclusive Android camera. Today at Google I/O, Android VP Sameer Samat revealed that Google is also working to make its Android camera more inclusive, with support for a variety of darker skin tones and different types of hair. 2021-05-18 19:03:40
Overseas TECH Engadget Google explains how it will run on completely carbon-free energy by 2030 https://www.engadget.com/google-carbon-free-energy-2030-190100260.html ambitious 2021-05-18 19:01:00
Overseas science NYT > Science Things To Do At Home https://www.nytimes.com/2021/05/15/at-home/things-to-do-this-week.html climate 2021-05-18 19:39:13
Overseas TECH WIRED Everything Google Announced Today: Android, AI, Holograms https://www.wired.com/story/google-io-2021-highlights keynote 2021-05-18 19:34:45
Overseas science BBC News - Science & Environment Climate change: Ban new gas boilers from 2025 to reach net-zero https://www.bbc.co.uk/news/science-environment-57149059 energy 2021-05-18 19:29:28
News BBC News - Home Don't holiday in amber list countries - Boris Johnson https://www.bbc.co.uk/news/business-57158372 firms 2021-05-18 19:36:13
News BBC News - Home Climate change: Ban new gas boilers from 2025 to reach net-zero https://www.bbc.co.uk/news/science-environment-57149059 energy 2021-05-18 19:29:28
News BBC News - Home Cavani scores with stunning lob as Man Utd draw with Fulham https://www.bbc.co.uk/sport/football/57066741 Cavani scores with stunning lob as Man Utd draw with Fulham. Edinson Cavani scores a stunning long-range lob in his first game in front of fans at Old Trafford as Manchester United draw with relegated Fulham. 2021-05-18 19:31:19
News BBC News - Home Bamford and Roberts guarantee Leeds top-10 finish with win at Southampton https://www.bbc.co.uk/sport/football/57066732 Bamford and Roberts guarantee Leeds a top-10 finish with win at Southampton. Patrick Bamford's latest Premier League goal of the season helps Leeds beat Southampton in front of fans at St Mary's Stadium. 2021-05-18 19:27:43
Business Diamond Online - new articles "Why am I the only one who never gets transferred…": the "strategic perspective" missing in people who lament this - Career advancement from here on https://diamond.jp/articles/-/271026 personnel transfers 2021-05-19 05:00:00
Business Diamond Online - new articles Why struggling Mitsukoshi Isetan's medium-to-long-term plan, with its "return to the roots," gives grounds for hope - DOL Special Report https://diamond.jp/articles/-/271425 2021-05-19 04:55:00
Business Diamond Online - new articles The "essence of Japan's crisis" laid bare by its failed COVID response - Hitoshi Tanaka's "Eyes on the World" https://diamond.jp/articles/-/271506 dysfunction 2021-05-19 04:50:00
Business Diamond Online - new articles Why there is a "big gap" between Studio Alice's 28% jump in March sales and car-wash firm KeePer's 29% jump - Winners and losers under COVID! [Monthly] industry weather map https://diamond.jp/articles/-/268852 Why there is a "big gap" between Studio Alice's March sales jump and KeePer's jump - winners and losers under COVID, [monthly] industry weather map. When, exactly, will companies recover from the COVID crisis? 2021-05-19 04:45:00
Business Diamond Online - new articles Why five private railways, including Tokyu and Odakyu, beat last year's results in March on "one particular metric" - Winners and losers under COVID! [Monthly] industry weather map https://diamond.jp/articles/-/268851 year-earlier period 2021-05-19 04:35:00
Business Diamond Online - new articles Recycling legacy businesses instead of transforming: the "slow decline" of Japanese companies growing ever more dependent on overseas demand - A master of economic analysis cuts into the depths of market topics https://diamond.jp/articles/-/271503 Recycling legacy businesses instead of transforming: the slow decline of Japanese companies raising their overseas dependence. Japanese companies kept cutting costs even after their excess-capacity and debt problems were resolved, holding down wages and investment in people. 2021-05-19 04:30:00
Business Diamond Online - new articles Why JR Kyushu's March transport revenue, up 14.4% year on year, looks like an "unusual surge" - Winners and losers under COVID! [Monthly] industry weather map https://diamond.jp/articles/-/268849 year-earlier period 2021-05-19 04:25:00
Business Diamond Online - new articles Japanese stocks in a see-saw market: why "high-dividend & buyback" names are drawing attention - Policy & Market Lab https://diamond.jp/articles/-/271493 see-saw 2021-05-19 04:20:00
Business Diamond Online - new articles Mitsuoka Motor's founder on how a "Toyama sheet-metal shop" became a carmaker - Fall down seven times, get up eight https://diamond.jp/articles/-/271301 2021-05-19 04:15:00
Business Diamond Online - new articles Even Chinese-made vaccines are somewhat effective, yet Japan never went all out to secure vaccines - Yutaka Harada, Data Analysis https://diamond.jp/articles/-/271502 novel coronavirus 2021-05-19 04:10:00
Business Diamond Online - new articles Soichiro Tahara: "Japan can no longer stay ambiguous." What path should it take now, caught between the US and China? - Soichiro Tahara's "learning from the old to weigh the new" https://diamond.jp/articles/-/271501 2021-05-19 04:05:00
Business Toyokeizai Online Olympics or not: the "X-day" decision bearing down on Prime Minister Suga; lifting Tokyo's state of emergency holds the key | Domestic politics | Toyokeizai Online https://toyokeizai.net/articles/-/429195?utm_source=rss&utm_medium=http&utm_campaign=link_back domestic politics 2021-05-19 04:30:00
GCP Cloud Blog Google Cloud unveils Vertex AI, one platform, every ML tool you need https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-launches-vertex-ai-unified-platform-for-mlops/ Google Cloud unveils Vertex AI: one platform, every ML tool you need. Today at Google I/O we announced the general availability of Vertex AI, a managed machine learning (ML) platform that allows companies to accelerate the deployment and maintenance of artificial intelligence (AI) models. Vertex AI requires far fewer lines of code to train a model versus competitive platforms, enabling data scientists and ML engineers across all levels of expertise to implement Machine Learning Operations (MLOps) and efficiently build and manage ML projects throughout the entire development lifecycle. Today, data scientists grapple with the challenge of manually piecing together ML point solutions, creating a lag time in model development and experimentation and resulting in very few models making it into production. To tackle these challenges, Vertex AI brings together the Google Cloud services for building ML under one unified UI and API, to simplify the process of building, training, and deploying machine learning models at scale. In this single environment, customers can move models from experimentation to production faster, more efficiently discover patterns and anomalies, make better predictions and decisions, and generally be more agile in the face of shifting market dynamics. Through decades of innovation and strategic investment in AI at Google, the company has learned important lessons on how to build, deploy, and maintain ML models in production. Those insights and engineering have been baked into the foundation and design of Vertex AI, and will be continuously enriched by the new innovation coming out of Google Research. Now, for the first time, with Vertex AI data science and ML engineering teams can: access the AI toolkit used internally to power Google, which includes computer vision, language, conversation, and structured data, continuously enhanced by Google Research; deploy more useful AI applications faster with new MLOps features like Vertex Vizier, which increases the rate of experimentation, the fully managed Vertex Feature Store to help practitioners serve, share, and reuse ML features, and Vertex Experiments to accelerate the deployment of models into production with faster model selection; if your data needs to stay on-device or on-site, Vertex ML Edge Manager can deploy and monitor models on the edge with automated processes and flexible APIs; and manage models with confidence by removing the complexity of self-service model maintenance and repeatability with MLOps tools like Vertex Continuous Monitoring, Vertex ML Metadata, and Vertex Pipelines to streamline the end-to-end ML workflow. "We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production," said Andrew Moore, vice president and general manager of Cloud AI and Industry Solutions at Google Cloud. "We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work." "Enterprise data science practitioners hoping to put AI to work across the enterprise aren't looking to wrangle tooling. Rather, they want tooling that can tame the ML lifecycle. Unfortunately, that is no small order," said Bradley Shimmin, chief analyst for AI Platforms, Analytics and Data Management at Omdia. "It takes a supportive infrastructure capable of unifying the user experience, plying AI itself as a supportive guide, and putting data at the very heart of the process, all while encouraging the flexible adoption of diverse technologies." ModiFace uses Vertex AI to revolutionize the beauty industry: ModiFace, a part of L'Oréal, is a global market leader in augmented reality and artificial intelligence for the beauty industry. ModiFace creates new services for consumers to try beauty products such as hair color, makeup, and nail color virtually, in real time. ModiFace is using the Vertex AI platform to train its AI models for all of its new services. For example, ModiFace's skin diagnostic is trained on thousands of images from L'Oréal's Research & Innovation, the company's dedicated research arm. Bringing together L'Oréal's scientific research with ModiFace's AI algorithm, this service allows people to obtain a highly precise, tailor-made skincare routine. "We provide an immersive and personalized experience for people to purchase with confidence, whether it's a virtual try-on at web check-out or helping to understand what brand product is right for each individual," said Jeff Houghton, chief operating officer at ModiFace, part of L'Oréal. "With more and more of our users looking for information at home, on their phone, or at any other touchpoint, Vertex AI allowed us to create technology that is incredibly close to actually trying the product in real life." Essence is built for the algorithmic age with the help of Vertex AI: Essence, a global data- and measurement-driven media agency that is part of WPP, is extending the value of AI models made by its data scientists by integrating their workflows with developers using Vertex AI. Historically, AI models created by data scientists remained unchanged once created, but this way of operating has evolved with the digital world, as human behaviors and channel content are constantly changing. With Vertex AI, developers and data analysts can update models regularly to meet these fast-changing business needs. "At Essence, we are measured by our ability to keep pace with our clients' rapidly evolving needs," said Mark Bulling, SVP Product Innovation at Essence. "Vertex AI gives our data scientists the ability to quickly create new models based on the change in environment, while also letting our developers and data analysts maintain models in order to scale and innovate. The MLOps capabilities in Vertex AI mean we can stay ahead of our clients' expectations." A unified data science and ML platform for all skill levels: [figure: MLOps lifecycle] One of the biggest challenges we hear from customers is finding the talent to work on machine learning projects. Nearly two in five companies cite a lack of technical expertise as a major roadblock to using AI technologies. Vertex AI is a single platform with every tool you need, allowing you to manage your data, prototype, experiment, deploy models, interpret models, and monitor them in production, without requiring formal ML training. This means your data scientists don't need to be ML engineers. With Vertex AI, they have the ability to move fast, but with a safety net that their work is always something they are able to launch. The platform assists with responsible deployment and ensures you move faster from testing and model management to production, and ultimately to driving business results. "Within Sabre's Travel AI technology, Google's Vertex AI gives our technologists the tools they need to quickly experiment and deploy intelligent products across the travel ecosystem. This advancement proves how the power of the partnership between our teams helps accelerate Sabre's vision for the future of personalized travel," said Sundar Narasimhan, SVP and President, Sabre Labs and Product Strategy. "As Iron Mountain provides more sophisticated technology and digital transformation services to our customers, having a consolidated platform like Vertex AI will enable us to streamline building and running ML pipelines and simplify MLOps for our AI/ML teams," said Narasimha Goli, Vice President, Innovation, Global Digital Solutions, Iron Mountain. Getting started with Vertex AI: to learn more about how to get started on the platform, check out our ML on GCP best practices and this practitioner's guide to MLOps whitepaper, and sign up to attend our Applied ML Summit for data scientists and ML engineers in June. We can't wait to partner with you to apply groundbreaking machine learning technology to grow your skills, career, and business. For additional support getting started on Vertex AI, Accenture and Deloitte have created design workshops, proof-of-value projects, and operational pilots to help you get up and running on the platform. (Google Cloud internal research, May 2021.) Related article: Announcing our new Professional Machine Learning Engineer certification. Learn about the Google Cloud Professional Machine Learning Engineer certification. Read Article 2021-05-18 19:50:00
