AWS |
AWS |
Meet Anna Green Head, of ISV & DNB APJ Team at AWS |
https://www.youtube.com/watch?v=KVH-xpDkU3Y
|
Meet Anna Green, Head of the ISV & DNB APJ Team at AWS. Here at AWS we welcome diverse thoughts and perspectives to challenge convention and push technology boundaries for our customers and partners. Meet Anna Green, Head of the ISV & DNB APJ team at AWS, as she shares more. Learn more about AWS Careers. #AWS #AWSCareers #HereAtAWS |
2021-02-12 16:51:12 |
AWS |
AWS |
Airtasker on AWS: Customer Story |
https://www.youtube.com/watch?v=B37yNDWYwdM
|
Airtasker on AWS: Customer Story. In this episode of Community Chats, Aley Hammer interviews Tim Fung, founder and CEO at Airtasker. Tim shares how Airtasker has been able to manage its recent rapid growth, how Airtasker uses AWS to help connect customers, and advice for future founders beginning their startup journey. |
2021-02-12 16:46:55 |
AWS |
AWS |
Nielsen Achieves Up to 20% Efficiency in Daily Compute Utilization with Amazon EMR |
https://www.youtube.com/watch?v=Kz9fpCL9u8E
|
Nielsen Achieves Up to 20% Efficiency in Daily Compute Utilization with Amazon EMR. Matthew Krepsik, Global Head of Analytics at Nielsen, shares how the company leveraged Amazon EMR for its advertising attribution business and is seeing up to 20% efficiency on a daily basis in overall compute utilization. Learn more about AWS for the advertising and marketing industry. |
2021-02-12 16:42:58 |
python |
New posts tagged Python - Qiita |
yukicoder contest 282 participation report |
https://qiita.com/c-yan/items/cf749658eb1a952de5c1
|
|
2021-02-13 01:50:51 |
python |
New posts tagged Python - Qiita |
Automatically renumbering IPython console indexes in text |
https://qiita.com/ezotaka/items/4bbe8d2ee992374743ac
|
Automatically renumbering IPython console indexes in text. I have sometimes wanted to include IPython console output in a Qiita article. |
2021-02-13 01:36:51 |
python |
New posts tagged Python - Qiita |
Making Molcar look like real cars with CycleGAN |
https://qiita.com/torakichi0101/items/5eba51ec5b15cb94f365
|
Making Molcar look like real cars with CycleGAN. PUI PUI Molcar is a short anime, a few minutes long, broadcast every Tuesday. As a puppet-character anime, it brilliantly expresses the charm of the "Molcars", cars in the shape of adorable guinea pigs, and the human folly that lies behind them. With Valentine's Day coming up, Choco, Potato, and Shiromo present a Valentine card: add it to a box of chocolates, or write a message on it. |
2021-02-13 01:08:40 |
js |
New posts tagged JavaScript - Qiita |
Sorting out optional chaining errors once more |
https://qiita.com/terukazu/items/73d32b7907cc76328fb9
|
Using optional chaining in Storybook causes an error; changing the target setting in tsconfig.json to a different ES version makes the error go away. The post sorts out what is related to the error occurring. |
2021-02-13 01:30:39 |
Program |
New questions on all tags | teratail |
Python: output fallback text when BeautifulSoup's find().text has no matching element |
https://teratail.com/questions/322191?rss=all
|
Python: output fallback text when BeautifulSoup's find().text has no matching element. What I want to achieve: when I run find('aaaa').text with Python's BeautifulSoup and find('aaaa') matches no element, it raises "AttributeError: 'NoneType' object has no attribute 'text'". When the element is missing, I would like to output fallback text instead of raising this error. |
2021-02-13 01:40:42 |
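A minimal sketch of the fallback the question above asks for: `find()` returns `None` when nothing matches, so check for `None` before touching `.text` ('aaaa' is the question's own placeholder tag name; the HTML here is illustrative):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<div><p>hello</p></div>", "html.parser")

# find() returns None when no element matches, so calling .text on the
# result raises AttributeError. Check for None and fall back instead.
tag = soup.find("aaaa")  # no <aaaa> element exists in this document
text = tag.text if tag is not None else "not found"
print(text)  # -> not found
```

The same pattern works for `find()` with attributes or CSS selectors via `select_one()`, which also returns `None` on no match.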
Program |
New questions on all tags | teratail |
About SAKURA VPS additional storage (NFS) |
https://teratail.com/questions/322190?rss=all
|
About SAKURA VPS additional storage (NFS). The manual gives this configuration example: after setting a local IP address and netmask on the network interface, verify connectivity to the additional NFS storage's local IP address. |
2021-02-13 01:40:09 |
Program |
New questions on all tags | teratail |
I want to implement AdMob interstitial ads |
https://teratail.com/questions/322189?rss=all
|
I want to implement AdMob interstitial ads. What I want to achieve: I would like to create an AdMob interstitial ad object in Android Studio (Java), but apparently a lot has changed recently, and I cannot work out how to do it even after searching online. |
2021-02-13 01:28:50 |
Program |
New questions on all tags | teratail |
Finding the distance between a 3D point and a line in Unity |
https://teratail.com/questions/322188?rss=all
|
unity |
2021-02-13 01:20:54 |
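The math behind the question above is independent of Unity: the distance from a point p to the infinite line through a with direction d is |cross(p - a, d)| / |d|. A minimal sketch in Python (in Unity itself, Vector3.Cross and Vector3's magnitude give the same pieces; the sample points are made up):

```python
import math

def point_line_distance(p, a, d):
    """Distance from point p to the infinite line through a with
    direction d, using the cross-product formula |(p - a) x d| / |d|."""
    ap = [p[i] - a[i] for i in range(3)]
    cross = [
        ap[1] * d[2] - ap[2] * d[1],
        ap[2] * d[0] - ap[0] * d[2],
        ap[0] * d[1] - ap[1] * d[0],
    ]
    num = math.sqrt(sum(c * c for c in cross))
    den = math.sqrt(sum(c * c for c in d))
    return num / den

# point (0, 1, 0) against the x-axis through the origin: distance is 1
print(point_line_distance((0, 1, 0), (0, 0, 0), (1, 0, 0)))  # -> 1.0
```

For the distance to a line *segment* rather than an infinite line, you would additionally clamp the projection of p onto d to the segment's endpoints.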
Program |
New questions on all tags | teratail |
I want to implement a calculation feature!!! |
https://teratail.com/questions/322187?rss=all
|
I am currently trying to implement a dynamic calculation feature in Ruby on Rails that calculates and displays monthly and daily spending. |
2021-02-13 01:19:39 |
AWS |
New posts tagged AWS - Qiita |
"What is a VPC?" |
https://qiita.com/houka/items/63cf6bbcdb304dbfa9ab
|
Characteristics of a VPC: many subnets can be created within a VPC. |
2021-02-13 01:19:13 |
AWS |
New posts tagged AWS - Qiita |
Connecting to RDS for Oracle from EC2 |
https://qiita.com/tomcl34/items/252e44394a40608942c0
|
I was able to log in to EC2 without any problem, so the network path from the bastion host into AWS should have been fine. |
2021-02-13 01:09:13 |
AWS |
New posts tagged AWS - Qiita |
AWS S3: when CSS changes are not reflected |
https://qiita.com/quryu/items/b1c28b25986014010a32
|
AWS S3: when CSS changes are not reflected. To start, I uploaded index.html, main.js, and style.css to S3 using boto. |
2021-02-13 01:00:41 |
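A likely culprit when a stylesheet uploaded to S3 "does not apply" is a missing Content-Type: boto3's upload_file defaults to a generic binary type, which browsers will not render as CSS. A hedged sketch of setting it explicitly (the bucket name and the upload call are illustrative assumptions; only the MIME helper runs offline):

```python
import mimetypes

def content_type_for(filename):
    """Guess the Content-Type for an upload from the file extension,
    falling back to a generic binary type for unknown extensions."""
    mime, _ = mimetypes.guess_type(filename)
    return mime or "binary/octet-stream"

def upload_site(files, bucket):
    """Upload files with an explicit ContentType so browsers apply them.
    Requires boto3 and AWS credentials; the bucket name is an assumption."""
    import boto3  # imported here so the helper above stays testable offline
    s3 = boto3.client("s3")
    for name in files:
        s3.upload_file(
            name, bucket, name,
            ExtraArgs={"ContentType": content_type_for(name)},
        )

# upload_site(["index.html", "main.js", "style.css"], "example-bucket")
```

If the objects were already uploaded without a type, re-uploading them (or copying them over themselves with a new ContentType) and invalidating any CDN cache in front of the bucket is the usual fix.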
Azure |
New posts tagged Azure - Qiita |
I obtained the Microsoft Certified Azure Fundamentals certification (Exam AZ-900) |
https://qiita.com/k_hnd/items/e7c237ebf20e1329f265
|
Summary: by studying as described above, I was able to exceed the passing score and receive the Microsoft Certified Azure Fundamentals certification. |
2021-02-13 01:53:15 |
Overseas TECH |
DEV Community |
Changing careers into tech: Why perseverance and mindset matters. |
https://dev.to/ritaxcorreia/changing-careers-into-tech-why-perseverance-and-mindset-matters-3g6j
|
Changing careers into tech: Why perseverance and mindset matters. It was the year of change for all of us. We had to adapt to a new reality where draconian words like "lockdown" and "pandemic" became the new normal. It was also the year I decided to turn my life around and change careers… |
2021-02-12 16:39:36 |
Apple |
AppleInsider - Frontpage News |
Apple consolidates all iCloud Activation Lock resources to one support portal |
https://appleinsider.com/articles/21/02/12/apple-consolidates-all-icloud-activation-lock-resources-to-one-support-portal
|
Apple consolidates all iCloud Activation Lock resources to one support portal. Apple has further consolidated resources on how to turn off Activation Lock on iPhone, iPad, or iPod touch on a new portal page linking to all support avenues that users can access. Users who need to remove an Activation Lock from a device they no longer have access to now have an easier way to find support on how to do so. The disparate support pages for the task have all been consolidated into Apple's new "Turn off Activation Lock" support portal. The consolidation was first spotted by Reddit user amq. |
2021-02-12 16:50:11 |
Apple |
AppleInsider - Frontpage News |
Valentine's Day deals: save up to 50% on Twelve South iPhone, AirPods, Mac accessories |
https://appleinsider.com/articles/21/02/12/valentines-day-deals-save-up-to-50-on-twelve-south-iphone-airpods-mac-accessories
|
Valentine's Day deals: save up to 50% on Twelve South iPhone, AirPods, and Mac accessories. Valentine's Day sales are in full effect as the holiday approaches, with accessory maker Twelve South issuing special deals knocking up to half off cases, chargers, and more this weekend. The sale runs through February and features quite a few goodies for your Mac, iPhone, or AirPods. |
2021-02-12 16:08:32 |
Overseas TECH |
Engadget |
The best deals we found this week: $50 off Apple AirPods and more |
https://www.engadget.com/weekly-deals-apple-airpods-ipad-air-amazon-fire-7-tablet-presidents-day-tech-sales-163043264.html
|
The best deals we found this week: $50 off Apple AirPods and more. The week leading up to Valentine's Day and Presidents Day proved to be a boon for tech sales across the web. A bunch of Apple products saw deep discounts, including the AirPods Pro and the latest iMac, and Amazon knocked down the prices of most of its Fire tablets. |
2021-02-12 16:30:43 |
Overseas TECH |
Network World |
Juniper targets WAN automation with new software suite |
https://www.networkworld.com/article/3607472/juniper-targets-wan-automation-with-new-software-suite.html#tk.rss_all
|
Juniper targets WAN automation with new software suite. Juniper has unwrapped a suite of automation software it says will help users ensure their wide-area network and cloud-connected services are running properly and cost-effectively. The company's Paragon Automation suite promises to help eliminate manual tasks and workflow processes to make sure WAN operations are working as expected and, if not, quickly fix problems. The suite, which is aimed at large enterprises and service operators, includes an amalgamation of technology from Juniper's existing NorthStar controller and Healthbot network diagnostics packages, combined with other organically developed features and with software it got in its recent Netrounds acquisition. |
2021-02-12 16:15:00 |
Overseas TECH |
CodeProject Latest Articles |
Getting Notifications Upon Your Application Writing an Error to Events Log |
https://www.codeproject.com/Tips/392041/Getting-Notifications-Upon-Your-Application-Writin
|
event |
2021-02-12 16:30:00 |
Overseas TECH |
CodeProject Latest Articles |
Neon Intrinsics for Optimized Math, Networking, and String Operations |
https://www.codeproject.com/Articles/5294663/Neon-Intrinsics-for-Optimized-Math-Networking-and
|
intrinsics |
2021-02-12 16:29:00 |
Overseas Science |
NYT > Science |
Was Stonehenge a ‘Secondhand’ Monument? |
https://www.nytimes.com/2021/02/12/science/stonehenge-archaeology-wales-parker-pearson.html
|
Was Stonehenge a 'Secondhand' Monument? The Neolithic site appears to have begun as a monument in Wales that was dismantled and carried many miles as part of a larger migration, a new study suggests. |
2021-02-12 16:24:23 |
Overseas Science |
BBC News - Science & Environment |
Covid-19: How England's hotel quarantine will differ from Australia's |
https://www.bbc.co.uk/news/health-56030384
|
australia |
2021-02-12 16:08:49 |
Finance |
Financial Services Agency website |
Published the minutes of the 23rd Council of Experts Follow-up Meeting on the Stewardship Code and Corporate Governance Code. |
https://www.fsa.go.jp/singi/follow-up/gijiroku/20210126.html
|
minutes |
2021-02-12 17:54:00 |
Finance |
Financial Services Agency website |
Published the agenda for the 24th Council of Experts Follow-up Meeting on the Stewardship Code and Corporate Governance Code. |
https://www.fsa.go.jp/singi/follow-up/siryou/20210215.html
|
agenda |
2021-02-12 17:54:00 |
Finance |
Financial Services Agency website |
Updated the list of institutional investors that have announced acceptance of the Stewardship Code. |
https://www.fsa.go.jp/singi/stewardship/list/20171225.html
|
institutional investors |
2021-02-12 17:53:00 |
Finance |
Financial Services Agency website |
Updated "Information on illegal financial business operators". |
https://www.fsa.go.jp/ordinary/chuui/index.html
|
|
2021-02-12 17:00:00 |
Finance |
Financial Services Agency website |
Updated the page on preparations for the permanent cessation of LIBOR publication. |
https://www.fsa.go.jp/policy/libor/libor.html
|
libor |
2021-02-12 17:00:00 |
Finance |
Financial Services Agency website |
Published additional Q&A on the capital adequacy ratio and TLAC regulations in light of the cessation of LIBOR publication. |
https://www.fsa.go.jp/news/r2/ginkou/20210212.html
|
libor |
2021-02-12 17:00:00 |
Finance |
Financial Services Agency website |
Posted Commissioner Himino's speech at the 121st current-affairs meeting of the Capital Markets Research Institute. |
https://www.fsa.go.jp/common/conference/danwa/index_kouen.html
|
capital markets |
2021-02-12 16:25:00 |
Overseas News |
Japan Times latest articles |
WE League chair ‘grateful’ for Yoshiro Mori’s sexist remarks |
https://www.japantimes.co.jp/sports/2021/02/12/soccer/we-league-chair-yoshiro-mori-sexist-remarks/
|
japan |
2021-02-13 01:39:01 |
Overseas News |
Japan Times latest articles |
Naomi Osaka shows gentle touch to reach last 16 at Australian Open |
https://www.japantimes.co.jp/sports/2021/02/12/tennis/naomi-osaka-australian-open/
|
Naomi Osaka shows gentle touch to reach last 16 at Australian Open. Naomi Osaka had to deal with a butterfly landing on her nose Friday, but was otherwise little bothered as she breezed into the last 16. |
2021-02-13 01:24:43 |
News |
BBC News - Home |
Covid: Virus cases are going down across the UK |
https://www.bbc.co.uk/news/health-56041029
|
infectious |
2021-02-12 16:11:11 |
News |
BBC News - Home |
Libby Squire: Pawel Relowicz jailed for student's murder |
https://www.bbc.co.uk/news/uk-england-humber-56042200
|
libby |
2021-02-12 16:12:37 |
News |
BBC News - Home |
Kids Company founder and former trustees win disqualification fight |
https://www.bbc.co.uk/news/uk-56044000
|
directors |
2021-02-12 16:33:13 |
News |
BBC News - Home |
Colin Norris: Serial killer nurse case referred to Court of Appeal |
https://www.bbc.co.uk/news/uk-scotland-43216615
|
appeal |
2021-02-12 16:14:18 |
News |
BBC News - Home |
Covid hotel quarantine less strict than Australia's |
https://www.bbc.co.uk/news/uk-56037420
|
analysis |
2021-02-12 16:24:26 |
News |
BBC News - Home |
Lulu the dog inherits $5m from deceased US owner |
https://www.bbc.co.uk/news/world-us-canada-56045881
|
expenses |
2021-02-12 16:30:51 |
News |
BBC News - Home |
Covid-19: How England's hotel quarantine will differ from Australia's |
https://www.bbc.co.uk/news/health-56030384
|
australia |
2021-02-12 16:08:49 |
News |
BBC News - Home |
'Contradiction with reality' - Mourinho questions Bale's social media post |
https://www.bbc.co.uk/sport/football/56044972
|
gareth |
2021-02-12 16:17:48 |
Business |
Diamond Online - New Articles |
Yellen to create a senior Treasury post in charge of climate policy - from WSJ |
https://diamond.jp/articles/-/262766
|
climate change |
2021-02-13 01:14:00 |
Hokkaido |
The Hokkaido Shimbun |
Hokkaido extends intensive countermeasures until March 7; Sapporo's shortened business hours continue through February |
https://www.hokkaido-np.co.jp/article/510982/
|
novel coronavirus |
2021-02-13 01:08:35 |
Hokkaido |
The Hokkaido Shimbun |
Matsuki received donations above the legal limit; corrects ¥11 million received via four support groups in his parents' names |
https://www.hokkaido-np.co.jp/article/510978/
|
Constitutional Democratic Party |
2021-02-13 01:06:28 |
Azure |
Azure updates |
Public preview: Azure Cost Management + Billing’s cost allocation now available in Azure Government |
https://azure.microsoft.com/ja-jp/updates/public-preview-azure-cost-management-billing-s-cost-allocation-now-available-in-azure-government/
|
Public preview: Azure Cost Management + Billing's cost allocation now available in Azure Government. Simplify your cost reporting in Azure Government using Azure Cost Management + Billing's cost allocation. |
2021-02-12 16:43:50 |
GCP |
Cloud Blog |
NOAA and Google Cloud: A data match made in the cloud |
https://cloud.google.com/blog/products/data-analytics/noaa-datasets-on-google-cloud-for-environmental-exploration/
|
NOAA and Google Cloud: A data match made in the cloud. With Valentine's Day upon us, there is nothing the U.S. National Oceanic and Atmospheric Administration (NOAA) loves more than having our environmental data open and accessible to all, and the cloud is the perfect match for NOAA's goal to disseminate its environmental data more broadly than ever before. As part of the Google Cloud Public Datasets Program and NOAA's Big Data Program, NOAA and Google signed a contract with the potential to span years, so we could continue our partnership and expand our efforts to provide timely, open, equitable, and useful public access to NOAA's unique, high-quality environmental information.
Democratizing data analysis and access for everyone. NOAA sits on a treasure trove of environmental information, gathering and distributing scientific data about everything from the ocean to the sun. Our mission includes understanding and predicting changes in climate, weather, oceans, and coasts to help conserve and manage ecosystems and natural resources. But like many federal agencies, we struggle with data discoverability and with adopting emerging technologies. The reality is that on our own it would be difficult to share our massive volumes of data at the rate people want it. Partnering with cloud service providers such as Google and migrating to cloud platforms like Google Cloud lets people access our datasets without driving up costs or increasing the risks that come with using federal data access services. It also unlocks other powerful processing technologies, like BigQuery and Google Cloud Storage, that enhance data analysis and improve accessibility. Google Cloud and other cloud-based platforms help us achieve our vision of making our data free and open, and also align well with the overall agenda of the U.S. Government: the Foundations for Evidence-Based Policymaking Act, signed in January, generally requires U.S. Government data to be open and available to the public. Working with cloud service providers such as Google Cloud helps NOAA democratize access to NOAA data; it's truly a level playing field. Everyone has the same access in the cloud, and it puts the power of data in the hands of many rather than a select few.
Another critical benefit of data-dissemination public-private partnerships like our relationship with Google Cloud is their ability to jumpstart the economy and promote innovation. In the past, the bar for an entrepreneur to enter a market like the private weather industry was extremely high. You needed to be able to build and maintain your own systems and infrastructure, which limited entry to larger organizations with the right resources and connections available to them. Today, to access our data on Google Cloud, all you need to get started is a laptop and a Google account. You can spin up your own HPC cluster on Google Cloud, run your model, and put it out into the marketplace without being burdened with the long-term maintenance. As a result, we see small businesses able to leverage our data and operate in areas where previously they simply didn't exist.
Public-private data partnerships at the heart of innovation. NOAA's datasets have contributed to a number of innovative use cases that highlight the benefits of public-private data partnerships. Here are some projects to date.
Acoustic detection of humpback whales. Using years of underwater audio recordings from the Pacific Islands Fisheries Science Center of NOAA, Google helped develop algorithms to identify humpback whale calls. Historically, passive acoustic monitoring to identify whales was done manually by somebody sitting with a pair of headphones on all day, but audio event analysis helped automate these tasks, and moved conservation goals forward by decades. Researchers now have new techniques at their disposal that help them automatically identify the presence of humpback whales so they can mitigate anthropogenic impacts on whales, such as ship traffic and other offshore activities. Our National Centers for Environmental Information established an archive of the full collection of multi-year acoustic data, which is now hosted on Google Cloud as a public dataset. [Figure: Megaptera novaeangliae, the humpback whale, and a spectrogram of its call, one of the audio events found in the dataset, with time on the x-axis and frequency on the y-axis.]
Weather forecasting for fire detection. One of the most important aspects of our mission is the protection of life, and the cloud and other advanced technologies are driving the discovery of new potential life-saving capabilities that keep people informed and safe. NOAA's GOES satellites provide critical datasets that help detect fires, identify their locations, and track their movements in near real time. Combining our data and Google Earth Engine's data analysis capabilities, Google recently introduced a new wildfire boundary map to provide deeper insights for areas impacted by ongoing wildfires. Using data from NOAA's GOES satellites and Google Earth Engine, Google creates a digital polygon to represent the approximate wildfire impact area on Search and Google Maps. Start exploring and experimenting with NOAA's datasets, including those found on Google Cloud Public Datasets. If you're already using our public datasets, we'd love to hear from you: what data are you using, and how? What are you looking forward to using the most? |
2021-02-12 17:00:00 |
GCP |
Cloud Blog |
Why Yahoo picked BigQuery for scale, performance, and cost |
https://cloud.google.com/blog/products/data-analytics/benchmarking-cloud-data-warehouse-bigquery-to-scale-fast/
|
Why Yahoo picked BigQuery for scale performance and costAs the owner of Analytics Monetization and Growth Platforms at Yahoo one of the core brands of Verizon Media I m entrusted to make sure that any solution we select is fully tested across real world scenarios Today we just completed a massive migration of Hadoop and enterprise data warehouse EDW workloads to Google Cloud s BigQuery andLooker In this blog we ll walk through the technical and financial considerations that led us to our current architecture Choosing a data platform is more complicated than just testing it against standard benchmarks While benchmarks are helpful to get started there is nothing like testing your data platform against real world scenarios We ll discuss the comparison that we did between BigQuery and what we ll call the Alternate Cloud AC where each platform performed best and why we chose BigQuery and Looker We hope that this can help you move past standard industry benchmarks and help you make the right decision for your business Let s get into the details What is a MAW and how big is it Yahoo s MAW Media Analytics Warehouse is the massive data warehouse which houses all the clickstream data from Yahoo Finance Yahoo Sports Yahoo com Yahoo Mail Yahoo Search and various other popular sites on the web that are now part of Verizon Media In one month in Q running on BigQuery we measured the following stats for active users number of queries and bytes scanned ingested and stored Who uses the MAW data and what do they use it for Yahoo executives analysts data scientists and engineers all work with this data warehouse Business users create and distribute Looker dashboards analysts write SQL queries scientists perform predictive analytics and the data engineers manage the ETL pipelines The fundamental questions to be answered and communicated generally include How are Yahoo s users engaging with the various products Which products are working best for users And how could we improve the 
products for better user experience The Media Analytics Warehouse and analytics tools built on top of it are used across different organizations in the company Our editorial staff keeps an eye on article and video performance in real time our business partnership team uses it to track live video shows from our partners our product managers and statisticians use it for A B testing and experimentation analytics to evaluate and improve product features and our architects and site reliability engineers use it to track long term trends on user latency metrics across native apps web and video Use cases supported by this platform span across almost all business areas in the company In particular we use analytics to discover rends in access patterns and in which partners are providing the most popular content helping us assess our next investments Since end user experience is always critical to a media platform s success we continually track our latency engagement and churn metrics across all of our sites Lastly we assess which cohorts of users want which content by doing extensive analyses on clickstream user segmentation If this all sounds similar to questions that you ask of your data read on We ll now get into the architecture of products and technologies that are allowing us to serve our users and deliver these analytics at scale Identifying the problem with our old infrastructureRolling the clock back a few years we encountered a big problem We had too much data to process to meet our users expectations for reliability and timeliness Our systems were fragmented and the interactions were complex This led to difficulty in maintaining reliability and it made it hard to track down issues during outages That leads to frustrated users increasingly frequent escalations and the occasional irate leader Managing massive scale Hadoop clusters has always been Yahoo s forte So that was not an issue for us Our massive scale data pipelines process petabytes of data every day and 
they worked just fine This expertise and scale however were insufficient for our colleagues interactive analytics needs Deciding solution requirements for analytics needsWe sorted out the requirements of all our constituent users for a successful cloud solution AEach of these various usage patterns resulted in a disciplined tradeoff study and led to four critical performance requirements Performance RequirementsLoading data requirement Load all previous day s data by next day at am At forecasted volumes this requires a capacity of more than TB day Interactive query performance to seconds for common queriesDaily use dashboards Refresh in less than secondsMulti week data Access and query in less than one minute The most critical criteria was that we would make these decisions based on user experience in a live environment and not based on an isolated benchmark run by our engineers In addition to the performance requirements we had several system requirements that spanned the multiple stages that a modern data warehouse must accommodate simplest architecture scale performance reliability interactive visualization and cost System RequirementsSimplicity and architectural integrationsANSI SQL compliantNo op serverlessーability to add storage and compute without getting into cycles of determining the right server type procuring installing launching etc Independent scaling of storage and computeReliabilityReliability and availability monthly uptimeScaleStorage capacity hundreds of PBQuery capacity exabyte per monthConcurrency queries with graceful degradation and interactive responseStreaming ingestion to support s of TB dayVisualization and interactivityMature integration with BI toolsMaterialized views and query rewriteCost efficient at scaleProof of concept strategy tactics resultsStrategically we needed to prove to ourselves that our solution could meet the requirements described above at production scale That meant that we needed to use production data and even 
production workflows in our testing To focus our efforts on our most critical use cases and user groups we focused on supporting dashboarding use cases with the proof of concept POC infrastructure This allowed us to have multiple data warehouse DW backends the old and the new and we could dial up traffic between them as needed Effectively this became our method of doing a staged rollout of the POC architecture to production as we could scale up traffic on the CDW and then do a cut over from legacy to the new system in real time without needing to inform the users Tactics Selecting the contenders and scaling the dataOur initial approach to analytics on an external cloud was to move a three petabyte subset of data The dataset we selected to move to the cloud also represented one complete business process because we wanted to transparently switch a subset of our users to the new platform and we did not want to struggle with and manage multiple systems After an initial round of exclusions based on the system requirements we narrowed the field to two cloud data warehouses We conducted our performance testing in this POC on BigQuery and “Alternate Cloud To scale the POC we started by moving one fact table from MAW note we used a different dataset to test ingest performance see below Following that we moved all the MAW summary data into both clouds Then we would move three months of MAW data into the most successful cloud data warehouse enabling all daily usage dashboards to be run on the new system That scope of data allowed us to calculate all of the success criteria at the required scale of both data and users Performance testing resultsRound Ingest performance The requirement is that the cloud load all the daily data in time to meet the data load service level agreement SLA of “by am the next day ーwhere day was local day for a specific time zone Both the clouds were able to meet this requirement Bulk ingest performance TieRound Query performanceTo get an apples to 
apples comparison we followed best practices for BigQuery and AC to measure optimal performance for each platform The charts below show the query response time for a test set of thousands of queries on each platform This corpus of queries represents several different workloads on the MAW BigQuery outperforms AC particularly strongly in very short and very complex queries Half of the queries tested in BigQuery finished in less than sec compared to only on AC Even more starkly only of the thousands of queries tested took more than minutes to run on BigQuery whereas almost half of the queries tested on AC took minutes or more to complete Query performance BigQueryRound ConcurrencyOur results corroborated this study from AtScale BigQuery s performance was consistently outstanding even as the number of concurrent queries expanded Concurrency at scale BigQueryRound Total cost of ownershipThough we can t discuss our specific economics in this section we can point to third party studies and describe some of the other aspects of TCO that were impactful We found the results in this paper from ESG to be both relevant and accurate to our scenarios The paper reports that for comparable workloads BigQuery s TCO is to less than competitors Other factors we considered included Capacity and Provisioning EfficiencyScaleWith PB of storage and EB of query over those bytes each month AC s PB limit for a unified DW was a significant barrier Separation of Storage and ComputeAlso with AC you cannot buy additional compute without buying additional storage which would lead to significant and very expensive overprovisioning of compute Operational and Maintenance CostsServerlessWith AC we needed a daily standup to look at ways of tuning queries a bad use of the team s time We had to be upfront about which columns would be used by users a guessing game and alter physical schema and table layout accordingly We also had a weekly “at least once ritual of re organizing the data for better query 
performance. This required reading the entire data set and sorting it again for optimal storage layout and query performance. We also had to think ahead of time, at least by a couple of months, about what kind of additional nodes would be required, based on projections around capacity utilization. We estimated that this tied up significant time for engineers on the team and translated into a recurring cost in person-hours per week. The architectural complexity on the alternate cloud, stemming from its inability to handle this workload in a true serverless environment, resulted in our team writing additional code to manage and automate data distribution and aggregation, optimization of data loads, and querying. This required us to dedicate effort equivalent to two full-time engineers to design, code, and manage tooling around alternate cloud limitations. During a time of material expansion, this cost would go up further. We included that personnel cost in our TCO. With BigQuery, administration and capacity planning have been much easier, taking almost no time. In fact, we barely even talk within the team before sending additional data over to BigQuery. With BigQuery we spend little to no time on maintenance or performance-tuning activities.

Productivity Improvements

One of the advantages of using Google BigQuery as the database was that we could now simplify our data model and also unify our semantic layer by leveraging a then-new BI tool, Looker. We timed how long it took our analysts to create a new dashboard using BigQuery with Looker and compared it to a similar development on the AC with a legacy BI tool. The time for an analyst to create a dashboard went from one to four hours to just minutes, a productivity improvement across the board. The single biggest reason for this improvement was a much simpler data model to work with, and the fact that all the datasets could now live together in a single database. With hundreds of dashboards and analyses conducted every month, saving about one hour per dashboard returns thousands of person-hours in productivity to the organization. The way BigQuery handles peak workloads also drove a huge improvement in user experience and productivity versus the AC. As users logged in and started firing their queries on the AC, they would get stuck because of the workload. Instead of a graceful degradation in query performance, we saw a massive queueing up of workloads. That created a frustrating cycle of back-and-forth between users who were waiting for their queries to finish and the engineers who would be scrambling to identify and kill expensive queries to allow other queries to complete.

TCO Summary

Across these dimensions (finances, capacity, ease of maintenance, and productivity improvements), BigQuery was the clear winner, with a lower total cost of ownership than the alternate cloud.

Lower TCO: BigQuery

Round: The Intangibles

At this point in our testing, the technical outcomes were pointing solidly to BigQuery. We had very positive experiences working with the Google account, product, and engineering teams as well. Google was transparent, honest, and humble in its interactions with Yahoo. In addition, the data analytics product team at Google Cloud conducts monthly meetings of a customer council that have been exceedingly valuable. Another reason we saw this kind of success with our prototyping project and eventual migration was the Google team with whom we engaged. The account team, backed by some brilliant support engineers, stayed on top of issues and resolved them expertly.

Support and overall customer experience: Google

POC Summary

We designed the POC to replicate our production workloads, data volumes, and usage loads. Our success criteria for the POC were the same SLAs that we have for prod. Our strategy of mirroring a subset of our production with the POC paid off well. We fully tested the capabilities of the data warehouses, and consequently we have very high confidence that the chosen tech products and support team will meet our SLAs at our current load and future scale. Lastly, the POC scale and design are sufficiently representative of our prod workloads that other teams within Verizon can use our results to inform their own choices. We've seen other teams in Verizon move to BigQuery, at least partly informed by our efforts. Here's a roundup of the overall proof-of-concept trial that helped us pick BigQuery as the winner. With these results, we concluded that we would move more of our production work to BigQuery by expanding the number of dashboards that hit the BigQuery backend as opposed to the alternate cloud. The experience of that rollout was very positive, as BigQuery continued to scale in storage, compute, concurrency, ingest, and reliability as we added more and more users, traffic, and data. I'll explore our experience fully using BigQuery in production in the second blog post of this series. |
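The "no capacity planning" point in the post can be illustrated with the standard `bq` command-line tool: adding new data to BigQuery is a load job against a dataset, with no node provisioning or resizing beforehand. This is only a sketch; the project, dataset, table, and bucket names below are hypothetical, not from the original post.

```shell
# Hypothetical names throughout: my-project, ad_analytics, impressions, my-bucket.
# Create a dataset (a one-time step; no cluster sizing involved).
bq mk --dataset my-project:ad_analytics

# Load new Parquet files; BigQuery allocates storage and compute on demand.
bq load --source_format=PARQUET \
    ad_analytics.impressions \
    'gs://my-bucket/impressions/*.parquet'

# Query immediately, with no performance tuning or vacuum/sort maintenance.
bq query --use_legacy_sql=false \
    'SELECT campaign_id, COUNT(*) AS n
     FROM ad_analytics.impressions
     GROUP BY campaign_id'
```

The contrast with the alternate cloud described above is that none of these steps requires projecting node counts months ahead.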
2021-02-12 17:00:00 |
GCP |
Cloud Blog |
Migrate to regional backend services for Network Load Balancing |
https://cloud.google.com/blog/products/networking/migrate-to-regional-backend-services-for-network-load-balancing/
|
Migrate to regional backend services for Network Load Balancing

With Network Load Balancing, Google Cloud customers have a powerful tool for distributing external TCP and UDP traffic among virtual machines in a Google Cloud region. To make it easier for our customers to manage incoming traffic and to control how the load balancer behaves, we recently added support for backend services to Network Load Balancing. This provides improved scale, velocity, performance, and resiliency to our customers in their deployments, all in an easy-to-manage way.

As one of the earliest members of the Cloud Load Balancing family, Network Load Balancing uses a 5-tuple hash consisting of the source and destination IP addresses, protocol, and source and destination ports. Network load balancers are built using Google's own Maglev, which load balances all traffic that comes into our data centers and front-end engines at our network edges, and can scale to millions of requests per second, optimizing for latency and performance with features like direct server return and minimizing the impact of unexpected faults on connection-oriented protocols. In short, Network Load Balancing is a great Layer 4 load balancing solution if you want to preserve a client IP address all the way to the backend instance and perform TLS termination on the instances.

We now support backend services with Network Load Balancing, a significant enhancement over the prior approach, target pools. A backend service defines how our load balancers distribute incoming traffic to attached backends and provides fine-grained control over how the load balancer behaves. This feature now provides a common, unified data model for all our load balancing family members and accelerates the delivery of exciting features on Network Load Balancing. As a regional service, a network load balancer has one regional backend service. In this blog post we share some of the new features and benefits you can take advantage of with regional backend services, and how to migrate to them. Stay tuned for subsequent blogs, where we'll share some novel ways customers are using Network Load Balancing, upcoming features, and ways to troubleshoot regional backend services.

Regional backend services bring the benefits

Choosing a regional backend service as your load balancer brings a number of advantages to your environment. Out of the gate, regional backend services provide:

- High-fidelity health checking with unified health checks. With regional backend services you can now take full advantage of load balancing health check features, freeing yourself from the constraints of legacy HTTP health checks. For compliance reasons, TCP health checks with support for custom request and response strings, or HTTPS, were a common request from Network Load Balancing customers.
- Better resiliency with failover groups. With failover groups, you can designate one instance group as primary and another as secondary, and fail over the traffic when the health of the instances in the active group drops below a certain threshold. For more control over the failover mechanism, you can use an agent such as keepalived or pacemaker, and expose a healthy or failing health check based on changes of state of the backend instance.
- Scalability and high availability with Managed Instance Groups. Regional backend services support Managed Instance Groups as backends. You can now specify a template for your backend virtual machine instances and leverage autoscaling based on CPU utilization or other monitoring metrics.

In addition to the above, you will be able to take advantage of connection draining for the connection-oriented protocol TCP, and faster programming times for large deployments.

Migrating to regional backend services

You can migrate from target pools to regional backend services in five simple steps:

1. Create unified health checks for your backend service.
2. Create instance groups from the existing instances in the target pool.
3. Create a backend service and associate it with the newly created health checks.
4. Configure your backend service and add the instance groups.
5. Run get-health on your configured backend service to make sure the set of backends is accurate and their health status is determined.

Then use the set-target API to update your existing forwarding rules to point to the newly created backend service.

UDP with regional backend services

Google Cloud networks forward UDP fragments as they arrive. In order to forward the UDP fragments of a packet to the same instance for reassembly, set session affinity to NONE. This indicates that maintaining affinity is not required, and hence the load balancer uses a 5-tuple hash to select a backend for unfragmented packets but a 3-tuple hash for fragmented packets.

Next steps

With support for regional backend services with Network Load Balancing, you can now use high-fidelity health checks (including TCP), get better performance in programming times, use a uniform data model for configuring your load balancing backends, whether for Network Load Balancing or others, and get feature parity with Layer 4 Internal Load Balancing with support for connection draining and failover groups. Learn more about regional backend services here and get a head start on your migration. We have a compelling roadmap for Network Load Balancing ahead of us, so stay tuned for more updates.

Related Article: Google Cloud networking in depth: Cloud Load Balancing deconstructed. Take a deeper look at the Google Cloud networking load balancing portfolio. |
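The five migration steps in the post map naturally onto `gcloud` commands. The following is a sketch only: the resource names (my-hc, my-ig, my-bs, my-fr), region/zone, instance names, port, and request/response strings are all hypothetical placeholders, and a real migration would use your existing target-pool instances and forwarding rule.

```shell
# 1. Create a unified TCP health check (custom request/response strings are
#    one of the new capabilities called out in the post).
gcloud compute health-checks create tcp my-hc \
    --region=us-central1 --port=80 \
    --request="PING" --response="PONG"

# 2. Create an instance group from the existing instances in the target pool.
gcloud compute instance-groups unmanaged create my-ig --zone=us-central1-a
gcloud compute instance-groups unmanaged add-instances my-ig \
    --zone=us-central1-a --instances=vm-1,vm-2

# 3. Create a regional backend service associated with the health check.
gcloud compute backend-services create my-bs \
    --region=us-central1 --load-balancing-scheme=EXTERNAL \
    --protocol=TCP \
    --health-checks=my-hc --health-checks-region=us-central1

# 4. Add the instance group as a backend.
gcloud compute backend-services add-backend my-bs \
    --region=us-central1 \
    --instance-group=my-ig --instance-group-zone=us-central1-a

# 5. Verify backend health, then repoint the existing forwarding rule.
gcloud compute backend-services get-health my-bs --region=us-central1
gcloud compute forwarding-rules set-target my-fr \
    --region=us-central1 --backend-service=my-bs

# For UDP workloads, set session affinity to NONE so fragments of a datagram
# hash (3-tuple) to the same backend for reassembly, per the post.
gcloud compute backend-services update my-bs \
    --region=us-central1 --session-affinity=NONE
```

Traffic cuts over when `set-target` switches the forwarding rule, so running `get-health` first helps avoid sending traffic to an empty or unhealthy backend set.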
2021-02-12 17:00:00 |