Posted: 2023-03-08 02:28:49 RSS feed 2023-03-08 02:00 digest (32 items)

Category / Site / Article title, trend words / Link URL / Frequent words, summary / search volume / Date registered
AWS AWS News Blog Subscribe to AWS Daily Feature Updates via Amazon SNS https://aws.amazon.com/blogs/aws/subscribe-to-aws-daily-feature-updates-via-amazon-sns/ Subscribe to AWS Daily Feature Updates via Amazon SNS: Some years back I showed you how to Subscribe to AWS Public IP Address Changes via Amazon SNS. Today I am happy to tell you that you can now receive timely, detailed information about releases and updates to AWS via the same simple mechanism, Daily Feature Updates. Simply subscribe to the topic arn:aws:sns:us-east-1:…:aws-new-feature-updates using the … 2023-03-07 16:05:15
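The post's mechanism is an ordinary SNS subscription. A minimal sketch with boto3, assuming an email subscription; the topic's account ID is elided above, so the ARN below uses a placeholder:

import boto3

sns = boto3.client("sns", region_name="us-east-1")  # the topic lives in us-east-1

response = sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:aws-new-feature-updates",  # placeholder account ID
    Protocol="email",              # SNS first emails a confirmation link
    Endpoint="you@example.com",    # hypothetical subscriber address
)
print(response["SubscriptionArn"])  # "PendingConfirmation" until the link is clicked

SNS also supports protocols such as sqs and lambda if you would rather process the updates programmatically.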
AWS AWS Marketplace Create catchment areas using drive times with Redshift and AWS Data Exchange https://aws.amazon.com/blogs/awsmarketplace/create-catchment-areas-drive-times-redshift-aws-data-exchange/ Create catchment areas using drive times with Redshift and AWS Data Exchange: Spatial data is a key ingredient for many analytical use cases, such as route optimization, location-based marketing, asset tracking, or environmental risk assessment. Bulk geospatial tasks like geocoding and generating isoline polygons have traditionally required complex APIs or highly specialized software, not to mention the Extract, Transform, Load (ETL) processes involved in those approaches. CARTO has … 2023-03-07 16:18:02
AWS AWS Desktop and Application Streaming Blog Streaming from interface VPC endpoints for Regulated environments with AppStream 2.0 https://aws.amazon.com/blogs/desktop-and-application-streaming/streaming-from-vpc-endpoints-with-appstream-2-0/ Streaming from interface VPC endpoints for regulated environments with AppStream 2.0: Customers with strict compliance requirements, such as the financial industry, healthcare, and government sectors, use End User Compute (EUC) solutions to regulate access and centralize tooling. For these organizations, users are often required to connect to a Virtual Private Network (VPN) to access the private corporate network. In this blog I explain how users with such … 2023-03-07 16:42:07
AWS AWS Management Tools Blog Build Cloud Operations skills using the new AWS Observability Training https://aws.amazon.com/blogs/mt/build-cloud-operations-skills-using-the-new-aws-observability-training/ Build Cloud Operations skills using the new AWS Observability Training: Full-stack observability at AWS includes AWS-native Application Performance Monitoring (APM) and open source solutions, giving you the ability to understand what is happening across your technology stack at any time. AWS Observability lets you collect, correlate, aggregate, and analyze telemetry in your network, infrastructure, and applications in cloud, hybrid, or on-premises environments, so you can gain … 2023-03-07 16:23:32
AWS AWS What's the best way to transfer large amounts of data from one Amazon S3 bucket to another? https://www.youtube.com/watch?v=FDzMSKeNW5s What's the best way to transfer large amounts of data from one Amazon S3 bucket to another? For more details, see the Knowledge Center article associated with this video, in which Kashif shows you the best way to transfer large amounts of data from one Amazon S3 bucket to another. Subscribe: More AWS videos, More AWS events videos. ABOUT AWS: Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, are using AWS to lower costs, become more agile, and innovate faster. #AWS #AmazonWebServices #CloudComputing 2023-03-07 16:59:51
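For a modest number of objects, a server-side copy keeps the bytes inside S3 instead of routing them through your machine. A minimal boto3 sketch (bucket names are hypothetical; the video itself likely also covers heavier options such as S3 Batch Operations for very large buckets):

import boto3

s3 = boto3.resource("s3")

# Server-side copy of every object; copy() performs a managed multipart
# transfer, so large objects never stream through the client.
for obj in s3.Bucket("my-source-bucket").objects.all():
    s3.meta.client.copy(
        {"Bucket": "my-source-bucket", "Key": obj.key},
        "my-destination-bucket",
        obj.key,
    )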
AWS New posts tagged lambda - Qiita [C# .NET] Start and stop EC2 instances with AWS Lambda https://qiita.com/bota_bota/items/5b34271233cbfcbd93d5 awslambda 2023-03-08 01:26:07
js New posts tagged JavaScript - Qiita Pitfalls I hit when fetching Niconico Live comments with node.js https://qiita.com/boxfish_jp/items/95763a300425c98bb287 nodejs 2023-03-08 01:28:34
js New posts tagged JavaScript - Qiita Notes on using Jest https://qiita.com/AsherN/items/f94fdafd05dca944608f sum.js function sum(a, b) 2023-03-08 01:14:06
AWS New posts tagged AWS - Qiita [C# .NET] Start and stop EC2 instances with AWS Lambda https://qiita.com/bota_bota/items/5b34271233cbfcbd93d5 awslambda 2023-03-08 01:26:07
Docker New posts tagged docker - Qiita Running a Python script in a separate container with airflow https://qiita.com/tkosht/items/e74d5d34e07d9cf2d94a airflow 2023-03-08 01:10:25
Overseas TECH Ars Technica Microsoft makes Outlook for Mac free, no Office or Microsoft 365 required https://arstechnica.com/?p=1921015 windows 2023-03-07 16:34:30
Overseas TECH MakeUseOf Are Tracking Cookies Spyware and Can You Disable Them? https://www.makeuseof.com/are-tracking-cookies-spyware/ tracking 2023-03-07 16:30:16
Overseas TECH MakeUseOf How to Merge Folders and Files in Windows 10 and 11 https://www.makeuseof.com/merge-folders-files-windows/ windows 2023-03-07 16:15:17
Overseas TECH MakeUseOf Online Resources to Feel Your Best During Each Stage of Your Menstrual Cycle https://www.makeuseof.com/optimize-well-being-each-stage-menstrual-cycle/ responds 2023-03-07 16:15:17
Overseas TECH DEV Community Read and Write Python Json https://dev.to/max24816/read-and-write-python-json-h6i Read and Write Python JSON: JSON (JavaScript Object Notation) is a lightweight, text-based format for representing data in a structured form. JSON is often used for exchanging data between a web server and a client in web applications, and it can be used with any programming language that can parse and generate JSON data. In this article we will learn how to read and write a JSON file using Python, which has the built-in json package to handle it. A Python dictionary and JSON are similar in syntax, but they are different things: a Python dictionary is a built-in data structure that stores a collection of key-value pairs in a mutable way and exists only in memory, whereas JSON is a lightweight data-interchange format that can be kept on physical storage for later use.

Let's see how to write JSON to a file using Python. Using the write ("w") mode in a with statement and json.dump, we convert the dictionary to a JSON file (the values for age and phone are illustrative; they are garbled in the source):

import json

data = {"name": "George", "age": 30, "phone": "555-0100"}
with open("jsonfile.json", "w") as file:
    json.dump(data, file)

How to read a JSON file using Python: we use the read ("r") mode in a with statement and json.load to convert the JSON data back to a dictionary:

import json

with open("jsonfile.json", "r") as file:
    data = json.load(file)
print(data)

Output: {'name': 'George', 'age': 30, 'phone': '555-0100'}

Explore other related articles: Python List Comprehension / How to combine two dictionaries in Python using different methods / How to check if a key exists in a dictionary in Python / Python try-except tutorial / Python classes and objects tutorial / Python recursion function tutorial 2023-03-07 16:43:24
Overseas TECH DEV Community Six Years on DEV, Already? https://dev.to/jarvisscript/six-years-on-dev-3j4c Six Years on DEV, Already? I just got my six-year badge. While I've been a member of DEV for six years, I lurked for the first few years and didn't really become active till the pandemic hit and I needed more community. Blogging here helps me cement what I am learning; one of the best ways to learn is to explain it to someone else. It's been a great six years and I hope DEV continues to grow. I'm trying to help by being a DEV Trusted Member, where I can help point out great content. It's been great to express ideas here and share the things I build. There are a lot of great developers and great content here. JarvisScript git push 2023-03-07 16:06:42
Apple AppleInsider - Frontpage News AirPods Pro 2 are on sale for $199 at Verizon, beating Amazon by $35 https://appleinsider.com/articles/23/03/07/airpods-pro-2-are-on-sale-for-199-at-verizon-beating-amazon-by-35?utm_medium=rss AirPods deal hunters can pick up Apple AirPods Pro 2 for $199 at Verizon with free shipping, beating Amazon's price by $35. Save on AirPods Pro 2: Verizon's promo discounts Apple's powerful earbuds, with free shipping or free express store pickup at select locations. Read more 2023-03-07 16:54:52
Apple AppleInsider - Frontpage News Apple Pay is coming to South Korea very soon https://appleinsider.com/articles/23/03/07/apple-pay-is-coming-to-south-korea-very-soon?utm_medium=rss Apple Pay is coming to South Korea very soon: Revealed alongside the yellow iPhone, Apple is officially launching Apple Pay in South Korea after months of regulatory review by the country's financial regulator. The company announced on Tuesday that the iPhone 14 and iPhone 14 Plus are available in a new yellow color; furthermore, in the Korean version of its press release, Apple included that it is about to launch Apple Pay in South Korea. Read more 2023-03-07 16:04:45
Overseas TECH Engadget Sonos speakers will support Apple Music spatial audio starting March 28th https://www.engadget.com/sonos-speakers-will-support-apple-music-spatial-audio-starting-march-28th-161505844.html?src=rss You won't have to buy a HomePod to listen to Apple Music spatial audio in your living room. Sonos has confirmed that its speakers will support Apple Music's Dolby Atmos playback from March 28th, and you won't need the new Era 300 to experience the more immersive format either: Sonos says the Arc and second-gen Beam soundbars will also handle spatial audio tracks. Play-series speakers and older Sonos soundbars unsurprisingly won't support spatial audio, as they were built around conventional stereo and surround output; you'll need to upgrade if you want the grander audio experience, unfortunately. The news makes the Era 300 considerably more appealing. Before today, Sonos was only committed to supporting spatial audio through Amazon Music Unlimited, so this effectively doubles the potential audience: Statista's figures for the second quarter of last year gave Amazon Music and Apple Music broadly similar shares of the market. Unless you're a Spotify die-hard, where spatial audio isn't really an option as we write this, there's a real chance you can try this feature yourself. It's not certain if other speaker brands will support Apple Music spatial audio; we've asked Apple for comment. For now, though, this gives Sonos an edge over competitors that might not offer Atmos music at any price point. Whether or not it fares well against Apple's own hardware is another matter: if you're looking for spatial audio support at the lowest price possible, the HomePod is decidedly more affordable than the Era 300. We won't be surprised if the Sonos model sounds better, but it also represents a larger investment. This article originally appeared on Engadget. 2023-03-07 16:15:05
Cisco Cisco Blog Cisco Demonstrates Co-packaged Optics (CPO) System at OFC 2023 https://feedpress.me/link/23532/16008727/cisco-demonstrates-co-packaged-optics-cpo-system-at-ofc-2023 switch 2023-03-07 16:47:15
Overseas Science NYT > Science Long Covid Patients More Likely to Have Gastrointestinal Problems, Study Finds https://www.nytimes.com/2023/03/07/health/long-covid-stomach-pain-acid-reflux.html Long Covid Patients More Likely to Have Gastrointestinal Problems, Study Finds: The study, which examined patients infected early in the pandemic, found they were significantly more likely than people who didn't get Covid to experience lingering reflux, constipation, and other issues. 2023-03-07 16:37:50
Overseas Science NYT > Science Dunk Was Chunky, but Still Deadly https://www.nytimes.com/2023/03/04/science/chunky-dunk-fossil.html devonian 2023-03-07 16:28:14
Overseas TECH WIRED The MoonSwatch Mission to Moonshine Gold Is Limited in Every Way https://www.wired.com/story/the-moonswatch-mission-to-moonshine-gold-is-limited-in-every-way/ The MoonSwatch Mission to Moonshine Gold Is Limited in Every Way: Swatch's new limited-edition Omega collaboration is on sale for just one day, but it's not the luxury-busting timepiece we were hoping for. 2023-03-07 16:03:46
Overseas Science BBC News - Science & Environment Nobel scientist says 'UK research is in jeopardy' https://www.bbc.co.uk/news/science-environment-64879544?at_medium=RSS&at_campaign=KARANGA longstanding 2023-03-07 16:11:50
Finance Financial Services Agency website Updated the collection of materials on the money lending business. https://www.fsa.go.jp/status/kasikin/20230307/index.html related 2023-03-07 17:00:00
News BBC News - Home Cardiff car crash: Mum criticises two-day search to find group https://www.bbc.co.uk/news/uk-wales-64872517?at_medium=RSS&at_campaign=KARANGA search 2023-03-07 16:30:34
News BBC News - Home Two dead, two alive after Americans kidnapped in Mexico https://www.bbc.co.uk/news/world-latin-america-64878721?at_medium=RSS&at_campaign=KARANGA governor 2023-03-07 16:20:26
News BBC News - Home In pictures: Snow blankets parts of the UK as cold snap starts https://www.bbc.co.uk/news/uk-64875441?at_medium=RSS&at_campaign=KARANGA sweeps 2023-03-07 16:51:46
News BBC News - Home Nobel scientist says 'UK research is in jeopardy' https://www.bbc.co.uk/news/science-environment-64879544?at_medium=RSS&at_campaign=KARANGA longstanding 2023-03-07 16:11:50
News BBC News - Home Six Nations 2023: England's Courtney Lawes to miss France game https://www.bbc.co.uk/sport/rugby-union/64864675?at_medium=RSS&at_campaign=KARANGA injury 2023-03-07 16:31:40
News BBC News - Home Small boats bill aimed at galvanising political support at home https://www.bbc.co.uk/news/uk-politics-64879314?at_medium=RSS&at_campaign=KARANGA ratings 2023-03-07 16:17:13
GCP Cloud Blog At Box, a game plan for migrating critical storage services from HBase to Cloud Bigtable https://cloud.google.com/blog/products/databases/how-box-migrated-from-hbase-to-cloud-bigtable/ At Box, a game plan for migrating critical storage services from HBase to Cloud Bigtable.

Introduction: When it comes to cloud-based content management, collaboration, and file sharing tools for businesses and individuals, Box, Inc. is a recognized leader. Recently we decided to migrate from Apache HBase, a distributed, scalable big data store deployed on-premises, to Cloud Bigtable, Google Cloud's HBase-compatible NoSQL database. By doing so we achieved the many benefits of a cloud-managed database: reduced operational maintenance work on HBase, flexible scaling, decreased costs, and a smaller storage footprint. At the same time, the move allowed us to enable BigQuery, Google Cloud's enterprise data warehouse, and to run our database across multiple geographical regions. But how? Adopting Cloud Bigtable meant migrating one of Box's most critical services, whose secure file upload and download functionality is core to its content cloud. It also meant migrating terabytes of data with zero downtime. Read on to learn how we did it and the benefits we're ultimately enjoying.

Background: Historically, Box has used HBase to store customer file metadata, using a schema that maps a file to its physical storage locations. This metadata is managed by a service called Storage Service, which runs on Kubernetes, and it is used on every upload and download at Box. For some context on our scale: at the start of the migration we had multiple HBase clusters that each stored billions of rows and terabytes of data. These clusters received a steady stream of writes and reads per second but could scale to serve millions of requests for analytical jobs or higher loads. Our HBase architecture consisted of three fully replicated clusters spread across different geographical regions: two active clusters for high availability and another to handle routine maintenance. Each regional Storage Service wrote to its local HBase cluster, and those modifications were replicated out to the other regions. On reads, Storage Service first fetched from the local HBase cluster and fell back onto the other clusters if there was a replication delay.
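The regional read path described above amounts to a local read with remote fallback. A minimal sketch under assumed names (Box's actual Storage Service code is not public):

# Read from the local HBase cluster first; fall back to remote replicas
# when replication lag means the local copy does not have the row yet.
def read_metadata(row_key, local_cluster, remote_clusters):
    row = local_cluster.get(row_key)
    if row is not None:
        return row
    for cluster in remote_clusters:
        row = cluster.get(row_key)
        if row is not None:
            return row
    return None  # the row does not exist in any region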
Preparing to migrate: To choose the best Bigtable cluster configuration for our use case, we ran performance tests and asynchronous reads and writes before the migration; you can learn more about this on the Box blog. Since Bigtable requires no maintenance downtime, we decided to merge our three HBase clusters down to just two Bigtable clusters in separate regions for disaster recovery. That was a big benefit, but now we needed to figure out the best way to merge three replicas into two. Theoretically, the metadata in all three of our HBase clusters should have been the same because of partitioned writes and guaranteed replication. In practice, however, metadata across the clusters had drifted, and Box's Storage Service handled these inconsistencies upon read. Thus, during the backfill phase of the migration, we decided to take snapshots of each HBase cluster and import them into Bigtable, but we were unsure whether to overlay the snapshots or to import them into separate clusters. To decide how to merge three clusters into two, we ran the Google-provided Multiverse Scan Job, a customized MapReduce job that sequentially scans HBase table snapshots in parallel. This allowed us to effectively perform a sort-merge join of the three tables and compare rows and cells for differences between the three HBase clusters. While the job scanned the entire table, a random sample of critical rows was compared. The job ran on a fleet of Dataproc worker nodes for four days. We then imported the differences into BigQuery for analysis and found that inconsistencies fell into three categories: missing rows in an HBase cluster; a row that existed but was missing columns in an HBase cluster; and a row that existed but had differing non-critical columns in an HBase cluster. This exercise helped us decide that consolidating all three snapshots into one would give us the most consistent copy, and that Bigtable replication should handle importing the data into the secondary Bigtable cluster, resolving any issues with missing columns or rows.

Migration plan: So how do you migrate trillions of rows into a live database? Based on our previous experience migrating a smaller database into Bigtable, we decided to implement synchronous modifications: every successful HBase modification would result in the same Bigtable modification, and if either step failed, the overall request would be considered a failure, guaranteeing atomicity. For example, when a write to HBase succeeded, we would issue a write to Bigtable, serializing the operations. This increased the total latency of writes to the sum of a write to HBase and a write to Bigtable, but we determined that was an acceptable tradeoff, as doing parallel writes to both databases would have introduced complex logic into Box's Storage Service. One complexity was that Box's Storage Service performed many check-and-modify operations. These couldn't be mirrored in Bigtable for the duration of the migration: until Bigtable had been backfilled, its check-and-modify results would differ from HBase's. For this reason we decided to rely on the result of the HBase check-and-modify and to perform the Bigtable modification only if the HBase check-and-modify succeeded, as sketched below.
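A minimal sketch of the synchronous dual-modification rule just described, with hypothetical client objects standing in for Box's HBase and Bigtable clients:

def dual_write(row_key, mutation, hbase, bigtable):
    # HBase remains the source of truth; if its write raises, the
    # request fails and Bigtable is never touched.
    hbase.put(row_key, mutation)
    # Serialized after HBase, so total latency is the sum of both writes.
    # If this raises, the overall request is also reported as a failure.
    bigtable.put(row_key, mutation)

def dual_check_and_modify(row_key, check, mutation, hbase, bigtable):
    # Only HBase's check result is trusted during the migration, since a
    # not-yet-backfilled Bigtable would evaluate the check differently.
    if hbase.check_and_put(row_key, check, mutation):
        bigtable.put(row_key, mutation)
        return True
    return False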
Rollout plan: To roll out synchronous modifications safely, we controlled the rollout by both percentage and region, ramping the share of synchronous modifications up step by step within each region before moving on to the next. Synchronous modifications ensured that Bigtable received all new data written to it, but we still needed to backfill the old data. After running synchronous modifications for a week and observing no instabilities, we were ready to take the three HBase snapshots and move on to the import phase.

Bigtable import (backfilling data): We had three large HBase snapshots, each terabytes in size, to import into Bigtable using the Google-provided Dataproc Import Job. This job had to be run carefully, since we were fully dependent on the performance of the Bigtable cluster: if we overloaded it, we would immediately see adverse customer impact as an increase in user-traffic latency. In fact, our snapshots were so large that we scaled our Bigtable cluster up to a much higher node count to avoid any performance issues. We then began to import each snapshot sequentially. An import of this size was completely unknown to us, so we controlled the rate of import by slowly increasing the size of the Dataproc cluster while monitoring Bigtable user-traffic latencies.

Validation: Before we could start relying on reads from Bigtable, a sequence of validations had to happen; if any row was incorrect, it could lead to negative customer impact. The size of our clusters made it impossible to validate every single row, so we took three separate approaches to gain confidence in the migration.

1. Async read validation (optimistic, customer-download-driven): On every read, we asynchronously read from Bigtable and added logging and metrics to notify us of any differences. The one caveat with this approach was that many of our reads are immediately followed by an update, which created a lot of noise: the differences we surfaced were all from modifications that happened between the HBase read and the Bigtable read. During this read validation we discovered that Bigtable regex scans differ from HBase regex scans. For one, Bigtable only supports "equals" regex comparators. Also, Bigtable's regex engine is RE2, which treats "." (any character, which unless specified excludes newline) differently than HBase. We therefore had to roll out a specific regex for Bigtable scans and validate that it returned the expected results, as illustrated after this list.

2. Sync validation: We ran a Dataproc job, similar to Google's published validation job, that performed a hash comparison between Bigtable and HBase rows. We ran it on a random sample of the rows and uncovered a small rate of mismatches. We printed these mismatches and analyzed them: most came from optimistic modifications to certain columns, and we found that no re-import or correction was needed.

3. Customer-perspective validation: We wanted an application-level validation that reflected what customers would see, rather than a database-level one. We wrote a job to scan the whole filesystem and queue up objects, for which we called an endpoint in Storage Service that compared the entries in Bigtable and HBase (for more information, check out the Box blog). This validation supported the output of the sync validation job; we didn't find any differences that weren't already explained above.
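An illustration of the RE2 dot-newline behavior called out in the async read validation; Python's re module shares the (?s) flag semantics with RE2, and the pattern and value here are hypothetical:

import re

value = b"header\nbody"

# "." stops at the newline, so this pattern misses the multi-line value:
print(re.fullmatch(b"header.+", value))       # None
# The (?s) flag lets "." match newline too, restoring the HBase-style match:
print(re.fullmatch(b"(?s)header.+", value))   # <re.Match object ...>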
Flipping to Bigtable: All these validations gave us the confidence to return reads from Bigtable instead of HBase. We kept synchronous dual modifications to HBase on as a backup in case we needed to roll anything back. After returning only Bigtable data, we were finally ready to turn off modifications to HBase; at this point, Bigtable became our source of truth.

Thumbs up to Bigtable: Since completing the migration, here are some benefits we've observed.

Speed of development: We now have full control over scaling Bigtable clusters up and down. We turned on Bigtable autoscaling, which automatically grows or shrinks our clusters based on CPU and storage utilization parameters, something we could never do with physical hardware. This has allowed our team to develop quickly without impacting our customers, and our team now has much less overhead related to managing our database. In the past we constantly had to move HBase traffic around to perform security patches; now we don't need to worry about that at all. Finally, MapReduce jobs that used to take days now finish in hours.

Cost savings: Before Bigtable, we were running three fully replicated clusters. With Bigtable, we run one primary cluster that takes all the requests and one replicated secondary cluster that we can use if there are any issues with the primary. Besides disaster recovery, the secondary cluster is extremely useful for running data analysis jobs. With autoscaling, we can run the secondary cluster much more lightly until we need to run a job, at which point it scales itself. The secondary cluster runs with fewer nodes than the primary cluster, whereas with HBase all three of our clusters were sized evenly.

New analysis tools: We ported all our HBase MapReduce jobs over to Bigtable and found that Bigtable provides parity in functionality with only minor configuration changes to our existing jobs. Bigtable has also enabled us to use the Google Cloud ecosystem. We were able to add Bigtable as an external BigQuery source, which lets us query our tables in real time, something that was never possible in HBase. This application was best suited to our small tables; care should be taken when running queries against a production Bigtable cluster because of the impact on CPU utilization, and app profiles may be used to isolate traffic to secondary clusters. For our larger tables, we decided to import them into BigQuery through a Dataproc job, which lets us pull ad hoc analytics data without running any extra jobs (see the sketch after this entry); querying BigQuery is also much faster than running MapReduce jobs. Long story short: migrating to Bigtable was a big job, but with all the benefits we've gained, we're very glad we did.

Considering a move to Bigtable? Find more information about migrations and Google Cloud supported tools: learn about live migration tools such as HBase Replication and the HBase mirroring client; walk through the migration guide for step-by-step instructions; and watch Box's presentation, "How Box modernized their NoSQL databases with minimal effort and downtime." 2023-03-07 17:00:00
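A hedged sketch of the kind of ad hoc BigQuery analysis described above, using the google-cloud-bigquery client; the project, dataset, table, and column names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Aggregate over file-metadata rows imported from Bigtable (or exposed as
# an external Bigtable source) without writing a MapReduce job.
query = """
    SELECT storage_zone, COUNT(*) AS file_count
    FROM `my-project.metadata.file_locations`
    GROUP BY storage_zone
    ORDER BY file_count DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.storage_zone, row.file_count)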
