Posted: 2023-05-04 06:23:05 RSS feed digest for 2023-05-04 06:00 (26 items)

Category / Site / Article title or trend word / Link URL / Frequent words, summary, or search volume / Date registered
AWS AWS News Blog Introducing Bob’s Used Books—a New, Real-World, .NET Sample Application https://aws.amazon.com/blogs/aws/introducing-bobs-used-books-a-new-real-world-net-sample-application/ Introducing Bob's Used Books, a new, real-world .NET sample application. Today I'm happy to announce that a new open-source sample application, a fictitious used-books eCommerce store we call Bob's Used Books, is available for .NET developers working with AWS. The .NET advocacy and development teams at AWS talk to customers regularly and, during those conversations, often receive requests for more in-depth samples. Customers tell … 2023-05-03 20:56:16
AWS AWS Big Data Blog Amazon OpenSearch Service now supports 99.99% availability using Multi-AZ with Standby https://aws.amazon.com/blogs/big-data/amazon-opensearch-service-now-supports-99-99-availability-using-multi-az-with-standby/ Amazon OpenSearch Service now supports 99.99% availability using Multi-AZ with Standby. Customers use Amazon OpenSearch Service for mission-critical applications and monitoring. But what happens when OpenSearch Service itself is unavailable? If your ecommerce search is down, for example, you're losing revenue. If you're monitoring your application with OpenSearch Service and it becomes unavailable, your ability to detect, diagnose, and repair issues with your application is diminished. … 2023-05-03 20:49:10
AWS AWS Media Blog AWS Thinkbox Deadline adds multi-regional support to Spot Event Plugin https://aws.amazon.com/blogs/media/aws-thinkbox-deadline-adds-multi-regional-support-to-spot-event-plugin/ AWS Thinkbox Deadline adds multi-regional support to Spot Event Plugin. Amazon Web Services (AWS) has announced, for AWS Thinkbox Deadline, the addition of multi-regional support to the Spot Event Plugin, which allows Deadline customers to easily scale rendering by launching and managing Spot Fleets in multiple AWS regions from a single Spot Event Plugin. Introduction: in order to leverage the elasticity of the cloud … 2023-05-03 20:56:34
AWS AWS Mobile Blog Introducing Private APIs on AWS AppSync https://aws.amazon.com/blogs/mobile/introducing-private-apis-on-aws-appsync/ Introducing Private APIs on AWS AppSync. AWS AppSync is a fully managed service that enables developers to create GraphQL APIs that can securely access, manipulate, and combine data from one or more data sources. When you create a GraphQL API on AppSync, a public endpoint is generated, which can be used to send query, mutation, and subscription requests to the … 2023-05-03 20:56:46
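As a rough illustration of the public-versus-private distinction the summary describes, here is a minimal sketch assuming the boto3 appsync client's create_graphql_api call and its visibility parameter; the API name and auth type are placeholder values, not from the post:

```python
import boto3

appsync = boto3.client("appsync")

# A private API is reachable only through an interface VPC endpoint,
# instead of the public endpoint generated by default.
resp = appsync.create_graphql_api(
    name="my-private-api",          # placeholder name
    authenticationType="API_KEY",   # placeholder auth mode
    visibility="PRIVATE",           # default is "GLOBAL" (public endpoint)
)
print(resp["graphqlApi"]["uris"]["GRAPHQL"])
```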
AWS AWS Mobile Blog Benchmarking your Mobile App with Rooted Android Private Devices and AWS Device Farm https://aws.amazon.com/blogs/mobile/benchmarking-your-mobile-app-with-rooted-android-private-devices-and-aws-device-farm/ Benchmarking your mobile app with rooted Android private devices and AWS Device Farm. Until recently, the primary reasons for rooting an Android device were to install custom ROMs or themes, or to get access to a file explorer on the device. Now, however, rooting devices is not just for customization; it also unlocks utilities that help analyze and improve the performance of your app. With the launch of support for rooted … 2023-05-03 20:51:13
js New posts tagged JavaScript - Qiita Run an alert when kintone value changes https://qiita.com/t-noue/items/ae517316ef488514d622 changesalert 2023-05-04 05:30:33
js New posts tagged JavaScript - Qiita Run processing when a kintone record is saved with changes https://qiita.com/t-noue/items/b4b0fb70f12a08180805 fruitnameconstrecordcheck 2023-05-04 05:04:52
Overseas TECH MakeUseOf ROUND vs. ROUNDUP vs. ROUNDDOWN: Excel's Rounding Functions Compared https://www.makeuseof.com/round-vs-roundup-vs-rounddown-excel-functions/ ROUND vs. ROUNDUP vs. ROUNDDOWN: Excel's rounding functions compared. Are you confused about Excel's rounding functions? Learn the difference between ROUND, ROUNDUP, and ROUNDDOWN in this helpful guide. 2023-05-03 20:15:17
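For a quick feel of the difference the guide covers: ROUND rounds to the nearest value at the given digit count, while ROUNDUP always rounds away from zero and ROUNDDOWN always rounds toward zero. A small Python sketch of the same semantics (Excel itself is the article's subject; this is only an illustration):

```python
import math

def roundup(x, digits=0):
    # Excel ROUNDUP: always away from zero.
    f = 10 ** digits
    return math.copysign(math.ceil(abs(x) * f) / f, x)

def rounddown(x, digits=0):
    # Excel ROUNDDOWN: always toward zero (truncation).
    f = 10 ** digits
    return math.copysign(math.floor(abs(x) * f) / f, x)

print(roundup(3.451, 2))     # 3.46
print(rounddown(3.459, 2))   # 3.45
print(rounddown(-3.459, 2))  # -3.45 (toward zero)
```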
Overseas TECH DEV Community Building Travel Advisory Apps with Cloudera Data Flow (built on Apache NiFi) https://dev.to/tspannhw/building-travel-advisory-apps-with-cloudera-data-flow-built-on-apache-nifi-1k7d Building travel advisory apps with Cloudera Data Flow (built on Apache NiFi). FLaNK: Travel Advisory RSS processing with Apache NiFi, Apache Kafka, and Apache Flink SQL. Overview, final flow, and adding processors to the Designer: here I list most of the processors available. Flow parameters: go to Parameters and enter all you will need for the flow; you can add all the ones listed below. Flow walk-through: if you are loading my pre-built flow, when you enter it you will see the details for the process group in the configuration palette. We add an InvokeHTTP processor and set its parameters; now we can add a parameter for the HTTP URL for travel advisories. Connect InvokeHTTP to QueryRecord, and name your connection for monitoring later. QueryRecord converts the XML RSS to JSON; you will need RSSXMLReader and TravelJsonRecordSetWriter. Connect QueryRecord to SplitJson (if no errors). In SplitJson we set the JsonPath Expression to the item element, then connect SplitJson to SplitRecord. For SplitRecord we set the Record Reader to JSON Reader InferRoot, the Record Writer to TravelJsonRecordSetWriter, and the records per split. SplitRecord is connected to EvaluateJSONPath, where we set the Destination to flowfile-attribute, the Return Type to json, and add several new fields: description, guid, identifier, link, pubDate, and title. We connect EvaluateJsonPath to SplitJson; for this SplitJson we set the JsonPath Expression to the category element. From SplitJson to UpdateRecord: in UpdateRecord we set the Record Reader to JSON Reader InferRoot and the Record Writer to TravelJsonRecordSetWriter, and set the Replacement Value Strategy to Literal Value. We add new fields for our new record format: advisoryId (from the filename), description, domain, identifier (trimmed), guid, link, pubdate, title, ts (now(), converted to a number), and uuid. Next we connect UpdateRecord to our Slack sub-process group. The other branch flows from UpdateRecord to writing to Kafka. For PublishKafkaRecordCDP there are a lot of parameters to set, which is why we recommend starting with a ReadyFlow. We need to set our Kafka brokers and destination topic name, JSON Reader InferRoot for the reader and AvroRecordSetWriterHWX for the writer, turn transactions off, and enable Guarantee Replicated Delivery and Use Content as Record Value. With SASL_SSL Plain security, set the username to your login user id or machine user, then the associated password; the SSL Context maps to the default NiFi SSL Context Service (built in). Set uuid as the Message Key Field, and finally set the client id to a unique Kafka producer id. We then also send messages to Slack about our travel advisories; we only need one processor for that. We connect the input to our PutSlack processor. For PutSlack we need to set the Webhook URL to the one from your Slack group admin, put in the text from the ingest, set your channel to the channel mapped in the webhook, and set a username for your bot. Flow services: all of these services need to be enabled. © Tim Spann 2023-05-03 20:26:21
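The Kafka-publishing settings described above map closely onto ordinary client configuration. As a rough, non-NiFi illustration of the same SASL/SSL producer setup, here is a hedged confluent-kafka sketch; the broker addresses, topic name, credentials, and record content are all placeholders:

```python
import json

from confluent_kafka import Producer

# Placeholder record shaped like the flow's travel-advisory fields.
advisory = {"uuid": "abc-123", "title": "Example advisory", "link": "https://example.com"}

producer = Producer({
    "bootstrap.servers": "broker1:9093,broker2:9093",  # placeholder brokers
    "security.protocol": "SASL_SSL",                   # SASL/SSL Plain security
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "machine-user",                   # placeholder credentials
    "sasl.password": "********",
    "enable.idempotence": True,  # roughly analogous to Guarantee Replicated Delivery
})

# uuid as the message key, mirroring the Message Key Field setting.
producer.produce("travel-advisories", key=advisory["uuid"], value=json.dumps(advisory))
producer.flush()
```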
Overseas TECH DEV Community Lessons we learned while building a stateful Kafka connector and tips for creating yours https://dev.to/bytewax/lessons-we-learned-while-building-a-stateful-kafka-connector-and-tips-for-creating-yours-157b Lessons we learned while building a stateful Kafka connector, and tips for creating yours.

The Bytewax framework is a flexible tool designed to meet the challenges faced by Python developers in today's data-driven world. It aims to provide seamless integrations and time-saving shortcuts for data engineers dealing with streaming data, making their work more efficient and effective. One important side of developing Bytewax is input connectors: these connectors establish the connection between external systems and Bytewax, helping users import data from those systems. Here we're going to show how to write a custom input connector by walking through how we wrote our built-in Kafka input connector.

Writing input connectors for arbitrary systems while supporting failure recovery and strong delivery guarantees requires a solid understanding of how recovery works internal to Bytewax and the chosen output system. We strongly encourage you to use the connectors we have built into bytewax.connectors if possible, and to read the documentation on their limits. If you are interested in writing your own, this article can give you an introduction to some of the decisions involved in writing an input connector for an ordered, partitioned input stream. If you need any help at all writing a connector, come say hi and ask questions in the Bytewax community Slack. We are happy to help!

Partitions

Subclassing bytewax.inputs.PartitionedInput is the core API for writing an input connector when you have an input that has a fixed number of partitions. A partition is a sub-stream of data that can be read concurrently and independently. To write a PartitionedInput subclass, you need to answer three questions: How many partitions are there? How can I build a source that reads a single partition? How can I rewind a partition and read from a specific item? This is done via the abstract methods list_parts, build_part, and the resume_state variable, respectively.

We're going to use the confluent-kafka package to actually communicate with the Kafka cluster. Let's import all the things we'll need for this input source:

```python
from typing import Dict, Iterable

from confluent_kafka import Consumer, KafkaError, OFFSET_BEGINNING, TopicPartition
from confluent_kafka.admin import AdminClient

from bytewax.inputs import PartitionedInput, StatefulSource
```

Our KafkaInput connector is going to read from a specific set of topics on a cluster. First, let's define our class and write a constructor that takes all the arguments that make sense for configuring this specific kind of input source. This is going to be the public entry point to this connector and is what you'll pass to the bytewax.dataflow.Dataflow.input operator.

```python
class KafkaInput(PartitionedInput):
    def __init__(
        self,
        brokers: Iterable[str],
        topics: Iterable[str],
        tail: bool = True,
        starting_offset: int = OFFSET_BEGINNING,
        add_config: Dict[str, str] = None,
    ):
        add_config = add_config or {}

        if isinstance(brokers, str):
            raise TypeError("brokers must be an iterable and not a string")
        self._brokers = brokers
        self._topics = topics
        self._tail = tail
        self._starting_offset = starting_offset
        self._add_config = add_config
```

Listing Partitions

Next, let's answer question one: how many partitions are there? Conveniently, confluent-kafka provides AdminClient.list_topics, which gives you the partition count of each topic, packed deep in a metadata object. The signature of PartitionedInput.list_parts says it must return a set of strings with the IDs of all the partitions. Let's build the AdminClient using our configuring instance variables, and then delegate to a _list_parts function so we can re-use it if necessary.

```python
# Continued...
class KafkaInput(PartitionedInput):
    def list_parts(self):
        config = {
            "bootstrap.servers": ",".join(self._brokers),
        }
        config.update(self._add_config)
        client = AdminClient(config)

        return set(_list_parts(client, self._topics))
```

This function unpacks the nested metadata returned from AdminClient.list_topics and returns a string that looks like "2-my_topic" for the third partition in the topic my_topic:

```python
def _list_parts(client, topics):
    for topic in topics:
        # List topics one-by-one so if auto-create is turned on,
        # we respect that.
        cluster_metadata = client.list_topics(topic)
        topic_metadata = cluster_metadata.topics[topic]
        if topic_metadata.error is not None:
            raise RuntimeError(
                f"error listing partitions for Kafka topic {topic!r}: "
                f"{topic_metadata.error.str()}"
            )
        part_idxs = topic_metadata.partitions.keys()
        for i in part_idxs:
            yield f"{i}-{topic}"
```

How do you decide what the partition ID string should be? It should be something that globally identifies this partition, hence combining partition number and topic name. PartitionedInput.list_parts might be called multiple times from multiple workers as a Bytewax cluster is set up and resumed, so it must return exactly the same set of partitions on every call in order to work correctly. Changing numbers of partitions is not currently supported with recovery.

Building Partitions

Next, let's answer question two: how can I build a source that reads a single partition? We can use confluent-kafka's Consumer to make a Kafka consumer that will read a specific topic and partition starting from an offset. The signature of PartitionedInput.build_part takes a specific partition ID (we'll ignore the resume state for now) and must return a stateful source. We parse the partition ID to determine which Kafka partition we should be consuming from (hence the importance of having a globally unique partition ID). Then we build a Consumer that connects to the Kafka cluster and build our custom KafkaSource stateful source. That is where the actual reading of input items happens.

```python
# Continued...
class KafkaInput(PartitionedInput):
    def build_part(self, for_part, resume_state):
        part_idx, topic = for_part.split("-", 1)
        part_idx = int(part_idx)
        assert topic in self._topics, "Can't resume from different set of Kafka topics"

        config = {
            # We'll manage our own "consumer group" via the recovery system.
            "group.id": "BYTEWAX_IGNORED",
            "enable.auto.commit": "false",
            "bootstrap.servers": ",".join(self._brokers),
            "enable.partition.eof": str(not self._tail),
        }
        config.update(self._add_config)
        consumer = Consumer(config)
        return KafkaSource(consumer, topic, part_idx, self._starting_offset, resume_state)
```

Stateful Input Source

What is a stateful source? It is defined by subclassing bytewax.inputs.StatefulSource. You can think of it as a snapshot-able Python iterator: something that produces a stream of items via StatefulSource.next and also lets the Bytewax runtime ask for a snapshot of the position of the source via StatefulSource.snapshot. Our KafkaSource is going to read items from a specific Kafka topic's partition. Let's define that class and have a constructor that takes in all the details to start reading that partition: the consumer (already configured to connect to the correct Kafka cluster), the topic, the specific partition index, the default starting offset (beginning or end of the topic), and (again, we'll ignore it for just another moment) the resume state.

```python
class KafkaSource(StatefulSource):
    def __init__(self, consumer, topic, part_idx, starting_offset, resume_state):
        self._offset = resume_state or starting_offset
        # Assign does not activate consumer grouping.
        consumer.assign([TopicPartition(topic, part_idx, self._offset)])
        self._consumer = consumer
        self._topic = topic
```

The beating heart of the input source is the StatefulSource.next method. It is periodically called by Bytewax and behaves similarly to a built-in Python iterator's __next__ method. It must do one of three things: return a new item to send into the dataflow, return None to signal that there is no data currently (but there might be later), or raise StopIteration when the partition is complete. Consumer.poll gives us a method to ask if there are any new messages on the partition we set up this consumer to follow; if there are, we unpack the data message and return it. Otherwise, we handle the no-data case, the end-of-stream case, or an exceptional error case.

```python
# Continued...
class KafkaSource(StatefulSource):
    def next(self):
        msg = self._consumer.poll(0.001)  # seconds
        if msg is None:
            return None
        elif msg.error() is not None:
            if msg.error().code() == KafkaError._PARTITION_EOF:
                raise StopIteration()
            else:
                raise RuntimeError(
                    f"error consuming from Kafka topic {self._topic!r}: {msg.error()}"
                )
        else:
            item = (msg.key(), msg.value())
            # Resume reading from the next message, not this one.
            self._offset = msg.offset() + 1
            return item
```

An important thing to note here is that StatefulSource.next must never block. The Bytewax runtime employs a sort of cooperative multitasking, so each operator must return quickly, even if it has nothing to do, so that other operators in the dataflow that do have work can run. Unfortunately, there is currently no way in the Bytewax API to prevent polling of input sources: since input comes from outside the dataflow, Bytewax has no way of knowing when more data is available, so it must constantly check. The best practice here is to pause briefly if there is no data, to prevent a full spin-loop on no new data, but not so long that you block other operators from doing their work.

There is also a StatefulSource.close method, which enables you to do any well-behaved shutdown when EOF is reached. This is not guaranteed to be called in a failure situation and should not be crucial to the connecting system. In this case, Consumer.close does a graceful shutdown.

```python
# Continued...
class KafkaSource(StatefulSource):
    def close(self):
        self._consumer.close()
```

Resume State

Let's explain how failure recovery works for input connectors. Bytewax's recovery system allows the dataflow to quickly resume processing and output without needing to replay all input. It does this by periodically snapshotting all internal state, input positions, and output positions of the dataflow. When it needs to recover after a failure, it loads all state from a recent snapshot and starts re-playing input items in the same order from the instant of the snapshot, overwriting output items. This causes the state and output of the dataflow to evolve in the same way during the resume execution as during the previous execution.

Snapshotting

So we need to keep track of the current position in each partition. Kafka has the concept of message offsets: an incrementing, immutable integer that is the position of each message. In KafkaSource.next we kept track of the offset of the next message that partition will read via self._offset. Bytewax calls StatefulSource.snapshot when it needs to record that partition's position, and it returns that internally stored next-message offset.

```python
# Continued...
class KafkaSource(StatefulSource):
    def snapshot(self):
        return self._offset
```

Resume

On resume after a failure, Bytewax's recovery machinery does the hard work of collecting all the snapshots, finding the ones that represent a coherent set of states across the previous execution's cluster, and threading each bit of snapshot data back through into PartitionedInput.build_part for the same partition. To properly take advantage of that, your resulting partition must resume reading from the same spot represented by that snapshot. Since we were storing the Kafka message offset of the next message to be read in KafkaSource._offset, we need to ensure we thread that message offset back into the Consumer when it is built. That happens by passing resume_state into the KafkaSource constructor, which assigns the consumer to start reading from that offset. Looking at that code again:

```python
# Continued...
class KafkaSource(StatefulSource):
    def __init__(self, consumer, topic, part_idx, starting_offset, resume_state):
        self._offset = resume_state or starting_offset
        # Assign does not activate consumer grouping.
        consumer.assign([TopicPartition(topic, part_idx, self._offset)])
        ...
```

As one extra wrinkle: if there is no resume state for this partition (if the partition is being built for the first time), None will be passed for resume_state in PartitionedInput.build_part. In that case, we need to fall back to the requested default starting offset (either beginning or end of the topic). In the case where we do have resume state, we should ignore the default, since we need to start from the specific offset to uphold the recovery contract.

Delivery Guarantees

Let's talk for a moment about how this recovery model with snapshots impacts delivery guarantees. A well-designed input connector on its own can only guarantee that the output of a dataflow to a downstream system is at-least-once: the recovery system will ensure that we replay any input that might not have been output due to where the execution cluster failed, but it requires coordination with the output connector (via something like transactions or two-phase commits) to ensure that the replay does not result in duplicated writes downstream, and thus exactly-once processing.

Non-Replay-Able Sources

If your input source does not have the ability to replay old data, you can still use it with Bytewax, but your delivery guarantees are limited to at-most-once. For example, when listening to an ephemeral SSE or WebSocket stream, you can always start listening, but often the request API does not let you replay missed events. When Bytewax attempts to resume, all the other operators will have their internal state returned to the last coherent snapshot, but since the input sources do not rewind, it will appear that the dataflow has missed all input between when that snapshot was taken and the resume. In this case, your StatefulSource.snapshot can return None, and no recovery data will be saved. You can then ignore the resume_state argument of PartitionedInput.build_part, because it will always be None. 2023-05-03 20:16:54
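To show how the pieces above fit together, here is a minimal usage sketch, assuming the 0.16-era Bytewax API the article references (the Dataflow.input operator plus an output operator); the broker address, topic name, and the StdOutput sink are illustrative assumptions, not from the article:

```python
from bytewax.dataflow import Dataflow
from bytewax.connectors.stdio import StdOutput  # assumed built-in sink

flow = Dataflow()
# Wire the connector built above into the dataflow; each Kafka
# partition becomes an independently resumable input partition.
flow.input("kafka_in", KafkaInput(["localhost:9092"], ["my_topic"]))
# Items arrive as (key, value) byte tuples from KafkaSource.next.
flow.output("out", StdOutput())
```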
Overseas TECH Engadget Even Gmail has blue verification checks now https://www.engadget.com/even-gmail-has-blue-verification-checks-now-200234105.html?src=rss Even Gmail has blue verification checks now. Google is rolling out a Gmail feature that aims to help you figure out whether a sender is genuine or whether they may be a scammer. When you receive an email from a company that has verified its identity, you'll see a blue check next to its name in your inbox. The checkmark update is Google's latest implementation of the Brand Indicators for Message Identification (BIMI) tech. When Google started testing BIMI in Gmail, it at first enabled brands that were enrolled in BIMI to include authenticated logos in their emails. The blue check is a perhaps more obvious indicator that the sender is legitimate. When you hover over the blue check in Gmail, you'll see a pop-up that reads, "The sender of this email has verified that they own the domain it was sent from and the logo in the profile image." The pop-up includes a link that directs you to a page with more information. "Strong email authentication helps users and email security systems identify and stop spam, and also enables senders to leverage their brand trust," Google wrote in a blog post. "This increases confidence in email sources and gives readers an immersive experience, creating a better email ecosystem for everyone." The feature should be live for all users by the end of the week, while Workspace admins can help set up BIMI for their company. It's nice to see one company bring back an element of trust to the blue check, which used to be a pretty clear indicator that the person, brand, or business on the other end is the real deal. Unlike a certain other company, at least Google doesn't seem to have weaponized blue checks as part of a culture war or used them to wring more revenue out of users while damaging its overall trustworthiness. 2023-05-03 20:02:34
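For context on the mechanism behind the checkmark: BIMI is published as a DNS TXT record at default._bimi.<domain>, along the lines of "v=BIMI1; l=https://example.com/logo.svg; a=https://example.com/vmc.pem", where l points to the brand's SVG logo and a to its Verified Mark Certificate; the domain and paths here are illustrative placeholders, not from the article.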
Overseas Science NYT > Science Eli Lilly Trial Finds Alzheimer’s Drug Can Slow Progress of Disease https://www.nytimes.com/2023/05/03/health/alzheimers-drug-eli-lilly-trial.html Eli Lilly trial finds Alzheimer's drug can slow progress of disease. Donanemab is not a cure and comes with significant side effects, but patients had longer periods of independent living while on the drug. 2023-05-03 20:46:00
Overseas Science NYT > Science Sultan al-Jaber, Who Heads U.N. Climate Talks, Hints at His Approach https://www.nytimes.com/2023/05/03/climate/un-climate-oil-uae-al-jaber.html Sultan al-Jaber, who heads U.N. climate talks, hints at his approach. In a speech, Sultan al-Jaber, the Emirati official presiding over this year's climate summit, spoke of emissions cuts, but experts also cited ambiguity in his statements. 2023-05-03 20:32:58
Finance News - Hoken Ichiba TIMES (保険市場TIMES) Manulife Life earns the top three-star rating for call-center service quality https://www.hokende.com/news/blog/entry/2023/05/04/060000 Manulife Life earns the top three-star rating for call-center service quality. In the HDI Rating Benchmark hosted by HDI-Japan, Manulife Life Insurance Company ("Manulife Life") announced that, in the life-insurance category, staff handling its call center and the support desk serving its insurance agencies received three stars, the highest individual quality rating. 2023-05-04 06:00:00
News BBC News - Home Fed raises US interest rates to highest in 16 years https://www.bbc.co.uk/news/business-65474456?at_medium=RSS&at_campaign=KARANGA tenth 2023-05-03 20:11:31
News BBC News - Home Erling Haaland record: Manchester City striker breaks Premier League record for goals in a season https://www.bbc.co.uk/sport/football/65474843?at_medium=RSS&at_campaign=KARANGA Erling Haaland record: Manchester City striker Erling Haaland scores against West Ham to break the record for goals in a Premier League season. 2023-05-03 20:42:16
News BBC News - Home Liverpool 1-0 Fulham: Mohamed Salah scores for eighth straight Anfield game https://www.bbc.co.uk/sport/football/64923724?at_medium=RSS&at_campaign=KARANGA Liverpool 1-0 Fulham: Liverpool's outside hopes of qualifying for next season's Champions League remain alive after Mohamed Salah continued his remarkable Anfield scoring form to help sink Fulham. 2023-05-03 20:51:53
News BBC News - Home Chelsea 2-1 Liverpool: Women's Super League title hopes boosted by Sam Kerr winner https://www.bbc.co.uk/sport/football/65475104?at_medium=RSS&at_campaign=KARANGA Chelsea 2-1 Liverpool: Chelsea's Women's Super League title hopes remain in their own hands after Sam Kerr's late goal gave them victory over Liverpool. 2023-05-03 20:32:27
Business Diamond Online - New Articles Lifetime gifting: an asset-management pro explains the "difficult but important" rule changes. Will gifts to grandchildren and children-in-law become the mainstream? - Investment, tax saving, and inheritance for the new wealthy https://diamond.jp/articles/-/321959 lifetime gift 2023-05-04 05:25:00
Business Diamond Online - New Articles tripla CEO Takahashi, a newcomer supporting DX in the lodging industry, talks about a "post-COVID inbound strategy" - Hot themes dissected! Winners and losers among popular stocks https://diamond.jp/articles/-/322202 chief executive officer 2023-05-04 05:20:00
Business Diamond Online - New Articles The Isetan Shinjuku flagship store is booming! How about Takashimaya and Daimaru Matsuzakaya? Examining department stores' results and the inbound-tourism recovery - Winners and losers under COVID: the monthly industry weather map https://diamond.jp/articles/-/321909 Even before the COVID crisis has fully subsided, soaring resource and material costs and the weak yen are now rattling companies. 2023-05-04 05:15:00
Business Diamond Online - New Articles Inbound-related stocks split by "pricing power"! Winners include Oriental Land and J. Front; are consumer-electronics retailers struggling? - Hot themes dissected! Winners and losers among popular stocks https://diamond.jp/articles/-/322201 2023-05-04 05:10:00
Business Diamond Online - New Articles The likelihood of the BOJ "forgoing policy revision" is rising, yet the yen is still being bought: here is why - Market Focus https://diamond.jp/articles/-/322334 central bank 2023-05-04 05:05:00
Business Toyo Keizai Online The major points of contention for "Japanese childcare" revealed in Diet debate: how do the "unprecedented measures against the falling birthrate" play out for childcare? | Domestic Politics | Toyo Keizai Online https://toyokeizai.net/articles/-/669534?utm_source=rss&utm_medium=http&utm_campaign=link_back Diet broadcast 2023-05-04 05:50:00
Business Toyo Keizai Online "Even mackerel and salmon prices are soaring": the day the Japanese stop eating fish. Lost bidding wars and shrinking catches close in on Japan's dinner table | Food | Toyo Keizai Online https://toyokeizai.net/articles/-/669332?utm_source=rss&utm_medium=http&utm_campaign=link_back Tokyo Metropolitan Central Wholesale Market 2023-05-04 05:30:00
Business Toyo Keizai Online Average age of passing: 24! CPAs are getting younger and turning into consultants. Going independent is one option, and a partner at an audit firm can earn hundreds of millions of yen | The latest Weekly Toyo Keizai | Toyo Keizai Online https://toyokeizai.net/articles/-/667675?utm_source=rss&utm_medium=http&utm_campaign=link_back certified public accountant 2023-05-04 05:10:00
