python |
New posts tagged Python - Qiita |
Taking a backup of Pleasanter on a Windows + PostgreSQL setup |
https://qiita.com/nwtba1plt2610/items/46267162af7375cdace6
|
windowspostgresql |
2022-12-18 18:42:41 |
python |
New posts tagged Python - Qiita |
On why 'BIZ UDPGothic + Inter = the strongest combo', plus a complete guide to merging fonts in Python! |
https://qiita.com/tawara_/items/34c3e85329949b3c5b7a
|
bizudp |
2022-12-18 18:38:14 |
python |
New posts tagged Python - Qiita |
[Python] Dynamically changing combobox options in CustomTkinter |
https://qiita.com/hajime-f/items/607183684b2d532e1ed7
|
combobox |
2022-12-18 18:05:37 |
Ruby |
New posts tagged Ruby - Qiita |
[RSpec] Configuring RSpec to omit the 'FactoryBot' receiver |
https://qiita.com/so__hei__/items/30dd23d53850959e21f6
|
userfact |
2022-12-18 18:31:10 |
Ruby |
New posts tagged Ruby - Qiita |
Trying out travel_to in RSpec |
https://qiita.com/tech-white/items/6fe5944fd3265aea013b
|
rspec |
2022-12-18 18:14:59 |
Ruby |
New posts tagged Ruby - Qiita |
[RSpec] What is FactoryBot? |
https://qiita.com/so__hei__/items/138ab9306e6c8f20fdc8
|
gemfactorybot |
2022-12-18 18:02:26 |
AWS |
New posts tagged AWS - Qiita |
Things that tripped me up writing EC2 on ECS in a private subnet with CDK |
https://qiita.com/Yamato1923/items/f4e13d1838af18b5c358
|
econecs |
2022-12-18 18:56:51 |
AWS |
New posts tagged AWS - Qiita |
Sending and receiving email on a custom domain with AWS |
https://qiita.com/shibata_ninja/items/4353e18029e1340fd5ec
|
workmai |
2022-12-18 18:23:41 |
golang |
New posts tagged Go - Qiita |
Let's read the Google Go Style Guide! - Style Decisions |
https://qiita.com/TakumaKurosawa/items/fbb1418111604837d8ac
|
agostyleguidefrom |
2022-12-18 18:20:55 |
Azure |
New posts tagged Azure - Qiita |
Building a test environment that uses an Azure VM as a firewall and proxy, with the Azure CLI |
https://qiita.com/mnrst/items/0558e2fd9496c608f1dc
|
azure |
2022-12-18 18:17:45 |
Ruby |
New posts tagged Rails - Qiita |
[RSpec] Configuring RSpec to omit the 'FactoryBot' receiver |
https://qiita.com/so__hei__/items/30dd23d53850959e21f6
|
userfact |
2022-12-18 18:31:10 |
Ruby |
New posts tagged Rails - Qiita |
Trying out travel_to in RSpec |
https://qiita.com/tech-white/items/6fe5944fd3265aea013b
|
rspec |
2022-12-18 18:14:59 |
Ruby |
New posts tagged Rails - Qiita |
[RSpec] What is FactoryBot? |
https://qiita.com/so__hei__/items/138ab9306e6c8f20fdc8
|
gemfactorybot |
2022-12-18 18:02:26 |
Tech blog |
Developers.IO |
[Report] B-6 What would you do? Scrap-and-build of a PM organization – Product Manager Conference 2022 #pmconf2022 |
https://dev.classmethod.jp/articles/report-pmconf2022-b6/
|
pmconf |
2022-12-18 09:36:59 |
Overseas TECH |
DEV Community |
Everything you should know as a Cloud Guru for Storage |
https://dev.to/aws-builders/everything-you-should-know-as-a-cloud-guru-for-storage-3b15
|
Everything you should know as a Cloud Guru for Storage. Part of the "Thirty Seven Days of Cloud" series on GitHub. Read on iCTPro.co.nz or on Dev.to.

For who: data engineers, security engineers, AWS certification candidates, cloud engineers, DevOps engineers, and Solutions Architects.

Why should you learn this?
- Demand for AWS skills: AWS is one of the most popular cloud computing platforms, and there is high demand for professionals with AWS skills. Learning the AWS storage services can make you proficient in AWS and improve your job prospects and earning potential.
- Variety of storage options: AWS offers a range of storage options for different needs and use cases. Knowing them helps you pick the storage solution that best fits your needs.
- Cost savings: AWS storage services are designed to be scalable and cost-effective. Used well, they can save money on storage while keeping the performance and reliability you need.
- Improved data management: these services help keep data secure, available, and easy to access, which matters especially for organizations that generate and handle large amounts of data.

Types of storage services: Amazon S3, Amazon Glacier, Amazon EBS, Amazon EC2 instance storage, Amazon EFS, Amazon CloudFront, AWS Storage Gateway, AWS Snowball. (A free hands-on AWS storage lab is linked in the original post.)

Amazon S3 (Simple Storage Service)
Object storage offering high scalability, data availability, security, and performance; used for data lakes, cloud-native applications, and mobile apps.
- Object-based storage with effectively unlimited capacity and 99.999999999% (eleven nines) durability
- Maximum size of a single object is 5 TB
- Flat (non-hierarchical) file structure; regional service; data replicated for availability
- Objects can be encrypted, and a bucket can be restricted to just your VPC
- Storage classes: S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, S3 on Outposts

Instance store volumes
- Ephemeral storage: do not store critical data on it
- Data is lost if the instance is stopped and started, terminated, or the underlying hardware fails; it survives a reboot
- No additional fees; blazing-fast IOPS, so mainly used as a cache or buffer
- Not all EC2 instance types support instance store volumes

Amazon EBS volumes
- Persistent, block-level storage that attaches to an EC2 instance; in some regions an EBS volume can be multi-attached
- Independent of the EC2 instance lifecycle, though logically attached to an instance
- Supports snapshots, which can be copied from one region to another
- Data is replicated within the same Availability Zone for availability
- Volume types are SSD- and HDD-backed; supports encryption at rest and in transit (AES-256)
- (The original post links a blog on EBS encryption by Stuart Scott of Cloud Academy.)

Amazon Elastic File System (EFS)
- Fully managed, shared file storage service with low-latency access; petabytes of capacity available
- Can be attached to multiple EC2 instances through mount targets; hierarchical file system
- Uses NFS and is replicated across multiple Availability Zones for high availability
- Highly scalable regional service, available in most regions

Amazon CloudFront (CDN)
A content delivery network that serves cached data from edge locations, using distributions to deliver content:
- Web distribution: dynamic and static content over HTTP and HTTPS; objects can be added, removed, and updated; live streaming is supported; origins can be EC2 or S3
- RTMP distribution: streaming media over the RTMP protocol (Adobe Flash Media); origins can only be S3
CloudFront delivers content such as websites and applications quickly and securely to users around the world. Its network of edge locations serves content with low latency, so users get what they need faster and with fewer interruptions, and it integrates with other AWS services such as Amazon S3 and Amazon EC2 to provide a complete solution for delivering content over the internet.

AWS Storage Gateway
- The best way to transfer data from NAS, SAN, and DAS into AWS; secure and cost-efficient
- Available in File, Volume, and Tape configurations; consider egress traffic charges while architecting
- File gateway: access files stored as S3 objects, mounted as NFS in the corporate environment; caching reduces latency and egress charges
- Stored volume gateway: low-latency access to data that is kept locally and synchronously copied to S3; volumes range from 1 GiB to 16 TiB, and snapshots are stored incrementally in S3
- Cached volume gateway: data lives in S3 while a local volume acts as a buffer and cache for recently accessed data; volumes up to 32 TiB
- Tape gateway: back up to S3 from on premises and take advantage of Amazon Glacier

AWS Snowball
- Securely transfers petabytes of data into and out of AWS (on premises to S3, or S3 to on premises) at high speed
- Data on the device is encrypted automatically; shipments can be tracked, including via SNS; HIPAA compliant
- Data is erased to NIST standards after the import/export transfer completes

AWS Snow family
AWS Snowcone is a small, rugged, and secure device offering edge computing, data storage, and data transfer on the go, in austere environments with little or no connectivity. These devices are designed for offline data processing and storage, and are particularly useful for transferring large amounts of data to or from AWS when the data is too large or too costly to transfer over the Internet. AWS Snowball is a physical device that holds up to 80 TB of data, AWS Snowball Edge combines Snowball's capacity with additional compute and storage capabilities, and AWS Snowmobile is a shipping container that holds up to 100 PB, designed for transferring extremely large amounts of data to AWS.

Author: Anuvindh Sankaravilasam. Read more on dev.to or iCTPro.co.nz. |
2022-12-18 09:41:01 |
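The Snow-family section above turns on a simple question: how long would the same data take over the wire? A minimal back-of-the-envelope sketch (the function name and the decimal-terabyte convention are ours, not anything from AWS):

```javascript
// Rough estimate of online transfer time, to motivate shipping a Snow device.
// Assumes decimal terabytes (1 TB = 10^12 bytes) and a fully utilized link.
function transferDays(terabytes, megabitsPerSecond) {
  const bits = terabytes * 1e12 * 8;                // data volume in bits
  const seconds = bits / (megabitsPerSecond * 1e6); // sustained line rate
  return seconds / (60 * 60 * 24);                  // convert to days
}

// A full 80 TB Snowball's worth over a 100 Mbit/s uplink:
const days = transferDays(80, 100); // roughly 74 days
```

At that point shipping a device for physical import is usually the cheaper and faster option, which is exactly the niche the Snow family fills.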
Overseas TECH |
DEV Community |
Limitations and Solutions to consider while using SQS |
https://dev.to/aws-builders/limitations-and-solutions-to-consider-while-using-sqs-3h9f
|
Limitations and Solutions to consider while using SQS

Amazon Simple Queue Service (SQS) is a fully managed, distributed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. While SQS is a powerful and reliable service, it has some limitations that you should consider when using it. The notable ones are:

- Message size limit
- Message retention period
- Visibility timeout
- Delivery guarantees
- Throughput limits
- Regional availability

This article discusses each of these limitations in more detail and offers potential solutions. By understanding the limitations and implementing appropriate solutions, you can make SQS an effective and reliable message queuing service for your applications.

Message size limit 🪂

Each message in an SQS queue has a maximum size of 256 KB. If you need to send larger messages, the following approaches can work around the limit.

Split the message into smaller chunks. If a message exceeds the size limit, split it into chunks and send one message per chunk; the consumer then reassembles the chunks into the original message. This works for messages of any size, as long as you have the logic to split and reassemble:

```javascript
import * as AWS from 'aws-sdk';

// Set up an SQS client
const sqs = new AWS.SQS();

// Define the queue URL (value elided in the original)
const queueUrl = '...';

// Define the maximum chunk size (256 KB, the SQS limit)
const chunkSize = 256 * 1024;

// Split the message into chunks
const chunks = [];
for (let i = 0; i < largeMessage.length; i += chunkSize) {
  chunks.push(largeMessage.slice(i, i + chunkSize));
}

// Send a message for each chunk
for (const chunk of chunks) {
  sqs.sendMessage({ QueueUrl: queueUrl, MessageBody: chunk }, (err) => {
    if (err) console.error(err);
  });
}
```

This splits the largeMessage string into chunks of chunkSize bytes and sends a message for each chunk using the sendMessage method of the SQS client. To reassemble the message on the consumer side, receive and process the messages in a loop:

```javascript
// Receive and process messages from the queue
let reconstructedMessage = '';
while (true) {
  sqs.receiveMessage({ QueueUrl: queueUrl, MaxNumberOfMessages: 10 }, (err, data) => {
    if (err) { console.error(err); return; }
    for (const message of data.Messages) {
      // Concatenate the message body onto the reconstructed message
      reconstructedMessage += message.Body;
      // Delete the message from the queue
      sqs.deleteMessage(
        { QueueUrl: queueUrl, ReceiptHandle: message.ReceiptHandle },
        (delErr) => { if (delErr) console.error(delErr); }
      );
    }
  });
}
```

Using message attributes. SQS allows you to attach up to 10 message attributes to each message, which can store additional metadata or data. You can use an attribute to record each chunk's position so the consumer can retrieve and reassemble the message in the right order. Note that message attributes count toward the same 256 KB per-message limit. Sending each chunk with a chunkNumber attribute:

```javascript
// Send a message for each chunk, tagged with its position
for (let i = 0; i < chunks.length; i++) {
  const messageAttributes = {
    chunkNumber: { DataType: 'Number', StringValue: i.toString() },
  };
  sqs.sendMessage(
    { QueueUrl: queueUrl, MessageBody: chunks[i], MessageAttributes: messageAttributes },
    (err) => { if (err) console.error(err); }
  );
}
```

Receiving a large message sent this way:

```javascript
// Receive and process messages from the queue
const reconstructedChunks = [];
while (true) {
  sqs.receiveMessage(
    { QueueUrl: queueUrl, MaxNumberOfMessages: 10, MessageAttributeNames: ['chunkNumber'] },
    (err, data) => {
      if (err) { console.error(err); return; }
      for (const message of data.Messages) {
        // Get the chunk number from the message attributes
        const chunkNumber = parseInt(message.MessageAttributes.chunkNumber.StringValue);
        // Store the chunk in the reconstructed message array
        reconstructedChunks[chunkNumber] = message.Body;
        // Delete the message from the queue
        sqs.deleteMessage(
          { QueueUrl: queueUrl, ReceiptHandle: message.ReceiptHandle },
          (delErr) => { if (delErr) console.error(delErr); }
        );
      }
    }
  );
}

// Reassemble the message from the chunks
const reconstructedMessage = reconstructedChunks.join('');
```

For each message, this reads the chunkNumber attribute and stores the body at the corresponding position in the array; once all messages have been received and processed, joining the array yields the original message. If you need to send larger messages, you can also use a different service that supports larger payloads.
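The chunk-and-reassemble pattern above can be separated from the AWS calls and checked on its own. A minimal sketch (splitIntoChunks and reassembleChunks are our names, not SDK methods), with each chunk carrying its index the way the chunkNumber message attribute does:

```javascript
// Split a message into chunks of at most chunkSize characters,
// each tagged with its position in the original message.
function splitIntoChunks(message, chunkSize) {
  const chunks = [];
  for (let i = 0; i < message.length; i += chunkSize) {
    chunks.push({ index: chunks.length, body: message.slice(i, i + chunkSize) });
  }
  return chunks;
}

// Reassemble chunks that may have arrived in any order.
function reassembleChunks(indexedChunks) {
  return indexedChunks
    .slice()                           // don't mutate the caller's array
    .sort((a, b) => a.index - b.index) // restore original order
    .map((chunk) => chunk.body)
    .join('');
}
```

A real consumer also has to decide when all chunks have arrived, for example by sending the total chunk count in another message attribute.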
Two popular options for larger payloads are Amazon Simple Notification Service (SNS) and Amazon Kinesis, which offer options such as message batching, compression, and chunking.

Using the SNS service. You can publish a large message with SNS using the AWS SDK for JavaScript:

```javascript
import * as AWS from 'aws-sdk';

// Set up an SNS client
const sns = new AWS.SNS();

// Define the topic ARN (account ID elided in the original)
const topicArn = 'arn:aws:sns:ap-south-1:...:my-topic';

// Send the message
sns.publish({ TopicArn: topicArn, Message: largeMessage }, (err) => {
  if (err) console.error(err);
});
```

This uses the publish method of the SNS client to send the largeMessage string to the specified topic. (Note that SNS itself also caps messages at 256 KB; truly large payloads need the SNS/SQS extended client libraries, which store the body in S3 and pass a reference.) To receive and process the message on the consumer side, set up a subscription to the SNS topic with a delivery mechanism such as an SQS queue or an HTTP(S) endpoint. For an SQS subscription the endpoint must be the queue ARN (the original example passed the queue URL, which does not work):

```javascript
// Create the subscription (Endpoint must be the queue ARN, not its URL)
sns.subscribe(
  { Protocol: 'sqs', TopicArn: topicArn, Endpoint: queueArn },
  (err) => { if (err) console.error(err); }
);
```

To receive and process the messages from the SQS queue, use the same receiveMessage and deleteMessage approach as in the previous examples. To deliver to an HTTP or HTTPS endpoint instead, the overview of the process is:

1. Create an SNS topic and choose HTTP(S) as the protocol for the endpoint, providing the URL of the endpoint as the address.
2. Subscribe the endpoint to the topic you created in step 1.

Once the subscription is set up, messages published to the topic are forwarded to the HTTP(S) endpoint as a POST request whose body contains a JSON object with the message body and metadata. The endpoint must be able to process the POST request and handle the message promptly; if it cannot, delivery may be retried several times before the message is considered failed. The original CDK snippet used a construct (sqs.HttpSubscription) that does not exist; a corrected sketch with the real CDK subscription constructs:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as sns from 'aws-cdk-lib/aws-sns';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import * as subs from 'aws-cdk-lib/aws-sns-subscriptions';

export class MyStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const topic = new sns.Topic(this, 'MyTopic');

    // Create an SQS queue
    const queue = new sqs.Queue(this, 'MyQueue', {
      visibilityTimeout: cdk.Duration.seconds(300),
    });

    // Fan the topic out to the queue and to an HTTPS endpoint
    topic.addSubscription(new subs.SqsSubscription(queue));
    topic.addSubscription(new subs.UrlSubscription('https://...')); // endpoint elided
  }
}
```

Using the Kinesis service. AWS Kinesis is a fully managed streaming data platform for real-time processing of streaming data at scale, designed for high-volume, high-velocity data streams such as clickstreams, log files, and social media feeds. Here the SQS queue is used to decouple Kinesis: the producer of the data stream (e.g. a log generator or social media platform) sends data to the Kinesis stream without directly communicating with the consumer (e.g. an analytics system or machine learning model). Instead, the producer simply adds a message to the SQS queue, which triggers further processing of the data in the stream. This decoupling makes the system easier to scale and maintain, since producer and consumer operate independently.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as kinesis from 'aws-cdk-lib/aws-kinesis';
import * as sqs from 'aws-cdk-lib/aws-sqs';

export class MyStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Create a Kinesis stream with a single shard
    const stream = new kinesis.Stream(this, 'MyStream', { shardCount: 1 });

    // Create an SQS queue
    const queue = new sqs.Queue(this, 'MyQueue', {
      visibilityTimeout: cdk.Duration.seconds(300),
    });
  }
}
```

This creates a Kinesis stream with a single shard and an SQS queue. (The original snippet called stream.grantWrite(queue); in practice the write grant goes to the producer, such as a Lambda function, not to the queue itself.) The message posted to the SQS queue contains information about the data that was sent to the stream, such as the partition key and the sequence number, which a consumer can use to locate and track the data:

```javascript
import { Kinesis, SQS } from 'aws-sdk';

const kinesis = new Kinesis();
const sqs = new SQS();

export async function handler(event) {
  console.log('Received event:', JSON.stringify(event, null, 2));

  const message = event.Records[0].body;
  const data = JSON.parse(message);

  // Get a shard iterator for the specified shard and sequence number
  const result = await kinesis.getShardIterator({
    StreamName: process.env.STREAM_NAME,
    ShardId: data.ShardId,
    ShardIteratorType: 'AT_SEQUENCE_NUMBER',
    StartingSequenceNumber: data.SequenceNumber,
  }).promise();
  let shardIterator = result.ShardIterator;

  // Read data from the stream using the shard iterator
  let records = await kinesis.getRecords({ ShardIterator: shardIterator }).promise();
  while (records.Records.length > 0) {
    // Process the records that were read from the stream
    for (const record of records.Records) {
      if (record.PartitionKey === data.PartitionKey) {
        // This record has the desired partition key; process it
      }
    }
    // Get the next shard iterator and read the next batch of records
    shardIterator = records.NextShardIterator;
    records = await kinesis.getRecords({ ShardIterator: shardIterator }).promise();
  }

  // Delete the message from the queue once it has been processed
  const deleteResult = await sqs.deleteMessage({
    QueueUrl: process.env.QUEUE_URL,
    ReceiptHandle: event.Records[0].receiptHandle,
  }).promise();
  console.log('Deleted message:', JSON.stringify(deleteResult, null, 2));
}
```

This reads records from the stream using the shard iterator taken from the SQS message and the getRecords method of the Kinesis client, processes the data, and finally deletes the message from the queue using the deleteMessage method of the SQS client, marking it as processed.

Message retention period 🪂

SQS queues can retain messages for a maximum of 14 days. The message retention period determines how long a message stays in the queue before it is automatically deleted; the default is 4 days. You can set it when you create a queue, or update an existing queue with the SetQueueAttributes action of the SQS API or the corresponding method of the AWS SDKs:

```javascript
// Set the message retention period to 14 days (the maximum)
sqs.setQueueAttributes(
  {
    QueueUrl: queueUrl,
    Attributes: { MessageRetentionPeriod: '1209600' }, // 14 days in seconds
  },
  (err) => { if (err) console.error(err); }
);
```

If you need to retain messages for a longer period, you can use a different service such as Amazon S3 or Amazon Glacier, which offer long-term storage options.
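The retention setting above takes a string value in seconds; a small helper (our own, not part of the SDK) that converts days to that value while clamping to the documented SQS bounds (60 seconds minimum, 14 days maximum) can avoid invalid-parameter errors:

```javascript
// SQS accepts MessageRetentionPeriod between 60 seconds and 14 days.
const MIN_RETENTION_SECONDS = 60;
const MAX_RETENTION_SECONDS = 14 * 24 * 60 * 60; // 1,209,600 seconds

// Convert a retention period in days to the clamped seconds value.
function retentionPeriodSeconds(days) {
  const seconds = Math.round(days * 24 * 60 * 60);
  return Math.min(MAX_RETENTION_SECONDS, Math.max(MIN_RETENTION_SECONDS, seconds));
}
```

The result would then be passed as MessageRetentionPeriod: String(retentionPeriodSeconds(14)).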
These services can store messages indefinitely or for a specified period. You can use Amazon S3 to store messages from an SQS queue for long-term retention:

```javascript
import * as AWS from 'aws-sdk';

// Set up SQS and S3 clients
const sqs = new AWS.SQS();
const s3 = new AWS.S3();

// Define the queue URL (elided in the original) and bucket name
const queueUrl = '...';
const bucketName = 'my-s3-bucket';

// Receive and process messages from the queue
while (true) {
  sqs.receiveMessage({ QueueUrl: queueUrl, MaxNumberOfMessages: 10 }, (err, data) => {
    if (err) { console.error(err); return; }
    for (const message of data.Messages) {
      // Store the message in S3, keyed by its message ID
      s3.putObject(
        { Bucket: bucketName, Key: message.MessageId, Body: message.Body },
        (putErr) => {
          if (putErr) { console.error(putErr); return; }
          // Delete the message from the queue
          sqs.deleteMessage(
            { QueueUrl: queueUrl, ReceiptHandle: message.ReceiptHandle },
            (delErr) => { if (delErr) console.error(delErr); }
          );
        }
      );
    }
  });
}
```

For each message, this stores the body in an S3 bucket using the putObject method of the S3 client and then deletes the message from the queue using the deleteMessage method of the SQS client. To retrieve the messages from S3, use the getObject method of the S3 client. You can also set lifecycle policies on the bucket to automatically delete the objects, or transition them to Amazon Glacier, after a specified period.

Visibility timeout 🪂

When a message is received from an SQS queue, it becomes hidden from other consumers for a specified period known as the visibility timeout. This prevents multiple consumers from processing the same message simultaneously. However, if the consumer fails to process the message within the timeout, the message becomes visible again and may be processed by another consumer, which leads to duplicate processing if not handled properly. The visibility timeout is specified in seconds, with a maximum of 12 hours (43,200 seconds) and a default of 30 seconds.

There may be situations where you need to extend the visibility timeout of a message to allow more time for processing. To do so, use the ChangeMessageVisibility action of the SQS API or the corresponding method of the AWS SDKs. For example, extending the timeout by, say, 5 minutes (300 seconds) when processing fails:

```javascript
// Receive and process messages from the queue
while (true) {
  sqs.receiveMessage({ QueueUrl: queueUrl, MaxNumberOfMessages: 10 }, (err, data) => {
    if (err) { console.error(err); return; }
    for (const message of data.Messages) {
      // Process the message
      processMessage(message, (processErr) => {
        if (processErr) {
          console.error(processErr);
          // Processing failed: extend the visibility timeout by 5 minutes
          sqs.changeMessageVisibility(
            { QueueUrl: queueUrl, ReceiptHandle: message.ReceiptHandle, VisibilityTimeout: 300 },
            (visErr) => { if (visErr) console.error(visErr); }
          );
        } else {
          // Processing succeeded: delete the message from the queue
          sqs.deleteMessage(
            { QueueUrl: queueUrl, ReceiptHandle: message.ReceiptHandle },
            (delErr) => { if (delErr) console.error(delErr); }
          );
        }
      });
    }
  });
}
```

For each message, this calls a processMessage function; if it returns an error, the message's visibility timeout is extended using the changeMessageVisibility method, and if it succeeds, the message is deleted from the queue using the deleteMessage method of the SQS client.
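The hide-then-reappear behaviour described above is easy to misread, so here is a toy in-memory model of it (entirely illustrative; real SQS enforces these semantics server-side, and every name here is ours):

```javascript
// Toy queue mimicking SQS visibility-timeout semantics.
class ToyQueue {
  constructor(visibilityTimeoutMs) {
    this.visibilityTimeoutMs = visibilityTimeoutMs;
    this.messages = []; // { body, invisibleUntil }
  }
  send(body) {
    this.messages.push({ body, invisibleUntil: 0 });
  }
  // Receive one visible message; it becomes invisible to other consumers.
  receive(now) {
    const msg = this.messages.find((m) => m.invisibleUntil <= now);
    if (!msg) return null;
    msg.invisibleUntil = now + this.visibilityTimeoutMs;
    return msg;
  }
  // Delete after successful processing, or the message reappears.
  delete(msg) {
    this.messages = this.messages.filter((m) => m !== msg);
  }
}
```

The model shows why a slow consumer causes duplicates: a message that is received but not deleted before its timeout expires is simply handed out again.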
the SQS client Delivery guarantees 🪂SQS is a best effort delivery service which means that it does not guarantee the order of messages or that every message will be delivered If you need stronger delivery guarantees you may need to use a different service such as Amazon SNS or Amazon Kinesis Amazon Simple Queue Service SQS provides different delivery guarantees to ensure the reliability and availability of message delivery Here are the main delivery guarantees provided by SQS At least once delivery This guarantee ensures that each message is delivered at least once but it may be delivered more than once in certain circumstances such as when the consumer fails to process the message or when the visibility timeout is extended This is the default delivery guarantee for SQS queues At least once delivery is a delivery guarantee provided by Amazon Simple Queue Service SQS that ensures that each message is delivered at least once but it may be delivered more than once in certain circumstances This is the default delivery guarantee for SQS queues At least once delivery Here is how the at least once delivery guarantee works in SQS When you send a message to an SQS queue the message is stored in the queue and made available to consumers When a consumer retrieves a message from the queue using the receiveMessage the action of the SQS API or the corresponding method of the AWS SDKs the message is hidden from other consumers for a specified period known as the visibility timeout During the visibility timeout the consumer processes the message and then sends a request to delete the message from the queue using the deleteMessage the action of the SQS API or the corresponding method of the AWS SDKs If the consumer successfully deletes the message from the queue the message is removed from the queue and is not delivered again If the consumer fails to delete the message from the queue the message becomes visible to other consumers again after the visibility timeout expires and the 
process is repeated until the message is successfully deleted from the queue This means that a message may be delivered more than once in the following circumstances The consumer fails to process the message and does not delete it from the queue before the visibility timeout expires The consumer extends the visibility timeout of the message using the changeMessageVisibility the action of the SQS API or the corresponding method of the AWS SDKs To avoid these issues it is important to design your consumer application to process the message and delete it from the queue promptly and to handle duplicate messages if they occur At most once delivery This guarantee ensures that each message is delivered at most once but it may not be delivered at all in certain circumstances such as when the consumer is unable to process the message To use this delivery guarantee you can use the SQS FIFO First In First Out queue which is designed to prevent duplicates and guarantee the order of delivery A FIFO First In First Out queue is a type of Amazon Simple Queue Service SQS queue that preserves the order of messages and ensure that each message is processed and deleted exactly once At most once delivery is a delivery guarantee provided by Amazon Simple Queue Service SQS that ensures that each message is delivered at most once but it may not be delivered at all in certain circumstances To use this delivery guarantee you can use the SQS FIFO First In First Out queue which is designed to prevent duplicates and guarantee the order of delivery Here is how the at most once delivery guarantee works in an SQS FIFO queue When you send a message to an SQS FIFO queue the message is stored in the queue and made available to consumers When a consumer retrieves a message from the queue using the receiveMessage the action of the SQS API or the corresponding method of the AWS SDKs the message is hidden from other consumers for a specified period known as the visibility timeout During the visibility 
timeout the consumer processes the message and then sends a request to delete the message from the queue using the deleteMessage the action of the SQS API or the corresponding method of the AWS SDKs If the consumer successfully deletes the message from the queue the message is removed from the queue and is not delivered again If the consumer fails to delete the message from the queue the message becomes visible to other consumers again after the visibility timeout expires and the process is repeated until the message is successfully deleted from the queue This means that a message may not be delivered at all in the following circumstances The consumer fails to process the message and does not delete it from the queue before the visibility timeout expires The consumer extends the visibility timeout of the message using the changeMessageVisibility the action of the SQS API or the corresponding method of the AWS SDKs To avoid these issues it is important to design your consumer application to process the message and delete it from the queue promptly and to handle the failure to process the message if it occurs Exactly once delivery This guarantee ensures that each message is delivered exactly once but it requires additional effort to implement and may not be suitable for all use cases To use this delivery guarantee you can use the SQS FIFO First In First Out queue in combination with a distributed transaction system such as Amazon DynamoDB or Amazon Aurora to store the message processing state and coordinate the processing of the message Exactly once delivery is a delivery guarantee that ensures that each message is delivered exactly once This guarantee requires additional effort to implement and may not be suitable for all use cases To use this delivery guarantee you can use the SQS FIFO First In First Out queue in combination with a distributed transaction system such as Amazon DynamoDB or Amazon Aurora to store the message processing state and coordinate the 
processing of the message.

You can implement exactly-once delivery using an SQS FIFO queue and Amazon DynamoDB as follows. When you send a message to an SQS FIFO queue, the message is stored in the queue and made available to consumers. When a consumer retrieves a message from the queue using the ReceiveMessage action of the SQS API or the corresponding method of the AWS SDKs, the consumer stores the message processing state in a DynamoDB table using the PutItem action of the DynamoDB API or the corresponding method of the AWS SDKs. The consumer processes the message and then sends a request to delete it from the queue using the DeleteMessage action. If the DeleteMessage request succeeds, the consumer removes the message processing state from the DynamoDB table using the DeleteItem action. If the DeleteMessage request fails, the consumer retrieves the message processing state from the DynamoDB table using the GetItem action and checks whether the message has already been processed. If the message has already been processed, the consumer removes the message processing state from the table and does not process the message again. If the message has not been processed, the consumer processes it and repeats the DeleteMessage request until it succeeds. This pattern uses DynamoDB to store the message processing state and coordinate the processing of the message, which ensures that the message is delivered exactly once.

Throughput limits

🪂 SQS has limits on the number of requests per second that can be made to a queue, as well as on the maximum number of in-flight messages (messages that have been received by a consumer but have not yet been deleted). If you exceed these limits, you can design your consumers to handle the workload more efficiently or consider using a higher throughput
queue type such as an SQS standard queue.

Workarounds

Optimize your consumer design 🪴 One way to avoid exceeding throughput limits is to design your consumers to handle the workload efficiently. This may involve techniques such as batch processing, multithreading, or asynchronous processing to reduce the number of requests made to the queue.

Use a higher throughput queue type 🪴 SQS offers two types of queues: standard and FIFO (First-In, First-Out). Standard queues have higher throughput limits but do not guarantee message order, while FIFO queues have lower throughput limits but guarantee message order. If message order is not critical for your use case, you may be able to increase your throughput by using a standard queue.

Monitor your usage and adjust as needed 🪴 You can use Amazon CloudWatch to monitor your queue usage and adjust your workload as needed to stay within the limits. For example, if you are consistently approaching the in-flight message limit, you may need to increase the number of consumers processing messages from the queue.

Use auto scaling 🪴 If you are using Amazon Elastic Container Service (ECS) or Amazon EC2 to run your consumers, you can use auto scaling to adjust the number of consumers based on the workload. This can help you optimize your usage of SQS and avoid exceeding throughput limits.

Regional availability

🪂 SQS is available in multiple regions, but a queue can only be accessed from within the region it was created in. If you need to access a queue from another region, you can set up a cross-region replication configuration or use a service such as AWS Global Accelerator to access the queue.

Workarounds

Choose the right region 🪴 When creating a queue, you should choose the region that is closest to the consumers that will be accessing the queue. This helps minimize latency and improve performance.

Use cross-region replication 🪴 If you need to access a queue from a different region, you can set up a cross-region replication configuration to
replicate messages to a secondary queue in the destination region. This allows you to access the queue from multiple regions while maintaining a consistent copy of the messages.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as sqs from 'aws-cdk-lib/aws-sqs';

export class MyStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Create the target queue (the replica, acting as a dead-letter queue)
    const targetQueue = new sqs.Queue(this, 'TargetQueue', {
      queueName: 'my-queue-replica',
    });

    // Create the source queue, redriving unprocessable messages to the target
    const sourceQueue = new sqs.Queue(this, 'SourceQueue', {
      queueName: 'my-queue',
      deadLetterQueue: {
        queue: targetQueue,
        maxReceiveCount: 3, // example value
      },
    });
  }
}
```

This sets up the source queue to send messages that it can't process to the target queue. The deadLetterQueue property of the redrive policy specifies the target queue, and the maxReceiveCount property specifies the maximum number of times a message can be delivered to the source queue before it is sent to the dead-letter queue. Keep in mind that cross-region replication for SQS as shown here is based on the dead-letter queue (DLQ) feature: when a message is sent to the DLQ, it means that the message couldn't be processed by the source queue. You can configure the number of times a message can be delivered to the source queue before it is sent to the DLQ using the maxReceiveCount parameter of the RedrivePolicy.

Use AWS Global Accelerator 🪴 AWS Global Accelerator allows you to access resources in different regions using static IP addresses. This can be used to access an SQS queue from a different region without the need for cross-region replication. AWS Global Accelerator is a network service that routes traffic to the optimal AWS Region for lower latency and higher performance. It uses static anycast IP addresses to route traffic to the optimal AWS Region, and it includes health checking to ensure that traffic is routed to healthy endpoints.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as globalaccelerator from 'aws-cdk-lib/aws-globalaccelerator';
import * as sqs from 'aws-cdk-lib/aws-sqs';

export class MyStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Create a new Global Accelerator
    const accelerator = new globalaccelerator.GlobalAccelerator(this, 'Accelerator', {
      acceleratorType: globalaccelerator.AcceleratorType.STATIC,
      enabled: true,
      regions: [globalaccelerator.Regions.US_EAST, globalaccelerator.Regions.US_WEST],
    });

    // Create an SQS queue
    const queue = new sqs.Queue(this, 'Queue', {
      visibilityTimeout: cdk.Duration.seconds(30), // example value
    });

    // Create an accelerator endpoint for the queue
    const endpoint = new globalaccelerator.AcceleratorEndpoint(this, 'Endpoint', {
      accelerator,
      port: 443, // example value
      protocol: globalaccelerator.Protocol.TCP,
      resource: queue,
    });

    // Create a listener for the endpoint
    new globalaccelerator.Listener(this, 'Listener', {
      accelerator,
      port: 443, // example value
      protocol: globalaccelerator.Protocol.TCP,
      endpointGroups: [{ endpoints: [endpoint] }],
    });
  }
}
```

Cost

🪂 Using SQS can incur costs, including charges for the number of requests made, the number of messages sent and received, and the amount of data transferred. You will need to consider these costs and optimize your usage as needed. SQS is charged based on the number of requests and the volume of data transferred. Here are the main factors that affect the cost of using SQS.

Number of requests: SQS charges a request fee for each action performed on a queue, such as sending a message, receiving a message, or deleting a message. The request fees vary depending on the type of queue and the region in which the queue is located; for example, an SQS Standard queue in the US East (N. Virginia) region charges per request for sending a message and per request for receiving and deleting a message.

Data transfer: SQS charges for data transfer based on the amount of data transferred in and out of the service. The data transfer fees vary depending on the region in which the queue is located. For example, the data transfer fees for an SQS Standard queue in the US East (N. Virginia) region are per GB for data
transferred in and per GB for data transferred out.

Number of messages: SQS charges a fee for the number of messages stored in a queue, based on the volume of data stored in the queue. This fee varies depending on the type of queue and the region in which the queue is located.

Visibility timeout: SQS charges a fee for the number of seconds that a message is in the queue and not visible to consumers. This fee varies depending on the type of queue and the region in which the queue is located.

Optimization on cost

There are several ways you can reduce the cost of using Amazon Simple Queue Service (SQS) by optimizing your usage of the service.

Use the appropriate queue type 🪴 SQS offers different queue types with different characteristics and pricing models. For example, the SQS Standard queue provides best-effort delivery at the lowest cost, while the SQS FIFO (First-In, First-Out) queue provides guaranteed delivery and ordering at a higher cost. Choose the queue type that best fits your requirements and optimize your usage accordingly.

Optimize the number of requests 🪴 SQS charges a request fee for each action performed on a queue, such as sending a message, receiving a message, or deleting a message. To reduce the cost of requests, you can batch requests together using the SendMessageBatch and DeleteMessageBatch actions of the SQS API or the corresponding methods of the AWS SDKs, which allow you to send or delete multiple messages in a single request.

Optimize the data transfer 🪴 SQS charges for data transfer based on the amount of data transferred in and out of the service. To reduce the cost of data transfer, you can minimize the size of the messages you send and receive, and compress the data if possible. You can also use
Amazon CloudWatch to monitor the data transfer volume and identify opportunities to optimize your usage.

Optimize the number of messages 🪴 SQS charges a fee for the number of messages stored in a queue, based on the volume of data stored in the queue. To reduce the cost of storing messages, you can delete messages from the queue as soon as they are processed, and use the SQS dead-letter queue feature to store and analyze failed messages.

Optimize the visibility timeout 🪴 SQS charges a fee for the number of seconds that a message is in the queue and not visible to consumers. To reduce the cost of the visibility timeout, you can set a shorter visibility timeout for messages that can be processed quickly and a longer visibility timeout for messages that require more time to process. You can also use the ChangeMessageVisibility action of the SQS API or the corresponding method of the AWS SDKs to extend the visibility timeout of a message if necessary.

These are some of the main limitations of SQS to consider. It is important to carefully evaluate your specific needs and requirements to determine whether SQS is the right solution for your use case.

Thanks for your support! It would be great if you would like to Buy Me a Coffee to help boost my efforts. Also, feel free to comment or review if you find something that could be wrong or explained in a better way; I am open to your thoughts and ready to clarify, so that our readers get the best content.

Original post at Dev Post. Reposted at dev.to/aravindvcyber. Check out lots more, coming direct to your mailbox, by joining my free newsletter Exploring Serverless: exploringserverless.substack.com |
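The receive / visibility-timeout / redelivery loop described in the article above can be sketched as a small in-memory simulation. SimQueue and its methods are illustrative stand-ins invented for this example, not the real SQS SDK; the point is only that an undeleted message reappears after its visibility timeout, which is why delivery is at-least-once.

```typescript
// In-memory model of SQS at-least-once semantics (illustrative, not the SDK):
// a received message is hidden for `visibilityTimeout` ticks; if it is never
// deleted, it becomes visible and is delivered again.
type Message = { id: string; body: string; hiddenUntil: number };

class SimQueue {
  private messages: Message[] = [];
  private clock = 0;

  send(id: string, body: string): void {
    this.messages.push({ id, body, hiddenUntil: 0 });
  }

  tick(n: number): void {
    this.clock += n;
  }

  receive(visibilityTimeout: number): Message | undefined {
    const msg = this.messages.find((m) => m.hiddenUntil <= this.clock);
    if (msg) msg.hiddenUntil = this.clock + visibilityTimeout; // hide it
    return msg;
  }

  delete(id: string): void {
    this.messages = this.messages.filter((m) => m.id !== id);
  }
}

const q = new SimQueue();
q.send("m1", "hello");

const first = q.receive(30); // consumer A receives "m1"
q.tick(31);                  // A crashes; the visibility timeout expires
const second = q.receive(30); // "m1" is delivered a second time
// first and second refer to the same message: at-least-once, not exactly-once
```

Deleting the message inside the visibility window (`q.delete("m1")` before `tick`) is what prevents the redelivery, which is the "delete promptly" advice from the article.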
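The exactly-once pattern above (DynamoDB as a processing-state store) hinges on an idempotency check before doing the work. A minimal sketch, with a plain Map standing in for the DynamoDB table (PutItem/GetItem become set/get) and handleMessage as a hypothetical consumer callback:

```typescript
// Map standing in for the DynamoDB processing-state table (illustrative).
const processedTable = new Map<string, boolean>(); // messageId -> processed?
const sideEffects: string[] = [];

function handleMessage(messageId: string, body: string): void {
  if (processedTable.get(messageId)) {
    // Already processed on an earlier delivery: skip the work and only
    // retry the DeleteMessage step (elided here).
    return;
  }
  sideEffects.push(body);              // the actual work, done exactly once
  processedTable.set(messageId, true); // record state before deleting
  // DeleteMessage would follow; on success, remove the record (DeleteItem).
}

// A redelivered message triggers the side effect only once:
handleMessage("m1", "charge-card");
handleMessage("m1", "charge-card"); // duplicate delivery is absorbed
```

In the real pattern a DynamoDB conditional write replaces the `get`/`set` pair so that two concurrent consumers cannot both pass the check.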
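For the SendMessageBatch / DeleteMessageBatch optimization above, note that SQS batch calls accept at most 10 entries per request, so producers typically chunk their messages first. A sketch of that chunking step (the chunk helper is illustrative; no real SDK calls are made here):

```typescript
// Split a message list into batches of at most `size` (SQS allows 10).
function chunk<T>(items: T[], size = 10): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const messages = Array.from({ length: 25 }, (_, i) => `msg-${i}`);
const batches = chunk(messages);
// 25 messages become 3 SendMessageBatch calls instead of 25 SendMessage calls,
// cutting the per-request charges the article describes.
```

Each batch would then be mapped to SendMessageBatch entries (id plus message body) in the real SDK call.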
2022-12-18 09:22:44 |
Overseas science |
NYT > Science |
How Can Tainted Spinach Cause Hallucinations? |
https://www.nytimes.com/2022/12/18/world/australia/spinach-hallucinations.html
|
brain |
2022-12-18 09:29:25 |
News |
BBC News - Home |
1,200 troops to cover ambulance and border strikes |
https://www.bbc.co.uk/news/uk-64012800?at_medium=RSS&at_campaign=KARANGA
|
action |
2022-12-18 09:42:23 |
News |
BBC News - Home |
Ukraine: Russia to deploy musicians to front to boost morale |
https://www.bbc.co.uk/news/world-europe-64016599?at_medium=RSS&at_campaign=KARANGA
|
brigade |
2022-12-18 09:11:58 |
News |
BBC News - Home |
England: Steve Borthwick to be confirmed as new head coach in coming days |
https://www.bbc.co.uk/sport/rugby-union/64017216?at_medium=RSS&at_campaign=KARANGA
|
coach |
2022-12-18 09:19:52 |
Subculture |
ラーブロ |
Pizzeria luna e Dolce @ Shibamata: Margherita |
http://ra-blog.net/modules/rssc/single_feed.php?fid=205854
|
pizzerialunaedolce |
2022-12-18 09:30:41 |
Hokkaido |
Hokkaido Shimbun |
Suzuki announces candidacy in Kitami for the Hokkaido assembly election |
https://www.hokkaido-np.co.jp/article/776911/
|
出馬表明 |
2022-12-18 18:53:00 |
Hokkaido |
Hokkaido Shimbun |
Seals pop up, hearts warm up: special winter opening at Muroran Aquarium |
https://www.hokkaido-np.co.jp/article/776910/
|
市立室蘭水族館 |
2022-12-18 18:51:00 |
Hokkaido |
Hokkaido Shimbun |
Lake Toya group receives Maeda Ippoen Award for its contribution to eradicating signal crayfish |
https://www.hokkaido-np.co.jp/article/776909/
|
特定外来生物 |
2022-12-18 18:50:00 |
Hokkaido |
Hokkaido Shimbun |
Ambulance hits median strip in Nanae; woman being transported injures her arm |
https://www.hokkaido-np.co.jp/article/776908/
|
七飯町大中山 |
2022-12-18 18:48:00 |
Hokkaido |
Hokkaido Shimbun |
Kwansei Gakuin ties record with fifth straight Koshien Bowl title, defeating Waseda |
https://www.hokkaido-np.co.jp/article/776900/
|
全日本大学選手権 |
2022-12-18 18:34:14 |
Hokkaido |
Hokkaido Shimbun |
Night-view train passengers welcomed as locals light up Satsukari Station in Kikonai, also on December 20 and 23 |
https://www.hokkaido-np.co.jp/article/776904/
|
地元住民 |
2022-12-18 18:43:00 |
Hokkaido |
Hokkaido Shimbun |
Snow Miku livens up winter tourism with illustration exhibitions and stamp rallies in Hakodate and Hirosaki |
https://www.hokkaido-np.co.jp/article/776903/
|
青森県弘前市 |
2022-12-18 18:41:00 |
Hokkaido |
Hokkaido Shimbun |
64% oppose defense tax increase as cabinet approval stays low, Kyodo News poll finds |
https://www.hokkaido-np.co.jp/article/776878/
|
世論調査 |
2022-12-18 18:24:47 |
Hokkaido |
Hokkaido Shimbun |
Koji Ryu of C-C-B, drummer and vocalist, dies |
https://www.hokkaido-np.co.jp/article/776898/
|
笠浩二 |
2022-12-18 18:34:12 |
Hokkaido |
Hokkaido Shimbun |
Strong winter pressure pattern brings stormy weather to wide areas; JMA warns of heavy snow through the 19th |
https://www.hokkaido-np.co.jp/article/776902/
|
冬型の気圧配置 |
2022-12-18 18:29:00 |
Hokkaido |
Hokkaido Shimbun |
North Korea does not report Kim Jong Un palace visit on anniversary of Kim Jong Il's death |
https://www.hokkaido-np.co.jp/article/776886/
|
北朝鮮メディア |
2022-12-18 18:07:13 |
Hokkaido |
Hokkaido Shimbun |
Horse racing: favorite Dolce More claims first G1 win in Asahi Hai Futurity Stakes |
https://www.hokkaido-np.co.jp/article/776899/
|
朝日杯fs |
2022-12-18 18:22:00 |
Hokkaido |
Hokkaido Shimbun |
Levanga stop losing streak with 97-92 win over Hiroshima |
https://www.hokkaido-np.co.jp/article/776896/
|
連敗 |
2022-12-18 18:08:00 |
Hokkaido |
Hokkaido Shimbun |
Tokyo reports 13,646 COVID-19 infections, 11 deaths, 32 in serious condition |
https://www.hokkaido-np.co.jp/article/776895/
|
新型コロナウイルス |
2022-12-18 18:02:00 |