Posted: 2023-01-18 15:14:52 | RSS feed digest for 2023-01-18 15:00 (19 items)

Category | Site | Article title / trend word | Link URL | Frequent words / summary / search volume | Registered
IT InfoQ Making IntelliJ Work for the Dev: The Insights Exposed by the New Book Written by Gee and Scott https://www.infoq.com/news/2023/01/know-intellij-book/?utm_campaign=infoq_content&utm_source=infoq&utm_medium=feed&utm_term=global Professional developers spend most of their productive time writing code in an IDE. "Getting to Know IntelliJ IDEA" is a new book that promises to teach you how to make your IDE work for you, amplifying your productivity in return. To extract its gist, InfoQ reached out to the authors, both former developer advocates at JetBrains, with a couple of questions. By Olimpiu Pop 2023-01-18 05:05:00
ROBOT Robot Start Stylishly designed serving and transport robot "Delivery X1 Iris Edition" goes on sale; up to four trays for high-capacity transport https://robotstart.info/2023/01/18/delivery-x1-iris.html 2023-01-18 05:07:55
IT ITmedia all articles [ITmedia PC USER] Thirdwave launches gaming PCs with Radeon RX 7900 XT/XTX https://www.itmedia.co.jp/pcuser/articles/2301/18/news127.html itmediapcuser 2023-01-18 14:43:00
IT ITmedia all articles [ITmedia PC USER] 14/16-inch MacBook Pro with M2 Pro/Max chips arrives! Priced from ¥288,800 https://www.itmedia.co.jp/pcuser/articles/2301/18/news123.html itmediapcusermpromax 2023-01-18 14:30:00
IT ITmedia all articles [ITmedia Mobile] au PAY offers up to 10,000 points back on online payments; McDonald's, TOHO Cinemas and others eligible https://www.itmedia.co.jp/mobile/articles/2301/18/news124.html aupay 2023-01-18 14:18:00
AWS AWS - Webinar Channel Analytics in 15: What's New with Amazon Redshift Following re:Invent 2022 https://www.youtube.com/watch?v=vQM1PdeHMAI Amazon Redshift continues to reinvent data warehousing to help you analyze all your data across data lakes, data warehouses, and databases with the best price performance. In this session, hear about the important new features of Amazon Redshift announced at re:Invent and how you can start building for a variety of use cases today. 2023-01-18 05:22:31
python New posts tagged Python - Qiita [Python] Retrying on Selenium timeout errors https://qiita.com/Qiitaman/items/9575a1331edc9f83f3f9 doutreceivingmessagefr 2023-01-18 14:27:35
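The Qiita post above covers retrying Selenium calls that time out. A generic retry decorator along those lines might look like this; it is a sketch, not the post's code, and `TimeoutError_` and `flaky` are stand-ins (in real Selenium code you would catch `selenium.common.exceptions.TimeoutException` instead):

```python
import time
from functools import wraps

class TimeoutError_(Exception):
    """Stand-in for selenium.common.exceptions.TimeoutException."""

def retry(times=3, delay=0.0, exceptions=(TimeoutError_,)):
    """Retry the wrapped function up to `times` attempts on the given exceptions."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except exceptions as exc:
                    last = exc
                    time.sleep(delay)  # back off before the next attempt
            raise last  # all attempts failed: surface the last error
        return wrapper
    return decorator

@retry(times=3)
def flaky(counter={"n": 0}):
    # Hypothetical: fails twice, then succeeds, mimicking a page
    # element that only appears on a later attempt.
    counter["n"] += 1
    if counter["n"] < 3:
        raise TimeoutError_("element not found in time")
    return "loaded"
```

With Selenium this would typically wrap a `WebDriverWait(driver, timeout).until(...)` call, with a non-zero `delay` between attempts.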
js New posts tagged JavaScript - Qiita How to fix ElasticBeanstalk's "Unknown or duplicate parameter: NodeCommand" https://qiita.com/ysk-s/items/5de61640a99ce1a5b5e1 awsbeanstalk 2023-01-18 14:59:16
Linux New posts tagged Ubuntu - Qiita Making full use of a Lenovo G500 for working from home https://qiita.com/yamadaakira/items/31404d307e949ed926de gbyte 2023-01-18 14:36:28
AWS New posts tagged AWS - Qiita How to fix ElasticBeanstalk's "Unknown or duplicate parameter: NodeCommand" https://qiita.com/ysk-s/items/5de61640a99ce1a5b5e1 awsbeanstalk 2023-01-18 14:59:16
Tech blog Developers.IO [2/15 (Wed), remote] Classmethod Group will hold a company information session https://dev.classmethod.jp/news/jobfair-230215/ company information session 2023-01-18 05:42:13
Overseas TECH DEV Community Serverless Latency: Understanding and Reducing the Delay https://dev.to/aws-builders/serverless-latency-understanding-and-reducing-the-delay-2nna

Serverless computing has become increasingly popular in recent years thanks to its ability to scale on demand and its pay-per-use model for computing resources. At the AWS re:Invent keynote, Dr. Werner Vogels gave a great speech explaining that the world is asynchronous, and that is excellent; but if I step back and look at many applications, I see a lot of synchronous workloads that I could convert to asynchronous ones. The fact is that the world is complex, and sometimes, for many reasons, you cannot switch to asynchronous.

During the past year I went to conferences and user groups. Like everybody else, I follow the serverless gurus, and I have noticed that they all speak about asynchronicity in a serverless context when they show the "hello world" serverless example. There is always, in my personal opinion, some confusion caused by the fact that serverless is sold as:

- it can scale from zero to thousands in an instant
- it is perfect for unpredictable and spiky traffic

While that is true, I found that the missing qualifiers in those statements are:

- if your workload is low
- if the spiky traffic stays inside the Lambda quota

If you are not in the above cases, you will usually hear:

- convert everything to asynchronous
- try to move to a region with a better quota
- let the Lambda service keep returning errors to your users for X minutes
- do not use serverless

All of the above makes sense on paper and in a perfect world, but sadly it is only sometimes possible, and because I am stubborn as a mule, I decided to dig a bit deeper and be more creative.

Latency is a complex subject composed of many factors; I have written about "the hidden serverless latency" in the past. I can optimise:

- how the user reaches my front door
- which front door I select
- the AWS Lambda duration

Of course I can also cache: at the client app, at CloudFront, at API Gateway, in the Lambda function, or in front of the datastore. Caching can be complex and comes with its own challenges, and I could be using services in the chain that do not provide any caching capability, so here I am concentrating on AWS Lambda.

Legacy application. Imagine I have a containerised application that receives an API input and inserts the payload into a database. Once the item is inserted, I must communicate this action to other services/departments and save the request for future analysis. The service layer holds the logic to run all the steps and send the correct information to each service/department. This design has some obvious problems and becomes complex to update and maintain; introducing new features, languages, frameworks, and technologies becomes very hard (it is just an example, take it for what it is):

- if I need to change one operation, I need to deploy everything
- if I need to add a new operation, I need to deploy everything
- if I need to change the flow inside the service layer, I need to test everything carefully
- if the data structure changes, I could have compatibility issues

Taking that as what I have inherited, for better or worse, I now have new requirements (again, as an example):

- move to the cloud
- use native serverless services where possible
- improve the scalability of the application
- make the application more maintainable, because currently deploying takes more than an hour and each bug takes days to fix

Moving to the cloud. Assuming I am not a serverless expert, I do my research and start replacing my legacy application with serverless native services. For reasons that are not important to this article, I end up with something like this: I have replaced Express.js with Amazon API Gateway, and the service layer code now runs inside AWS Lambda, where I orchestrate the connections with the old operations; I have also managed to move some of the procedures into separate AWS Lambda functions. At this point I have satisfied all my new requirements, my boss is happy, and I deploy to production.

Does it scale? It depends on what I want to achieve. If I develop this application without any best practices, it will run with this setup:

- Node.js
- Lambda at the default memory size
- no parallelism
- no use of the execution context
- the default Lambda burst quota and concurrency limit

That results in a throughput in the low thousands of TPS. The integration between API Gateway and Lambda is synchronous, and the quotas of the two services are different: API Gateway allows 10,000 requests per second by default, while Lambda by default serves only 1,000 concurrent requests (with a burst of up to 3,000, depending on the region). So if the application receives more concurrent requests than that, some will be throttled until Lambda scales out, at 500 additional concurrent executions per minute. The faster your Lambda is, the more load the application can support.

AWS Lambda is a serverless computing service offered by Amazon Web Services (AWS). As with any serverless computing service, latency — the time it takes for the code to execute in response to a request — can be an issue, especially for front-end-facing applications. It is therefore crucial to understand AWS Lambda's scaling and throughput concepts. Several factors can contribute to latency in serverless computing, including:

- Cold start: when a Lambda function is invoked for the first time, there may be a delay while the necessary resources are provisioned, the code is initialised, and so on. Cold starts can add significant latency; they are a small percentage in applications with a constant load, but they can still influence scalability.
- Network latency: the distance between the request and the front door of the application.
- Integration latency: the time it takes for serverless services to communicate with each other, and for a Lambda function to talk to other services through the AWS SDK. It can be caused by various factors, including internal AWS network delays, the time to establish a connection, and the time to send and receive data.
- Memory allocation: the amount of memory allocated to a Lambda function also affects its latency; if a function lacks resources, it takes longer to execute.

This is what I consider a monolithic "fat Lambda": a function that includes a large amount of code or dependencies, is more resource-intensive to execute, and may have longer cold start times. In this design, one AWS Lambda function does too many things:

- emit events using different services
- insert the request into DynamoDB
- save the request into S3
- run a Step Functions flow

Several strategies can minimise integration latency in a Lambda context:

- Caching data: caching locally or in a cache service reduces the time needed to retrieve data from a remote resource, especially if the information is accessed frequently.
- Using asynchronous communication: message queues and event-driven architectures help decouple functions from resources and reduce execution time.
- Minimising the size and complexity of code and dependencies: keeping them as small and straightforward as possible reduces the time functions take to initialise, which helps reduce cold start latency.
- Choosing the right compute architecture: AWS Lambda offers arm64 and x86_64 architectures, and selecting the right one for your functions can reduce latency.
- Parallelism over sequential processing: executing multiple tasks concurrently inside a function improves performance, particularly for workloads that divide easily into independent tasks.
- Using the Lambda execution context: since a Lambda function is invoked many times, we can use the execution context to initialise database connections, HTTP clients, SDK clients, and so on, doing the initialisation once per container instead of once per invocation.
- Using a faster runtime: Rust is the fastest runtime you can use, and thanks to the single-responsibility-function concept the code stays small, keeping Rust's syntax approachable compared with other common languages while bringing significant benefits in Lambda duration and cost.

Following these best practices, I can reach the single-responsibility Lambda and, ideally, an EDA scenario. A single-responsibility Lambda is a function with a small amount of code and minimal dependencies, and it is generally faster and more efficient to execute. Moving in this direction, I can redesign the application into an event-driven architecture where the first Lambda fans the event out to downstream Lambda functions rather than running a predetermined sequence of steps. With this new design I have decoupled my application without touching the business logic: the application still receives an API input and inserts the payload into a database, and once the item is inserted, I fan this action out to the other services/departments and save the request for future analysis.

With this design I have moved to an asynchronous application. I can receive 10,000 requests per second from API Gateway, or even more (the limit can be raised upon request), without worrying about throttling, because the Lambda function consumes messages from SQS, where I control the processing speed through a combination of two settings:

- BatchSize: the number of messages received by each invocation
- Maximum concurrency: caps the number of concurrent invocations of the Lambda function; you set it on the event source mapping, not on the Lambda function itself

In asynchronous workflows it is impossible to pass the function's result straight back to the origin, because the caller only gets an acknowledgement that the message has been accepted. However, there are multiple mechanisms for returning the result to the caller:

- IoT Core
- WebSocket APIs
- custom logic that has the client poll with another call to check the status

I have found the most scalable to be using IoT Core from a Lambda.

Conclusion. Latency can be an issue with serverless applications, but steps can be taken to minimise it. Following these best practices helps reduce latency in your AWS Lambda environment, ensures your functions execute quickly and efficiently, and significantly influences your application's scalability; as I wrote in the past ("serverless scale-ish"), each best practice has a more-or-less direct relationship with scalability. Even if I apply all of the above and manage to:

- increase the scalability of my application
- move to an asynchronous application
- use IoT Core to send a response to the user

I still have a minor issue: the response to the user will be slower, and smaller, than with a synchronous application. At the end of the day, it is all about the trade-off. 2023-01-18 05:30:00
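One of the best practices the article lists, reusing the Lambda execution context, can be sketched as follows. This is a minimal illustration rather than the article's code, and `make_client` is a hypothetical stand-in for an expensive initialisation such as a database connection or SDK client:

```python
# Sketch of Lambda execution-context reuse: objects created at module
# scope survive across warm invocations of the same container, so
# expensive initialisation runs once per cold start, not per request.

INIT_COUNT = 0  # counts how many times the "expensive" setup actually ran

def make_client():
    """Hypothetical stand-in for an expensive client initialisation."""
    global INIT_COUNT
    INIT_COUNT += 1
    return {"connected": True}

# Module scope corresponds to the Lambda execution context:
# this line runs once per cold start.
CLIENT = make_client()

def handler(event, context=None):
    # Reuses CLIENT instead of re-initialising it on every invocation.
    return {"ok": CLIENT["connected"], "inits": INIT_COUNT}

if __name__ == "__main__":
    # Simulate three warm invocations: initialisation happened only once.
    results = [handler({"n": i}) for i in range(3)]
    print(results[-1]["inits"])  # 1
```

Initialising `CLIENT` inside `handler` instead would pay the setup cost on every request, which is exactly the per-invocation latency the article is trying to remove.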
Overseas news Japan Times latest articles Tokyo court upholds acquittal of ex-Tepco executives over Fukushima nuclear crisis https://www.japantimes.co.jp/news/2023/01/18/national/crime-legal/tepco-execs-acquittal/ The court's decision followed a ruling that the utility's former chairman and two former vice presidents could not have predicted the massive tsunami that ... 2023-01-18 14:21:31
News BBC News - Home Ron Jeremy: US porn star declared unfit for sex crimes trial https://www.bbc.co.uk/news/world-us-canada-64313546?at_medium=RSS&at_campaign=KARANGA charges 2023-01-18 05:29:57
News BBC News - Home The Papers: 'Worst day for strikes' and 'bonfire of EU laws' https://www.bbc.co.uk/news/blogs-the-papers-64313083?at_medium=RSS&at_campaign=KARANGA civil 2023-01-18 05:16:57
Business Diamond Online - new articles Why U.S. biotech stocks look set to recover this year - from WSJ https://diamond.jp/articles/-/316310 reason 2023-01-18 14:26:00
IT Weekly ASCII Unique ideas shine! Yokohama Takashimaya sells four types of bread devised by local elementary school children https://weekly.ascii.jp/elem/000/004/120/4120926/ foodiesport 2023-01-18 14:40:00
IT Weekly ASCII Luxury brands, rare overseas brands, and chocolates made from select ingredients gather in one place! "Keio CHOCOLATE MARKET" runs January 31 to February 14 https://weekly.ascii.jp/elem/000/004/120/4120922/ keiochocolatemarket 2023-01-18 14:20:00
IT Weekly ASCII 'Forspoken' Twitter campaign "I Am the New Tanta" offers SSDs as prizes https://weekly.ascii.jp/elem/000/004/120/4120927/ forspoken 2023-01-18 14:15:00
