Posted: 2022-07-11 15:24:54 — RSS feed digest for the 2022-07-11 15:00 slot (27 items)

Category Site Article title / trending term Link URL Frequent words / summary / search volume Date registered
IT ITmedia article list [ITmedia Business Online] Three causes of car breakdowns: failure patterns, and the changes coming to the automotive society https://www.itmedia.co.jp/business/articles/2207/11/news072.html itmedia 2022-07-11 14:30:00
IT ITmedia article list [ITmedia Business Online] Is it true that paying a work-from-home allowance raises overtime pay? https://www.itmedia.co.jp/business/articles/2207/11/news117.html itmedia 2022-07-11 14:30:00
IT ITmedia article list [ITmedia Business Online] FamilyMart renews its PUMA-logo mask, which has sold more than 2 million units — what are its features? https://www.itmedia.co.jp/business/articles/2207/11/news111.html itmedia 2022-07-11 14:15:00
IT ITmedia article list [ITmedia Mobile] "EarFun Air S" wireless earbuds with up to 30 dB noise cancellation go on sale; 20% off until July 13 https://www.itmedia.co.jp/mobile/articles/2207/11/news113.html amazon 2022-07-11 14:03:00
IT IT Leaders (IT information site for information-systems leaders) NeoJapan releases version 5.0 of its business chat "ChatLuck", adding a reaction feature https://it.impress.co.jp/articles/-/23460 chatluck 2022-07-11 14:09:00
Ruby New posts tagged Ruby - Qiita [draw.io] How to add columns and rows to an ER-diagram table https://qiita.com/Natty0404/items/8e3ba060a2d2d6a7e920 table 2022-07-11 14:41:53
Ruby New posts tagged Ruby - Qiita How to resolve Webpacker::Manifest::MissingEntryError - Rails 6.0 environment https://qiita.com/motoya0118/items/3141a8fe99ba75758fa7 xxrailsgscaffolduserrails 2022-07-11 14:23:09
Ruby New posts tagged Rails - Qiita [draw.io] How to add columns and rows to an ER-diagram table https://qiita.com/Natty0404/items/8e3ba060a2d2d6a7e920 table 2022-07-11 14:41:53
Ruby New posts tagged Rails - Qiita How to resolve Webpacker::Manifest::MissingEntryError - Rails 6.0 environment https://qiita.com/motoya0118/items/3141a8fe99ba75758fa7 xxrailsgscaffolduserrails 2022-07-11 14:23:09
Tech blog Developers.IO Trying to deploy PHP with Elastic Beanstalk and connect it to RDS https://dev.classmethod.jp/articles/try-deploy-php-with-elastic-beanstalk-and-try-connecting-to-rds/ Trying to deploy PHP with Elastic Beanstalk and connecting it to RDS: this time I will explain an experiment in deploying PHP with Elastic Beanstalk and connecting it to RDS. The goal… 2022-07-11 05:51:50
Tech blog Developers.IO How to install Visual Studio Code on Windows 10/11 [2022] https://dev.classmethod.jp/articles/how-to-install-visual-studio-code-on-windows10-11-2022/ How to install Visual Studio Code on Windows: this time I will introduce how to install Visual Studio Code on Windows, in the version Visual Studio Co… 2022-07-11 05:30:03
Overseas TECH DEV Community Using Goroutines is Slower?? https://dev.to/jpoly1219/using-goroutines-is-slower-3b53

Ah, goroutines. One of the most defining features of the Go programming language. Once you understand the syntax of goroutines and the theory behind concurrency, you feel as if you just gained a superpower. A hammer, if you will. We get so excited to make everything concurrent (I am definitely guilty). I mean, why not, right? Concurrency solves the issue of blocking code, so making everything as concurrent as possible will speed things up, right? Sometimes too much is too much, and not everything is a nail that you can hammer.

But first, an introduction to concurrency in Go

Reading this post, you probably have at least some experience writing concurrent Go code. But just in case, I will explain concurrency and goroutines quickly. As we get better at programming and build bigger projects, we inevitably run into an obstacle: there is a job that takes at least a couple of seconds. Maybe that job is sending an email to your users. Maybe it is reading and parsing CSV or JSON. Maybe it's just a stupidly complicated calculation. It's a bit better if your program is meant to serve one or two users at a time. However, imagine having to send a million emails, or having to parse a million JSON stream objects. Your service will be blocked by these slow operations, and people will have a horrible experience using it.

How do we solve this? A person thought about this question for some time and decided to think about it more after he cooked dinner. He wanted a nice, juicy grilled chicken. He started marinating the chicken and put it in the fridge to let it sit. While the chicken was sitting in the fridge, he started chopping some lettuce and onions for his salad. Then it came to him: this is what he needed to do. Just let the blocking code run first, and run other bits of the code in the meantime. He can just check on the chicken once it's done marinating and grill it later.

The above story is a gross oversimplification of how concurrent programs work. Go uses goroutines to delegate these tasks. The main goroutine is responsible for running the main function, and the worker goroutines each handle parts of the code to run concurrently. Now, concurrency is a bit different from parallelism, a similar concept. Parallelism is like having two chefs cooking different things at the same time, while concurrency is like having one chef juggling different tasks. Yes, this might be confusing. I think the confusion stems from us treating each goroutine like separate objects. We call them worker goroutines to simplify them, but they aren't actually separate workers working together. They are merely separate tasks that are fired off by a single chef. They are goroutines, not goworkers. Get it? Coroutines and coworkers? Yeah, ok, I'll stop.

It's super effective!

We will use this snippet below for our experiment (the CSV file names are approximate; their digits were stripped when the post was scraped):

    package main

    import (
        "encoding/csv"
        "fmt"
        "os"
    )

    func main() {
        // map of CSV file paths to their parsed contents
        db := map[string][][]string{
            "AgeDataset-V.csv": nil,
            "neo-v.csv":        nil,
            "nba.csv":          nil,
            "airquality.csv":   nil,
            "titanic.csv":      nil,
        }
        for file := range db {
            db[file] = ReadCsv(file)
        }
    }

    func ReadCsv(filepath string) [][]string {
        f, err := os.Open(filepath)
        if err != nil {
            fmt.Println(err)
        }
        defer f.Close()
        csvr := csv.NewReader(f)
        rows, err := csvr.ReadAll()
        if err != nil {
            fmt.Println(err)
            return nil
        }
        return rows
    }

ReadCsv is a very simple function used to read CSV files. It accepts the file's path and returns the data inside the file as a [][]string. The main function contains a db object, which is a map that matches each file path to the data inside. This main function is serial, because the for loop won't move on to the next iteration until the operation inside the loop is done. Here is a concurrent version of the main function (with "sync" added to the imports):

    func main() {
        db := map[string][][]string{
            "AgeDataset-V.csv": nil,
            "neo-v.csv":        nil,
            "nba.csv":          nil,
            "airquality.csv":   nil,
            "titanic.csv":      nil,
        }
        var wg sync.WaitGroup
        wg.Add(len(db))
        for file := range db {
            go func(file string) {
                defer wg.Done()
                db[file] = ReadCsv(file)
            }(file)
        }
        wg.Wait()
    }

We first create wg, a waitgroup that keeps track of how many goroutines are still working. For each file we fire off a goroutine that runs ReadCsv and subtracts from wg once it's done. We wait at the end until every goroutine is done. Our benchmark code is very simple:

    func BenchmarkMain(b *testing.B) {
        for i := 0; i < b.N; i++ {
            main()
        }
    }

The benchmarks were run with go test -bench Main on linux/amd64 with an Intel Core i-series CPU (the exact ns/op figures did not survive transcription). You can see that the concurrent code ran a few milliseconds faster than the serial code. We are happy: in these cases, concurrency does make our code run faster. Let's see what happens when this doesn't happen.

It's not very effective!

Let's see when using goroutines backfires:

    func FindSum(list []int) int {
        sum := 0
        for _, number := range list {
            sum += number
        }
        return sum
    }

    func FindSumConc(list []int) int {
        sum := 0
        var rwm sync.RWMutex
        var wg sync.WaitGroup
        wg.Add(len(list))
        for _, number := range list {
            go func(number int) {
                defer wg.Done()
                rwm.Lock()
                defer rwm.Unlock()
                sum += number
            }(number)
        }
        wg.Wait()
        return sum
    }

These are the two functions we will use to run our second experiment. Both of them return the sum of the integers in a list. The concurrent version uses a mutex to prevent a data race: when one goroutine is writing to sum, no other goroutine can write to it. We will use this benchmark code (the argument to rand.Intn was lost in transcription):

    func BenchmarkFindSum(b *testing.B) {
        list := make([]int, 0)
        for i := 0; i < 1000000; i++ {
            list = append(list, rand.Intn(100))
        }
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            FindSum(list)
        }
    }

    func BenchmarkFindSumConc(b *testing.B) {
        list := make([]int, 0)
        for i := 0; i < 1000000; i++ {
            list = append(list, rand.Intn(100))
        }
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            FindSumConc(list)
        }
    }

In the results, the concurrent version is many times slower than the serial one. Surprising, isn't it? You'd assume that the concurrent version runs faster; after all, there are a million integers to add. But no. Why is this the case? Simply put, it is because of the nature of the two problems. There are two main types of bottlenecks that a software developer encounters:

- CPU-bound jobs. These are jobs that are reliant on your CPU's speed. Jobs like complex calculation, breaking encryption, and finding the nth digit of pi are CPU-bound.
- IO-bound jobs. These are jobs that are reliant on read and write speeds. Jobs like reading data from files and requesting a resource over a network are IO-bound.

Our first example of reading CSV files is IO-bound, because the goroutines have to call the operating system to read files and wait until the data can be used. Our second example of calculating the sum is CPU-bound, because we are constantly calculating until we are done with the last integer in the list.

There is a concept we need to understand when dealing with concurrency. When there are multiple goroutines running, the scheduler that manages the goroutines needs to decide which goroutine to run. Remember, concurrency isn't running several jobs simultaneously; it's about juggling between tasks. This act of juggling is called a context switch. Here's the issue: when we context switch from one goroutine to another, the goroutine that got swapped out is technically paused for the time being. More exactly, it goes into one of two states: waiting or runnable. A waiting state means that the goroutine is waiting on a response from the system or the network. A runnable state means that the goroutine wants attention so that it can run whatever it was doing. Let's think about this. When the goroutine is in a waiting state, the scheduler doesn't have to spend time worrying about it for now, because the goroutine has to wait for a response anyway; the momentary pause is therefore mostly fine. However, if the goroutine is in a runnable state, its calculation is paused until the scheduler swaps it in. A momentary pause here is devastating to performance.

If you need to throw and catch ten balls, it's an IO-bound task. You throw one ball, then move on to the next ball. You're not slowing down your progress when you switch, because when you throw a ball, that ball will need time to fly up and come back down anyway. You are basically throwing the rest of the balls during the downtime. If you need to empty ten baskets of balls, however, it's a CPU-bound task. The only way to make this faster is to throw away the balls faster. Switching from one basket to another stops any emptying from happening in the other baskets. So here, switching doesn't help you finish any faster; if anything, the act of switching to another basket will add latency and slow you down. IO-bound jobs benefit much more from concurrent designs than CPU-bound ones. We can see this in our benchmark results as well.

Conclusion

Hopefully this post wasn't too confusing. I remember getting absolutely lost when trying to study concurrency. If you ever wonder why your concurrent code runs slow, check whether your job is CPU-bound or IO-bound. If it's CPU-bound, you may need to utilize parallelism instead, or, more simply, do some optimization and improve your algorithm's time complexity. Thanks for reading! You can read this post on Medium and my personal site. 2022-07-11 05:41:42
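The post's conclusion recommends parallelism (or a better algorithm) for CPU-bound work. As a minimal sketch of what that could look like for the summing example: instead of one goroutine per element contending for a single mutex, split the slice into one chunk per CPU core so each goroutine sums its own range without any locking. The function name `FindSumChunked` and the chunking scheme are illustrative, not from the original post.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// FindSumChunked sums list using one goroutine per CPU core.
// Each goroutine writes only to its own slot of partial, so no
// mutex is needed; the partial sums are combined at the end.
func FindSumChunked(list []int) int {
	workers := runtime.NumCPU()
	if workers > len(list) {
		workers = 1 // tiny inputs: one worker is enough
	}
	partial := make([]int, workers)
	chunk := (len(list) + workers - 1) / workers // ceil division

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		start := w * chunk
		end := start + chunk
		if end > len(list) {
			end = len(list)
		}
		if start >= end {
			break // no elements left for this worker
		}
		wg.Add(1)
		go func(w, start, end int) {
			defer wg.Done()
			for _, n := range list[start:end] {
				partial[w] += n
			}
		}(w, start, end)
	}
	wg.Wait()

	sum := 0
	for _, p := range partial {
		sum += p
	}
	return sum
}

func main() {
	list := make([]int, 0, 1000)
	for i := 1; i <= 1000; i++ {
		list = append(list, i)
	}
	fmt.Println(FindSumChunked(list)) // prints 500500
}
```

The per-chunk loops still compete for CPU time, so the speedup is bounded by the number of physical cores; for an operation as cheap as integer addition the serial loop may well remain fastest, which is exactly the article's point.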
Overseas TECH DEV Community Consistency between Cache and Database, Part 2 https://dev.to/lazypro/consistency-between-cache-and-database-part-2-2k41

Part 1: Read Aside Caching. Part 2: Enhanced Approaches.

This is the last article in the series. In the previous article we introduced why caching is needed, walked through the Read Aside process and its potential problems, and explained how to improve the consistency of Read Aside. Nevertheless, Read Aside is not enough for high consistency requirements. One of the reasons Read Aside can cause problems is that all users have access to both the cache and the database; when users manipulate data at the same time, inconsistencies occur due to the various possible orderings of operations. We can therefore effectively avoid inconsistency by limiting how data is manipulated, which is the core concept of the next few methods.

Read Through

Read Path: read data from the cache; if the cached data does not exist, the cache itself reads from the database; the cache returns the data to the application client.

Write Path: don't care; usually used in combination with Write Through or Write Ahead.

Potential Problems: the biggest problem with this approach is that not all caches support it, and Redis, the example in this series, does not. Some caches do, such as NCache, but NCache has its own problems. First, it does not support many client-side SDKs: .NET Core is the natively supported platform, and there are not many options left. Besides, it is divided into an open-source version and an enterprise version, and if the open-source version is not used by many people, it is a tragedy when something goes wrong. Even so, the Enterprise version requires a license fee, not only for the infrastructure but also for the software itself.

How to Improve: since NCache has a high cost, can we implement Read Through ourselves? The answer is yes. The application doesn't really care what kind of cache is behind it, as long as it provides data fast enough. Therefore, we can package Redis as a standalone service, called a Data Access Layer (DAL), with an internal API server to coordinate the cache and database. The application only needs to use the defined API to get data from the DAL, and doesn't need to care how the cache works or where the database is.

Write Through

Read Path: don't care; the actual work is usually done through Read Through.

Write Path: data is only written to the cache; the cache updates the database.

Potential Problems: as with Read Through, not every cache supports it, and it must be implemented on your own. In addition, caching is not designed for data manipulation. Many databases have capabilities that caches do not, especially the ACID guarantees of relational databases. More importantly, caching is not suitable for data persistence: when an application writes to a cache and considers the update finished, the cache may still lose the data for some reason, and then that update will never happen again.

How to Improve: as with Read Through, a DAL has to be implemented, but the ACID and persistence problems are still not overcome. So Write Ahead was born.

Write Ahead

Read Path: don't care; the actual work is usually done through Read Through.

Write Path: data is only written to the cache; the cache updates the database.

Potential Problems: similarly, Write Ahead is not supported by many caches. Even though the read path and write path look the same as Write Through, the implementation behind them is very different. Write Ahead was created to solve the problems of Write Through. We again implement a DAL, but unlike Write Through, it is actually backed by an internal message queue rather than a cache. As you can see from the diagram above, the entire DAL architecture becomes more complex, and using the message queue correctly requires more domain knowledge and more human resources to design and implement.

How to Improve: by using message queues, the persistence of changes can be effectively ensured, and message queues also guarantee a certain degree of atomicity and isolation; not as complete as a relational database, but still a basic level of reliability. Moreover, message queues can merge fragmented updates into batches. For example, when an application wants to update three caches and so sends three messages, the DAL worker can merge the three messages into a single SQL statement to reduce access to the database. It is important to note that the message queue must ensure the order of messages, because for database updates, inserting and then deleting has a very different meaning from deleting and then inserting. The way to ensure message order differs slightly for each message queue; in the case of Kafka, it can be achieved by using the correct partition keys. Nevertheless, the complexity of implementing Write Ahead is very high. If you cannot afford such complexity, then Read Aside is a better choice.

Double Delete

We have already talked about two major types of cache patterns: Read Aside, and Read Through / Write Through / Write Ahead. The most fundamental difference between these two types is the complexity of implementation. Read Aside is very easy to implement and very simple to get right; however, it can easily generate various corner cases under heavy concurrent interaction. On the other hand, those corner cases can be avoided by implementing a DAL, but it is very difficult to implement a DAL correctly, and it requires extensive domain knowledge, which further makes a DAL hard to achieve. So is a DAL the only way to reduce the number of corner cases? No, not really. This is what the Double Delete pattern is trying to solve.

Read Path: read data from the cache; if the cached data does not exist, read from the database instead and write the result back to the cache. The process is exactly the same as Read Aside.

Write Path: clear the cache first; then write the data into the database; wait a while, then clear the cache again.

Potential Problems: the purpose of Double Delete is to minimize the time spent in disaster due to Read Aside corner cases. The length of the inconsistency window depends entirely on the waiting time, which equals the maximum time a stale value can survive. But how to wait is itself a difficult practical problem. If we let the client that originally started the write handle it, then the killed-process scenario among the corner cases is still not solved. If someone else performs it asynchronously, then the communication contract and workflow control in between become complicated.

How to Improve: the same corner case as Read Aside applies, but again, it can be reduced by a graceful shutdown.

Conclusion

In this article we introduced many ways to improve consistency. In general, when consistency is not a critical requirement, Cache Expiry is sufficient and requires very little implementation effort; in fact, the widely used CDN is just one case where Cache Expiry is applied. As a scenario becomes more critical and requires higher consistency, consider using Read Aside or even Double Delete. The correct implementation of these two methods provides enough consistency to satisfy most scenarios. However, as consistency requirements continue to increase, more complex implementations such as Read Through and Write Through, or even Write Ahead, become necessary. Although these can improve consistency, they are also costly. First, they require sufficient manpower and domain knowledge to implement. In addition, the time cost of implementation and the maintenance cost afterwards are significantly higher, and there are additional expenses to operate such an infrastructure. To further improve consistency, it is necessary to use more advanced techniques such as consensus algorithms, ensuring the consistency of cache and database content by majority consensus. This is also the concept behind TAO, but I am not going to introduce such a complex approach; after all, we are not Meta (at least I am not). In a general organization, the requirements for consistency are not as strict as, say, several nines or more, and a general organization cannot operate such a complex and large architecture. Therefore, in this article I have chosen practices that we can all achieve; even simple practices, implemented correctly, already provide a high enough level of consistency. 2022-07-11 05:20:25
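The Read Aside read path and Double Delete write path the article describes can be sketched in a few lines of Go. This is a minimal in-memory sketch under stated assumptions: the `Cache` type stands in for a real cache such as Redis, the 50 ms sleep is an arbitrary placeholder for the article's unspecified waiting time, and all names are illustrative.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Cache is a minimal in-memory stand-in for a cache like Redis.
type Cache struct {
	mu   sync.Mutex
	data map[string]string
}

func NewCache() *Cache { return &Cache{data: map[string]string{}} }

func (c *Cache) Get(k string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.data[k]
	return v, ok
}

func (c *Cache) Set(k, v string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[k] = v
}

func (c *Cache) Del(k string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.data, k)
}

// Read is the Read Aside read path: try the cache, fall back to
// the database, and write the result back to the cache.
func Read(cache *Cache, db map[string]string, key string) string {
	if v, ok := cache.Get(key); ok {
		return v
	}
	v := db[key]      // read from the database instead
	cache.Set(key, v) // and write back to the cache
	return v
}

// Write is the Double Delete write path: clear the cache, update
// the database, wait a while, then clear the cache again to evict
// any stale value a concurrent reader wrote back in between.
func Write(cache *Cache, db map[string]string, key, value string) {
	cache.Del(key)  // first delete
	db[key] = value // write to the database
	time.Sleep(50 * time.Millisecond)
	cache.Del(key) // second delete
}

func main() {
	cache := NewCache()
	db := map[string]string{"user:1": "alice"}

	fmt.Println(Read(cache, db, "user:1")) // prints alice (miss, filled from db)
	Write(cache, db, "user:1", "bob")
	fmt.Println(Read(cache, db, "user:1")) // prints bob (stale entry was evicted)
}
```

In production the second delete is often performed asynchronously (or by a separate worker), precisely because of the coordination problems the article points out with having the original client wait.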
Medical Medical/nursing-care CBnews Nursing pay raises may divide professions and hospitals; Japan Hospital Association to request that pharmacists be included in the wage hike https://www.cbnews.jp/news/entry/20220711141129 regular press conference 2022-07-11 14:50:00
Finance NLI Research Institute The Kishida administration's startup policy and points to watch https://www.nli-research.co.jp/topics_detail1/id=71729?site=nli The expert panel is a body whose purpose is to conduct surveys and studies toward building an innovation ecosystem, such as forming growth-oriented capital circulation and strengthening the human-resources base, from the perspective of science, technology, and innovation policy. 2022-07-11 14:51:36
News BBC News - Home Space: First Welsh satellite set to be launched later in 2022 https://www.bbc.co.uk/news/uk-wales-61978510?at_medium=RSS&at_campaign=KARANGA cardiff 2022-07-11 05:14:14
News BBC News - Home 'England have more questions than certainties' https://www.bbc.co.uk/sport/cricket/62107733?at_medium=RSS&at_campaign=KARANGA world 2022-07-11 05:06:18
Business Diamond Online - new articles Long COVID vexes drugmakers as trial goals prove hard to set - from the WSJ https://diamond.jp/articles/-/306285 pharmaceutical companies 2022-07-11 14:11:00
Hokkaido Hokkaido Shimbun Panda Fuhin becomes second-generation traffic-safety ambassador in Wakayama https://www.hokkaido-np.co.jp/article/704278/ traffic safety 2022-07-11 14:46:00
Hokkaido Hokkaido Shimbun Nishikori 159th, Nishioka 99th in men's tennis world rankings https://www.hokkaido-np.co.jp/article/704277/ world rankings 2022-07-11 14:46:00
Hokkaido Hokkaido Shimbun Coffins tell of the Nile's glory: ancient Egypt exhibition opens in Sapporo https://www.hokkaido-np.co.jp/article/704092/ ancient Egypt 2022-07-11 14:32:08
Hokkaido Hokkaido Shimbun Minamifurano town corruption case: guilty verdict for defendant Akiyama on aggravated bribery charge https://www.hokkaido-np.co.jp/article/704262/ Kamikawa region 2022-07-11 14:18:33
Hokkaido Hokkaido Shimbun Ex-SDF member reportedly resented former Prime Minister Nobusuke Kishi, saying he "invited the religious group to Japan" https://www.hokkaido-np.co.jp/article/704260/ former SDF member 2022-07-11 14:13:00
Hokkaido Hokkaido Shimbun Patagonia part-time workers form union, alleging avoidance of conversion to permanent contracts https://www.hokkaido-np.co.jp/article/704259/ contract period 2022-07-11 14:04:00
IT Weekly ASCII The protagonist class is finally unlocked! S-rank hero "Hero of Loto" arrives in "Dragon Quest Tact" on July 16 https://weekly.ascii.jp/elem/000/004/097/4097504/ information program 2022-07-11 14:50:00
IT Weekly ASCII PlatinumGames to stream "Sol Cresta Live: an in-depth look at the collector's edition SP!" on July 15 https://weekly.ascii.jp/elem/000/004/097/4097501/ scheduled release 2022-07-11 14:40:00
IT Weekly ASCII Supporting the solution of management issues with digital technology: "Karatsu City DX Innovation Center" opens https://weekly.ascii.jp/elem/000/004/097/4097495/ issues 2022-07-11 14:10:00
