Posted: 2023-01-06 19:37:59 / RSS feed digest for 2023-01-06 19:00 (46 items)

Category / Site / Article title or trending word / Link URL / Frequent words, summary, search volume / Date registered
IT ITmedia general article feed [ITmedia News] Motorola launches the "ThinkPhone," a ThinkPad-branded smartphone https://www.itmedia.co.jp/news/articles/2301/06/news149.html itmedianewsthinkpad 2023-01-06 18:40:00
IT ITmedia general article feed [ITmedia News] Pokémon GO: "Hanamaru Udon" gyms and PokéStops disappear as sponsorship deal "ended last year" https://www.itmedia.co.jp/news/articles/2301/06/news150.html itmedia 2023-01-06 18:39:00
IT ITmedia general article feed [ITmedia News] Latest episode of "Gundam: The Witch from Mercury" delayed on catch-up streaming due to COVID-19; TV broadcast as scheduled https://www.itmedia.co.jp/news/articles/2301/06/news145.html itmedia 2023-01-06 18:10:00
IT IT Leaders (IT information site for information systems leaders) GlobalLogic acquires South American digital engineering firm Hexacta to meet growing global demand in the field | IT Leaders https://it.impress.co.jp/articles/-/24277 GlobalLogic, a US subsidiary of Hitachi, announced (US local time) that it has signed an agreement to acquire Hexacta, a digital and data engineering company in Uruguay. 2023-01-06 18:30:00
python New posts tagged Python - Qiita [python] Scraping a web page's HTML source in bulk while preserving its directory structure https://qiita.com/akira-hagi/items/749f70127ee1d4fa4206 pythonweb 2023-01-06 18:45:05
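As a rough illustration of the idea in the Qiita title above (not the author's code), here is a minimal Python sketch that saves a page's HTML into a local directory tree mirroring the URL path; the example URL and file layout are assumptions.

```python
# Minimal sketch (not the article's code): save a page's HTML under a local
# directory tree that mirrors the URL path,
# e.g. https://example.com/a/b.html -> ./example.com/a/b.html
import os
from urllib.parse import urlparse

import requests  # assumed available (pip install requests)


def save_page(url: str, out_root: str = ".") -> str:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    parsed = urlparse(url)
    # Mirror host + path as directories; use index.html for directory-style URLs
    path = parsed.path if parsed.path and not parsed.path.endswith("/") else parsed.path + "index.html"
    local_path = os.path.join(out_root, parsed.netloc, path.lstrip("/"))
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    with open(local_path, "w", encoding=resp.encoding or "utf-8") as f:
        f.write(resp.text)
    return local_path


# Example: save_page("https://example.com/docs/intro.html")
```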
python New posts tagged Python - Qiita Trying Oracle Machine Learning for Python (OML4Py) in an OCI Always Free environment https://qiita.com/bambooseven/items/a55f8fce57011042e82e alwaysfree 2023-01-06 18:14:08
python New posts tagged Python - Qiita Running DreamBooth on 8GB of VRAM https://qiita.com/kitsume/items/d1a27316504f83b84bea dreambooth 2023-01-06 18:13:40
AWS New posts tagged AWS - Qiita [AWS / Onamae.com] How to register a domain on Onamae.com, issue an SSL certificate with AWS, and attach it to an existing ELB (plus new-ELB and subdomain patterns) https://qiita.com/Ryo-0131/items/2abcdd0b8c866f89244d https 2023-01-06 18:08:57
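The Qiita article above describes the flow step by step; as a hedged sketch of the same flow driven from code rather than the console, the boto3 calls below request an ACM certificate with DNS validation and attach it to an existing ALB listener. Domain names and ARNs are placeholders, not values from the article.

```python
# Sketch only: request an ACM certificate for a subdomain, then attach it to an
# existing load balancer listener. Replace the placeholders with real values.
import boto3

acm = boto3.client("acm", region_name="ap-northeast-1")
cert = acm.request_certificate(
    DomainName="app.example.com",   # placeholder subdomain registered at the registrar
    ValidationMethod="DNS",         # validated via a CNAME record added at the registrar
)

elbv2 = boto3.client("elbv2", region_name="ap-northeast-1")
elbv2.add_listener_certificates(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/...",  # existing HTTPS listener (placeholder)
    Certificates=[{"CertificateArn": cert["CertificateArn"]}],
)
```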
AWS New posts tagged AWS - Qiita Getting the current AWS account ID with the AWS CLI https://qiita.com/value-urano/items/560f8b95c35cb41ed160 awscli 2023-01-06 18:04:45
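For reference alongside the post above: the lookup is a single STS GetCallerIdentity call. Shown here as a boto3 equivalent (the post itself is about the CLI form, `aws sts get-caller-identity`).

```python
# Print the account ID of the currently configured AWS credentials.
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]
print(account_id)
```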
Docker New posts tagged docker - Qiita Spinning up MinIO with docker-compose and pretending it's S3 from Laravel https://qiita.com/layzy_glp/items/bbf4ee5237bae50db9f2 approo 2023-01-06 18:45:56
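The post above pairs MinIO with Laravel; as a quick check that a compose-launched MinIO really speaks the S3 API, here is a small boto3 sketch (Python is the one language used for examples in this digest). The endpoint and credentials are common docker-compose defaults, not values taken from the post.

```python
# Sketch: talk to a local MinIO container through the S3 API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # MinIO service exposed by docker-compose (assumed)
    aws_access_key_id="minioadmin",          # default MinIO credentials (assumed)
    aws_secret_access_key="minioadmin",
)
s3.create_bucket(Bucket="local-bucket")
s3.put_object(Bucket="local-bucket", Key="hello.txt", Body=b"hello from MinIO")
print(s3.get_object(Bucket="local-bucket", Key="hello.txt")["Body"].read())
```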
Tech blog Developers.IO Tips for hosting webinars over Zoom or Google Meet https://dev.classmethod.jp/articles/how-to-webinar/ googlemeet 2023-01-06 09:08:40
Overseas TECH DEV Community How Python changed the Machine Learning Game since its inception https://dev.to/darkxenium/how-python-changed-the-machine-learning-game-since-its-inception-5bkk Python is a high-level, general-purpose programming language that has become one of the most popular languages for data science and machine learning. Since its inception in the late 1980s, Python has evolved into a powerful tool for analyzing and manipulating data, and it has played a major role in the development and advancement of the field of machine learning. One of the key reasons for Python's popularity in machine learning is its simplicity and ease of use: Python has a clean and intuitive syntax, making it relatively easy for beginners to learn and for experts to read and understand. This has made Python a popular choice for teaching and learning machine learning, as it allows students to focus on the concepts and techniques rather than getting bogged down in syntax and language-specific details. Another factor that has contributed to Python's popularity in machine learning is the vast array of libraries and frameworks available for data manipulation, visualization, and machine learning. The Python ecosystem includes powerful libraries such as NumPy, pandas, and scikit-learn, which provide tools for handling and manipulating large datasets, as well as powerful machine learning algorithms and tools for evaluating and improving model performance. In addition to the core language and its libraries, there are also a number of powerful open-source machine learning frameworks available in Python, such as TensorFlow and PyTorch. These frameworks provide a high-level interface for building and training machine learning models, and they have become popular choices for developing and deploying machine learning applications in production environments. Python's versatility and power have made it a go-to choice for machine learning practitioners and researchers around the world. It is used in a wide variety of applications, including natural language processing, computer vision, and predictive modeling, to name just a few. Overall, Python has had a major impact on the field of machine learning since its inception, and it shows no signs of slowing down. Its simplicity, power, and versatility make it an essential tool for anyone working in the field, and it is likely to continue to be a major player in the world of machine learning for years to come. 2023-01-06 09:53:02
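As a concrete illustration of the scikit-learn point made in the summary above (a generic sketch, not code from the article): a complete train-and-evaluate loop fits in a handful of lines.

```python
# Tiny illustration of the "focus on concepts, not syntax" point:
# load a dataset, train a classifier, and measure accuracy.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```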
Overseas TECH DEV Community Is Blogging Worth it? https://dev.to/darkxenium/is-blogging-worth-it-6hh Blogging has become a popular way for individuals to share their thoughts, ideas, and experiences with a wider audience. But is it really worth the time and effort that goes into creating and maintaining a blog? On one hand, blogging can be a great way to share your passions and interests with others. It can be a creative outlet and a platform for self-expression. Blogging can also be a great way to connect with like-minded people and build a community of readers who share your interests. On the other hand, blogging can be a lot of work. It requires consistent effort to come up with new ideas, write quality content, and engage with your audience. It can also be time-consuming to maintain and promote a blog. Despite these challenges, many people still find blogging to be worth it. For some, the sense of accomplishment and self-fulfillment that comes from sharing their ideas and connecting with others is enough of a reward. For others, blogging can be a way to showcase their expertise and establish themselves as thought leaders in their field. In some cases, blogging can even lead to new opportunities, such as consulting or speaking engagements. But for others, the time and effort that goes into blogging may not be worth the return. If you are considering starting a blog, it's important to consider your motivations and what you hope to gain from the experience. If you are primarily interested in sharing your ideas and connecting with others, then the rewards of blogging may outweigh the challenges. However, if you are hoping to make a significant amount of money or gain a large following, it may be more difficult to see a return on your investment of time and effort. In the end, whether or not blogging is worth it really depends on your individual goals and motivations. If you enjoy writing and connecting with others, and you are willing to put in the time and effort required to maintain a blog, then it could be a rewarding experience. However, if you are only considering blogging as a way to make money or gain fame, it may not be the best use of your time and resources. 2023-01-06 09:44:36
Overseas TECH DEV Community Storage Evolution of Unisound's HPC Platform with JuiceFS https://dev.to/daswu/storage-evolution-of-unisounds-hpc-platform-with-juicefs-36kh

Background
Unisound is an AI company focused on speech and natural language processing technology. Its technology stack has grown into a full-stack AI capability covering images, natural language processing, signals, and more, and it is a leading AI unicorn in China. The company embraces cloud computing and has corresponding solutions in healthcare, hotels, education, and other industries. Atlas is Unisound's internal HPC platform, which supports Unisound with basic computing capabilities such as training acceleration for model iterations.

Atlas Architecture
The top layer is the business layer, which handles speech processing, image processing, natural language processing, etc. The second layer is the control center, which is responsible for data production, data access, and model release. The third layer is the core computing layer, which supports deep learning and data pre-processing. The bottom layer is the infrastructure layer, which consists of the GPU cluster, the computing cluster, and distributed storage; all machines are interconnected with a high-speed InfiniBand network.

Scenarios and Storage Challenges
The initial goal of Atlas is to build a one-stop AI platform, including AI model production, data pre-processing, model development, model training, and model launch. As shown above, each step in the pipeline deals with data, and data pre-processing and model training require relatively large IO.
Data pre-processing: speech processing extracts speech features and converts them into numpy-format files, while image processing transforms the data for training.
Model development: algorithm engineers edit code and debug model algorithms.
Model training: there are multiple rounds of data reading, and the model is written out to the corresponding storage. This step requires very large IO. When the model is launched, the service reads the model file from the storage system.
To summarize our requirements for storage: first, it must work with the entire model development pipeline; second, it must support CPU and GPU data-read tasks; third, our scenario is mainly voice, text, and image data processing, which is characterized by relatively small file sizes, so we need high-performance small-file processing; fourth, in the model training phase we usually have lots of data to read rather than write. Based on these requirements, we need a high-performance and reliable distributed storage system.

History and evolution of storage
In the early days we only had about a dozen GPUs, and we used NFS to build a small-scale cluster. Meanwhile, we evaluated CephFS; at that time CephFS did not perform well in small-file scenarios, so we did not bring it into the production environment. We continued the research and found that Lustre is the most commonly used file system in the HPC space. Tests showed that Lustre performed well at scale, so from then on we used Lustre to host all our data operations. But as more and more GPUs were added, with compute now in the PFLOPS range, the IO of the underlying storage could no longer keep up with the computing capabilities of the application layers. So we started exploring new solutions for storage expansion, while also running into some problems with Lustre.
First, maintenance: Lustre is kernel-based, and troubleshooting a problem sometimes involves rebooting the machine.
Second, technology stack: our cloud platform uses Golang, so we are more inclined to use storage that fits better with the development language. Lustre uses C, which requires more human effort for customization and optimization.
Third, data reliability: Lustre mainly relies on hardware reliability, such as RAID, and on HA for metadata nodes and data nodes. Compared to these, we prefer more reliable software solutions such as triple replicas or erasure coding.
Fourth, the need for multi-level caching: we had used Fluid + Alluxio as distributed acceleration for Lustre, and Alluxio did a good job of speeding up our cluster and reducing the pressure on the underlying storage. But we had been exploring the idea of doing client-side caching directly from the storage system, so that the operation would be more transparent to the user.
When JuiceFS was first open-sourced, we researched its features.
First, features: JuiceFS supports POSIX and can be mounted by HostPath, which is exactly the same as the way we use NAS, so users basically do not have to make any changes. Users can choose the metadata engine and object storage flexibly according to their scenarios: for metadata engines, AI users can choose Redis or TiKV, and there are many options for object storage, such as Ceph and MinIO.
Second, scheduling: JuiceFS supports not only HostPath but also the CSI Driver, which lets users access the storage in a more cloud-native way.
Third, framework adaptation: the POSIX interface is suitable for adapting deep learning frameworks.
Fourth, O&M: there are many mature solutions for the metadata engine and object storage, and JuiceFS has automatic metadata backup and a recycle bin function.
Since JuiceFS is a good fit for the business, we conducted a POC test; the test environment is shown in the figure below. It turns out that, compared to Lustre's direct access to mechanical disks, there is a significant performance improvement using JuiceFS (the smaller the better, as shown in the figure below), thanks to JuiceFS's use of kernel page caching. After the POC we decided to bring JuiceFS into the production environment. The JuiceFS client is currently installed on all GPU compute nodes of the entire Atlas cluster, as well as on all development nodes. JuiceFS connects directly to the Redis clusters and Ceph, and most compute nodes access it via HostPath. The Atlas cluster has also deployed the JuiceFS CSI Driver, so users can use JuiceFS in a cloud-native way.

How JuiceFS was used in Atlas
To ensure data security, each group on Atlas belongs to a different directory; under each directory are the members of the respective group or department, and directories of different groups are not visible to each other. Directory permissions are based on the Linux permission model. When a user submits a training task in the Atlas cluster, the cluster's task submission tool automatically reads the user's UID and GID on the system and injects them into the SecurityContext field of the task Pod submitted by the user, so that the UIDs of all container processes running in the Pod are consistent with the information on the storage system, ensuring permission security.
Node access to JuiceFS implements a multi-level cache: the first level is the memory page cache; the second tier is multiple SSDs on all compute nodes, providing a second level of acceleration; the third tier is Ceph. If the node's SSDs still cannot hold the user's data, it reads from Ceph.
We also integrated the JuiceFS Runtime into Fluid together with the JuiceFS team. Because the cache resides on bare metal, we found that the user's visibility into the cache was poor and cache cleanup was done entirely automatically by the system, so user customizability was low; that is why we integrated JuiceFS into Fluid. Fluid launches JuiceFS-related components, including the FUSE Pod and the Worker Pod, on the nodes: the FUSE Pod provides caching capabilities for JuiceFS clients, and the Worker Pod enables cache lifecycle management, while users are able to visualize cache usage, e.g. the size of cached datasets, cache percentage, and cache capacity. During model training, the JuiceFS FUSE client is used to read data from the metadata engine and the object storage.

Adopting and building JuiceFS
Currently Atlas does not have access to the public network; it is on a private isolated network, so we deploy everything privately. We use Redis as the metadata engine for our production environment. At the time, TiKV was not yet very mature, so we used Redis first as a transition, with Ceph for object storage; the data is persisted once per second. The object storage is a self-hosted Ceph cluster. The Ceph clusters are deployed using Cephadm, and the current production environment runs the Octopus version. We borrowed a lot of industry solutions and made some optimizations at the server level, along with the corresponding tuning at the software level, mainly as follows.
Server:
- Reference configuration: CPU cores, memory, and HDD data disks, with a SAS SSD system disk
- BlueStore
- Disable NUMA
- Upgrade the kernel, with io_uring enabled
- Kernel pid_max: modify /proc/sys/kernel/pid_max
Ceph configuration:
- Ceph RADOS: call the librados interface directly, no S3 protocol
- Bucket shard
- Disable auto-tuning of PGs
- OSD log storage (BlueStore, recommended bare-capacity ratio of block : block.db : block.wal; SSD or NVMe SSD recommended for the last two)
In particular, we needed to upgrade the kernel of the Ceph cluster to a newer version and turn on the io_uring feature, which greatly improves performance. In terms of software, we call the RADOS interface directly rather than going through the S3 protocol, which is a little more efficient. JuiceFS is connected to Ceph RADOS, which is the object storage in Unisound's environment. JuiceFS uses librados to interact with Ceph, so you need to recompile the JuiceFS client, and it is recommended that the librados version correspond to the Ceph version, so pay attention to that. If you use the CSI Driver, the creation of PV/PVC reads /etc/ceph/ceph.conf; pay attention to version support there as well.

Complete monitoring system
The whole chain is now longer: the bottom layer has a metadata engine cluster and a Ceph object storage cluster, and the upper layer has the clients; each layer needs a corresponding monitoring solution. On the client nodes we mainly do log collection. It is important that the logs of each mount point's JuiceFS client are collected and monitored properly, and that logs are rotated so disks do not fill up. Each JuiceFS client should also have appropriate monitoring: for example, check the stats file and logs of each mount point to observe whether the indicators are normal, and then look at the IO and logs of the Redis and Ceph clusters, so that the entire link stays controllable and problems are easier to locate. The diagram above is the Ceph monitoring view: because our client nodes now use SSD cache, data is basically not read from Ceph, most of it is served by the cache, so Ceph traffic is low. The figure above is data captured from JuiceFS monitoring: you can see that the nodes basically hit the cache, and the cache hit rate is relatively high.

Participate in JuiceFS community
While using JuiceFS Community Edition we have been actively involved in building the community. We worked with the Juicedata team to develop the Fluid JuiceFS Runtime. Recently we found that directory-based quota had not yet been developed in the community version, so a few months ago we developed a version that limits the number of files and the file size of a directory; the PR has been submitted, and we are now working with the JuiceFS community to merge it.

Scenarios and benefits of JuiceFS in Atlas
The JuiceFS client's multi-level cache is currently used in text recognition, speech noise suppression, and speech recognition scenarios. Since the data access pattern of AI model training is characterized by many reads and few writes, we make full use of the client-side cache to accelerate IO reads.

AI model training acceleration: Noise Suppression Test
The data used in the noise reduction test are unmerged raw files; each item is a small WAV file in the KB range. We tested the I/O performance of the data-loading phase on a JuiceFS client node with memory cache and batch size fixed, over a dataset several hours in length. According to the test results, in terms of data-reading efficiency alone, JuiceFS achieves a higher it/s for small WAV files than Lustre, a clear performance improvement. JuiceFS effectively accelerates our end-to-end model training and reduces overall model output time.

Text recognition scenarios
In the text recognition scenario, the model is CRNN with a MobileNet backbone. The test environment is shown in the figure. In this test we compared the speed of JuiceFS and Lustre: from the experimental results, reading each batch from JuiceFS is faster than reading it from Lustre. In terms of model convergence time, JuiceFS converges in fewer hours than Lustre, reducing the output time of the CRNN model by a matter of hours.

Model debugging & data processing
During code debugging, multiple users run model tests and code traversal on a debugging machine at the same time. Statistics showed that most users connect to the debug nodes with remote IDEs, build their own virtual environments, and install a large number of packages on Lustre in advance; most of these are small files of tens or hundreds of kilobytes that have to be imported into memory. Previously, with Lustre, demand throughput was high because there were many users and the small-file performance requirements were high, so the results were not good: imports would get stuck, which slowed code debugging and lowered overall efficiency. Later, with the JuiceFS client cache, the first compilation was still slow, but the second was faster and more efficient because the data had all landed in the cache; code jumps and code-hint imports became faster. User testing showed a speedup of several times.

Summary: From Lustre to JuiceFS
In the earlier years, Lustre was stable to use, especially while the cluster storage was still relatively small. As a veteran storage system in the HPC domain, Lustre has powered many of the world's largest HPC systems, with years of experience in production environments. But it has some drawbacks. First, Lustre does not support a cloud-native CSI Driver. Second, Lustre places relatively high demands on maintenance staff, because it is written in C: some bugs cannot be resolved quickly, and the overall openness and activity of the community is not very high.
The advantages of JuiceFS are as follows. First, JuiceFS is a cloud-native distributed storage system, providing a CSI Driver and Fluid for better integration with Kubernetes. Second, it is quite flexible to deploy: there are many options for the metadata engine and the object storage service. Third, maintenance of JuiceFS is simple. Full POSIX compatibility allows deep learning applications to migrate seamlessly, though due to the characteristics of object storage, JuiceFS random-write latency is high. Fourth, JuiceFS supports local caching and kernel page caching, which enables layering and acceleration of hot and cold data. This is something we value and fits our scenario well, but less so for random writes; the community version also does not yet provide distributed caching.

Planning
First, upgrade the metadata engine: TiKV is suitable for scenarios ranging from millions up to billions of files, with high requirements for performance and data security. We have finished internal testing of TiKV and are actively following the community's progress; we will migrate the metadata engine to TiKV. Second, optimize the directory quota: the features of the basic version have been merged into the JuiceFS community version, and we have also discussed with the JuiceFS community how to optimize it in some scenarios. Third, we want to add some non-root features: currently JuiceFS requires root privileges on all nodes, and we want to restrict root privileges to specific nodes. Finally, we will also see whether the community has a QoS solution, such as UID-based or GID-based speed limits.

About the Author
Dongdong Lv, Architect of Unisound HPC Platform. He is responsible for the architecture design and development of the large-scale distributed machine learning platform, as well as application optimization of deep learning algorithms and acceleration of AI model training. His research areas include large-scale cluster scheduling, high-performance computing, distributed file storage, and distributed caching. He is also a fan of the open-source community, especially cloud-native projects, and has contributed several important features to the JuiceFS community.
From Juicedata/JuiceFS. 2023-01-06 09:17:47
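To make the small-file caching argument above concrete, here is a hedged Python sketch that reads a directory of WAV files twice from a JuiceFS mount and times both passes; on the second pass the client-side page/SSD cache described in the article should dominate. The mount path is hypothetical and not taken from the article.

```python
# Illustration of why the client-side cache matters for the small-file WAV workload
# described above: the second pass over a JuiceFS mount is served from page/SSD cache.
# To the reader process, JuiceFS is just a POSIX path.
import os
import time

MOUNT = "/mnt/jfs/noise-suppression/raw"  # hypothetical JuiceFS mount point


def read_all(root: str) -> int:
    """Read every .wav file under root and return the total bytes read."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".wav"):
                with open(os.path.join(dirpath, name), "rb") as f:
                    total += len(f.read())
    return total


for label in ("cold", "warm"):
    start = time.time()
    nbytes = read_all(MOUNT)
    print(f"{label}: {nbytes} bytes in {time.time() - start:.1f}s")
```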
Overseas TECH CodeProject Latest Articles CSV Data Management: unleashing the VBA arrays power https://www.codeproject.com/Tips/5351378/CSV-Data-Management-unleashing-the-VBA-arrays-powe Turn Microsoft Office applications into a masterpiece for cleansing, reshaping, managing, and analyzing data from CSV files. No more intermediary spreadsheets, no more headaches due to DLL updates that make your implementations stop working: the pure VBA solution is here. 2023-01-06 09:11:00
Overseas TECH CodeProject Latest Articles A PWM Based Fan Controller for Arduino https://www.codeproject.com/Articles/5351014/A-PWM-Based-Fan-Controller-for-Arduino arduinocontrol 2023-01-06 09:01:00
Finance NLI Research Institute This week's reports and columns roundup [published 12/27 to 1/6] https://www.nli-research.co.jp/topics_detail1/id=73548?site=nli This week's reports and columns: "Researcher's Eye" on this year's calendar (holidays, the sun and moon, meteor showers, and more); last year the J-REIT market declined. 2023-01-06 18:54:32
Finance NLI Research Institute 2023 crude oil market outlook: a possible source of market turmoil https://www.nli-research.co.jp/topics_detail1/id=73547?site=nli Contents/topics: the 2023 crude oil outlook (sluggish early in the year, recovering from mid-year onward; watch for upside risk, which could become a source of market turmoil); BOJ monetary policy in December (the BOJ decided to raise the tolerated ceiling on long-term interest rates; reception, assessment, and outlook); financial markets (December review and forecast tables: JGB yields, the dollar-yen rate, the euro-dollar rate). Last year's crude oil market saw turbulent swings. 2023-01-06 18:21:42
News @Nikkei digital edition Tesla cuts prices again / China travel restrictions / Ukraine's GDP down 30% https://t.co/nVmp7vIgTB https://twitter.com/nikkei/statuses/1611287647623999492 travel 2023-01-06 09:04:46
News @Nikkei digital edition SDF to disperse ammunition depots across the Nansei Islands with a Taiwan contingency in mind [Nikkei Evening Scoop] https://t.co/k4HyziIRS9 https://twitter.com/nikkei/statuses/1611287382451683329 Nansei Islands 2023-01-06 09:03:42
Overseas news Japan Times latest articles Russian Muppets or American puppets? https://www.japantimes.co.jp/opinion/2023/01/06/commentary/world-commentary/russia-sesame-street/ russian 2023-01-06 18:06:43
News BBC News - Home Harry's Taliban kill remarks tarnish his reputation - ex-commander https://www.bbc.co.uk/news/uk-64185176?at_medium=RSS&at_campaign=KARANGA chess 2023-01-06 09:46:27
News BBC News - Home Gianluca Vialli: Former Chelsea, Juventus and Italy striker dies aged 58 https://www.bbc.co.uk/sport/football/64039302?at_medium=RSS&at_campaign=KARANGA gianluca 2023-01-06 09:46:32
News BBC News - Home House prices drop for fourth month in a row https://www.bbc.co.uk/news/business-64183817?at_medium=RSS&at_campaign=KARANGA halifax 2023-01-06 09:48:04
News BBC News - Home Edwin Chiloba: LGBTQ activist found dead in Kenya https://www.bbc.co.uk/news/world-africa-64184372?at_medium=RSS&at_campaign=KARANGA designer 2023-01-06 09:22:31
Business Fukeiki.com Leader Electronics to cut 10 jobs through voluntary-retirement offers - Fukeiki.com https://www.fukeiki.com/2023/01/leader-electronics-cut-10-job.html job cuts 2023-01-06 09:21:08
Hokkaido Hokkaido Shimbun Highly pathogenic avian influenza found in a white-tailed eagle in Chitose, the 24th case in Hokkaido https://www.hokkaido-np.co.jp/article/784255/ avian flu 2023-01-06 18:35:00
Hokkaido Hokkaido Shimbun Major Hong Kong property developer to build villa lots in Kutchan, expanding its resort https://www.hokkaido-np.co.jp/article/784254/ real-estate development 2023-01-06 18:33:00
Hokkaido Hokkaido Shimbun "Civil-servant maiko" wanted: Gero Onsen in Gifu recruits for revitalization https://www.hokkaido-np.co.jp/article/784253/ Gero Onsen 2023-01-06 18:32:00
Hokkaido Hokkaido Shimbun Murakami opens self-training session, says he "wants to bat cleanup" at the WBC https://www.hokkaido-np.co.jp/article/784240/ Japan national team 2023-01-06 18:17:05
Hokkaido Hokkaido Shimbun Wave of social media suspensions in China as authorities appear to tighten internet controls https://www.hokkaido-np.co.jp/article/784252/ posting sites 2023-01-06 18:32:00
Hokkaido Hokkaido Shimbun Beer market grows for the first time in 18 years as commercial demand recovers from the pandemic https://www.hokkaido-np.co.jp/article/784251/ demand 2023-01-06 18:32:00
Hokkaido Hokkaido Shimbun Asahikawa Loft to open at the end of March inside "Aeon Mall Asahikawa Ekimae," returning after seven years https://www.hokkaido-np.co.jp/article/784247/ household goods 2023-01-06 18:28:47
Hokkaido Hokkaido Shimbun Year-end and New Year travel: JR and domestic flight passenger numbers 1.1 times those of FY2021 https://www.hokkaido-np.co.jp/article/784250/ year-end holidays 2023-01-06 18:29:00
Hokkaido Hokkaido Shimbun More than 13,000 New Year's cards for Hikonyan, some wishing for a "bravo" year https://www.hokkaido-np.co.jp/article/784238/ Hikone, Shiga Prefecture 2023-01-06 18:13:36
Hokkaido Hokkaido Shimbun J1 Kashima start the season with their first practice; manager Iwamasa determined to win the title https://www.hokkaido-np.co.jp/article/784248/ determination 2023-01-06 18:28:00
Hokkaido Hokkaido Shimbun <Saturday BANBA> Review of the four year-end and New Year graded races: high hopes for King Festa's future; Aono Black wins from behind https://www.hokkaido-np.co.jp/article/784245/ Obihiro Racecourse 2023-01-06 18:15:00
Hokkaido Hokkaido Shimbun Kita Gas to supply LNG fuel to new ferries from 2025 https://www.hokkaido-np.co.jp/article/784242/ Hokkaido Gas 2023-01-06 18:12:00
Hokkaido Hokkaido Shimbun Tokyo reports a record 35 COVID-19 deaths and 20,720 new infections https://www.hokkaido-np.co.jp/article/784241/ novel coronavirus 2023-01-06 18:09:00
Hokkaido Hokkaido Shimbun Ski Jumping World Cup back in Sapporo after three years: two women's events at Okurayama from January 7 https://www.hokkaido-np.co.jp/article/784237/ event 2023-01-06 18:06:25
Hokkaido Hokkaido Shimbun Funding for measures against the declining birthrate to be spelled out after April; consumption tax hike floated within the LDP https://www.hokkaido-np.co.jp/article/784239/ declining-birthrate measures 2023-01-06 18:04:00
IT Weekly ASCII The joy of spending time with beautiful girls designed by Mel Kishida! A first look at "BLUE REFLECTION SUN/燦" https://weekly.ascii.jp/elem/000/004/117/4117829/ bluereflectionsun 2023-01-06 18:30:00
IT Weekly ASCII [Winter 2023 anime] Six titles, including season 2 of "Don't Toy with Me, Miss Nagatoro" and "TRIGUN STAMPEDE" https://weekly.ascii.jp/elem/000/004/119/4119716/ asciijp 2023-01-06 18:30:00
IT Weekly ASCII DMM GAMES opens pre-registration for the heroic RPG "BLUE REFLECTION SUN/燦" on January 6 https://weekly.ascii.jp/elem/000/004/119/4119707/ bluereflectionsun 2023-01-06 18:15:00
IT Weekly ASCII NTT Docomo's 5G Wi-Fi router "Wi-Fi STATION SH-54C": uplink speeds of up to 1.1Gbps https://weekly.ascii.jp/elem/000/004/119/4119708/ communication speed 2023-01-06 18:10:00
IT Weekly ASCII "fufuly," a breathing cushion that synchronizes with your breathing rhythm when you hug it https://weekly.ascii.jp/elem/000/004/119/4119715/ fufuly 2023-01-06 18:10:00
