IT |
InfoQ |
Google Trains "Gopher," a 280-Billion-Parameter AI Language Model |
https://www.infoq.com/jp/news/2022/01/deepmind-gopher/?utm_campaign=infoq_content&utm_source=infoq&utm_medium=feed&utm_term=global
|
Google subsidiary DeepMind has announced "Gopher," an AI natural language processing (NLP) model with 280 billion parameters. |
2022-01-25 03:19:00 |
IT |
InfoQ |
Becoming a Better Tech Leader Through Coaching |
https://www.infoq.com/jp/news/2022/01/better-tech-leader-coaching/?utm_campaign=infoq_content&utm_source=infoq&utm_medium=feed&utm_term=global
|
2022-01-25 03:11:00 |
TECH |
Engadget Japanese |
Meta Announces "RSC," an AI Supercomputer Billed as the World's Fastest |
https://japanese.engadget.com/meta-ai-supercomputer-rsc-034059888.html
|
AI Research SuperCluster (RSC) |
2022-01-25 03:40:59 |
TECH |
Engadget Japanese |
Pokémon GO "Hoppip" Community Day on February 12; New "XL Candy in Parks" Mechanic |
https://japanese.engadget.com/pokemon-go-poppip-community-day-033839728.html
|
to be held |
2022-01-25 03:38:39 |
TECH |
Engadget Japanese |
Mac mini with M1 Pro/Max May Be Announced at a Spring 2022 Event; New iMac Pro Also a Possibility |
https://japanese.engadget.com/redesigned-macmini-m1pro-max-spring-event-032052573.html
|
iMac Pro |
2022-01-25 03:20:52 |
TECH |
Engadget Japanese |
James Webb Space Telescope Arrives in Its L2 Orbit; On to Optical Instrument Alignment |
https://japanese.engadget.com/jwst-arrives-at-its-final-orbit-030049221.html
|
optical instruments |
2022-01-25 03:00:49 |
ROBOT |
ロボスタ |
SB C&S Launches a Free Trial Plan for "AIMINA," Which Lets You Easily Learn, Build, and Try AI; Regular Plans Planned for May 2022 |
https://robotstart.info/2022/01/25/aimina-ai-platform.html
|
AIMINA |
2022-01-25 03:21:40 |
IT |
ITmedia: All Articles |
[ITmedia News] Qiita Adds a Feature for Direct Dialogue with Its Operators to Improve Transparency |
https://www.itmedia.co.jp/news/articles/2201/25/news115.html
|
2022-01-25 12:30:00 |
IT |
ITmedia: All Articles |
[ITmedia News] Crowdfunding Success for "Posemaniacs," Used Even by the Author of "Attack on Titan": the Free Site That Lets You Freely Rotate Human Poses in the Browser Is Coming Back |
https://www.itmedia.co.jp/news/articles/2201/25/news113.html
|
2022-01-25 12:30:00 |
TECH |
Techable(テッカブル) |
Release Minimal, Update Twice a Week: The PdM's Philosophy Behind "Tocaly," a Scheduling Tool Anyone Can Use |
https://techable.jp/archives/171805
|
Tocaly |
2022-01-25 03:00:19 |
python |
New Posts Tagged "Python" - Qiita |
HHKB Programming Contest 2022 (ABC235), Problems A-D: Extremely Careful and Easy-to-Understand Explanations in Python, for Gray-to-Brown Coders #AtCoder |
https://qiita.com/sano192/items/84e199b2faeae293b02b
|
[Submission] The D problem is a BFS from 1 over the numbers below 10^6, recording the minimum number of operations to reach each one: either multiply the current number by a, or, if the current number is greater than 10 and not divisible by 10, move its last digit to the front. A cleaned-up version of the code that was garbled in the feed:

```python
from collections import deque

# Read input: a (multiplier) and N (target)
a, N = map(int, input().split())
num = [-1] * 10**6  # num[x] = minimum operations to reach x; -1 means unvisited
que = deque()
que.append(1)       # start from 1
num[1] = 0          # reaching 1 takes 0 operations
while 0 < len(que):
    now = que.popleft()  # current number
    k = num[now]         # operations used so far
    to = now * a         # operation 1: multiply by a
    if to < 10**6 and num[to] == -1:  # within range and not yet seen
        num[to] = k + 1
        que.append(to)
    if 10 < now and now % 10 != 0:    # operation 2: move last digit to front
        now_str = str(now)
        to = int(now_str[-1] + now_str[:-1])
        if num[to] == -1:
            num[to] = k + 1
            que.append(to)
print(num[N])  # -1 if N is unreachable
```

[Ad] "Selected Problems, Explained in Detail, for Ordinary People to Reach 'Green' on AtCoder": a book that explains the typical problems needed to reach green in painstaking detail, sold as Kindle and PDF (BOOTH). |
2022-01-25 12:07:07 |
python |
New Posts Tagged "Python" - Qiita |
ABC234, Problems A-E: Extremely Careful and Easy-to-Understand Explanations in Python, for Gray-to-Brown Coders #AtCoder |
https://qiita.com/sano192/items/a047672868e16355ba12
|
[Submission] For D, keep a min-heap holding the K largest values seen so far; its minimum is the K-th largest. A cleaned-up version of the code that was garbled in the feed:

```python
import heapq

N, K = map(int, input().split())
P = list(map(int, input().split()))
que = []
for i in range(K):   # put the first K elements into the list
    que.append(P[i])
print(min(que))      # answer for the first K elements
heapq.heapify(que)   # turn que into a heap
for i in range(K, N):
    x = heapq.heappop(que)             # (1) pop the minimum of que
    heapq.heappush(que, max(x, P[i]))  # (2) push back the larger of x and P[i]
    print(que[0])                      # (3) the minimum of que is the answer
```

E (Arithmetic Number): an arithmetic sequence is determined once three things are fixed: the first term, the common difference, and the number of terms. |
2022-01-25 12:06:55 |
Ruby |
New Posts Tagged "Ruby" - Qiita |
How to Implement a Calendar Feature in Rails |
https://qiita.com/yuhi_taka/items/421cdfca45ae634a7518
|
With this, we have an implementation that shows the retention rate. To sum up: this post covered how to add a calendar feature using simple_calendar. |
2022-01-25 12:56:55 |
Ruby |
New Posts Tagged "Rails" - Qiita |
How to Implement a Calendar Feature in Rails |
https://qiita.com/yuhi_taka/items/421cdfca45ae634a7518
|
With this, we have an implementation that shows the retention rate. To sum up: this post covered how to add a calendar feature using simple_calendar. |
2022-01-25 12:56:55 |
Tech Blog |
Developers.IO |
Drawing the Regions Read by the Invoice OCR API of CLOVA OCR |
https://dev.classmethod.jp/articles/rectangle-clova-invoice-ocr-api/
|
CLOVA OCR |
2022-01-25 03:21:22 |
Overseas TECH |
DEV Community |
In-depth of tnpm rapid mode - how we managed to be 10s faster than pnpm |
https://dev.to/atian25/in-depth-of-tnpm-rapid-mode-how-could-we-fast-10s-than-pnpm-3bpp
|
In-depth of tnpm rapid mode - how we managed to be 10s faster than pnpm

Background

As a front-end veteran, I have to point out that the increasing complexity of front-end projects nowadays makes dependency installation slower and slower. At Alibaba and Ant Group, Engineering Productivity is an important metric for engineers, and the speed of installation of front-end dependencies is a big negative impact factor. We are the team responsible for front-end infrastructure in Ant Group. We mainly focus on building the Node.js community within the company and maintaining many open-source projects like eggjs and cnpm. We started an initiative in which one of the goals was to optimize the installation speed of dependencies. We managed to speed up the dependency installation by … times. In this article, we'd like to share with you the ideas and results of "tnpm rapid mode".

Thanks to sodatea, nonamesheep, Sikang Bian, RichSFO, and geekdada for the translation of this article; the original article was written by atian25 and published on Zhihu.

Why is npm soooo slow?

In the modern front-end ecosystem, the number of total modules has been exploding, and the dependency graphs are becoming increasingly complex:

- There are a galaxy of modules in the ecosystem. With over a million npm packages in total, npm has several times as many modules as other languages.
- Module relationships are becoming exceedingly complex. Duplicate dependencies and lots of small files are wasting disk space and slowing down disk writes.

The front-end module system prefers small and well-crafted modules. While this brought unprecedented prosperity to the community, it also resulted in complex dependencies, which directly led to slower installation. There are trade-offs to be made. Whether the ecological status quo is correct or not is way beyond the scope of our discussion today, so let's focus on how to improve installation speed for the time being.

The dependencies installation process for an application is briefly illustrated as above, with the key operations including:

1. Query the package information of the child dependencies, and then get the download address.
2. Download the tgz package locally, unzip it, then install it.
3. Create the node_modules directory and write the downloaded files under it.

Dependencies Installation

Let's take vuepress as an example. It has about … distinct dependencies, taking up … MB of disk space with … files. But if we install the dependencies in a nested way following npm's original implementation, we'll end up installing as many as … dependency packages: there are more than … redundant dependencies, and the actual disk footprint is … MB with … files. File I/O operations are very costly, especially for reading and writing large numbers of small files.

npm first came up with an optimization idea to solve the problem of duplicated dependencies and unnecessarily deep hierarchies: the flattening-dependencies capability, where all child dependencies are slapped flat under node_modules in the root directory. However, this optimization ended up introducing new problems:

- phantom dependencies;
- npm doppelgangers - it might still result in several copies of the same package (e.g. there are still duplicate packages in the abovementioned example);
- a non-deterministic dependency structure (though this is solvable via the dependencies graph);
- the performance penalty from a complex flattening algorithm.

Given so many side effects of flattening dependencies, pnpm proposed an alternative solution, by means of symbolic + hard links. This approach works great because:

- It reduces package duplication while staying compatible with the resolution algorithm of Node.js. The method does not introduce side effects like phantom dependencies, doppelgangers, etc.
- The hard-linking approach with global caching reduces file duplication and saves disk footprint.

The resulting data speaks for itself: … modules, … files, … directories, … symlinks, … MB disk footprint.

Similarly inspired by pnpm, we've refactored and implemented cnpm/npminstall in cnpm to utilize symlinks. But it didn't make use of hard links, neither did it hoist transitive dependencies.

However, it is worth noting that there are some potential issues with this approach:

- We observed symbolic linking cause indexing problems, with dead loops, in some IDEs (like WebStorm and VSCode) several years ago. This issue, which might not be fully resolved, should have been mitigated by IDE optimizations nowadays.
- Compatibility: relative paths need to be adapted for plug-in loading logic like that of EggJS and Webpack, as they may not follow the standard Node.js resolving strategy, which looks modules up in the directory structure all the way to the root of the disk.
- The dependencies of different applications are hard-linked to the same file, so modifying a file while debugging may inadvertently affect other projects.
- Hard links cannot be used across file systems, and the implementation of symlinks varies among operating systems. Moreover, there is still some performance loss due to disk I/O on non-SSD hard disks.

In addition, yarn also proposed other optimizations such as Plug'n'Play. Since it is too radical to be compatible with the existing Node.js ecosystem, we will not discuss those optimizations further here.

Metadata Requests

Let's take a look at the dependencies installation process:

- Each dependency needs one metadata query and one tgz download, resulting in a total of 2 HTTP requests per dependency.
- If there are different versions of the same package, the metadata is queried only once, and then the tgz for each version is downloaded separately.

Since the number of dependencies is typically very large, the total number of HTTP requests is subsequently magnified, resulting in a significant increase in time consumption. In the above example, npm will make more than … HTTP requests.

A common optimization strategy is to calculate the dependencies graph in advance, so that package managers can download the tgz files directly, without querying the package metadata. As a result, many of the network requests can be avoided.

npm was the first to come up with the idea of shrinkwrap.
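The saving from a precomputed dependencies graph can be sketched with a toy request count. The package names and version sets below are hypothetical; the point is only that the graph removes the per-package metadata round-trips and leaves just the tgz downloads.

```python
# Sketch: why a precomputed dependencies graph cuts HTTP requests.
# Without a graph, each package name needs a metadata query plus one tgz
# download per resolved version; with a graph (e.g. from a lockfile or a
# server-side resolver), only the tgz downloads remain.

def request_counts(graph):
    """graph maps package name -> set of resolved versions (hypothetical data)."""
    metadata = len(graph)                      # one metadata query per package name
    tgz = sum(len(v) for v in graph.values())  # one tgz download per version
    return {"without_graph": metadata + tgz, "with_graph": tgz}

graph = {
    "left-pad": {"1.3.0"},
    "lodash": {"4.17.21", "3.10.1"},  # two versions -> two tgz downloads
}
print(request_counts(graph))  # {'without_graph': 5, 'with_graph': 3}
```

On a real install with thousands of packages, the same arithmetic is what roughly halves the request count.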
It was soon superseded by the idea of the lockfile from yarn (there are similar concepts in pnpm, but in different formats). Although the lockfile was meant to lock the dependency versions, people discovered that the lockfile could also be used as a dependencies graph to speed up installations. However, there are unsolved problems:

- The first installation will not speed up unless the lockfile was pre-stored in source code management.
- Locking versions would, in practice, lead to some governance problems in large-scale projects.

A Brief Summary

To summarize: to speed up the installation process, we need to think about how to get the dependencies graph faster (parsing strategy), how to make tgz downloads faster (network I/O), and how to make writes to disk faster and deal with the duplicated dependencies (file I/O).

The community was able to reach some common ground:

- Utilization of the dependencies graph leads to more efficient concurrent downloads, because the requests are better scheduled.
- A simplified node_modules directory leads to less time in file I/O operations, because of fewer duplicate dependencies.
- Global caching can reduce the number of download requests.

Still-existing problems:

- A lockfile increases maintenance costs. Neither locking nor unlocking versions is a silver bullet.
- Flat dependencies and symbolic links (symlinks in short) have their own compatibility issues.
- There is no consensus on the best implementation of global caching. The "uncompressed copy" approach would generate a lot of file I/O, and the hard-linking approach would cause potential conflict issues. So there are trade-offs to be made.

What are tnpm and cnpm?

As shown in the above diagram, briefly speaking:

- cnpm is our open-source implementation of npm, which supports mirror synchronization with the official npm registry and private-package capabilities.
- npmmirror is a community-deployed project based on cnpm, which provides mirroring services for Chinese front-end developers.
- tnpm is our enterprise service for Alibaba and Ant Group, which is also based on cnpm, with additional enterprise-level customization.

Optimization Results

Test Scenario

"If you can't measure it, you can't improve it." - Peter Drucker

PS: We are probably the first company in the industry to re-install Mac mini M1 machines with a Linux OS to form a front-end build cluster. This reinstallation itself doubled our overall build speed, on top of all the other optimizations.

Test Results

We will not interpret the results for now. You'll get a more in-depth feeling and understanding after we systematically discuss the optimization ideas of tnpm rapid mode.

The Supporting Data

Recall the data we gave earlier, at the beginning of our analysis, about the reasons behind the overall slowdown. The complete datasets are shown below. We collected the relevant data without lock or cache, via strace and Charles, and also counted the corresponding file counts and sizes. Here is the brief interpretation:

- Number of files: the numbers for flat dependencies and for symbolic + hard links are basically the same. They both reduce the disk footprint significantly.
- Disk I/O: an important indicator; the number of file writes is directly related to the installation speed.
- Network speed: reflects whether the installation process can run at as full a bandwidth as possible; the greater, the better.
- Number of requests: includes the number of tgz downloads and the number of package-information queries. The number can be approximated as the number of overall modules.

From the data, we can see that tnpm is more optimized for both disk I/O and network I/O.

How were the optimizations achieved?

Network I/O

We have only one goal in optimizing the network I/O: how do we maximize network utilization?

The first optimization comes from the dependencies graph. The common practice is using the dependencies graph to avoid requesting every package's metadata on the client side, thus significantly reducing the number of HTTP requests. What's special in our approach is that we generate the dependencies graph on the server side, with a multi-level caching strategy. It's based on @npmcli/arborist, so it's npm-compatible.

Our experience and philosophy in our enterprise-scale projects is that we do not advocate locking versions locally, but only reuse the dependencies graph from the previous phase in iteration workflows, such as from development environments to test environments, or in emergency iterations. (Locking versions vs. not locking versions is a common topic for debate; there is no common consensus. Finding the respective balance based on the enterprise team's situation is generally recommended. We will not discuss it here.)

The second optimization is HTTP request warm-ups. The tgz download process will first visit the registry and then be redirected to the OSS (Alibaba Cloud Object Storage Service) download address. We can improve concurrency by warming up in advance, and thus reduce the overall HTTP time consumption. It is worth mentioning that we encountered an issue of intermittent second-long DNS delays. There's no such redirection in the official npm registry. We separated the download traffic from the registry by redirecting it to CDN-cached OSS addresses, which improved the stability of the registry service.

The third optimization is to combine the files. We found during testing that we could not utilize full bandwidth. Through analysis, we found that with a huge number of dependency packages, frequently writing small files often leads to file I/O bottlenecks. Simply extracting tgz files to tar files made it easy to properly merge files when writing to disk, given that tar is an archive file format. Repeated testing showed that combining the tgz files into tarball files is ideal.

The fourth optimization is to use Rust to reimplement the download and decompression process. Forty concurrent threads were used to download, decompress, and merge the original packages into tarball files, all in a streaming manner (the value comes from repeated testing). Rust was used to implement this feature as an experiment. It showed some potential in decompressing files, but not enough to let us believe it's a silver bullet for solving every performance issue. We used neon to bridge the gap between Rust and Node.js, and planned to rewrite it to napi modules with napi-rs.

FUSE Technology

We believe the original nested-directory approach is better than the flattened node_modules one, but we don't want the compatibility issues caused by symlinks. How can we hit two birds with one stone?

First, let's introduce a "black technology": FUSE (FileSystem in Userspace). Sounds abstract? Let's think of an analogy that front-end developers are familiar with: using ServiceWorker to refine and customize HTTP Cache-Control logic. Similarly, we can think of FUSE as the file-system counterpart of ServiceWorker from the perspective of front-end developers: we can take over a directory's file-system operation logic via FUSE.

As shown above, we implemented npmfs as a FUSE daemon on top of nydus; it mounts one directory per project. When the OS needs to read the files in that directory, our daemon process takes care of that: it looks up the dependencies graph to retrieve the corresponding file contents from the global cache. In this way, we were able to achieve the following:

- All system calls for files and directories treat this directory as a real directory.
- Files are independent of each other. Modifications made in one file do not result in changes in other projects (unlike the hard-links approach).

nydus doesn't support macOS at the moment, so we implemented an adapter from nydus to macfuse. We'll open-source it when it's ready.

(Trivia: Nydus is a Zerg structure in StarCraft, used to move units quickly around the map.)

OverlayFS

We may need to temporarily modify the code inside node_modules during day-to-day development for debugging. Editing files within a module can inadvertently lead to changes in another module, because of how symbolic- and hard-linking solutions work. FUSE supports custom write operations, but the implementation is more verbose, so we directly use a union mount filesystem: OverlayFS.

- OverlayFS can aggregate multiple different mount points into a single directory.
- A common scenario is to overlay a read-write layer on top of a read-only layer to enable the read-write layer.
- This is how Docker images are implemented, where the layers in an image can be reused in different containers without affecting each other.

So we further implement:

- Using the FUSE directory as the lower dir of OverlayFS, we construct a read-write filesystem and mount it as the node_modules directory of the application.
- Using its COW (copy-on-write) feature, we can reuse the underlying files to save space and support independent file modifications, isolate different applications to avoid interference, and independently reuse one copy of the global cache.

File I/O

Next, let's talk about the global cache. There are two main options in the industry:

- npm: unpack tgz into tar as a global cache, and unpack it into node_modules when installing dependencies again.
- pnpm: unpack tgz into files and cache them globally by hash, so that different versions of the same package can share the same file, and hard-link it directly when installing again.

What they both have in common is that at some point the tgz files get decompressed to standalone files and written to the disk. As we mentioned above, the huge number of small files generated by decompression can cause a huge amount of I/O operations. One day, it occurred to us that maybe we could just skip decompressing. So we went one step further: the node_modules are directly mapped to tar archives via FUSE + the dependencies graph, eliminating the file I/O operations that happened in decompression. At the same time, the highly controllable nature of FUSE allows us to easily support both nested directories and flat structures, switching between them on demand.

Even better: how can we further improve the performance of cloud-storage access in the future, so that we don't even have to download the tgz?

Some other attempts: we tried to use stargz and lz4 instead of tar and gzip, but the benefits were not significant. stargz has more indexing capabilities than tar, but in fact a separate dependencies graph serves a similar purpose, and there is no need to package them together. lz4 has a huge performance gain over gzip, but we found that the ROI is not high in our current practice.

Extra Costs

No solution can be perfect, and there are some extra costs to ours.

The first one is the cost of FUSE:

- We need to be aware of cross-system compatibility issues. Although there are support libraries for every operating system, it takes time to test their compatibility.
- We need to support privileged containers for scenarios used within the enterprise.
- Community scenarios like CI/CD rely on whether GitHub Actions and Travis support FUSE.

The second one is the maintenance burden of the registry server:

- The capability to generate the dependencies-graph analysis can only be turned on in the private enterprise registry, due to server-side resource constraints.
- Public mirror services will fall back to the CLI side to generate the dependencies graph.

PS: The community's solutions, including ours, cannot solve the problem of multiple require caches for the same dependency. Maybe it can be solved by the ESM loader, but that is beyond our discussion today.

Summary: Key Ideas

In conclusion, the core advantages of our solution are:

- Network I/O: skipping the metadata requests by using a server-generated dependencies graph, which saves (number of packages) × (metadata request duration); plus the performance gain from using Rust and the increased concurrency from download-process optimization.
- File I/O: reducing disk writes by storing the combined tar files, which saves (number of packages) × (disk operation duration); and reducing disk writes by not unpacking files but using FUSE mounting instead in the projects, which saves (number of files + number of directories + number of symlinks and hard links) × (disk operation duration).
- Compatibility: a standard Node.js directory structure - no symlinks, and no issues caused by flattening node_modules.

One primary reason this is possible is that tnpm is not only a local command-line interface but also a remote registry service, which allows deeper optimization compared to other package managers. The difference between "black magic" and "black technology" is that the former is a pile of "this is fine" dirty hacks to achieve the goal, while the latter is a cross-disciplinary juggernaut that solves challenges once and for all.

Data Interpretation

From the above analysis, one might already fully understand the optimization idea of tnpm rapid mode. Now let's go back and interpret the data of the previous test results.

Note: tnpm rapid mode is still under small-scale testing, and improvement is expected in future iterations, so the test data is for reference only. Also, yarn in the table is slower than npm. We don't know why for now, but we've tested it many times with the pnpm benchmark, and the same results kept showing up.

Here are the brief interpretations.

The time taken to generate the dependencies graph: the difference between test … and test … is the time taken by the corresponding package manager. pnpm analyzes the graph via client-side HTTP requests, which takes about … seconds or so (querying package information and downloading are parallel). tnpm analyzes the graph by server-side calculation, which currently takes … seconds (when hitting the remote cache, this should cost less than … second). The speed is about the same now, but since tnpm has less network latency than pnpm, we still need to optimize this in the future. In the enterprise scenario, the dependency modules are relatively convergent, so most of the time the first test of tnpm should take … seconds in case of hitting the cache (the dependencies-graph generation of tnpm has a caching mechanism).

File I/O overhead: test … is closer to CI/CD scenarios, which have a dependencies graph and no global cache. The primary time consumption observed was tgz download time + file I/O time. As the tgz download time was alike, the time gap was mainly from file I/O. What we concluded from the data is that tnpm is … seconds faster than pnpm: FUSE helped save the decompress-and-write time as well as the tar-merge time.

Local development: both the dependencies graph and the global cache are made available for local development. This corresponds to test … (dependency is not new, second development), test … (second development, reinstallation of dependencies), and test … (first development of a new application). In principle, time used = dependencies-graph update + writing to node_modules + a few package downloads and updates. Since tnpm is still under development, we couldn't test it this time, but from the above formula analysis, tnpm has an I/O advantage over pnpm.

To summarize: the speed advantage of tnpm over pnpm is … seconds for the dependencies graph + … seconds for the decompression-free FUSE approach.

Future Planning

Front-end package management has been developing for nearly a decade. npm was once the trailblazer that kept innovating and advancing this area. However, the advancement somewhat stagnated after npm won against all the other alternatives, like bower. Soon after, Yarn became the challenger and rejuvenated the overall competition, pushing further innovation on npm. pnpm rose to the new challenge and led the innovation again.

We believe that for front-end dependency optimization and governance, there is still a long way to go. We hope to continue strengthening cooperation with our domestic and international colleagues to keep pushing the advancement of package managers together. Therefore, our subsequent plan is to give the experience we gathered from enterprise-level private deployment and governance back to the community as much as we can:

- Currently, cnpm/npmcore is under refactoring to better support private deployments. (We sincerely welcome contributions from the open-source community to further expedite this effort.)
- After tnpm rapid mode is refined, we will open-source the corresponding capabilities, as well as the npmfs suite. Unfortunately, there's currently no way for the community to experience it.

In the meantime, it would be highly beneficial for the community if we could work together to standardize front-end package management:

- We need a standard like ECMAScript to regulate the behavior of each package manager.
- We need a conformance test suite like Test262.
- We should accelerate the transition from CommonJS to ES modules.
- We should find a way to fully resolve the chaotic situation resulting from the deltas among the different dependency scenarios of front-end and Node.js.

About me

I'm TZ (atian25), and I currently work for Ant Group. I am in charge of building and optimizing our front-end infrastructure. I love open source and am the main maintainer of eggjs and cnpm. Node.js is an indispensable infrastructure in the field of front-end. Maybe the future changes of front-end will make all existing engineering problems irrelevant; nonetheless, no matter what happens, I just hope that I can seriously record what I see and think in this field. I'd like to exchange ideas with colleagues who are experiencing the evolution of the current front-end industrialization and are equally troubled by it.

In the enterprise application scenario, optimization of front-end build execution speed is a system-engineering challenge. Dependency resolution and installation is only one of the many challenges we are facing. The opportunities are abundant. We are continuously looking for talented engineers to join us and keep pushing the innovation forward. We look forward to hearing from you. |
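As a rough illustration of the tar-mapping idea described above (serving node_modules reads straight from merged tar archives instead of unpacking thousands of small files), here is a minimal Python sketch. A real npmfs hooks a lookup like this into a FUSE daemon; the file names and contents below are hypothetical.

```python
# Sketch: answer a file read directly out of a tar archive, without ever
# extracting its members to disk. This is the core lookup a FUSE daemon
# would perform when the OS reads a path under the mounted node_modules.
import io
import tarfile

def make_package_tar(files):
    """Build an in-memory tarball, standing in for a merged package archive."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    buf.seek(0)
    return buf

def read_from_tar(tar_buf, member):
    """What the daemon would do on read(): locate the member, return its bytes."""
    tar_buf.seek(0)
    with tarfile.open(fileobj=tar_buf, mode="r") as tar:
        return tar.extractfile(member).read()

tar_buf = make_package_tar({"package/index.js": b"module.exports = 42;\n"})
print(read_from_tar(tar_buf, "package/index.js"))  # b'module.exports = 42;\n'
```

The install step then only has to write one archive per package instead of many small files, which is where the file I/O saving in the article comes from.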
2022-01-25 03:37:16 |
Overseas Science |
NYT > Science |
How Anxiety Can Benefit Us |
https://www.nytimes.com/2022/01/19/well/mind/anxiety-benefits.html
|
alarm |
2022-01-25 03:52:12 |
Finance |
JPX Market News |
[TSE] On Quote Handling Before the Initial Price Is Determined on the Listing Date: ジェイレックス・コーポレーション (株) |
https://www.jpx.co.jp/news/1031/20220125-01.html
|
new listing |
2022-01-25 13:00:00 |
News |
BBC News - Home |
American Keys races into Australian Open semi-finals |
https://www.bbc.co.uk/sport/tennis/60122011?at_medium=RSS&at_campaign=KARANGA
|
australian |
2022-01-25 03:40:28 |
Hokkaido |
Hokkaido Shimbun |
Tokyo Stock Exchange: Morning Session Closes at 27,027 Yen, a Pullback; Down as Much as 500 Yen at One Point |
https://www.hokkaido-np.co.jp/article/637476/
|
Nikkei Stock Average |
2022-01-25 12:18:00 |
Hokkaido |
Hokkaido Shimbun |
Biden Orders 8,500 U.S. Troops to Prepare to Deploy in Response to Ukraine |
https://www.hokkaido-np.co.jp/article/637399/
|
U.S. President |
2022-01-25 12:03:37 |
Hokkaido |
Hokkaido Shimbun |
Cold Snap Across Hokkaido: Minus 24.9°C in Shimukappu; Minus 23.9°C in Shimokawa and Rikubetsu |
https://www.hokkaido-np.co.jp/article/637438/
|
Rikubetsu |
2022-01-25 12:13:29 |
IT |
Weekly ASCII |
UGREEN Releases the "CD244," a 65W Charger with Two USB Type-C Ports and One USB Type-A Port |
https://weekly.ascii.jp/elem/000/004/081/4081316/
|
Amazon.co.jp |
2022-01-25 12:40:00 |
IT |
Weekly ASCII |
Hama Sushi Holds a Three-Week "Tasty Toppings 100 Yen Festival"! Popular Chutoro, Eel, and Wild-Caught Red Shrimp at Special Prices on Weekly Rotation |
https://weekly.ascii.jp/elem/000/004/081/4081315/
|
special price |
2022-01-25 12:30:00 |
Marketing |
AdverTimes |
Genetec Turns to TV Commercials to Strengthen Sales and Recruiting as Its Disaster-Prevention App Does Well |
https://www.advertimes.com/20220125/article374875/
|
disaster prevention |
2022-01-25 03:55:30 |