Posted: 2023-05-23 20:34:22 RSS feed digest for 2023-05-23 20:00 (33 items)

Category / Site / Article title, trending topic / Link URL / Frequent words, summary, search volume / Date registered
IT バズ部 What is MEO? How to do MEO (map engine optimisation) yourself and boost customer traffic from Google Maps https://lucy.ne.jp/bazubu/what-is-meo-47433.html inquiries 2023-05-23 10:51:19
TECH Techable(テッカブル) Visualising learner motivation and comprehension: the learning management system "Revot" https://techable.jp/archives/208217 japan 2023-05-23 10:00:23
AWS New posts tagged "lambda" - Qiita Running Selenium on AWS Lambda: "Unable to import module 'lambda_function': urllib3 v2.0 only supports OpenSSL 1.1.1+, … https://qiita.com/wonderland90th/items/a54fa021882ec3c080e3 The "Unable to import module 'lambda_function': urllib3 v2.0 only supports OpenSSL ..." problem when running Selenium on AWS Lambda. Following the DevelopersIO article "Building an environment to run Selenium with Python on AWS Lambda" and executing its sample code produced the error above. 2023-05-23 19:41:39
python New posts tagged "Python" - Qiita [PythonAnywhere / Django] At deploy time, modules installed with pip3 install in the virtual environment somehow appear to be missing and the deploy fails https://qiita.com/natua_tcmm/items/7ea6819472a6b6334ff6 a console in this virtual 2023-05-23 19:24:31
python New posts tagged "Python" - Qiita [PythonAnywhere] On web access from inside a web app (scraping etc.) coming back with 403 https://qiita.com/natua_tcmm/items/9e7b1e156dafe1e55b9e pythonanywhere 2023-05-23 19:16:27
python New posts tagged "Python" - Qiita Ren'Py smooth transform gets stuck while still semi-transparent https://qiita.com/PenguinCabinet/items/ad09a81bc289360a26a0 renpy 2023-05-23 19:16:06
AWS New posts tagged "AWS" - Qiita Getting started with Amazon CodeWhisperer https://qiita.com/nanzhihutailang/items/c7385c2a34e51eae4466 amazoncodewhisperer 2023-05-23 19:47:49
AWS New posts tagged "AWS" - Qiita Running Selenium on AWS Lambda: "Unable to import module 'lambda_function': urllib3 v2.0 only supports OpenSSL 1.1.1+, … https://qiita.com/wonderland90th/items/a54fa021882ec3c080e3 The "Unable to import module 'lambda_function': urllib3 v2.0 only supports OpenSSL ..." problem when running Selenium on AWS Lambda. Following the DevelopersIO article "Building an environment to run Selenium with Python on AWS Lambda" and executing its sample code produced the error above. 2023-05-23 19:41:39
GCP New posts tagged "gcp" - Qiita Trying out the OSD on GCP trial https://qiita.com/Yuhkih/items/1b7eb05920404332e0fc ccs customer cloud subscrip 2023-05-23 19:27:08
Git New posts tagged "Git" - Qiita A list of commands available from Git Bash on Windows https://qiita.com/tondemonai7/items/6a439661c21bfc7b2d0b gitbash 2023-05-23 19:59:44
Git New posts tagged "Git" - Qiita git config username https://qiita.com/aizwellenstan/items/00fdcc4ab1c993df8c19 aizwellenstan 2023-05-23 19:34:40
Tech blog Developers.IO I have joined Classmethod Thailand. I'm Ben. https://dev.classmethod.jp/articles/joined-class-method-thailand-im-benj/ iamwhoiam 2023-05-23 10:29:07
Tech blog Developers.IO AWS ParallelCluster 3.6.0 now supports RHEL 8 https://dev.classmethod.jp/articles/aws-parallelcluster-v360-released/ awsparallelcluster 2023-05-23 10:02:08
Overseas TECH MakeUseOf 6 Ways to Maximize the Battery Life on Your iPhone 14 Pro https://www.makeuseof.com/ways-to-save-battery-life-iphone-14-pro/ battery 2023-05-23 10:16:17
Overseas TECH DEV Community How I converted a podcast into a knowledge base using Orama search and OpenAI whisper and Astro https://dev.to/brainrepo/how-i-converted-a-podcast-into-a-knowledge-base-using-orama-search-and-openai-whisper-2aca How I converted a podcast into a knowledge base using Orama search, OpenAI Whisper and Astro. As many of you know, I have been hosting a podcast for the last three and a half years. It is one of the most exciting experiences of my life, and I have produced many hours of audio content in all these years. The audio content has a drawback: if you want to search for something that was said, you often have to listen to hours and hours of old episodes without ever reaching the point you searched for. How to solve this problem?

Transcript. The first step is to transcribe the episodes. Since the beginning I had implemented a simple pipeline to transcribe each episode, and it turned out to be a total failure. The transcriptions were based on AWS Transcribe, and the service couldn't transcribe Italian audio correctly, perhaps due to the presence of technical English words or the remarkable Sardinian accent. The outcome was terrible: it was impossible to read and understand the meaning, and the transcriptions were not usable for my primary purpose. Aside from that, it also had a cost: each transcription had a small cost, and despite the low price it was an absolute waste of money. After one year I stopped using the lambdas and decided not to transcribe the episodes anymore. But the wise man retraces his steps, and who am I not to review my decisions? Reviewing a decision is an outstanding practice, and doing it when the context changes can help us stay in tune with the world around us and catch opportunities.

Since OpenAI started releasing its products, our industry has been swept into a whirlwind of astonishment: ChatGPT, Copilot and DALL·E were perceived as masterpieces, but another service caught my attention. Whisper, as its name suggests, arrived without making much noise. While all the attention was on ChatGPT, Whisper was exceptionally interesting. Its quality compared to other transcription services is remarkable: it performs excellently in Italian, accurately recognising English words and technical jargon. I have never seen such precision before. Moreover, there is another non-trivial aspect: it is open source and released under the MIT license. After conducting a test, I quickly embraced Whisper to transcribe the episodes.

At first I was tempted to set up a machine on AWS to run the entire process in the cloud. However, Whisper requires a massive amount of resources and time, so ultimately I chose to run it on my gaming machine, which had been dormant for a while. My desktop computer, equipped with a GTX card, was the perfect candidate to put it to the test, especially after I stopped playing video games. Whisper offers different pre-trained models, ranging from small to large; the largest one, which provides the best quality but is also the slowest, can use the mathematical capabilities of the GPU. Since I am not a Python developer and PyTorch is unfamiliar to me, implementing the transcription script from scratch was nearly impossible. Thankfully, a simple Docker image came to my rescue: the container simplifies all the steps and directly provides a REST API. It is enough to navigate to the web port exposed by the container to reach the Swagger UI, select the file, select the language (in my case Italian) and wait. Each episode is one and a half hours long, so Whisper needs quite a bit of time per episode.
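The post drives the container through its Swagger UI and never shows a programmatic call, so the following TypeScript sketch is only an illustration of the same round trip. The port, the /asr path, its query parameters and the audio_file field name are assumptions about an unnamed Whisper web-service image, not details taken from the article.

```ts
import { readFile } from "node:fs/promises";

// Shape of one time-coded segment in Whisper's JSON output.
interface WhisperSegment {
  start: number; // seconds from the beginning of the episode
  end: number;
  text: string;
}

// Upload one episode to the (assumed) REST endpoint exposed by the container
// and return its time-coded segments.
async function transcribeEpisode(path: string): Promise<WhisperSegment[]> {
  const form = new FormData();
  // Field name, port and endpoint are assumptions, not taken from the post.
  form.append("audio_file", new Blob([await readFile(path)]), "episode.mp3");

  const res = await fetch("http://localhost:9000/asr?language=it&output=json", {
    method: "POST",
    body: form,
  });
  if (!res.ok) throw new Error(`Transcription failed: ${res.status}`);

  const body = (await res.json()) as { text: string; segments: WhisperSegment[] };
  return body.segments;
}
```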
In the end we receive a well-structured JSON file containing the transcription with time references. Cool, but now it's time to play with the code.

Make it searchable. With Whisper I have completed the first half of the problem; now it's time to discuss how to implement the search functionality. Whisper can export in multiple formats, including TXT, VTT, SRT, TSV and JSON. In my case I will be using the JSON format, which contains both the raw transcribed text and the time-coded text. The raw text is displayed on the episode page and is crucial for SEO purposes. The second part of the implementation revolves around the search functionality, which is one of the main pillars of this project. The search process is fairly straightforward: there is an input box where users can enter the words they wish to search for, and after submitting the query a list of audio samples that match the searched terms is displayed. The episode is played by clicking on the play button next to each text slice, starting from the moment the word is pronounced.

The Gitbar website has no backend: it is entirely static and built using Astro, an excellent framework. So how to manage the search feature? Should I install Elasticsearch? How much will it cost? Or should I consider using Algolia? These questions arose as I started implementing the feature. From the beginning, Elasticsearch was excluded as an option: managing an Elasticsearch instance is not trivial, as it requires a server or computational capabilities. Similarly, Algolia incurs additional costs, and since we rely on donations to cover Gitbar's expenses, we need to minimise expenditure. Therefore I needed an alternative solution. I have been following Michele's project Orama Search since its inception, and I believe he, Paolo, Angela and their partners in crime are doing incredible work with it. If JavaScript has democratised software development, I would say that Orama Search (also known as Lyra, for nostalgic folks like me) has done the same for the search experience. Initially JavaScript may seem limiting, but thanks to it we can run Orama Search everywhere, from the client to the server. It's truly amazing. Another appealing aspect of Orama is its immutable nature, which makes it a perfect fit for my use case: since the Gitbar website is statically generated, it is not an issue to build the search index during the page generation process and ship it as a simple JSON file. To accomplish that, I created an Astro plugin inspired by the official one. Now let's dive into the details of what I have implemented.

Creating an Astro plugin for Orama Search. Orama Search provides built-in support for Astro: it takes the generated files of the website and creates an index (or database) from the content within the HTML pages. However, my use case had specific requirements that differed from the common ones (you can refer to the Astro plugin documentation for Orama for more information). To meet my needs I had to index a particular data structure with the following fields:
- text: the transcribed fragment, usually consisting of a few words
- title: the episode title
- from: the timestamp indicating when the words are pronounced
- episodePath: the path of the episode page
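To make the target shape concrete, here is that record as a TypeScript interface. This is a sketch derived from the field list above; the path and mpurl names come from the code fragments later in the post, so treat them as reconstructions rather than confirmed identifiers.

```ts
// One searchable slice of an episode, as described in the post.
interface EpisodeSegmentDoc {
  text: string;  // transcribed fragment, usually a few words
  title: string; // episode title
  from: number;  // timestamp (seconds) at which the words are pronounced
  path: string;  // path of the episode page ("episodePath" in the prose)
  mpurl: string; // audio file URL taken from the feed enclosure
}
```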
Given this requirement, I had to create a plugin from scratch to support it. Astro provides a plugin API that allows us to extend its capabilities. It's important to note that the plugin API is relatively low level: while it grants access to many internal details, it also requires caution when making changes, to avoid unintended consequences. Now let's go through the steps involved in creating the plugin.

Initial setup. To start, create a new folder in the root directory of your project called plugins; this folder will hold all the plugins for the project. Each Astro plugin is a JavaScript or TypeScript file that exports a single function:

```js
export default () => ({
  name: 'GITBAR_ASTRO_SEARCH',
  hooks: {
    'astro:server:start': async () => {
      await initDB('dev')
    },
    'astro:build:done': async () => {
      await initDB('prod')
    },
  },
})
```

Astro's core functionality can be extended using hooks, which let us run custom JavaScript code at specific moments in Astro's lifecycle. In this case we want to extend two pivotal moments: the server starting phase and the build-done phase, when the website is fully built.
- astro:server:start: since Gitbar is a static website and will be served by Netlify's servers, we don't require a Node server to run it. However, during development we want the plugin to build the Orama database for us to use in the development process.
- astro:build:done: we use this hook to build the production database. When we release the website, along with the static pages we also release a JSON file that contains a serialized Orama database.

Data preparation and ingestion. To prepare the data for seeding the Orama database I followed a multi-step process. Here's a breakdown of the steps.

Fetch the episodes from the podcast feed using the Podverse podcast-feed-parser library, which let me retrieve the necessary episode data:

```js
const episodes = await getPodcastFeed(podcastFeed)
```

Iterate over the list of episodes and check whether there is a corresponding transcription file in the transcriptions folder, looking for a file named after the episode number:

```js
const episodes = await getPodcastFeed(podcastURL)

const episodeSegments = episodes.map(async (episode) => {
  const episodeNumber = extractEpisodeNumber(episode.title)
  try {
    // The numeric threshold was lost in the feed extraction.
    if (Number(episodeNumber) > FIRST_TRANSCRIBED_EPISODE) {
      const json = await import(`./transcriptions/${episodeNumber}.json`)
      const segments = json.segments.map((s) => ({
        title: episode.title,
        path: getSlug(episode),
        from: s.start,
        text: s.text,
        mpurl: episode.enclosure.url,
      }))
      return segments
    }
    return []
  } catch (e) {
    console.log(`Transcription ${episodeNumber} not found`)
    return []
  }
})

const results = await Promise.all(episodeSegments)
const segmentsToInsert = results.flat()
```

After obtaining the segments for each episode, I flattened them into a single array that contains all the segments of all the episodes. After that, I created the Orama database by calling the create function with the desired schema:

```js
const db = await create({
  schema: {
    title: 'string',
    text: 'string',
    from: 'number',
    path: 'string',
    mpurl: 'string',
  },
})
```

To insert the segments efficiently, I divided them into chunks and used the insertMultiple function for batch insertion:

```js
await insertMultiple(db, segmentsToInsert)
```

Finally, to complete the plugin, I serialised the database to a JSON file, which lets me share the database as a simple static asset. With these steps I can prepare and ingest the necessary data into the Orama search index, using the appropriate schema and chunking to optimise performance.
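The chunked insertion and the serialisation step are only described in prose, so here is a sketch of the work initDB ends up doing once the segments are collected. The chunk size, the public/in_episode_db.json output path and the save() call are assumptions on my part: save() is taken as the write-side counterpart of the load() call used in the component below, and newer Orama releases expose the same capability through persist/restore in @orama/plugin-data-persistence instead.

```ts
import { writeFile } from "node:fs/promises";
// "save" is an assumption; see the note above about @orama/plugin-data-persistence.
import { create, insertMultiple, save } from "@orama/orama";

type SegmentDoc = {
  title: string;
  text: string;
  from: number;
  path: string;
  mpurl: string;
};

// Build the index from the flattened segments and write it next to the static
// pages so the client can fetch it as a plain JSON file.
async function buildSearchIndex(
  segments: SegmentDoc[],
  outFile = "public/in_episode_db.json"
) {
  const db = await create({
    schema: {
      title: "string",
      text: "string",
      from: "number",
      path: "string",
      mpurl: "string",
    },
  });

  // Insert in chunks so one huge call doesn't block for too long.
  const CHUNK = 500; // assumed value; the original figure was lost in extraction
  for (let i = 0; i < segments.length; i += CHUNK) {
    await insertMultiple(db, segments.slice(i, i + CHUNK));
  }

  // Serialise the whole database; the frontend will fetch() this file and load() it.
  await writeFile(outFile, JSON.stringify(await save(db)));
}
```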
Search component. Now it is time to come out from the shadows and create the component for our search form. Since Astro supports React components, let's write our search component:

```jsx
const Search = () => {
  return (
    /* ... */
  )
}
```

I will skip some parts to keep the focus on the interesting ones. The first thing I want to do is fetch the Orama database that we built earlier from the network, during the mounting phase of the component. I use a useEffect hook where I initialise the Orama instance with the same schema used before, load the database file while tracking the loading state (to disable the search UI during loading), load the fetched data into the Orama instance, and update the DB state to make the Orama instance available to the component:

```jsx
import { search, create, load } from '@orama/orama'

const Search = () => {
  // State that holds the database
  const [DB, setDB] = useState(null)
  // State that holds the loading status
  const [isLoading, setIsLoading] = useState(true)

  useEffect(() => {
    const getData = async () => {
      const db = await create({
        schema: {
          title: 'string',
          text: 'string',
          from: 'number',
          path: 'string',
          mpurl: 'string',
        },
      })
      setIsLoading(true)
      const resp = await fetch('/in_episode_db.json')
      setIsLoading(false)
      const data = await resp.json()
      load(db, data)
      setDB(db)
    }
    getData()
  }, [])

  return (
    /* ... */
  )
}
```

Lastly, we need to create our search function, which we'll call whenever the search field input changes. To avoid creating a new find function on every render, we use the useCallback hook to cache it and update it when DB or setResults changes. The rest of the function calls Orama's search function with the search term, runs the search on all properties, and retrieves the first results:

```jsx
const find = useCallback(
  async (term: string) => {
    const res = (await search(DB, { term, properties: '*' })).hits.map(
      (e) => e.document
    )
    setResults(res)
  },
  [DB, setResults]
)
```

Now all that's left is to attach this function to the input field's change event, and we're done:

```jsx
<input
  className="bg-transparent w-full text-white outline-yellow outline placeholder:text-yellow"
  placeholder="Search"
  onChange={(e) => find(e.target.value)}
/>
```

The audio player features are beyond the scope of this article; let me know if you would like me to write an article on that. I intentionally left out some other implementation details; if you want a running code example, take a look at the source links for the database creation and for the frontend JSX code.

The limits. The search feature is currently in production on the Gitbar website, but if you check the network inspector you will see that the Orama database is already several megabytes (note that not all the episodes are indexed: the transcription process takes a lot of time, and so far I have transcribed and indexed only a range of the episodes). The size is not negligible, and I expect the database to grow considerably pretty soon. Bootstrapping a Node.js server to run the search seems the most reasonable way to solve the problem, but I don't want to do that, so I need a plan B.

The B plan. My plan B has two levels. The first solution is very straightforward: the JavaScript ecosystem offers some fantastic libraries that can assist with GZIP compression. It appears that the Netlify servers don't handle compression here, so I can leverage libraries such as pako, tiny-inflate, uzip.js or fflate to achieve this goal. I ran some tests, and the compressed database was reduced to a small fraction of its original size; implementing this solution requires only a few lines of code. By incorporating it I can easily handle many more episodes with sustainable download times, and considering how many episodes I have recorded in the past three and a half years, I can sleep soundly: I have ample time ahead.
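The post names the candidate libraries but shows no code for this step, so here is a minimal sketch of the idea using fflate, one of the libraries it mentions. The file names reuse the in_episode_db.json assumption from above and are not taken from the article.

```ts
import { readFile, writeFile } from "node:fs/promises";
import { gzipSync, gunzipSync, strFromU8, strToU8 } from "fflate";

// Build step: gzip the serialised Orama database before publishing it.
async function compressDb(src = "public/in_episode_db.json"): Promise<void> {
  const json = await readFile(src, "utf8");
  await writeFile(`${src}.gz`, gzipSync(strToU8(json)));
}

// Client side: fetch the gzipped index, inflate it and parse the JSON,
// ready to be handed to Orama's load().
async function fetchDb(url = "/in_episode_db.json.gz"): Promise<unknown> {
  const res = await fetch(url);
  const compressed = new Uint8Array(await res.arrayBuffer());
  return JSON.parse(strFromU8(gunzipSync(compressed)));
}
```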
Whenever I encounter an upper limit, I usually engage in a mental exercise to find a workaround. What if I group the episodes into chunks of a fixed number of elements and create an Orama database for each group? Furthermore, what if I begin my search with the most recent episodes and, once I have fetched and searched within the first group, proceed to the older databases until the result limit is reached? This approach would force the results to be sorted by date, which could even be considered a feature, and the sequential style of this search avoids the need to download all the databases in advance.

There is always a C plan. OK, these two B plans have stimulated my rambling: what if I incorporate the ChatGPT API to provide answers in natural language, using Orama for the time-stamped results? That way I could have a contextual conversation with the excerpt generated by ChatGPT and still rely on the accurate source of information and the timestamp references from the Orama search results. Alright, alright, it's time to come back to reality now; micheleriva, I'm blaming you for this mind-bending journey. Before wrapping up this article, I want to extend my heartfelt appreciation to Michele, Paolo, Angela and everyone involved in the hard work on Orama. Hey folks, keep up the great work: I'm a big fan. 2023-05-23 10:08:24
Overseas TECH DEV Community #Githubhack23 - Monoripify, a CI CD web app https://dev.to/delavalom/githubhack23-monoripify-a-ci-cd-web-app-5gk9 #Githubhack23: Monoripify, a CI/CD web app.

What I built: a web app where you can come and see insights about your repo's build process, and also deploy your repo to Railway with just one click.

Category submission: DIY Deployments. App link: Monoripify.

Description: Monoripify is designed to increase productivity within monorepos, since Prime Video took that route. With seamless integration and a user-friendly interface, Monoripify makes your development experience more efficient and insightful. Here's what you can expect from Monoripify: easy sign-in (simply sign in using your GitHub account); instant integration (install our GitHub app to create an insightful build process for your repo); in-depth insights (gain valuable insights along with an efficiency score that helps identify potential areas for improvement); one-click deployment (deploy your repo to Railway with just one click, and a token of course). Monoripify leverages the GitHub Actions VM to clone your repo, run the build process and share the logs for analysis; we use GPT AI technology to process and analyse these logs. It seems I'm building Vercel. Expect to find many bugs while using the app (it works better at localhost). Give it a try, and maybe join me, open sourced, in the future of code management.

Link to source code: repo link. Permissive license: Monoripify is released under the permissive MIT License, which allows developers to freely use, modify and distribute the code while providing appropriate attribution.

Background (what made you decide to build this particular app, and what inspired you): in my opinion, managing a monorepo and scaling the code can be quite challenging. While I appreciate the speed and efficiency monorepos offer, they can sometimes be overwhelming if not properly organised with a clear file structure and deployment plan. My vision is to create an open-source solution that combines your preferred deployment provider with a user-friendly interface and exceptional developer experience, ultimately achieving the perfect balance for monorepo management, or at least that's what I think.

How I built it (how did you utilize GitHub Actions or GitHub Codespaces, and did you learn something new along the way or pick up a new skill): first and foremost, I am not an architecture engineer. However, I am often motivated to start building a project in order to challenge myself and develop new skills. To achieve this, I began by delving deeply into the technologies I needed to use. The first technology I came across was YAML ("YAML ain't markup language"), which I have grown to love more than JSON due to its usefulness in configuring projects. Initially CI/CD was a daunting subject for me, but after getting my hands dirty I started to feel like a cloud developer and even considered changing my career path. The GitHub API is a vast and complex resource that I now see as a potential gold mine for every project. In addition, what I learned about the isolated VMs of GitHub Actions, the use of artifacts and running bash scripts was invaluable. Reflecting on my experience, if I had to take something back it would be my choice of stack: for future projects I think a serverful environment would be preferable for these needs, and I also realised that deploying the app on AWS, rather than on Vercel with its more restricted DX, would have been a better option for effectively managing edge cases and monitoring production logs.

Additional resources/info: GitHub REST API docs, GitHub Actions docs, GitHub App docs, Railway app docs, UI library, authentication library, serverless websocket service. 2023-05-23 10:03:18
Apple AppleInsider - Frontpage News How to use SFTP and rsync for file transfers in macOS https://appleinsider.com/inside/macos/tips/how-to-use-sftp-and-rsync-for-file-transfers-in-macos?utm_medium=rss How to use SFTP and rsync for file transfers in macOS. SFTP and rsync are two tools that can help you transfer files across networks and the web; here's how to use them within macOS. You can use SFTP and rsync on a Mac to handle file transfers. There are many occasions on which you need to transfer files between two computers: on a LAN in an office, across the web, or to a remote server. Many workers today use cloud services such as Google Drive or Dropbox for such transfers. Read more 2023-05-23 10:57:27
Overseas TECH Engadget Netflix makes it easier to find titles you've added to your list but haven't watched yet https://www.engadget.com/netflix-makes-it-easier-to-find-titles-youve-added-to-your-list-but-havent-watched-yet-104554461.html?src=rss Netflix makes it easier to find titles you've added to your list but haven't watched yet. Netflix's latest updates to its mobile app make it easier to search through bookmarked content in the "My List" feature, TechCrunch has reported. New filters let you sort titles by movie, series, release date, alphabetical order and date added. The most interesting are the "Started" and "Haven't Started" filters, though. A lot of folks bookmark content, start watching it and then don't finish for whatever reason. Now, if you're looking for something you added to "My List" but have yet to start watching, you can see all of it at once rather than needing to painfully look through everything. Netflix added the My List feature years ago, but until now there have been no filters: the only way to find things was to scroll through the list. As such, this will be a welcome improvement for those who habitually bookmark content. The new feature will come to Android devices first and hit iOS over "the next few weeks," Netflix said. Along with that update, Netflix is adding a "Coming Soon" row to its TV apps. The idea is to provide a preview of any upcoming content, and you can set a reminder for when upcoming shows are available. That will put future content front and center, as it was previously hidden in the "New & Popular" tab. This article originally appeared on Engadget. 2023-05-23 10:45:54
Overseas TECH Engadget Amazon has a big sale on Razer gaming accessories and peripherals https://www.engadget.com/amazon-has-a-big-sale-on-razer-gaming-accessories-and-peripherals-100043361.html?src=rss Amazon has a big sale on Razer gaming accessories and peripherals. A variety of Razer's gaming accessories and peripherals are currently discounted on Amazon, including its Blackwidow V TKL keyboard, a favorite of gamers at Engadget. Both the silent and the clicky models are marked down; both support millions of colors across the keys and are rated for millions of clicks. The Kraken X headset is also on sale, bringing the surround-sound headphones down in price. They feature a noise-canceling microphone along with volume and mute buttons right on the left earcup. Razer's Viper Ultralight mouse has one of the biggest discounts. The ambidextrous mouse has a high polling rate, meaning there's next to no input latency, and the woven wire means there are no concerns about battery life while still allowing for smooth movements. It also holds up to five stored profiles and utilizes the Focus optical sensor for features like motion sync. Rounding out the Razer gaming basics currently on sale is the Wolverine V Chroma controller for Xbox. As with the Blackwidow V TKL keyboard, gamers can customize it with millions of colors and light effects. The controller also offers four extra triggers and two remappable bumpers, and users can change the color effects and button controls through the Razer Control Setup for Xbox app. Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice. This article originally appeared on Engadget. 2023-05-23 10:00:43
Medical 医療介護 CBnews Support for children with disabilities and adults with disabilities to be "discussed as one package": the Children and Families Agency joins the members of the payment revision study team https://www.cbnews.jp/news/entry/20230523191811 Ministry of Health, Labour and Welfare 2023-05-23 20:00:00
Medical 医療介護 CBnews My Number insurance cards linked to other people's information: unions and other insurers nationwide asked to run checks; Health Minister Kato says he will "require the results to be reported by the end of July" https://www.cbnews.jp/news/entry/20230523194652 personal information 2023-05-23 19:45:00
Medical 医療介護 CBnews 1,763 people taken to hospital by ambulance for heatstroke, up 1,455 from the previous week: the Fire and Disaster Management Agency publishes preliminary figures for the week of the 15th to the 21st https://www.cbnews.jp/news/entry/20230523190609 emergency transport 2023-05-23 19:40:00
News BBC News - Home Ukraine war: Fleeing Belgorod residents told to stay away https://www.bbc.co.uk/news/world-europe-65683374?at_medium=RSS&at_campaign=KARANGA border 2023-05-23 10:23:23
News BBC News - Home IMF expects UK economy to avoid recession https://www.bbc.co.uk/news/business-65669399?at_medium=RSS&at_campaign=KARANGA global 2023-05-23 10:43:58
News BBC News - Home Christian Glass: Family of Colorado man shot by police get $19m settlement https://www.bbc.co.uk/news/world-us-canada-65682512?at_medium=RSS&at_campaign=KARANGA crisis 2023-05-23 10:10:43
News BBC News - Home Margaret Ferrier: Covid breach MP faces fresh calls to quit https://www.bbc.co.uk/news/uk-scotland-scotland-politics-65681240?at_medium=RSS&at_campaign=KARANGA ferrier 2023-05-23 10:11:26
News BBC News - Home England vs Ireland: Ollie Robinson given all-clear for Ireland Test https://www.bbc.co.uk/sport/cricket/65682420?at_medium=RSS&at_campaign=KARANGA ireland 2023-05-23 10:06:06
News BBC News - Home NBA play-offs: Nikola Jokic stars as Denver Nuggets complete series sweep over Los Angeles Lakers https://www.bbc.co.uk/sport/av/basketball/65683738?at_medium=RSS&at_campaign=KARANGA NBA play-offs: Nikola Jokic stars as Denver Nuggets complete series sweep over Los Angeles Lakers. Watch Nikola Jokic's best shots as he scores points in the Denver Nuggets' win over the Los Angeles Lakers to complete a series sweep in the NBA Western Conference final. 2023-05-23 10:15:57
News Newsweek Two reasons the F-16 will scare Russia, according to a former British air force commander https://www.newsweekjapan.jp/stories/world/2023/05/f-162.php 2023-05-23 19:21:07
Marketing MarkeZine Annual spending on music: the most common answer among buyers of artist merchandise is "10,000 to under 15,000 yen" (CCCMK Research Institute survey) http://markezine.jp/article/detail/42321 cccmk 2023-05-23 19:15:00
IT 週刊アスキー The official May live stream for 『信長の野望 覇道』 is set for May 30! https://weekly.ascii.jp/elem/000/004/137/4137836/ pc steam 2023-05-23 19:45:00
IT 週刊アスキー A free update bringing the latest player data to 『パワプロ2022』 arrives on May 25! https://weekly.ascii.jp/elem/000/004/137/4137834/ ebase 2023-05-23 19:15:00
IT 週刊アスキー Yamaha enters a technology partnership with Lumens in its professional solutions business https://weekly.ascii.jp/elem/000/004/137/4137819/ lumens 2023-05-23 19:45:00
