Posted: 2022-11-07 17:15:58 | RSS feed 2022-11-07 17:00 digest (18 items)

Category | Site | Article title / trend word | Link URL | Frequent words / summary / search volume | Date registered
IT | ITmedia general article list | [ITmedia PC USER] Sanwa: an adapter that adds a 3.5mm stereo mini jack to a Type-C port | https://www.itmedia.co.jp/pcuser/articles/2211/07/news142.html | itmediapcuser | 2022-11-07 16:01:00
python | New posts tagged Python - Qiita | Passing a date to Python as a parameter, part 3 | https://qiita.com/turupon/items/575e7592cde829c3642d | costmanagement | 2022-11-07 16:17:13
python | New posts tagged Python - Qiita | A summary of the hold-out method and cross-validation | https://qiita.com/TOBi0/items/b9666aa3abfde7c62931 | machine learning | 2022-11-07 16:13:00
js | New posts tagged JavaScript - Qiita | [JavaScript] Use BroadcastChannel rather than window.postMessage() between same-origin contexts | https://qiita.com/silane1001/items/b01777cf8ac9c5c842ac | broadcastchannel | 2022-11-07 16:28:26
AWS | New posts tagged AWS - Qiita | Aurora Serverless: I get that it's serverless, but what does that actually buy you? | https://qiita.com/shohei-harada/items/685fce0f7e0505a4e63f | amazonauroraserverlessrds | 2022-11-07 16:42:10
AWS | New posts tagged AWS - Qiita | A step-by-step guide to running Amazon Linux 2 on Proxmox VE 7 | https://qiita.com/hillxrem/items/1af594043c2ea3cb9b4c | amazonlinux | 2022-11-07 16:34:33
AWS | New posts tagged AWS - Qiita | How to deal with errors from amplify configure | https://qiita.com/village802/items/3928082478954cd197b7 | amplifyconfigure | 2022-11-07 16:27:46
Tech blogs | Developers.IO | Uploading an SSL/TLS server certificate to IAM instead of ACM, even though it's the Reiwa era | https://dev.classmethod.jp/articles/iam-server-certificate-reiwa-nanoni/ | https | 2022-11-07 07:53:03
Overseas TECH | DEV Community | Building a Web Scraper in Golang: Complete Tutorial | https://dev.to/oxylabs-io/building-a-web-scraper-in-golang-complete-tutorial-34if | 2022-11-07 07:20:45

Summary: Ever wondered how to build a web scraper in Golang? Check out this practical tutorial. Golang, or Go, is designed to leverage the static typing and run-time efficiency of C with the usability of Python and JavaScript, plus high-performance networking and multiprocessing. It is also compiled and excels at concurrency, making it quick. This article walks through writing a fast, efficient Golang web scraper that can extract public data from a target website.

Installing Go

To start, head over to the Go downloads page. It hosts all the common installers: a Windows MSI installer, a macOS package, and a Linux tarball. Go is open source, so if you wish to compile Go yourself you can download the source code as well. A package manager makes it easier to work with first-party and third-party libraries by helping you define and download project dependencies; it pins versions, letting you upgrade dependencies without fear of breaking the established setup.

On macOS you can use Homebrew. Open the terminal and enter:

    brew install go

On Windows you can use the Chocolatey package manager. Open the command prompt and enter:

    choco install golang

Installing Go on Linux takes five simple steps (the archive name depends on the Go version you download):

    # 1. Remove any previous Go installation
    rm -rf /usr/local/go
    # 2. Download the Go tarball from the downloads page, for example with wget
    wget https://go.dev/dl/go<version>.linux-amd64.tar.gz
    # 3. Extract the archive into /usr/local
    tar -C /usr/local -xzf go<version>.linux-amd64.tar.gz
    # 4. Add Go to PATH in $HOME/.profile (or /etc/profile for a system-wide install)
    export PATH=$PATH:/usr/local/go/bin
    # 5. Apply the change to the current shell
    source $HOME/.profile

Now the go version command should confirm that Go is installed. Once Go is installed, you can use any code editor or integrated development environment (IDE) that supports Go.

While virtually any code editor can be used to write a Go program, one of the most commonly used is Visual Studio Code. For Go support, select the Extensions icon on the left side, type "Go" in the search bar, and click Install on the Go extension. Once the extension is installed, update the Go tools: press Ctrl+Shift+P to open the Show All Commands window, search for "Go: Install/Update Tools", select all the available tools, and click OK. A separate IDE such as GoLand can also be used to write, debug, compile, and run Go projects. Both Visual Studio Code and GoLand are available for Windows, macOS, and Linux.

Web scraping frameworks

Go offers a wide selection of frameworks. Some are simple packages with core functionality, while others, such as Ferret, Gocrawl, Soup, and Hakrawler, provide a complete web scraping infrastructure to simplify data extraction.
A brief overview of these frameworks:

Ferret is a fast, portable, and extensible framework for designing Go web scrapers. It is easy to use: you write a declarative query expressing which data to extract, and Ferret handles HTML retrieval and parsing by itself.

Gocrawl is a web scraping framework written in Go. It gives complete control to visit, inspect, and query different URLs using goquery, and allows concurrent execution via goroutines.

Soup is a small web scraping framework that can be used to implement a Go web scraper. It provides an API for retrieving and parsing content.

Hakrawler is a simple, fast web crawler for Go. It is a simplified version of the most popular Golang web scraping framework, Colly, and is mainly used to extract URLs and JavaScript file locations.

GoQuery is a framework that provides functionality similar to jQuery in Golang. It builds on two basic Go packages: net/html (a Golang HTML parser) and cascadia (a CSS selector engine).

Colly is the most popular framework for writing web scrapers in Go: a fast scraping framework that can be used to write any kind of crawler, scraper, or spider (this article explains the difference between a scraper and a crawler). Colly has a clean API, handles cookies and sessions automatically, supports caching and robots.txt, and, most importantly, is fast. It also offers distributed scraping, HTTP request delays, and per-domain concurrency. In this tutorial we will use Colly to scrape books.toscrape.com, a dummy book store for practicing web scraping.

How to import a package in Golang

As the name suggests, the import directive imports packages into a Golang program. For example, the fmt package defines formatted I/O library functions:

    package main

    import "fmt"

    func main() {
        fmt.Println("Hello World")
    }

The code above imports the fmt package and then uses its Println function to display "Hello World" in the console. Multiple packages can be imported with a single import directive:

    package main

    import (
        "fmt"
        "math/rand"
    )

    func main() {
        fmt.Println("Hello World")
        fmt.Println(rand.Intn(100))
    }

Parsing HTML with Colly

To extract structured data from URLs and HTML, the first step is to create a project and install Colly. Create a new directory, navigate there in the terminal, and run:

    go mod init oxylabs.io/web-scraping-with-go

This creates a go.mod file recording the module name and the Go version. Next, install Colly and its dependencies:

    go get github.com/gocolly/colly

This command also updates the go.mod file with all the required dependencies and creates a go.sum file. We are now ready to write the scraper. Create a new file, save it as books.go, and enter the following code (Go refuses to compile unused imports, so the encoding/csv, log, and os packages used later in the tutorial should be added only when they are needed):

    package main

    import (
        "fmt"
    )

    func main() {
        // Scraping code here
        fmt.Println("Done")
    }

The first line is the package name. The main() function is the entry point of the program and is where we will write the code for the web scraper.
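Before moving on, a note on the request delays and per-domain concurrency mentioned in the Colly overview above: they are configured through a LimitRule. The following standalone sketch is not part of the original tutorial (it uses the NewCollector call explained in the next section), and the one-second delay and parallelism of two are arbitrary illustration values:

    package main

    import (
        "fmt"
        "time"

        "github.com/gocolly/colly"
    )

    func main() {
        c := colly.NewCollector(
            colly.AllowedDomains("books.toscrape.com"),
            colly.Async(true), // async mode is needed for Parallelism to take effect
        )

        // At most 2 concurrent requests to matching domains,
        // with a fixed 1-second delay between requests.
        c.Limit(&colly.LimitRule{
            DomainGlob:  "*books.toscrape.com*",
            Parallelism: 2,
            Delay:       1 * time.Second,
        })

        c.OnRequest(func(r *colly.Request) {
            fmt.Println("Visiting", r.URL)
        })

        c.Visit("https://books.toscrape.com/")
        c.Wait() // wait for the async requests to finish
    }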
Sending HTTP requests with Colly

The fundamental component of a Colly web scraper is the Collector, which makes HTTP requests and traverses HTML pages. The Collector exposes multiple events, and we can hook custom functions that execute when these events are raised; these functions are anonymous and passed as parameters. First, to create a new Collector using default settings, enter this line:

    c := colly.NewCollector()

Many other parameters can control the Collector's behavior. In this example we limit the allowed domains; change the line as follows:

    c := colly.NewCollector(
        colly.AllowedDomains("books.toscrape.com"),
    )

Once the instance is available, the Visit() function can be called to start the scraper. Before doing so, however, it is important to hook up a few events. The OnRequest event is raised when an HTTP request is sent to a URL and is used to track which URL is being visited:

    c.OnRequest(func(r *colly.Request) {
        fmt.Println("Visiting", r.URL)
    })

Note that the anonymous function passed as a parameter here is a callback: it is called when the event is raised. Similarly, OnResponse can be used to examine the response:

    c.OnResponse(func(r *colly.Response) {
        fmt.Println(r.StatusCode)
    })

The OnHTML event can be used to take action when a specific HTML element is found.

Locating HTML elements via CSS selectors

The OnHTML event is hooked up with a CSS selector and a function that executes when HTML elements matching the selector are found. For example, the following executes when a title tag is encountered, extracting the text inside it and printing it:

    c.OnHTML("title", func(e *colly.HTMLElement) {
        fmt.Println(e.Text)
    })

Putting together everything so far, the main() function is as follows:

    func main() {
        c := colly.NewCollector(
            colly.AllowedDomains("books.toscrape.com"),
        )

        c.OnHTML("title", func(e *colly.HTMLElement) {
            fmt.Println(e.Text)
        })

        c.OnResponse(func(r *colly.Response) {
            fmt.Println(r.StatusCode)
        })

        c.OnRequest(func(r *colly.Request) {
            fmt.Println("Visiting", r.URL)
        })

        c.Visit("https://books.toscrape.com/")
    }

The file can be run from the terminal:

    go run books.go

The output will be:

    Visiting https://books.toscrape.com/
    200
    All products | Books to Scrape - Sandbox

Extracting the HTML elements

Now that we know how Colly works, let's modify OnHTML to extract the book titles and prices. The first step is to understand the HTML structure of the page. Each book is contained in an article tag with a product_pod class, so the CSS selector is .product_pod. Next, the complete book title is found in the thumbnail image as an alt attribute value; the selector for it is .image_container img. Finally, the selector for the book price is .price_color. OnHTML can be modified as follows:

    c.OnHTML(".product_pod", func(e *colly.HTMLElement) {
        title := e.ChildAttr(".image_container img", "alt")
        price := e.ChildText(".price_color")
    })

This function executes every time a book is found on the page. Note the use of the ChildAttr function, which takes two parameters: the CSS selector and the name of the attribute. Loose variables are not ideal, though; a better idea is to create a data structure to hold this information. Here a struct will do:

    type Book struct {
        Title string
        Price string
    }
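One event the tutorial does not cover is OnError, which Colly raises when a request fails. A minimal sketch, assuming the collector c from above and the log package; the message wording is my own:

    // Log failed requests (timeouts, non-2xx statuses, DNS errors, and so on)
    c.OnError(func(r *colly.Response, err error) {
        log.Println("Request to", r.Request.URL, "failed:", err)
    })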
The OnHTML callback is then modified as follows:

    c.OnHTML(".product_pod", func(e *colly.HTMLElement) {
        book := Book{}
        book.Title = e.ChildAttr(".image_container img", "alt")
        book.Price = e.ChildText(".price_color")
        fmt.Println(book.Title, book.Price)
    })

For now, this web scraper simply prints the information to the console, which isn't particularly useful. We'll revisit this function when it's time to save the data to a CSV file.

Handling pagination

First, we need to locate the "next" button and create a CSS selector for it. For this particular site, the selector is .next > a. Using that selector, a new callback can be added to the OnHTML event. In this function we convert the relative URL to an absolute URL, then call Visit() to crawl it:

    c.OnHTML(".next > a", func(e *colly.HTMLElement) {
        nextPage := e.Request.AbsoluteURL(e.Attr("href"))
        c.Visit(nextPage)
    })

The existing callback that scrapes the book information is called on all the resulting pages as well; no additional code is needed. Now that we have the data from all the pages, it is time to save it to a CSV file.

Writing data to a CSV file

The built-in encoding/csv library can be used to save the structs to a CSV file. (If you want the data in JSON format instead, the standard library handles that too; a short sketch follows this section.) To create a new CSV file, enter the following code before creating the Colly collector:

    file, err := os.Create("export.csv")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

This creates export.csv and delays closing the file until the program completes. Next, add these two lines to create a CSV writer:

    writer := csv.NewWriter(file)
    defer writer.Flush()

Now write the headers:

    headers := []string{"Title", "Price"}
    writer.Write(headers)

Finally, modify the OnHTML callback to write each book as a single row:

    c.OnHTML(".product_pod", func(e *colly.HTMLElement) {
        book := Book{}
        book.Title = e.ChildAttr(".image_container img", "alt")
        book.Price = e.ChildText(".price_color")
        row := []string{book.Title, book.Price}
        writer.Write(row)
    })

That's all; the code for the Golang web scraper is now complete. Run it with go run books.go to produce an export.csv file with a row for every book.
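Here is a sketch of the JSON alternative mentioned above; it is not in the original tutorial. It collects the books into a slice and marshals them once scraping is done, and it assumes the encoding/json import plus Go 1.16+ for os.WriteFile; the export.json name and two-space indent are arbitrary choices:

    // Collect books in memory instead of streaming rows to CSV.
    var books []Book

    c.OnHTML(".product_pod", func(e *colly.HTMLElement) {
        books = append(books, Book{
            Title: e.ChildAttr(".image_container img", "alt"),
            Price: e.ChildText(".price_color"),
        })
    })

    // After c.Visit(...) has finished, serialize the slice to a file.
    data, err := json.MarshalIndent(books, "", "  ")
    if err != nil {
        log.Fatal(err)
    }
    if err := os.WriteFile("export.json", data, 0644); err != nil {
        log.Fatal(err)
    }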
Scheduling tasks with GoCron

For some tasks you might want to schedule a web scraper to extract data periodically or at a specific time. You can do that with your OS's scheduler, such as cron or Windows Task Scheduler, or with a high-level scheduling package for the language you are using. Keep in mind that scheduling a scraper through OS-provided schedulers limits the portability of the code; the GoCron task scheduling package solves this problem and works well on almost all operating systems. GoCron runs specific code at a particular time and offers functionality similar to Python's job scheduling module, schedule. Scheduling a task with GoCron requires installing the package:

    go get github.com/go-co-op/gocron

The next step is to write a GoCron script to schedule our code. The following example shows how the GoCron scheduler works (a five-second interval is used here for illustration):

    package main

    import (
        "fmt"
        "time"

        "github.com/go-co-op/gocron"
    )

    func MyTask() {
        fmt.Println("Hello Task")
    }

    func main() {
        myScheduler := gocron.NewScheduler(time.UTC)
        myScheduler.Every(5).Seconds().Do(MyTask)
        myScheduler.StartAsync()
        myScheduler.StartBlocking()
    }

The code above schedules the MyTask function to run repeatedly at the chosen interval. The GoCron scheduler can start in two modes: StartAsync() starts it asynchronously, while StartBlocking() starts it in blocking mode, blocking the current execution path. Side note: the example above shows both calls, but you should choose one or the other as your requirements dictate. Let's now schedule the Golang web scraper using the GoCron scheduling module:

    package main

    import (
        "encoding/csv"
        "fmt"
        "log"
        "os"
        "time"

        "github.com/go-co-op/gocron"
        "github.com/gocolly/colly"
    )

    type Book struct {
        Title string
        Price string
    }

    func BooksScraper() {
        fmt.Println("Start scraping")
        file, err := os.Create("export.csv")
        if err != nil {
            log.Fatal(err)
        }
        defer file.Close()

        writer := csv.NewWriter(file)
        defer writer.Flush()

        headers := []string{"Title", "Price"}
        writer.Write(headers)

        c := colly.NewCollector(
            colly.AllowedDomains("books.toscrape.com"),
        )

        c.OnHTML(".product_pod", func(e *colly.HTMLElement) {
            book := Book{}
            book.Title = e.ChildAttr(".image_container img", "alt")
            book.Price = e.ChildText(".price_color")
            row := []string{book.Title, book.Price}
            writer.Write(row)
        })

        c.OnResponse(func(r *colly.Response) {
            fmt.Println(r.StatusCode)
        })

        c.OnRequest(func(r *colly.Request) {
            fmt.Println("Visiting", r.URL)
        })

        c.Visit("https://books.toscrape.com/")
    }

    func main() {
        myScheduler := gocron.NewScheduler(time.UTC)
        myScheduler.Every(1).Minute().Do(BooksScraper)
        myScheduler.StartBlocking()
    }

Summary

The code used in this article ran in a matter of seconds, while executing the same task in Scrapy, one of the most optimized modern frameworks for Python, took considerably longer. If speed is what you prioritize for your web scraping tasks, it is a good idea to consider Golang in tandem with a modern framework such as Colly. The complete code used in this article is linked from the original post.
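One more note on the scheduling section above: if you already think in crontab terms, GoCron also accepts cron expressions directly, which keeps cron's syntax without sacrificing portability. A minimal sketch, assuming the go-co-op/gocron package from the tutorial; the every-minute expression is an arbitrary example:

    package main

    import (
        "fmt"
        "time"

        "github.com/go-co-op/gocron"
    )

    func main() {
        s := gocron.NewScheduler(time.UTC)
        // Standard five-field cron expression: run once every minute.
        s.Cron("* * * * *").Do(func() {
            fmt.Println("tick", time.Now().Format(time.RFC3339))
        })
        s.StartBlocking()
    }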
Overseas TECH | DEV Community | Building an Awesome Carousel Reusable Component with React and Splide.js | https://dev.to/meenahgurl/building-an-awesome-carousel-reusable-component-with-react-and-splidejs-g0p | 2022-11-07 07:18:12

Summary:

Introduction

Carousels, widely known as sliders, galleries, and slideshows, help developers display text, graphics, images, and even video in one interactive "sliding block". They're a great design option for grouping content and ideas together, allowing you to form visual relationships between specific pieces of content. Components in React are reusable, independent bits of code that render the HTML and data passed to them; React has two kinds, function components and class components. Splide.js is a lightweight, flexible, and accessible library that helps you build sliders and design your slides however you want, without writing any CSS. In this tutorial you will learn how to build a reusable Splide.js carousel component for React.

Prerequisite

Before you start, ensure you have a recent version of Node installed.

Creating a new React.js project

You can create a new React project with this command:

    npm create vite@latest

While the command runs, remember to select react and javascript when prompted. Then run these commands to complete your installation:

    cd <project-name>
    npm install
    npm run dev

Adding Tailwind CSS to the React project

Install Tailwind CSS via npm:

    npm install -D tailwindcss postcss autoprefixer
    npx tailwindcss init

Add tailwindcss and autoprefixer to your postcss.config.js file, or wherever PostCSS is configured in your project:

    module.exports = {
      plugins: {
        tailwindcss: {},
        autoprefixer: {},
      },
    }

Add the paths to all of your template files in your tailwind.config.js file:

    module.exports = {
      content: ["./src/**/*.{html,js,jsx}"],
      theme: {
        extend: {},
      },
      plugins: [],
    }

Add the Tailwind directives to your CSS:

    @tailwind base;
    @tailwind components;
    @tailwind utilities;

You can now start the project with npm run dev to test the installation.

Integrating Splide.js

Splide.js has integrations for Vue, React, and Svelte; for this tutorial, install the React one, add the auto-scroll extension, and then restart the project (visit the Splide documentation to learn more):

    npm install @splidejs/react-splide
    npm install @splidejs/splide-extension-auto-scroll
    npm run dev

Creating a component

Create a components folder inside your src folder, create a file in it called Slider.jsx, and paste the following code:

    import React from "react";
    import { Splide, SplideSlide } from "@splidejs/react-splide";
    import { AutoScroll } from "@splidejs/splide-extension-auto-scroll";
    import "@splidejs/splide/dist/css/splide.min.css";

    export function Slider({ imageUrl }) {
      return (
        <Splide
          options={{
            type: "loop",
            rewind: true,
            autoplay: true,
            perMove: 1,
            perPage: 1,
            gap: "1rem",
            arrows: false,
            pagination: false,
            autoScroll: {
              pauseOnHover: true,
              pauseOnFocus: false,
              speed: 1,
            },
          }}
          extensions={{ AutoScroll }}
        >
          <SplideSlide>
            <img src={imageUrl} alt="" />
          </SplideSlide>
        </Splide>
      );
    }

    export default Slider;

As seen above, we import Splide and SplideSlide from @splidejs/react-splide, and AutoScroll from the auto-scroll extension. In App.jsx, paste the following code; notice that it imports our Slider.jsx component:
    import React from "react";
    import Slider from "./components/Slider";

    export function App() {
      return (
        <div className="max-w-xl mx-auto py-8 flex justify-center items-center">
          <div className="bg-purple-300 rounded-lg py-4">
            <Slider imageUrl="src/assets/nature.jpg" />
          </div>
        </div>
      );
    }

    export default App;

Output

We finally made it to the end; run the project to see how the finished carousel looks.

Conclusion

You have now learnt how to integrate Splide.js into your project and how to use it. Try it again on a different project, and explore Splide.js to learn more. A link to this project is available on GitHub.
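To underline the "reusable" part of the tutorial above, here is a small sketch that renders the same Slider component several times from a list of images. The Gallery name, the images array, and its paths are hypothetical placeholders, not assets from the original project:

    import React from "react";
    import Slider from "./components/Slider";

    // Hypothetical image paths; substitute assets from your own project.
    const images = [
      "src/assets/nature.jpg",
      "src/assets/city.jpg",
      "src/assets/sea.jpg",
    ];

    export function Gallery() {
      return (
        <div className="flex flex-col gap-4">
          {images.map((url) => (
            // key lets React track each slider across re-renders
            <Slider key={url} imageUrl={url} />
          ))}
        </div>
      );
    }

    export default Gallery;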
Overseas TECH | Engadget | Elon Musk says Twitter will permanently ban users that impersonate accounts | https://www.engadget.com/elon-says-twitter-will-permanently-ban-users-that-impersonate-accounts-093024671.html?src=rss | Before acquiring Twitter, Elon Musk said he was against lifetime suspensions, promising to reinstate banned users like Donald Trump. Now Musk writes that Twitter will permanently suspend account impersonators unless they are clearly labeled as parody. The move comes after several verified blue-check users changed their accounts to impersonate Musk himself. Twitter appears to have banned comedian Kathy Griffin, at least temporarily, for impersonating Musk after she used his name and image on her own verified account; other verified accounts impersonating Musk, including Jeph Jacques, also appear to have been kicked off the site. "Going forward, any Twitter handles engaging in impersonation without clearly specifying 'parody' will be permanently suspended," Musk tweeted in November. Prior to Musk's takeover, Twitter's rules already stated that users may not impersonate individuals, groups, or organizations to mislead, confuse, or deceive others, nor use a fake identity in a manner that disrupts the experience of other users; parody accounts were required to say so in both their account names and bios. Consequences included profile moderation, temporary suspension, or permanent suspension, though the last was rarely imposed. Twitter has been awash in drama over the last few days: early in Musk's tenure, trolls and racists flooded the site with epithets and other hate speech, presumably to test the site's new limits, and this week a flood of advertisers put a hold on spending. In reply to a user who suggested a boycott of those companies, Musk tweeted that "a thermonuclear name & shame is exactly what will happen if this continues." | 2022-11-07 07:55:24
Medical | Study notes of an internist in private practice | Even quarantine hotels can see facility-wide infection, depending on the building's equipment | https://kaigyoi.blogspot.com/2022/11/blog-post_6.html | The Taiwan Centers for Disease Control investigated the outbreak behind these latest cases and hypothesized that it began with a traveler from New York, USA, followed by active transmission of the Omicron strain. | 2022-11-07 07:52:00
Medical / nursing care | CBnews | Request for a system to track per-capita salaries by job type: Ministry of Finance pushes to make medical corporations' costs "visible" | https://www.cbnews.jp/news/entry/20221107160227 | business reports | 2022-11-07 16:10:00
Finance | NLI Research Institute | Trust banks net buyers for a second straight month: trading by investor type, October 2022 | https://www.nli-research.co.jp/topics_detail1/id=72898?site=nli | Trust banks bought on balance during a week in which the Nikkei average fell sharply, and appear to have sold as prices rose in a later week of the month. | 2022-11-07 16:14:19
Overseas news | Japan Times latest articles | Taiwan's bomb shelters: 'A space for life. And a space for death.' | https://www.japantimes.co.jp/news/2022/11/07/asia-pacific/taiwan-keelung-bomb-shelters/ | Preparing for war over hundreds of years has left a mark on the island, with its hundreds of bomb shelters; some are being turned into … | 2022-11-07 16:19:45
IT | Weekly ASCII | A 'DQMJ' collaboration is underway in 'Dragon Quest Tact', featuring the four guardian beasts | https://weekly.ascii.jp/elem/000/004/112/4112028/ | now running | 2022-11-07 16:35:00
IT | Weekly ASCII | In-store hands-on event schedule announced for 'God of War Ragnarök' | https://weekly.ascii.jp/elem/000/004/112/4112020/ | upcoming release | 2022-11-07 16:10:00
Marketing | AdverTimes | 500 toys from Glico's history gathered for its 100th anniversary at the "Creators' Glico" exhibition | https://www.advertimes.com/20221107/article401264/ | free admission | 2022-11-07 07:45:12
