Posted: 2021-05-02 21:39:47 / RSS feed digest for 2021-05-02 21:00 (49 items)

Category / Site / Article title or trend word / Link URL / Frequent words or summary (search volume) / Date registered
IT MOONGIFT Improve YouTube! - a browser extension that improves YouTube's UI/UX http://feedproxy.google.com/~r/moongift/~3/ho_j1rd3viU/ Improve YouTube! is a browser extension that improves YouTube's UI/UX; a huge number of videos are uploaded to YouTube every day. 2021-05-02 21:00:00
AWS New posts tagged lambda - Qiita Building a Lambda with Terraform while managing and deploying its source code separately https://qiita.com/cohey0727/items/c3cb28785d9827e33443 Background: while creating AWS Lambda functions with Terraform, the author started wondering whether the Lambda source code should really be managed in Terraform as well. 2021-05-02 20:30:57
python New posts tagged Python - Qiita kivyMD tutorial part 14: Components - Layout https://qiita.com/virty/items/d5fa0a529912e6b0beb0 CircularLayout was skipped because its sample code did not run (it was not even clear whether it is implemented; importing it from other modules was tried, without success), and RefreshLayout was skipped because its scope is too large. 2021-05-02 20:42:09
python New posts tagged Python - Qiita A summary of pruning speedups for Dijkstra's algorithm (Python implementation) https://qiita.com/ansain/items/8a2762446cdf2eb47759 Note that this trick requires converting tuples to numbers and numbers back to tuples, so if the baseline is already fast it may not bring any further speedup. 2021-05-02 20:41:15
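The packing trick this summary alludes to can be sketched in a few lines; everything here (the graph shape, the choice of multiplier N) is an illustrative assumption, not code from the linked post:

```python
import heapq

def dijkstra_packed(adj, start, N=10**6):
    """Dijkstra where each heap entry is a single int (dist * N + node),
    so heapq compares ints instead of tuples; N must exceed the node count."""
    dist = [float('inf')] * len(adj)
    dist[start] = 0
    heap = [start]  # encodes (dist=0, node=start) as 0 * N + start
    while heap:
        d, v = divmod(heapq.heappop(heap), N)  # unpack distance and node
        if d > dist[v]:
            continue  # stale entry: pruned
        for to, w in adj[v]:
            if d + w < dist[to]:
                dist[to] = d + w
                heapq.heappush(heap, (d + w) * N + to)
    return dist

# usage: adjacency list of (neighbor, weight) pairs
print(dijkstra_packed([[(1, 4), (2, 1)], [(2, 2)], [(1, 1)]], 0))  # [0, 2, 1]
```

The packing and unpacking (the multiplication and divmod) is itself extra work, which is exactly why, as the summary notes, it does not always win.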
js New posts tagged JavaScript - Qiita JavaScript methods and properties https://qiita.com/amimi/items/3059080c26ee860a568a 2021-05-02 20:46:45
js New posts tagged JavaScript - Qiita JavaScript else if, a beginner's review https://qiita.com/Hoshi10Lighthouse/items/d1c97e4d62798292fa3b When if and else alone are not enough and you want another condition, insert else if between them: if (condition A) runs when A is true; else if (condition B) runs when A is false and B is true; else runs when neither holds. The post walks through a const number example that logs which threshold the value exceeds. 2021-05-02 20:45:45
js New posts tagged JavaScript - Qiita JavaScript else (when the condition does not hold), a beginner's review https://qiita.com/Hoshi10Lighthouse/items/ea0b3cefc9d345c9c3f1 Sometimes you want different behavior when the if condition is not met; for example, printing that the value is at or below the threshold when number is not greater than it. 2021-05-02 20:25:14
Program New questions (all tags) | teratail OpenSSL cannot read my private key https://teratail.com/questions/336284?rss=all Goal: building a website, but OpenSSL fails to read the private key. 2021-05-02 20:56:57
Program New questions (all tags) | teratail When is the wheel group actually useful? https://teratail.com/questions/336283?rss=all In what situations is the wheel group used? Simply adding users to wheel seems to defeat the point of having sudo. 2021-05-02 20:54:57
Program New questions (all tags) | teratail I want to build a tidy form with Django's FormView and Bootstrap's form_layout https://teratail.com/questions/336282?rss=all The questioner wants to build a well-laid-out form using Django's FormView and Bootstrap's form_layout. 2021-05-02 20:51:35
Program New questions (all tags) | teratail Why create custom exception classes when the throw statement exists? (Java) https://teratail.com/questions/336281?rss=all What can a throw statement not cover, and what are the merits and demerits of throw statements versus custom exception classes? 2021-05-02 20:33:50
Program New questions (all tags) | teratail "Cannot read property 'runtime' of undefined" in a freshly vue-created project https://teratail.com/questions/336280?rss=all Running a project newly generated with vue create prints this error to the browser console in both Brave and Chrome. 2021-05-02 20:32:34
Program New questions (all tags) | teratail About distributed systems https://teratail.com/questions/336279?rss=all Distributed systems are defined as "a collection of independent computers that appears to its users as a single coherent system", but the questioner finds this hard to grasp. 2021-05-02 20:17:57
Program New questions (all tags) | teratail I want to import models and define an autoencoder https://teratail.com/questions/336278?rss=all The questioner wants to define an autoencoder by importing models. 2021-05-02 20:08:45
Program New questions (all tags) | teratail An admin panel with Rails and AdminLTE3 https://teratail.com/questions/336277?rss=all An admin panel was built with AdminLTE3. 2021-05-02 20:07:13
Program New questions (all tags) | teratail JS: a custom on-screen keyboard makes the input lose focus https://teratail.com/questions/336276?rss=all Clicking the custom keyboard removes focus from the input element. 2021-05-02 20:06:19
Program New questions (all tags) | teratail Copying Gmail messages into a spreadsheet https://teratail.com/questions/336275?rss=all The questioner is exporting Gmail messages to a spreadsheet, but the script is wrong and nothing gets written; as a beginner they ask what the problem with their script is. 2021-05-02 20:03:21
Program New questions (all tags) | teratail SwiftUI: searching a NIFCLOUD datastore https://teratail.com/questions/336274?rss=all 2021-05-02 20:01:54
Ruby New posts tagged Ruby - Qiita [RSpec] Fixing the "MissingAttributeError" raised in model unit tests https://qiita.com/seaturtle_m_o/items/b79b151db724f4a71142 Summary: since post belongs to user, the FactoryBot factory declares association :user so that the linked user record is generated at the same time. 2021-05-02 20:00:57
AWS New posts tagged AWS - Qiita Building a Lambda with Terraform while managing and deploying its source code separately https://qiita.com/cohey0727/items/c3cb28785d9827e33443 Background: while creating AWS Lambda functions with Terraform, the author started wondering whether the Lambda source code should really be managed in Terraform as well. 2021-05-02 20:30:57
AWS New posts tagged AWS - Qiita A blazing-fast PWA with Flutter for Web, AWS S3, CloudFront and a custom domain https://qiita.com/nashitake/items/89ac499a622955a0b1d3 The author releases a PWA service built with Flutter for Web in a matter of hours. Until recently they knew Flutter only as an iOS/Android cross-platform framework, but once a stable Flutter release officially added web support they saw nothing but potential, and combined a custom domain with AWS to ship a PWA quickly. A PWA (Progressive Web Apps) makes a website behave like a native app: it can be installed on a phone's home screen and send push notifications without an app-store release, and since the browser chrome disappears, the whole screen is available (in the sample page, the PWA version shows a noticeably larger display area). Development environment: Flutter on macOS Big Sur with Android Studio; an Apple M-series chip caused no problems. Steps: install Flutter per the official docs until flutter doctor reports no issues across Flutter, the Android toolchain, Xcode, Chrome, Android Studio and a connected device; add the Flutter and Dart plugins to Android Studio; create a "Flutter Application" project (the iOS and Android targets can be unchecked for a web-only release); run flutter config --enable-web, after which flutter devices lists Chrome as a debug device and pressing the debug button launches the app in Chrome. Getting doctor all green takes a little fiddling, but otherwise the setup is remarkably easy. Once the app is confirmed to run, build it for the web. 2021-05-02 20:18:05
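The summary breaks off at the build step; the standard command for building Flutter for the web (shown here for reference, not quoted from the post) is:

```
flutter build web
```

The output lands in build/web as static files, which fits the article's S3 + CloudFront hosting setup.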
Docker New posts tagged docker - Qiita Troubleshooting a docker container that cannot use the GPU https://qiita.com/max_marketter/items/d01f5f68d79c549e60e3 Overview: the GPU was not usable inside a docker container and an error was raised. 2021-05-02 20:37:13
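A common first step when debugging this is to check whether the NVIDIA container runtime works at all; the usual smoke test (the image tag is illustrative) is:

```
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```

If nvidia-smi fails here, the problem lies in the host's driver or NVIDIA Container Toolkit setup rather than in your own image.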
GCP New posts tagged gcp - Qiita Decompressing a gz-compressed file stored on GCS https://qiita.com/Maniwa1021/items/087a39970fb6f74ae982 After creating an instance, click SSH to connect to it. 2021-05-02 20:28:26
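Presumably the decompression itself boils down to copying the object down and gunzipping it; a sketch with placeholder bucket and file names:

```
gsutil cp gs://your-bucket/logs.gz .
gunzip logs.gz
```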
Ruby New posts tagged Rails - Qiita [RSpec] Fixing the "MissingAttributeError" raised in model unit tests https://qiita.com/seaturtle_m_o/items/b79b151db724f4a71142 Summary: since post belongs to user, the FactoryBot factory declares association :user so that the linked user record is generated at the same time. 2021-05-02 20:00:57
海外TECH DEV Community Daily Share Price Notifications using Python, SQL and Africas Talking - Part One https://dev.to/ken_mwaura1/daily-share-price-notifications-using-python-sql-and-africas-talking-part-one-17p

Daily Share Price Notifications using Python, SQL and Africas Talking - Part One

On a hot, dry afternoon we sat discussing investment avenues; during this pandemic they are limited. However, we kept coming back to shares; it's also a good opportunity to learn about the workings of the financial sector. Here in Kenya the main bourse is the Nairobi Stock Exchange (NSE). The NSE operates Monday to Friday, mornings to early afternoon, except holidays.

The main aim of this article is to develop a web scraper and a notification script that alerts us when a given ticker reaches a specific price, or alternatively rises above a certain price threshold.

Having done web scraping projects before, I have researched an extensive list of libraries, frameworks and other tools (you can check out my write-up on a news scraper). I had a little experience with scrapy, and it seemed like the perfect fit for this project. Scrapy is a web scraping framework; as such it makes assumptions on how to handle certain aspects, ranging from folder structure to its own CLI and storing data. This makes it great for structuring large projects, or even multiple scrapers in one project. It also has a steep learning curve, but the in-depth documentation and fairly large community more than make up for it. For storing data a JSON file would usually be adequate, but a database makes it easy to persist and query the data later on. We'll be making use of PostgreSQL, mainly because I have already used it in other projects and it serves our needs nicely.

Prerequisites before getting started

To follow along with this post and code the same features, you're going to need a few things:

- Python and pip (any reasonably recent Python 3 version should work)
- An Africas Talking account: the API key and username from your account. Create an app and take note of the API key.

Once you've got the above sorted:

- Create a new directory and change into it:

```
mkdir nse_scraper
cd nse_scraper
```

- Create a new virtual environment for the project, or activate the previous one.
- Using the python package manager pip, install the africastalking python sdk, beautifulsoup4, scrapy, python-dotenv, sqlalchemy and psycopg2 libraries.
- Save the installed libraries in a requirements.txt file:

```
python -m venv .
source bin/activate
pip install africastalking beautifulsoup4 scrapy python-dotenv sqlalchemy psycopg2
pip freeze > requirements.txt
```

As mentioned above, we are using PostgreSQL as our database of choice, hence we need a library to interface with the database; psycopg2 is a good option, although there are others. Although not strictly necessary, we'll be making use of SQLAlchemy as our Object Relational Mapper (ORM). This allows us to work with python objects (classes, functions) instead of raw SQL.

Install the PostgreSQL database to save all of our scraped data. Depending on which platform you code on, you could do it natively on your system. Personally I am using docker, as it is easy to manage containers and it prevents my system from being cluttered. This article is an awesome resource on how to get PostgreSQL and pgAdmin installed as containers. Alternatively, check the finished code on Github.
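If you go the docker route, a single container is enough for this project. A minimal sketch; the container name, credentials and database name below are placeholders, not values from the article:

```
docker run --name nse-postgres -d -p 5432:5432 \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_DB=nse_data \
  postgres
```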
Spiders Everywhere

Scrapy operates on the concept of spiders: we define our own custom spiders to crawl and scrape data. Scrapy's commands make creating a project and a spider (or spiders) quick and easy. Now we will create a scrapy project and generate a spider with the required boilerplate code, using the CLI:

```
scrapy startproject nse_scraper
```

Running the startproject command will create a folder with the structure outlined below: a top folder with the project name (nse_scraper) that contains the Scrapy configuration, and a subfolder with the same name containing the actual crawling code.

```
$ tree nse_scraper
nse_scraper
├── nse_scraper
│   ├── __init__.py
│   ├── items.py
│   ├── middlewares.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       └── __init__.py
└── scrapy.cfg

2 directories, 7 files
```

NB: I don't want to go into too much detail about Scrapy, because there are many tutorials for the tool online, and because I normally use requests with lxml to write very simple data crawlers. Many people prefer BeautifulSoup or other higher-level crawling libraries, so feel free to go for that. I picked Scrapy in this particular case because it creates a nice scaffold when working with crawlers and databases, but all of this can be done from scratch as well.

```
$ cd nse_scraper
$ scrapy genspider afx_scraper https://afx.kwayisi.org/nseke/
Created spider 'afx_scraper' using template 'basic' in module:
  nse_scraper.spiders.afx_scraper
```

You could choose not to use the generator and write the Scrapy files yourself, but for simplicity I use the boilerplate that comes with Scrapy. Navigate to the top-level project folder and create the spider (afx_scraper) using genspider. In my case I will be crawling data from afx.kwayisi.org about NSE share prices. There is the main NSE website, or even the mystocks website; however, both require a subscription to get real-time stock quotes. Since this project is meant to be a DIY scraper with minimal costs, afx was the most viable option. As a bonus, they structure their data in a table and regularly update the prices.

If we take a look at the file structure again, a new file afx_scraper.py has been created inside the spiders folder:

```
$ tree nse_scraper
├── nse_scraper
│   ├── __init__.py
│   ├── items.py
│   ├── middlewares.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       ├── __init__.py
│       └── afx_scraper.py
└── scrapy.cfg
```

The content of afx_scraper.py is the minimum code required to get started with crawling data:

```python
# afx_scraper.py
import scrapy

class AfxScraperSpider(scrapy.Spider):
    name = 'afx_scraper'
    allowed_domains = ['https://afx.kwayisi.org/nseke/']
    start_urls = ['https://afx.kwayisi.org/nseke/']

    def parse(self, response):
        pass
```

Scraper Setup

The first element we want to crawl is the table element holding all the data; we then loop through it and get each ticker symbol, share name and price. The code to get the data is added to the parse function. Looking through the developer tools inside our browser, we see that the table element has a tbody element that holds tr elements (the table-row html element), and each row contains td elements (the table-data element); these are the elements we want to scrape. Scrapy allows two ways of selecting elements in an html document:

1. Using CSS selectors
2. Using XPath

We'll start off with a CSS selector, as it's straightforward: we assign a row variable to the selector referencing the rows of data. Due to the nature of how the individual data is displayed (similar html tags), we then need XPath to extract the individual fields:

```python
# afx_scraper.py
def parse(self, response):
    print("Processing: " + response.url)
    # Extract data using css selectors
    row = response.css('table tbody tr')
    # use XPath and regular expressions to extract stock name and price
    raw_ticker_symbol = row.xpath('td[1]').re('[A-Z].*')
    raw_stock_name = row.xpath('td[2]').re('[A-Z].*')
    raw_stock_price = row.xpath('td[4]').re('[0-9].*')
    # create a function to remove html tags from the returned list
    print(raw_ticker_symbol)
```

For each row above we use XPath to extract the required elements. The result is a combined list of data, including data from the top table with the top gainers and losers. In order to filter out what we don't need, we use regular expressions: in the case of raw_ticker_symbol and raw_stock_name we only need alphabetic characters, so we pass an [A-Z] rule to our regex; for our price data we need numbers, so we pass [0-9] as our regex rule.
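To find the right selectors in the first place, scrapy's interactive shell is handy. A quick sketch; the results depend on the live page, so the calls here are illustrative only:

```
$ scrapy shell https://afx.kwayisi.org/nseke/
>>> rows = response.css('table tbody tr')
>>> len(rows)                      # how many rows did we match?
>>> rows[0].xpath('td').getall()   # inspect the raw td cells of the first row
```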
Creepy Crawlers

Now the scraper is ready to be executed and retrieve the items. Run the crawler and verify that it is indeed returning the items that you would expect. There is no output that stores the items yet, but the log tells me that there were items that actually had a symbol, a name and a price defined (item_scraped_count). Note that I set the loglevel to INFO to prevent an information overload in the console. A trimmed version of the log:

```
[scrapy.utils.log] INFO: Scrapy started (bot: nse_scraper)
[scrapy.middleware] INFO: Enabled extensions: ...
[scrapy.middleware] INFO: Enabled downloader middlewares: ...
[scrapy.middleware] INFO: Enabled spider middlewares: ...
[scrapy.middleware] INFO: Enabled item pipelines:
['nse_scraper.pipelines.NseScraperPipeline']
[scrapy.core.engine] INFO: Spider opened
[py.warnings] WARNING: .../scrapy/spidermiddlewares/offsite.py: URLWarning:
allowed_domains accepts only domains, not URLs. Ignoring URL entry in allowed_domains.
[scrapy.core.engine] INFO: Closing spider (finished)
[scrapy.statscollectors] INFO: Dumping Scrapy stats:
{..., 'item_scraped_count': ..., ...}
[scrapy.core.engine] INFO: Spider closed (finished)
```
Let's Clean the Data

The data we get is not usable in its current format, as it contains html tags, classes, attributes etc., so we need to clean it:

```python
# afx_scraper.py
# import BeautifulSoup at the top of the file
from bs4 import BeautifulSoup

# create a function to remove html tags from the returned list
def clean_stock_name(raw_name):
    clean_name = BeautifulSoup(raw_name, 'lxml').text
    clean_name = clean_name.split('>')[1]
    return clean_name

def clean_stock_price(raw_price):
    clean_price = BeautifulSoup(raw_price, 'lxml').text
    return clean_price

# Use list comprehension to unpack required values
stock_name = [clean_stock_name(r_name) for r_name in raw_stock_name]
stock_price = [clean_stock_price(r_price) for r_price in raw_stock_price]
stock_symbol = [clean_stock_name(r_symbol) for r_symbol in raw_ticker_symbol]
# use list slicing to remove the unnecessary data
stock_symbol = stock_symbol[6:]
cleaned_data = zip(stock_symbol, stock_name, stock_price)
for item in cleaned_data:
    scraped_data = {
        'ticker': item[0],
        'name': item[1],
        'price': item[2],
    }
    # yield info to scrapy
    yield scraped_data
```

We first import the BeautifulSoup library from the bs4 package; this gives us an easier time cleaning the data. The first function, clean_stock_name, accepts a value raw_name; we call the BeautifulSoup constructor, pass our value as an argument and specify lxml as the parser (for further details on how Beautiful Soup works and on the different parsers, check out the documentation). We then take only the text and assign it to our clean_name variable. The name still contains additional characters we don't need, so we call the split method and return the required part of the string. The second function, clean_stock_price, pretty much repeats the process outlined above; the only difference is that the extra string-split step isn't needed.

We then call the functions on each value of raw_ticker_symbol, raw_stock_name and raw_stock_price, and assign the results to the appropriately named variables stock_symbol, stock_name and stock_price. The stock symbol list returns more entries than we need, hence the list slicing to get the correct length. We use the zip function to create a single list of all the data retrieved. Finally we create a dictionary, scraped_data, and assign the relevant keys to the values of the cleaned data. By using the yield keyword our parse function is now a generator, able to return values as needed; this is especially critical for performance when crawling multiple pages.
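As a quick sanity check of the tag-stripping step, here is what BeautifulSoup does with a single cell; the raw markup below is a made-up example, not taken from afx.kwayisi.org:

```python
from bs4 import BeautifulSoup

# hypothetical raw cell, shaped like what row.xpath(...).re(...) might return
raw = '<td><a href="/nseke/SCOM">SCOM</a></td>'
print(BeautifulSoup(raw, 'lxml').text)  # prints: SCOM
```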
Let's Store All the Data

First of all, I define the schema of the element that I am crawling in items.py. There is no fancy schema yet, but this can obviously be improved in the future when more items are being retrieved and the actual datatypes do make a difference.

```python
# items.py
from scrapy.item import Item, Field

class NseScraperItem(Item):
    # define the fields for your item here
    stock_name = Field()
    stock_price = Field()
    stock_symbol = Field()
```

The middlewares.py is left untouched for this project. The important bit for storing data in a database is inside models.py. As described before, I use SQLAlchemy to connect to the PostgreSQL database. The database details are stored in settings.py (see below) and are used to create the SQLAlchemy engine. I define the Items model with the three fields and use create_items_table to create the table:

```python
# nse_scraper/nse_scraper/models.py
from sqlalchemy import Column, Float, Integer, String, create_engine
from sqlalchemy.engine.base import Engine
from scrapy.utils.project import get_project_settings
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

def db_connect() -> Engine:
    """
    Creates a database connection using database settings from settings.py.
    Returns a sqlalchemy engine instance.
    """
    return create_engine(get_project_settings().get("DATABASE"))

def create_items_table(engine: Engine):
    """Create the Items table."""
    Base.metadata.create_all(engine)

class StockData(Base):
    """Defines the items model."""
    __tablename__ = "stock_data"

    id = Column("id", Integer, primary_key=True, autoincrement=True)
    stock_ticker = Column("stock_ticker", String)
    stock_name = Column("stock_name", String)
    stock_price = Column("stock_price", Float)
```

Inside pipelines.py the spider is connected to the database. When the pipeline is started it will initialize the database: create the engine, create the table and set up a SQLAlchemy session. The process_item function is part of the default code and is executed for every yielded item in the scraper; in this case that means it is triggered every time a stock is retrieved with a ticker, a name and a price. Remember to always commit when adding (or removing) items to the table.

```python
# nse_scraper/nse_scraper/pipelines.py
# Define your item pipelines here.
# Don't forget to add your pipeline to the ITEM_PIPELINES setting.
from sqlalchemy.orm import sessionmaker
from nse_scraper.models import StockData, create_items_table, db_connect

class NseScraperPipeline:
    def __init__(self):
        """
        Initializes database connection and sessionmaker.
        Creates the stock_data table.
        """
        engine = db_connect()
        create_items_table(engine)
        self.Session = sessionmaker(bind=engine)

    def process_item(self, item, spider):
        """Process item and store to database."""
        session = self.Session()
        stock_data = StockData()
        stock_data.stock_name = item['name']
        stock_data.stock_price = float(item['price'].replace(',', ''))
        stock_data.stock_ticker = item['ticker']

        try:
            session.add(stock_data)
            session.commit()
            # query again
            obj = session.query(StockData).first()
            print(obj.stock_ticker)
        except Exception as e:
            session.rollback()
            print(f"we have a problem, houston: {e}")
            raise
        finally:
            session.close()
        return item
```

Finally, settings.py is short and contains the information for the crawler. The only items I have added are the DATABASE and LOG_LEVEL variables. You could choose to put your security details in this file, but I would recommend keeping them secret and storing them elsewhere: I have used a .env file for my credentials, and the python-dotenv library to retrieve them. Note that the .env should be in the same folder as the settings.py file (or you can specify its file path in the brackets).

```python
# nse_scraper/nse_scraper/settings.py
# Scrapy settings for the nse_scraper project.
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings in the documentation.
import os
from dotenv import load_dotenv

load_dotenv()

BOT_NAME = 'nse_scraper'

SPIDER_MODULES = ['nse_scraper.spiders']
NEWSPIDER_MODULE = 'nse_scraper.spiders'

# POSTGRES SETTINGS
host = os.getenv("POSTGRES_HOST")
port = os.getenv("POSTGRES_PORT")
username = os.getenv("POSTGRES_USER")
password = os.getenv("POSTGRES_PASS")
database = os.getenv("POSTGRES_DB")
drivername = "postgresql"
DATABASE = f"{drivername}://{username}:{password}@{host}:{port}/{database}"

# Configure item pipelines
ITEM_PIPELINES = {
    'nse_scraper.pipelines.NseScraperPipeline': 300,
}

LOG_LEVEL = 'INFO'

# Crawl responsibly by identifying yourself (and your website) on the user agent
USER_AGENT = 'nse_scraper'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
```
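For completeness, a sketch of the matching .env file: the variable names mirror the os.getenv calls in settings.py, while the values below are placeholders to replace with your own.

```
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASS=password
POSTGRES_DB=nse_data
```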
Your scraper is now ready, so run it:

```
scrapy crawl afx_scraper
```

You should now see stock data in your database. Optionally, you can output to a json file to quickly preview the retrieved data:

```
scrapy crawl afx_scraper -o stock.json
```

This article was originally meant to cover setup, data scraping and notification; however, it is already long, and it is easier to break it into two parts. Part two will cover database queries, sms notification using africas talking, and deployment and scheduling of the web scraper. 2021-05-02 11:16:37
Apple AppleInsider - Frontpage News Apple vs. Epic Games App Store antitrust trial starts on Monday - what you need to know https://appleinsider.com/articles/20/08/23/apple-versus-epic-games-fortnite-app-store-saga----the-story-so-far?utm_medium=rss The Epic Games (Fortnite) versus Apple App Store antitrust trial starts on Monday; here's what you need to know before it gets going. Within the space of a few weeks, a disagreement between the ambitions of Epic Games and Apple's intention to maintain the App Store status quo has courted considerable controversy. The affair commenced with little warning to consumers but quickly drew international interest, as the battle seeks to change one of the fundamental elements of the App Store: how much Apple earns. 2021-05-02 11:58:27
Apple AppleInsider - Frontpage News Warren Buffett calls Tim Cook a 'fantastic manager' of Apple https://appleinsider.com/articles/21/05/01/warren-buffett-calls-tim-cook-a-fantastic-manager-of-apple?utm_medium=rss Warren Buffett hailed Tim Cook as a "fantastic manager" of Apple, declaring him one of the best managers in the world during the annual Berkshire Hathaway shareholders meeting. Streamed on Saturday, Buffett answered shareholder questions about Berkshire Hathaway and its investments; asked why the firm sold some Apple common stock despite considering the company its "fourth jewel", Buffett complimented Apple as a company and Cook as its leader: "It's got a fantastic manager. Tim Cook was underappreciated for a while. He's one of the best managers in the world, and I've seen a lot of managers. And he's got a product that people absolutely love," with an installed base of users reporting very high satisfaction rates. 2021-05-02 11:05:54
Overseas science NYT > Science SpaceX Makes First Nighttime Splashdown With Astronauts Since 1968 https://www.nytimes.com/2021/05/02/science/spacex-nasa-landing.html resilience 2021-05-02 11:49:06
News @Nikkei digital edition Working past 60: the pensions and benefits you should know about https://t.co/xNfaoq67He https://twitter.com/nikkei/statuses/1388825647032770561 pension 2021-05-02 12:00:12
News @Nikkei digital edition ⚡️"Crowded trains" and "infection red zones" even after the state-of-emergency declaration; differentiation and bottlenecks as seen in Asahi's draft-style "Nama Jokki" can; what have the government, municipalities and the medical community been doing for a year? These were the week's most-read articles among Nikkei digital subscribers. https://t.co/fJi2DH7oDe https://twitter.com/nikkei/statuses/1388825622789693441 Crowded trains and infection red zones even after the emergency declaration. 2021-05-02 12:00:06
News @Nikkei digital edition Stay-at-home fatigue near its limit: crowds remain conspicuous despite the state of emergency https://t.co/HfrnAVbym4 https://twitter.com/nikkei/statuses/1388821835840737283 state of emergency 2021-05-02 11:45:03
News @Nikkei digital edition Japan and U.S. uniformed chiefs meet, opposing any change to the status quo in the East China Sea, with China in mind https://t.co/1DGpL56IYE https://twitter.com/nikkei/statuses/1388819304762679300 East China Sea 2021-05-02 11:34:59
News @Nikkei digital edition [Nikkei exclusive] Nintendo plans an unusual fifth-year production boost for the Switch, eyeing 30 million units https://t.co/BybYVzBFH6 https://twitter.com/nikkei/statuses/1388818282174554113 unusual 2021-05-02 11:30:56
News @Nikkei digital edition Record-profit Tesla faces political risk in its tilt toward China https://t.co/g6TZzSwxPH https://twitter.com/nikkei/statuses/1388818086967586821 record 2021-05-02 11:30:09
News @Nikkei digital edition The world's financial institutions flocked to investment firm Archegos: Credit Suisse lost ¥590 billion and Nomura HD ¥310 billion, while the big U.S. players dodged the damage; regulatory loopholes, front entities and more, in a summary of the fallout and background of the trades. https://t.co/MnujxIRx4p https://twitter.com/nikkei/statuses/1388818082932662276 2021-05-02 11:30:08
News @Nikkei digital edition [Nikkei exclusive] Tamagawa Gakuen is in talks with the education ministry on a through-high-school education plan with autumn enrollment and a September term start, under which final-year kindergarteners would study the first-grade curriculum; if realized, it would influence other private schools and the national debate. https://t.co/IiY6Qhzd5l https://twitter.com/nikkei/statuses/1388814954833354753 2021-05-02 11:17:42
News @Nikkei digital edition Online brokers' zero-commission war: diversifying revenue sources is the challenge https://t.co/37lt6byyip https://twitter.com/nikkei/statuses/1388814307895517192 competition 2021-05-02 11:15:08
Overseas news Japan Times latest articles Hopes for Iran nuclear breakthrough 'within weeks' but success 'not guaranteed' https://www.japantimes.co.jp/news/2021/05/02/world/eu-iran-nuclear-deal/ The deal, which curbs Iran's nuclear program in exchange for sanctions relief, has been on life support since then-U.S. President Donald Trump bolted. 2021-05-02 21:22:53
Overseas news Japan Times latest articles Indian diaspora struggles to help homeland 'gasping for air' https://www.japantimes.co.jp/news/2021/05/02/asia-pacific/india-diaspora-help/ The diaspora is collecting funds, lobbying governments in the countries where they reside, and pledging to shuttle essential supplies and equipment. 2021-05-02 21:06:42
Overseas news Japan Times latest articles Up to 300 people per day breaking self-quarantine pledge in Japan https://www.japantimes.co.jp/news/2021/05/02/national/quarantine-period-breaking/ Up to 300 people every day could not be confirmed to be at their pledged quarantine locations. 2021-05-02 20:48:24
Overseas news Japan Times latest articles North Korea vows response after saying Biden policy shows hostile U.S. intent https://www.japantimes.co.jp/news/2021/05/02/asia-pacific/north-korea-biden-united-states/ In one statement, a Foreign Ministry spokesman accused Washington of insulting the dignity of the country's supreme leadership by criticizing North Korea's human rights situation. 2021-05-02 20:47:37
Overseas news Japan Times latest articles Diving star Tom Daley to combat Olympic boredom with knitting https://www.japantimes.co.jp/sports/2021/05/02/olympics/summer-olympics/olympics-diving/tom-daley-knitting-olympic-bubble/ bubble 2021-05-02 21:12:15
Overseas news Japan Times latest articles Tokyo welcomes foreign divers to test event under close supervision https://www.japantimes.co.jp/sports/2021/05/02/more-sports/swimming/diving-world-cup-opens/ The diving World Cup began Saturday, with some athletes already chafing under COVID-19 countermeasures that will see wider use at the upcoming Summer Olympics. 2021-05-02 20:23:27
News BBC News - Home Raab dismisses 'gossip' as he defends Johnson over flat revamp costs https://www.bbc.co.uk/news/uk-politics-56962642 conservative 2021-05-02 11:02:07
News BBC News - Home Trespass arrests at Prince Andrew's Windsor home https://www.bbc.co.uk/news/uk-56963548 grounds 2021-05-02 11:35:49
News BBC News - Home Nazanin Zaghari-Ratcliffe: Iran treatment 'amounts to torture', says Dominic Raab https://www.bbc.co.uk/news/uk-56963590 iranian 2021-05-02 11:47:26
News BBC News - Home Israel crush: Day of mourning after dozens killed at Jewish festival https://www.bbc.co.uk/news/world-middle-east-56961945 festival 2021-05-02 11:16:56
LifeHack Lifehacker Japan 13 tips for spending a fulfilling Golden Week https://www.lifehacker.jp/2021/05/233998matome-gw.html fulfilling 2021-05-02 21:00:00
Hokkaido Hokkaido Shimbun Basketball B.League: Mikawa clinches a Championship berth in the first division https://www.hokkaido-np.co.jp/article/539964/ advance 2021-05-02 20:06:00
