Finance |
Financial Services Agency website |
Updated "Information about illegal financial businesses." |
https://www.fsa.go.jp/ordinary/chuui/index.html
|
|
2020-02-10 17:00:00 |
TECH |
Engadget Japanese |
Sony announces the "SRS-LSR200," a wireless speaker with a built-in TV remote |
https://japanese.engadget.com/jp-2020-02-13-srs-lsr200.html
|
seniors |
2020-02-14 01:50:00 |
IT |
ITmedia all-articles list |
[ITmedia Executive] [Book review] Katsura Yamaguchi, "The Price of Aesthetics": depicting the endlessly captivating world of auctions |
https://mag.executive.itmedia.co.jp/executive/articles/2002/14/news070.html
|
itmedia |
2020-02-14 10:50:00 |
IT |
ITmedia all-articles list |
[ITmedia Mobile] Broadcom announces the "BCM4389," the world's first Wi-Fi 6E chip: fast, power-efficient, and suited to AR |
https://www.itmedia.co.jp/mobile/articles/2002/14/news068.html
|
bluetooth |
2020-02-14 10:34:00 |
TECH |
Techable |
Fire trucks go electric too! Los Angeles Fire Department to be first adopter in North America |
https://techable.jp/archives/117143
|
electric |
2020-02-14 01:30:27 |
python |
New posts tagged Python - Qiita |
A summary of how to use Python's glob and os modules |
https://qiita.com/be_AIer/items/a9714d327c985eb39bcf
|
|
2020-02-14 10:45:48 |
python |
New posts tagged Python - Qiita |
Decorators |
https://qiita.com/hikurochan/items/f5453ab368b334482c98
|
|
2020-02-14 10:22:25 |
python |
New posts tagged Python - Qiita |
How to pass arguments to a Python script launched from Blender on the command line |
https://qiita.com/TeppeiMIURA/items/9b6ddacd451d8e24bf72
|
Running run.bat (blender.exe --background --python generate.py -- --gender f --location i) prints an error. The Python code for extracting the arguments from sys.argv with argparse looks like the following. |
2020-02-14 10:18:39 |
Program |
New question list for all tags - teratail |
[Python] FastAPI auto-generated documentation |
https://teratail.com/questions/241265?rss=all
|
I'm implementing FastAPI in Python. Opening http://xxxx/docs displays the auto-generated documentation, but I want it to reflect the contents of the swagger.yaml / swagger.json I defined in Swagger. How can I get them reflected - for example, by placing a file named xxx.yaml in the root folder? |
2020-02-14 10:57:42 |
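On the question above: FastAPI regenerates its schema from the declared routes, so dropping a file in the project root is not picked up by itself; the documented hook is to replace the app.openapi callable. A minimal sketch using a JSON spec follows - the file name openapi.json and its contents are made up for illustration.

```python
import json
from pathlib import Path

def load_custom_openapi(path):
    """Load a hand-written OpenAPI document so it can replace FastAPI's
    auto-generated one via the app.openapi override hook."""
    return json.loads(Path(path).read_text(encoding="utf-8"))

# With FastAPI installed, wiring it up looks roughly like this
# (use yaml.safe_load instead of json.loads for a swagger.yaml file):
#
#   from fastapi import FastAPI
#   app = FastAPI()
#   spec = load_custom_openapi("openapi.json")
#   app.openapi = lambda: spec   # /docs now renders this spec
#
# Demo: write a tiny spec file and read it back
Path("openapi.json").write_text(json.dumps(
    {"openapi": "3.0.0", "info": {"title": "demo", "version": "1.0"}}
))
print(load_custom_openapi("openapi.json")["info"]["title"])  # → demo
```

The override works because FastAPI calls app.openapi() lazily the first time /openapi.json or /docs is requested.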
Program |
New question list for all tags - teratail |
Error with rails db:migrate |
https://teratail.com/questions/241264?rss=all
|
What I tried / the error that occurred: iTerm restarted while MySQL was running, and since then database operations and rails commands have stopped working. When I accessed localhost from the rails s running in another terminal, the following error appeared. |
2020-02-14 10:56:13 |
Program |
New question list for all tags - teratail |
Crash when trying to save JSON data to SQLite |
https://teratail.com/questions/241263?rss=all
|
I want to fetch JSON data over the internet and save it to SQLite. The goal is to download JSON whose image field contains the base64 string of an image, save it to SQLite, and later retrieve it and handle it as an image. |
2020-02-14 10:55:12 |
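For the JSON-to-SQLite question above, one robust pattern is to store the JSON (with its base64 image string) as TEXT and decode it only when reading back, which avoids the crashes that come from inserting a non-serializable object directly. A minimal sketch with an in-memory database follows; the payload shape, with a single "image" field, is assumed from the question.

```python
import base64
import json
import sqlite3

def store_and_fetch(json_text):
    """Store a JSON payload whose "image" field is a base64 string in
    SQLite as TEXT, then read it back and decode the raw image bytes."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.execute("INSERT INTO images (payload) VALUES (?)", (json_text,))
    row = conn.execute("SELECT payload FROM images").fetchone()
    data = json.loads(row[0])
    return base64.b64decode(data["image"])  # bytes, ready to write to a file

payload = json.dumps({"image": base64.b64encode(b"\x89PNG...").decode("ascii")})
print(store_and_fetch(payload))
```

Binding the string with a ? placeholder also sidesteps quoting problems; for very large images, storing the decoded bytes in a BLOB column works the same way.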
Program |
New question list for all tags - teratail |
I implemented JavaScript on my site. |
https://teratail.com/questions/241262?rss=all
|
auderoflashing |
2020-02-14 10:30:47 |
Program |
New question list for all tags - teratail |
The Structure tab in XAMPP is not displayed |
https://teratail.com/questions/241261?rss=all
|
phpmyadmin |
2020-02-14 10:21:28 |
Program |
New question list for all tags - teratail |
The CSS color property gets applied to the wrong place |
https://teratail.com/questions/241260?rss=all
|
What I want to achieve: I want to use the color property to turn the text that reads "More details" green, but the whole article element gets colored instead. How should I change it? |
2020-02-14 10:19:54 |
Program |
New question list for all tags - teratail |
Can't resolve a GPU memory error with tensorflow/keras |
https://teratail.com/questions/241259?rss=all
|
What I want to achieve: I'm setting up a Keras environment on Linux. Calling Keras's model.fit raises a memory error, so I added code like the following to limit memory usage, but the same error still occurs. |
2020-02-14 10:09:06 |
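For the Keras GPU memory error above, a commonly recommended first step (TF 2.x API) is enabling memory growth before the first model runs, instead of, or in addition to, a hard memory cap, so TensorFlow allocates GPU memory on demand rather than grabbing it all at once. A hedged sketch that degrades gracefully when TensorFlow is absent:

```python
def enable_memory_growth():
    """Enable on-demand GPU memory allocation (TF 2.x API).

    Must run before any op touches the GPU; otherwise TensorFlow raises
    because the devices have already been initialized.
    """
    try:
        import tensorflow as tf
    except ImportError:
        return "tensorflow not installed"
    gpus = tf.config.experimental.list_physical_devices("GPU")
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
    return f"memory growth enabled on {len(gpus)} GPU(s)"

print(enable_memory_growth())
```

If the same out-of-memory error persists with growth enabled, the model or batch size genuinely exceeds the card's memory, and reducing batch_size is the usual next step.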
Ruby |
New posts tagged Ruby - Qiita |
Ruby coding conventions: collections, part 1 |
https://qiita.com/raigakun/items/68f96b3704cb9ad6398b
|
Introduction: for the naming-conventions installment, please click here. Bad examples: arr = Array.new, hash = Hash.new. Good examples: arr = [], hash = {}. When creating an array of words (non-empty strings containing no spaces), use the %w literal. |
2020-02-14 10:18:56 |
Linux |
New posts tagged Ubuntu - Qiita |
How to enable HTTP/2 on Ubuntu 18.04.4 LTS with Apache/2.4.29 and PHP 7.2.24 |
https://qiita.com/Nelson605/items/1c37e1f38ab4e17f6674
|
Location: /etc/apache2/mods-enabled/mpm_prefork.load. It is therefore not enough to just configure HTTP/2; the prefork MPM module also has to be swapped for another one. Switch PHP to FPM: sudo apt install php7.2-fpm; sudo a2enmod proxy_fcgi setenvif; sudo a2enconf php7.2-fpm; sudo a2dismod php7.2; sudo service apache2 restart. Switch from the prefork module to the event module: sudo a2dismod mpm_prefork; sudo a2enmod mpm_event; sudo service apache2 restart; sudo service php7.2-fpm restart. HTTP/2 configuration: add the following inside the <VirtualHost> block of /etc/apache2/sites-available/000-default-le-ssl.conf. |
2020-02-14 10:58:39 |
AWS |
New posts tagged AWS - Qiita |
Saving AWS GuardDuty logs to an S3 bucket |
https://qiita.com/shiranuikai/items/4956aa4b5812d8657b74
|
A memo. Enable AWS GuardDuty. To save logs to an S3 bucket, open Settings -> Findings export options in the GuardDuty console and create a new S3 bucket, but saving fails because there is no KMS key. Create a KMS key from the KMS console, return to the GuardDuty screen, select the KMS key alias again, and save - it fails again. Back in the KMS console, select the target alias under customer-managed keys and open it with the key policy editor (without this step the error never goes away). Add a statement like the following in the appropriate place and save: Sid: AllowGuardDutytousethekey, Effect: Allow, Principal: Service: guardduty.amazonaws.com, Action: kms:GenerateDataKey, Resource: ... . Creating the S3 bucket, selecting the key, and saving from the GuardDuty screen then succeeds. Findings were being collected every few hours; checking the next day, they had been saved as gz files in year/month/day folders. |
2020-02-14 10:51:23 |
GCP |
New posts tagged gcp - Qiita |
GCP updates (2/14-2/20 2020) |
https://qiita.com/kenzkenz/items/406ea1c213b309f12e10
|
Disclaimer: these update notes reflect only my personal understanding. I try to pick up as much of the release content as possible, but if there are release notes that seem to be missing, please let me know and I will add them. The full list is kept up to date here. |
2020-02-14 10:52:20 |
Tech blog |
Developers.IO |
[Update] The number of outputs supported per AWS Elemental MediaConnect flow has increased from 20 to 50 |
https://dev.classmethod.jp/cloud/aws/update-aws-elemental-mediaconnect-flownum/
|
mediaconn |
2020-02-14 01:33:37 |
Overseas TECH |
DEV Community |
Loves Me, Loves Me Not: Classify Texts with TensorFlow and Twilio |
https://dev.to/twilio/loves-me-loves-me-not-classify-texts-with-tensorflow-and-twilio-23c3
|
Valentine's Day is coming up, and both love and machine learning are in the air. Some would use flower petals to determine if someone loves them or not, but developers might use a tool like TensorFlow. This post will go over how to perform binary text classification with neural networks using Twilio and TensorFlow in Python.

Prerequisites: a Twilio account (sign up for a free one here); a Twilio phone number with SMS capabilities (configure one here); set up your Python and Flask developer environment, and make sure you have Python downloaded as well as ngrok.

Setup: activate a virtual environment in Python and download this requirements.txt file. Be sure to use Python 3.x for TensorFlow. On the command line, run pip install -r requirements.txt to import all the necessary libraries, and then, to import nltk, make a new directory with mkdir nltk_data, cd into it, and run python -m nltk.downloader. You should see a window like this: select all packages as shown in the screenshot below. Your Flask app will need to be visible from the web so Twilio can send requests to it; ngrok simplifies this. With ngrok installed, run ngrok http with your Flask port in the directory your code is in. You should see the screen above. Grab that ngrok URL to configure your Twilio number.

Prepare Training Data: make a new file called data.json to contain two arrays of phrases corresponding to labels, either "loves me" or "loves me not". Feel free to modify phrases in the arrays or add your own; the more training data the better (this is not close to being enough, but it's a fun start):

    {
      "loves me": ["do you want some food", "you're so nice", "i got you some food", "I like your hair", "You looked nice today", "Let's dance", "I spent time on this for you", "i got this for you", "heyyyyyyy", "i got you pizza"],
      "loves me not": ["I didn't have the time", "Can you get your own food", "You'll have to get your own food", "Do it yourself", "i can't", "next time", "i'm sorry", "you up", "hey wyd", "k", "idk man", "cool"]
    }

Make a Python file called main.py. At the top, import the required libraries, then make a function open_file to save the data from data.json as a variable data:

    import re
    from nltk.tokenize import word_tokenize
    from nltk.stem import WordNetLemmatizer
    import numpy as np
    import tflearn
    import tensorflow as tf
    import random
    import json
    from twilio.twiml.messaging_response import MessagingResponse
    from flask import Flask, request

    def open_file(file):
        with open(file, "r") as f:
            data = json.load(f)
        return data

    data = open_file("data.json")
    print(data)

Read Training Data: this post will use a lemmatizer to get to the base of a word, i.e. turning "going" into "go". A stemmer, which also reduces words to their word stem, could be used for this task, but would be unable to identify that "good" is the lemma of "better". Though lemmas take more time to use, they tend to be more efficient. You can experiment with both stemmers and lemmatizers when working with natural language processing (NLP). Right underneath the data variable declaration, initialize the lemmatizer and make this function to stem each word:

    lemma = WordNetLemmatizer()
    def tokenize_and_stem(text):
        return [lemma.lemmatize(word.lower()) for word in text]

    binary_categories = list(data.keys())
    training_words = []
    json_data = []

This next function will read the training data, remove punctuation, handle contractions, and extract the words in each sentence, appending them to a word list. Next, get the possible labels ("loves me" and "loves me not") that the model will train for, and initialize an empty list json_data to hold tuples of words from each sentence and also the label name. The training_words list will contain all the unique stemmed words from the training data JSON, and binary_categories contains the possible categories they could classify as:

    def read_training_data(data):
        for label in data.keys():
            for text in data[label]:
                for word in text.split():
                    if word.lower() in contractions:
                        text = text.replace(word, contractions[word.lower()])
                text = re.sub("[^ a-zA-Z]", "", text)
                training_words.extend(word_tokenize(text))
                json_data.append((word_tokenize(text), label))
        return json_data

The json_data returned is a list of words from each sentence and either "loves me" or "loves me not"; for example, one element of that list is (["do", "you", "want", "some", "food"], "loves me"). This list does not cover every possible contraction, but you get the idea:

    contractions = {"aren't": "are not", "can't": "cannot", "could've": "could have", "couldn't": "could not", "didn't": "did not", "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not", "how'd": "how did", "how's": "how is", "i'd": "I had", "i'll": "I will", "i'm": "I am", "i've": "I have", "isn't": "is not", "let's": "let us", "should've": "should have", "shouldn't": "should not", "that'd": "that had", "that's": "that is", "there's": "there is", "wasn't": "was not", "we'd": "we would", "we'll": "we will", "we're": "we are", "we've": "we have", "what'll": "what will", "what's": "what is", "when's": "when is", "where'd": "where did", "where's": "where is", "won't": "will not", "would've": "would have", "wouldn't": "would not", "you'd": "you had", "you'll": "you will", "you're": "you are"}

Then stem each word to remove duplicates and call the read_training_data function:

    training_words = tokenize_and_stem(training_words)
    print(read_training_data(data))
    json_data = read_training_data(data)

For TensorFlow to understand this data, the strings must be converted into numbers. This can be done with the bag-of-words NLP model, which keeps a count of the total number of occurrences of the most commonly used words. For example, the sentence "Never gonna give you up, never gonna let you down" could be represented by the counts {never: 2, gonna: 2, give: 1, you: 2, up: 1, let: 1, down: 1}. For the "loves me" and "loves me not" labels, a bag of words is initiated as a list of tokenized words, called vector here. We loop through the words in the phrase, stemming them and comparing with each word in the vocabulary. If the sentence has a word in our training data or vocabulary, a 1 is appended to the vector, signaling which label the word belongs to; if not, a 0 is appended. At the end, our training set has a bag-of-words model and the output row corresponding to the label the bag belongs to:

    training = []
    for item in json_data:
        bag_vector = []
        token_words = item[0]
        token_words = [lemma.lemmatize(word.lower()) for word in token_words]
        for word in training_words:
            if word in token_words:
                bag_vector.append(1)
            else:
                bag_vector.append(0)
        output_row = [0] * len(binary_categories)
        output_row[binary_categories.index(item[1])] = 1
        training.append([bag_vector, output_row])

Convert training to a numpy array so TensorFlow can process it as well, and split it into two variables: data has the bag of words and labels has the label:

    training = np.array(training)
    data = list(training[:, 0])
    labels = list(training[:, 1])

Now reset the underlying graph data, clearing defined variables and operations from the previous cell each time the model is run. Next, build a neural network with three layers. The input_data input layer is for inputting or feeding data to a network, and the input to the network has size len(data[0]), the length of our encoded bag of words. Then make two fully connected intermediate layers with hidden units, or neurons. While some functions need more than one layer to run, more than three layers probably won't make a difference, so two layers is enough and shouldn't be too computationally expensive. We use the softmax activation function in this case because the labels are exclusive. Lastly, we make the final net from the estimator layer, regression. At a high level, regression (linear or logistic) helps predict the outcome of an event based on the data. Neural networks have multiple layers to better learn more complicated abstractions and relationships from the input:

    tf.reset_default_graph()
    net = tflearn.input_data(shape=[None, len(data[0])])
    net = tflearn.fully_connected(net, <hidden units>)
    net = tflearn.fully_connected(net, <hidden units>)
    net = tflearn.fully_connected(net, len(labels[0]), activation="softmax")
    net = tflearn.regression(net)

A deep neural network (DNN) automatically performs neural network classifier tasks like training the model and prediction based on input. Calling the fit method begins training and applies the gradient descent algorithm, a common first-order optimization deep learning algorithm. n_epoch is the number of times the network will see all the data, and batch_size is the size the data is sliced into for the model to train on:

    model = tflearn.DNN(net)
    model.fit(data, labels, n_epoch=<epochs>, batch_size=<batch size>, show_metric=True)

Similar to how the data for the bag-of-words model was processed, this data needs to be converted to a numerical form that can be passed to TensorFlow:

    def clean_for_tf(text):
        input_words = tokenize_and_stem(word_tokenize(text))
        vector = [0] * len(training_words)
        for input_word in input_words:
            for ind, word in enumerate(training_words):
                if word == input_word:
                    vector[ind] = 1
        return np.array(vector)

To test this without text messages, you could add:

    tensor = model.predict([clean_for_tf("INSERT TEXT HERE")])
    print(binary_categories[np.argmax(tensor)])

This calls the predict method on the model, getting the position of the largest value, which represents the prediction. We will test this with text messages by building a Flask application.

Create a Flask App: add the following code to make a Flask app, get the inbound text message, create a tensor, and call the model:

    app = Flask(__name__)

    @app.route("/sms", methods=["POST"])
    def sms():
        resp = MessagingResponse()
        inb_msg = request.values.get("Body").lower().strip()
        tensor = model.predict([clean_for_tf(inb_msg)])
        resp.message(f"The message {inb_msg!r} corresponds to {binary_categories[np.argmax(tensor)]!r}")
        return str(resp)

Open a new terminal tab, separate from the one running ngrok. In the folder housing your code, run main.py and text your Twilio number a phrase like "get someone else to do it", and you should see something like this. The complete code and requirements.txt can be found on GitHub here.

What's Next: what will you classify next? You could use TensorFlow's Universal Sentence Encoder to perform similar text classification in JavaScript, classify phone calls or emails, use a different activation function like sigmoid if you have categories that are not mutually exclusive, and more. Let me know what you're building, online or in the comments. GitHub: elizabethsiegle; Twitter: lizziepika; email: lsiegle@twilio.com |
2020-02-14 01:38:42 |
Apple |
AppleInsider - Frontpage News |
YouTube TV to cancel subscriptions purchased through Apple in-app payments in March |
https://appleinsider.com/articles/20/02/13/youtube-tv-to-cancel-subscriptions-purchased-through-apple-in-app-payments-in-march
|
Google's YouTube unit this week informed YouTube TV customers of a change in policy that will see a discontinuation of subscriptions purchased through Apple's in-app payments mechanism, forcing users to subscribe elsewhere or cancel the service. |
2020-02-14 01:59:22 |
Overseas TECH |
WIRED |
The US Hits Huawei With New Charges of Trade Secret Theft |
https://www.wired.com/story/us-hits-huawei-new-charges-trade-secret-theft
|
chinese |
2020-02-14 01:07:54 |
News |
@Nikkei digital edition |
@nikkei ... |
https://twitter.com/nikkei/statuses/1228131054038458368
|
debt waiver |
2020-02-14 02:37:15 |
News |
@Nikkei digital edition |
@nikkei JDI to postpone its October-December earnings again amid probe into improper accounting ... |
https://twitter.com/nikkei/statuses/1228130487975079937
|
nikkei |
2020-02-14 02:35:00 |
News |
@Nikkei digital edition |
@nikkei What are corporate pensions? They supplement public pensions and are growing in importance (today's key term) ... |
https://twitter.com/nikkei/statuses/1228125329912451072
|
nikkei |
2020-02-14 02:14:30 |
News |
@Nikkei digital edition |
@nikkei ... |
https://twitter.com/nikkei/statuses/1228124032152866821
|
nikkei |
2020-02-14 02:09:21 |
Business |
Diamond Online - new articles |
US inflation could accelerate on the impact of the new virus - from WSJ |
https://diamond.jp/articles/-/228881
|
new-type |
2020-02-14 10:04:00 |
TECH |
Reuters: Technology |
Google completes acquisition of big-data analytics firm Looker; UK regulator approves |
http://feeds.reuters.com/~r/reuters/JPTechnologyNews/~3/ywSDbjVoE_w/looker-m-a-alphabet-idJPKBN20805B
|
unlisted company |
2020-02-14 10:24:03 |
Hokkaido |
Hokkaido Shimbun |
METI holds meeting on cyberattack countermeasures, urges airtight power supply during the Olympics |
https://www.hokkaido-np.co.jp/article/393069/
|
Tokyo Olympics |
2020-02-14 10:31:00 |
Hokkaido |
Hokkaido Shimbun |
Stage version of "Harry Potter" coming to Tokyo in 2022, with an open-ended run at a dedicated theater in Akasaka |
https://www.hokkaido-np.co.jp/article/393056/
|
countries around the world |
2020-02-14 10:22:04 |
Hokkaido |
Hokkaido Shimbun |
Infections in mainland China top 64,000; novel pneumonia deaths reach 1,483 |
https://www.hokkaido-np.co.jp/article/393062/
|
mainland China |
2020-02-14 10:20:07 |
Hokkaido |
Hokkaido Shimbun |
Hokkaido Board of Education announces application-change status for public high schools |
https://www.hokkaido-np.co.jp/article/393055/
|
Hokkaido Board of Education |
2020-02-14 10:10:00 |
Hokkaido |
Hokkaido Shimbun |
One woman dead in apartment-building fire in Sapporo's Minami Ward |
https://www.hokkaido-np.co.jp/article/393058/
|
Makomanai Midorimachi, Minami Ward, Sapporo |
2020-02-14 10:08:00 |
Hokkaido |
Hokkaido Shimbun |
US and China to lower punitive tariff rates as phase-one deal takes effect; trade friction enters a truce |
https://www.hokkaido-np.co.jp/article/393029/
|
reduction |
2020-02-14 10:08:07 |
Marketing |
Web担当者Forum |
[User post] Measures to increase site traffic: six cost-effective picks and three missteps to avoid |
http://feedproxy.google.com/~r/web-tan/~3/PQQjrngqZR0/35321
|
|
2020-02-14 10:39:47 |
Marketing |
Web担当者Forum |
"Zero-party data building support service" launched to support data-protection and data-utilization work |
http://feedproxy.google.com/~r/web-tan/~3/55mMyBiEDVY/35318
|
Zero-party data refers to data on interests and preferences that consumers voluntarily provide to companies. The service draws on the knowledge and know-how held by BICP Data and Digital Intelligence. |
2020-02-14 10:30:00 |