python |
New posts tagged Python - Qiita |
HHKB Programming Contest 2022 Winter (ABC282), Problems A-D: an extremely careful and easy-to-understand explanation in Python, aimed at gray-to-brown coders #AtCoder |
https://qiita.com/sano192/items/c2a0a81ed0163f8c3e28
|
atcoder |
2023-01-09 19:54:48 |
python |
New posts tagged Python - Qiita |
I fine-tuned deberta-v2, currently the largest-scale Japanese language model, for QA tasks and released it |
https://qiita.com/Mizuiro__sakura/items/b7252f06d21ad531556a
|
questiona |
2023-01-09 19:48:52 |
python |
New posts tagged Python - Qiita |
Small utility: getting an IP address from a hostname (checking the behavior of getaddrinfo) |
https://qiita.com/ikiuo/items/44e3cddc7bf8247b8793
|
#!/usr/bin/env python; import socket |
2023-01-09 19:30:34 |
Azure |
New posts tagged Azure - Qiita |
[Azure] I tried out CosmosDB for PostgreSQL |
https://qiita.com/mushipan66/items/2fbd7a170d3068c9daa8
|
azure |
2023-01-09 19:56:11 |
Overseas TECH |
DEV Community |
Understanding MongoDB Aggregation Pipeline |
https://dev.to/qbentil/understanding-mongodb-aggregation-pipeline-3964
|
Understanding MongoDB Aggregation Pipeline. In most of the REST APIs I have built, MongoDB has been my go-to NoSQL database; MongoDB is arguably the most popular NoSQL database in the world. In most of my use cases I typically use the find command for a wide range of queries. However, I realized the need to learn more about MongoDB aggregation in my most recent projects, where I needed to write complex queries. In this article I'll explain the key stages in MongoDB's aggregation pipeline and how to apply them.
What is the Aggregation Pipeline? Aggregation is the process of passing a large collection of documents through various stages in order to process them; the sequence of stages is what is termed a pipeline. The aggregation pipeline in MongoDB has several stages, each of which modifies the documents. To put it another way, the aggregation pipeline is a multi-stage pipeline: each stage takes documents as input and produces a resultant set of documents, which the next stage (if any) takes as input, and so on until the last stage. (The original post includes an image illustrating how the MongoDB aggregation pipeline works.)
Stages in aggregation pipelines. There are several stages in MongoDB aggregation pipelines. In this section I will explain a few of the stages that are commonly used.
$match: filters the documents to pass only the documents that match the specified condition(s) to the next pipeline stage. This can reduce the number of documents that are given as input to the next stage.
Syntax: db.users.aggregate([ { $match: { email: "example@gmail.com" } } ])
$group: used to group documents from collections based on a group key.
Syntax: db.sales.aggregate([ { $group: { _id: null, count: { $count: {} } } } ])
$sort: used to rearrange the resulting documents, ascending or descending.
Syntax: db.contestants.aggregate([ { $sort: { votes: -1 } } ])
$project: used to select specific fields from the resulting collection to be sent to the client.
Syntax: db.events.aggregate([ { $project: { _id: 1, name: 1, description: 1, banner: 1, opening_date: 1, closing_date: 1 } } ])
Note: 1 assigned to a field means inclusive; it will include that field in the output document. 0 is the vice versa; it will exclude the field from the document. It is important to note that non-zero values are also considered inclusive.
$lookup: performs a left outer join to a collection in the same database to filter in documents from the joined collection for processing.
Syntax: db.orders.aggregate([ { $lookup: { from: "inventory", localField: "item", foreignField: "sku", as: "inventory_docs" } } ])
$unwind: deconstructs an array field from the input documents to output a document for each element. Each output document is the input document with the value of the array field replaced by the element.
Syntax: db.events.aggregate([ { $unwind: "$organizer" } ])
Each of the stages has its own prototype and way to implement it. You can explore them more using the Documatic VSCode extension. This extension brings Documatic to VSCode: quickly search your large codebases using simple queries ("what does it do?", "what dependencies does it have?") and more. Documatic search uses AI to link relations between your query and snippets of code, so you don't have to know the exact keywords you're looking for.
Sample use case. In this use case there are two models (collections): Events and their Organizers. Each event document has its organizer's id as a foreign key. I used the aggregation pipeline to fetch events together with their organizer data instead of just the organizer id. This snippet makes use of a combination of the stages discussed above:
const events = await Event.aggregate([
  { $lookup: { from: "organizers", localField: "organizer", foreignField: "_id", as: "organizer" } },
  { $unwind: "$organizer" },
  { $project: {
      name: 1, description: 1, banner: 1, vote_price: 1, opening_date: 1, closing_date: 1,
      organizer_name: "$organizer.name", email: "$organizer.email", phone: "$organizer.phone",
      company: "$organizer.company", address: "$organizer.address"
  } }
]);
res.status(200).json({ success: true, data: events, message: "Events fetched successfully" });
(The original post shows the resulting output.)
Conclusion. I have discussed some of the stages of the MongoDB aggregation pipeline, with examples of how to use them. As you use MongoDB for more complex projects, aggregation will become an inevitable concept for the advanced querying that forms an integral part of your API. Happy hacking! Bentil here. Have you used aggregation pipelines in your projects before? What was the experience? I will be glad to hear from you in the comment section; this will help others who are yet to use aggregation in their projects. If you find this content helpful, please like, comment and share. |
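For readers working in Python rather than Node.js, the same kind of $lookup / $unwind / $project pipeline can be run through pymongo. The sketch below is a minimal, hedged example: the connection URI, the database name "ticketing", and the collection names "events" and "organizers" are illustrative assumptions rather than values from the article.
from pymongo import MongoClient

# Assumed local MongoDB instance; replace with your own connection string.
client = MongoClient("mongodb://localhost:27017")
db = client["ticketing"]  # assumed database name

pipeline = [
    # Left outer join: attach the matching organizer document to each event.
    {"$lookup": {
        "from": "organizers",
        "localField": "organizer",
        "foreignField": "_id",
        "as": "organizer",
    }},
    # $lookup produces an array; unwind it into one embedded document per event.
    {"$unwind": "$organizer"},
    # Keep only the fields the client needs (1 = include).
    {"$project": {
        "name": 1,
        "opening_date": 1,
        "closing_date": 1,
        "organizer_name": "$organizer.name",
    }},
]

events = list(db.events.aggregate(pipeline))
print(events)
Running this against a populated database would print each event with its organizer's name embedded, mirroring the Mongoose snippet above. |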
2023-01-09 10:19:31 |
Overseas TECH |
DEV Community |
Ddosify Latency Testing GitHub Action |
https://dev.to/fatihbaltaci/ddosify-latency-testing-github-action-4jpf
|
Ddosify Latency Testing GitHub Action. In this article we will demonstrate how to use the Ddosify GitHub Action to periodically test the latency of our target endpoints from cities worldwide. By configuring the action to run on a schedule, we can ensure that the performance of our locations is consistently monitored. If the latency of any of our locations exceeds the expected value, we will receive an email notification alerting us to the issue. By using this action we can proactively identify and address potential performance issues before they impact our users.
Check out the blog post: "Ddosify Latency Testing GitHub Action" on the Ddosify Blog (ddosify.com). Check out the Ddosify Latency Action: ddosify/ddosify-latency-action.
Ddosify Latency Testing Action. Ddosify's latency observation action allows you to measure the latency of endpoints from cities around the world. With this action you can track the performance of your endpoints over time and ensure that they are meeting your desired latency targets. Whether you are running a web application, a mobile backend or any other type of service, this action can help you monitor latency for the best user experience. It is easy to use: just add it to your workflow and configure the request details. You can specify the Target URL, Locations and "Fail If" scenarios to fail the pipeline based on latencies. This action uses the Ddosify Latency Testing API under the hood. It uses a Ddosify Cloud API Key, which you can store in GitHub Secrets.
Inputs (input / required / default / example / description): api key - required: true - Ddosify Cloud API Key. Instructions are… View on GitHub |
2023-01-09 10:17:44 |
Overseas TECH |
DEV Community |
The theory behind Image Captioning |
https://dev.to/meetkern/the-theory-behind-image-captioning-41j2
|
The theory behind Image Captioning.
Introduction. One of the most challenging tasks in artificial intelligence is automatically describing the content of an image. This requires knowledge of both computer vision (using artificial neural networks) and natural language processing. It can have great impact in many different domains, be it making it easier for visually impaired people to understand the contents of images on the web, or tedious tasks like data labelling where the data is in the form of images. In this article we will walk through the basic concepts needed in order to create your own image captioning model.
Textual description of an image. In principle, converting an image into text is a significantly hard task. The description should contain not only the objects highlighted in the image but also the context of the image. On top of that, the output has to be expressed in a natural language like English, German, etc., so a language model is also needed to complete the picture. In the example image (not included in this feed), we see people on a vacation hiking in the foothills of a mountain range. Let us say that we want to generate text describing this image. The image is used as input I and is fed to the model, called the Show and Tell model (developed by Google), which is trained to maximise the likelihood p(S | I) of producing a sequence of words S = {S₁, S₂, …, Sₙ}, where each word Sₖ comes from a given dictionary, that describes the image accurately. In order to process the input data we use Convolutional Neural Networks (CNNs) as encoders, and the output of the CNN is fed to a type of recurrent neural network called a Long Short-Term Memory (LSTM) network, which is responsible for generating the natural-language output. Before describing the model, let us briefly look into CNNs and LSTMs.
Convolutional neural networks. A CNN is a type of neural network that is used mainly for image classification and recognition. As the name suggests, it uses a mathematical operation called convolution to process the data. A CNN consists of an input layer, single or multiple hidden layers, and an output layer. The middle layers are called hidden because their inputs and outputs are masked by the activation function. Convolutions operate over 3D tensors called feature maps, with two spatial axes and a channel axis. The hidden (convolutional) layers are made up of multiple convolutions that scan the input data and apply filters to extract output features. Each output feature is also a 3D tensor, which is passed through a non-linear activation function in order to induce non-linearity. The output of the convolutional layers is passed through a pooling layer, which aggressively downsamples the feature maps and reduces computational complexity. Eventually, the output of the pooling layer is passed through a fully connected (dense) layer, which computes the final prediction. Below is an example of how to instantiate a convolutional neural network in Python:
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras import optimizers

# Layer sizes and input shape are representative values; the exact figures did not survive the feed.
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),
              metrics=['acc'])
Here we have assumed that the user already has the pre-processed input data. Let us assume that this data is split into training and test sets. In the next step we can fit the model and save it:
# training_data and validation_data are assumed to be pre-built data generators.
fitting = model.fit(training_data,
                    steps_per_epoch=100,
                    epochs=30,
                    validation_data=validation_data,
                    validation_steps=50)
model.save('output_data.h5')
Long short-term memory. LSTM networks are a type of recurrent neural network that is well suited for modelling long-term dependencies in data. They are called "long short-term" because they can remember information for long periods of time, but they can also forget information that is no longer relevant. RNNs were a consequence of the failure of feedforward neural networks. Problems with feedforward neural networks: they are not designed for sequences and time series, and they do not model memory, in the sense that they do not retain information from previous data points when processing new data. RNNs solve this issue conveniently. The recursive formula, defined as S_t = F_W(S_{t-1}, X_t), states that the new state at time t is a function of the old state at time t-1 and the input at time t. This makes RNNs different from other neural nets (NNs), since NNs learn from backpropagation and RNNs learn from backpropagation through time. The output from this network is then used to calculate the loss.
Consider a recurrent neural network that is run for a number of time steps (the original post illustrates this with a figure); our aim is to compute the loss. Suppose that at each step the gradient is a small value less than one. As we go back many time steps, the weight update Δw becomes a product of many such small factors, so Δw ≈ 0, which is negligible. Thus the neural network won't learn at all. This is known as the vanishing gradient problem. In order to solve this, we need to add some extra interactions to the RNN. This gives rise to the LSTM.
The LSTM, like any other NN, consists of three main components: an input layer, single or multiple hidden layers, and an output layer. What makes it different are the operations happening in the hidden layers. The hidden layer consists of three gates (an input gate, a forget gate and an output gate) and one cell state. The memory cells are responsible for storing and manipulating information over time. Each memory cell is connected to a gate which decides what information stays and what information is forgotten. Gosh, these machines are getting smart! To describe the functionality of the gates mathematically, we can look at this expression: g_t = σ(W_g S_{t-1} + W_g X_t), where g represents either the input (i), forget (f) or output (o) gate, W denotes the respective weights for the gates, S denotes the state at the previous time step, and X is the input. (The original post includes a figure showing the functionality of the LSTM network.) The important part is that the network can decide what information to discard and what to keep. This resolves the vanishing gradient problem that we face in a normal RNN. Here is a simple implementation of an LSTM network in Keras:
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras import models

# max_features (vocabulary size) and the layer widths are representative values.
max_features = 10000
model = models.Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])

# input_train and y_train are assumed to be a pre-tokenised training set.
fitting = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
Back to the textual description. Now that we have gone through the concepts of the tools required, it should be clear that by using CNNs to process the image input and an LSTM for the natural-language output, we can build rather accurate models to generate image captions. For that, use a CNN as an encoder by pre-training it first for image classification and using the last hidden layer as an input to the RNN decoder that generates the sentences. The model is trained to maximise the likelihood of generating the correct description given the image, which is given as
θ* = arg max_θ Σ_{(I,S)} log p(S | I; θ),
where θ are the parameters of the model, I is the input image and S is the correct description. The loss is described as the negative log-likelihood of the correct word at each step:
L(I, S) = −Σ_{t=1}^{N} log p_t(S_t).
Once the loss is minimised and the likelihood maximised, we keep the epoch where the validation loss is at its minimum. And tada! We have our model ready. All you would have to do is input the images, and the expected output should be a sentence describing each image. This article is more about in-depth knowledge of the tools used to build this use case; once you are proficient enough, you can create your own use case and build your own models for it.
Citations: "A Neural Image Caption Generator"; "What is LSTM" (IBM Technology); "Deep Learning using Python". |
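To make the CNN-encoder plus LSTM-decoder wiring described above concrete, here is a minimal Keras sketch of a caption generator, assuming image features have already been extracted with a pre-trained CNN. The feature size (2048), vocabulary size, maximum caption length and layer widths are illustrative assumptions, not values from the article.
from tensorflow.keras import layers, models

# Illustrative assumptions: 2048-d CNN features from a pre-trained encoder,
# a 5000-word vocabulary and captions padded to 20 tokens.
feature_dim, vocab_size, max_len = 2048, 5000, 20

# Image branch: project the pre-extracted CNN features into the decoder's space.
image_input = layers.Input(shape=(feature_dim,))
image_embedding = layers.Dense(256, activation='relu')(image_input)

# Text branch: embed the partial caption and run it through an LSTM decoder.
caption_input = layers.Input(shape=(max_len,))
caption_embedding = layers.Embedding(vocab_size, 256, mask_zero=True)(caption_input)
caption_states = layers.LSTM(256)(caption_embedding)

# Merge both branches and predict the next word of the caption.
merged = layers.add([image_embedding, caption_states])
hidden = layers.Dense(256, activation='relu')(merged)
output = layers.Dense(vocab_size, activation='softmax')(hidden)

model = models.Model(inputs=[image_input, caption_input], outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
Training would then feed (image features, partial caption) pairs with the next word as the target, which matches the per-word negative log-likelihood objective described above. |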
2023-01-09 10:17:38 |
Overseas TECH |
Engadget |
Anker charging accessories are at all-time lows today |
https://www.engadget.com/anker-charging-accessories-are-at-all-time-lows-today-101525837.html?src=rss
|
Anker charging accessories are at all-time lows today. Anker is notable for its premium charging products, but you also pay a premium for the extra quality. You can now grab a number of Anker chargers and other smartphone accessories at Amazon at steep discounts. Some of the key items include the Anker Nano II charger, the Power Bank and a two-pack of USB-C to Lightning cables. Shop Anker charging accessories at Amazon. If you need portable power for up to four devices, the Power Bank delivers USB-C high-speed charging for MacBooks, iPhones or Android smartphones, tablets, smartwatches and more. You can connect devices via the two USB-C and two USB-A ports to ensure they stay charged on the go; it's on sale right now at a discount off the full price. Meanwhile, Anker's Nano II charger packs an impressive amount of charging power into a small size. That lets you charge a single device like a MacBook Pro/Air quickly, in a package considerably smaller than the stock Apple charger, and you can also charge up to two USB-C devices at a time. Finally, Anker's PowerLine II USB-C to Lightning cable is available in a discounted two-pack, which gets you a pair of long cables compatible with USB-C chargers. Plenty of other Anker devices are on sale too, including the PowerExpand USB charger, the PowerPort Strip PD Mini and more. Follow EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice. |
2023-01-09 10:15:25 |
Overseas Science |
BBC News - Science & Environment |
Single-use plastic cutlery and plates to be banned in England |
https://www.bbc.co.uk/news/business-64205460?at_medium=RSS&at_campaign=KARANGA
|
england |
2023-01-09 10:17:24 |
Overseas News |
Japan Times latest articles |
Tobizaru takes down ozeki Takakeisho on second day of New Year Tournament |
https://www.japantimes.co.jp/sports/2023/01/09/sumo/basho-reports/tobizaru-upset-takakeisho/
|
straight |
2023-01-09 19:06:32 |
News |
BBC News - Home |
Jermaine Cools: Teenager pleads guilty to murder of boy, 14 |
https://www.bbc.co.uk/news/uk-england-london-64209347?at_medium=RSS&at_campaign=KARANGA
|
london |
2023-01-09 10:51:04 |
News |
BBC News - Home |
Single-use plastic cutlery and plates to be banned in England |
https://www.bbc.co.uk/news/business-64205460?at_medium=RSS&at_campaign=KARANGA
|
england |
2023-01-09 10:17:24 |
News |
BBC News - Home |
Owen Farrell: England captain could miss Six Nations after dangerous tackle citing |
https://www.bbc.co.uk/sport/rugby-union/64210151?at_medium=RSS&at_campaign=KARANGA
|
Owen Farrell: England captain could miss Six Nations after dangerous tackle citing. England captain Owen Farrell could miss the start of the Six Nations after being cited for a dangerous tackle playing for Saracens. |
2023-01-09 10:28:00 |
News |
BBC News - Home |
Brazil Congress storming: How did we get here? |
https://www.bbc.co.uk/news/world-latin-america-64206220?at_medium=RSS&at_campaign=KARANGA
|
bolsonaro |
2023-01-09 10:26:30 |
News |
BBC News - Home |
Bolsonaro supporters storm Brazil government buildings - Key moments |
https://www.bbc.co.uk/news/world-latin-america-64209957?at_medium=RSS&at_campaign=KARANGA
|
brasilia |
2023-01-09 10:05:21 |
Hokkaido |
Hokkaido Shimbun |
Nippon-Ham manager Shinjo: "I'll become a demon in order to win"; plans to settle the opening-day roster by the exhibition-game period |
https://www.hokkaido-np.co.jp/article/785066/
|
Kamagaya, Chiba Prefecture |
2023-01-09 19:38:00 |
Hokkaido |
Hokkaido Shimbun |
Osho title holder Fujii off to a strong start in his first defense, beating Habu 9-dan in Game 1 of the best-of-seven series |
https://www.hokkaido-np.co.jp/article/785035/
|
Sota Fujii |
2023-01-09 19:23:11 |
Hokkaido |
Hokkaido Shimbun |
92,662 new COVID-19 infections reported nationwide, 258 deaths |
https://www.hokkaido-np.co.jp/article/785065/
|
Novel coronavirus |
2023-01-09 19:35:00 |
Hokkaido |
Hokkaido Shimbun |
Nippon-Ham's nine rookies take their first steps as pros as joint voluntary training begins |
https://www.hokkaido-np.co.jp/article/785058/
|
Hokkaido Nippon-Ham |
2023-01-09 19:34:04 |
Hokkaido |
Hokkaido Shimbun |
16 COVID-19 infections in the Sorachi region |
https://www.hokkaido-np.co.jp/article/785064/
|
Novel coronavirus |
2023-01-09 19:32:00 |
Hokkaido |
Hokkaido Shimbun |
Kita-Naganuma ski area refreshes its staff wear with distinctive "Bogen" gear, thanks to the brand having a shop in town |
https://www.hokkaido-np.co.jp/article/785063/
|
Naganuma |
2023-01-09 19:31:00 |
Hokkaido |
Hokkaido Shimbun |
Hopes for another bountiful catch this year as Esashi's wholesale market holds its first sale |
https://www.hokkaido-np.co.jp/article/785051/
|
Wholesale market |
2023-01-09 19:32:15 |
Hokkaido |
Hokkaido Shimbun |
Embetsu Eagles win fourth straight title at the Hokkaido elementary school select volleyball championship |
https://www.hokkaido-np.co.jp/article/785062/
|
Select |
2023-01-09 19:31:00 |
Hokkaido |
Hokkaido Shimbun |
<Living Together> NPO "Inkururabo" in Abira, Iburi region: cafeteria opening in spring expands opportunities to work |
https://www.hokkaido-np.co.jp/article/785061/
|
Iburi region |
2023-01-09 19:28:00 |
Hokkaido |
Hokkaido Shimbun |
Takakeisho already suffers his first loss; Hoshoryu wins second straight, handing Wakatakakage a defeat |
https://www.hokkaido-np.co.jp/article/785057/
|
Ryogoku Kokugikan |
2023-01-09 19:22:00 |
Hokkaido |
Hokkaido Shimbun |
Journalist B. Kalb, who accompanied President Nixon on his historic visit to China, dies |
https://www.hokkaido-np.co.jp/article/785056/
|
Public relations |
2023-01-09 19:19:00 |
Hokkaido |
Hokkaido Shimbun |
Cross-country ski Four Card Cup: Yamashita wins the men's race |
https://www.hokkaido-np.co.jp/article/785053/
|
Cross-country |
2023-01-09 19:10:00 |
Hokkaido |
Hokkaido Shimbun |
3 COVID-19 infections in the Rumoi region, 1 in the Soya region |
https://www.hokkaido-np.co.jp/article/785052/
|
Soya region |
2023-01-09 19:09:00 |
Hokkaido |
Hokkaido Shimbun |
Crime-victim support ordinances lagging: only 10 municipalities in Hokkaido have them, below the national average; is the low rate of violent crime a factor? |
https://www.hokkaido-np.co.jp/article/785046/
|
Violent crime |
2023-01-09 19:01:00 |