Docker |
New posts tagged docker - Qiita |
Just a beginner simply trying out Docker |
https://qiita.com/koko_kotsu/items/c514ce652511c8bc710b
|
apache |
2022-12-29 18:09:43 |
Tech blog |
Developers.IO |
Why using Microsoft Office on EC2 is not straightforward licensing-wise |
https://dev.classmethod.jp/articles/microsoft-office-on-ec2-licensing-problems/
|
excel |
2022-12-29 09:36:27 |
Tech blog |
Developers.IO |
[Report] Tried the hands-on for the "Build smart camera applications using Amazon Kinesis Video Streams WebRTC" session #reinvent |
https://dev.classmethod.jp/articles/reinvent-2022-reoprt-iot309-r1/
|
amazon |
2022-12-29 09:07:52 |
Overseas TECH |
DEV Community |
Data detective: Tips and tricks for conducting effective exploratory data analysis |
https://dev.to/anurag629/data-detective-tips-and-tricks-for-conducting-effective-exploratory-data-analysis-184c
|
Data detective: Tips and tricks for conducting effective exploratory data analysis

Exploratory data analysis (EDA) is an approach to analyzing and understanding data that involves summarizing, visualizing, and identifying patterns and relationships in the data. There are many different techniques and approaches that can be used in EDA, and the specific techniques used will depend on the nature of the data and the questions being asked. Here are some common techniques that are often used in EDA:

Visualization: Plotting the data in various ways can help reveal patterns and trends that may not be immediately apparent. Common types of plots include scatter plots, line plots, bar plots, and histograms.
Summary statistics: Calculating summary statistics such as mean, median, and standard deviation can provide useful information about the distribution and spread of the data.
Correlation analysis: Examining the relationships between different variables can help identify correlations and dependencies.
Data cleaning: Removing missing or incorrect values and ensuring that the data is in a consistent format is an important step in EDA.
Dimensionality reduction: Techniques such as principal component analysis (PCA) can be used to reduce the number of dimensions in the data, making it easier to visualize and analyze.
Anomaly detection: Identifying unusual or unexpected values in the data can be important in identifying errors or outliers.
Feature engineering: Creating new features or transforming existing features can improve the performance of machine learning models and facilitate analysis.

Overall, the goal of EDA is to gain a better understanding of the data, identify potential issues or problems, and develop hypotheses about the relationships and patterns in the data that can be further tested and refined. Now we will study all the points mentioned above in more detail.

Visualization

Here is a simple example using a sample dataset of weather data for a single location. The data includes the temperature, humidity, and wind speed for each day in a month. (The original post showed a table with columns Date, Temperature, Humidity, Wind Speed, and Month for a series of January days; the numeric values were lost in extraction.)

First, we will import the necessary libraries and read in the data from a CSV file:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Read in the data from a CSV file
df = pd.read_csv('weather.csv')
```

Next, we can use various types of plots to visualize the data in different ways. Here are a few examples.

Scatter plot:

```python
# Scatter plot of temperature vs humidity
plt.scatter(df['Temperature'], df['Humidity'])
plt.xlabel('Temperature (°F)')
plt.ylabel('Humidity (%)')
plt.show()
```

Line plot:

```python
# Line plot of temperature over time
plt.plot(df['Date'], df['Temperature'])
plt.xlabel('Date')
plt.ylabel('Temperature (°F)')
plt.show()
```

Bar plot:

```python
# Bar plot of average temperature by month
df.groupby('Month')['Temperature'].mean().plot(kind='bar')
plt.xlabel('Month')
plt.ylabel('Temperature (°F)')
plt.show()
```

Histogram:

```python
# Histogram of temperature (the bin count used in the original was lost in extraction)
plt.hist(df['Temperature'])
plt.xlabel('Temperature (°F)')
plt.ylabel('Frequency')
plt.show()
```

Summary statistics

From the same weather data, we can calculate the following statistics.

Mean:

```python
# Calculate the mean temperature
mean_temp = df['Temperature'].mean()
print(f"Mean temperature: {mean_temp:.1f} °F")
```

Median:

```python
# Calculate the median humidity
median_humidity = df['Humidity'].median()
print(f"Median humidity: {median_humidity:.1f}%")
```

Standard deviation:

```python
# Calculate the standard deviation of wind speed
std_wind_speed = df['Wind Speed'].std()
print(f"Standard deviation of wind speed: {std_wind_speed:.1f} mph")
```

Minimum and maximum:

```python
# Calculate the minimum and maximum temperature
min_temp = df['Temperature'].min()
max_temp = df['Temperature'].max()
print(f"Minimum temperature: {min_temp:.1f} °F")
print(f"Maximum temperature: {max_temp:.1f} °F")
```

Now, I am not sure, but I can read your mind: I am sure you thought that I forgot the pandas describe dataframe function. But don't worry, it's here:

```python
df.describe()
```

The output is a table giving count, mean, std, min, quartiles, and max for Temperature, Humidity, and Wind Speed (the numeric values were lost in extraction).
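The summary statistics above can be checked on a tiny synthetic frame. This is a minimal sketch; the values below are illustrative, not the article's weather data (which was lost in extraction):

```python
import pandas as pd

# Illustrative stand-in for the article's weather data
df = pd.DataFrame({
    "Temperature": [30.0, 32.0, 28.0, 35.0, 30.0],
    "Humidity": [60, 65, 70, 55, 62],
    "Wind Speed": [5.0, 7.0, 6.0, 9.0, 8.0],
})

mean_temp = df["Temperature"].mean()       # (30+32+28+35+30)/5 = 31.0
median_humidity = df["Humidity"].median()  # middle of the sorted values = 62.0
std_wind = df["Wind Speed"].std()          # sample standard deviation (ddof=1)

print(f"Mean temperature: {mean_temp:.1f} °F")
print(f"Median humidity: {median_humidity:.0f}%")
print(f"Standard deviation of wind speed: {std_wind:.2f} mph")
```

Note that pandas' std() uses the sample (ddof=1) variant by default, unlike NumPy's population default, which matters for small datasets like this one.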
I hope this helps! Let me know if you have any questions or if you would like to see examples of other summary statistics.

Correlation analysis

Here is an example using a sample dataset of student grades. (The original post showed a table with columns Student, Midterm, and Final for ten students: Alice, Bob, Charlie, Dave, Eve, Frank, Gary, Holly, Ivy, and Jill. The numeric values were lost in extraction.)

First, we will import the necessary libraries and read in the data from a CSV file:

```python
import pandas as pd
import seaborn as sns

# Read in the data from a CSV file
df = pd.read_csv('student_grades.csv')
```

To analyze the correlations between different variables, we can use a variety of techniques. Here are a few examples.

Scatter plot:

```python
# Scatter plot of midterm grades vs final grades
sns.scatterplot(x='Midterm', y='Final', data=df)
```

Correlation matrix:

```python
# Correlation matrix
corr = df.corr()
sns.heatmap(corr, annot=True)
```

Linear regression:

```python
# Linear regression of midterm grades vs final grades
sns.lmplot(x='Midterm', y='Final', data=df)
```

As you know, it is a hard and time-consuming task to cover any topic in detail, but here I have provided a summary of correlation analysis. Correlation analysis is a statistical method used to identify the strength and direction of the relationship between two variables. It is commonly used in exploratory data analysis to understand the relationships between different variables in a dataset and to identify patterns and trends.

There are several different measures of correlation, including Pearson's correlation coefficient, Spearman's rank correlation coefficient, and Kendall's tau. These measures range from −1 (perfect negative correlation) to +1 (perfect positive correlation), with 0 indicating no correlation.

To perform correlation analysis, you can use various techniques such as scatter plots, correlation matrices, and linear regression. Scatter plots can be used to visualize the relationship between two variables, and correlation matrices can be used to visualize the correlations between multiple variables. Linear regression can be used to fit a line to the data and assess the strength of the relationship between the variables.

It is important to note that correlation does not imply causation, meaning that the presence of a correlation between two variables does not necessarily mean that one variable causes the other. It is always important to consider other factors that may be influencing the relationship between the variables.

Data cleaning

Here is an example using a sample dataset of student grades with some missing and incorrect values. (The original post showed the Student/Midterm/Final table extended with Kim, Larry, Mandy, and Nancy, including some NaN cells; the numeric values were lost in extraction.) This dataset includes the names of students and their grades on a midterm and final exam. Some of the values are missing (indicated by empty cells), and some of the values are incorrect (e.g., a final grade above the maximum).

First, we will import the necessary libraries and read in the data from a CSV file:

```python
import pandas as pd

# Read in the data from a CSV file
df = pd.read_csv('student_grades_with_errors.csv')
```

Here are a few examples of data cleaning techniques that can be used to address missing and incorrect values.

Identifying missing values:

```python
# Check for missing values (per-column counts of NaN cells)
df.isnull().sum()
```

Dropping rows with missing values:

```python
# Drop rows with missing values
df.dropna(inplace=True)
```

Filling missing values with a placeholder value:

```python
# Fill missing values with a placeholder value
# (the value used in the original was lost in extraction; 0 is shown here)
df.fillna(0, inplace=True)
```

Replacing incorrect values:

```python
# Replace incorrect values (e.g. grades above the maximum) with a placeholder value
# (the thresholds in the original were lost in extraction; 100 is shown here)
df['Midterm'].mask(df['Midterm'] > 100, inplace=True)
df['Final'].mask(df['Final'] > 100, inplace=True)
```

There is much more to data cleaning, but I have covered some general points. Data cleaning is the process of identifying and addressing issues with the data, such as missing or incorrect values, inconsistent formats, and outliers. It is an important step in the data analysis process, as it helps ensure that the data is accurate, consistent, and ready for analysis. There are a variety of techniques that can be used for data cleaning, depending on the specific issues with the data and the desired outcome.
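The three correlation measures named in the correlation-analysis section (Pearson, Spearman, Kendall) can each be computed directly with pandas. A minimal sketch on illustrative scores (not the article's dataset, whose values were lost):

```python
import pandas as pd

# Illustrative midterm/final scores; the two columns rank identically on purpose
df = pd.DataFrame({
    "Midterm": [70, 80, 90, 60, 85],
    "Final":   [72, 78, 95, 58, 88],
})

pearson = df["Midterm"].corr(df["Final"], method="pearson")
spearman = df["Midterm"].corr(df["Final"], method="spearman")
kendall = df["Midterm"].corr(df["Final"], method="kendall")

# Because the rankings agree exactly, the rank-based measures are exactly 1,
# while Pearson (which measures linearity, not just monotonicity) is slightly below 1
print(f"Pearson:  {pearson:.3f}")
print(f"Spearman: {spearman:.3f}")
print(f"Kendall:  {kendall:.3f}")
```

The gap between Pearson and the rank-based measures is a quick way to spot a relationship that is monotone but not linear.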
Some common techniques include:

Identifying missing values: Use functions such as isnull() or notnull() to identify cells that contain missing values.
Dropping rows with missing values: Use the dropna() function to remove rows that contain missing values.
Filling missing values: Use the fillna() function to fill missing values with a placeholder value.
Replacing incorrect values: Use functions such as mask() or replace() to replace incorrect values with a placeholder value.

It is important to carefully consider the appropriate approach for addressing missing or incorrect values, as simply dropping rows or filling missing values with a placeholder value may not always be the best solution. It is often helpful to investigate the cause of the missing or incorrect values and consider whether there may be other factors that need to be taken into account.

Dimensionality reduction

Here is a sample dataset of student grades with three variables: midterm grades, final grades, and attendance. (The original post showed a table with columns Student, Midterm, Final, and Attendance for the same ten students; the numeric values were lost in extraction.) This dataset includes the names of students, their grades on a midterm and final exam, and their attendance percentage.

First, we will import the necessary libraries and read in the data from a CSV file:

```python
import pandas as pd
from sklearn.decomposition import PCA

# Read in the data from a CSV file
df = pd.read_csv('student_grades_with_attendance.csv')
```

One common technique for dimensionality reduction is principal component analysis (PCA). PCA is a linear transformation technique that projects the data onto a lower-dimensional space, reducing the number of variables while still retaining as much of the variance as possible. Here is an example of using PCA to reduce the dimensionality of the data from three variables to two:

```python
# Select only the numeric columns
data = df.select_dtypes(include='number')

# Perform PCA
pca = PCA(n_components=2)
pca.fit(data)

# Transform the data
transformed_data = pca.transform(data)

# Print the explained variance ratio for each principal component
print(pca.explained_variance_ratio_)
```

Some tips and points to note: dimensionality reduction is the process of reducing the number of variables in a dataset while still retaining as much of the information as possible. It is often used in machine learning and data analysis to reduce the complexity of the data and improve the performance of algorithms.

There are a variety of techniques for dimensionality reduction, including principal component analysis (PCA), linear discriminant analysis (LDA), and t-distributed stochastic neighbor embedding (t-SNE). These techniques can be used to transform the data into a lower-dimensional space, typically by projecting the data onto a smaller number of orthogonal (uncorrelated) dimensions. PCA is a linear transformation technique that projects the data onto a lower-dimensional space by finding the directions in which the data varies the most. LDA is a supervised learning technique that projects the data onto a lower-dimensional space by maximizing the separation between different classes. t-SNE is a nonlinear dimensionality reduction technique that projects the data onto a lower-dimensional space by preserving the local structure of the data. It is important to carefully consider the appropriate dimensionality reduction technique for a given dataset, as the choice of technique can have a significant impact on the results.

Anomaly detection

Here is an example using a sample dataset of student grades with some anomalous values. (The original post showed the Student/Midterm/Final table for fourteen students, from Alice through Nancy; the numeric values were lost in extraction.) This dataset includes the names of students and their grades on a midterm and final exam. The values for Kim's midterm grade and Nancy's final grade are anomalous, as they are much higher than the other values in the dataset.

First, we will import the necessary libraries and read in the data from a CSV file:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest
```
```python
# Read in the data from a CSV file
df = pd.read_csv('student_grades_with_anomalies.csv')
```

One common technique for anomaly detection is the isolation forest, a type of unsupervised machine learning algorithm that can identify anomalous data points by building decision trees on randomly selected subsets of the data and using the number of splits required to isolate a data point as a measure of abnormality. Here is an example of using an isolation forest to detect anomalous values in the midterm grades:

```python
# Create an isolation forest model
# (the contamination value from the original post was lost in extraction; 0.1 is a placeholder)
model = IsolationForest(contamination=0.1)

# Fit the model to the data
model.fit(df[['Midterm']])

# Predict the anomalies
anomalies = model.predict(df[['Midterm']])

# Print the anomalies
print(anomalies)
```

The contamination parameter specifies the expected proportion of anomalous values in the data; the specific value used in the original example was lost in extraction. (The original output also included a scikit-learn warning about feature names, which appears when the model is fitted on a DataFrame but predictions are made on a bare array; passing df[['Midterm']] to both calls avoids it.)

I hope this helps! Let me know if you have any questions or if you would like to see examples of other anomaly detection techniques.

More about it: anomaly detection, also known as outlier detection, is the process of identifying data points that are unusual or do not conform to the expected pattern of the data. It is often used in a variety of applications, such as fraud detection, network intrusion detection, and fault diagnosis.

There are a variety of techniques for anomaly detection, including statistical methods, machine learning algorithms, and data mining techniques. Statistical methods involve calculating statistical measures such as mean, median, and standard deviation, and identifying data points that are significantly different from the expected values. Machine learning algorithms such as isolation forests and one-class support vector machines can be trained on normal data and used to identify anomalies in new data. Data mining techniques such as clustering can be used to identify data points that are significantly different from the majority of the data. It is important to carefully consider the appropriate technique for a given dataset, as the choice of technique can have a significant impact on the results. It is also important to consider the specific context and requirements of the application, as well as the cost of false positives and false negatives.

Feature engineering

Feature engineering is the process of creating new features (variables) from the existing data that can be used to improve the performance of machine learning models. It is an important step in the data analysis process, as it can help extract more meaningful information from the data and enhance the predictive power of models. There are a variety of techniques for feature engineering, including:

Combining multiple features: Creating new features by combining existing features using arithmetic operations or logical statements.
Deriving new features from existing features: Creating new features by applying mathematical transformations or aggregations to existing features.
Encoding categorical variables: Converting categorical variables into numerical form so that they can be used in machine learning models.

It is important to carefully consider the appropriate approach for feature engineering for a given dataset, as the choice of features can have a significant impact on the results. It is often helpful to explore the data and identify potential opportunities for feature engineering, such as combining or transforming variables to better capture relationships or patterns in the data.

Here is an example using a sample dataset of student grades. (The original post showed a table with columns Student, Midterm, Final, and Gender for the ten students; the numeric values were lost in extraction.)

First, we will import the necessary libraries and read in the data from a CSV file:

```python
import pandas as pd

# Read in the data from a CSV file
df = pd.read_csv('student_grades.csv')
```
Combining multiple features:

```python
# Create a new feature by combining two existing features
df['Total'] = df['Midterm'] + df['Final']
```

Deriving new features from existing features:

```python
# Create a new feature by dividing an existing feature
# (the divisor in the original was lost in extraction; 2 matches an average of two exams)
df['Average'] = df['Total'] / 2

# Create a new feature by taking the square root of a feature
import numpy as np
df['Sqrt_Midterm'] = np.sqrt(df['Midterm'])
```

Encoding categorical variables:

```python
# One-hot encode a categorical feature
df = pd.get_dummies(df, columns=['Gender'])
```

After the feature engineering, the data frame has the columns Student, Midterm, Final, Total, Average, Sqrt_Midterm, Gender_Female, and Gender_Male (the original post showed the resulting table; the numeric values were lost in extraction).

Did you learn something new from this post? Let us know in the comments! |
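Since the contamination value in the isolation-forest snippet above was lost in extraction, here is a self-contained sketch with an explicit, illustrative setting, on synthetic grades rather than the article's data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 99 ordinary grades around 75, plus one extreme outlier at 300
grades = np.concatenate([rng.normal(75, 5, 99), [300.0]]).reshape(-1, 1)

# contamination=0.01 tells the model to expect ~1% of the points to be anomalous
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(grades)  # -1 = anomaly, +1 = normal

print("Flagged indices:", np.where(labels == -1)[0])
```

fit_predict combines the separate fit and predict calls shown in the article, and because the same array is used throughout, the feature-name mismatch from the original output cannot occur.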
2022-12-29 09:29:24 |
Overseas TECH |
DEV Community |
Playwright - Generate tests' code |
https://dev.to/this-is-learning/playwright-generate-tests-code-n2p
|
Playwright - Generate tests' code

Hi there, today you will learn how to generate the code for your tests.

First of all, you have to run the application, so type `npm run dev` in your terminal, and you will have your application up and running at a localhost address (the port number was lost in extraction). Now you have to run the Playwright code generator. To do that, open another tab in your terminal and type:

npx playwright codegen

This command opens a new browser and the Playwright Inspector. The browser shows an empty tab, and the Playwright Inspector is on a new page, ready to listen to what you do in the browser. If you go into the browser's address bar and type the local address, you can notice that something has changed in the Playwright Inspector: the Inspector noted the page change and added this step to the test body. Now you can click on one square and simulate a game between two players; the Inspector records all these steps and creates the body of the test for you. The result of these actions is this:

```javascript
test('test', async ({ page }) => {
  await page.goto('http://localhost');
  // seven recorded clicks followed: a '.Square' locator with .first().click(),
  // then six 'button:nth-child(n)' locators
  // (the exact selectors and indices were lost in extraction)
});
```

As you can see, the generated code is good but not perfect, so please use this tool with your head on your shoulders and check the result every time to ensure your test's value.

Now it's time to copy and paste the code into our test file and improve its content. Let's start by changing the test name from "test" to "should win the player X", and changing the goto value from the full localhost URL to "/", because our base root corresponds to the home page. Then refactoring the first click is probably a good improvement, so change that line from the generated '.Square' locator to a 'button:nth-child(n)' locator like the others. Last but not least, add the assertion. In this case, if the test name is "should win the player X", the best assertion is something like this:

```javascript
const winnerParagraph = await page.getByText(/winner/i);
await expect(winnerParagraph).toContainText('X');
```

So the code tries to get the paragraph with the winner text and checks if its value contains "X". The final result is this:

```javascript
test('should win the player X', async ({ page }) => {
  await page.goto('/');
  // seven clicks on 'button:nth-child(n)' locators
  // (the exact indices were lost in extraction)
  const winnerParagraph = await page.getByText(/winner/i);
  await expect(winnerParagraph).toContainText('X');
});
```

Now you can run the test and check the result, which should be green.

OK, I think that's all about code generation. In this article you learnt how to record your steps and convert them into code for your tests. Please, as I said before, use this tool responsibly and check the generated code every time.

That's it, folks! See you soon.

P.S. You can find the result of this article here. |
2022-12-29 09:00:46 |
Overseas news |
Japan Times latest articles |
‘Darkness before dawn’: Tourism sector in China eyes slow but sure recovery |
https://www.japantimes.co.jp/news/2022/12/29/asia-pacific/china-travel-hesitation/
|
'Darkness before dawn': Tourism sector in China eyes slow but sure recovery. Travel agents across China don't expect demand to immediately surge following the lifting of pandemic-related travel restrictions, due to various factors. |
2022-12-29 18:58:10 |
News |
BBC News - Home |
Russia fires 120 missiles from air and sea - Ukraine |
https://www.bbc.co.uk/news/world-europe-64114784?at_medium=RSS&at_campaign=KARANGA
|
ukraine, ukrainian |
2022-12-29 09:13:50 |
News |
BBC News - Home |
Wallasey pub shooting: Third arrest over Christmas Eve attack |
https://www.bbc.co.uk/news/uk-england-merseyside-64095523?at_medium=RSS&at_campaign=KARANGA
|
merseyside |
2022-12-29 09:04:56 |
News |
BBC News - Home |
TUC: Ministers refusing to negotiate on pay, says union body |
https://www.bbc.co.uk/news/business-64109462?at_medium=RSS&at_campaign=KARANGA
|
bodies |
2022-12-29 09:33:34 |
News |
BBC News - Home |
Eve Muirhead reflects on the year her career was made - and brought to an end |
https://www.bbc.co.uk/sport/winter-sports/64032342?at_medium=RSS&at_campaign=KARANGA
|
scotland |
2022-12-29 09:25:11 |
Hokkaido |
Hokkaido Shimbun |
Explosive COVID outbreak in Wuhan, China, with deaths surging: residents testify to collapse of zero-COVID policy |
https://www.hokkaido-np.co.jp/article/782090/
|
Novel coronavirus |
2022-12-29 18:39:42 |
Hokkaido |
Hokkaido Shimbun |
Victim couple did not know suspect; Saitama murders possibly driven by one-sided grudge |
https://www.hokkaido-np.co.jp/article/782089/
|
Hanno, Saitama Prefecture |
2022-12-29 18:18:52 |
Hokkaido |
Hokkaido Shimbun |
National High School Soccer: Hokkai out in opening match, losing to Kunimi on penalties |
https://www.hokkaido-np.co.jp/article/782057/
|
Opening-match exit |
2022-12-29 18:28:25 |
Hokkaido |
Hokkaido Shimbun |
Bird flu detected in wild crow in Itoshima, Fukuoka |
https://www.hokkaido-np.co.jp/article/782095/
|
Bird flu |
2022-12-29 18:26:00 |
Hokkaido |
Hokkaido Shimbun |
Woman in her 70s in Sorachi defrauded of ¥1.35 million under pretext of PC virus removal fees |
https://www.hokkaido-np.co.jp/article/782094/
|
Sorachi region |
2022-12-29 18:26:00 |
Hokkaido |
Hokkaido Shimbun |
Rakuten's Tatsumi signs with ¥20 million raise: Golden Glove winner with career-best batting |
https://www.hokkaido-np.co.jp/article/782093/
|
Sendai city |
2022-12-29 18:25:00 |
Hokkaido |
Hokkaido Shimbun |
Djokovic calls deportation from Australia a valuable experience, determined ahead of January's Australian Open |
https://www.hokkaido-np.co.jp/article/782088/
|
Australian Open |
2022-12-29 18:10:12 |
Hokkaido |
Hokkaido Shimbun |
Hakone Ekiden: Komazawa's Tazawa assigned Leg 2 in section entries for third straight year |
https://www.hokkaido-np.co.jp/article/782087/
|
University ekiden |
2022-12-29 18:24:05 |
Hokkaido |
Hokkaido Shimbun |
Historic security policy shift shrouded in darkness: minutes of three-document revision debate almost entirely redacted; expert says "oversight isn't functioning" |
https://www.hokkaido-np.co.jp/article/781908/
|
National Security Strategy |
2022-12-29 18:20:55 |
Hokkaido |
Hokkaido Shimbun |
Tokyo yen in upper ¥133 range |
https://www.hokkaido-np.co.jp/article/782092/
|
Tokyo foreign exchange market |
2022-12-29 18:20:00 |
Hokkaido |
Hokkaido Shimbun |
Volcanic earthquakes continue at Mt. Bandai in Fukushima; record daily number observed on the 28th |
https://www.hokkaido-np.co.jp/article/782082/
|
Volcanic earthquakes |
2022-12-29 18:02:03 |
Hokkaido |
Hokkaido Shimbun |
19-year-old Horikawa wins women's 3,000m speed skating, beating Takagi |
https://www.hokkaido-np.co.jp/article/782068/
|
All-Japan Championships |
2022-12-29 18:10:15 |
Hokkaido |
Hokkaido Shimbun |
Homecoming rush in full swing in Sapporo as well; Sapporo Station and bus terminal crowded |
https://www.hokkaido-np.co.jp/article/782063/
|
Homecoming rush |
2022-12-29 18:07:14 |
Hokkaido |
Hokkaido Shimbun |
Over 120 missiles reportedly fired in attack; some intercepted over capital Kyiv, debris falls |
https://www.hokkaido-np.co.jp/article/782079/
|
Head of the Presidential Office |
2022-12-29 18:04:02 |