IT |
気になる、記になる… |
Google Maps is testing a new map color scheme, with a design that makes roads easier to see |
https://taisy0.com/2023/09/01/176165.html
|
google |
2023-09-01 05:16:08 |
IT |
ITmedia: All Articles |
[ITmedia PC USER] ViewSonic announces a 27-inch WQHD gaming LCD monitor supporting 180 Hz operation |
https://www.itmedia.co.jp/pcuser/articles/2309/01/news119.html
|
itmediapcuserviewsonic |
2023-09-01 14:47:00 |
IT |
ITmedia: All Articles |
[ITmedia News] Disney transformation belt with a color LCD opens for preorders today at 39,600 yen |
https://www.itmedia.co.jp/news/articles/2309/01/news116.html
|
bandaispirits |
2023-09-01 14:44:00 |
IT |
ITmedia: All Articles |
[ITmedia Mobile] Refurbished Google Pixel 5 and Pixel 6/6a go on sale at Amazon |
https://www.itmedia.co.jp/mobile/articles/2309/01/news113.html
|
amazon |
2023-09-01 14:35:00 |
IT |
ITmedia: All Articles |
[ITmedia Mobile] iPhone 13 and iPhone 13 Pro Max go on sale at Amazon |
https://www.itmedia.co.jp/mobile/articles/2309/01/news111.html
|
amazon |
2023-09-01 14:12:00 |
AWS |
AWS Japan Blog |
Key points of migrating systems to AWS (Part 2) |
https://aws.amazon.com/jp/blogs/news/key-points-of-migrating-to-aws-part2/
|
customersolutionsmanager |
2023-09-01 05:21:49 |
AWS |
AWS Japan Blog |
Key points of migrating systems to AWS (Part 1) |
https://aws.amazon.com/jp/blogs/news/key-points-of-migrating-to-aws-part1/
|
customersolutionsmanager |
2023-09-01 05:21:28 |
python |
New posts tagged Python - Qiita |
Notes on the commands needed to work with Docker in an Ubuntu environment |
https://qiita.com/eine-kenta/items/9e89e98a550d40fbed89
|
docker |
2023-09-01 14:57:16 |
python |
New posts tagged Python - Qiita |
Fun data analysis with PandasAI, an AI-powered Python library, and Azure OpenAI: analyze even highly confidential data with peace of mind |
https://qiita.com/hiromitsu_iwasaki/items/e29b2a8f46167496d1d7
|
azureopenai |
2023-09-01 14:39:55 |
python |
New posts tagged Python - Qiita |
PDF extraction |
https://qiita.com/vice_star/items/d059c1eb4a4d95c801b1
|
ompdfminerconverterimpor |
2023-09-01 14:34:30 |
js |
New posts tagged JavaScript - Qiita |
My homemade prime factorization program got reviewed after I posted it on Qiita and became 30x faster |
https://qiita.com/TETSURO1999/items/ffc1cae68c16a5c26fde
|
qiita |
2023-09-01 14:41:59 |
Ruby |
New posts tagged Ruby - Qiita |
How to deal with Mysql2::Error::ConnectionError |
https://qiita.com/ym0628/items/272b29df59030e6628f6
|
mysqlerrorconnectionerror |
2023-09-01 14:08:59 |
Linux |
New posts tagged Ubuntu - Qiita |
Notes on the commands needed to work with Docker in an Ubuntu environment |
https://qiita.com/eine-kenta/items/9e89e98a550d40fbed89
|
docker |
2023-09-01 14:57:16 |
Docker |
New posts tagged docker - Qiita |
Notes on the commands needed to work with Docker in an Ubuntu environment |
https://qiita.com/eine-kenta/items/9e89e98a550d40fbed89
|
docker |
2023-09-01 14:57:16 |
Git |
New posts tagged Git - Qiita |
Checking for forgotten debug code at git commit time |
https://qiita.com/kotobuki5991/items/ab5359ced9ea7dcb2ce7
|
gitcommit |
2023-09-01 14:07:58 |
Ruby |
New posts tagged Rails - Qiita |
How to deal with Mysql2::Error::ConnectionError |
https://qiita.com/ym0628/items/272b29df59030e6628f6
|
mysqlerrorconnectionerror |
2023-09-01 14:08:59 |
Tech Blog |
Developers.IO |
[Thu, Sept. 28] DevelopersIO 2023 Okinawa Online |
https://dev.classmethod.jp/news/devio-2023-okinawa-online/
|
developersio |
2023-09-01 05:57:59 |
Overseas TECH |
DEV Community |
Explained in 2023: 10 of the Most Popular Machine Learning Algorithms |
https://dev.to/durgesh4993/explained-in-2023-10-of-the-most-popular-machine-learning-algorithms-4jbl
|
Explained in 2023: 10 of the Most Popular Machine Learning Algorithms.

1. Linear Regression. Linear regression is a statistical method used to examine the relationship between two continuous variables: one independent variable and one dependent variable. The goal is to find the best-fitting line through a set of data points, which can then be used to make predictions about future observations. The equation for a simple linear regression model is y = b0 + b1*x, where y is the dependent variable, x is the independent variable, b0 is the y-intercept (the point at which the line crosses the y-axis) and b1 is the slope of the line, the change in y for a given change in x. The best-fitting line is found with the method of least squares, which minimizes the sum of the squared differences between the predicted and actual y values. Linear regression extends to multiple independent variables (multiple linear regression): y = b0 + b1*x1 + b2*x2 + ... + bn*xn, where x1, ..., xn are the independent variables and b1, ..., bn the corresponding coefficients, again estimated by least squares. Once the coefficients are estimated they can be used to predict the dependent variable, for example the price of a stock or the number of units of a product that will be sold. Linear regression is a relatively simple method, however: it assumes the relationship between the independent and dependent variables is linear, which is not always the case, and it is highly sensitive to outliers, so extreme values that do not follow the general trend of the data can significantly reduce the model's accuracy.

Linear regression interview questions and answers. What are the assumptions of linear regression? Linearity (the relationship between the independent and dependent variables is linear), independence (the observations are independent of each other), homoscedasticity (the variance of the error term is constant across all levels of the independent variables), normality (the error term is normally distributed), no multicollinearity (the independent variables are not highly correlated with each other) and no autocorrelation (the error term is not autocorrelated with itself). How do you determine the goodness of fit of a linear regression model? R-squared measures the proportion of the variance in the dependent variable explained by the independent variables: a value of 1 means the model explains all of the variance and a value of 0 means it explains none. Adjusted R-squared additionally accounts for the number of independent variables and is the better indicator when comparing models with different numbers of predictors. Root mean squared error (RMSE) and mean absolute error (MAE) measure the difference between predicted and actual values; lower values indicate a better fit. How do you deal with outliers in linear regression? Outliers can skew the regression line; options include removing them (at the risk of losing information), transforming the data (for example taking the log), using robust regression methods such as RANSAC or Theil-Sen, or using regularization, which adds a penalty term to the cost function to prevent overfitting. The best approach depends on the dataset and the goals of the analysis.
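To make the least-squares fit concrete, here is a minimal sketch that is not part of the original article: it assumes scikit-learn and uses made-up sample data. The intercept_ and coef_ attributes correspond to b0 and b1 above, and score() returns R-squared.

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: one independent variable x and a roughly linear dependent variable y
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

model = LinearRegression().fit(X, y)        # least-squares fit
print(model.intercept_, model.coef_[0])     # estimated b0 and b1
print(model.predict([[6.0]]))               # prediction for a new observation
print(model.score(X, y))                    # R-squared on the training data

The same class handles multiple linear regression unchanged: pass a matrix with one column per independent variable and coef_ holds b1 through bn.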
2. Logistic Regression. Logistic regression is a statistical method for predicting binary outcomes, such as success or failure, from one or more independent variables. It is a popular technique in machine learning, often used for classification tasks such as determining whether an email is spam or predicting whether a customer will churn. The model is based on the logistic function, a sigmoid that maps the input variables to a probability between 0 and 1; that probability is then used to make a prediction about the outcome. The model is P(y=1|x) = 1 / (1 + e^-(b0 + b1*x1 + b2*x2 + ... + bn*xn)), where P(y=1|x) is the probability that the outcome y is 1 given the inputs x, b0 is the intercept and b1, ..., bn are the coefficients of the input variables x1, ..., xn. The coefficients are determined by training on a dataset with an optimization algorithm such as gradient descent that minimizes a cost function, typically the log loss. Once trained, the model makes predictions by computing the probability that the outcome is 1; the classification threshold is typically 0.5 but can be adjusted depending on the desired trade-off between false positives and false negatives. (The original article includes a diagram in which inputs x1 and x2 are mapped through the logistic function to a probability, with coefficients b1 and b2 learned from the data and the threshold set to 0.5.) Logistic regression is easy to implement and interpret, and it can be regularized to prevent overfitting.

Logistic regression interview questions and answers. What is the logistic function? The logistic function, also known as the sigmoid function, is an S-shaped curve that maps any real-valued number to a value between 0 and 1; it is defined as f(x) = 1 / (1 + e^-x), where e is the base of the natural logarithm. Can logistic regression be used for multiclass classification? Yes: either train a separate binary logistic regression model for each class and choose the class with the highest predicted probability (the one-vs-all or one-vs-rest approach), or use softmax regression, a generalization of logistic regression that handles multiple classes directly. How do you interpret the coefficients in logistic regression? Each coefficient is the change in the log-odds of the outcome for a one-unit change in the predictor, holding the other predictors constant; the odds ratio conveys the magnitude, with a value greater than 1 indicating that a unit increase in the predictor increases the odds of the outcome and a value less than 1 indicating that it decreases them.
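As a rough illustration, again not from the article, a scikit-learn sketch on toy data; predict_proba returns the sigmoid probabilities and predict applies the default 0.5 threshold.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary data: small x values labelled 0, larger x values labelled 1
X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[2.2]]))   # P(y=0|x) and P(y=1|x)
print(clf.predict([[2.2]]))         # class label using the default 0.5 threshold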
3. Support Vector Machines (SVMs). SVMs are a type of supervised learning algorithm that can be used for classification or regression problems. The main idea is to find the boundary that separates the classes by maximizing the margin, the distance between the boundary and the closest data points from each class; these closest points are called support vectors. SVMs are particularly useful when the data is not linearly separable, i.e. cannot be separated by a straight line: using the kernel trick, the data is transformed into a higher-dimensional space where a non-linear boundary can be found. Common kernel functions are polynomial, radial basis function (RBF) and sigmoid. SVMs are very effective in high-dimensional spaces and perform well even when the number of features is greater than the number of samples; they are also memory-efficient because only the support vectors, not the entire dataset, need to be stored. On the other hand, they can be sensitive to the choice of kernel function and algorithm parameters, and they are not well suited to large datasets because training time can be quite long.
Pros: effective in high-dimensional spaces; memory-efficient (only the support vectors are stored); versatile (classification and regression, non-linearly separable data via the kernel trick); robust to noise and outliers in the data, since the model relies only on the support vectors.
Cons: performance can depend heavily on the choice of kernel function and parameters; not suitable for large datasets because training can be slow; results can be difficult to interpret, especially with non-linear kernels; struggles when classes overlap significantly.
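A minimal sketch of a non-linear SVM, assuming scikit-learn's SVC and its built-in make_moons toy dataset (neither is mentioned in the article):

from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two interleaving half-moons: not linearly separable
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)   # RBF kernel trick
print(clf.support_vectors_.shape[0])   # only these points define the boundary
print(clf.score(X, y))                 # training accuracy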
4. Decision Trees. Decision trees are a type of machine learning algorithm used for both classification and regression tasks, and they can model complex relationships between variables. A decision tree is a tree-like structure in which each internal node represents a decision point and each leaf node represents a final outcome or prediction. The tree is built by recursively splitting the data into subsets based on the values of the input features, looking for splits that maximize the separation between the different classes or target values. Building begins with selecting the root node, the feature that best separates the data; the data is then split on that feature and the process repeats for each subset until a stopping criterion is met, such as the number of samples in a subset, the purity of a subset, or the depth of the tree. One of the main advantages of decision trees is that they are easy to understand and interpret: the tree structure gives a clear visualization of the decision-making process, and the importance of each feature can be easily assessed. Their main disadvantage is overfitting, particularly when the tree is deep and has many leaves; an overly complex tree fits the noise in the data rather than the underlying patterns and generalizes poorly to unseen data. Pruning, regularization and cross-validation help prevent this. Decision trees are also sensitive to the order of the input features, and different orders can lead to different, possibly suboptimal, trees; ensemble techniques such as random forests and gradient boosting address this.
Pros: easy to understand and interpret; handle both numerical and categorical data, making them versatile across a wide range of applications; can achieve high accuracy on many datasets, especially when the tree is not deep; robust to outliers; usable for both classification and regression.
Cons: can easily overfit when deep with many leaves; sensitive to the order of the input features; unstable, since small changes in the data can lead to different trees and predictions; can be biased towards features or categorical variables with many levels; continuous variables may be split into many levels, making the tree complex and prone to overfitting.
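A short sketch of a depth-limited tree on scikit-learn's iris dataset (an illustrative assumption, not the article's example); limiting max_depth is one simple guard against overfitting, and export_text prints the interpretable split structure.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
# Shallow tree: depth 3 keeps the model interpretable and limits overfitting
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))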
5. Random Forest. Random forest is an ensemble machine learning algorithm used for both classification and regression. It combines multiple decision trees, each grown on a random subset of the data and a random subset of the features, and the final prediction is made by averaging (or voting over) the predictions of all the trees in the forest. A single decision tree may be prone to overfitting, but a collection of trees reduces that risk and improves overall accuracy. Building a random forest starts with bootstrapping: data points are randomly sampled from the original dataset with replacement, producing a different training set for each tree. Each tree also uses a random subset of features, known as the random subspace method, which reduces the correlation between the trees and improves the performance of the model. Random forests are less prone to overfitting than a single decision tree, because averaging multiple trees smooths out errors and reduces variance, and they perform well on high-dimensional datasets and datasets with many categorical variables. Their disadvantages are computational cost, which grows with the number of trees for both training and prediction, and reduced interpretability compared with a single tree, since it is harder to understand the contribution of each feature to the final prediction.
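A hedged sketch with scikit-learn's RandomForestClassifier on a built-in dataset (the dataset and hyperparameters are illustrative assumptions); max_features="sqrt" is the random-subspace step, and feature_importances_ recovers some interpretability.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 200 bootstrapped trees, each split considering a random subset of the features
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0).fit(X_tr, y_tr)
print(rf.score(X_te, y_te))          # accuracy on held-out data
print(rf.feature_importances_[:5])   # relative importance of the first five features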
6. Naive Bayes. Naive Bayes is a simple and efficient machine learning algorithm, based on Bayes' theorem, that is used for classification tasks. It is called "naive" because it assumes that all the features in the dataset are independent of each other, which is not always the case in real-world data; despite this assumption it performs well in many practical applications. The algorithm uses Bayes' theorem to calculate the probability of a class given the values of the input features: the probability of a hypothesis (the class) given some evidence (the feature values) is proportional to the probability of the evidence given the hypothesis multiplied by the prior probability of the hypothesis. Naive Bayes can be implemented with different probability distributions: Gaussian naive Bayes for continuous data, multinomial naive Bayes for discrete data, and Bernoulli naive Bayes for binary data. Its main advantages are simplicity and efficiency: it is easy to implement, requires less training data than other algorithms, performs well on high-dimensional datasets and can handle missing data. Its main disadvantage is the independence assumption, which can lead to inaccurate predictions when features are highly correlated; it is also sensitive to irrelevant features in the dataset, which can degrade performance.

7. K-Nearest Neighbors (KNN). KNN is a simple and powerful algorithm for classification and regression, based on the idea that similar data points tend to have similar target values. It finds the k data points nearest to a given input and uses the majority class (or average value) of those neighbors as the prediction. Building a KNN model begins with choosing k, the number of nearest neighbors to consider; the data is split into training and test sets, and for a new input the algorithm computes the distance to each point in the training set and selects the k nearest. KNN's main advantages are simplicity and flexibility: it works for both classification and regression, makes no assumptions about the underlying data distribution, can handle high-dimensional data and can be used in both supervised and unsupervised settings. Its main disadvantages are computational complexity, since the time and memory needed to find the nearest neighbors can become prohibitive as the dataset grows, and sensitivity to the choice of k, for which finding the optimal value can be difficult.
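To illustrate both classifiers side by side, a sketch on scikit-learn's wine dataset (a stand-in example, not from the article); Gaussian naive Bayes suits the continuous features, and k is set to 5 purely for illustration.

from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_tr, y_tr)                          # Gaussian NB for continuous features
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)  # majority vote of the 5 nearest neighbours
print(nb.score(X_te, y_te), knn.score(X_te, y_te))

Because KNN is distance-based, it usually benefits from scaling the features (e.g. with StandardScaler) before fitting.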
8. K-Means. K-means is an unsupervised machine learning algorithm used for clustering, the process of grouping similar data points together. It is a centroid-based (distance-based) algorithm: distances are used to assign each point to a cluster. The algorithm randomly selects k centroids, where k is the number of clusters to form, assigns each data point to the cluster with the nearest centroid, then recalculates each centroid as the mean of the points in its cluster; this process repeats until the centroids no longer move or the assignments no longer change. K-means is simple, scalable and fast, handles large datasets efficiently, and is widely used for applications such as image compression, market segmentation and anomaly detection. Its main disadvantages: it assumes the clusters are spherical and equally sized, which real-world data often violates; it is sensitive to the initial placement of the centroids and to the choice of k; and it assumes numerical data, so other data must be transformed before use.

9. Dimensionality Reduction. Dimensionality reduction reduces the number of features in a dataset while retaining the important information; it is used to improve the performance of machine learning algorithms and to make data visualization easier. Common techniques include principal component analysis (PCA), linear discriminant analysis (LDA) and t-distributed stochastic neighbor embedding (t-SNE). PCA is a linear technique that uses an orthogonal transformation to convert a set of correlated variables into linearly uncorrelated variables called principal components; it is useful for identifying patterns and reducing dimensionality without losing much information. LDA is a supervised technique that finds the most discriminative features for a classification task by maximizing the separation between classes in the lower-dimensional space. t-SNE is a non-linear technique, particularly useful for visualizing high-dimensional data, that uses probability distributions over pairs of high-dimensional points to find a low-dimensional representation preserving the structure of the data. Dimensionality reduction can improve the performance of machine learning algorithms by cutting computational cost and the risk of overfitting, and it simplifies visualization; however, it can lose important information, the right technique depends on the type of data and the task at hand, and it can be difficult to determine the optimal number of dimensions to retain.
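A sketch combining the two ideas on scikit-learn's digits dataset (an illustrative assumption): PCA compresses the 64 pixel features to 2 components, and K-means then groups the projected points into 10 clusters.

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X, _ = load_digits(return_X_y=True)          # 1797 samples x 64 features
X_2d = PCA(n_components=2).fit_transform(X)  # project onto 2 principal components

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_2d)
print(km.cluster_centers_)   # centroid of each of the 10 clusters
print(km.labels_[:20])       # cluster assignment of the first 20 samples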
10. Gradient Boosting and AdaBoost. Gradient boosting and AdaBoost are two popular ensemble machine learning algorithms used for both classification and regression; both combine multiple weak models into a strong final model. Gradient boosting is an iterative algorithm that builds the model in a forward stage-wise fashion: it starts by fitting a simple model, such as a decision tree, then adds further models to correct the errors of the previous ones, fitting each new model to the negative gradient of the loss function with respect to the current predictions. The final model is a weighted sum of all the individual models. AdaBoost (adaptive boosting) also builds the model stage-wise, but it improves the weak models by adjusting the weights of the training data: in each iteration it focuses on the examples misclassified by the previous model, increasing their weights so they are more likely to be emphasized in the next iteration, and the final model is again a weighted sum of the individual models. Both algorithms produce highly accurate models in many practical applications, handle a wide range of data types including categorical and numerical features, cope with missing values and are robust to outliers. Their main disadvantages are computational cost, especially when the ensemble is large, and sensitivity to the choice of base model and learning rate.
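A final sketch comparing the two boosting variants in scikit-learn (the dataset and hyperparameters are assumptions chosen for illustration):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage-wise ensembles of weak learners (shallow trees by default)
gb = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, random_state=0).fit(X_tr, y_tr)
ada = AdaBoostClassifier(n_estimators=100, learning_rate=1.0, random_state=0).fit(X_tr, y_tr)
print(gb.score(X_te, y_te), ada.score(X_te, y_te))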
Thanks for reading! If you enjoyed this, subscribe to never miss another article on data science guides, tips and more. |
2023-09-01 05:12:00 |
Finance |
NLI Research Institute |
Euro area consumer prices (August 2023): growth remains in the 5% range for both the headline index and the core index |
https://www.nli-research.co.jp/topics_detail1/id=75978?site=nli
|
Chart: month over month, some countries posted positive growth rates while others posted negative rates. |
2023-09-01 14:22:59 |
News |
BBC News - Home |
Arms giant BAE Systems sets up Ukraine entity |
https://www.bbc.co.uk/news/business-66679312?at_medium=RSS&at_campaign=KARANGA
|
artillery |
2023-09-01 05:04:16 |
News |
BBC News - Home |
Great Fire of London: Man who first raised alarm identified |
https://www.bbc.co.uk/news/uk-england-london-66671399?at_medium=RSS&at_campaign=KARANGA
|
alarm |
2023-09-01 05:19:44 |
Marketing |
MarkeZine |
[Free to view] How the XEBIO Group uses LINE for customer-centric data utilization |
http://markezine.jp/article/detail/43342
|
customer |
2023-09-01 14:30:00 |
IT |
週刊アスキー |
A market to enjoy while strolling: "BAY WALK MARKET 2023" to be held on the Yokohama Minato Mirai waterfront |
https://weekly.ascii.jp/elem/000/004/153/4153475/
|
baywalkmarket |
2023-09-01 14:30:00 |
IT |
週刊アスキー |
Details revealed for the latest side-scrolling Mario title, Super Mario Bros. Wonder! |
https://weekly.ascii.jp/elem/000/004/153/4153487/
|
direct |
2023-09-01 14:15:00 |
IT |
週刊アスキー |
New golfer Aisha (voiced by Marina Inoue) joins Shironeko GOLF! |
https://weekly.ascii.jp/elem/000/004/153/4153490/
|
Marina Inoue |
2023-09-01 14:15:00 |
IT |
週刊アスキー |
Half-price kamaage udon at Marugame Seimen today, the special deal held on the 1st of every month |
https://weekly.ascii.jp/elem/000/004/153/4153499/
|
Marugame Seimen |
2023-09-01 14:30:00 |