Posted: 2022-02-08 14:17:03 RSS Feed 2022-02-08 14:00 digest (21 items)

Category | Site | Article title / trend word | Link URL | Frequent words / summary / search volume | Date added
TECH Engadget Japanese Meta warns it may pull Facebook and Instagram out of Europe over data-transfer regulations https://japanese.engadget.com/meta-warns-facebook-instagram-shutdown-eu-043008895.html facebook 2022-02-08 04:30:08
TECH Engadget Japanese Pokémon GO Valentine's event: Flabébé makes its debut this year, in five region-limited colors https://japanese.engadget.com/pokemon-go-valentine-041518085.html region-limited 2022-02-08 04:15:18
TECH Engadget Japanese Nvidia reportedly abandons its Arm acquisition after regulatory intervention https://japanese.engadget.com/nvidia-not-buy-arm-040046571.html nvidia 2022-02-08 04:00:46
ROBOT RoboSta (ロボスタ) What kind of future disaster prevention and response could a town with a local 5G network realize? "Digital×Town" 5G lab content produced https://robotstart.info/2022/02/08/digital-town-5g-lab-content.html 2022-02-08 04:48:56
IT ITmedia (all articles) [ITmedia PC USER] SteelSeries "Prime Mini", a compact, lightweight USB gaming mouse https://www.itmedia.co.jp/pcuser/articles/2202/08/news105.html itmediapcusersteelseries 2022-02-08 13:18:00
IT ITmedia (all articles) [ITmedia News] Fukui Prefecture's official Twitter account, left unattended for years, hijacked; begins posting NFT-related tweets https://www.itmedia.co.jp/news/articles/2202/08/news104.html fukui 2022-02-08 13:15:00
TECH Techable An ally when storage runs short: "Airmini", an SSD with up to 4 TB of capacity and wireless data transfer https://techable.jp/archives/173050 airmini 2022-02-08 04:00:43
python New posts tagged "Python" - Qiita [TensorFlow / C++ / CPU] Train in Python & run inference in C++ (2022 Ver.). III. Running inference in C++ https://qiita.com/kyaFUK/items/e4dd03301e601298504f Referenced page: doing TensorFlow training and inference in C. 2022-02-08 13:41:13
js New posts tagged "JavaScript" - Qiita Trying Microsoft's web development curriculum, part 2: through JavaScript basics [Introduction / Accessibility / JavaScript fundamentals] https://qiita.com/NasuPanda/items/d78514969191491a4bcd Programming languages and development tools: programming means issuing instructions to a device through code, and even when you build a program without writing code, what actually gets interpreted is code. On languages: high-level languages are easy for humans to understand, low-level languages are easy for machines; the world's first programmer is said to have been a woman, Ada Lovelace. Useful documentation for web development: Mozilla Developer Network (MDN), Frontend Masters, web.dev (Google), and vendors' own developer docs such as Microsoft Edge's. Assignment: read MDN's "Client-side tooling overview" and survey a few of the tools it covers. Linters (ESLint, CSSLint, etc.) flag errors in code, and sharing a common style across a team improves productivity. Source code management (Git, GitHub, etc.) backs up source code so you can roll back if something goes wrong, and lets a team work while sharing code. Bundlers (webpack, etc.) let you write human-friendly code (indentation and so on) during development and ship a minified, minimal build to production, and they take care of dependencies. GitHub basics: mostly material I had done before, so I skimmed it. 2022-02-08 13:20:19
Docker New posts tagged "docker" - Qiita Migrating the Kubernetes CRI from Dockershim to containerd https://qiita.com/soumi/items/8c42c0ee712580fd330f If you have other work to do, such as updating the host, you can physically reboot the node after kubectl drain; if you only need to restart the service, the following commands are used. 2022-02-08 13:33:04
Overseas TECH DEV Community Best practices for ML lifecycle stages https://dev.to/cloudtech/best-practices-for-ml-lifecycle-stages-4g9b

Best practices for ML lifecycle stages

Building a machine learning model is an iterative process. For a successful deployment, most of the steps are repeated several times to achieve optimal results, and the model must be maintained after deployment and adapted to a changing environment. Let's look at the details of the lifecycle of a machine learning model.

Data collection

The first step in the development of ML workloads is identifying the data needed for training and performance evaluation of an ML model. In the cloud, a data lake usually serves as a centralized repository that lets you store all structured and unstructured data, regardless of scale. AWS provides a number of ways to ingest data, both in bulk and in real time, from a wide variety of sources. You can use services such as AWS Direct Connect and AWS Storage Gateway to move data from on-premises environments, and tools like AWS Snowball and AWS Snowmobile for moving data at scale. You can also use Amazon Kinesis to collect and ingest streaming data, and you have the option of using services such as AWS Lake Formation and Amazon HealthLake to quickly set up data lakes. The following best practices are recommended for data collection and integration:

- Detail and document the various sources and steps needed to extract the data. This can be achieved using the AWS Glue Catalog, which automatically discovers and profiles your data and generates ETL code to transform your source data to target schemas. AWS also recently announced AWS Glue DataBrew, a visual data preparation interface that makes it easy for data analysts and data scientists to clean and normalize data for analytics and ML.
- Define data governance: who owns the data, who has access, the appropriate usage of the data, and the ability to access and delete specific pieces of data on demand. Data governance and access management can be handled using AWS Lake Formation and the AWS Glue Catalog.

Data integration and preparation

An ML model is only as good as the data used to train it; the effect of bad data is often summed up as "garbage in, garbage out". Once the data has been collected, the next step is to integrate, prepare, and annotate it. In addition to services such as AWS Glue and Amazon EMR, which provide traditional ETL capabilities, AWS provides tools within Amazon SageMaker designed specifically for data scientists:

- Amazon SageMaker Ground Truth, which can be used for data labeling
- SageMaker Data Wrangler, which simplifies the process of data preparation and feature engineering
- SageMaker Feature Store, which enables you to store, update, retrieve, and share ML features

Additionally, SageMaker Processing allows you to run your pre-processing, post-processing, and model evaluation workloads in a fully managed environment. We recommend the following best practices for data integration and preparation:

- Track data lineage, so that the location and source of the data are known during further processing. Using AWS Glue you can visually map the lineage of your data to understand the various data sources and transformation steps the data has been through. You can also use metadata provided by the AWS Glue Catalog to establish data lineage. The SageMaker Data Wrangler Data Flow UI provides a visual map of the end-to-end data lineage.
- Version data sources and processing workflows. Versioning enables you to maintain an audit trail of the changes made to your data integration processes over time and to recreate previous versions of your data pipelines. AWS Glue provides versioning capabilities as part of the AWS Glue Catalog, and the AWS Glue Schema Registry covers streaming data sources. AWS Glue and Amazon EMR jobs can be versioned using a version control system such as AWS CodeCommit.
- Automate data integration deployment pipelines. Minimize human touch points in deployment pipelines to ensure that data integration workloads are consistently and repeatably deployed, using a pipeline that defines how code is promoted from development to production. AWS Developer Tools allow you to build CI/CD pipelines to promote your code to a higher environment.
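To make the preparation stage concrete, here is a minimal sketch (not from the original article) of a SageMaker Processing job using the SageMaker Python SDK; the script name, S3 paths, and IAM role ARN are hypothetical placeholders.

```python
# Minimal sketch: run a data-preparation script as a SageMaker Processing job.
# Assumptions: "preprocess.py", the S3 URIs, and the role ARN are placeholders.
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput

role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder IAM role

processor = SKLearnProcessor(
    framework_version="0.23-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)
processor.run(
    code="preprocess.py",  # your cleaning/splitting script (hypothetical)
    inputs=[ProcessingInput(source="s3://my-bucket/raw/",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/train",
                              destination="s3://my-bucket/processed/train/")],
)
```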
Feature engineering

Feature engineering involves the selection and transformation of data attributes or variables during the development of a predictive model. Amazon SageMaker Data Wrangler can be used for the selection, extraction, and transformation of features; you can export a data flow designed in Data Wrangler as a Data Wrangler job or export it to SageMaker Pipelines. ETL services like Amazon EMR and AWS Glue can be used for feature extraction and transformation. Finally, you can use Amazon SageMaker Feature Store to store, update, retrieve, and share ML features. The following best practices are recommended for feature engineering:

- Ensure feature standardization and consistency. It is common to see different definitions of similar features across a business. Using Amazon SageMaker Feature Store allows for standardization of features and helps ensure consistency between model training and inference.
- If you are using SageMaker for feature engineering, you can use SageMaker Lineage Tracking to store and track information about the feature engineering steps along with the other ML workflow steps performed in SageMaker.
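As a rough illustration of the Feature Store workflow, the sketch below registers a feature group and ingests rows with the SageMaker Python SDK; the group name, columns, S3 location, and role ARN are illustrative assumptions, not anything prescribed by the article.

```python
# Minimal sketch: register a feature group and ingest records into it.
# Assumptions: names, columns, S3 URI, and role ARN are placeholders.
import time
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder IAM role

df = pd.DataFrame({
    "customer_id": [1, 2],
    "event_time": [1644292800.0, 1644292800.0],  # epoch seconds
    "avg_spend": [42.0, 17.5],
})

fg = FeatureGroup(name="customers-demo", sagemaker_session=session)
fg.load_feature_definitions(data_frame=df)  # infer feature types from the frame
fg.create(
    s3_uri="s3://my-bucket/feature-store/",  # offline store location (assumed)
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,
)
while fg.describe()["FeatureGroupStatus"] == "Creating":  # creation is async
    time.sleep(5)
fg.ingest(data_frame=df, max_workers=2, wait=True)
```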
Model training

The model training step involves the selection of appropriate ML algorithms and the use of the input features to train an ML model. Along with the training data (the input features prepared during the feature engineering stage), you generally provide model parameters to optimize the training process. To measure how well a model is performing during training, AWS uses several metrics, such as training error and prediction accuracy; the metrics reported by the algorithm depend on the business problem and the ML technique being used. Certain model parameters, called hyperparameters, can be tuned to control the behavior of the model and the resulting model architecture. Model training typically involves an iterative process of training a model, evaluating its performance against relevant metrics, and tuning the hyperparameters in search of the most optimal architecture; this process is generally referred to as hyperparameter optimization. AWS recommends the following best practices during the model training step:

- Follow a model testing plan and track your model experiments. Amazon SageMaker Experiments enables you to organize, track, compare, and evaluate ML experiments and model versions.
- Take advantage of managed services for model tuning. SageMaker Automatic Model Tuning and SageMaker Autopilot help ML practitioners explore a large number of combinations to automatically and quickly zoom in on high-performance models (see the sketch after this list).
- Monitor your training metrics to ensure your model training is achieving the desired results. SageMaker Debugger can be used for this purpose; it is designed to profile and debug training jobs to improve the performance of ML models.
- Ensure traceability of model training as part of the ML lifecycle. SageMaker Lineage Tracking can be used for this purpose.
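A minimal sketch of hyperparameter optimization with SageMaker Automatic Model Tuning, assuming the built-in XGBoost algorithm; the ranges, job counts, and S3 paths are illustrative placeholders.

```python
# Minimal sketch: automatic model tuning over two XGBoost hyperparameters.
# Assumptions: S3 paths and the role ARN are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder IAM role

image = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.3-1")
estimator = Estimator(
    image_uri=image,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",  # built-in XGBoost metric
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=10,          # total training jobs to explore
    max_parallel_jobs=2,  # concurrency of the search
)
tuner.fit({
    "train": TrainingInput("s3://my-bucket/train/", content_type="text/csv"),
    "validation": TrainingInput("s3://my-bucket/val/", content_type="text/csv"),
})
```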
Model validation

After the model has been trained, evaluate it to determine whether its performance and accuracy will let you achieve your business goals. Data scientists typically generate multiple models using different methods and evaluate the effectiveness of each. The evaluation results inform the data scientists' decision to fine-tune the data or algorithms to further improve performance; during fine-tuning, they might decide to repeat the data preparation, feature engineering, and model training steps. AWS recommends the following best practices for model validation:

- Keep track of the experiments performed to train models using different sets of features and algorithms. Amazon SageMaker Experiments, discussed in the model training section, can help keep track of the different training iterations and evaluation results.
- Maintain different versions of the models and their associated metadata, such as training and validation metrics, in a model repository. SageMaker Model Registry enables you to catalog models for production, manage model versions and their approval status, and associate metadata such as a model's training metrics.
- Be transparent about how a model arrives at its predictions; this is critical for regulators who require insight into how a model makes a decision. AWS recommends using model explainability tools, which can help explain how ML models make predictions; SageMaker Clarify provides the necessary tools for model explainability.
- Watch for bias. Biases in the data can introduce bias into ML algorithms, which can significantly limit the effectiveness of the models. This is of special significance in healthcare and life sciences, because poorly performing or biased ML models can have a significant negative impact in the real world. SageMaker Clarify can be used to perform post-training bias analysis against ML models.

Additional considerations for AI/ML compliance

Additional considerations include:

- Auditability
- Traceability
- Reproducibility
- Model monitoring
- Model interpretability

Auditability

Another consideration for a well-governed and secure ML environment is a robust and transparent audit trail that logs all access and changes to the data and models, such as a change in the model configuration or the hyperparameters. AWS CloudTrail is one service that logs, nearly continuously monitors, and retains account activity related to actions across your AWS infrastructure. CloudTrail logs every AWS API call and provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. Another service, AWS Config, enables you to nearly continuously monitor and record configuration changes to your AWS resources. More broadly, in addition to the logging and audit capabilities, AWS recommends a defense-in-depth approach to security, applying security at every level of your application and environment. AWS CloudTrail and AWS Config can be used as detective controls, responsible for identifying potential security threats or incidents. As the detective controls identify potential threats, you can set up a corrective control to respond to and mitigate the potential impact of security incidents. Amazon CloudWatch is a monitoring service for AWS resources that can trigger CloudWatch Events to automate security responses. For details on setting up detective and corrective controls, refer to Logging and Monitoring in AWS Glue.
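As a small illustration of querying such an audit trail, the sketch below lists recent SageMaker API activity from CloudTrail via boto3; the seven-day window and printed fields are arbitrary choices, not from the article.

```python
# Minimal sketch: list recent SageMaker API calls recorded by CloudTrail.
# Assumption: credentials and region come from the default boto3 configuration.
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail")
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventSource",
                       "AttributeValue": "sagemaker.amazonaws.com"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)
for event in resp["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```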
Traceability

Effective model governance requires a detailed understanding of the data and data transformations used in the modeling process, in addition to nearly continuous tracking of all model development iterations. It is important to keep track of which dataset was used, what transformations were applied to the data, where the dataset was stored, and what type of model was built. Additional variables such as hyperparameters, the model file location, and model training metadata also need to be tracked. Any post-processing steps applied to remove biases from predictions during batch inference need to be recorded as well. Finally, if a model is promoted to production for inference, there needs to be a record of the model files and weights used in production, and model performance in production needs to be monitored. One aspect of traceability that helps ensure visibility into which components or artifacts make their way into production, and how they evolve over time in the form of updates and patches, is versioning. Three key components provide versioning for the different types of components involved in developing an ML solution:

- Software version control through tools such as GitHub, to keep track of changes made to processing, training, and inference scripts. AWS provides a native version control system, AWS CodeCommit, that can be used for this purpose; alternatively, you can use your own GitHub implementation.
- A model versioning capability, to keep track of the different iterations of models created by iterative training runs. SageMaker Model Registry, which is natively integrated with the wider SageMaker feature set, can be used for this purpose.
- A container repository, to keep track of the different container versions used in SageMaker for processing, training, and inference. SageMaker natively integrates with Amazon ECR, which maintains a version of every container update.

Reproducibility

Reproducibility in ML is the ability to produce identical model artifacts and results by saving enough information about every phase in the ML workflow, including the dataset, so that it can be reproduced at a later date or by different stakeholders, with the least possible randomness in the process. For GxP compliance, customers may need to reproduce and validate every stage of the ML workflow to reduce the risk of errors and ensure the correctness and robustness of the ML solution. Unlike traditional software engineering, ML is experimental, highly iterative, and consists of multiple phases, all of which makes reproducibility challenging.

It all starts with the data: it is important to ensure that the dataset is reproducible at each phase in the ML workflow. Variability in the dataset can arise from randomness in subsampling methods, in creating train/validation/test splits, and in dataset shuffling, as well as from changes in the data processing, feature engineering, and post-processing scripts. Inconsistencies in any of these phases can lead to an irreproducible solution. Methods that can help ensure reproducibility of the dataset and the data processing scripts include:

- Dataset versioning
- Using a fixed seed value across all the libraries in the code base (see the sketch at the end of this section)
- Unit testing code to ensure that the outputs remain the same for a given set of inputs
- Version controlling the code base

The core components of the ML workflow are the ML models, which consist of a combination of model parameters and hyperparameters that need to be tracked to ensure consistent and reproducible results. In addition to these parameters, the stochastic (uncertain or random) nature of many ML algorithms adds a layer of complexity, because the same dataset and code base could produce different outputs. This is more pronounced in deep learning algorithms, which make efficient approximations of complex computations; such results can only be approximately reproduced even with the same dataset, code base, and algorithm. Beyond the algorithms, the underlying hardware and software environment configurations can affect reproducibility as well. Methods that can help ensure reproducibility and limit the sources of nondeterministic behavior in ML modeling include:

- Consistency in initializing model parameters
- Standardizing the infrastructure (CPUs and GPUs)
- Configuration management to ensure consistency in the runtimes, libraries, and frameworks

When solutions aren't fully deterministic, the need to quantify the uncertainty in model predictions increases. Uncertainty quantification (UQ) plays a pivotal role in reducing uncertainties during optimization and decision making, and promotes transparency in the GxP compliance process. A review of uncertainty quantification techniques, applications, and challenges in deep learning is presented in "A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges". A few methods for uncertainty quantification include:

- Ensemble learning techniques such as deep ensembles, which generalize across ML models and can be integrated into existing ML workflows
- Temperature scaling, an effective post-processing technique to restore network calibration so that the confidence of the predictions matches the true likelihood (refer to the reference paper on calibrating neural networks)
- Bayesian neural networks with Monte Carlo dropout

For more information about these methods, refer to "Methods for estimating uncertainty in deep learning". Amazon SageMaker ML Lineage Tracking provides the ability to create and store information about each phase in the ML workflow. In the context of GxP compliance, this can help you establish model governance by tracking model lineage artifacts for auditing and compliance verification. SageMaker ML Lineage Tracking tracks entities, either created automatically by SageMaker or custom-created by customers, to help maintain a representation of all elements in each phase of the ML workflow.
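The fixed-seed bullet above can be as simple as the following sketch; pinning seeds reduces, but does not eliminate, nondeterminism, and GPU kernels may need additional framework-specific settings.

```python
# Minimal sketch: pin random seeds across commonly used libraries.
# Note: full determinism on GPUs needs additional framework-specific flags.
import os
import random

import numpy as np

SEED = 42
os.environ["PYTHONHASHSEED"] = str(SEED)  # hash-based operations
random.seed(SEED)                         # Python's stdlib RNG
np.random.seed(SEED)                      # NumPy's global RNG

try:
    import torch  # seed the DL framework too, if one is in use
    torch.manual_seed(SEED)
    torch.cuda.manual_seed_all(SEED)
except ImportError:
    pass
```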
Model interpretability

Interpretability is the degree to which a human can understand the cause of a decision: the higher the interpretability of an ML model, the easier it is to comprehend the model's predictions. Interpretability facilitates:

- Understanding
- Debugging and auditing ML model predictions
- Bias detection to ensure fair decision making
- Robustness checks to ensure that small changes in the input do not lead to large changes in the output
- Methods that provide recourse for those who have been adversely affected by model predictions

In the context of GxP compliance, model interpretability provides a mechanism to ensure the safety and effectiveness of ML solutions by increasing the transparency around model predictions as well as the behavior of the underlying algorithm. Promoting transparency is a key aspect of the patient-centered approach, and is especially important for AI/ML-based SaMD, which may learn and change over time.

There is a tradeoff between what the model has predicted (model performance) and why the model has made such a prediction (model interpretability). For some solutions a high model performance is sufficient; in others, the ability to interpret the decisions made by the model is key. The demand for interpretability increases when there is a large cost for incorrect predictions, especially in high-risk applications.

Based on model complexity, methods for model interpretability can be classified into intrinsic analysis and post hoc analysis. Intrinsic analysis can be applied to interpret models of low complexity, with simple relationships between the input variables and the predictions. These models are based on algorithms such as linear regression, where the prediction is the weighted sum of the inputs, or decision trees, where the prediction is based on a set of if-then rules. The simple relationship between the inputs and the output results in high model interpretability, but often leads to lower model performance, because these algorithms are unable to capture complex non-linear interactions.

Post hoc analysis can be applied to interpret both the simpler models described earlier and more complex models, such as neural networks, which have the ability to capture non-linear interactions. These methods are often model-agnostic and provide mechanisms to interpret a trained model based on its inputs and output predictions. Post hoc analysis can be performed at a local level or at a global level.

Local methods enable you to zoom in on a single data point and observe the behavior of the model in that neighborhood; they are an essential component for debugging and auditing ML model predictions. Examples of local methods include:

- Local Interpretable Model-Agnostic Explanations (LIME), which provides a sparse linear approximation of the model behavior around a data point
- SHapley Additive exPlanations (SHAP), a game-theoretic approach based on Shapley values, which computes the marginal contribution of each input variable towards the output
- Counterfactual explanations, which describe the smallest change in the input variables that causes a change in the model's prediction
- Integrated gradients, which provide mechanisms to attribute the model's prediction to specific input variables
- Saliency maps, a pixel attribution method that highlights relevant pixels in an image

Global methods enable you to zoom out to a holistic view that explains the overall behavior of the model; they are helpful for verifying that the model is robust and has the least possible bias, to allow for fair decision making. Examples of global methods include:

- Aggregating local explanations, as defined previously, across multiple data points
- Permutation feature importance, which measures the importance of an input variable by computing the change in the model's prediction due to permutations of that variable
- Partial dependence plots, which plot the relationship and the marginal effect of an input variable on the model's prediction
- Surrogate methods, which are simpler interpretable models trained to approximate the behavior of the original complex model

It is recommended to start the ML journey with a simple model that is both inherently interpretable and provides sufficient model performance. In later iterations, if you need to improve model performance, AWS recommends increasing the model complexity and leveraging post hoc analysis methods to interpret the results. Selecting both a local method and a global method gives you the ability to interpret the behavior of the model for a single data point as well as across all data points in the dataset. It is also essential to validate the stability of model explanations, because post hoc methods are susceptible to adversarial attacks, where small perturbations in the input can result in large changes in the output prediction and therefore in the model explanations as well. Amazon SageMaker Clarify provides tools to detect bias in ML models and understand model predictions; it uses a model-agnostic feature attribution approach and provides a scalable and efficient implementation of SHAP. To run a SageMaker Clarify processing job that creates explanations for ML model predictions, refer to "Explainability and bias detection with Amazon SageMaker Clarify".
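As a small illustration of a local post hoc method, here is a sketch of SHAP attributions for a tree model; the synthetic dataset is purely illustrative, and in practice you would pass your own feature matrix.

```python
# Minimal sketch: local SHAP explanations for a random forest classifier.
# The data is synthetic; in practice X would be your feature matrix.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # per-feature attributions, 5 points
print(shap_values)
# Aggregating |SHAP| values over many points gives a global view, e.g.:
# shap.summary_plot(shap_values, X)
```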
Model monitoring

After an ML model has been deployed to a production environment, it is important to monitor the model based on:

- Infrastructure, to ensure that the model has adequate compute resources to support inference workloads
- Performance, to ensure that the model predictions do not degrade over time

Monitoring model performance is more challenging because the underlying patterns in the dataset are constantly evolving, which causes a static model to underperform over time. In addition, obtaining ground truth labels for data in a production environment is expensive and time consuming. An alternative approach is to monitor the change in data and model entities with respect to a baseline. Amazon SageMaker Model Monitor can help to nearly continuously monitor the quality of ML models in production, which may play a role in postmarket vigilance by manufacturers of Software as a Medical Device (SaMD). SageMaker Model Monitor provides the ability to monitor drift in data quality, model quality, model bias, and feature attribution.

A drift in data quality arises when the statistical distribution of data in production drifts away from the distribution of data used during model training. This primarily occurs when there is bias in selecting the training dataset (for example, when the sample of data the model is trained on has a different distribution than the data seen at inference) or in non-stationary environments where the data distribution varies over time. A drift in model quality arises when there is a significant deviation between the predictions the model makes and the actual ground truth labels. SageMaker Model Monitor provides the ability to create a baseline to analyze the input entities, define metrics to track drift, and nearly continuously monitor both the data and the model in production based on these metrics. Additionally, Model Monitor is integrated with SageMaker Clarify to identify bias in ML models.

Model deployment and monitoring for drift: for model monitoring, perform the following steps (a sketch follows this list):

- After the model has been deployed to a SageMaker endpoint, enable the endpoint to capture data from incoming requests to the trained ML model and the resulting model predictions.
- Create a baseline from the dataset that was used to train the model. The baseline computes metrics and suggests constraints for those metrics; real-time predictions from your model are compared to the constraints and reported as violations if they fall outside the constrained values.
- Create a monitoring schedule specifying what data to collect, how often to collect it, how to analyze it, and which reports to produce.
- Inspect the reports, which compare the latest data with the baseline, and watch for any reported violations and for metrics and notifications from Amazon CloudWatch.
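A minimal sketch of these steps with SageMaker Model Monitor follows; the endpoint name, S3 paths, schedule, and role ARN are hypothetical placeholders.

```python
# Minimal sketch: data capture, baselining, and a monitoring schedule.
# Assumptions: endpoint name, S3 URIs, and role ARN are placeholders.
from sagemaker.model_monitor import (
    CronExpressionGenerator,
    DataCaptureConfig,
    DefaultModelMonitor,
)
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder IAM role

# Step 1: capture requests and predictions when deploying the model:
capture = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri="s3://my-bucket/capture/",
)
# predictor = model.deploy(..., data_capture_config=capture)  # assumed model

# Step 2: baseline statistics and constraints from the training data:
monitor = DefaultModelMonitor(role=role, instance_count=1,
                              instance_type="ml.m5.xlarge")
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/baseline/",
)

# Step 3: hourly schedule comparing captured traffic against the baseline:
monitor.create_monitoring_schedule(
    endpoint_input="my-endpoint",  # assumed endpoint name
    output_s3_uri="s3://my-bucket/reports/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
# Step 4: inspect the generated reports and CloudWatch metrics for violations.
```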
The drift in data or model performance can occur for a variety of reasons, and it is essential for the technical, product, and business stakeholders to diagnose the root cause that led to the drift. Early and proactive detection of drift enables you to take corrective actions such as retraining the model, auditing upstream data preparation workflows, and resolving data quality issues. If all else remains the same, the decision to retrain the model is based on considerations such as:

- Re-evaluated target performance metrics for the use case
- The tradeoff between the improvement in model performance and the time and cost to retrain the model
- The availability of ground-truth labeled data to support the desired retraining frequency

After the model is retrained, you can evaluate the candidate model's performance with a champion/challenger setup or with A/B testing prior to redeployment.

Hope this guide helps you understand the best practices for ML lifecycle stages. Let me know your thoughts in the comment section. (Adit Modi: Cloud Engineer, AWS Community Builder, AWS and Azure certified, author of Cloud Tech Daily, DevOps & BigData Journal, DEV moderator.) 2022-02-08 04:20:24
Overseas TECH DEV Community #100Devs Bootcamp - Month 1 https://dev.to/hvedrungsmaer/100devs-bootcamp-month-1-52hm

100Devs Bootcamp - Month 1

I've always had an interest in learning to code. I've taken classes in high school and college, and even tried the self-taught route. By far the best experience in learning to code for me has been Leon Noel's 100Devs bootcamp.

What is 100Devs?

Leon Noel, a.k.a. The Kindest Person Ever, has created a free online web development bootcamp. Leon puts in extra hours purely out of the kindness of his heart and his passion to see others succeed. January marked the beginning of the current, second cohort. It's a great alternative for those who want to learn how to code but can't afford other bootcamps or degree programs; all Leon asks is that you pay it forward. The community Leon has created is amazing, kind, and very supportive. I've met so many others at different levels of learning, from complete newbies to tech pros, all committed to helping each other and not leaving anyone behind. Even within these first few weeks I've made so many new friends that I'll continue to cheer on throughout this program and after. You can catch up on the current classes here, and the first cohort's classes here.

What I've Learned So Far

So far we've already had a lot of homework and projects that I'm really excited to share.

Shay Howe Learn HTML & CSS Assignment

First off, we went through all the lessons of Shay Howe's basic HTML and CSS course in a week. It was a great exercise in reading documentation, and the progress from beginning to end was inspiring. I kept reminding myself that if I can take a site from looking like this to looking like this in just a few weeks, then what I can make by the end of the program will be a sight to see. (Tweet from Raeshelle Rose, @hvedrungsmaer: "Finally completed the code-along of the original Shay Howe assignment. On to the next!")

Layout Practice

After Shay's assignment, practicing building layouts made a ton of sense. Even though I could use some practice in picking better color palettes, I love how my layouts came out. (Tweet from Raeshelle Rose, @hvedrungsmaer: "Had a few hiccups along the way, but my layouts came out beautifully.") Here are some much more aesthetic Pokémon layouts from Maribel, @ggmaribel.

Responsiveness Practice

So far my favorite assignment has been working on site responsiveness using media queries (another thank-you to Mr. Howe). This was a little more challenging, specifically trying to get the GIF files to completely fill their containers without overlapping. Help from the 100Devs community was key to fixing that issue: classmates helped me work through the differences between the values cover, auto, and contain for background-size. Finally, I went with percentages to make the images more fluid, and ended up with the following CSS (reconstructed below; the exact pixel breakpoints, percentage values, image URLs, and heading selectors did not survive extraction, so those values are placeholders):

```css
/* Media Queries (breakpoints, sizes, and URLs are placeholders) */

/* small-sized state to show only confused Zuko */
@media (max-width: 480px) {
  section.panel-one {
    /* contains the confused Zuko; fits this panel to the full page width */
    width: 100%;
    background-image: url("confused-zuko.gif");
    background-size: 100%;
    background-repeat: no-repeat;
  }
  .panel-two,
  .panel-three {
    display: none; /* these two classes need to disappear at this small size */
  }
}

/* medium: two panels displayed here */
@media (min-width: 481px) and (max-width: 960px) {
  section.panel-one,
  section.panel-two {
    width: 50%;
  }
  section.panel-one {
    background-image: url("zuko-panel-1.gif");
    background-size: 100%;
    background-repeat: no-repeat;
  }
  section.panel-two {
    background-image: url("zuko-panel-2.gif");
    background-size: 100%;
    background-repeat: no-repeat;
  }
  .panel-three,
  h2.panel-one,
  h2.panel-three {
    display: none;
  }
}

/* large: three panels displayed here */
@media (min-width: 961px) {
  section.panel-one,
  section.panel-two,
  section.panel-three {
    width: 33%;
  }
  section.panel-one {
    background-image: url("zuko-panel-1.gif");
    background-size: 100%;
    background-repeat: no-repeat;
  }
  section.panel-two {
    background-image: url("zuko-panel-2.gif");
    background-size: 100%;
    background-repeat: no-repeat;
  }
  section.panel-three {
    background-image: url("zuko-panel-3.gif");
    background-size: 100%;
    background-repeat: no-repeat;
  }
  h2.panel-one,
  h2.panel-two {
    display: none;
  }
}
```
I decided to dedicate this one to my favorite Future Fire Lord, Zuko. (Tweet from Raeshelle Rose, @hvedrungsmaer: "Finally completed my #100Devs responsive site assignment featuring my fave fire lord and a dash of Zutara, iykyk. If you haven't seen ATLA yet, this is your sign to clearfix that.") I was inspired to make this assignment my own by all my classmates' great projects, featuring Lunar New Year (La Rainne, they/she, @larainnepasion: "Finally got around to the responsive layout homework and decided to make it fit for the occasion. Happy Lunar New Year, #100Devs!"), Dragon Ball Z (Joseph, @itsdaijoebu: "My take on making the minutes-of-pain layout responsive. Can anyone else hear this, or have I just been working on it for too long? #100Devs"), and Jujutsu Kaisen (Cy, @cyscodes: "Finally! My take on the minutes-of-pain hw. Added some animation on the text since it feels a little more harmonious with the GIF that way. I wanted to make it more fluid, but that's probably enough media queries for now. #100Devs"). I'm so proud of how far we've come in just weeks. They're all amazing devs already, and I can't wait to see their takes on future assignments.

What's Next?

I'm excited to delve into the next topics of the bootcamp, especially JavaScript. I'll be writing monthly posts like this one for bootcamp-related updates, as well as development journals for any games or side projects I work on. Here's where I'm popping up next: this week's Developer Week conference and the Amplifying Black Excellence summit, and teaming up with Joseph for the Inspire Game Jam from the Dallas Society of Play. Follow me and all the devs above on Twitter for more updates about our coding journeys. 2022-02-08 04:18:00
Finance The Finance The outlook for the Japanese version of SPACs (Special Purpose Acquisition Companies) https://thefinance.jp/strategy/220208 purposeacquisitioncompany 2022-02-08 04:57:01
Overseas news Japan Times latest articles Taiwan to relax Japan nuclear disaster-related food import ban https://www.japantimes.co.jp/news/2022/02/08/national/taiwan-lifts-fukushima-import-ban/ Cabinet spokesperson Lo Ping-cheng said the government had decided to make a fair adjustment to its ban, saying that with so many countries lifting restrictions 2022-02-08 13:35:58
News BBC News - Home Ukraine crisis: Macron says crucial days ahead after Putin summit https://www.bbc.co.uk/news/world-europe-60297732?at_medium=RSS&at_campaign=KARANGA french 2022-02-08 04:43:08
Business Diamond Online (new articles) Biden's science adviser resigns over violations of workplace conduct rules - via WSJ https://diamond.jp/articles/-/295733 adviser 2022-02-08 13:05:00
Hokkaido Hokkaido Shimbun Alpine skier Muraoka named captain for the Beijing Winter Paralympics; flag bearer is Nordic cross-country skier Kawayoke https://www.hokkaido-np.co.jp/article/643247/ Japanese Paralympic Committee 2022-02-08 13:12:00
IT Weekly ASCII Just a teaspoon in stir-fries for authentic Chinese flavor: "XO sauce", a collaboration between Kiyoken and Hokkaido Gyoren, goes on sale February 10 https://weekly.ascii.jp/elem/000/004/082/4082856/ select stores 2022-02-08 13:40:00
IT Weekly ASCII Koei Tecmo Games announces the winners of the photo contest it held for Fatal Frame: Maiden of Black Water (零 ~濡鴉ノ巫女~)! https://weekly.ascii.jp/elem/000/004/082/4082858/ nintendo 2022-02-08 13:40:00
IT Weekly ASCII FamilyMart's "Four Heavenly Kings of meat bento" lineup is complete! New items devoted to meaty flavor, including tonkatsu and chicken steak https://weekly.ascii.jp/elem/000/004/082/4082861/ on sale now 2022-02-08 13:30:00
News THE BRIDGE Operator of "HOKUTO", a clinical support app for physicians, raises ¥825 million in a Series A round; membership reaches one tenth of all physicians in Japan https://thebridge.jp/2022/02/hokuto-series-a-round-funding Genesia Ventures made a follow-on investment after the previous round. 2022-02-08 04:00:46
