Overseas TECH |
MakeUseOf |
How to Apply Local Group Policies to a Specific User Account in Windows 10 and 11 |
https://www.makeuseof.com/local-group-policy-user-account-windows/
|
account |
2023-07-01 17:15:17 |
Overseas TECH |
DEV Community |
How to Wrap Your Errors with Enums when using Error-stack |
https://dev.to/neon_mmd/how-to-wrap-your-errors-with-enums-when-using-error-stack-49p0
|
How to Wrap Your Errors with Enums when using Error-stack

Introduction

I am an intermediate Rust developer, and you may know me from my posts on subreddits like r/opensource and r/rust, or from my various projects on GitHub. Recently we decided to do error handling and provide custom error messages for the errors related to each engine (the code under the src/engines folder) in one of my projects, websurfx, which uses error-stack. We wanted to use enums for this, but we found that the error-stack project provides no guide, tutorial, or example for it. Thankfully, one of our maintainers, xffxff, provided a really cool solution to this problem, which helped me learn a lot, so I decided to share what I learned in this post: how you can wrap errors with enums when using error-stack. Stick around till the end of the post.

(Image: a person trimming a wood plank with an edge)

For this tutorial I will write a simple scraping program that scrapes the example.com webpage, and then we will write code to handle its errors with enums. Let's dive straight into it.

Simple Scraping Program

Let's first start by explaining the code I have written to scrape the "More information" href link from the example.com webpage, which we will use throughout the program to write error handling code. Here is the code:

```rust
//! The main module that fetches html code, scrapes the "more information"
//! href link, and displays it on stdout.
use reqwest::header::{HeaderMap, CONTENT_TYPE, REFERER, USER_AGENT};
use scraper::{Html, Selector};
use std::{println, time::Duration};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A map that holds various request headers.
    let mut headers = HeaderMap::new();
    headers.insert(
        USER_AGENT,
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0"
            .parse()?, // (1)
    );
    headers.insert(REFERER, "https://google.com/".parse()?);
    headers.insert(CONTENT_TYPE, "application/x-www-form-urlencoded".parse()?);

    // A blocking request call that fetches the html text from the example.com site.
    let html_results = reqwest::blocking::Client::new()
        .get("https://www.example.com/")
        .timeout(Duration::from_secs(5))
        .headers(headers)
        .send()? // (2)
        .text()?;

    // Parse the received html text for scraping.
    let document = Html::parse_document(&html_results);

    // Initialize a new selector for scraping the "more information" href link.
    let more_info_href_link_selector = Selector::parse("div>p>a")?; // (3)

    // Scrape the "more information" href link.
    let more_info_href_link = document
        .select(&more_info_href_link_selector)
        .next()
        .unwrap()
        .value()
        .attr("href")
        .unwrap();

    // Print the "more information" link.
    println!("More information link: {}", more_info_href_link);

    Ok(())
}
```

In the Cargo.toml file you will need to provide something like this under the dependencies section:

```toml
[dependencies]
scraper = "0.16"
error-stack = "0.3"
reqwest = { version = "0.11", features = ["blocking"] }
```

Now, I know the above code might seem intimidating and daunting, but you don't need to focus on the implementation, because it doesn't matter what the code is. That's why I have numbered the parts that are important for this tutorial. You might be asking: what are those question marks in the numbered parts, and what do they do?

What is the Question Mark Operator?

According to our favourite go-to resource, the Rust book:

> The question mark operator unwraps valid values or returns erroneous values, propagating them to the calling function. It is a unary operator that can only be applied to the types Result and Option.

Let me explain it in brief. The question mark operator in Rust allows any operation that returns a Result type (either a value, or an error if something bad happens) to be handled more gracefully. If the operation completes successfully, execution of the rest of the function continues; otherwise the error is propagated to the calling function and execution of the rest of the function stops. The propagated error can then be handled by the caller by matching over it with a match statement or the if let syntax. On the other hand, if the operation with the question mark operator is in the main function and it fails, then as before the rest of the code is never executed, and the error is propagated to standard output (stdout) and displayed there. For example, take this Rust code:

```rust
fn main() {
    match sum_numbers_from_string("23", "56") {
        Ok(sum) => println!("Sum is: {}", sum),
        Err(error) => println!("{}", error),
    }
}

fn sum_numbers_from_string(
    number_x_as_string: &str,
    number_y_as_string: &str,
) -> Result<u8, Box<dyn std::error::Error>> {
    let number_x: u8 = number_x_as_string.parse()?;
    let number_y: u8 = number_y_as_string.parse()?;
    println!("This code is being executed and the code below will also be executed");
    Ok(number_x + number_y)
}
```

Here you can see the main function calls sum_numbers_from_string, which returns a Result type. If the code with the question mark operator were to error out in sum_numbers_from_string, execution would stop right there, the code below it (including the println! statement) would never be executed, and a ParseIntError would be propagated to the main function, where it is matched over by the match statement and the Err arm is executed.

Now let's take another example, by placing the operations with the question mark operator in the main function. The code will look something like this:

```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let number_x: u8 = "23".parse()?;
    let number_y: u8 = "56".parse()?;
    println!("This code is being executed and the code below will also be executed");
    println!("Sum is: {}", number_x + number_y);
    Ok(())
}
```

If this code were executed and an operation with the question mark operator were to fail, then as usual the program would stop executing, and the error would be propagated to stdout and printed on the terminal.

Note: I know I have skipped a lot of finer details, but I have done it on purpose for the sake of understanding and simplicity. If you wish to learn more about it in depth, I recommend reading this blog post.

Writing Code to Handle Errors with Enums

Before we start, let's briefly go over what each operation with the question mark operator in each numbered part returns. The first numbered part gives a Result type like Result<HeaderValue, InvalidHeaderValue>; as you now know, if this operation fails, the InvalidHeaderValue error is propagated by the main function to stdout and displayed there. Similarly, the second and third parts return the Result types Result<String, reqwest::Error> and Result<Selector, SelectorErrorKind> respectively.

Now that we know what each part returns, we can start writing code to wrap these errors with enums using the error-stack crate. We first write the error enum; let's call it ScraperError:

```rust
#[derive(Debug)]
enum ScraperError {
    InvalidHeaderMapValue,
    RequestError,
    SelectorError,
}
```

Then we need to implement two traits on our error enum, Display and Context. The code looks something like this:

```rust
impl fmt::Display for ScraperError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ScraperError::InvalidHeaderMapValue => {
                write!(f, "Invalid header map value provided")
            }
            ScraperError::RequestError => {
                write!(f, "Error occurred while requesting data from the webpage")
            }
            ScraperError::SelectorError => {
                write!(f, "An error occurred while initializing a new Selector")
            }
        }
    }
}

impl Context for ScraperError {}
```

By implementing the Display trait we give each error variant an appropriate error message, and by implementing the Context trait we give the error enum the ability to be converted into a Report type. If Context is not implemented, the program fails with a compile-time error stating that the type cannot be converted into a Report.

Now we need to replace each question mark operator and change the return type of the main function to Result<(), ScraperError> (using error-stack's Result alias). Each fallible call now goes through into_report() and change_context(); for example, the first header insert becomes:

```rust
headers.insert(
    USER_AGENT,
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0"
        .parse()
        .into_report()
        .change_context(ScraperError::InvalidHeaderMapValue)?,
);
```

The same pattern applies to the REFERER and CONTENT_TYPE header inserts, to the send() and text() calls (with ScraperError::RequestError), and to the Selector::parse call (with ScraperError::SelectorError). Putting it all together, the code looks like this:

```rust
//! The main module that fetches html code, scrapes the "more information"
//! href link, and displays it on stdout.
use core::fmt;
use error_stack::{Context, IntoReport, Result, ResultExt};
use reqwest::header::{HeaderMap, CONTENT_TYPE, REFERER, USER_AGENT};
use scraper::{Html, Selector};
use std::{println, time::Duration};

#[derive(Debug)]
enum ScraperError {
    InvalidHeaderMapValue,
    RequestError,
    SelectorError,
}

impl fmt::Display for ScraperError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ScraperError::InvalidHeaderMapValue => {
                write!(f, "Invalid header map value provided")
            }
            ScraperError::RequestError => {
                write!(f, "Error occurred while requesting data from the webpage")
            }
            ScraperError::SelectorError => {
                write!(f, "An error occurred while initializing a new Selector")
            }
        }
    }
}

impl Context for ScraperError {}

fn main() -> Result<(), ScraperError> {
    // A map that holds various request headers.
    let mut headers = HeaderMap::new();
    headers.insert(
        USER_AGENT,
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0"
            .parse()
            .into_report()
            .change_context(ScraperError::InvalidHeaderMapValue)?,
    );
    headers.insert(
        REFERER,
        "https://google.com/"
            .parse()
            .into_report()
            .change_context(ScraperError::InvalidHeaderMapValue)?,
    );
    headers.insert(
        CONTENT_TYPE,
        "application/x-www-form-urlencoded"
            .parse()
            .into_report()
            .change_context(ScraperError::InvalidHeaderMapValue)?,
    );

    // A blocking request call that fetches the html text from the example.com site.
    let html_results = reqwest::blocking::Client::new()
        .get("https://www.example.com/")
        .timeout(Duration::from_secs(5))
        .headers(headers)
        .send()
        .into_report()
        .change_context(ScraperError::RequestError)?
        .text()
        .into_report()
        .change_context(ScraperError::RequestError)?;

    // Parse the received html text for scraping.
    let document = Html::parse_document(&html_results);

    // Initialize a new selector for scraping the "more information" href link.
    let more_info_href_link_selector = Selector::parse("div>p>a")
        .into_report()
        .change_context(ScraperError::SelectorError)?;

    // Scrape the "more information" href link.
    let more_info_href_link = document
        .select(&more_info_href_link_selector)
        .next()
        .unwrap()
        .value()
        .attr("href")
        .unwrap();

    // Print the "more information" link.
    println!("More information link: {}", more_info_href_link);

    Ok(())
}
```

Don't run the above code in excitement just yet, or you will be shocked: it does not work. When you compile it, it throws a scary error like this:

```
error[E0599]: the method `into_report` exists for enum `Result<Selector<'_>, SelectorErrorKind<'_>>`, but its trait bounds were not satisfied
   |
   |     let more_info_href_link_selector = Selector::parse("div>p>a").into_report()
   |
   = note: the following trait bounds were not satisfied:
           `error_stack::Report<SelectorErrorKind<'_>>: From<SelectorErrorKind<'_>>`
           which is required by `Result<Selector<'_>, SelectorErrorKind<'_>>: IntoReport`

For more information about this error, try `rustc --explain E0599`.
error: could not compile `error-stack-blog` due to previous error
```

Decoding the above error: if you try to decode it, it is very confusing and doesn't really explain the real problem. The problem with our code is that the error returned by the Selector::parse operation in part three is not thread safe, so error-stack cannot convert it into a Report.

Fixing the Thread Safety Issue

To fix the thread safety error, we need to map the error of the selector operation in part three to the error_stack::Report type by constructing the Report ourselves. We also attach a custom error message that we want printed when this error is encountered. The mapping code looks like this:

```rust
let more_info_href_link_selector = Selector::parse("div>p>a")
    .map_err(|_| Report::new(ScraperError::SelectorError))
    .attach_printable_lazy(|| "invalid CSS selector provided")?;
```

Putting it all together: compared to the previous full listing, only two things change. The Report type is added to the imports (`use error_stack::{Context, IntoReport, Report, Result, ResultExt};`), and the selector line is replaced with the map_err version shown just above. Everything else stays the same.

Note: in the final version of the code I have introduced a bug on purpose (an invalid CSS selector string), which allows us to test whether error-stack has been wired up correctly with the ScraperError enum.

Running the code, you will see that it behaves as expected: it throws a ScraperError and gives a beautiful error output, with the first message coming from our Display implementation and the last one from the printable attachment we mapped onto the error from part three:

```
Error: An error occurred while initializing a new Selector
├╴at src/main.rs
╰╴invalid CSS selector provided
```

Conclusion

Finally, we end this post, having covered everything needed to wrap errors with enums when using the error-stack crate. I would love to hear from you: what new thing did you learn alongside this post? If you found this post helpful, feel free to share it on social media platforms like Twitter, Reddit, Lemmy, etc. Contact me on Reddit, where I am known by the username u/RevolutionaryAir, or message me on Discord, where I am known by the username neon_mmd, or tag me on the Rust Discord server. If you want to geek out with us, you can join our project's Discord server.
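As a closing aside: the same enum-with-messages pattern can be expressed with only the standard library, which can help if you want to understand what error-stack adds on top. Below is a minimal std-only sketch, not taken from the post; the names ScrapeError and parse_selector and the empty-string validation rule are hypothetical stand-ins for the real parsing logic.

```rust
use std::fmt;

// A std-only analogue of the post's ScraperError: one variant per failure kind,
// each mapped to a human-readable message in the Display impl.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum ScrapeError {
    InvalidHeaderValue,
    RequestFailed,
    BadSelector,
}

impl fmt::Display for ScrapeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ScrapeError::InvalidHeaderValue => write!(f, "invalid header value provided"),
            ScrapeError::RequestFailed => write!(f, "request to the webpage failed"),
            ScrapeError::BadSelector => write!(f, "invalid CSS selector provided"),
        }
    }
}

// Debug + Display are all that is needed for the std Error trait.
impl std::error::Error for ScrapeError {}

// A hypothetical fallible operation that converts a lower-level failure into
// our enum, mirroring what `change_context` does in error-stack.
fn parse_selector(selector: &str) -> Result<String, ScrapeError> {
    if selector.trim().is_empty() {
        return Err(ScrapeError::BadSelector);
    }
    Ok(selector.to_string())
}

fn main() {
    match parse_selector("") {
        Ok(s) => println!("parsed selector: {}", s),
        Err(e) => println!("Error: {}", e), // prints: Error: invalid CSS selector provided
    }
}
```

What error-stack adds over this sketch is the Report wrapper: the attached printable messages and source locations you see in the tree-shaped output above, which a bare std enum does not carry.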
2023-07-01 17:32:05 |
Overseas TECH |
DEV Community |
Experiment Tracking and Hyperparameter Tuning with TensorBoard in PyTorch 🔥 |
https://dev.to/akshayballal/experiment-tracking-and-hyperparameter-tuning-with-tensorboard-in-pytorch-402j
|
Experiment Tracking and Hyperparameter Tuning with TensorBoard in PyTorch

Introduction

Experiment tracking involves logging and monitoring machine learning experiment data, and TensorBoard is a useful tool for visualizing and analyzing this data. It helps researchers understand experiment behavior, compare models, and make informed decisions.

Hyperparameter tuning is the process of finding the best values for the configuration settings that impact model learning. Examples include the learning rate, batch size, and number of hidden layers. Appropriate tuning improves model performance and generalization. Hyperparameter tuning strategies include manual search, grid search, random search, Bayesian optimization, and automated techniques. These methods systematically explore and evaluate different hyperparameter values. You can assess model performance during tuning using evaluation metrics like accuracy or mean squared error. Effective hyperparameter tuning leads to improved model results on unseen data.

In this blog we'll demonstrate hyperparameter tuning using grid search with the FashionMNIST dataset and a custom VGG model. Stay tuned for future blogs on other tuning algorithms. Let's begin!

Install and Import Dependencies

Start by opening a new Python notebook on Jupyter or Google Colab. Write these commands in a code block to install and import the dependencies:

```python
!pip install -q torchinfo torchmetrics tensorboard

import os
from datetime import datetime

import torch
import torchvision
from torch import nn
from torchvision.transforms import Resize, Compose, ToTensor
import matplotlib.pyplot as plt
from torchinfo import summary
import torchmetrics
from tqdm.auto import tqdm
from torch.utils.tensorboard import SummaryWriter

device = "cuda" if torch.cuda.is_available() else "cpu"
```

Load the Dataset and DataLoader

```python
BATCH_SIZE = 32

if not os.path.exists("data"):
    os.mkdir("data")

train_transform = Compose([Resize((64, 64)), ToTensor()])
test_transform = Compose([Resize((64, 64)), ToTensor()])

training_dataset = torchvision.datasets.FashionMNIST(
    root="data", download=True, train=True, transform=train_transform
)
test_dataset = torchvision.datasets.FashionMNIST(
    root="data", download=True, train=False, transform=test_transform
)

train_dataloader = torch.utils.data.DataLoader(
    training_dataset, batch_size=BATCH_SIZE, shuffle=True
)
test_dataloader = torch.utils.data.DataLoader(
    test_dataset, batch_size=BATCH_SIZE, shuffle=False
)
```

Here we set a batch size of 32. Generally you would want to go for the maximum batch size your GPU can handle without giving a CUDA out-of-memory error. We define the transforms to convert our images to tensors, and initialize the training and test datasets from the built-in FashionMNIST dataset in torchvision.datasets. We set the root folder to the data folder, download=True because we want to download the dataset, and train=True for the training data and False for the test data. Next we define the training and test dataloaders.

We can see how many images we have in our training and test datasets with:

```python
print(f"Number of Images in test dataset is {len(test_dataset)}")
print(f"Number of Images in training dataset is {len(training_dataset)}")
```

Output:

```
Number of Images in test dataset is 10000
Number of Images in training dataset is 60000
```

Create a TinyVGG Model

I am demonstrating experiment tracking using this custom model, but you can use any model of your choice.

```python
class TinyVGG(nn.Module):
    """A small VGG-like network for image classification.

    Args:
        in_channels (int): The number of input channels.
        n_classes (int): The number of output classes.
        hidden_units (int): The number of hidden units in each convolutional block.
        n_conv_blocks (int): The number of convolutional blocks.
        dropout (float): The dropout rate.
    """

    def __init__(self, in_channels, n_classes, hidden_units, n_conv_blocks, dropout):
        super().__init__()
        self.in_channels = in_channels
        self.out_features = n_classes
        self.dropout = dropout
        self.hidden_units = hidden_units

        # Input block
        self.input_block = nn.Sequential(
            nn.Conv2d(in_channels=in_channels, out_channels=hidden_units,
                      kernel_size=3, padding=1, stride=1),
            nn.Dropout(dropout),
            nn.ReLU(),
        )

        # Convolutional blocks
        self.conv_blocks = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units,
                              kernel_size=3, padding=1, stride=1),
                    nn.Dropout(dropout),
                    nn.ReLU(),
                    nn.MaxPool2d(kernel_size=2, stride=2),
                )
                for _ in range(n_conv_blocks)
            ]
        )

        # Classifier
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(out_features=256),
            nn.Dropout(dropout),
            nn.Linear(in_features=256, out_features=64),
            nn.Linear(in_features=64, out_features=n_classes),
        )

    def forward(self, x):
        """Forward pass of the network.

        Args:
            x (torch.Tensor): The input tensor.

        Returns:
            torch.Tensor: The output tensor.
        """
        x = self.input_block(x)
        for conv_block in self.conv_blocks:
            x = conv_block(x)
        x = self.classifier(x)
        return x
```

Define Training and Test Functions

```python
def train_step(dataloader, model, optimizer, criterion, device, train_acc_metric):
    """Perform a single training step.

    Args:
        dataloader (torch.utils.data.DataLoader): The dataloader for the training data.
        model (torch.nn.Module): The model to train.
        optimizer (torch.optim.Optimizer): The optimizer for the model.
        criterion (torch.nn.Module): The loss function for the model.
        device (torch.device): The device to train the model on.
        train_acc_metric (torchmetrics.Accuracy): The accuracy metric for the model.

    Returns:
        The accuracy of the model on the training data.
    """
    for X, y in tqdm(dataloader):
        # Move the data to the device
        X = X.to(device)
        y = y.to(device)
        # Forward pass
        y_preds = model(X)
        # Calculate the loss
        loss = criterion(y_preds, y)
        # Calculate the accuracy
        train_acc_metric.update(y_preds, y)
        # Backpropagate the loss
        loss.backward()
        # Update the parameters
        optimizer.step()
        # Zero the gradients
        optimizer.zero_grad()
    return train_acc_metric.compute()


def test_step(dataloader, model, device, test_acc_metric):
    """Perform a single test step.

    Args:
        dataloader (torch.utils.data.DataLoader): The dataloader for the test data.
        model (torch.nn.Module): The model to test.
        device (torch.device): The device to test the model on.
        test_acc_metric (torchmetrics.Accuracy): The accuracy metric for the model.

    Returns:
        The accuracy of the model on the test data.
    """
    for X, y in tqdm(dataloader):
        # Move the data to the device
        X = X.to(device)
        y = y.to(device)
        # Forward pass
        y_preds = model(X)
        # Calculate the accuracy
        test_acc_metric.update(y_preds, y)
    return test_acc_metric.compute()
```

TensorBoard Summary Writer

```python
def create_writer(experiment_name: str, model_name: str,
                  conv_layers, dropout, hidden_units) -> SummaryWriter:
    """Create a SummaryWriter object for logging the training and test results.

    Args:
        experiment_name (str): The name of the experiment.
        model_name (str): The name of the model.
        conv_layers (int): The number of convolutional layers in the model.
        dropout (float): The dropout rate used in the model.
        hidden_units (int): The number of hidden units in the model.

    Returns:
        SummaryWriter: The SummaryWriter object.
    """
    timestamp = str(datetime.now().strftime("%d-%m-%Y_%H-%M-%S"))
    log_dir = os.path.join(
        "runs",
        timestamp,
        experiment_name,
        model_name,
        f"{conv_layers}_conv_layers",
        f"{dropout}_dropout",
        f"{hidden_units}_hidden_units",
    ).replace("\\", "/")
    return SummaryWriter(log_dir=log_dir)
```

Hyper Parameter Tuning

There are several hyperparameters here, as you can see: learning rate, number of epochs, type of optimizer, number of convolution layers, dropout, and number of hidden units. We can first fix the learning rate and number of epochs and try to find the best number of convolution layers, dropout, and hidden units. Once we have those, we can tune the number of epochs and learning rate.

```python
# Fixed hyperparameters
EPOCHS = 5
LEARNING_RATE = 0.001

# This code performs hyperparameter tuning for a TinyVGG model. The tuned
# hyperparameters are the number of convolutional layers, the dropout rate,
# and the number of hidden units. The results of the hyperparameter tuning
# are logged to a TensorBoard file.
experiment_number = 0

# hyperparameters to tune
hparams_config = {
    "n_conv_layers": [1, 2, 3],
    "dropout": [0.0, 0.25, 0.5],
    "hidden_units": [32, 64, 128],
}

for n_conv_layers in hparams_config["n_conv_layers"]:
    for dropout in hparams_config["dropout"]:
        for hidden_units in hparams_config["hidden_units"]:
            experiment_number += 1
            print(
                f"\nTuning Hyper Parameters | Conv Layers: {n_conv_layers} | "
                f"Dropout: {dropout} | Hidden Units: {hidden_units}\n"
            )

            # create the model
            model = TinyVGG(
                in_channels=1,
                n_classes=len(training_dataset.classes),
                hidden_units=hidden_units,
                n_conv_blocks=n_conv_layers,
                dropout=dropout,
            ).to(device)

            # create the optimizer and loss function
            optimizer = torch.optim.Adam(params=model.parameters(), lr=LEARNING_RATE)
            criterion = torch.nn.CrossEntropyLoss()

            # create the accuracy metrics
            train_acc_metric = torchmetrics.Accuracy(
                task="multiclass", num_classes=len(training_dataset.classes)
            ).to(device)
            test_acc_metric = torchmetrics.Accuracy(
                task="multiclass", num_classes=len(training_dataset.classes)
            ).to(device)

            # create the TensorBoard writer
            writer = create_writer(
                experiment_name=f"{experiment_number}",
                model_name="tiny_vgg",
                conv_layers=n_conv_layers,
                dropout=dropout,
                hidden_units=hidden_units,
            )

            model.train()
            # train the model
            for epoch in range(EPOCHS):
                train_step(train_dataloader, model, optimizer, criterion,
                           device, train_acc_metric)
                test_step(test_dataloader, model, device, test_acc_metric)
                writer.add_scalar(tag="Training Accuracy",
                                  scalar_value=train_acc_metric.compute(),
                                  global_step=epoch)
                writer.add_scalar(tag="Test Accuracy",
                                  scalar_value=test_acc_metric.compute(),
                                  global_step=epoch)

            # add the hyperparameters and metrics to TensorBoard
            writer.add_hparams(
                {
                    "conv_layers": n_conv_layers,
                    "dropout": dropout,
                    "hidden_units": hidden_units,
                },
                {
                    "train_acc": train_acc_metric.compute(),
                    "test_acc": test_acc_metric.compute(),
                },
            )
```

This will take a while to run, depending on your hardware.

Check Results in TensorBoard

If you are using Google Colab or Jupyter notebooks, you can view the TensorBoard dashboard with this command:

```python
%load_ext tensorboard
%tensorboard --logdir runs
```

From this you can now find the best hyperparameters. That's it! This is how you can use TensorBoard to tune hyperparameters. Here we used grid search for simplicity, but you can use a similar approach for other tuning algorithms and use TensorBoard to see how those algorithms perform in real time.

Want to connect? My Website | My Twitter | My LinkedIn
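As a pointer for the "other tuning algorithms" mentioned above: grid search can be swapped for random search with very little change to the loop structure. The sketch below is a minimal, stdlib-only illustration; the evaluate function is a hypothetical stand-in for the real train/test loop and add_hparams logging, and the value lists are illustrative, not taken from the article.

```python
import random

# The same kind of search space the nested grid-search loops enumerate.
hparams_config = {
    "n_conv_layers": [1, 2, 3],
    "dropout": [0.0, 0.25, 0.5],
    "hidden_units": [32, 64, 128],
}

def sample_hparams(config, rng):
    """Draw one random combination from the search space."""
    return {name: rng.choice(values) for name, values in config.items()}

def evaluate(hparams):
    """Hypothetical stand-in score; in the real setup this would run the
    train/test loop and return the test accuracy logged to TensorBoard."""
    return hparams["hidden_units"] / 128 - abs(hparams["dropout"] - 0.25)

# Instead of visiting all 27 grid combinations, sample a fixed budget of 8.
rng = random.Random(42)
trials = [sample_hparams(hparams_config, rng) for _ in range(8)]
best = max(trials, key=evaluate)
print("best hyperparameters found:", best)
```

The trade-off is that random search covers large spaces with a fixed trial budget, which is why it is often preferred once the grid grows beyond a handful of values per hyperparameter; each sampled trial can still be logged with create_writer and add_hparams exactly as in the grid-search code.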
2023-07-01 17:13:39 |
Overseas TECH |
DEV Community |
Hosting Games with Express.js and Socket.io |
https://dev.to/vulcanwm/hosting-games-with-expressjs-and-socketio-8nf
|
Hosting Games with Express.js and Socket.io

I've recently created a template where you can host games in Express.js with the help of Socket.io: VulcanWM/host-game-expressjs, a template for hosting games with Socket.io in Express.js (view it on GitHub).

These are all the pages the template has:

- index page: has a link to the join game page and the host game page
- host: generates a game id and creates a game which others can play. The host can decide when to start the game, and this triggers a Socket.io event which changes the content on the players' screens
- join: contains a form in which you have to enter your game id
- join (POST): redirects you to join/[game id]
- join/[game id]: renders a page where you enter your nickname for the game
- join/[game id] (POST): the user's game id and nickname get saved to the session, and then they get redirected to play
- play: the play screen is rendered, and the screen is updated whenever a new socket event is triggered

Make sure to check it out, and if you have any suggestions, do let me know by commenting below!
2023-07-01 17:10:59 |
Overseas TECH |
DEV Community |
☸️ Managed Kubernetes : Our dev is on AWS, our prod is on OVH |
https://dev.to/zenika/managed-kubernetes-our-dev-is-on-aws-our-prod-is-on-ovh-3nbf
|
☸️ Managed Kubernetes: Our dev is on AWS, our prod is on OVH

Contents:
- Introduction
- What are the services needed for the project?
- Why OVH?
- Why not OVH all the way from dev to prod?
- Why AWS for dev?
- Installation: Terraform on both sides, but with different implementation
- Impact on cluster-wide tooling: only a few differences (Ingress Controller, Metrics server, Autoscaling, the Elastic Stack, Metricbeat, Filebeat, other tools)
- Impact on application manifests: nearly nothing
- Impact on development: nearly nothing
- Cloud experience considerations
- Conclusion

TL;DR: It works very well for us, with minimal initial investment and development overhead. We would have done it again if given the choice to reboot.

Introduction

Kubernetes is an open-source container orchestration platform used to manage and automate the deployment and scaling of containerized applications. It has gained popularity in recent years due to its ability to provide a consistent experience across different cloud providers and on-premises environments. Operating Kubernetes may seem scary, and the uncertainty of Kubernetes on non-Big-Tech cloud providers is scary to a higher level. So, mixing Kubernetes providers by using AWS and OVH: is it calling for the Apocalypse to fall down on our heads? Not as much as we imagined; there are surprisingly fewer impacts on development than you would guess.

What are the services needed for the project?

Kubernetes, of course, but also other managed services:
- a PostgreSQL database
- an S3 file service
- a Docker registry
- load balancers

Why OVH?

The application we were building had to be deployed in Europe on a sovereign cloud. At the time of taking the decision, OVH was the most mature European cloud provider offering managed Kubernetes using Terraform (as far as we knew), and it is the largest hosting provider in Europe. Scaleway, which also provides managed Kubernetes using Terraform, had been tested in the early stage of the project, but these limitations (again, at that time) were blockers for our use cases:
- some side effects of Kubernetes management overlays, especially some problems using Traefik
- no Kubernetes on a private network
- unsatisfying availability of the Docker registry

Any feedback on these or other European cloud providers is welcome!

Why not OVH all the way from dev to prod?

In the past we had performance issues on OVH managed Kubernetes, and we did not want to slow down the development team with infrastructure problems. The most important thing is to have an optimum development phase, where data governance is not an issue, since there is no client data on dev environments. The cloud provider had to be adapted to the project's needs, not the other way around.

Why AWS for dev?

Knowing the daily cost of a development team in comparison to dev infrastructure, it had to be an optimum dev experience. It could have been GCP as well, but my experience was deeper on AWS, and it is as of now my cloud provider of choice, dev-team-performance-wise. We experience significant performance differences between AWS and OVH. We chose not to bother about production performance, since the production load will be significant but quite reasonable for now, and we still have to experiment with different OVH VM sizes as Kubernetes nodes. Now that the team is happy with our setup, here are the main differences.

Installation: Terraform on both sides, but with different implementation

Terraform can be used on AWS and OVH for the whole stack to be deployed, but as always, the Terraform configuration is provider-specific. If you are interested, our Terraform manifests have been shared in blog posts for AWS and for the OVH equivalent. To sum up the differences:
- OVH is dependent on the OpenStack Terraform provider, so you have to configure access to both of these providers
- OVH offers S3-compliant buckets but uses the AWS Terraform configuration for that, so you have to configure this provider as well
- OVH needs only one Docker registry for multiple images, but on the other hand you have a manual action to create a private space in the registry
- AWS documentation is easier to find on the internet and largely discussed on Stack Overflow
- AWS EKS integrates seamlessly with the AWS ECR Docker registry in the same project; for OVH, you have to create secrets with credentials in the cluster

Impact on cluster-wide tooling: only a few differences

We use some in-cluster tooling, and there is not much of a difference between AWS and OVH Kubernetes on this side.

Ingress Controller

AWS EKS and OVH fully support the NGINX Ingress Controller, including auto-provisioning of load balancers. We decided to have a DNS for dev environments managed by AWS and a DNS for staging/prod environments managed by OVH, so we can easily opt out of either of these cloud providers. The differences reside in the TLS certificate handling. We use:
- a single certificate with termination on the load balancer for AWS, which does not seem to be possible with OVH
- termination on the NGINX Ingress Controller for OVH, as a fallback, with the OVH webhook for cert-manager (Let's Encrypt with a ClusterIssuer handling DNS on the OVH managed DNS endpoint)

Metrics server

The metrics server is installed by default on OVH and has to be installed manually on an AWS EKS cluster.

Autoscaling

Autoscaling is already provided on OVH, but we don't use it for now. The autoscaler has to be manually installed on an AWS EKS cluster.

The Elastic Stack

The Elastic Stack is our Swiss Army knife on the cluster. The whole Elastic Stack has been installed inside the cluster. We know it's not recommended to have stateful apps in Kubernetes, but we are in the early stage of our production. We have installed:
- Elasticsearch: the data and search engine
- Kibana: the all-in-one UI
- Metricbeat: the Kubernetes metrics collector, as far as we are concerned
- Filebeat: the container log collector
- Logstash: the data transformation tool, used for the PostgreSQL plugin allowing us to nicely display database content

Most of them have been installed with no difference, except for the ones detailed below.

Metricbeat

Metricbeat collects Kubernetes metrics. Some metrics are collected through the Kubernetes API (in a Deployment) and others through the kubelet (in a DaemonSet, one pod per node). For the DaemonSet's pods to communicate with their respective nodes, by default they use the HOSTNAME variable. In AWS EKS it works like a charm. Sadly, on OVH clusters the hostname is not resolvable to an IP inside pods. The workaround is to attach these DaemonSet pods to the host network, so that they can use the HOSTNAME variable. This works fine for an extra-cluster Elasticsearch, which is the recommended architecture. But for an intra-cluster Elasticsearch with no ingress, we have to rely on an Elasticsearch Kubernetes service, which is not accessible from host-network pods. We then created an Elasticsearch NodePort service to complete the workaround architecture. We will document this part in another article if anyone is interested.

Filebeat

Filebeat is our container log collector and dissector. To have advanced capabilities, Filebeat has to be used in autodiscover mode, with either the docker or the kubernetes provider. Docker was an option before, but now it is not the default container runtime, so only the kubernetes provider remains. It relies on the Kubernetes API to identify pods. The default configuration works well on AWS. Since OVH provides less Kubernetes API capacity (I suspect that OVH has a fixed number of master nodes while AWS auto-scales them), the default configuration tends to flood it, resulting in very slow administration and even complete downtimes sometimes. The workaround was to change some configuration options to lower the pressure. Now it is working like a charm. We will document this part in another article if anyone is interested.

Other tools: no noticeable difference

We installed these tools the exact same way on both providers:
- Kube-downscaler
- GitLab Runner agent
- Teleport (including service load balancer auto-provisioning)

Impact on application manifests: nearly nothing

Not much to say; this is the beauty of Kubernetes: manifests work the same way. We have dev/prod differences that are not related to
Cloud providers so we won t detail them here We still have to handle the certificates to handle on OVH since TLS termination is on NGINX Ingress Controller Impact on development nearly nothing ️On the development side the team experienced some differences using S on AWS and OVH It was expected this is not the default setup to target OVH But once the documentation is applied and OVH tests performed usage is the same We decided to encrypt files using a cloud agnostic library so we are not tied to a specific Cloud feature Cloud experience considerationsOVH is still under development to match Big Tech services We experience degraded experience on these aspects compared to AWS Master nodes performanceWorker nodes performanceOfficial technical supportNon official technical support problems solving using internet search But this is expected when comparing investments on these products and that is a price we are willing to pay as long as it matches our constraints and desire to go to alternative Cloud providers ConclusionIn conclusion managed Kubernetes services offer a streamlined way of deploying and managing Kubernetes clusters By abstracting away the complexities of Kubernetes these services enable developers to focus on their applications rather than infrastructure management With the ability to deploy the same application to multiple cloud providers using a declarative approach teams can achieve greater consistency and efficiency in their deployment processes So whether you re using Big Tech or not OVH can be your Cloud provider of choice if it fits your constraints Do you have any experience in multi cloud Kubernetes Please share your insight especially if you would have used other providers for the same requirements Images generated locally by DiffusionBee using ToonYou model |
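The registry credential secret mentioned for OVH clusters can be sketched as a standard Kubernetes image pull secret. This is a minimal sketch, not the authors' actual manifest: the secret name, namespace, and registry URL are hypothetical placeholders.

```yaml
# Hypothetical pull secret for a private OVH registry space.
apiVersion: v1
kind: Secret
metadata:
  name: ovh-registry-credentials
  namespace: my-app
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config JSON holding the registry login
  .dockerconfigjson: <base64-encoded docker config.json>
---
# Pods then reference the secret so the kubelet can pull images.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  namespace: my-app
spec:
  imagePullSecrets:
    - name: ovh-registry-credentials
  containers:
    - name: app
      image: registry.example.ovh/my-team/my-app:latest
```

On EKS this secret is unnecessary because nodes authenticate to ECR through their IAM role, which is the seamless integration noted above.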
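The cert-manager fallback described for OVH (Let's Encrypt with a ClusterIssuer solving DNS-01 challenges through an OVH webhook) can be sketched as follows. The issuer name, email, group name, and the fields under `config` are assumptions: the `config` block is passed verbatim to whichever webhook implementation is deployed, so its exact keys depend on that webhook.

```yaml
# Hypothetical ClusterIssuer using a DNS-01 webhook solver for OVH DNS.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-ovh
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com
    privateKeySecretRef:
      name: letsencrypt-ovh-account-key
    solvers:
      - dns01:
          webhook:
            # groupName must match the one the webhook was deployed with
            groupName: acme.example.ovh
            solverName: ovh
            config:
              # Field names below follow common OVH webhook implementations
              # and may differ for the one actually in use.
              endpoint: ovh-eu
              applicationKey: <OVH API application key>
              applicationSecretRef:
                name: ovh-credentials
                key: applicationSecret
              consumerKey: <OVH API consumer key>
```

Ingress resources annotated with this issuer then get their certificates provisioned automatically, with TLS terminating on the NGINX Ingress Controller.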
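The Metricbeat/Elasticsearch workaround described above can be sketched in two parts: DaemonSet pods joined to the host network, plus a NodePort service so those host-network pods can still reach the in-cluster Elasticsearch. Resource names, the image tag, and the node port are assumptions, not the authors' actual manifests.

```yaml
# Excerpt of a Metricbeat DaemonSet attached to the host network,
# so the HOSTNAME variable resolves to the node it runs on (OVH workaround).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
spec:
  selector:
    matchLabels:
      app: metricbeat
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      hostNetwork: true
      # keep cluster DNS usable from host-network pods
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: metricbeat
          image: docker.elastic.co/beats/metricbeat:8.8.0
---
# NodePort service exposing the in-cluster Elasticsearch so that
# host-network pods can reach it via <node-ip>:30920.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-nodeport
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
      nodePort: 30920
```

The NodePort indirection is only needed for the intra-cluster Elasticsearch case; with an external Elasticsearch, host-network pods can reach it directly.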
2023-07-01 17:10:45 |
海外TECH |
Engadget |
Europe’s Euclid space telescope launches to map the dark universe |
https://www.engadget.com/europes-euclid-space-telescope-launches-to-map-the-dark-universe-175331413.html?src=rss
|
Europe's Euclid space telescope launches to map the dark universe. On late Saturday morning, a SpaceX Falcon rocket carrying the European Space Agency's Euclid spacecraft successfully lifted off from Cape Canaveral, Florida. The near-infrared telescope, named after the ancient Greek mathematician who is widely considered the father of geometry, will study how dark matter and dark energy shape the universe. In addition to a megapixel camera astronomers will use to image a third of the night sky over the next six years, Euclid is equipped with a near-infrared spectrometer and photometer for measuring the redshift of galaxies. In conjunction with data from ground observatories, that information will assist scientists with estimating the distance between different galaxies. As The New York Times notes, one hope of physicists is that Euclid will allow them to determine whether Albert Einstein's theory of general relativity works differently on a cosmic scale. There's a genuine possibility the spacecraft could revolutionize our understanding of physics and even offer a glimpse of the ultimate fate of the universe. "Safe travels @ESAEuclid! The #DarkUniverse detective ventures into the unknown," ESA's Euclid mission (@ESA_Euclid) tweeted in July. "If we want to understand the universe we live in, we need to uncover the nature of dark matter and dark energy and understand the role they played in shaping our cosmos," said Carole Mundell, the ESA's director of science. "To address these fundamental questions, Euclid will deliver the most detailed map of the extra-galactic sky." With Euclid now in space, it will travel approximately a million miles to the solar system's second Lagrange point. That's the same area of space where the James Webb Space Telescope has been operating for the past year. It will take Euclid about a month to travel there, and another three months for the ESA to test the spacecraft's instruments before Euclid can begin sending data back to Earth. This article originally appeared on Engadget at |
2023-07-01 17:53:31 |
ニュース |
BBC News - Home |
Twitter temporarily restricts tweets users can see, Elon Musk announces |
https://www.bbc.co.uk/news/technology-66077195?at_medium=RSS&at_campaign=KARANGA
|
restricts |
2023-07-01 17:53:50 |
ニュース |
BBC News - Home |
Cesc Fabregas: World Cup-winning former Chelsea and Arsenal midfielder retires |
https://www.bbc.co.uk/sport/football/66077796?at_medium=RSS&at_campaign=KARANGA
|
Cesc Fabregas: World Cup-winning former Chelsea and Arsenal midfielder retires. Former Arsenal, Barcelona and Chelsea midfielder Cesc Fabregas, a World Cup winner with Spain, announces his immediate retirement from football. |
2023-07-01 17:17:20 |
海外TECH |
reddit |
Team Vitality vs. KOI / LEC 2023 Summer - Week 3 / Post-Match Discussion |
https://www.reddit.com/r/leagueoflegends/comments/14o113i/team_vitality_vs_koi_lec_2023_summer_week_3/
|
Team Vitality vs. KOI / LEC Summer Week Post-Match Discussion

LEC SUMMER: Official page / Leaguepedia / Liquipedia / Eventvods.com / New to LoL / Patch

Team Vitality (VIT): Leaguepedia / Liquipedia / Website / Twitter / Facebook / YouTube / Subreddit
KOI: Leaguepedia / Liquipedia / Website / Twitter / YouTube

MATCH: VIT vs. KOI. Winner: KOI
Bans: VIT - vi, sejuani, kennen, poppy, jarvaniv / KOI - milio, yuumi, xayah, nami, nautilus

VIT vs. KOI
TOP: Photon (renekton) vs. (gnar) Szygenda
JNG: Daglas (maokai) vs. (trundle) Malrang
MID: Perkz (azir) vs. (leblanc) Larssen
BOT: Upset (zeri) vs. (aphelios) Comp
SUP: Kaiser (soraka) vs. (lulu) Advienne

This thread was created by the Post-Match Team. Submitted by u/Soul_Sleepwhale to r/leagueoflegends (link / comments) |
2023-07-01 17:44:46 |