Posted: 2021-10-09 03:20:34 | RSS feed digest as of 2021-10-09 03:00 (25 items)

Category | Site | Article title / trend keyword | Link URL | Frequent words, summary, or search volume | Date registered
IT 気になる、記になる… A video showing the "Google Pixel 6 Pro" being assembled has been published https://taisy0.com/2021/10/09/147182.html google 2021-10-08 17:46:00
IT 気になる、記になる… Rockstar Games officially announces "Grand Theft Auto: The Trilogy", a remastered bundle of the three classic "Grand Theft Auto" titles https://taisy0.com/2021/10/09/147178.html grandtheftauto 2021-10-08 17:32:32
TECH Engadget Japanese "BALMUDA The Brew" review: this coffee maker is genuinely exciting https://japanese.engadget.com/balmuda-the-brew-173033568.html balmudathebrew 2021-10-08 17:30:33
python New posts tagged Python - Qiita Getting the DPS310 high-precision barometric pressure sensor module working with Python https://qiita.com/emguse42/items/7fdcb64389e1c18d6477 Summary: I bought the sensor module to build a pressure logger with a Raspberry Pi, but the vendor's page only offered a C library for Arduino, so I reluctantly wrote a Python driver myself, only to find that the chip maker had already published Python code on GitHub. 2021-10-09 02:37:54
Program New questions (all tags) | teratail Building a tag feature in Rails: displaying an array of tag names instead of tag IDs https://teratail.com/questions/363527?rss=all Summary: I am building a tag feature in Rails, following example code from the web, and want to display an array of tag names rather than tag IDs. 2021-10-09 02:45:40
Program New questions (all tags) | teratail Python/Selenium: sending an image with send_keys uploads it multiple times https://teratail.com/questions/363526?rss=all Summary: I tried the clear() approach described in another article, but I get an "element not interactable" error. 2021-10-09 02:41:12
Program New questions (all tags) | teratail Which of the processes bound to 0.0.0.0:80 is safe to kill? https://teratail.com/questions/363525?rss=all Summary: When I started nginx, I got a port binding error as shown below, and I want to know which of the processes on the port can safely be killed. 2021-10-09 02:38:32
Program New questions (all tags) | teratail Unity: cannot hide the Unity icon on the iOS launch screen https://teratail.com/questions/363524?rss=all Summary: In Unity's build settings I was able to set my own image for the launch screen, but I cannot get rid of the default Unity icon. 2021-10-09 02:38:07
Program New questions (all tags) | teratail Volume error when starting Postgres with docker compose https://teratail.com/questions/363523?rss=all Summary: Running the docker compose up command to start Postgres produced a volume error. 2021-10-09 02:08:09
海外TECH DEV Community A practical tracing journey with OpenTelemetry on Node.js https://dev.to/shalvah/a-practical-tracing-journey-with-opentelemetry-on-node-js-5706

A practical tracing journey with OpenTelemetry on Node.js

I've talked a good deal about observability, tracing and instrumentation in the past. Recently, I decided to try out some new things in those areas, and here's how it went.

The challenge

In my app, Tentacle, there's an internal Node.js API which is called by the main app from time to time. This API in turn calls other external services, often more than once, and it can take anywhere from one to ten seconds for the whole thing to end, depending on the external services. I wanted to see how I could improve speed. Yes, the external service might be slow, but perhaps there was some way I could improve things on my end: better configuration, improved logic in handling the response, parallelization. I decided to add tracing so I could see if there were bottlenecks I could fix.

If you aren't familiar with tracing, think of it as being able to look inside your service to see what's going on. If I could instrument my app, I'd be able to view traces of my requests, which would show details about what my app did and how much time it spent. I've used Elastic APM and Sentry for tracing before, and there are other vendors as well, but I decided to try OpenTelemetry.

Why OpenTelemetry?

The idea behind OpenTelemetry is to be a neutral standard. It's like cables for charging your devices: each vendor can make something that works with their devices (e.g. Apple and Lightning cables), but USB was created so we could have a single standard, so in an emergency you could borrow your friend's charging cable and know it works with your device. OpenTelemetry is a set of vendor-agnostic agents and APIs for tracing. Vendor-agnostic doesn't mean you won't use any vendors, but that you aren't bound to them: if you have issues with Elastic, say (cost, features or UX), you can switch to a different vendor by changing a few lines in your configuration, as long as the vendor supports the OpenTelemetry API.

It's a beautiful idea in theory. In practice, it has a few rough edges; for example, vendor-specific options often offer better UX than OTel. Personally, I'd have preferred Sentry, since I use them for error monitoring, but Sentry's tracing features are expensive. Elastic is free and open source, but I didn't want to have to bother about running three components (Elasticsearch, Kibana and APM Server); even with Docker, Elasticsearch in production can still be a pain. I'd read and talked a lot about OpenTelemetry, so I figured it was time to actually use it.

Setting up locally

Tracing is most useful in production, where you can see actual usage patterns, but first I wanted to try locally and see if I could gain any insights. To set up OpenTelemetry, I'd need to install the agent and an exporter, then configure the agent to send to that exporter. (An exporter is a backend, storage and UI where I can explore the traces.) Setting up OTel took a while to get right (unfortunate, but unsurprising). There was documentation, but it was confusing and outdated in some places. Eventually, I came up with this in a tracing.js file:

```javascript
const openTelemetry = require("@opentelemetry/sdk-node");
const { HttpInstrumentation } = require("@opentelemetry/instrumentation-http");
const { ExpressInstrumentation } = require("@opentelemetry/instrumentation-express");
const { ZipkinExporter } = require("@opentelemetry/exporter-zipkin");
const { Resource } = require("@opentelemetry/resources");
const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions");

const sdk = new openTelemetry.NodeSDK({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: "tentacle-engine",
    [SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]: process.env.NODE_ENV,
  }),
  traceExporter: new ZipkinExporter(),
  instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()],
});

module.exports = sdk;
```

Holy shit, that is a ton of very intimidating-looking code (and it gets worse later). In Elastic APM and Sentry, it would have been far fewer lines. But on we go. The gist of the code is that it sets the service name to tentacle-engine, sets the exporter as Zipkin, and enables the automatic instrumentation of the http and express modules. (The service doesn't use any database or caching, so I didn't enable those.)

Let's talk about the exporter. Because OTel is an open standard, you can theoretically export to any tool that supports the OTel API. For example, there's a ConsoleExporter included that prints traces to the console, but that's not very useful. There's an exporter to Elasticsearch, and you can write your own library to export to a file or database or whatever. However, two of the most popular options are Jaeger and Zipkin, and you can easily run them locally with Docker. I tried both options, but decided to go with Zipkin because it's easier to deploy. (Plus, it has a slightly better UI, I think.)

Running Zipkin with Docker was easy:

```shell
# 9411 is Zipkin's default port
docker run --rm -d -p 9411:9411 --name zipkin openzipkin/zipkin
```

And then I modified my app.js to (1) wait until tracing had been initialized before setting up the Express app, and (2) wait until all traces were sent before exiting when you hit Ctrl-C. So it went from this:

```javascript
const express = require("express");
const app = express();

app.post("/fetch", /* ... */);

const gracefulShutdown = () => {
  console.log("Closing server and ending process...");
  server.close(() => process.exit());
};
process.on("SIGINT", gracefulShutdown);
process.on("SIGTERM", gracefulShutdown);
```

to this:

```javascript
const tracing = require("./tracing");

tracing.start().then(() => {
  const express = require("express");
  const app = express();
  // ...
  const gracefulShutdown = () => {
    console.log("Closing server and ending process...");
    server.close(async () => {
      await tracing.shutdown();
      process.exit();
    });
  };
  // ...
});
```

It was quite annoying to move that code into a .then(), but it was necessary: the express module has to be fully instrumented before you use it, otherwise the tracing won't work properly. Finally, I was ready. Started my app and made some requests, opened Zipkin on localhost, and the traces were there.

Inspecting and inferring

Now, let's take a look at what a trace looks like. Here's the details view for a trace. On the right, we have tags that contain relevant information about the trace; for example, an HTTP request would include details about the path, user agent and request size. On the left, we have the trace breakdown, showing the different spans that happened during the request. Think of a span as an action, like an incoming or outgoing request, a database or cache query, or whatever represents a unit of work in your app. In this request, we have parent and child spans: spans recorded for each of our Express middleware, and then spans for the calls we made to the external services (the https get ones). All these spans were captured automatically, because we configured OTel to use the HttpInstrumentation and ExpressInstrumentation earlier.

Now, what can we glean from these? First off, the bottleneck isn't in the framework or our code. You can see that the Express middleware take only microseconds to execute, while the external requests take up almost all the time; you can see how long the first request alone takes.

Let's hypothesize. Okay, the external site is obviously slow (and my local Internet is slow too), but how can we optimize around this? I decided to try switching my request client from got to the new fast Node.js client, undici. Maybe I could shave some tens of milliseconds off. I made a couple of requests, and here are the results, using got first and undici after (the duration each request takes is shown on the right).

Well, what do you think? The first thing I'm going to say is that this is not just an unscientific benchmark, but a bad one. It's silly to draw conclusions based on a handful of requests made on my machine. So many factors could affect the performance of both libraries: fluctuating connection speeds (Nigerian internet does this a lot), machine features, machine load, etc. A more scientific benchmark would be to write a script that triggers hundreds of requests in mostly stable conditions and returns useful statistics like the average, max and standard deviation. But there's an even better option: run both libraries in production for real-world requests and see if there's a meaningful gain from undici. (I learnt this approach from GitHub's Scientist.) Sadly, this article is about tracing, not experimentation, so I won't continue down that path now, but I hope to write another article about it soon. My implementation would probably be to have a switch that randomly picks one of the two libraries for each request; then I'll compare the metrics and see which performs better over time. That said, from these preliminary tests, it looks like most of undici's requests are faster than most of got's, but I'll hold off on switching until I can experiment in production.

Another thing I wanted to see was if I could reduce the number of external service calls, or parallelize them, maybe. You'll notice from the original trace I posted that the HTTP requests are done in three sets: one first, then a batch in parallel, then another batch in parallel. I went through my code again and realized two things: I couldn't parallelize any better (it had to be that way, because of dependencies on the responses), but in this particular case, I could actually get rid of the first external call. Yup, it turned out that I could remove the first request. It would lead to more requests overall, but only two parallel sets. So I decided to try, and... compare these with the very first screenshot I posted: the previous requests took noticeably longer, while these complete in a fraction of the time. Big win! Here's what a single trace looks like now. Like I said: more requests, but in fewer sets, leading to an overall time savings.

However, once again, I decided to hold off on making the change permanent. I'll spend some more time and tests to be sure the endpoint logic still works consistently for all inputs with the first call removed. I can't make the changes immediately, but it's obvious that tracing has been helpful here. We've moved from guessing about what works and what doesn't to seeing actual numbers and behaviour. It's awesome.

Manual instrumentation

One problem with using undici is that it uses the net module, not the http module, making it difficult to instrument. If we use undici as our request client, we won't see any spans for https get in Zipkin. If I enable OTel's NetInstrumentation, there'll be spans, but they will be for TCP socket connection events, not for a complete request/response cycle. So I did some manual instrumentation to mark the request start and end, by wrapping each external call in its own custom span:

```javascript
const { request } = require("undici");

function wrapExternalCallInSpan(name, url, callback) {
  const tracer = openTelemetry.api.trace.getTracer("tentacle-engine");
  const span = tracer.startSpan(name, {}, openTelemetry.api.context.active());
  span.setAttribute("external.url", url);
  const context = openTelemetry.api.trace.setSpan(openTelemetry.api.context.active(), span);
  return openTelemetry.api.context.with(context, callback, undefined, span);
}

// Promise version
const makeRequest = (url) => {
  return wrapExternalCallInSpan("first external call", url, (span) => {
    return request(url)
      .then((response) => {
        span.setAttribute("external.status_code", response.statusCode);
        return response.body.text();
      })
      .catch(handleError)
      .finally(() => span.end());
  });
};

// async/await version
const makeRequest2 = (url) => {
  return wrapExternalCallInSpan("second external call", url, async (span) => {
    try {
      const response = await request(url);
      span.setAttribute("external.status_code", response.statusCode);
      return response.body.text();
    } catch (e) {
      return handleError(e);
    } finally {
      span.end();
    }
  });
};
```

And we've got this. Now, even without auto-instrumentation for undici, we can still get a good overview of our requests. Even better, if we switch back to got, we see the https get spans nested under our custom spans, which gives an insight into how much time was actually spent in the request versus in handling the response. (PS: I'm naming the spans things like "first external call" here, but in my real codebase, they're named after what the request does, e.g. "check API status", "fetch user list".)

Capturing errors

I mentioned earlier that I'd have preferred to use Sentry. The reason for that, besides UX, is correlation. When an uncaught exception happens in my app, I'm able to view it in Sentry. However, I might want to view more details about what happened in that request. Sentry allows you to do this by attaching context, but sometimes I might need more. I might want to investigate a report deeper, for instance, to see where the error was caused (my main app or the internal service), or when the error occurred (was it before or after the first set of external calls?), or how long the calls took and what they returned. So it's often ideal to have both error monitoring and tracing in the same place. But I can't, so the next best thing is to make it easy to correlate. To do that, I'll do two things: add the trace ID to the Sentry context, so I can copy it from Sentry and look it up in Zipkin (and vice versa), and add some basic error details to the OTel trace, so I can see the error info right there.

```javascript
const tracing = require("./tracing");

tracing.start().then(() => {
  const express = require("express");
  const app = express();

  // Custom Express middleware: add the current trace ID to the Sentry context
  app.use((req, res, next) => {
    const { traceId } = getCurrentSpan().spanContext();
    Sentry.setContext("opentelemetry", { traceId });
    next();
  });

  app.post("/fetch", /* ... */);

  // Error handler: store the error details as part of the trace
  app.use(function onError(err, req, res, next) {
    const currentSpan = getCurrentSpan();
    const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
    currentSpan.setAttributes({
      [SemanticAttributes.EXCEPTION_MESSAGE]: err.message,
      [SemanticAttributes.EXCEPTION_TYPE]: err.constructor.name,
      [SemanticAttributes.EXCEPTION_STACKTRACE]: err.stack,
    });
    currentSpan.setStatus({
      code: require("@opentelemetry/sdk-node").api.SpanStatusCode.ERROR,
      message: "An error occurred",
    });
    res.status(500).send("Something broke");
  });

  // ...gracefulShutdown...
});

function getCurrentSpan() {
  const openTelemetry = require("@opentelemetry/sdk-node");
  return openTelemetry.api.trace.getSpan(openTelemetry.api.context.active());
}
```

Let's test this out. I'll enable Sentry on my local machine and add a line that crashes (console.log(thisVariableDoesntExist)) somewhere in my request handler. Here we go; here's the exception context in Sentry. And we can take that trace ID and search for it in Zipkin (top right). You can see the extra exception attributes we added to the span in the right panel. And if we scroll further down, you'll see that the rest of the usual span attributes are there as well, allowing us to glean more information about what was going on in that request. On the left, you can see a breakdown of all the action that happened, so we know at what stage the error happened. (There's an extra https post from the request to Sentry's API, and it happens after the response has been sent.) We've successfully tied our Sentry exceptions to our OpenTelemetry traces.

Thinking about going live

We could stop here, since we've gained some useful insights, but tracing is best in production, because then you're not just looking at how you're using the app, but at how real-world users are using it. In production, there are more things to consider. Is your app running on a single server or across multiple servers? Where is your backend (Zipkin) running? Where is it storing its data, and what's the capacity? For me, the two biggest concerns were storage and security. My app runs on a single server, so obviously Zipkin was going to be there as well. Storage: I would have to write a script to monitor and prune the storage at intervals. I could, but meh. Security: I would need to expose Zipkin on my server to my local machine over the public Internet. The easiest secure way would be whitelisting my IP to access the Zipkin port each time I want to check my traces. Stressful. Honestly, I don't like self-hosting things, but I was willing to try. Luckily, I found out that some cloud vendors allow you to send your OTel traces to them directly. New Relic and Grafana Cloud both offer free plans with a decent amount of ingest and retention. I decided to go with New Relic because their traces product is more mature and has more features I need, like viewing traces as graphs and filtering by span tags.

This complicates things a bit. There's no direct exporter library to New Relic, so we'll have to use yet another library: the OpenTelemetry Collector. There's a good reason we won't export directly to New Relic: it will have an impact on our application if we're making HTTP requests to New Relic's API after every request. The OTel Collector runs as a separate agent on your machine; the instrumentation libraries will send traces to the collector, which will asynchronously export them to New Relic (or wherever). Here's what our tracing.js looks like now:

```javascript
// ...all those other requires...
const { CollectorTraceExporter } = require("@opentelemetry/exporter-collector");

const sdk = new openTelemetry.NodeSDK({
  // ...other config items...
  traceExporter: new CollectorTraceExporter({
    // the Collector's ingestion endpoint (4318 is the standard OTLP/HTTP port)
    url: "http://localhost:4318/v1/traces",
  }),
});
```

To test this locally, we'll run the OTel Collector via Docker. First, we'll create a config file that tells it to receive from our OTel libraries via the OpenTelemetry Protocol (OTLP) and export to our local Zipkin (for now):

```yaml
receivers:
  otlp:
    protocols:
      http:

exporters:
  # this tells the collector to use our local Zipkin running on localhost
  zipkin:
    endpoint: http://localhost:9411/api/v2/spans

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
```

Then we start the OTel Collector container:

```shell
docker run --name otelcol -p 4318:4318 \
  -v $(pwd)/otel-config.yaml:/etc/otel/config.yaml \
  otel/opentelemetry-collector
```

This works! Our traces still show up in Zipkin, but now they're sent by the collector, not directly from our libraries. Switching to New Relic was pretty easy; we change our collector config to point to New Relic's ingest endpoint, with my API key:

```yaml
# ...other config...
exporters:
  otlp:
    endpoint:   # New Relic's OTLP ingest endpoint
    headers:
      api-key: myApiKey

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```

And when we restart the collector and make some requests to our API, we can see the traces in New Relic.

Sampling

We're almost ready to go live, but we have to enable sampling. Sampling means telling OpenTelemetry to only keep a sample of our traces, so if we get a flood of requests in an hour, when we check Zipkin, we might see only a fraction of them. Sampling helps you manage your storage usage, and potentially reduces the performance impact of tracing every request. If we recorded every span from every request in production on a busy service, we'd soon have MBs or GBs of data. There are different kinds of sampling; here's what I'm going with:

```javascript
// ...all those other requires...
const {
  ParentBasedSampler,
  TraceIdRatioBasedSampler,
  AlwaysOnSampler,
} = require("@opentelemetry/core");

const sdk = new openTelemetry.NodeSDK({
  // ...other config items...
  sampler:
    process.env.NODE_ENV === "development"
      ? new AlwaysOnSampler()
      : new ParentBasedSampler({
          // illustrative ratio; the exact value did not survive the feed's formatting
          root: new TraceIdRatioBasedSampler(0.2),
        }),
});
```

I'm using a combination of samplers here. In development mode, we want to see all our traces, so we use the AlwaysOnSampler. In production, the TraceIdRatioBasedSampler will keep only a fixed fraction of our traces; if a batch of requests comes in, it will only trace that fraction of them. But remember that our service will be called by another app, which may have its own trace that may or may not be kept. The ParentBasedSampler says: if there's an incoming trace from another service that is being kept, then keep this trace too. That way, every trace from the main app that makes a request to tentacle-engine will have that child trace present as well.

Switching to tail sampling

The problem with our current sampling approach is that the keep/drop decision is made at the start of the trace ("head sampling"). The benefit of this is that it saves us from collecting unneeded data during the request, since we already know the trace will be dropped. But what about traces where an exception happens? I want to always have those traces, so I can look deeper at what happened around the error. Since there's no way to know whether an error will happen at the start of a trace, we have to switch to tail sampling: making the keep/drop decision at the end of the trace. So we can say, "if this request had an exception, then definitely keep the trace; otherwise, do the ratio thing". Here's how I did this. First, disable sampling in the OTel JS agent (use the AlwaysOnSampler, or remove the sampler key we added above). Next, update the OTel collector config to handle the sampling:

```yaml
# ...other config...
processors:
  groupbytrace:
    wait_duration:        # value lost in the feed's formatting
    num_traces:           # value lost in the feed's formatting
  probabilistic_sampler:
    sampling_percentage:  # value lost in the feed's formatting

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [groupbytrace, probabilistic_sampler]
      exporters: [otlp]
```

Essentially, we've moved our ratio config into the collector's probabilistic_sampler config. The groupbytrace processor makes sure all child spans of a trace are included. Finally, in our error handler, we add this:

```javascript
app.use(function onError(err, req, res, next) {
  // ...
  // sampling.priority is a convention supported by the probabilistic sampler;
  // setting it to 1 tells the sampler to override the ratio and keep this trace
  currentSpan.setAttributes({ "sampling.priority": 1 });
  // ...
});
```

Batching

One final thing we need to do before deploying is batch our exports. By default, as soon as a request ends, its trace is sent to Zipkin. In production, that might be unnecessary overhead, so we'll send them in batches:

```javascript
// ...all those other requires...
const { BatchSpanProcessor } = require("@opentelemetry/sdk-trace-base");

const exporter =
  process.env.NODE_ENV === "development"
    ? new ZipkinExporter()
    : new CollectorTraceExporter({ url: "http://localhost:4318/v1/traces" });

const sdk = new openTelemetry.NodeSDK({
  // ...other config items...
  spanProcessor: new BatchSpanProcessor(exporter),
});
```

The BatchSpanProcessor will wait for a bit to collect as many spans as it can (up to a limit) before sending to the backend.

Going live (finally!)

To go live, we need to set up the OpenTelemetry Collector on the server:

```shell
wget <otel-contrib-collector .deb release>   # download URL elided in the feed
sudo dpkg -i otel-contrib-collector_amd64.deb
```

Running this installs and starts the otel-contrib-collector service. Then I copy my config to /etc/otel-contrib-collector/config.yaml and restart the service, and we're good. Now we deploy our changes, and we can see traces from production on New Relic.

Reflection

I still have to write another article about experimenting with both got and undici in production, but I've got thoughts on OpenTelemetry. First, the things I don't like:

- Asynchronous initialization (tracing.start().then()) is a pain. Other APM vendors know this and made their setup synchronous.
- There's less abstraction and more verbosity. Look at the things we've had to deal with in the tracing.js: processors, exporters, resources, semantic conventions.
- Related to the above: there are too many packages to install to get a few things working. Worse, there's a compatibility matrix that expects you to check different versions of your tools. Additionally, the package structure is unclear; it's not always certain why a certain export belongs to a certain package. And a lot of exports have been moved from one package to another, so old code examples are incorrect.
- Confusing documentation. A lot of the docs still reference old links and deprecated packages. Some things are just not covered, and I had to read issues, type definitions and source code to figure things out. Another thing that confused me was that there are two different ways to go about tracing with OTel JS (the simpler way we used here vs a more manual way), but this isn't mentioned anywhere.

I feel bad complaining, because the OpenTelemetry ecosystem is huge (API, protocols, documentation, collector, libraries for different languages, community), and it takes a massive amount of effort to build and maintain this for free; you can tell the maintainers are doing a lot. Which is why, despite all the rough edges, I still like it. It's a good idea, and it's pretty cool how I can wire different things together to explore my data. Once you get over the rough patches, it's a pretty powerful set of tools. 2021-10-08 17:44:08
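The article defers the got-vs-undici production experiment to a future post, describing only the idea: a switch that randomly picks one of the two libraries per request and records timings for later comparison. Here is a minimal sketch of that pattern; the client functions are hypothetical stand-ins (in practice they would be thin wrappers around got and undici), and the structure is my assumption, not the author's actual implementation.

```javascript
// Sketch: randomly route each call to one of several client implementations
// and record per-client timings, so their real-world performance can be
// compared later (in the spirit of GitHub's Scientist).
function makeExperiment(clients) {
  // one timing bucket (array of elapsed milliseconds) per client
  const timings = Object.fromEntries(Object.keys(clients).map((k) => [k, []]));

  async function run(...args) {
    const names = Object.keys(clients);
    // pick a client uniformly at random for this call
    const name = names[Math.floor(Math.random() * names.length)];
    const start = process.hrtime.bigint();
    try {
      return await clients[name](...args);
    } finally {
      // record elapsed milliseconds for whichever client handled this call
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      timings[name].push(elapsedMs);
    }
  }

  return { run, timings };
}

// Usage: after enough traffic, compare the timing buckets per client.
const experiment = makeExperiment({
  got: async (url) => `got:${url}`,       // stand-in for a got-based client
  undici: async (url) => `undici:${url}`, // stand-in for an undici-based client
});
```

Keeping the routing inside one wrapper means the rest of the endpoint code stays unchanged while the experiment runs, which is what makes it safe to roll back.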
海外TECH DEV Community CSS positions: Everything you need to know https://dev.to/thatanjan/css-positions-everything-you-need-to-know-2ng4

CSS positions: Everything you need to know

The CSS position property controls how an element is positioned in the document. I have already created a video about it on my YouTube channel; check that out for more details. If you like the video, please like, share and subscribe to my channel. To apply CSS positioning, you use the position property, and you then adjust the element's position with the top, bottom, left and right properties. There are five types of CSS position: static, relative, absolute, fixed and sticky.

Static

The static position is the default behavior: the element is always positioned according to the normal flow of the page. (Note: the header should be static; sorry for my mistake.) The demo markup and styles look like this (the exact color and size values did not survive this feed's formatting, so they are shown as placeholders):

```html
<h1>static</h1>
<div class="outer-parent">
  outer parent
  <div class="parent">
    parent
    <div class="children">children</div>
  </div>
</div>
```

```css
/* placeholder values: the originals were stripped out of this feed */
* { color: …; }
h1 { color: …; font-size: …rem; margin-bottom: …rem; }
.outer-parent { background: …; width: …vw; height: …vh; padding: …rem; font-size: …rem; }
.parent { background: …; height: …vh; margin-top: …rem; }
.children { background: …; height: …vh; color: …; margin-top: …rem; }
```

Relative

The relative position is almost the same as static, but you can shift the element from its normal position with the properties mentioned above:

```css
.children {
  /* …other declarations… */
  position: relative;
  top: …px;
  left: …px;
}
```

(By the way, this blog was originally published on the Cules Coding website. I would be glad if you give it a visit.)

Absolute

Unlike relative, an absolutely positioned element is positioned relative to its nearest relatively positioned parent. If it doesn't find any, then it is positioned relative to the document body. It is removed from the flow of the webpage, and without a relative parent it will also be scrolled with other elements:

```css
/* without a relative parent */
.children {
  position: absolute;
  top: …px;
  left: …px;
}

/* with a relative parent */
.parent {
  position: relative;
}
.children {
  position: absolute;
  top: …px;
  left: …px;
}
```

Fixed

fixed is similar to absolute, with some differences: the element is positioned relative to the document body, stays fixed inside the viewport, and will never be scrolled:

```css
.children {
  position: fixed;
  top: …px;
  left: …;
}
```

Sticky

A sticky element toggles between relative and fixed. It starts as relative; when it is scrolled up or down and meets the offset you give, it turns fixed. If the parent passes out of the viewport, the element scrolls away with it; if the parent is the document body, it will always stay fixed:

```css
.children {
  position: sticky;
  top: …px;
}
```

Shameless plug

I have made a video about how to build a carousel postcard with React, Material UI and Swiper.js. If you are interested, you can check the video. You can also demo the application from here. Please like and subscribe to Cules Coding; it motivates me to create more content like this. If you have any questions, please comment down below. You can reach out to me on social media as thatanjan. Stay safe. Goodbye.

About me

Why do I do what I do? The Internet has revolutionized our life, and I want to make the internet more beautiful and useful. What do I do? I ended up being a full-stack software engineer. What can I do? I can develop complex full-stack web applications, like social media applications or e-commerce sites. What have I done? I have developed a social media application called Confession; the goal of this application is to help people overcome their imposter syndrome by sharing our failure stories. I also love to share my knowledge, so I run a YouTube channel called Cules Coding, where I teach people full-stack web development, data structures, algorithms and much more. So subscribe to Cules Coding so that you don't miss the cool stuff.

Want to work with me? I am looking for a team where I can show my ambition and passion and produce great value for them. Contact me through my email or any social media as thatanjan. I would be happy to be in touch with you.

Contacts
Email: thatanjan@gmail.com
LinkedIn: thatanjan
Portfolio: anjan
GitHub: thatanjan
Instagram (personal): thatanjan
Instagram (YouTube channel): thatanjan
Twitter: thatanjan
Facebook: thatanjan

Blogs you might want to read: Eslint and Prettier setup with TypeScript and React; What is client-side rendering?; What is server-side rendering?; Everything you need to know about the tree data structure; Reasons why you should use Next.js. Videos you might want to watch: 2021-10-08 17:36:29
海外TECH DEV Community Building an Astro Website with WordPress as a Headless CMS https://dev.to/asayerio_techblog/building-an-astro-website-with-wordpress-as-a-headless-cms-47mo Building an Astro Website with WordPress as a Headless CMSby author Chris BongersHi everyone I m sure you ve heard of WordPress and you may or may not like it In this article I ll show you how you can leverage only the good parts of WordPress by using it as a headless CMS for your next project As the front end we ll be using the new popular static site generator Astro Let s dive a bit into the details of what we are working with What is Astro Astro is a Static Site Generator SSG meaning it can output a static HTML website You might be wondering ok but why do we need that An SSG is excellent since it outputs static HTML which in return means your website will be blazing fast There is nothing faster than a plain HTML website We often want dynamic parts and components on our website That s where SSG comes in handy Astro is quite the new kid on the block yet very powerful and full of potential Here are some benefits to using Astro SEO Focused out of the boxBYOF Bring your own framework approach bring which ever framework you like to work in and Astro makes it workPartial hydration making components render at the right timeLots of built in supportRouting is very extendedActive communityThese are just some of the reasons Astro is pretty amazing if you ask me But if you wonder how Astro compares to other tools check out this amazing document they set up What is a headless CMS Now that we have the front end part explained let s take a moment to check out what precisely is a Headless CMS I m sure you ve heard about WordPress the bloated and well used CMS system WordPress is an absolute package monster allowing people to manage their websites with little developer experience The development community often dislikes WordPress because it gets a bit too bloated Meaning the websites are slow and full of 
stuff we don't need. That's where WordPress as a headless system comes in. A headless system means you can use the entire backend system of WordPress, but you don't have to use the front-end output. Instead, we use an API to query our data and use it in another system; in our case, that would be an Astro front end. For the API system, we'll use GraphQL as the query language, but more on that in the steps below. Setting up WordPress as a headless CMS. Before we continue, let's set up WordPress, and especially set it up as a headless CMS. The easiest way to set up WordPress on your local machine is to use a Docker image. If you don't have Docker Desktop installed, follow the guide on the Docker website. Next up, create a new folder and navigate to it: mkdir wordpress && cd wordpress. Then create a docker-compose.yml file and fill out the following details: a version; a db service (image: mariadb, volumes: db_data:/var/lib/mysql, restart: unless-stopped, ports, and environment variables MYSQL_ROOT_PASSWORD: rootpress, MYSQL_DATABASE: wordpress, MYSQL_USER: wordpress, MYSQL_PASSWORD: wordpress); a wordpress service (depends_on: db, image: wordpress:latest, volumes: wordpress_data:/var/www/html, ports, restart: always, and environment variables WORDPRESS_DB_HOST: db, WORDPRESS_DB_USER: wordpress, WORDPRESS_DB_PASSWORD: wordpress, WORDPRESS_DB_NAME: wordpress); and named volumes db_data and wordpress_data. Then we can spool up our Docker image by running the following command: docker-compose up. Once it's up, you should see the following in your Docker Desktop client. The next step is to visit our WordPress installation and follow the install steps. You can find your WordPress installation at http://localhost, where you should be welcomed by the WordPress install guide. To set it up as a headless CMS, we need to install the WPGraphQL plugin; follow the install guide of the plugin. Once it's installed, we even get this fantastic GraphQL editor to test out our queries, and we get a GraphQL endpoint available at the following URL: http://localhost/graphql. While you are in the WordPress section, create some demo
pages. Next up, it's time to set up our Astro project. Setting up the Astro project. Please create a new folder and navigate to it: mkdir astro-wordpress && cd astro-wordpress. Then we can install Astro by running the following command: npm init astro. You can choose the starter template to get started with. Next up, run npm install to install all dependencies, and start up your Astro project by running npm start. You can now visit your front end at http://localhost. Adding Tailwind CSS as our styling framework. Right before we move on to loading our WordPress data, let's install Tailwind CSS, as it will make our lives easier in styling the website. Installing Tailwind in an Astro project is pretty easy; let's see what needs to happen step by step. Install Tailwind: npm install -D tailwindcss. Create a tailwind.config.js file: module.exports = { mode: 'jit', purge: ['./public/**/*.html', './src/**/*.{astro,js,jsx,svelte,ts,tsx,vue}'] }. Enable the Tailwind config in your astro.config.mjs file: export default { devOptions: { tailwindConfig: './tailwind.config.js' } }. And lastly, we need to create a styles folder in the src directory. Inside it, create a global.css file and add the following contents: @tailwind base; @tailwind components; @tailwind utilities. To use this style in our pages, we need to load it like so: <link rel="stylesheet" type="text/css" href={Astro.resolve('../styles/global.css')}>. Installing the Tailwind typography plugin. Seeing as our content comes from WordPress, we can leverage the Tailwind Typography plugin so we don't have to style things manually. Run the following command to install the plugin: npm install @tailwindcss/typography. Then open your tailwind.config.js file and add the plugin: module.exports = { mode: 'jit', purge: ['./public/**/*.html', './src/**/*.{astro,js,jsx,svelte,ts,tsx,vue}'], plugins: [require('@tailwindcss/typography')] }. And that's it: we can now use Tailwind and its fantastic typography plugin. Creating a .env file. Since our endpoint might vary depending on our environment, let's install the dotenv package: npm install -D dotenv. Then we can create a .env file that will contain
our WordPress GraphQL endpoint: WP_URL=http://localhost/graphql. Open Source Session Replay. Debugging a web application in production may be challenging and time-consuming. OpenReplay is an open-source alternative to FullStory, LogRocket and Hotjar. It allows you to monitor and replay everything your users do and shows how your app behaves for every issue. It's like having your browser's inspector open while looking over your user's shoulder. OpenReplay is the only open-source alternative currently available. Happy debugging, for modern frontend teams. Start monitoring your web app for free. Creating the API calls in Astro. Alright, we have our WordPress set up and our basic Astro website up and running; it's time to bring these two together. Create a lib folder in the src directory and create a file called api.js. This file will contain our API calls to the WordPress GraphQL API endpoint. The first thing we need to do in this file is load our environment: import dotenv from 'dotenv'; dotenv.config(); const API_URL = process.env.WP_URL. Then we need to create a basic fetchAPI call that will execute our GraphQL queries. This generic call will handle the URL and the actual posting: async function fetchAPI(query, variables) { const headers = { 'Content-Type': 'application/json' }; const res = await fetch(API_URL, { method: 'POST', headers, body: JSON.stringify({ query, variables }) }); const json = await res.json(); if (json.errors) { console.log(json.errors); throw new Error('Failed to fetch API'); } return json.data; }. Then let's create a function that can fetch all our WordPress pages that have a slug: export async function getAllPagesWithSlugs() { const data = await fetchAPI(`{ pages(first: …) { edges { node { slug } } } }`); return data.pages; }. As you can see, we pass a GraphQL query to our fetchAPI function and return all the pages we get in return. Remember, you can try out these GraphQL queries in the WordPress plugin's GraphQL viewer. The above query will only give us the slugs for each page. We can go ahead and create a detailed call that can retrieve a page's content based on
its slug: export async function getPageBySlug(slug) { const data = await fetchAPI(`{ page(id: "${slug}", idType: URI) { title content } }`); return data.page; }. Rendering WordPress pages in Astro. Now that we have these functions set up, we need to create these pages dynamically in our front-end Astro project. Remember how Astro outputs static HTML? That means we need a way to retrieve these pages and dynamically build them. Luckily, Astro can do just that for us. To create a dynamic page, we must create a file called [slug].astro in our pages directory. As this is an Astro file, it comes in two sections, the code and the HTML; the code is wrapped in frontmatter (three dashes), and it looks like this: --- Code --- <html> <h1>HTML</h1> </html>. Let's first import the two functions we need from our API file: import { getAllPagesWithSlugs, getPageBySlug } from '../lib/api'. Then, Astro comes with a getStaticPaths function that enables us to create dynamic pages. Inside this function, we can wrap all our pages like so: export async function getStaticPaths() { const pagesWithSlugs = await getAllPagesWithSlugs(); }. And then we can map those to return a slugged page for each of our WordPress pages: export async function getStaticPaths() { const pagesWithSlugs = await getAllPagesWithSlugs(); return pagesWithSlugs.edges.map(({ node }) => { return { params: { slug: node.slug } }; }); }. You can see the file name must match the params there: as we have slug as the filename, the params must also be slug. Then the last thing we need is to fetch the current page based on the slug: const { slug } = Astro.request.params; const page = await getPageBySlug(slug). Then we can move to the HTML part to render the page: <html lang="en"> <head> <meta charset="UTF-8"> <title>{page.title}</title> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" type="text/css" href={Astro.resolve('../styles/global.css')}> </head> <body> <div class="flex flex-col p-…"> <div class="mb-… text-xl font-bold">{page.title}</div> <article class="prose lg:prose-xl">{page.content}</article
> </div> </body> </html>. You should now be able to visit any of your slugs; let's see my privacy policy page, for instance. Loading the primary WordPress menu in Astro. It's pretty cool that we have these pages at our disposal, but we can't tell the user to type in the URLs they want to visit. So let's create a primary menu in WordPress and use that instead. First, head over to your WordPress admin panel and find the Appearance > Menu section. Add a new menu; you can give this any name you want. However, for the display location, choose Primary menu. You can then go ahead and add some pages to this menu. The next thing we need to do is query this menu in our lib/api.js file in our front-end project: export async function getPrimaryMenu() { const data = await fetchAPI(`{ menus(where: {location: PRIMARY}) { nodes { menuItems { edges { node { path label connectedNode { node { ... on Page { isPostsPage slug } } } } } } } } }`); return data.menus.nodes; }. To use this, let's create a new component that we can re-use; remember, that's one of the powers Astro brings us. Create a Header.astro file in your components directory. In there, let's first go to the code section: import { getPrimaryMenu } from '../lib/api'; const menuItems = await getPrimaryMenu(). This will retrieve all the menu items in the primary menu we just defined. Next up, the HTML section for this: <nav class="flex flex-wrap items-center justify-between p-… bg-blue-… shadow-lg"> <a href="/" class="cursor-pointer p-… ml-… text-white">AstroPress</a> <ul class="flex items-center justify-end flex-grow"> {menuItems.edges.map((item) => <li key={item.node.path}> <a href={item.node.connectedNode.node.slug} class="cursor-pointer p-… ml-… text-white">{item.node.label}</a> </li> )} </ul> </nav>. To use this component and see it in action, let's open up the [slug].astro file and import it in our code section: import Header from '../components/Header.astro'. Then we can use it in our HTML section by adding the following code in our body tag: <body> <Header /> <!-- other code --> </body>. And if we refresh our
project, we have a super cool menu! Conclusion. Today we learned how to set up WordPress as a headless CMS and how to load its content through a GraphQL endpoint in an Astro website. For me, this brings the best of two worlds: WordPress as an established CMS system, something we don't want to be rebuilding from scratch, and Astro as the SSG that outputs the fastest possible website for us. From here the options are endless, as you can retrieve posts, custom elements, and more from WordPress. If you are interested, you can find the complete code on GitHub, or check out the sample website here. 2021-10-08 17:33:00
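The data-fetching flow the Astro article above walks through (a generic GraphQL POST helper in api.js, then mapping WordPress page slugs into the `{ params: { slug } }` shape getStaticPaths expects) can be sketched as follows. This is a minimal sketch, not the article's exact code: the endpoint URL, the sample payload, and the toStaticPaths helper name are assumptions, and fetch is stubbed so the example runs without a live WordPress install.

```javascript
// Minimal sketch of the article's api.js flow against a WP-GraphQL endpoint.
// The endpoint and payload below are assumed; fetch is stubbed to run offline.
const API_URL = "http://localhost/graphql";

// Stub standing in for a real fetch() against WPGraphQL.
async function stubFetch(url, options) {
  return {
    json: async () => ({
      data: { pages: { edges: [{ node: { slug: "privacy-policy" } }] } },
    }),
  };
}

// Generic helper: POST a GraphQL query, fail loudly on GraphQL errors.
async function fetchAPI(query, variables, fetchImpl = stubFetch) {
  const res = await fetchImpl(API_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  });
  const json = await res.json();
  if (json.errors) throw new Error("Failed to fetch API");
  return json.data;
}

// Fetch every page's slug (the article's getAllPagesWithSlugs).
async function getAllPagesWithSlugs() {
  const data = await fetchAPI(`{ pages { edges { node { slug } } } }`);
  return data.pages;
}

// Shape each edge the way getStaticPaths in [slug].astro expects.
function toStaticPaths(pages) {
  return pages.edges.map(({ node }) => ({ params: { slug: node.slug } }));
}

getAllPagesWithSlugs()
  .then((pages) => console.log(toStaticPaths(pages)))
  .catch(console.error);
```

In a real project you would drop the stub and let fetchAPI use the global fetch (or node-fetch) against your WPGraphQL endpoint; the mapping logic stays the same.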
海外TECH DEV Community How to setup Appwrite on Ubuntu https://dev.to/noviicee/how-to-setup-appwrite-on-ubuntu-3j67 How to setup Appwrite on Ubuntu. Setting up Appwrite on any operating system or kernel is pretty easy. Here we are going to go through an easy and simple method to set up Appwrite on a Linux kernel. Well, I use the Ubuntu operating system, so let's get started with setting up Appwrite on Ubuntu. Before getting started, I would like to give a brief intro to what Appwrite is. Appwrite is a self-hosted solution that provides developers with a set of easy-to-use and easy-to-integrate REST APIs to manage their core backend needs. Basically, it is a new open-source, end-to-end backend server for front-end and mobile developers that allows you to build apps much faster. Its main goal is to abstract and simplify common development tasks behind REST APIs and tools, helping developers build advanced apps faster. Requirements. The only system requirements to install Appwrite are a small CPU core and GB of RAM, and an operating system that supports Docker. If you have these in place, let us start then. Note: if you are migrating from an older version of Appwrite, you need to follow the migration instructions, the steps of which are provided in the last part of this article (Upgrading from Older Versions). So feel free to jump to the last part if you already have a version of Appwrite installed and are planning to migrate to another version. How to setup Appwrite on Ubuntu. Now, there can be multiple ways to install Appwrite, but the easiest way to start running an Appwrite server is by running the Docker installer tool from the terminal. Before running the installation command, make sure you have the Docker CLI installed. If you don't have it installed, here's how you can do it: refer to it and you are good to go. After that, the process is very simple. You just need to open your terminal and run the following bash command: docker run -it --rm --volume /var/run/docker.sock:/var/run/docker.sock --volume "$(pwd)"/appwrite:/usr/src/code/
appwrite:rw --entrypoint=install appwrite/appwrite. You may change the Appwrite version accordingly. I followed the same procedure, and the results were like so. Basically, here we are making use of Docker to install the application on our local machine. It is simple, fast, and easy to run and use. If you are confused as to what Docker is, then you may want to refer here. Well, the entire process took me around minutes when I did everything from scratch after installing the Docker CLI (my net was also slow then). Upgrading From Older Versions. If you are migrating from an older version of Appwrite, you need to follow these steps. In order to upgrade your Appwrite server from an older version, you should use the Appwrite migration tool after you have installed the new version. The migration tool will adjust your Appwrite data to the new version's structure, to make sure your Appwrite data is compatible with any internal changes. The first step is to install the latest version of Appwrite. Go to the directory where you first installed Appwrite and find the appwrite directory. Inside the directory there will be a docker-compose.yml file. Now we need to execute the following command from the same appwrite directory: docker run -it --rm --volume /var/run/docker.sock:/var/run/docker.sock --volume "$(pwd)"/appwrite:/usr/src/code/appwrite:rw --entrypoint=install appwrite/appwrite. What this command will do is pull the docker-compose.yml file for the new version of Appwrite, and after that it will perform the installation. After the command is successfully executed and the setup is completed, we can verify that we have the latest version of Appwrite. We can make use of the following command to do so: docker ps | grep appwrite/appwrite. Make sure that the STATUS doesn't have any errors and all the appwrite/appwrite containers have the same latest version. We can now start with the migration part. For that, we will again have to navigate to the appwrite directory where our docker-compose.yml is present and run the following command: cd
appwrite, and then run docker-compose exec appwrite migrate. Once the migration process has completed successfully, we are all set to use the latest version of Appwrite. You can also have a look at the official migration instructions present on the official documentation page of Appwrite. All done! Now you are all set to make your first API call. Thanks for reading! All reviews and feedback are welcome. 2021-10-08 17:23:28
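Before making that first API call, it can help to confirm the server is actually answering over HTTP, not just that the containers are up. The sketch below pings Appwrite's health endpoint; the base URL assumes a default local install exposing /v1, the endpoint path follows Appwrite's Health API docs, and fetch is stubbed so the sketch runs without a live server.

```javascript
// Sketch: check that an Appwrite server is answering via its health endpoint.
// APPWRITE_ENDPOINT is an assumption for a default local install; swap
// stubFetch for the real fetch() once your containers are running.
const APPWRITE_ENDPOINT = "http://localhost/v1";

// Stub standing in for fetch() against a running Appwrite server; the
// response body here is made up for the example.
async function stubFetch(url) {
  return { ok: true, status: 200, json: async () => ({ status: "pass" }) };
}

async function checkHealth(fetchImpl = stubFetch) {
  const res = await fetchImpl(`${APPWRITE_ENDPOINT}/health`);
  if (!res.ok) throw new Error(`Appwrite not healthy: HTTP ${res.status}`);
  return res.json();
}

checkHealth()
  .then((body) => console.log("health:", body))
  .catch(console.error);
```

If this fails against a real install, re-run docker ps and check the container STATUS column as described above before debugging further.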
Apple AppleInsider - Frontpage News HyperDrive iMac Hub review: More ports on the front of the 24-inch iMac https://appleinsider.com/articles/21/10/08/hyperdrive-imac-hub-review-more-ports-on-the-front-of-the-24-inch-imac?utm_medium=rss HyperDrive iMac Hub review: More ports on the front of the 24-inch iMac. The 24-inch Apple Silicon iMac is an excellent machine with a shortage of ports overall and none on the front; Hyper has a solution to both problems with the HyperDrive iMac Hub. We love the new 24-inch iMac. The only drawback is the port situation. Read more 2021-10-08 17:13:29
海外TECH Engadget Kia's Sorento plug-in hybrid is racing in the 1,500-mile Rebelle Rally https://www.engadget.com/kia-sorento-phev-rebelle-rally-171654356.html?src=rss Kia's Sorento plug-in hybrid is racing in the 1,500-mile Rebelle Rally. This week the Rebelle Rally kicked off, with participants in the all-female race embarking on a 1,500-mile trek across the deserts of Nevada and California. Hyundai's Kia is fielding two modified Sorento plug-in hybrids as part of the event. In the spirit of the rally, the automaker asked LGE-CTS Motorsports, a female-owned shop in Southern California, to make the two vehicles race-ready. Each one features underbody armor to protect its vulnerable internal components. Additionally, the shop fitted both Sorentos with bumper guards, skid plates, and inch spacers to elevate them just a bit higher off the ground. They're riding on inch KMC matte-black wheels fitted with Hankook Dynapro AT tires. For carrying equipment, LGE-CTS removed the rear seats to make room for an interior cargo mounting system and added roof racks. Notably, the shop didn't modify the powertrain of either PHEV. We're starting to see more and more electric vehicles take part in endurance races like the Rebelle Rally. At the end of April, Volkswagen's ID SUV took part in the Mexican Rally. The company's Audi division is also getting ready to race a custom-built PHEV at the Dakar Rally at the start of next year. At this rate, it feels like it's only a matter of time before they become a more common sight at endurance races. 2021-10-08 17:16:54
News BBC News - Home Energy prices: Industry calls for government action https://www.bbc.co.uk/news/business-58846999?at_medium=RSS&at_campaign=KARANGA energy 2021-10-08 17:18:00
News BBC News - Home David Fuller admits killing two women in 1987 https://www.bbc.co.uk/news/uk-england-kent-58849085?at_medium=RSS&at_campaign=KARANGA bedsits 2021-10-08 17:04:55
News BBC News - Home Sarah Everard: Baroness Casey to lead review into Met Police https://www.bbc.co.uk/news/uk-58833349?at_medium=RSS&at_campaign=KARANGA murder 2021-10-08 17:30:31
News BBC News - Home Comedian Rosie Jones 'more determined' after abuse from Question Time viewers https://www.bbc.co.uk/news/entertainment-arts-58846736?at_medium=RSS&at_campaign=KARANGA appearance 2021-10-08 17:30:27
News BBC News - Home Fire breaks out at stadium where England will play Andorra in World Cup qualifier https://www.bbc.co.uk/sport/football/58849244?at_medium=RSS&at_campaign=KARANGA andorra 2021-10-08 17:40:46
Business Diamond Online - New Articles A sudden boost to your self-esteem!? What are the "morning habits" that raise your sense of happiness? - 宇宙人が教える ポジティブな地球の過ごし方 https://diamond.jp/articles/-/284281 relationships 2021-10-09 02:50:00
Business Diamond Online - New Articles What do top-class salespeople secretly do at "business dinners" with clients? - 超★営業思考 https://diamond.jp/articles/-/284114 What do top-class salespeople secretly do at "business dinners" with clients? From 超★営業思考 by Akitoshi Kanazawa, the "legendary salesman" who posted unprecedented, overwhelming results at Prudential Life Insurance. 2021-10-09 02:45:00
Business Diamond Online - New Articles What is the all-purpose rule for whipping up side dishes from whatever is in the fridge? - ぽんこつ主婦の高見えごはん https://diamond.jp/articles/-/284347 What is the all-purpose rule for whipping up side dishes from whatever is in the fridge? From ぽんこつ主婦の高見えごはん: the recipe book by cooking Instagrammer "Ponkotsu Shufu" (Aya Hashimoto), who has a large following, went into an immediate reprint on release. Using wallet-friendly ingredients such as chicken breast, pork offcuts, canned mackerel, eggs, and tofu, her recipes are said to be delicious, good-looking, cleverly twisted, and super easy to make. 2021-10-09 02:40:00
Business Diamond Online - New Articles Are you hurting yourself without realizing it!? What are the "thinking habits" of people who try too hard and slip into apathy? - 大丈夫じゃないのに大丈夫なふりをした https://diamond.jp/articles/-/283791 psychiatrist 2021-10-09 02:35:00
Business Diamond Online - New Articles Samsung's semiconductor boom shows resilience even past its peak - from WSJ https://diamond.jp/articles/-/284412 resilience 2021-10-09 02:01:00
