AWS |
New posts tagged lambda - Qiita |
Aggregating real-time IoT data with Lambda tumbling windows, Part 1: IoT Core preparation |
https://qiita.com/sugimount-a/items/66e5cbbbf678f9960555
|
iotcore |
2022-04-10 23:44:18 |
python |
New posts tagged Python - Qiita |
Importing Python files from a custom ROS package |
https://qiita.com/wish/items/2e8e5dbc5dd849e66600
|
import |
2022-04-10 23:57:38 |
python |
New posts tagged Python - Qiita |
[ABC247 E] Max Min: a rough explanation |
https://qiita.com/mo124121/items/80be01ab44ece7d090b5
|
abcemaxmin |
2022-04-10 23:24:27 |
js |
New posts tagged JavaScript - Qiita |
[Work in progress] Programming notes (JavaScript) |
https://qiita.com/tetsuya_tech/items/d2b6dd5c117ac5a021f9
|
javascript |
2022-04-10 23:47:32 |
js |
New posts tagged JavaScript - Qiita |
I built an otoshidama (New Year's gift money) calculator 2, in React.js |
https://qiita.com/taoka-toshiaki/items/4a7c9e0b66a6a1455455
|
reactjs |
2022-04-10 23:26:43 |
js |
New posts tagged JavaScript - Qiita |
Generative art: drawing the Ukrainian flag in an oil-painting style |
https://qiita.com/mimonelu/items/75bb1c232d4710f1a007
|
generativeart |
2022-04-10 23:01:18 |
Ruby |
New posts tagged Ruby - Qiita |
Types of unit test code |
https://qiita.com/Q-junior/items/fd8db2cdd8f1436607a2
|
expectat |
2022-04-10 23:50:47 |
AWS |
New posts tagged AWS - Qiita |
Aggregating real-time IoT data with Lambda tumbling windows, Part 1: IoT Core preparation |
https://qiita.com/sugimount-a/items/66e5cbbbf678f9960555
|
iotcore |
2022-04-10 23:44:18 |
AWS |
New posts tagged AWS - Qiita |
Steps for enabling SSL on AWS (from ELB configuration to adding a Route 53 record) |
https://qiita.com/sora32/items/b7abc7148b9deaeff979
|
redmine |
2022-04-10 23:18:25 |
Tech blogs |
Developers.IO |
After switching to an AWS CDK + React monorepo setup, SVG files started failing to load??? |
https://dev.classmethod.jp/articles/when-i-set-the-monolipo-configuration-of-the-aws-cdk-and-react-i-got-an-error-reading-the-svg-file/
|
awscdk |
2022-04-10 14:50:13 |
Overseas TECH |
MakeUseOf |
8 Ways to Fix the Windows Command Prompt When It’s Unresponsive |
https://www.makeuseof.com/windows-command-prompt-unresponsive-fix/
|
prompt |
2022-04-10 14:15:13 |
Overseas TECH |
DEV Community |
How to Create Lambda function URLs |
https://dev.to/lasanthasilva/how-to-create-lambda-function-urls-c88
|
How to Create Lambda function URLs

AWS recently introduced AWS Lambda Function URLs: built-in HTTPS endpoints for single-function microservices. This lets you configure an HTTPS endpoint for a Lambda function without using any other AWS service such as AWS API Gateway or an Application Load Balancer. First of all, go through the architecture diagram.

Create an IAM role for the Lambda function
In the AWS management console, search for and open the IAM service. Next, create an IAM role for the Lambda function. Use "AWS service" as the trusted entity type and "Lambda" as the use case. For permissions, use the AWSLambdaBasicExecutionRole AWS managed policy. Type the role name as function-url-role. Finally, click the "Create role" button.

Create the Lambda function
In the AWS management console, search for and open the Lambda service. Next, click the "Create function" button. Select "Author from scratch" and give the function the name function-url-demo. Select Python as the runtime. Expand "Execution role", select "Use an existing role", and then pick the role created in the previous step, function-url-role. After that, expand "Advanced settings", select "Enable Function URL", and set the auth type to NONE. Finally, click the "Create function" button.

Test the function
Add the following code to the lambda_function.py file in the Code source section:

    # lambda_function.py
    import json

    def lambda_handler(event, context):
        body = "Hello Lambda Function URL"
        statusCode = 200
        return {
            "statusCode": statusCode,
            "body": json.dumps(body),
            "headers": {"Content-Type": "application/json"},
        }

Click the "Deploy" button to deploy the code. After that we can test the function: go to the Test section and add a test event. Give the event the name "Test" and select the hello-world template. Click the "Save" button, then click the "Test" button, and you can see the output.

Test the function URL endpoint
You can use curl or Postman. You can find the function URL in the Function overview section, or go to the Configuration section and copy it. Paste the curl command into your terminal and you will see the response:

    curl -X GET <your function URL> -H "Content-Type: application/json"

In Postman, use the GET method and paste the function URL.

By deleting AWS resources you are no longer using, you can prevent unnecessary charges to your AWS account. Open the Functions page in the Lambda console, select the function, and go to the Actions menu. After that, click the "Delete" button.

Thanks for reading the article.

References |
2022-04-10 14:52:06 |
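A quick way to sanity-check a handler like the one described in the post above is to call it locally. This is a minimal sketch: the handler body mirrors what the article shows (with the 200 status assumed, since the garbled extract omits the number), and invoking it directly with an empty event and a None context is only an illustration, not how Lambda itself calls it.

```python
import json

# Minimal sketch of the handler from the article (lambda_function.py on
# Lambda's Python runtime); the 200 status code is an assumption.
def lambda_handler(event, context):
    body = "Hello Lambda Function URL"
    return {
        "statusCode": 200,
        "body": json.dumps(body),
        "headers": {"Content-Type": "application/json"},
    }

# Invoke it locally with an empty event and no context:
response = lambda_handler({}, None)
print(response["statusCode"])        # 200
print(json.loads(response["body"]))  # Hello Lambda Function URL
```

Calling the deployed Function URL with curl, as the article does, should return the same JSON body.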
Overseas TECH |
DEV Community |
(Part 2) Hate YAML? Build your next tool with HCL! |
https://dev.to/weakpixel/part-2-hate-yaml-build-your-next-tool-with-hcl-2b7i
|
(Part 2) Hate YAML? Build your next tool with HCL!

This is the second part of my HCL series. You can find the first part here: Part 1. In this second post of my HCL series I want to extend our example with Cobra, command-line variables, and functions.

Cobra
Cobra is my favorite library for building command-line tools. We start off with the example program from the first post (source). As I wrote before, I want to introduce you to the Cobra command-line tool. In order to use it, we have to add a new import:

    import (
        "fmt"
        "os"

        "github.com/spf13/cobra"
    )

Next, rename the main function to newRunCommand and refactor it to return a cobra.Command:

    func newRunCommand() *cobra.Command {
        // contains all variables given by the user with --var key=value
        vars := []string{}
        cmd := cobra.Command{
            Use:   "run",
            Short: "Executes tasks",
            RunE: func(cmd *cobra.Command, args []string) error {
                config := &Config{}
                err := hclsimple.Decode("example.hcl", []byte(exampleHCL), nil, config)
                if err != nil {
                    return err
                }
                for _, task := range config.Tasks {
                    fmt.Printf("Task: %s\n", task.Name)
                    for _, step := range task.Steps {
                        fmt.Printf("  Step: %s %s\n", step.Type, step.Name)
                        var runner Runner
                        switch step.Type {
                        case "mkdir":
                            runner = &MkdirStep{}
                        case "exec":
                            runner = &ExecStep{}
                        default:
                            return fmt.Errorf("unknown step type %q", step.Type)
                        }
                        diags := gohcl.DecodeBody(step.Remain, nil, runner)
                        if diags.HasErrors() {
                            return diags
                        }
                        err = runner.Run()
                        if err != nil {
                            return err
                        }
                    }
                }
                return nil
            },
        }
        // Define an optional --var flag for the command
        cmd.Flags().StringArrayVar(&vars, "var", nil, "Sets variable. Format: <name>=<value>")
        return &cmd
    }

The Use field describes the subcommand name. The Short field allows defining a short command description. RunE implements the execution of the subcommand; it contains our HCL parsing code. Since RunE allows us to return an error, we have also refactored the code to just return an error instead of using os.Exit. After that, we implement a new main function looking like:

    func main() {
        root := cobra.Command{Use: "taskexec"}
        root.AddCommand(newRunCommand())
        err := root.Execute()
        if err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
    }

The root command is just an empty cobra.Command. To the root command we add our subcommand with root.AddCommand(newRunCommand()). Let's try out what happens if we run our program:

    go run main.go
    Usage:
      taskexec [command]

    Available Commands:
      completion  Generate the autocompletion script for the specified shell
      help        Help about any command
      run         Executes tasks

    Flags:
      -h, --help   help for taskexec

Let's try to show the help for the subcommand:

    go run main.go run -h
    Executes tasks

    Usage:
      taskexec run [flags]

    Flags:
      -h, --help              help for run
          --var stringArray   Sets variable. Format: <name>=<value>

Great! Next we want to make use of the variables. To use variables in our HCL config, we must learn about hcl.EvalContext.

EvalContext
hcl.EvalContext allows us to define variables and functions:

    type EvalContext struct {
        Variables map[string]cty.Value
        Functions map[string]function.Function
    }

For now we focus on the variables. The Variables map allows us to define the variable name as key, and as value a cty.Value. The cty.Value is part of the github.com/zclconf/go-cty/cty package. The package provides a dynamic type system; you can read more about cty on the GitHub project. Let's come back to hcl.EvalContext: where is this context struct actually used? In our example code we have two instances:

    hclsimple.Decode("example.hcl", []byte(exampleHCL), nil, config)

and

    diags := gohcl.DecodeBody(step.Remain, nil, runner)

where the nil argument is the *hcl.EvalContext.

Variables
In our command we have defined a vars slice which contains the user-defined variables in the format --var key=value. So let's get started: create an hcl.EvalContext and populate it with the --var parameters from the command line:

    func newEvalContext(vars []string) (*hcl.EvalContext, error) {
        varMap := map[string]cty.Value{}
        for _, v := range vars {
            el := strings.Split(v, "=")
            if len(el) != 2 {
                return nil, fmt.Errorf("invalid format: %s", v)
            }
            varMap[el[0]] = cty.StringVal(el[1])
        }
        ctx := &hcl.EvalContext{}
        ctx.Variables = map[string]cty.Value{
            "var": cty.ObjectVal(varMap),
        }
        return ctx, nil
    }

We use the newEvalContext function in our subcommand to create the EvalContext, and use the context in all places where we decode the HCL document:

    RunE: func(cmd *cobra.Command, args []string) error {
        ctx, err := newEvalContext(vars)
        if err != nil {
            return err
        }
        config := &Config{}
        err = hclsimple.Decode("example.hcl", []byte(exampleHCL), ctx, config)
        // ...
        for _, task := range config.Tasks {
            fmt.Printf("Task: %s\n", task.Name)
            for _, step := range task.Steps {
                // ...
                diags := gohcl.DecodeBody(step.Remain, ctx, runner)
                // ...
            }
        }
        return nil
    },

And finally we change our exampleHCL to make use of variables:

    task "first-task" {
        step "mkdir" "build-dir" {
            path = var.buildDir
        }
        step "exec" "list-build-dir" {
            command = "ls ${var.buildDir}"
        }
    }

Let's try to execute the command without defining the buildDir variable:

    go run main.go run
    example.hcl: Unsupported attribute; This object does not have an attribute named "buildDir" (and other diagnostics)
    exit status 1

Good: it fails with a detailed error message. Now we try to execute the command with the needed variable:

    go run main.go run --var buildDir=build
    Task: first-task
      Step: mkdir build-dir
      Step: exec list-build-dir

And it works as expected. You can see the full source code here.

Functions
Next we want to explore how e.g. Terraform provides these nice inline functions which make life so much easier when dealing with input variables. It might not make much sense in our example, but let's try to implement a function that converts all cased letters into uppercase:

    helloValue = upper("hello World")

To implement a function, we must add a new module to our imports: github.com/zclconf/go-cty/cty/function. We have to use the function.Spec struct to create our function implementation with function.New:

    var upperFn = function.New(&function.Spec{
        // Define the required parameters
        Params: []function.Parameter{
            {
                Name:             "str",
                Type:             cty.String,
                AllowDynamicType: true,
            },
        },
        // Define the return type
        Type: function.StaticReturnType(cty.String),
        // Function implementation
        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
            in := args[0].AsString()
            out := strings.ToUpper(in)
            return cty.StringVal(out), nil
        },
    })

And last, we add the new function to our EvalContext:

    func newEvalContext(vars []string) (*hcl.EvalContext, error) {
        // ...
        ctx.Functions = map[string]function.Function{
            "upper": upperFn,
        }
        return ctx, nil
    }

Update the exampleHCL to make use of our brand-new function:

    task "first-task" {
        step "mkdir" "build-dir" {
            path = upper(var.buildDir)
        }
        step "exec" "list-build-dir" {
            command = "ls ${upper(var.buildDir)}"
        }
    }

Add some debug output to our example step execution (mkdir, exec) and run the program:

    go run main.go run --var buildDir=build
    Task: first-task
      Step: mkdir build-dir
        Path: BUILD
      Step: exec list-build-dir
        Command: ls BUILD

And as expected, we have an uppercase build directory. If you don't want to implement all the functions yourself, or you need some inspiration to implement a function, you will find what you are looking for here.

Resources
Part 1: Hate YAML? Build your next tool with HCL!
Full Source Code Gist

Full Source Code:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "github.com/spf13/cobra"
        "github.com/zclconf/go-cty/cty"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/gohcl"
        "github.com/hashicorp/hcl/v2/hclsimple"
        "github.com/zclconf/go-cty/cty/function"
    )

    var exampleHCL = `
    task "first-task" {
        step "mkdir" "build-dir" {
            path = upper(var.buildDir)
        }
        step "exec" "list-build-dir" {
            command = "ls ${upper(var.buildDir)}"
        }
    }
    `

    func main() {
        root := cobra.Command{Use: "taskexec"}
        root.AddCommand(newRunCommand())
        err := root.Execute()
        if err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
    }

    func newRunCommand() *cobra.Command {
        vars := []string{}
        cmd := cobra.Command{
            Use:   "run",
            Short: "Executes tasks",
            RunE: func(cmd *cobra.Command, args []string) error {
                ctx, err := newEvalContext(vars)
                if err != nil {
                    return err
                }
                config := &Config{}
                err = hclsimple.Decode("example.hcl", []byte(exampleHCL), ctx, config)
                if err != nil {
                    return err
                }
                for _, task := range config.Tasks {
                    fmt.Printf("Task: %s\n", task.Name)
                    for _, step := range task.Steps {
                        fmt.Printf("  Step: %s %s\n", step.Type, step.Name)
                        var runner Runner
                        switch step.Type {
                        case "mkdir":
                            runner = &MkdirStep{}
                        case "exec":
                            runner = &ExecStep{}
                        default:
                            return fmt.Errorf("unknown step type %q", step.Type)
                        }
                        diags := gohcl.DecodeBody(step.Remain, ctx, runner)
                        if diags.HasErrors() {
                            return diags
                        }
                        err = runner.Run()
                        if err != nil {
                            return err
                        }
                    }
                }
                return nil
            },
        }
        cmd.Flags().StringArrayVar(&vars, "var", nil, "Sets variable. Format: <name>=<value>")
        return &cmd
    }

    func newEvalContext(vars []string) (*hcl.EvalContext, error) {
        varMap := map[string]cty.Value{}
        for _, v := range vars {
            el := strings.Split(v, "=")
            if len(el) != 2 {
                return nil, fmt.Errorf("invalid format: %s", v)
            }
            varMap[el[0]] = cty.StringVal(el[1])
        }
        ctx := &hcl.EvalContext{}
        ctx.Variables = map[string]cty.Value{
            "var": cty.ObjectVal(varMap),
        }
        ctx.Functions = map[string]function.Function{
            "upper": upperFn,
        }
        return ctx, nil
    }

    var upperFn = function.New(&function.Spec{
        // Define the required parameters
        Params: []function.Parameter{
            {
                Name:             "str",
                Type:             cty.String,
                AllowDynamicType: true,
            },
        },
        // Define the return type
        Type: function.StaticReturnType(cty.String),
        // Function implementation
        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
            in := args[0].AsString()
            out := strings.ToUpper(in)
            return cty.StringVal(out), nil
        },
    })

    type Config struct {
        Tasks []Task `hcl:"task,block"`
    }

    type Task struct {
        Name  string `hcl:"name,label"`
        Steps []Step `hcl:"step,block"`
    }

    type Step struct {
        Type   string   `hcl:"type,label"`
        Name   string   `hcl:"name,label"`
        Remain hcl.Body `hcl:",remain"`
    }

    type ExecStep struct {
        Command string `hcl:"command"`
    }

    func (s *ExecStep) Run() error {
        fmt.Println("\tCommand: " + s.Command)
        return nil
    }

    type MkdirStep struct {
        Path string `hcl:"path"`
    }

    func (s *MkdirStep) Run() error {
        fmt.Println("\tPath: " + s.Path)
        return nil
    }

    type Runner interface {
        Run() error
    } |
2022-04-10 14:28:25 |
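The hcl.EvalContext in the article above is a Go construct, but the underlying idea, a map of variables plus a map of functions consulted while config expressions are evaluated, can be sketched in any language. The evaluate helper and its tiny two-rule expression grammar below are entirely hypothetical and only mirror the article's upper(var.buildDir) example; the real mechanism lives in the hashicorp/hcl Go packages.

```python
import re

# Hypothetical Python stand-in for HCL's EvalContext: variables and functions
# are looked up while an expression string is evaluated. Supports only
# "var.<name>" lookups and single "<fn>(<expr>)" calls.
def evaluate(expr, variables, functions):
    expr = expr.strip()
    call = re.fullmatch(r"(\w+)\((.+)\)", expr)
    if call:
        fn_name, inner = call.groups()
        return functions[fn_name](evaluate(inner, variables, functions))
    if expr.startswith("var."):
        return variables[expr[len("var."):]]
    return expr  # plain literal

# The "EvalContext": one map of variables, one map of functions.
ctx_vars = {"buildDir": "build"}
ctx_funcs = {"upper": str.upper}

print(evaluate("var.buildDir", ctx_vars, ctx_funcs))         # build
print(evaluate("upper(var.buildDir)", ctx_vars, ctx_funcs))  # BUILD
```

Undefined variables raise a KeyError here, roughly analogous to HCL's "Unsupported attribute" diagnostic in the article.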
Overseas TECH |
DEV Community |
Docker - When "tty: true" is necessary in docker-compose.yml |
https://dev.to/kakisoft/docker-when-tty-true-is-necessary-in-docker-composeyml-568l
|
Docker - When "tty: true" is necessary in docker-compose.yml

About tty: true in docker-compose.yml. In docker-compose.yml, I wasn't sure how tty: true was meant to work, so I looked it up.

From "What is 'tty: true' in docker-compose.yml?" by Keisuke Koshikawa (Medium): if you write tty: true in the docker-compose.yml, you will be able to keep the container running. He said that when the containers are started by docker-compose up -d, the containers terminate immediately; you need an option called tty: true to keep the containers running. tty seems to be the same as the Linux command.

What is tty? A pseudo-terminal (also known as a tty or a pts) connects a user's "terminal" with the stdin and stdout streams, commonly (but not necessarily) through a shell such as bash. In the case of Docker, you'll often use -t and -i together when you run processes in interactive mode, such as when starting a bash shell.

This explanation is simpler (from "What is tty?", an IT glossary that makes you feel like you understand even when you don't): the tty command is used to get the device file name of the terminal connected from the command line. |
2022-04-10 14:27:03 |
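The pseudo-terminal the article above describes can be poked at directly from Python on POSIX systems: os.openpty() creates a master/slave pty pair, and isatty() shows why a process attached to one believes it is talking to a terminal, which is essentially what Docker's -t flag arranges. A minimal sketch:

```python
import os

# Create a pseudo-terminal pair: the slave end looks like a real terminal
# to any process whose stdin/stdout is attached to it.
master_fd, slave_fd = os.openpty()

slave_is_tty = os.isatty(slave_fd)
slave_name = os.ttyname(slave_fd)
print(slave_is_tty)  # True: the slave end behaves like a terminal
print(slave_name)    # its device file name, e.g. /dev/pts/3 on Linux

# A plain pipe, by contrast, is not a terminal:
read_fd, write_fd = os.pipe()
pipe_is_tty = os.isatty(read_fd)
print(pipe_is_tty)   # False

for fd in (master_fd, slave_fd, read_fd, write_fd):
    os.close(fd)
```

This is also what the tty command mentioned at the end of the article prints: the device file name of the terminal your shell is attached to.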
Overseas TECH |
DEV Community |
UI vs UX |
https://dev.to/itsharshag/ui-vs-ux-3c4j
|
UI vs UX

If you have been curious about the difference between UI and UX, this article will help you understand the difference.

User Interface (UI)
UI designers deal with the visual aspect of a product. They are responsible for the look and feel of a website. They deal with things like typography, white space, and colors to make a website look good. They focus on having a contrast between text and other elements on a page to ensure readability and accessibility. They think about the layout of the website, like which is the best area to place a piece of text, an image, or a button. They make designs for multiple screen sizes, like laptops and mobiles. They integrate the brand's identity into the whole design by using brand colors and brand-specific graphics. They ensure that the design of the different pages is consistent.

User Experience (UX)
UX designers are responsible for designing and optimizing the user experience of a product. UX designers think about how to enhance a product and make it more user-friendly. They analyze whether interacting with the product is straightforward or not. Is the experience smooth or clunky? They map out and optimize the user journey, which is all the possible ways a user can interact with a product. They take care of things like how many steps it takes to do a particular task, and they try to reduce complexity. They talk to users, look at the analytics data, and think critically about ways to improve the experience of a product.

Now let's go over a few examples.

Example 1: Suppose you have a quotes website where you have curated quotes from the best authors and celebrities in the world. The UX designer discovers that what people often do is copy a quote by selecting the desired text manually and then send it to their friends. The UX designer proposes the addition of a copy button to allow users to copy the text with a button click. The UX designer came up with a solution to improve the user experience. The UI designer will now think about where to place the copy button, what icon to use for it, and what its color should be.

Example 2: You have a gaming news website. The UX designer discovers that people are coming to the signup page, but only a small percentage of them finish signup. The designer investigates and finds that the signup form is long: it asks for full name, email, password, gender, address, and preferences. Users don't like this and don't sign up. The designer proposes that we only ask for the full name, email, and password, and after a user logs in we can get the other details eventually. Now the UI designer makes the desired changes to the form and ensures that the design still looks good. In this example too, the UX designer identified the area of improvement and proposed a solution for it; the UI designer made the necessary visual changes and ensured that the new design is consistent with the rest of the design.

So that's it in a nutshell. I hope you got an idea of the difference between UI and UX. |
2022-04-10 14:25:48 |
Overseas TECH |
DEV Community |
React context simplified |
https://dev.to/coloene/react-context-simplified-8dn
|
React context simplified

Context is a React object that provides an easier way to pass data through the component tree without having to use props and drill the data down at each stage. (Illustration of context: images from the Scrimba React course.)

The disadvantage of using props is that you need to pass data from one branch of the component tree to the other until it is passed to the child component. This pipeline of data passing, known as drilling, can be frustrating. Context can be thought of as a state management tool that can be used to manage different states in an app, such as light/dark mode, and for some people this has been used to replace state management tools like Redux.

To use context, we use the createContext method, which comes with the React package. It is called by using the syntax React.createContext, or by importing createContext. This method comes with two components: the Provider and the Consumer. The Provider is used to hold the state and pass it down to the Consumer, which in turn passes it to the component to be rendered.

In order to see how context is used in practice, we will use context to make our app have a light mode and a dark mode. We can see how the context object is made available to the entire app in the following lines of code:

    // index.js
    import React from "react"
    import ReactDOM from "react-dom"
    import App from "./App"

    const ThemeContext = React.createContext()

    ReactDOM.render(
        <ThemeContext.Provider>
            <App />
        </ThemeContext.Provider>,
        document.getElementById("root")
    )

For best practices, it's not advisable to create the context object in the index.js file, as this could lead to bugs. Thus we shall create a different file for our context and instantiate it there, then export it to our index.js file. This is illustrated below:

    // themeContext.js
    import React from "react"

    const ThemeContext = React.createContext()

    export default ThemeContext

We can now call the theme context in the index.js file by importing it from the themeContext.js file and wrapping it around the App component, as seen below, with a value provided to the Provider component. The value provided in this case is either "dark" or "light", which would be styled in the index.css file to reflect the dark and light modes:

    // index.js (updated file in which the context provider is imported from another file)
    import React from "react"
    import ReactDOM from "react-dom"
    import App from "./App"
    import ThemeContext from "./themeContext"

    ReactDOM.render(
        <ThemeContext.Provider value={"dark"}>
            <App />
        </ThemeContext.Provider>,
        document.getElementById("root")
    )

In the following lines of code, we illustrate how to consume the context. The file below is the Header file of the dark/light mode theme project; we use the context to pass the state to the className in order to change the theme color based on the value of the parent Provider:

    // header.js
    import React, { Component } from "react"
    import ThemeContext from "./themeContext"

    class Header extends Component {
        static contextType = ThemeContext
        render() {
            const theme = this.context
            return (
                <header className={`${theme}-theme`}>
                    <h1>Light theme</h1>
                </header>
            )
        }
    }

    export default Header

I hope this was helpful in your quest to understand context in React. Happy hacking as you evolve into a superb frontend engineer. For further reading, check out the official documentation from React here. |
2022-04-10 14:24:17 |
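React context is JavaScript-specific, but the prop-drilling problem the article above describes has a rough analogue in Python's standard contextvars module: a value set at an outer layer can be read by deeply nested calls without being threaded through every function in between. A sketch, purely as a cross-language analogy (the header/app function names are invented for illustration):

```python
import contextvars

# The "context object" with a default, like createContext's default value.
theme = contextvars.ContextVar("theme", default="light")

def header():
    # Deeply nested "child component" reads the context directly.
    return f"{theme.get()}-theme"

def app():
    # Intermediate layer: no theme parameter is passed through ("no drilling").
    return header()

print(app())              # light-theme

token = theme.set("dark") # the "Provider" supplies a value
print(app())              # dark-theme
theme.reset(token)        # leaving the "Provider" restores the default
```

The analogy is loose (contextvars is about call/execution context, not a render tree), but the motivation, avoiding hand-passing a value through every layer, is the same.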
Overseas TECH |
DEV Community |
The absolute beginner’s guide to Docker |
https://dev.to/codeclown/the-absolute-beginners-guide-to-docker-11gb
|
The absolute beginner's guide to Docker

This article was originally published at shipmight.com. Shipmight is a self-hosted PaaS powered by Kubernetes; check it out if that sounds interesting. Link to original post.

The purpose of this tutorial is to illustrate to a complete beginner how Docker works and how it can radically simplify their development environment and dependency management. Focus is on the very basics. We'll talk enough about how Docker and containers work and what they mean, but we'll keep it at a general and, most importantly, practical level. At the end of the tutorial we'll also note some helpful extra commands, including cleaning up images and containers from your machine.

Contents
- Prerequisites
- Containers and Docker: a short terminology
- Building a very basic image, just to understand what an image is
- Docker Hub, continuous processes and detached containers
- Ports and volumes
- Environment variables
- Where to go from here, including ready-to-use examples
- Other helpful commands when getting started, including a cheatsheet

Prerequisites
Download Docker here for Mac or Windows. Note that we will be using some Unix commands in this tutorial; Windows users may still be able to follow. You should be familiar with basic Unix commands and comfortable with using the terminal.

Containers and Docker
Let's start by clearing up the concepts and terminology.

Containers are isolated parts of your operating system. They are almost like virtual machines. The difference is that they share a lot of resources, like the kernel, with the host operating system, whereas virtual machines enclose their own operating systems completely. Containers are much lighter to set up and run, but they are just as sufficient for running isolated software. (Simplified comparison of containers and virtual machines.)

Docker is a suite of tools for configuring, running and managing containers. The main command-line tool, docker, can be used to quickly configure and start containers using pre-built images. The suite also includes tools like docker compose (previously a separate command called docker-compose, now included as a subcommand of docker), which is used to quickly start and stop a specific configuration of multiple containers.

Images are pre-built containers for Docker. In virtual machine land they would be comparable to VM snapshots. Anyone can build an image and then share it, and others will be able to run it without having to build it themselves. Also, images can be extended.

Note: Containers and container images are not exclusive to Docker and can be used without Docker. While Docker really brought containers to the everyday toolkit of developers, there have been other similar tools developed, e.g. podman. At the time of writing, however, Docker is still the most popular and widely used container platform among developers.

Building a very basic image
Images are a core concept of Docker. The first thing we want to do, in order to fully grasp all the upcoming topics, is to build a very basic image from scratch.

Images are built using docker build. The command takes in a single file, Dockerfile, which it reads step by step to configure the image. The beauty of Docker is that you can build an image on any machine, like your own computer, and it can be used on any other computer which has Docker installed. This makes Docker great for packaging dependencies and software without worrying about what operating system everyone is using and whether they have conflicting dependencies installed.

Let's build an image right now. You need to have Docker installed.

Note: The following commands affect your local Docker installation only, and all resources we create here can be easily removed. Nothing apart from Docker itself is installed on your machine directly; everything else goes into disposable containers.

To prepare, create a new directory and an empty Dockerfile in it:

    mkdir docker-tutorial
    cd docker-tutorial
    touch Dockerfile
    ls
    Dockerfile

Next, write the following contents in Dockerfile:

    FROM alpine
    ENV  MESSAGE "Hello from Docker!"
    CMD  echo $MESSAGE

You may have noticed that I've used whitespace to align the columns of text. This is not required, but is common practice to make the file more readable. Let's break it down line by line:

FROM alpine tells Docker to use a version of alpine as our base image. alpine is a minimal Linux distribution and is a great starting point for any custom image. Most pre-built images you'll find online, like nginx, are based on it. You could also base your image on e.g. ubuntu. This is a cool feature of Docker: you can easily pick an existing image of any level of complexity and just extend it to your needs. See FROM.

ENV MESSAGE "Hello from Docker!" sets an environment variable inside the container. We simply set the value to the environment variable MESSAGE. The value gets hardcoded into the image, and so any program running inside the container after this step has this value in its environment. See ENV.

CMD echo $MESSAGE sets a default command to execute when a container from this image is started. See CMD.

Other useful instructions would be COPY for copying files from the host machine to the container filesystem, RUN for running commands such as apt-get inside the container, and WORKDIR for setting the working directory inside the container. See the reference for all available instructions.

Let's now use this Dockerfile to build an image. We'll give it the name first-image:

    docker build --tag first-image .
    Sending build context to Docker daemon
    Step 1/3 : FROM alpine
    Pulling from library/alpine ... Pull complete
    Status: Downloaded newer image for alpine
    Step 2/3 : ENV MESSAGE "Hello from Docker!"
    Step 3/3 : CMD echo $MESSAGE
    Successfully built ebddf
    Successfully tagged first-image:latest

Nice! Docker went through all the lines in our Dockerfile and performed the operations we had configured. Each step was also cached: when we change things, only the changed layers (identified by the SHA digests you see in the output above) will be rebuilt. (Here's an illustration of the process.)

The image was built, and we can now see it available on our machine:

    docker image ls
    REPOSITORY    TAG      IMAGE ID   CREATED       SIZE
    first-image   latest   ebddf      minutes ago   MB
    alpine                 defb       minutes ago   MB

Let's start a container with our new image:

    docker run first-image
    Hello from Docker!

That is all it took for us to run a command inside a contained Linux distribution, isolated on our machine. You could push this image to an image registry, your colleague could pull it from there and run it, and they'd get the exact same behaviour. This is how Docker can be used to package software in a reusable manner.

You can also override the default command. For example, run date, which prints out the current date:

    docker run first-image date
    Thu Jan ... UTC

Let's make the image a bit more complex by installing curl into it and calling a mock API. Update Dockerfile to look like this:

    FROM alpine
    # apk is the package manager in alpine, same as apt-get in debian/ubuntu
    RUN  apk --no-cache add curl
    ENV  MESSAGE "Hello from Docker!"
    CMD  curl -X POST --data-raw "$MESSAGE" -s https://postman-echo.com/post

Now let's build it again, using the name second-image:

    docker build -t second-image .
    Sending build context to Docker daemon
    Step : FROM alpine
    Step : RUN apk --no-cache add curl
    Installing ca-certificates... Installing libssh... Installing libcurl... Installing curl...
    Executing busybox trigger, executing ca-certificates trigger
    OK: MiB in packages
    Step : ENV MESSAGE "Hello from Docker!"
    Step : CMD curl -X POST --data-raw "$MESSAGE" -s https://postman-echo.com/post
    Successfully built eeafd
    Successfully tagged second-image:latest

Now run the updated image:

    docker run second-image
    {"args":{},"data":"","files":{},"form":{"Hello from Docker!":""},"headers":{"x-forwarded-proto":"https","host":"postman-echo.com","content-length":"...","accept":"*/*","content-type":"application/x-www-form-urlencoded","user-agent":"curl/...","x-forwarded-port":"..."},"json":{"Hello from Docker!":""},"url":"https://postman-echo.com/post"}

Works as expected! At this point, let's list our Docker containers:

    docker ps --all
    CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS                      PORTS   NAMES
    bfaccd         second-image   "/bin/sh -c 'curl -X…"   About a minute ago   Exited About a minute ago           strange_napier
    dafdd          first-image    "date"                   minutes ago          Exited minutes ago                  condescending_fermat
    afaafc         first-image    "/bin/sh -c 'echo $M…"   minutes ago          Exited minutes ago                  ecstatic_leavitt

Note: Earlier we used the command docker image ls, which lists images that are available on the machine. The last command, docker ps, lists containers which have been started using those images.

As you can see, each line says "Exited X minutes ago". The containers were started, but they are not running anymore (we'll cover continuous processes in the next section). We can remove the unused containers like so:

    docker rm strange_napier condescending_fermat ecstatic_leavitt

Note: docker has autocomplete for bash, so you can just type docker rm <TAB> and the container names will be suggested.

In the future, when we run containers like these, we might want to specify the --rm option so that Docker will automatically remove the container after it stops. Like so:

    docker run --rm second-image
    ...
    docker ps --all
    CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

No containers were listed, as we hoped: Docker removed the container automatically after it had finished execution.

You also probably noticed the strange names of your containers. They were auto-generated by Docker. You can specify a name for a container by using the --name option:

    docker run --rm --name my-container second-image

Great! Let's move on to containers that stay running in the background.

Docker Hub, continuous processes and detached containers
In the previous section we built our own custom image. In this one we'll utilize the powerful Docker Hub, which contains pre-configured images for nearly any software you might need in your projects. For example, this is all it takes to start an isolated Postgres database on our machine, using the postgres image:

    docker run --name my-postgres postgres
    LOG:  listening on IPv4 address, port ...
    LOG:  listening on IPv6 address, port ...
    LOG:  listening on Unix socket /var/run/postgresql/...
    LOG:  database system was shut down at ... UTC
    LOG:  database system is ready to accept connections

Note: As with any terminal command, you can terminate the process by pressing Ctrl-C.

Docker first pulled the image from Docker Hub and then created and started a container with it. You can run many of these containers at the same time. While the old one is running, open another tab in your terminal and start a second one:

    docker run --name another-postgres postgres
    ...
    LOG:  database system is ready to accept connections

Note: Even if both containers logged "listening on port ...", they are actually listening inside the containers, not on your host machine, so there is no port collision. In the next section we'll learn how to expose ports to the host machine.

You now have two Postgres databases running on the same machine, without having to install anything else but Docker on your computer. The containers don't have access to your filesystem and only make changes inside their own. How neat is that!

Notice that the second time, Docker didn't have to pull the postgres image again; it was already available on your machine. As mentioned before, you can list all the available images:

    docker image ls
    REPOSITORY     TAG      IMAGE ID   CREATED       SIZE
    postgres       latest   e          minutes ago   MB
    second-image   latest   ceee       minutes ago   MB
    first-image    latest   ebddf      minutes ago   MB
    alpine                  defb       minutes ago   MB

These Postgres instances keep running until you press Ctrl-C. This isn't very practical if you want to run a database in the background. The solution is to run it in detached mode, by setting the --detach (or -d) option:

    docker run --name my-postgres --detach postgres
    facefcfbfacbdcdaaaceeaebeeddbbc

Now the container was started and it is still running, but in the background; the output from the command is its ID. You can see it by listing all running containers:

    docker ps
    CONTAINER ID   IMAGE      COMMAND                  CREATED       STATUS       PORTS   NAMES
    face           postgres   "docker-entrypoint.s…"   seconds ago   Up seconds   tcp     my-postgres

You can view its console output:

    docker logs --tail my-postgres
    done
    server stopped
    PostgreSQL init process complete; ready for start up.
    LOG:  listening on IPv4 address, port ...
    LOG:  listening on IPv6 address, port ...
    LOG:  listening on Unix socket /var/run/postgresql/...
    LOG:  database system was shut down at ... UTC
    LOG:  database system is ready to accept connections

You can execute a command inside it (substitute whoami with your command):

    docker exec my-postgres whoami
    root

Note: Processes inside Docker containers run as root by default. This can (and should) be changed per image via the USER instruction in Dockerfile.

You can stop it:

    docker stop my-postgres
    my-postgres

And you can remove it (add --force or -f to force removal if it's running):

    docker rm my-postgres
    my-postgres

You can usually find a premade image for any software. Simply google for "<software> docker". For example, here's a bunch of ready-to-use popular images:

- postgres
- mysql
- nginx
- minio (self-hosted version of AWS S3)
- redis

You can try any of these by simply running docker run <image>.

Ports and volumes
Above we started some containers, but didn't really communicate with them. In most projects there are two types of communication you would want to have with your software dependencies:

- Network: for example, connecting to a Postgres database at a specific port
- Filesystem: for example, reading and writing nginx configuration files

It's very |
easy to achieve both in Docker For network access we can configure shared ports for containers For example Postgres by default listens to port We can expose this port to our host machine via the publish or p option docker run name my postgres publish postgresAbove we tell Docker to map the port on our host machine to the port inside the container You can try it if you have psql the Postgres client installed on your host machine psql postgres postgres postgres localhost postgresNote The username password and database name are all defaults postgres as is documented on the Docker Hub page We will learn how to customize them in the next section For filesystem access we can tell Docker to bind mount a specific directory or file to a location inside the container For example we could persist the Postgres data directory on our host machine docker run name my postgres volume path to docker tutorial postgres data var lib postgresql data postgresNow when inside the container Postgres writes its data to var lib postgresql data the files are actually stored on your host machine at path to docker tutorial postgres data Or we could substitute the nginx configuration file with our own docker run name my nginx volume path to custom nginx conf etc nginx conf d default conf nginxIt is also possible to specify read only access for the container by adding ro if necessary In that case the container can t write to the mounted location docker run name my nginx volume path to custom nginx conf etc nginx conf d default conf ro nginxIn these examples we mounted locations to actual locations on the host machine by specifying the absolute path to them Docker also supports Docker volumes which are storage volumes managed via docker commands You can create a volume docker volume create my postgres dataAnd the use it by its name docker run name my postgres volume my postgres data var lib postgresql data postgresBehind the scenes Docker volumes are actually just directories created by Docker They 
are stored in a hidden folder e g var lib docker volumes in Linux You can choose to use bind mounts or Docker volumes based on your preference Bind mounts are perhaps easier to understand and inspect in the beginning because you have to specify a concrete location for them Environment variablesIf a container expects custom configuration it is usually done via environment variables env or e For example we can customize the Postgres user password and database name when starting the container docker run e POSTGRES USER foobar e POSTGRES PASSWORD secret e POSTGRES DB my database postgresSuch configuration varies based on the image and is usually documented on the Docker Hub page for that image Search for Environment Variables on the postgres image page Where to go from hereIn the beginning you ll probably find Docker more useful for deploying your development dependencies than building your own images The simplest way to get going is to just use the docker command in your next project to start third party dependencies You will probably find it faster and more convenient to manage simultaneous database instances etc than what you were using before Some useful examples to get started with databases connection string postgres postgres postgres localhost postgres add to persist data on host v path on host var lib postgresql data docker run d p postgres connection string mysql example secret localhost add to persist data on host v path on host var lib mysql docker run d p e MYSQL ROOT PASSWORD super secret e MYSQL USER example e MYSQL PASSWORD secret e MYSQL DATABASE my database mysqlOnce you re comfortable with starting and managing containers manually the next step could be to specify your development environment in a file called docker compose yml and to use the docker compose command to start stop dependencies This way anyone can clone your project source code run docker compose up and be ready to start developing Here s an example docker compose yml taken directly from 
postgres Docker Hub page version services db image postgres restart always environment POSTGRES PASSWORD example Web admin interface for SQL databases similar to PhpMyAdmin adminer image adminer restart always ports You could add other services you need into the specification then just run docker compose up and Docker will start the new ones and remove the old ones Running docker compose down will remove all containers and their volumes Try it and you will see it is a very efficient way to set up and share local development environments per project for your team Other helpful commands when getting startedCheatsheet for common operations docker run p lt host port gt lt container port gt lt image gt docker run e NAME value lt image gt docker run v host path container path lt image gt docker ps docker ps a docker logs lt container gt docker logs tail lt container gt docker logs f lt container gt docker exec lt container gt lt command gt docker kill s lt signal gt lt container gt docker kill s HUP lt container gt docker start lt container gt docker restart lt container gt docker stop lt container gt docker rm lt container gt Remove all containers including running and not running docker rm f docker ps a q Remove any images that are not used currently by any container docker image pruneRemove any volumes that are not used currently by any container docker volume pruneHere s an image version you can save to your disk This article was originally published at shipmight com Shipmight is a self hosted PaaS powered by Kubernetes Check it out if that sounds interesting Link to original post |
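The docker-compose example above lost its line breaks and version numbers during extraction. Here is a minimal readable sketch of the same idea, under stated assumptions: the `db` and `adminer` services, the `postgres` and `adminer` images, and the `example` password appear in the article; the port mapping shown is an illustrative assumption, since the original numbers were stripped.

```yaml
# docker-compose.yml -- sketch of the article's example (port numbers assumed)
version: "3"
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  # Web admin interface for SQL databases, similar to PhpMyAdmin
  adminer:
    image: adminer
    restart: always
    ports:
      - "8080:8080"
```

With a file like this in the project root, `docker compose up` starts both services together and `docker compose down` removes them, which is the workflow the article recommends for sharing a development environment.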
2022-04-10 14:21:26 |
海外TECH |
DEV Community |
Change app icon within app | Flutter | iOS only |
https://dev.to/coderreviewio/change-app-icon-within-app-flutter-ios-only-28kj
|
Change app icon within app (Flutter, iOS only). Using Apple docs. Developing an app where we can change the icons with Flutter. Please follow for more tutorials: Twitter, YouTube, Dev.to, Medium (mushti dev), Github (source code). The sub will be great. The tutorial |
2022-04-10 14:16:48 |
海外TECH |
DEV Community |
A review of JSON Schema libraries for Haskell |
https://dev.to/sshine/a-review-of-json-schema-libraries-for-haskell-321
|
A review of JSON Schema libraries for Haskell. JSON Schema is a JSON format that describes JSON formats. It is mostly used to validate that a JSON value has the correct format, but it can also be used to generate random values that fit a schema definition; this may be useful for testing. The latest version of JSON Schema is called Draft, and previous versions were called Draft, Draft, up to Draft, which was renamed into Draft. When exploring Haskell libraries that handle JSON Schema definitions, they tend to have stalled on an earlier draft. There seems to be a pattern, and Juspay's medea package has already summarised what's going on in their README section Why Medea, where their answer goes in depth with some sober roasting. I'll summarise: the JSON Schema standard is complex; it covers considerably more than simply validating JSON documents; and JSON Schema requires arbitrary URI resolution. This reminds me of Bloodhound's support for older ElasticSearch versions and why the library doesn't support the newer ones: the complexity and constant release of new ElasticSearch API versions makes it difficult to make a typed library around it. I'm not sure exactly how to phrase it, but Haskell seems like a bad fit for this type of highly volatile interface. Adding recursive, unbounded network I/O as part of the validation makes a Haskeller less likely to pursue a full implementation. Nevertheless, I did a deep scan for the word schema and procured the following list of JSON Schema specific packages. They fall into one of two categories. When a spec is too complex, a few things can happen. First, there's the we-tried-and-gave-up category; these all have in common that they're attempts to make a working library for an early version of JSON Schema, and that they're abandoned (such is the fate of open source sometimes): aeson-schema (an early Draft only; recommends hjsonschema instead), hjsonschema (an early Draft only; announced deprecated in an attempt to be too modular), jsonschema-gen, and jsons-to-schema (an early Draft only; doesn't validate either, generates schemas based on values, some limitations apply). Then there are the libraries that appear to be JSON Schema related judging by their name, but are in fact all variations that explicitly do not attempt to support JSON Schema and instead build something similar with more limited assumptions: json-schema (Haskell specific; has no relation to json-schema.org), aeson-schemas (a type-safe schema language using Template Haskell, but it doesn't come with an option to load a JSON Schema .json file, so they're schemas in JSON but not JSON Schema schemas), hschema-aeson (a similar project that lets me specify schemas for Haskell data types and encode them as JSON; again, schemas in JSON but not JSON Schema schemas), schematic (it can be thought of as a subset of JSON Schema, and Schematic schemas can be exported to json-schema), medea (a schema language for JSON document structure; it is similar to JSON Schema but is designed to be simpler and more self-contained), and quick-schema (a minimalistic JSON schema language; not maintained and not exactly up to par). Summary: if you wish to mostly support a years-old draft of JSON Schema using an unmaintained package that currently fails to build in a modern build environment, you have two options: aeson-schema and hjsonschema. While one recommends the other and the other is self-deprecated, I'm not completely sarcastic when I say that aeson-schema could work; it appears well made, and you may want to support a super old JSON Schema definition before it got complex, or even extend it to a later draft. If you wish for any coverage of the latest Draft, you're in bad luck: Haskellers simply gave up and wrote alternative JSON schema libraries. If you're not trying to release a JSON specification into the public domain and you're just picking a good internal pipe language that supports JSON, any of aeson-schemas, hschema-aeson, schematic, and medea may be good choices; or you may look at entirely different serialisation frameworks. While I cannot currently prioritise evaluating each of these, since I am in the process of releasing a specification for my new pet project (JSON Flashcard) and I need a specification format, not just a library, I'm trying my luck with schematic because it allows me to export to JSON Schema without trying to support it fully |
2022-04-10 14:14:14 |
海外TECH |
DEV Community |
Deepgram x Twitch Hackathon Submission |
https://dev.to/mishmanners/deepgram-x-twitch-hackathon-submission-4bm6
|
Deepgram x Twitch Hackathon Submission. Twitch is being used more and more for live streaming. Whether it's gamers playing through video games, developers live coding, music streams, or someone cooking, everyone is using Twitch for live streams. Something that was released a little while ago is closed captions on Twitch live streams. Unfortunately, the capability of this service is very limited: closed captions must be enabled by the streamer, and enabling them isn't necessarily a straightforward process. What would be awesome is something built into the browser (i.e. a browser extension or similar) where users can enable closed captions using Deepgram for any and all videos. Introduction / My Deepgram Use Case: Twitch streams can get captions via the Closed Captioner extension or the Closed Captions for Streams extension, as well as other third-party applications like built-in OBS plugins, Web Captioner, Pixel Chat, or PubNub. The main problem with all these methods is that they need to be enabled by the broadcaster, not the viewer. The viewer has very limited capability to get subtitles; if a streamer doesn't have subtitles enabled, then unfortunately people won't be able to toggle them. Thus accessibility becomes a real problem for many people. Deepgram already supports multiple languages. Dive into Details: I use Twitch as the example here, but if this was a browser extension or similar, closed captions could be enabled on anything with voice: Twitter Spaces (which has captions built in already, but with issues), Twitch VODs, Facebook streams, TikTok, and more. Next steps: why stop there? With captions enabled via the browser on all media platforms, why not? Conclusion |
2022-04-10 14:10:57 |
海外TECH |
DEV Community |
The Power of Memento Design Pattern in JavaScript |
https://dev.to/jsmanifest/the-power-of-memento-design-pattern-in-javascript-25d1
|
The Power of Memento Design Pattern in JavaScriptThe Memento Pattern in programming is useful in situations where we need a way to restore an object s state As a JavaScript developer we work with this concept in many situations especially now in modern web applications If you ve been developing in the web for some time then you might have heard of the term hydration If you don t know what hydration is it s a technique in web development where the client side takes static content which was stored in any programming language such as JSON JavaScript HTML etc and converts it into code where browsers are able to run during runtime At that stage JavaScript is run and is able to do things like attach event listeners when the DOM begins running on the page The memento pattern is similar In this post we are going to implement the Memento pattern for the runtime and will not be storing anything statically If you worked with JSON parse and JSON stringify chances are you might have accidentally implemented a memento before Usually there are three objects that implement the full flow of the Memento pattern OriginatorMementoCaretakerThe Originator defines the interface that triggers the creation and storing of itself as the memento The Memento is the internal state representation of the Originator that is passed and retrieved from the Caretaker The Caretaker has one job to store or save the memento to be used later It can retrieve the memento it stored but it does not mutate anything Implementing the Memento Design PatternNow that we described the pattern we are going to implement it to master this practice in code We will be creating an interactive email input field as a DOM element We are going to add one smart behavior to our input field so that our user will immediately become aware that they need to add the symbol before submitting They will know this when their input field is in an error state which will look like this This is the html markup we are going to work right on 
top of lt DOCTYPE html gt lt html gt lt head gt lt title gt Memento lt title gt lt meta charset UTF gt lt head gt lt body style margin px text align center background linear gradient deg rgba rgba height px overflow hidden gt lt input type email id emailInput style padding px border radius px font size px placeholder Enter your email gt lt input gt lt script src src index js gt lt script gt lt body gt lt html gt This will start us off with this interface Now the first thing we are going to do is define a couple of constant variables for the error state that we will use throughout our code to assign as values to the error styles This is to ensure that we don t make any typos when writing our code since we will be reusing them multiple times const ERROR COLOR tomato const ERROR BORDER COLOR red const ERROR SHADOW px px px rgba const CIRCLE BORDER const ROUNDED BORDER px This has nothing to do with the pattern but I think it s a good habit for me to randomly slip in some best practices just so you can get extra tips from this post why not right Now we are going to create a helper function that toggles between the error state and the normal state since we are going to be using this multiple times as well const toggleElementStatus el status gt if status error return Object assign el style borderColor ERROR BORDER COLOR color ERROR COLOR boxShadow ERROR SHADOW outline red return Object assign el style borderColor black color black boxShadow outline I might as well just slip in a helper to toggle the border radius while we toggle between the two style presets This is to make our code feel more natural as if it was a real app so we don t just focus directly on the relationship between the colors and the memento in this post Sometimes I think we learn better when we also see the perspective of random code vs the actual code that we are going over with const toggleBorderRadius el preset gt el style borderRadius preset rounded ROUNDED BORDER preset circle CIRCLE BORDER px The 
next thing we are going to do is write the Originator Remember the originator defines the interface that triggers the creation and storing of itself as the memento function createOriginator serialize deserialize return serialize deserialize Actually we just created a simply factory that produces the originator for us Here is the real originator const originator createOriginator serialize nodes const state nodes forEach param HTMLInputElement node node gt const item id node id item tagName node tagName toLowerCase if item tagName input item isError node style borderColor ERROR BORDER COLOR amp amp node style color ERROR COLOR item value node value item isRounded node style borderRadius ROUNDED BORDER item isCircle node style borderRadius CIRCLE BORDER state push item return state deserialize state const providedNode state state length if providedNode state pop const nodes state forEach item gt const node providedNode document createElement item tagName if item tagName input if item isError toggleElementStatus node error if item isRounded toggleBorderRadius node rounded else if item isCircle toggleBorderRadius node circle node value item value if item placeholder node placeholder item placeholder if item id node id item id nodes push node return nodes In the originator the serialize method takes in a DOM node and returns us a state representation of the DOM node so that we can store it inside the local storage as a string This is required because the local storage only accepts strings Right now we are the peak of this pattern in JavaScript The serialization is the only reason why this pattern is important to us otherwise we d be able to directly store DOM nodes to the local storage and call it a day Inside our serialize method we implicitly defined a couple of rules that help us determine the representation Here are the lines I m referring to if item tagName input item isError node style borderColor ERROR BORDER COLOR amp amp node style color ERROR COLOR item value 
node value item isRounded node style borderRadius ROUNDED BORDERitem isCircle node style borderRadius CIRCLE BORDERWhen storing mementos of input elements we have a choice whether to implement it that way or this way if item tagName input item style borderColor node style borderColor item style color node style color item value node value item style borderRadius node style borderRadiusTake my advice on this A good practice is to create useful meaning out of your code especially in your design pattern implementations When you inaugurate meaning in your code it it helps you to think of higher level abstractions that might be useful in other areas of your code Using item isError to represent a preset of error styles opens up wider opportunities to make interesting reusable mementos that we can reuse as our project grows more complex over time as opposed to assigning arbitrary styles directly For example it s common for forms to not submit when a crucial field is left unblank The form must transition to some kind of state where it needs to stop itself from submitting If we were to save a memento of a form we need to ensure that when we restore this state the user is restored to the disabled state const originator createOriginator serialize nodes const state nodes forEach param HTMLInputElement node node gt const item id node id item tagName node tagName toLowerCase if item tagName input item isError node style borderColor ERROR BORDER COLOR amp amp node style color ERROR COLOR item value node value item isRounded node style borderRadius ROUNDED BORDER item isCircle node style borderRadius CIRCLE BORDER if node textContent item textContent node textContent state push item return state deserialize state const nodes if Array isArray state state state state forEach item gt const node document createElement item tagName if item style Object entries item style forEach key value gt node style key value if item isRounded toggleBorderRadius node rounded else if item isCircle 
toggleBorderRadius node circle if item spacing node style padding item spacing if item id node id item id if item tagName input if item isError toggleElementStatus node error node value item value if item placeholder node placeholder item placeholder else if item tagName label if item isError node style color ERROR COLOR else if item tagName select if item options item options forEach obj gt node appendChild originator deserialize obj node if item textContent node textContent item textContent nodes push node return nodes const caretaker createCaretaker function restore state container onRendered let statusSubscribers let status const setStatus value options gt status value statusSubscribers forEach fn gt fn status options const renderMemento memento container gt return originator deserialize memento map el gt container appendChild el if memento isError amp amp status error setStatus error if memento children memento children forEach mem gt renderMemento mem el forEach childEl gt el appendChild childEl return el const render props container gt const withStatusObserver fn gt statusSubscribers push updatedStatus gt if updatedStatus error Do something return args gt const elements fn args return elements const renderWithObserver withStatusObserver renderMemento const elements renderWithObserver props container statusSubscribers length return elements const elements render state container if onRendered onRendered status elements return status elements const container document getElementById root const status elements renderedElements restore mementoJson container onRendered status elements gt if status error const submitBtn container querySelector submit btn submitBtn disabled true submitBtn textContent You have errors toggleElementStatus submitBtn error Instead of returning the elements directly we make sure that what s also returned is the current state of rendering the memento Looking at this in a higher level perspective we take advantage of the fact that isError 
can represent and overview of something like a form A form should not be submitted if either one little required field is missing or a value was not entered correctly In that case we make sure that the form should not be interactive by disabling the submit button right before displaying to the user If you haven t noticed our restore wraps our original deserialize method from our Originator What we have now is a higher level abstracted memento that supports deep children and the rendering state isError of our entire memento ConclusionAnd that concludes the end of this post I hope you found this to be valuable and look out for more in the future Find me on medium |
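The article's full DOM-based implementation was flattened during extraction, so here is a minimal, self-contained sketch of the three roles it describes (Originator, Memento, Caretaker). The factory names mirror the article's createOriginator and createCaretaker, but the state shape used here (an input value plus an isError flag) is a simplified assumption, and the memento is a JSON string, as it would need to be for localStorage.

```javascript
// Originator: owns the state and knows how to serialize/deserialize it.
function createOriginator() {
  let state = { value: "", isError: false };
  return {
    set(value, isError) {
      state = { value, isError };
    },
    get() {
      return { ...state };
    },
    // serialize: produce a memento (a string, as localStorage requires)
    serialize() {
      return JSON.stringify(state);
    },
    // deserialize: restore internal state from a memento
    deserialize(memento) {
      state = JSON.parse(memento);
    },
  };
}

// Caretaker: stores mementos without ever inspecting or mutating them.
function createCaretaker() {
  const mementos = [];
  return {
    save(memento) {
      mementos.push(memento);
    },
    restore() {
      return mementos.pop();
    },
  };
}

const originator = createOriginator();
const caretaker = createCaretaker();

originator.set("user@example.com", false);
caretaker.save(originator.serialize()); // snapshot the valid state

originator.set("userexample.com", true); // user broke the input (missing @)
originator.deserialize(caretaker.restore()); // roll back to the snapshot

console.log(originator.get().value); // "user@example.com"
```

Restoring with deserialize(caretaker.restore()) is the hydration-like flow the article describes: a stored string representation is turned back into live state at runtime, while the caretaker's only job is holding the snapshot.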
2022-04-10 14:03:27 |
Apple |
AppleInsider - Frontpage News |
Updated MacBook Air could launch at WWDC 2022 |
https://appleinsider.com/articles/22/04/10/updated-macbook-air-could-launch-at-wwdc-2022?utm_medium=rss
|
Updated MacBook Air could launch at WWDC. Apple will probably introduce new hardware at WWDC, a report claims, with a MacBook Air update potentially up for launch at the developer event. Apple will be holding its Worldwide Developer Conference in June, with the main focus being on its software updates. However, one Sunday rumor proposes that Apple could show off multiple new pieces of hardware during its keynote address. According to Mark Gurman's Power On newsletter for Bloomberg, Apple is preparing to launch new Macs within the coming months. Gurman reasons: what better place to do so than WWDC? That's the same venue where the Mac's transition from Intel to Apple's own chips was announced two years ago. Read more |
2022-04-10 14:15:19 |
Hokkaido |
Hokkaido Shimbun |
Russia's new commander is encouraging atrocities, a senior U.S. official says on CNN television |
https://www.hokkaido-np.co.jp/article/667936/
|
national security |
2022-04-10 23:14:00 |
Hokkaido |
Hokkaido Shimbun |
"Eniwa Flower Tour" registered under a national scheme: a tourism program visiting six gardens in the city |
https://www.hokkaido-np.co.jp/article/667932/
|
Ministry of Land, Infrastructure, Transport and Tourism |
2022-04-10 23:01:00 |