Posted: 2023-02-18 18:12:45 | RSS feed digest for 2023-02-18 18:00 (14 items)

Category | Site | Article title / trend word | Link URL | Frequent words / summary / search volume | Date registered
IT ITmedia All Articles [ITmedia News] Twitter: SMS-based 2FA becomes Blue-subscriber-only; it will be disabled for non-Blue users on March 20 https://www.itmedia.co.jp/news/articles/2302/18/news064.html itmedia news twitter 2023-02-18 17:20:00
TECH Techable Over ¥23 million raised in total! The ultimate hands-free down jacket that even fits a PC takes on its third crowdfunding campaign https://techable.jp/archives/195992 the genius down parka 2023-02-18 08:00:52
python New posts tagged Python - Qiita Drawing omikuji (fortune slips) in Blender https://qiita.com/SaitoTsutomu/items/883bc3cda561355093f6 blender 2023-02-18 17:26:48
Linux New posts tagged Ubuntu - Qiita B25-decoding TS recording files https://qiita.com/nanbuwks/items/2db83f2c5b9209d9647f docker mirakurun epgstation 2023-02-18 17:07:32
Docker New posts tagged docker - Qiita Docker: the whalesay image is adorable https://qiita.com/epirock/items/c8f9c974d8f1a4ac020f dockerrundocke 2023-02-18 17:41:32
Docker New posts tagged docker - Qiita Docker: what images, repositories, tags, registries, build, pull, and push mean https://qiita.com/epirock/items/c1472ba0892cf5fc9037 docker 2023-02-18 17:05:04
golang New posts tagged Go - Qiita The "http: superfluous response.WriteHeader call from ~" error https://qiita.com/dvd092bhbn/items/baa769bb99af9bb87a8c go run 2023-02-18 17:36:32
Overseas TECH DEV Community Unit Testing Backward Compatibility of Message Format https://dev.to/kirekov/unit-testing-backward-compatibility-of-message-format-27lj

Unit Testing Backward Compatibility of Message Format. Do you use Apache Kafka or RabbitMQ in your software project? If so, then you definitely have some message schemas. Have you ever encountered a backward-compatibility issue: an accidental message format change, and your entire system is no longer functioning? I bet you have had such an unpleasant experience at least once in your career. Thankfully, there is a solution. In this article I cover why backward compatibility is crucial in message-driven systems, how you can validate backward compatibility automatically with simple unit tests in Java, and a few words about forward compatibility. You can find the code examples and the entire project setup for unit testing backward compatibility in this repository.

Why backward compatibility matters. Suppose there are two services: the Order Service transfers a message about shipment details to the Shipment Service. The Shipment Service (the consumer) defines the message schema, i.e. the contract, with two fields, orderId and address. The Order Service (the producer) sends the message according to the specified schema. Everything works well, but suddenly a new requirement comes into play: now we have to send a complex address (country, city, postcode, etc.) instead of just the city name. No problem, we just update the contract accordingly:

orderId: Long
address: { country: String, city: String, postcode: Integer }

Now there are two likely outcomes: the producer updates the contract before the consumer does, or vice versa. Assuming the producer is the first one to make changes, here is what happens: the Order Service updates the contract and sends a message in the new format; the Shipment Service tries to deserialize the received message with the old contract; the deserialization process fails. But what if the Shipment Service updates the contract first? The problem remains the same: the Shipment Service cannot deserialize the received messages that still arrive in the old format. So we can conclude that altering the message schema has to be backward compatible. Some of you may ask whether it is possible to update the Shipment Service and Order Service contracts simultaneously, which would eliminate the backward-compatibility problem. Well, it's unrealistic to perform such an update in real life. Besides, even if you did, there's a possibility that the broker (e.g. Apache Kafka) still contains messages in the old format that you need to process somehow. So backward compatibility is always a concern.

The idea of unit testing backward compatibility. Let's get to the code. I'm using Java and Jackson for (de)serialization, but the idea remains the same for any other library. The first version of the OrderCreated message schema:

@Data
@Builder
@Jacksonized
public class OrderCreated {
    private final Long orderId;
    private final String address;

    public OrderCreated(Long orderId, String address) {
        this.orderId = notNull(orderId, "Order id cannot be null");
        this.address = notNull(address, "Address cannot be null");
    }
}

The @Jacksonized annotation tells Jackson to use the builder generated by the @Builder annotation as the deserialization entry point. Quite useful, because you don't have to repeat @JsonCreator and @JsonProperty usage while dealing with immutable classes.
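For comparison, here is a minimal sketch of mine (not from the article) of the plain-Jackson wiring that @Jacksonized saves you from writing by hand:

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public class OrderCreatedManual {
    private final Long orderId;
    private final String address;

    // Without Lombok, each constructor parameter must be mapped to a JSON field explicitly.
    @JsonCreator
    public OrderCreatedManual(@JsonProperty("orderId") Long orderId,
                              @JsonProperty("address") String address) {
        this.orderId = orderId;
        this.address = address;
    }

    public Long getOrderId() { return orderId; }
    public String getAddress() { return address; }
}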
How do we start with the backward-compatibility check? Let's add a message example to the resources/backward-compatibility directory:

{ "orderId": …, "address": "NYC" }

Each schema change should come with a new file added to the backward-compatibility directory, and the content should describe the increment made to the schema. Each new file name should be lexically greater than the previous one; that way the test run will be deterministic. The easiest way is just an incrementing natural number, but if you need something more complicated, I recommend reading my article about a Flyway migrations naming strategy in a big project, where I describe a similar principle. So there are two rules we have to follow: each new schema update comes with a new file in the resources/backward-compatibility directory, and you never delete the previously added files, to keep the backward-compatibility history consistent. Anyway, there are cases when you do want to break backward compatibility. For example, no one uses a field anymore and you just need to eliminate it. In that case you can remove some backward-compatibility data files. However, don't treat it as a regular situation; it's an exception, not a valid common scenario.

The automation process. Now we need a test that reads the described files and generates the validation cases:

class BackwardCompatibilityTest {
    private final ObjectMapper objectMapper = new ObjectMapper();

    @ParameterizedTest
    @SneakyThrows
    void shouldRemainBackwardCompatibility(String filename, String json) {
        final var orderCreated = assertDoesNotThrow(
                () -> objectMapper.readValue(json, OrderCreated.class),
                "Couldn't parse OrderCreated for filename=" + filename);
        final var serializedJson = assertDoesNotThrow(
                () -> objectMapper.writeValueAsString(orderCreated),
                "Couldn't serialize OrderCreated to JSON from filename=" + filename);
        final var expected = new JSONObject(json);
        final var actual = new JSONObject(serializedJson);
        JSONAssert.assertEquals(expected, actual, false);
    }
}

I parametrised the backward-compatibility test because the number of files, and therefore the true number of test cases, will grow. Here is what happens step by step. First, the test receives the JSON content and the latest combined filename (I describe the algorithm later in the article; we use the latter parameter for a more informative assertion message in case of failures). Then we try to parse the supplied JSON into an OrderCreated object; if this step fails, we have definitely broken backward compatibility. Afterwards, we serialize the parsed OrderCreated object back to JSON. Usually this operation doesn't fail, but we should always be prepared for the dangerous scenario. And finally, we check that the supplied JSON equals the one we got in the previous step. I use the JSONAssert library here; the false boolean parameter tells it to check only overlapping fields. For example, if the actual result contains a field that the expected object doesn't, that won't trigger a failure. That's a normal situation, because the backward-compatibility data is static while OrderCreated might grow with new parameters.
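To make the strictness flag concrete, here is a small sketch of my own (values invented) showing how JSONAssert's non-strict mode tolerates extra fields in the actual document:

import org.json.JSONException;
import org.json.JSONObject;
import org.junit.jupiter.api.Test;
import org.skyscreamer.jsonassert.JSONAssert;

class StrictnessDemoTest {
    @Test
    void nonStrictIgnoresExtraActualFields() throws JSONException {
        JSONObject expected = new JSONObject("{\"orderId\": 1}");
        JSONObject actual = new JSONObject("{\"orderId\": 1, \"extra\": \"x\"}");
        JSONAssert.assertEquals(expected, actual, false); // passes: only overlapping fields are compared
        // JSONAssert.assertEquals(expected, actual, true); // strict mode would fail on "extra"
    }
}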
It's time to provide the input parameters to the shouldRemainBackwardCompatibility test:

private static Stream<Arguments> provideMessagesExamples() {
    final var resourceFolder = Thread.currentThread()
            .getContextClassLoader()
            .getResources("backward-compatibility")
            .nextElement();
    final var fileInfos = Files.walk(Path.of(resourceFolder.toURI()))
            .filter(path -> path.toFile().isFile())
            .sorted(comparing(path -> path.getFileName().toString()))
            .map(file -> new FileInfo(file.getFileName().toString(), Files.readString(file)))
            .toList();
    ...
}

private record FileInfo(String name, String content) {}

In the beginning we read all the files from the backward-compatibility directory, sort them by their names, and assign the resulting list of (filename, content) tuples to the fileInfos variable. The next step:

final var argumentsList = new ArrayList<Arguments>();
for (int i = 0; i < fileInfos.size(); i++) {
    JSONObject initialJson = null;
    for (int j = 0; j <= i; j++) {
        if (j == 0) {
            initialJson = new JSONObject(fileInfos.get(i).content());
        }
        final var curr = fileInfos.get(j);
        deepMerge(new JSONObject(curr.content()), initialJson);
        if (j == i) {
            argumentsList.add(Arguments.arguments(curr.name(), initialJson.toString()));
        }
    }
}
return argumentsList.stream();

And here comes the algorithm for creating the backward-compatibility data list itself. Suppose we have the files 1.json, 2.json, and 3.json. Then you should validate these combinations to ensure that newly added changes haven't broken backward compatibility: the content of 1.json; the content of 1.json + 2.json (the latter rewrites existing fields); the content of 1.json + 2.json + 3.json (the latter rewrites existing fields). The loop above does this by calling the deepMerge method: the first parameter is the file with the current index in the loop, and initialJson is the target into which all the later changes are merged. The deepMerge method declaration:

private static void deepMerge(JSONObject source, JSONObject target) {
    final var names = requireNonNullElse(source.names(), new JSONArray());
    for (int i = 0; i < names.length(); i++) {
        final String key = (String) names.get(i);
        final Object value = source.get(key);
        if (!target.has(key)) {
            // new value for key
            target.put(key, value);
        } else {
            // existing value for key: recursively deep merge
            if (value instanceof JSONObject valueJson) {
                deepMerge(valueJson, target.getJSONObject(key));
            } else {
                target.put(key, value);
            }
        }
    }
}

I didn't write this code example myself; I took it from this StackOverflow question. There are two arguments, source and target. The source is the one to read new values from, and the target is the JSONObject to put them into or replace existing ones. Finally, to supply the computed data set, just add the @MethodSource annotation to the shouldRemainBackwardCompatibility test method:

@ParameterizedTest
@MethodSource("provideMessagesExamples")
void shouldRemainBackwardCompatibility(String filename, String json) {
    ...
}

Here is the test run result: as you can see, the example with a single file works as expected.
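As a quick sanity check, here is a sketch of my own (file contents assumed for illustration) of what the merge produces for two increment files:

// 1.json provides the base document; 2.json is the increment.
JSONObject target = new JSONObject("{\"orderId\": 1, \"address\": \"NYC\"}");
JSONObject source = new JSONObject("{\"newAddress\": {\"city\": \"NYC\"}}");
deepMerge(source, target);
// target now holds {"orderId": 1, "address": "NYC", "newAddress": {"city": "NYC"}}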
Evolving the data schema. Let's get back to the initial point of the article. There is an address field of String type, and we want to store city, country, and postcode separately. As we already discussed, we cannot just replace the existing field with a new value type. Therefore we should add a new field and mark the previous one as deprecated. The updated version of the OrderCreated class:

@Data
@Builder
@Jacksonized
public class OrderCreated {
    private final Long orderId;
    @Deprecated(forRemoval = true)
    private final String address;
    private final Address newAddress;

    public OrderCreated(Long orderId, String address, Address newAddress) {
        this.orderId = notNull(orderId, "Order id cannot be null");
        this.address = notNull(address, "Address cannot be null");
        this.newAddress = notNull(newAddress, "Complex address cannot be null");
    }

    @Builder
    @Data
    @Jacksonized
    public static class Address {
        private final String city;
        private final String country;
        private final Integer postcode;

        public Address(String city, String country, Integer postcode) {
            this.city = notNull(city, "City cannot be null");
            this.country = notNull(country, "Country cannot be null");
            this.postcode = notNull(postcode, "Postcode cannot be null");
        }
    }
}

I deprecated the address field and added the newAddress field, a complex object containing country, city, and postcode. All right, now we need to add a new JSON file with an example of a filled newAddress:

{ "newAddress": { "city": "NYC", "country": "USA", "postcode": … } }

Let's run the test to check backward compatibility. We did break it! Why is that? Because we marked the newAddress field as mandatory and checked it for non-nullability. No surprise that the first file's content failed the test: it has no mention of the newAddress field. So here comes an important conclusion: if you add a new field to an existing schema, you cannot check it for non-nullability. There is no guarantee that the producer will instantly start sending messages with the newly stated field. Meaning that the solution is to use default values. The fixed OrderCreated class declaration:

@Data
@Builder
@Jacksonized
public class OrderCreated {
    private final Long orderId;
    @Deprecated(forRemoval = true)
    private final String address;
    private final Address newAddress;

    public OrderCreated(Long orderId, String address, Address newAddress) {
        this.orderId = notNull(orderId, "Order id cannot be null");
        this.address = notNull(address, "Address cannot be null");
        this.newAddress = requireNonNullElse(newAddress, Address.builder().build());
    }

    @Builder
    @Data
    @Jacksonized
    public static class Address {
        @Builder.Default
        private final String city = null;
        @Builder.Default
        private final String country = null;
        @Builder.Default
        private final Integer postcode = null;
    }
}

We don't want to deal with a null value; that's why we assign an Address.builder().build() instance if the supplied newAddress equals null, which basically means it's not present in the provided JSON. The @Builder.Default Lombok annotation does the same thing as the requireNonNullElse function, but without specifying the constructor manually. Let's run the backward-compatibility test again. Now we're ready: the new field's presence has not broken backward compatibility, and we can update the contracts safely on both sides, in any order.

A few words about forward compatibility. I've seen that developers tend to care about forward compatibility much less than the backward one, though it's crucial as well. I've covered the topic of forward-compatible enums in another article; if you're dealing with message-driven systems, I strongly recommend you check it out. What if somebody puts additional properties that aren't present in the message schema? Look at the example below:

{ "orderId": …, "address": "NYC", "newAddress": { "city": "NYC", "country": "USA", "postcode": … }, "unknown_field": "unknown value" }

Nothing should break, right? Anyway, there is no field to map the value to. However, let's add a backward-compatibility test to ensure consistency. The new JSON file declaration:

{ "unknown_field": "unknown value" }

Let's run the test to see the results. Wow, something went wrong. Look at the output message:

Caused by: UnrecognizedPropertyException: Unrecognized field "unknown_field"

By default, Jackson treats the presence of an unknown property as an error. Sometimes that can be really annoying, especially if there are many producers and some may not be part of your project. Thankfully, the fix is a piece of cake. You just need to add the @JsonIgnoreProperties annotation with the proper value:

@Data
@Builder
@Jacksonized
@JsonIgnoreProperties(ignoreUnknown = true)
public class OrderCreated {
    ...
}
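As an aside (my addition, not from the article): the same behaviour can also be switched off globally on the ObjectMapper instead of per class, using Jackson's standard deserialization feature:

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// Mapper-wide alternative to @JsonIgnoreProperties(ignoreUnknown = true).
ObjectMapper mapper = new ObjectMapper()
        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);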
Let's run the test again. It still fails, though the message is different:

Expected: unknown_field but none found

The assertion checks that the input and the serialized JSON values are equal, but in this situation that's not the case. The input JSON contains unknown_field, but the presence of @JsonIgnoreProperties(ignoreUnknown = true) simply erases it. So we lost the value. Again, this might be acceptable under particular circumstances. But what if you still want to guarantee that you won't lose user-provided fields, even if there is no business logic on those values? For example, your service might act as middleware that consumes messages from one Kafka topic and produces them to another. Jackson has a solution for this case as well: the @JsonAnySetter and @JsonAnyGetter annotations help to store unknown values and serialize them back later. The final version of the OrderCreated class declaration:

@Data
@Builder
@Jacksonized
@JsonIgnoreProperties(ignoreUnknown = true)
public class OrderCreated {
    private final Long orderId;
    @Deprecated(forRemoval = true)
    private final String address;
    private final Address newAddress;

    @JsonAnySetter
    @Singular("any")
    private final Map<String, Object> additionalProperties;

    @JsonAnyGetter
    Map<String, Object> getAdditionalProperties() {
        return additionalProperties;
    }

    public OrderCreated(Long orderId, String address, Address newAddress,
                        Map<String, Object> additionalProperties) {
        this.orderId = notNull(orderId, "Order id cannot be null");
        this.address = notNull(address, "Address cannot be null");
        this.newAddress = requireNonNullElse(newAddress, Address.builder().build());
        this.additionalProperties = requireNonNullElse(additionalProperties, emptyMap());
    }

    @Builder
    @Data
    @Jacksonized
    public static class Address {
        @Builder.Default
        private final String city = null;
        @Builder.Default
        private final String country = null;
        @Builder.Default
        private final Integer postcode = null;
    }
}

I marked getAdditionalProperties as package-private, so the users of this class cannot access the unknown property values. Let's run the backward-compatibility test again: everything works like a charm now. The moral is that forward compatibility might be as important as the backward one. And if you want to maintain it, make sure that you don't simply erase unknown values but transmit them further, to be serialized in the expected JSON.

Conclusion. That's all I wanted to tell you about backward, and a bit of forward, compatibility. Hope this knowledge will be useful to keep your contracts consistent and avoid unexpected deserialization errors on the consumer side. If you have questions or suggestions, leave your comments down below. Thanks for reading!

Resources: the repository with the whole setup of unit testing backward compatibility; Apache Kafka; RabbitMQ; the Java Jackson library; the @Jacksonized Lombok annotation; the @Builder Lombok annotation; the @JsonCreator and @JsonProperty Jackson annotations; my article about a Flyway migrations naming strategy in a big project; the JSONAssert library; the StackOverflow question with an example of merging two JSON objects; my article about forward-compatible enum values in an API with Java Jackson; the @JsonIgnoreProperties Jackson annotation; the @JsonAnySetter and @JsonAnyGetter Jackson annotations.

2023-02-18 08:50:27
Overseas TECH DEV Community Better Ways To Handle Data Storage on The Web Client https://dev.to/ecorreia/better-ways-to-handle-data-storage-on-the-web-client-4219

Better Ways To Handle Data Storage on The Web Client. Whenever you mention data storage and state management on the web client, different people will offer different solutions, from the vanilla-developer fans who like to mess with the browser's raw storage APIs to those who prefer third-party libraries. There is a lot to consider when deciding how to handle data on web clients.

Native browser storage solutions. First, let's look at how browsers allow us to manage, track, and store data from our web applications.

LocalStorage: part of the Web Storage API, localStorage is a data storage solution for in-between sessions. This is data you want the browser to keep for the next time the user visits the website. Such data has no expiration date, but it is cleared after a private or incognito session ends.

SessionStorage: also part of the Web Storage API, sessionStorage is a data storage solution for a single session. It works just like localStorage, but the data is cleared as soon as the app's tab is closed.

IndexedDB: a low-level API for client storage. Just like localStorage, the data is kept between sessions, but unlike localStorage it handles larger, more complex, structured data, and it is built for performance. Another advantage is that it is accessible inside a Web Worker, where the Web Storage API is not available.

WebSQL: also a low-level API for client storage, intended to work like SQLite, but currently in the process of being deprecated from browsers.

Cookies: not per se a data storage solution, but a way to set a small amount of data, often used for tracking, web preferences, and session management (like sessionStorage). Nowadays cookies are mostly used for tracking, as more powerful data storage solutions now exist in the browser.

In-memory (JavaScript): increasingly, we have seen complex applications keep all their data in memory. This is more flexible, as the developer can decide what the API should look like. It is great for data that only needs to exist between specific contexts, but it is often used for global state management as well.

As you can see, the browser provides plenty of solutions depending on your data needs. However, it is more complex than it seems, which leads to a lot of confusion and mental overload for developers.
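To make the Web Storage semantics above concrete, here is a small sketch of my own (key names and values invented); it also previews the string-only limitation discussed next:

// Web Storage holds strings only, so structured data must round-trip through JSON.
const prefs = { theme: "dark", fontSize: 14 };
localStorage.setItem("prefs", JSON.stringify(prefs)); // kept for future visits
sessionStorage.setItem("draft", JSON.stringify({ text: "hello" })); // gone when the tab closes
const restored = JSON.parse(localStorage.getItem("prefs"));
console.log(restored.fontSize); // 14, a number again thanks to JSON parsing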
The problem with browser storage solutions.

Serialization: the Web Storage API only supports strings, which means that for complex data you must use JSON strings or find a good serialization package to handle everything.

API complexity: if you decide to try IndexedDB or WebSQL you will quickly regret it, since the API is super complex. You are much better off using a third-party library if that's the way you want to go.

API differences: all of these data storage solutions have different APIs, which means you need to learn them all in order to use them. One application may have different data storage needs, and switching back and forth between these APIs adds complexity to your application, as they behave differently.

Data type support: not every data type of your application can be stored. Again, you need to concern yourself with serialization and data types when picking solutions.

Asynchronous actions: IndexedDB is asynchronous, while the Web Storage API is not. Being asynchronous is super important so that data processing does not block your code.

Storage limits and data size: although it is getting better, different browsers handle storage limits differently, and limits also depend on the user's disk space. As a developer you need to be mindful of this, to make sure you don't run into quota issues (which will break your app) or to find flexible solutions to keep data fresh.

Availability: as mentioned above, only IndexedDB is available in a Web Worker, which means you may need to find ways around this if you need another type of storage solution there. In general, all these storage solutions are well supported in all browsers.

Structure and validation: normally you want your data to have some structure; otherwise you will spend a lot of time performing validation checks and data mapping. You may also need to handle defaults, which is additional complexity. You may look for some sort of schema to guarantee data integrity, so you have to worry about these things less.

A lot of these problems can be solved quickly; others, like limits, you just need to be mindful of as a web developer. Let's look at a few of my preferred solutions.

Third-party library recommendations, if dealing with IndexedDB and WebSQL:

Dexie: a wrapper around IndexedDB which removes the complexity and pairs up fine with all your UI frameworks.

const db = new Dexie("MyDatabase");

// Declare tables, IDs and indexes
db.version(1).stores({
  friends: "++id, name, age"
});

// Find some old friends
const oldFriends = await db.friends.where("age").above(75).toArray();

// or make a new one
await db.friends.add({
  name: "Camilla",
  age: 25,
  street: "East 13th Street",
  picture: await getBlob("camilla.png")
});

PouchDB: a wrapper around IndexedDB and WebSQL which is compatible with your backend CouchDB setup.

const db = new PouchDB("dbname");

db.put({
  _id: "dave@gmail.com",
  name: "David",
  age: 69
});

db.changes().on("change", function () {
  console.log("Ch-Ch-Changes");
});

db.replicate.to("http://example.com/mydb");

JSStore: a wrapper around IndexedDB that has a SQL-like behavior.

const dbName = "JsStore_Demo";
const connection = new JsStore.Connection(new Worker("jsstore.worker.js"));

const tblProduct = {
  name: "Product",
  columns: {
    // Here "id" is the name of the column
    id: { primaryKey: true, autoIncrement: true },
    itemName: { notNull: true, dataType: "string" },
    price: { notNull: true, dataType: "number" },
    quantity: { notNull: true, dataType: "number" }
  }
};

const database = { name: dbName, tables: [tblProduct] };
await connection.initDb(database);

const insertCount = await connection.insert({
  into: "Product",
  values: [{ itemName: "Blue Jeans", price: 2000, quantity: 1000 }]
});

Handles all storage solutions, is simple to use, and provides additional features:

LocalForage: a wrapper around IndexedDB, WebSQL, LocalStorage, and SessionStorage, with a way to define more interfaces (called drivers) for additional storage solutions. It does a great job handling all your serialization needs, it is asynchronous, and it handles a large set of data types. Its API resembles the Web Storage API, and it is ridiculously simple to learn.

const todoStore = localforage.createInstance({
  name: "todo",
  version: 1
});

const todoId = crypto.randomUUID();

const todo = await todoStore.setItem(todoId, {
  id: todoId,
  name: "Check LocalForage out",
  description: "Try to find the perfect storage solution for my app"
});

todoStore.removeItem(todoId);
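A small follow-up sketch of mine: reading the value back uses the same promise-based API (localforage's getItem resolves to null for missing keys).

const saved = await todoStore.getItem(todoId);
console.log(saved ? saved.name : "not found");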
ClientWebStorage: a wrapper on top of LocalForage, which means it inherits all of LocalForage's benefits, but it takes things to a whole new level: your data storage also becomes your preferred application state manager. It is asynchronous, event-driven, and schema-based; it handles data defaults and type checks for you; it allows for data action subscription and interception to handle side effects; and it integrates nicely with the backend server for continuous data synchronization. It can also be used as a state-management solution for any UI framework, like React and Angular.

interface ToDo {
  name: string;
  description: string;
  complete: boolean;
}

const todoStore = new ClientStore<ToDo>("todo", {
  // define schema
  name: String,
  description: "No Description",
  complete: false
}, {
  // config: type can be INDEXEDDB, LOCALSTORAGE, WEBSQL, or MEMORYSTORAGE (default)
  type: INDEXEDDB,
  version: 1
});

// listen to action events on the store
todoStore.on(EventType.Error, (error, action, data) => {
  trackErrorJS.trackError(error); // track errors
  console.log(`Action ${action} failed with error ${error.message} for data`, data);
});

// intercept the create action to call the API and return the API response to update todoStore with
todoStore.intercept(EventType.Created, (data) => {
  return todoService.createTodo(data);
});

// intercept the delete action to call the API for the same action
todoStore.intercept(EventType.Removed, (id) => {
  todoService.deleteTodo(id);
});

const todo = await todoStore.createItem({
  name: "Check ClientWebStorage out"
});
// Creates:
// { id: <generated>, createdDate: <date>, lastUpdatedDate: <date>,
//   name: "Check ClientWebStorage out", description: "No Description", complete: false }

await todoStore.removeItem(todo.id);

If you are looking for a single, full data-storage and application-state-management solution, I recommend taking a look at ClientWebStorage. If you just need a data storage solution for everything, LocalForage is the one. If you are just looking for a solution for your IndexedDB needs, I find Dexie to be one of the best, but depending on other needs, the others in the list are also good to consider.

Conclusion. The web client is a great platform, and as the complexity of web applications increases, solutions for our data storage and management follow. This is a very sensitive topic that needs careful consideration: you must understand your data well to decide which solutions to go for. There are no silver bullets here, but in general I love swiss-army-knife solutions like ClientWebStorage and LocalForage, which offer everything out of the box in a very simple and powerful API that still allows me to configure and extend as needed.

YouTube channel: Before Semicolon. Website: beforesemicolon.com

2023-02-18 08:20:57
Overseas TECH DEV Community Backend Delivery - Hands-On Node & Docker 1/2 https://dev.to/costica/backend-delivery-hands-on-node-docker-12-3mol

Backend Delivery: Hands-On Node & Docker, 1/2. We're going to explore how to set yourself up for success when it comes to exploring Kubernetes and running Dockerized applications. It all starts with understanding the whys and hows of delivering your application while staying productive in your local dev environment.

Why. Every time I want to start a new project, I feel like I'm reinventing the wheel. There are so many tutorials that cover Node, Docker, and debugging Node… but not how to combine those. While each of those is individually a minutes-long setup, the internet runs short when it comes to an explanation of how to set up the whole thing. So let us follow The Twelve Factors, start thinking "cloud first", and build a project setup that makes sense for both delivery and development.

What's covered. In this article: building & running your container, with in-depth examples and explanations of how I reason about setting things up. In the follow-up article: an in-depth explanation of using containers for "one command, ready to run" projects, with hot reload, debugger, and tests.

Delivery-first mentality. Say you finished your project and want to deploy it in the wild so other people can access it. All you have to do is run node src/app.js, right? Right? Not quite. What were the steps you took on your local environment to run your application? Any prerequisites? Any OS dependencies? Etc., etc.… The same goes for any server: it needs to have some things installed so that it can run your code. This applies to your friend's PC and to your mom's laptop too. You need to actually install Node, ideally the same version, so that it runs the same way it does for you. The problem, if you only do this as a "final step", is that most likely your app won't work anywhere else than on your local machine. Or you are a remember-it-all kind of person who knows exactly all the steps needed to run the application, and you're willing (and have the time) to do that setup manually on multiple machines.

The project. Just in case you want to follow along, this is the app we're going to consider: a simple Express application with CORS enabled, and nodemon for local development & hot reload.

// src/app.js
import * as dotenv from "dotenv";
dotenv.config();
import cors from "cors";
import express from "express";

const app = express();
app.use(cors());

app.get("/", (req, res) => {
  const testDebug = 0; // dummy variable, handy for setting a breakpoint later
  res.send("Hello World");
});

const port = process.env.APP_PORT;
app.listen(port, () => console.log(`App listening on port ${port}`));

// package.json
{
  "scripts": {
    "dev": "nodemon --inspect src/app.js"
  },
  "dependencies": {
    "cors": "…",
    "dotenv": "…",
    "express": "…"
  },
  "devDependencies": {
    "nodemon": "…"
  },
  "type": "module"
}

While there are many other things a real project would need, this should suffice for this demo. (This is the commit used for this explanation, btw: node-quickstart, local dev.) After cloning or downloading the files, all you have to do is run a simple npm install; it will fetch all the dependencies defined in package.json. Running npm run dev will start the nodemon process, simulating node --watch and exposing the inspector port for debugging. Or we can run this in "prod mode" by calling node src/app.js. This is all that's needed to run this project on our local machine. No it's not, you silly! You also need Node to run the app and npm to install the dependencies, and, to add to that, probably some specific versions of those.
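Since the app reads APP_PORT through dotenv, a minimal .env sketch would look like this (contents assumed; the article doesn't show the file):

# .env, consumed by dotenv.config() at startup
APP_PORT=3000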
Containers. While we're discussing the simple case of just having Node & npm here, in time projects and their runtime dependencies grow. While keeping your laptop configuration in sync with the server's config is probably possible, wouldn't it be easier if you… don't have to do it? How about a way of deploying your application on any server (some conditions may apply) or laptop, without having to install Node, or any other dependencies our app might have? Say hello to containerization, the buzzword of the past decade. Instead of giving a wishlist of prerequisites to your mom and then emailing her the app.js file, we're going to wrap everything inside a… let's call it a bundle, for simplicity.

What "bundle" means. This bundle is actually a… "bundle". It has both the application code (the app.js that we wrote) and all the dependencies needed to run our app (the Node engine and the app's dependencies found in the node_modules folder). At first glance this is nice: we have everything we need to run the app in the bundle. However, I am still in the same uncomfortable position as earlier, the only difference being that instead of "you need to install Node", I can now say to my mom: "you need to install a container runtime so you can run the image I sent you". It is, however, an improvement. While not all laptops or servers will have the exact same version of Node, most "dev compatible" machines will have a container runtime, something that knows how to run the bundle. There are more of them out there, but for brevity we're just going to call the runtime Docker. So we have changed our run instructions from "install node" to "install Docker and run the file I gave you, like so: docker run bla-bla". Why is it an improvement? I can now send any file to my mom and she will be able to run it. Let's say we install Docker on her laptop beforehand; then all she has to do is run them. That's a nice improvement, I'd say.

Let's create a bundle. Creating a bundle starts from a template, a Dockerfile. It looks something like this:

# Dockerfile
FROM node
ARG WEB_SERVER_NODE_APP_PORT
ENV WEB_SERVER_NODE_APP_PORT=$WEB_SERVER_NODE_APP_PORT

# Create app directory
WORKDIR /node-app

# Install app dependencies.
# A wildcard is used to ensure both package.json AND package-lock.json
# are copied, where available (npm@5+).
COPY package*.json ./
RUN npm install --production

COPY src src

EXPOSE $WEB_SERVER_NODE_APP_PORT
CMD ["npm", "run", "server"]

I know, it doesn't make sense yet. All the tutorials I've seen so far mix multiple concepts, like building, running, exposing ports, binding volumes, etc., without any explanation, and everyone gets confused. Let's take it step by step.

Build & run your application, files only. As we've seen so far, just running the app on our local machine is simple; things get complicated only when we add different things into the mix. So let's keep it simple:

# Dockerfile

# Set up the base: a Linux environment with Node.js installed.
FROM node

# Create a new folder inside the Linux OS of the image. This command both
# creates the folder & cd's into it ("pwd" after this command outputs /node-app).
WORKDIR /node-app

# Copy our package.json & package-lock.json to /node-app. Because we're already
# in /node-app, we can just use the current directory as the destination.
# COPY <host files> <bundle files>
COPY package*.json ./

# Install the dependencies inside the bundle. While you might be tempted to copy
# node_modules just because you already have it on your local machine, don't give
# in to the temptation. Running the install inside the bundle gives you
# reproducibility: it runs in the context of the image's Node, and your machine
# doesn't even need node or npm installed.
RUN npm install --omit=dev

# And finally, make sure we also copy our actual application files to the bundle.
COPY src src
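One companion file the article implies but doesn't show (my addition): a .dockerignore keeps the host's node_modules out of the build context in the first place, so a stray COPY can never drag it in.

# .dockerignore (assumed contents)
node_modules
npm-debug.log
.env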
Behind the scenes, this is a rough estimation of what's going on: the Dockerfile is the template, the instructions for how the bundle should be built. The actual building is done by running docker build. Make sure the terminal has its current folder cd'ed into the root project, same level as the Dockerfile:

docker build --no-cache --tag test-bundle .

docker build: I'm sure you got that part, we're at the build step. --no-cache: optional, but I like to add it because Docker cache is something I don't master; why get frustrated that things don't get changed, when we can make sure with this simple flag that we get a clean slate every time? --tag <TAG_NAME>: sets how the bundle is going to be called. The trailing ".": sets the context of the command; in our case, since we are in the same folder as the Dockerfile, we can just mark it as ".", which means the current folder.

Running what we created. Now let's run our bundle:

docker run test-bundle

Aaand… it might seem that nothing happened. But it did actually run, even if just for a moment. Since we didn't specify what command it should run, it defaulted to running a simple node command; it defaulted to that because that's defined in the node template we're using as a base. And because that node invocation is supposed to run and then exit, our container also exited instantly. Containers are supposed to be "short-lived". Let's make it run our app instead; and since our app happens to be a web server, a long-running process, the container won't exit immediately:

docker run test-bundle node src/app.js

Now we get the output: "App listening on port undefined".

Environment values. Our app's port is "dynamic": it listens on whatever port is specified in the environment variable called APP_PORT. Let's hard-code that value for a bit, to something like 3000, just to test our bundle:

// const port = process.env.APP_PORT;
const port = 3000;

We can't test this simple change by just re-running docker run test-bundle node src/app.js, because the bundle was already built with the version of the app.js file that reads the PORT value from an environment variable. We have to rebuild the template & run the application again:

docker build --no-cache --tag test-bundle . && docker run test-bundle node src/app.js

Success: "App listening on port 3000". Let's see it in action: in any browser, go to localhost:3000. And it won't work…

Checking the app inside the container. Everything's alright, I promise. The app really does work on that port, but inside the running container. Let's run a GET request from inside the container, similar to what the browser does when we access it from the host machine. First, we need to know what the container ID is:

docker ps
docker exec -it <CONTAINER_NAME> /bin/bash

A more user/dev-friendly way to get a terminal into a container running on your local machine is via Docker's Dashboard. Now that we have a shell terminal inside the container, let's test our application:

curl localhost:3000

Success! We get a "Hello World" back.

Exposing ports so we can use our app from outside the container. That's not helpful, is it? A web server is supposed to be accessed from a browser, not from inside a container through a terminal. What we need to do next is set up some port forwarding: -p HOST_PORT:CONTAINER_PORT.

docker build --no-cache --tag test-bundle . && docker run -p 3000:3000 test-bundle node src/app.js

Finally, accessing localhost:3000 inside a browser works. OK, we tested our bundle: it works just fine, even though all we did was make sure we have some files inside of it.
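To make the host/container split explicit, here's a quick sketch of my own (port numbers assumed) forwarding a different host port to the same container port:

# Host port 8080 -> container port 3000; the app inside still listens on 3000.
docker run -p 8080:3000 test-bundle node src/app.js

# On the host:
curl http://localhost:8080   # reaches the app
curl http://localhost:3000   # fails: nothing listens on 3000 on the host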
A glance into the future: how containers are actually run. Let's go back to the "hardcoded" app port. It isn't an issue to hardcode your app port like this: we can rely on the fact that the app will always listen on that port, and then just set up the container to do the forwarding between CUSTOM_PORT <-> the hardcoded one. I'm not a big fan of this solution, though, simply because if I want to understand why a bundle behaves as it does, I have to check its build command, its run command, and also the application code. This is not a real issue now, but it will become one if we try to run this container inside Kubernetes, for example. Without going into too many details, in the future we will want to configure something like this:

spec:
  containers:
    - image: localhost/project-name/project
      ports:
        - containerPort: <CONTAINER_PORT>

Preparing for the real world. It will be a lot easier if we just set our container and app port binding right from the beginning. The trick here is to make sure we keep the two values in sync: on one hand, we care about the port that the container exposes to the outer world; on the other hand, we care about the port that the app listens to. So revert the app code to read the env variable instead of being hardcoded:

// const port = 3000;
const port = process.env.APP_PORT;

The end goal is to have complete control over: what APP_PORT gets defined at build time, when the bundle is created; keeping APP_PORT in sync with the port exposed by the container; and what HOST_PORT is bound to the APP_PORT. If the two (APP_PORT and the container's exposed port) are the same, the exposing part of the run command will be -p HOST_PORT:APP_PORT. The advantage of doing that is that it allows our run command to be as slim as:

docker run -p HOST_PORT:APP_PORT test-bundle node src/app.js

And the good news is that we can find the APP_PORT (the exposed port) without looking into the app.js file: we can just check the Dockerfile.

Build arguments. For that, we're going to use a combination of build arguments and env variables:

# Dockerfile

# Set the build-arg variable with a default value. It can be overridden with
# --build-arg; note that if it is overridden, the run command must also change.
# We rely on this build argument being the same in the ENV and EXPOSE commands.
ARG ARG_WEB_SERVER_NODE_APP_PORT=3000

# If no env variable is provided, default it to the ARG we already set.
ENV APP_PORT=$ARG_WEB_SERVER_NODE_APP_PORT

# Expose the port we define when we build the container, or the default setting.
EXPOSE $ARG_WEB_SERVER_NODE_APP_PORT

The complete Dockerfile looks like so:

# Dockerfile
FROM node
ARG ARG_WEB_SERVER_NODE_APP_PORT=3000
ENV APP_PORT=$ARG_WEB_SERVER_NODE_APP_PORT
WORKDIR /node-app
COPY package*.json ./
RUN npm install --omit=dev
COPY src src
EXPOSE $ARG_WEB_SERVER_NODE_APP_PORT

Building the bundle now becomes:

docker build --no-cache -t test-bundle .

And we can run it using:

docker run -p 3000:3000 test-bundle node src/app.js

Since we didn't specify any --build-arg, both APP_PORT and the exposed port defaulted to the defined 3000. That means all we have to do is choose what host port to forward to the container's exposed port; in the example above, we forwarded to the default. What happens if we want to use something else for the APP_PORT?

docker build --no-cache --build-arg ARG_WEB_SERVER_NODE_APP_PORT=4000 -t test-bundle .

And then run:

docker run -p 3000:4000 test-bundle node src/app.js
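If you'd rather not even open the Dockerfile, a hedged sketch of mine: the standard docker CLI can report which ports an image exposes (output shape abbreviated).

docker image inspect --format '{{json .Config.ExposedPorts}}' test-bundle
# e.g. {"3000/tcp":{}}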
Short recap. So we know how to build our app, and we know how to run it using docker run … node src/app.js. However, that's not a standard; that's just how I chose to run it. This is not ideal, for multiple reasons: figuring out how to run the container requires some application-code investigation; if I want to change the command, I should notify everyone using the container to run it with the new command; and there's also the chance that they will find out (surprise!) that their way of running it no longer works. So let's default the command that is run, inside the Dockerfile:

CMD ["node", "src/app.js"]

We can now rebuild & run the container in a much simpler way:

docker build --no-cache --tag test-bundle . && docker run -p 3000:3000 test-bundle

This doesn't reduce our flexibility in any way, as the CMD can still be overridden at runtime:

docker run -p 3000:3000 test-bundle <some other command>

Build & run, aka Deliver. A final overview of what we achieved so far: from "install node", "npm install", and "node src/app.js", to docker run. It's not much, but it's honest work. However, this only solves the delivery part.

Coming up: shareable dev setups with Docker. While this article covers the delivery part and some basic concepts of Docker (environment variables and port binding)… we're just scratching the surface. The real fun begins in the 2nd part of the article, where we start creating Docker configurations for the dev environment. The next article covers how to run the code and hot-reload it inside a container each time you hit "save", how to debug your code when it is run inside a container, and how to not rely on long, hard-to-remember docker run commands. And the good news is… it's going to be easy, since we're going to piggyback on some of the concepts learned here. Until the next one, bye-bye!

2023-02-18 08:15:00
Overseas TECH DEV Community Continuous Delivery: HTML to Kubernetes - the why https://dev.to/costica/continuous-delivery-html-to-kubernetes-the-why-323p

Continuous Delivery: HTML to Kubernetes, the why. I think Continuous Delivery is the magic sauce that allows the web to be the go-to platform for all software nowadays. Web development is about evolving software from idea to actual requirements, from an MVP tested by your friends & family to a system used by millions of users.

Why. The internet is full of articles on "How to create a CRUD API with Node.js", "How to configure Nginx as a reverse proxy", or "How to do X for Y". However, I think there's a lack of detail about why some patterns and ideologies emerged in the last few years. What I think is missing, or maybe is only accessible via paid courses rather than the general "internet archive", is how the parts of a system work together over the whole lifecycle of a project, and how a CI/CD pipeline serves so many uses and allows you to deliver fast, bite-sized iterations of your software so that users can benefit from it as soon as you are done coding it. That's not an easy task. There are a lot of trade-offs that should be considered for the long-term success of a project: from small decisions, like what library to use to validate the input, to architectural decisions about how to make a system elastic but cost-efficient at the same time, all the way to being able to iterate fast and not pay a whole team of devs when a couple could suffice.

How I see things. Web dev is not (only) about JavaScript or Kubernetes. And it certainly is not about "PHP is dying" vs "use Node". It is about delivering software. It has evolved, and nowadays it also means delivering all kinds of software in a browser. My hot takes about software engineering, with a naturally biased mentality coming from web development: translating the product requirements into technical ones is not enough; one should take into account actual deliverable milestones. It is better to throw away months of work that was tested by real users than to spend those months trying to "do it right" up front. If you know how to test a feature or project, the actual coding is a breeze. Dev experience has a huge impact on time to market: devs should have an easy way to code and to prove that it works. Yes, I'm the kind of person who wants to get the software out the door as soon as possible; I'm sold on the idea that software not delivered is worth nothing. Yes, this translates into the iterative, agile approach. I also think that moving too fast and delivering bad software is just as bad.

The problem. So… how? How can one move faster and get actual software out the door, into users' browsers, when web dev nowadays means: frontend, backend, CDNs, cloud, JavaScript libraries, microservices, Kubernetes, S3 (or S3-compatible), serverless, Kafka, Go, Rust, Java, async processing, MapReduce, Nginx, HAProxy, L4 LB, L7 LB, Git, DevOps, etc.? The list can go on. The list does go on.

My contribution. The bad news: if you want to create an environment where web dev is actually productive, you need each web developer to be an entire IT department. Yes, it used to be a joke running on LinkedIn a couple of years ago; IMO that joke has unfortunately become true. It doesn't matter if you want to work as frontend, backend, full-stack, DevOps, or cloud engineer: as a "web developer" you now need to understand the big picture of what delivering software in a browser means. The good news: I don't think it is impossible. You don't have to be a master in all areas in order to navigate through what "web" means. Quite the opposite: having a general idea about how things work together is far more important than mastering only one topic.
That's why I am writing down my mental model, my cheatsheet if you will. This is what this whole "Continuous Delivery: HTML to Kubernetes" series is about.

How. Let this be the start of a "how to web in modern ages" series, my rough idea of what's important and how to tackle it. First, set up the basics of how the internet works: delivering software in a browser, frontend apps. Then we're going to look at how important it is to understand how your app is going to be deployed… by running it locally. Then we're going to set up shareable dev setups using docker-compose (work in progress, sorry). The next step is to see how one can integrate them and how to test them: how to be efficient when coding, how to deliver fast. After we have a working PoC, we are going to scale it up using a local Kubernetes cluster and deep-dive into what delivering software at scale means, while still being able to be efficient when coding.

Necessary disclaimer. This is a biased tutorial of what I think one should focus on if one wants to deliver good, scalable software in a timely manner. That means there will be a lot of focus on CI/CD, testing, and understanding how & why things work in the modern world.

Closing notes. Fair warning: most likely the plan will change while I write the different articles and realize I want to write about things in a different order, or about other things altogether. And that's OK, because that's what web dev is about: delivering software before requirements or priorities change. I would also like to get input from the community. I am open to changing my plan (see what I did there?) if I find out certain topics or areas would be of greater interest than what I initially considered. So don't be shy, say hi! …OK, now go on: we can't go into Kubernetes if we can't display anything in a browser. Go get familiar with web apps, because understanding the front end is the stepping stone to understanding how everything works. Just to be clear, that statement comes from a backend dev. Bye-bye!

2023-02-18 08:15:00
Overseas TECH CodeProject Latest Articles curry-console https://www.codeproject.com/Articles/5354858/curry-console event 2023-02-18 08:01:00
News BBC News - Home Christian Atsu found dead after Turkey earthquake - agent https://www.bbc.co.uk/sport/football/64687384?at_medium=RSS&at_campaign=KARANGA agent footballer 2023-02-18 08:36:12
News BBC News - Home New Zealand v England: Stuart Broad magic spell puts tourists on course for victory https://www.bbc.co.uk/sport/cricket/64659001?at_medium=RSS&at_campaign=KARANGA Stuart Broad's devastating late burst puts England on course for victory over New Zealand on day three of the first Test in Mount Maunganui 2023-02-18 08:39:50
