Posted: 2023-07-31 20:18:28 / RSS feed digest for 2023-07-31 20:00 (23 items)

Category / Site / Article title or trending keyword / Link URL / Frequent words, summary, or search volume / Date added
IT ITmedia all-articles list [ITmedia Business Online] Why the hit blockbuster "FF16" is drawing so many harsh reviews https://www.itmedia.co.jp/business/articles/2307/31/news185.html itmedia 2023-07-31 19:23:00
js New posts tagged JavaScript - Qiita Next.js App Router API: JavaScript "Hello, World!" https://qiita.com/yoneapp/items/b3f1a9f993c0ecea8c2a fromnextserverexportasync 2023-07-31 19:37:26
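The Qiita post above walks through a minimal "Hello, World!" API built with the Next.js App Router, and its keyword fragment suggests a route handler imported from next/server. As a rough sketch of what such a handler usually looks like (the file path app/api/hello/route.ts and the JSON body are assumptions, not taken from the post):

```typescript
// app/api/hello/route.ts (hypothetical path): App Router route handlers export
// HTTP-method functions such as GET from a route.ts file.
import { NextResponse } from "next/server";

export async function GET(): Promise<NextResponse> {
  // NextResponse.json serializes the body and sets the Content-Type header.
  return NextResponse.json({ message: "Hello, World!" });
}
```

With a file like this in place, a GET request to /api/hello would return {"message":"Hello, World!"}.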
js New posts tagged JavaScript - Qiita What is a callback function? https://qiita.com/99mm/items/3b7705c5cdddd54bc767 driven 2023-07-31 19:15:58
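The post above is an introduction to callback functions. A minimal illustrative sketch (not code from the post): a callback is simply a function handed to another function so it can be invoked later, which is the basis of event-driven and asynchronous JavaScript.

```typescript
// A callback is a function passed as an argument, to be called back later.
function fetchUser(id: number, onDone: (name: string) => void): void {
  // Simulate asynchronous work; when it finishes, invoke the caller's callback.
  setTimeout(() => onDone(`user-${id}`), 100);
}

fetchUser(42, (name) => {
  console.log(`loaded ${name}`); // runs only after fetchUser's async work completes
});
```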
AWS New posts tagged AWS - Qiita Verifying the integrity of a file uploaded to S3 using its ETag (without multipart upload) https://qiita.com/SAITO_Keita/items/29cf207b969537b2b68b introduction 2023-07-31 19:36:55
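The Qiita article above checks the integrity of a single-part S3 upload by comparing the object's ETag with a locally computed MD5 digest (for plain, non-multipart uploads the ETag is typically the hex MD5 of the body). A hedged sketch of that idea using the AWS SDK for JavaScript v3; bucket, key, and region values are placeholders and error handling is omitted:

```typescript
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand, HeadObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "ap-northeast-1" }); // region is a placeholder

async function uploadAndVerify(bucket: string, key: string, path: string): Promise<boolean> {
  const body = await readFile(path);
  // MD5 of the local file we are about to upload.
  const localMd5 = createHash("md5").update(body).digest("hex");

  await s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body }));

  // For single-part uploads the ETag is the MD5 digest wrapped in double quotes.
  const head = await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: key }));
  const remoteMd5 = head.ETag?.replace(/"/g, "");

  return localMd5 === remoteMd5;
}
```

If the two digests match, the uploaded object arrived intact; with multipart uploads the ETag is no longer a plain MD5, which is why the article scopes itself to the non-multipart case.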
Tech blog Developers.IO Loading empty-string data with Redshift's COPY command and checking the result https://dev.classmethod.jp/articles/redshift-load-empty-string-with-copy/ amazonredshift 2023-07-31 10:18:19
Overseas TECH MakeUseOf How to Group Mission Control Windows by Apps on Your Mac https://www.makeuseof.com/group-mission-control-windows-by-apps-on-mac/ macmission 2023-07-31 10:30:25
Overseas TECH MakeUseOf 8 Ways to Fix iPhone Not Sending Pictures to Android in the Messages App https://www.makeuseof.com/fix-iphone-not-sending-pictures-to-android-in-messages/ messages 2023-07-31 10:16:22
海外TECH DEV Community Create a search engine with PostgreSQL: Postgres vs Elasticsearch https://dev.to/xata/create-a-search-engine-with-postgresql-postgres-vs-elasticsearch-495k Create a search engine with PostgreSQL Postgres vs ElasticsearchIn Part we delved into the capabilities of PostgreSQL s full text search and explored how advanced search features such as relevancy boosters typo tolerance and faceted search can be implemented In this part we ll compare it with Elasticsearch First let s note that Postgres and Elasticsearch are generally not in competition with each other In fact it s very common to see them together in architecture diagrams often in a configuration like this In this architecture the source of truth for the data lives in Postgres which serves the transactional CRUD operations The data is continuously synced to Elasticsearch either via something like Postgres logical replication events change data capture or by the application itself via custom code During this data replication denormalization might be required The search functionality including facets and aggregations is served from Elasticsearch While this architecture is as common as it is for very good reasons it does have a few challenges Dealing with two types of stores means more operational burden and higher infrastructure costs Keeping the data in sync is more challenging than you might think I m planning a dedicated blog for this problem because it s quite interesting Let s just say it s pretty hard to get it completely right The data replication is at best near real time meaning that there can be consistency issues in the search service Point is generally solvable via engineering effort and careful dedicated code From the existing tools PGSync is an open source project that aims to specifically solve this problem ZomboDB is an interesting Postgres extension that tackles point and I think partially point by controlling and querying Elasticsearch through Postgres I haven t yet tried either of these two projects so I can t comment on their trade offs but I wanted to mention them And yes a data platform like Xata solves most of points and by taking that complexity and offering it as a service together with other goodies That said if the Postgres full text search functionality is enough for your use case making use of it promises to significantly simplify your architecture and application In this version Postgres serves both the CRUD app needs and the full text search needs This means you don t need to operate two types of stores no more data replication no more denormalization no more eventual consistency The search engine built into Postgres happens to support ACID transactions joins between tables constraints e g not null or unique referential integrity foreign keys and all the other Postgres goodies that make application development simpler Therefore it s no wonder the Hacker News thread for our part blog post had a lively discussion about the pros and cons of this approach Can we go for the Postgres only solution or does the best tool for the job argument wins We re going to compare the convenience search relevancy performance and scalability of the two options DIY versus built inAs we showed in part you can replicate a lot of the Elasticsearch functionality in Postgres even more advanced things like relevancy boosters typo tolerance suggesters autocomplete or semantic vector search However it s not always straight forward An example where it s not too simple is with typo tolerance called fuzziness in 
Elasticsearch It s not available out of the box in Postgres but you can implement it with the following steps index all lexemes words from all documents in a separate tablefor each word in the query use similarity or Levenshtein distance to search in this tablemodify the search query to include any words that are foundWhile the above is quite doable in dedicated search engines like Elasticsearch you can enable typo tolerance with a simple flag POST recipes search query multi match query biscaits fuzziness Search relevancy BM and TF IDFThe default ranking algorithm for keyword search in Elasticsearch is BM With the release of Elasticsearch in it dethroned TF IDF as the default ranking algorithm Postgres doesn t support either of them mainly because its ranking functions explained in here don t have access to global word frequency data which is needed by these algorithms To see how relevant pun intended or not so relevant that might be let s look at the ranking functions and algorithms from simple to complex ts rank Postgres function ranks based on the term frequency In other words it does the “TF term frequency part of TF IDF The principle is that if you are searching for a word the more often that word shows up in the matching document the higher the score In addition to using simple TF Postgres provides ways to normalize the term frequency into a score For instance one approach is to divide it by the document length ts rank cd Postgres function rank cover density In addition to the term frequency this function also takes into account the “cover density meaning the proximity of the terms in the document TF IDF term frequency inverse document frequency In addition to the term frequency this algorithm “penalizes words that are very common in the overall data set So if the word “egg matches but that word is super common because we have a recipes dataset it is valued less compared to other words in the query BM this algorithm is based on a probabilistic model of relevancy While the TF IDF formula is mostly based on intuition and practical experiments BM is the result of more formal mathematical research If you re curious about the said mathematical research I recommend this talk that makes it accessible Interestingly the resulting BM formula is not all that different from TF IDF but it incorporates a couple more concepts the frequency saturation and the document length Ultimately this gives better results over a wider range of document types There s no question that BM is a more advanced relevancy algorithm than what ts rank or ts rank cd use BM uses more input signals it s based on better heuristics and it typically doesn t require tuning One practical effect of BM is that it automatically penalizes the very common words the “in “or etc also called “stop words which means that they don t need to be excluded from the index This is why the Postgres english configuration for to tsvector removes the stop words details here in part but the Elasticsearch standard analyzer doesn t It doesn t need to While BM is superior there are some pro Postgres arguments to be considered if you aggressively exclude the stop words like the english configuration in Postgres does that compensates for the lack of IDF in some cases in practice there might be stronger signals of relevancy in the data itself upvotes reviews etc See the section on boosters from part on how to make use of them in Postgres Could BM or TF IDF be implemented on top of the existing Postgres functionality Actually yes See this blog post that 
uses ts stats and ts debug to compute TF IDF It s not very simple but possible as usual with Postgres Performance and scalability considerationsLet s start by noting that the two systems couldn t be more different PostgreSQL has a single master and multiple read replicas Elasticsearch has horizontal scalability via sharding Postgres is relational supports joining tables has ACID transactions and offers constraints while Elasticsearch is document oriented and offers consistency guarantees only per document Postgres is row oriented while Elasticsearch has an internal column store in the form of doc values Postgres is native C code while Elasticsearch runs on the JVM Postgres has a connection oriented wire protocol Elasticsearch has a REST like DSL over HTTP All of these impact performance and scalability and it s no surprise then that the two tend to shine in different areas PostgreSQL is commonly used as a primary data store whereas Elasticsearch is usually utilized as a secondary store particularly for search and analytics on time series data such as logs And yet they do overlap on the use case of full text search which is the point of this blog post I was curious to know at roughly what amount of data Postgres slows down compared to Elasticsearch On the movies dataset K rows that we used in part all queries were reasonably fast lt ms So for the testing here I chose a larger data set a recipes dataset from Kaggle containing M recipes The commands to load the CSV file in PostgreSQL can be found in this gist For Elasticsearch I ve loaded the same CSV file using this tool After loading the data I started by running searches similar to the ones used in part SELECT title ts rank search websearch to tsquery english darth vader rank FROM recipes WHERE search websearch to tsquery english darth vader ORDER BY rank DESC limit title rank Darth Vader Biscuits Cloud Pancakes rows Time msFor Elasticsearch I ve used the following to run the search POST recipes search query query string query darth AND vader I ran each query five times and recorded the best and worst times Typically the first query of a kind was the slowest because the following queries benefited from having the relevant pages already in memory While this approach is rather unscientific and you should conduct your own benchmarking on your data before drawing definitive conclusions it should be sufficient for drawing some initial conclusions Here are the results on a few queries queryElasticsearch worst time ms Elasticsearch best time ms Postgres worst time ms Postgres best time ms darth vaderchicken nuggetspancakecuracaomixAs you can see Postgres performs well on some queries such as darth vader or curacao responding within milliseconds However on other queries like pancake or mix it performs significantly worse than Elasticsearch with response times measured in seconds It gets as bad as seconds latency What s going on here The difference lies in how many rows match the query terms Searching for “darth vader in a recipes dataset matches rows But searching “mix in a recipes dataset matches a million rows literally to be precise Since we order by rank Postgres needs to call the ts rank function for each of the million rows The Postgres docs even warn about this Ranking can be expensive since it requires consulting the tsvector of each matching document which can be I O bound and therefore slow Unfortunately it is almost impossible to avoid since practical queries often result in large numbers of matches Indeed the issue is from ranking If 
we re only interested in matching and we order by an indexed column it is fast SELECT title FROM recipes WHERE search websearch to tsquery english mix ORDER BY title ASC LIMIT Time msBut we re working on the assumption that ranking is necessary for a good search experience One idea is to use what I call sampling before computing the ranks take a sample of K rows that match The assumption is that if your query matches so many documents the ranking is likely to be ineffective anyway so it s better to prioritize the response time The SQL to do this looks like this WITH search sample AS SELECT title search FROM recipes WHERE search websearch to tsquery english mix LIMIT SELECT title ts rank search websearch to tsquery english mix rank FROM search sample ORDER BY rank DESC limit Re running the tests with this sample approach gives us closer results queryElasticsearch worst time ms Elasticsearch best time ms Postgres worst time ms Postgres best time ms darth vaderchicken nuggetspancakecuracaomixMuch better Of course we did sacrifice on the relevancy which might or might not be ok in your case Here are some conclusions and more considerations on the topic of performance and scalability on search use cases over smaller datasets lt K rows both systems will perform well but a Postgres only solution will require less resources on medium datasets a few million rows Elasticsearch is already faster however Postgres can perform within a ms latency budged if you use the sampling trick explained above when the number of documents is really large for example logs or other time series Elasticsearch has the additional advantage of horizontal scalability if you need a lot of aggregations or analytics e g display a dashboard full of graphs and the data set is large enough Elasticsearch s columnar store will give it an advantage giving Postgres an extra workload can affect the performance of your main instance The solution is to move the searching to a replica but then you lose some of the consistency guarantees Semantic and hybrid searchBoth part and this blog post focused on keyword searching techniques However in the last few years semantic vector search has taken the world of search by storm so I feel like I need to touch on this aspect as well in comparing the two Semantic search leverages language models to generate embeddings for each document Embeddings are arrays of numbers that represent the text on a number of dimensions Pieces of text that have similar embeddings have a similar meaning In other words semantic search can “search by meaning rather than “by keywords This is quite exciting now because large language models LLMs give us very accurate understanding of meaning It means you don t have to maintain list of synonyms or add different keywords to your documents to match how your users are searching Postgres supports vector search via the pgvector extension while Elasticsearch has it built in via the KNN search You can find benchmarks on ann benchmarks look for pgvector and luceneknn but keep in mind that both implementations are under active development and their performance is being improved While exciting it turns out that semantic search alone doesn t really work great on the typical search experiences that we have today at least not on a majority of datasets If you are curious I recently wrote a comparison between keyword and semantic search for the particular use case of selecting the context for ChatGPT For search use cases like the recipes one in this blog post hybrid search might give 
better results use a combination of keyword and semantic search to improve the ranking Elastic has recently announced their “Elasticsearch Relevance Engine which includes hybrid search In Postgres given that it s all building blocks you can combine the full text search functionality and pgvector I m looking forward to diving deeper into this topic as well but I ll leave that for a follow up blog post ConclusionChoosing between a Postgres only architecture and a Postgres Elasticsearch architecture will depend on your use case and scale For example if you have a table or list in your application on which you support CRUD operations and you want to add full text search functionality to it Postgres will likely work well for you for quite some time On the other hand if you have a large data set search and search relevancy is critical to your application for example in e commerce using a dedicated search engine like Elasticsearch is going to perform better both in latency and relevancy In many cases it might make sense to start with the simpler Postgres only approach but be ready to pivot to the Postgres Elasticsearch architecture when needed If you read this far you might want to give Xata a try It offers both Postgres and Elasticsearch in the same data platform and can also handle the syncing between them with no extra effort If you have any feedback on this blog post or are interested in the follow up blog posts you can follow us on Twitter or join us in Discord 2023-07-31 10:49:03
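The feed stripped punctuation and figures from the article above, including its SQL, so here is a hedged reconstruction of the "sample first, then rank" query it describes, wrapped in a small node-postgres helper. The table and column names (recipes, search, title) and the use of websearch_to_tsquery and ts_rank follow the article; the sample size and result limit are assumptions, since the original numbers were lost in extraction.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the usual PG* environment variables

// Rank only a bounded sample of matching rows, so that very common terms such as
// "mix" or "pancake" do not force ts_rank to be evaluated over millions of matches.
async function searchRecipes(term: string, sampleSize = 10_000, limit = 10) {
  const sql = `
    WITH search_sample AS (
      SELECT title, search
      FROM recipes
      WHERE search @@ websearch_to_tsquery('english', $1)
      LIMIT $2
    )
    SELECT title,
           ts_rank(search, websearch_to_tsquery('english', $1)) AS rank
    FROM search_sample
    ORDER BY rank DESC
    LIMIT $3;
  `;
  const { rows } = await pool.query(sql, [term, sampleSize, limit]);
  return rows as { title: string; rank: number }[];
}
```

As the article notes, capping the ranked set trades some relevancy for latency: when a query matches a very large share of the table, only the sampled rows are scored, which may or may not be acceptable for a given search experience.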
海外TECH DEV Community Part 3. Ports, Adapters, and UI https://dev.to/bespoyasov/part-3-ports-adapters-and-ui-311b Part Ports Adapters and UI️Originally published at bespoyasov me Subscribe to my blog to read posts like this earlier Let s continue the series of posts and experiments about explicit software design Last time we finished designing the application core and created use case functions In this post we will write components for the user interface discuss its interaction with the domain model and look at the difference between different types of data that are displayed in the UI By the way all the source code for this series is available on GitHub Give the repo a star if you like the posts Analyzing UIThe very first thing we will do when developing the UI is to look at what it will consist of and what features it should provide to users Our application a currency converter will consist of one screen on which the converter component itself will be located The converter will store some stateーdata that will affect the components render To understand how to work with this data let s look at what components will be displayed on the screen how information will “flow through the application while working with it UI as Function of StateThe first thing we will pay attention to is the component header and the text field below it The field contains the current value of the base currency and the header contains the value of the quote currency calculated at the current rate We can think of these two components as a “transformation of data from the domain model into a set of components on the screen Domain Model gt UI Components BaseValue gt lt BaseValueInput gt BaseValue amp QuoteValue gt lt CurrencyPair gt amp ExchangeRateThe idea of describing UI as a function of data is not new and lies at the core of various frameworks libraries and patterns This approach helps to separate the data transformations from various side effects related to rendering components and reacting to user input One of the reasons for such separation is that data and its representation on the screen change for different reasons and at different pace and frequency If the code is not separated then changes in one part seep into another and vice versa In that case stopping and limiting the spread of such changes becomes difficult which makes updating the code unreasonably expensive When data and presentation are separated effects are somewhat “isolated somewhere on the edge of the application This makes the code more convenient for understanding debugging and testing However not all data that affects UI rendering is exclusively the domain model and we should also consider and separate in the code other types of state Types of StateIn addition to the model which describes the domain and is represented in our code by a set of types and functions we can also identify other types of data that affect the render of components on the screen For example if you click on the “Refresh Rates button in a currency converter the application will load data from the API server The data loaded from there is part of the server state We have almost no control over it and the UI task is to synchronize with it in order to show up to date information from there Actually instead of “server state I would use the term “remote state because the source of the data can be not only a server but also local storage file system or anything else In addition to that after the button is pressed while the data is not yet loaded from the server the button will be 
disabled and the converter will show a loading indicator to provide feedback to the user The flag responsible for the loading indicator can be considered as part of the UI state As a rule UI state is data that describes the UI directly For example it includes things like “Is the button disabled “How is the currency list sorted “Which screen is currently open and responds to user actions even if the model does not change By the way sometimes non permanent states such as “Loading “HasError “Retrying etc are placed in a separate category called “meta state and everything related to routing is placed in the “URL state This can be useful if the application is complex and different data is involved in different processes In our application there is no special need for this so we will not describe everything in such detail Data FlowIn complex interfaces components may depend on several types of state at once For example if the quote currency selector needs to be sorted this component will be a function of both the model and UI state simultaneously Domain Model gt UI State gt UI Component CurrentQuoteCode gt SortDirection gt lt CurrencySelector amp CurrencyList selected QuoteCurrencyCode options CurrencyList sort SortDirection gt In code we will strive to emphasize these dependencies and separate different types of state We won t always physically separate them on the file level but at least we ll try to do it conceptuallyーin the types and values of variables passed to components Different types of state change for different reasons and at different frequencies We will often put volatile code closer to the “edges of the application than more stable code This way we will try to eliminate the influence of frequent changes on the core of the application limit the spread of changes across the codebase show that the data in the model is primary and indicate the direction in which data “flows through the app Ports and AdaptersEarlier we represented the application as a “box with levers and slots through which it communicates with the outside world User input and the information rendered on the screen can be considered such “communication When a user clicks a button or changes the value in a text field the UI sends a signal a k a command or action to the application core to initiate some use case In our application this signal will be a function that implements an input port Input port to the app type RefreshRates gt Promise lt void gt Function that implements the RefreshRates type const refresh RefreshRates async gt When the button is clicked the UI will invoke the refresh function which implements the RefreshRates input port To handle a click event in React we can write a component like this function RefreshRates return lt button type button onClick refresh gt Refresh Rates lt button gt This component transforms a signal from the external world click on a button into a signal understandable by the application core a function that implements the input port In other words we can call this component an adapter between the application and the UI Driving Adapters an UIComponents handle user input and display information on the screen They “translate signals from the application language to a language that is understandable to the user and browser API To make this a little more obvious let s modify the component slightly so that it prevents default behavior when the button is clicked function RefreshRates const clickHandler useCallback e gt Interface of browser APIs e preventDefault Application core 
interface function refresh implements the input port the component relies on its type and expects the promised behavior refresh User interface the element looks like a button so it can be clicked it has a label that explains what will happen after clicking it return lt button type button onClick clickHandler gt Refresh Rates lt button gt The RefreshRates component effectively translates the user s intent into the language of the domain model UserIntent gt ButtonClickEvent gt RefreshRatesThis conversion makes the component similar to an adapter because it becomes a function that makes two interfaces compatible with each other By the way such adapters are usually called primary or driving adapters because they send signals to the application indicating what needs to be done In addition to them there are also driven adapters but we will talk more about them later “Adapters can be not only components but also any function that can handle UI events window addEventListener focus gt refresh window onresize debounce refresh setTimeout gt refresh FocusEvent gt RefreshRates ResizeEvent gt Debounced gt RefreshRates TimerFiredEvent gt RefreshRatesThe main value of these “adapters is that they scope the responsibility of the UI code It cannot directly interfere with the core functionality of the application because the only way to interact with it is by sending a signal through the input port Such separation helps to limit the spread of changes Since all input ports are clearly defined it doesn t matter to the core what will happen “on the other side The presentation on the screen can be changed however we want but we will always be sure that communication with the core will follow predetermined rules Component ImplementationLet s now try to write UI components based on the application ports We ll start with something simple and write a text field responsible for updating the value of the base currency BaseValueInputFirst let s create a component markup ui BaseValueInputexport function BaseValueInput return lt label gt lt span gt Value in RPC Republic Credits lt span gt lt input type number min step value gt lt label gt Now let s handle the user input and call the function that implements the UpdateBaseValue input port ui BaseValueInput For now just a stub that implements the input port type const updateBaseValue UpdateBaseValue gt export function BaseValueInput “Adapter component that translates the signal from the field the change event into a signal understandable by the application core call to the updateBaseValue port with the necessary parameters const onChange useCallback e ChangeEvent lt HTMLInputElement gt gt updateBaseValue e currentTarget valueAsNumber return lt label gt lt span gt Value in RPC Republic Credits lt span gt lt input type number min step value onChange onChange gt lt label gt Since the component depends on the type of the port rather than the specific function we can replace the stub with a function from the props This will help simplify unit tests for the component type BaseValueInputProps updateBaseValue UpdateBaseValue export function BaseValueInput updateBaseValue BaseValueInputProps return By the way it is not necessary to pass updateBaseValue as a prop It is acceptable to simply reference the specific function and in many cases this will even be preferable to abstracting through props But since we are trying to follow all the possible guidelines from various books we will make the components completely “decoupled from the application core Yet again it s not necessary In 
the component tests we can pass a mock in the props that will check that when entering text into the input field we are indeed calling this function with the correct parameters describe when entered a value in the field gt it triggers the base value update handler gt const updateBaseValue vi fn render lt BaseValueInput updateBaseValue updateBaseValue gt Find the field input the string in it const field screen getByLabelText Value in RPC act gt fireEvent change field target value ↑Better to use userEvent though Check that the input port was called with the number expect updateBaseValue toHaveBeenCalledWith The task of the component is to extract the required data from the field in the form of a number and pass it to the input port function Note that in the test we are only checking the code that is before the port Everything that happens after calling updateBaseValue is already not the responsibility of the component Its job is to call a certain function with the right parameters and then the application takes responsibility for executing it In an integration test this would be wrong we would need to test the whole use case and ensure that the user sees the correct updated result on the screen We will also write integration tests later when we start working on application composition For now let s focus on unit tests Whether to write unit tests for components is a controversial topic Some argue that components should be tested mainly with integration tests and we shouldn t to write unit tests for them at all I won t give any recommendations or advice on this topic because I don t know if there is a “right way that would work in all cases I can only advise you to weigh the costs and benefits of both options Trivial TestsSometimes we may find that a component is doing a trivial task such as just calling a function in response to a user action without any additional actions For example const SampleComponent someInputPort gt return lt button type button onClick someInputPort gt Click lt button gt The test of such a component will come down to checking that the passed function was called on click describe when clicked gt it triggers the input port function gt Whether to write tests for such components depends on preferences and project policy Here I agree with Mark Seemann who writes in the book “Code that Fits in Your Head that such tests are not particularly useful If the complexity of the function is equal to then tests for such functions can be skipped to avoid cluttering the code It s a different story if in the tests we are checking various conditions for rendering the component If depending on the props the component renders different text or is in different states such as loading error etc it may be useful to cover it with tests as well However the complexity of such components will be higher than so the general rule still applies BaseValueInput s ValueIn addition to reacting to user input the base currency field should also display its value Let s add a new input port that will provide this value core ports inputtype SelectBaseValue gt BaseValue Now we can update the component dependencies specifying this port as well ui BaseValueInputtype BaseValueInputProps updateBaseValue UpdateBaseValue selectBaseValue SelectBaseValue The implementation of such a port can be any function including a hook so we can change the name in the dependencies to useBaseValue ui BaseValueInputtype BaseValueInputProps updateBaseValue UpdateBaseValue useBaseValue SelectBaseValue After this we can use this hook 
to get the necessary value and display it in the field export function BaseValueInput updateBaseValue useBaseValue BaseValueInputProps Get the value via the hook that implements the input port const value useBaseValue return lt label gt lt span gt Value in RPC Republic Credits lt span gt Render the value in the text field lt input type number min step value value onChange onChange gt lt label gt Usually the value and the function to update it are put in the same hook to access them like this const value update useBaseValue This is a more canonical and conventional way of working with hooks in React We didn t do this because in the future we will need more granular access to data and updating it but in general we could have gathered input ports into such tuples By the way to test the data selector useBaseValue we will check the formatted value inside the field ui BaseValueInput testconst updateBaseValue vi fn const useBaseValue gt const dependencies updateBaseValue useBaseValue it renders the value from the specified selector gt render lt BaseValueInput dependencies gt const field screen getByLabelText lt HTMLInputElement gt Value in RPC expect field value toEqual Such a test can also be considered trivial and not created separately but checked altogether with an integration test However if we still decide to write unit tests it s important not to intrude into theresponsibility of other modules but to test only what this component does Dependencies as PropsYou may have noticed that we are currently passing the component dependencies as props type BaseValueInputProps updateBaseValue UpdateBaseValue useBaseValue SelectBaseValue This approach is quite common but it is kinda controversial and may raise questions However at this stage of development we have several reasons for doing so We don t have ready infrastructure API store from which to import the useBaseValue hook and use it directly Therefore we rely on the interface of this hook as a guarantee that this behavior will be provided by “someone sometime later The coupling between the UI and the core of the application is reduced due to the buffer zone between them so designing the UI and business logic can be done in parallel and independently This is not always necessary but we follow this recommendation as part of our “by the book coding approach The composition of the UI and the rest of the application becomes more explicit as if specific implementations of dependencies are not passed the application will not be built This does not mean that we will have to use these props in the future In one of the upcoming posts we will note the moment when we are ready to remove such “explicit composition and import hooks directly into the components In the source code for clarity I will leave the composition explicit in all examples so that the boundary between different parts of the application is better visible Remember that the code is “deliberately clean and writing it exactly like that is not necessary Presentational Components and ContainersIn addition to the components that provide a connection to the application core we can also distinguish components that are solely responsible for rendering data on the screenーthe so called presentational components They do not contain any business logic are unaware of the application and depend only on their props and their scope is limited to the UI layer In our case such a presentational component might be the Input component It s a wrapper for the standard text field with some preset styles import type 
InputHTMLAttributes from react import styles from Input module css type InputProps Omit lt InputHTMLAttributes lt HTMLInputElement gt className gt export function Input props InputProps return lt input props className styles input gt The task of this component is to correctly and nicely render a text field It has no logic does not use hooks does not access any data or functionality and all its behavior depends solely on its props We could use such a component in BaseValueInput like this import Input from shared ui Input export function BaseValueInput const value useBaseValue const onChange useCallback return lt label gt lt span gt Value in RPC Republic Credits lt span gt lt Input type number min step value value onChange onChange gt lt label gt Unlike the presentational Input component the BaseValueInput component knows about the application and can send signals to it and read information from it Some time ago such components were called containers With hooks the concept of containers has somewhat disappeared although after the emergence of server components people have started talking about them again In any case the term is quite old and seems useful for understanding the conceptual difference between different “kinds of components Roughly speaking the job of a presentational component is to look nice while the job of a container is to provide a connection to the application That s why BaseValueInput knows how to invoke “use cases and how to adapt application data and signals to the presentational Input component Presentational components become reusable because they don t know anything about the project context and the domain and can be used independently of the application functionality Asynchronous WorkflowsLet s go back to the stock quotes update button We remember that clicking on it triggers an asynchronous process core ports inputtype RefreshRates gt Promise lt void gt We can pass a function that implements the RefreshRates input port as a click handler for the button ui RefreshRatestype RefreshRatesProps refreshRates RefreshRates export function RefreshRates refreshRates RefreshRatesProps return lt Button type button onClick refreshRates gt Refresh Rates lt Button gt But that won t be enough For asynchronous processes we would like to show state indicators in the UI so that users receive feedback from the interface We can achieve this by combining the domain and UI states in the component “dependencies which will be responsible for the status of the operation Let s update the type RefreshRatesProps and indicate that the component depends not only on the input port of the application but also on some other state type RefreshAsync The component still gets the function that implements the input port execute RefreshRates But also it gets some new data that is associated with the state of that operation status is idle is pending type RefreshRatesDeps And we assume that the component gets everything from a hook useRefreshRates gt RefreshAsync In our case the status field is a part of the UI state because it only affects the user interface However if the operation status somehow affected other business logic processes we would probably have to include it in the model and describe various data states with regard to the status Inside the component we can rely on the guarantees from useRefreshRates that when the status changes the associated data will also change export function RefreshRates useRefreshRates RefreshRatesDeps const execute status useRefreshRates const pending status is 
pending return lt Button type button onClick execute disabled pending gt Refresh Rates lt Button gt We can loosely call such guarantees of status changes from useRefreshRates a form of a contract We may not write pre and post condition checks but we have them in mind while describing the RefreshAsync type Now the component can be tested by passing a stub as a hook dependency which will return a specific UI state const execute vi fn const idle Status is idle const pending Status is pending describe when in idle state gt it renders an enabled button gt const useRefreshRates gt status idle execute render lt RefreshRates useRefreshRates useRefreshRates gt const button screen getByRole lt HTMLButtonElement gt button expect button disabled toEqual false describe when in pending state gt it renders a disabled button gt const useRefreshRates gt status pending execute render lt RefreshRates useRefreshRates useRefreshRates gt const button screen getByRole lt HTMLButtonElement gt button expect button disabled toEqual true describe when the button is clicked gt it triggers the refresh rates action gt const useRefreshRates gt status idle execute render lt RefreshRates useRefreshRates useRefreshRates gt const button screen getByRole lt HTMLButtonElement gt button act gt fireEvent click button expect execute toHaveBeenCalledOnce And we will write the actual implementation of the useRefreshRates hook in one of the next posts Next TimeIn this post we described the user interface and discussed its interaction with the application core Next time we will create the application infrastructure write API requests and set up a runtime data store for the application Sources and ReferencesLinks to books articles and other materials I mentioned in this post Source code for the current step on GitHub UI as Function of StateDescribing the UIModel view viewmodel Wikipedia“The closest match to RSC You still use Redux Patterns and PrinciplesActions in ReduxAdapter PatternAnti Corruption LayerClient Side Architecture LayersCommand PatternContainers and Presentational ComponentsDependency Injection in ReactPrimary AdaptersSingle responsibility principle Wikipedia Testing UIDesign by ContractFunctional design is intrinsically testableWrite tests Not too many Mostly integration Other TopicsCode That Fits in Your Head by Mark SeemannCyclomatic complexity WikipediaMaterial UIMore functional pits of successUnrepresentable Invalid States 2023-07-31 10:37:46
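The inline snippets in the post above also lost their formatting in the feed, so here is a hedged reconstruction of its central pattern: a React component acting as a driving adapter that talks to the application core only through an input-port type. The names RefreshRates and useRefreshRates follow the article; the Status union and the component name RefreshRatesButton are assumptions filled in where the original string literals were stripped.

```tsx
import { useCallback, type MouseEvent } from "react";

// Input port: the only way the UI is allowed to trigger the "refresh rates" use case.
type RefreshRates = () => Promise<void>;

// UI ("meta") state accompanying the asynchronous operation.
type Status = "idle" | "pending";
type RefreshAsync = { execute: RefreshRates; status: Status };

type RefreshRatesProps = { useRefreshRates: () => RefreshAsync };

// Driving adapter: translates a browser click event into a call on the input port.
export function RefreshRatesButton({ useRefreshRates }: RefreshRatesProps) {
  const { execute, status } = useRefreshRates();
  const pending = status === "pending";

  const onClick = useCallback(
    (e: MouseEvent<HTMLButtonElement>) => {
      e.preventDefault(); // browser concern stays in the adapter
      void execute();     // application concern goes through the port
    },
    [execute],
  );

  return (
    <button type="button" onClick={onClick} disabled={pending}>
      Refresh Rates
    </button>
  );
}
```

In a unit test, a stub hook returning { execute: vi.fn(), status: "pending" } is enough to assert that the button is disabled while the operation is in flight, without touching anything behind the port.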
海外TECH DEV Community Taming Badly Typed External Libraries - How Zod Boosts Type Safety https://dev.to/brainhubeu/taming-badly-typed-external-libraries-how-zod-boosts-type-safety-4jog Taming Badly Typed External Libraries How Zod Boosts Type Safety TL DRIn our projects we use AdminJS an external library that provides a GUI for managing database records It s a great tool for rapidly creating CRUD interfaces for our clients However we ran into some issues with poorly typed code inside the library that made it difficult to integrate with our own custom logic This is where Zod a validation library came to the rescue Zod enabled us to create robust type definitions and validate data against those definitions This helped us to avoid runtime errors and catch issues early in development In this article we ll take a look at how Zod helped us to deal with the poorly typed code in AdminJS and how it improved the overall quality of our code Challenges with Poorly Typed External LibraryWhen using external libraries it s not uncommon to run into issues with poorly typed code This was the case with AdminJS a library that we used in our projects to provide a GUI for managing database records While AdminJS is a great tool for rapidly creating CRUD interfaces for our clients it also presented some challenges One of these challenges was creating schemas for the objects that we received from AdminJS The library provides a Record lt string any gt type which is not particularly useful when it comes to type safety We needed to define more robust type definitions for our data to avoid runtime errors This is where Zod a validation library came into play We used Zod to create schemas for the objects that we received from AdminJS which enabled us to catch errors early in development and avoid issues in production The code snippet above demonstrates how we defined a schema for a dashboard object that we received from AdminJS We used Zod s object method to create an object schema with three properties title project and type We also used Zod s nativeEnum method to create an enumeration schema for the type property which accepts two specific string values By creating these schemas with Zod we were able to define the exact shape of the data we expected to receive and catch any errors if the data did not match that shape This helped us to ensure the quality of our code and avoid any issues caused by the poorly typed code in AdminJS import z from zod enum DashboardProviderType GRAFANA GRAFANA GOOGLE STUDIO GOOGLE STUDIO const DashboardProviderTypeEnum z nativeEnum DashboardProviderType const DashboardPayloadSchema z object title z string project z string type DashboardProviderTypeEnum Creating SubSchemas to Handle Specific DataIn the previous section we defined a schema for the entire payload that we receive from AdminJS However in some cases we might only be interested in a specific part of the payload or we might not need all the fields from the payload In such cases it is useful to create sub schemas which define a subset of the original schema ZOD provides several methods to create sub schemas such as pick omit and partial In the code snippet provided we are using these methods to create three different sub schemas from the original DashboardPayloadSchema DashboardTypeSchema This schema picks the type field from the DashboardPayloadSchema and creates a new schema with only that field DashboardPartialSchema This schema creates a partial schema from the DashboardPayloadSchema which means that all the fields are optional 
DashboardWithoutTypeSchema This schema omits the type field from the DashboardPayloadSchema and creates a new schema with only the title and project fields Creating sub schemas can be useful when we want to validate only a part of the object or when we want to reuse some of the fields in a different schema It can also help in simplifying the validation logic and making it more readable const DashboardTypeSchema DashboardPayloadSchema pick type true type DashboardProviderType const DashboardPartialSchema DashboardPayloadSchema partial type DashboardProviderType undefined title string undefined project string undefined const DashboardWithoutTypeSchema DashboardPayloadSchema omit type true title string project string Refining Schemas with Custom ValidationOne of the key features of Zod is the ability to refine schemas with custom validation logic The refine method can be used to add validation rules to a schema allowing you to ensure that data meets specific requirements before it is processed In the following code snippet we use refine to ensure that the title field in our DashboardPayloadSchema is no longer than characters If the title is too long Zod will throw an error with a custom message const DashboardTitle z string refine title gt title length lt message Title cannot be longer than chars const DashboardPayloadSchema z object title DashboardTitle project DashbaordProject type DashboardProviderTypeEnum The refine method can also accept asynchronous functions as shown in the following example Here we use refine to ensure that the project field in our schema exists in a database If the project does not exist Zod will throw an error with a custom message const DashboardProject z string refine async projectName gt const project await getProject projectName return project message Project must exist in database const DashboardPayloadSchema z object title DashboardTitle project DashbaordProject type DashboardProviderTypeEnum Modifying Validated Values with transform While validation ensures that the received data conforms to the schema sometimes you might need to transform the data to a different format or structure For example you might need to extract a certain substring from a string or make a database call to fetch additional data based on a received value Zod provides the transform method that allows you to modify the validated values before returning them The transform method accepts a synchronous or asynchronous function that takes the validated value and returns the transformed value In the following code snippet we define a DashboardProject schema that refines the received project string to ensure that it starts with the PR prefix Then we use the transform method to remove the prefix before returning the value const DashboardProject z string refine project gt project startsWith PR message Project must start with PR prefix transform project gt project replace PR DashboardProject parse PR MT gt MT You can also use an asynchronous function with transform to perform more complex operations such as making a database call const DashboardProject z string refine project gt project startsWith PR message Project must start with PR prefix transform async project gt const projectObject await getProjectObjectFromDB project return projectObject In the above example the getProjectObjectFromDB function is an asynchronous function that fetches the project object from the database based on the received project string The transform method applies this function to the validated value and returns the 
result Inferring Types and Creating Type Guardstype Dashboard z infer lt typeof DashboardPayloadSchema gt equivalent to type Dashboard title string project string type DashboardProviderType export const isDashboard payload unknown payload is Dashboard gt return DashboardPayloadSchema safeParse payload success In the above code snippet the infer method is used to automatically infer the type of the schema defined by DashboardPayloadSchema The inferred type is then assigned to a type alias called Dashboard This allows us to use the inferred type throughout our codebase without having to manually define it Next the code exports a type guard function called isDashboard A type guard is a function that checks if a value is of a certain type at runtime In this case the isDashboard function checks if the provided payload conforms to the Dashboard type by attempting to parse the payload using the DashboardPayloadSchema If the parsing succeeds the function returns true indicating that the payload is a valid Dashboard If the parsing fails the function returns false Using type guards like isDashboard can help catch type errors at runtime and make our code more robust especially when working with data from external sources like APIs or databases where the shape of the data may not be known in advance SummaryIn this article we explored how Zod a validation library came to the rescue when integrating AdminJS an external library with poorly typed code AdminJS provides a Record lt string any gt type leading to type safety issues Zod helped us create robust type definitions and validate data against those definitions catching errors early in development We defined a schema for a dashboard object using Zod s object and nativeEnum methods ensuring the expected data shape We also created sub schemas with pick omit and partial for specific parts of the payload Custom validations were added with the refine method to enforce requirements like string length and database existence Zod s transform method allowed us to modify validated values and we learned how to infer types using infer creating type guards to catch type errors at runtime Overall Zod improved code quality and reduced runtime errors making our integration with AdminJS more efficient and reliable 2023-07-31 10:07:39
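The Zod snippets in the article above reached the feed with punctuation and numbers stripped, so here is a hedged reconstruction of the main pattern it walks through: a payload schema, sub-schemas via pick/omit/partial, a custom refine rule, and an inferred type with a runtime type guard. The identifiers follow the article; the 100-character title limit and the exact enum spellings are assumptions where the original figures were lost.

```typescript
import { z } from "zod";

enum DashboardProviderType {
  GRAFANA = "GRAFANA",
  GOOGLE_STUDIO = "GOOGLE_STUDIO",
}

const DashboardProviderTypeEnum = z.nativeEnum(DashboardProviderType);

// Custom validation via refine; the real length limit was lost in the feed, 100 is assumed.
const DashboardTitle = z.string().refine((title) => title.length <= 100, {
  message: "Title cannot be longer than 100 chars",
});

export const DashboardPayloadSchema = z.object({
  title: DashboardTitle,
  project: z.string(),
  type: DashboardProviderTypeEnum,
});

// Sub-schemas derived from the payload schema.
export const DashboardTypeSchema = DashboardPayloadSchema.pick({ type: true });
export const DashboardPartialSchema = DashboardPayloadSchema.partial();
export const DashboardWithoutTypeSchema = DashboardPayloadSchema.omit({ type: true });

// Static type inferred from the schema, plus a runtime type guard built on safeParse.
export type Dashboard = z.infer<typeof DashboardPayloadSchema>;

export const isDashboard = (payload: unknown): payload is Dashboard =>
  DashboardPayloadSchema.safeParse(payload).success;
```

The article's transform example works the same way: chaining .transform() after .refine() returns the modified value from parse(), and an asynchronous refine or transform (such as a database lookup) requires parseAsync() or safeParseAsync() instead of the synchronous variants.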
Apple AppleInsider - Frontpage News Apple Watch emergency calls false positives slammed by UK police chiefs https://appleinsider.com/articles/23/07/31/apple-watch-emergency-calls-false-positives-slammed-by-uk-police-chiefs?utm_medium=rss Apple Watch emergency calls false positives slammed by UK police chiefs. The Apple Watch has been blamed for a rise in calls to emergency services in the United Kingdom, police chiefs in the country have warned, caused by the various automatic calling functions of the wearable device. [Image credit: Dom J, Pexels] While features like Crash Detection and Fall Detection have been the source of stories where people have been saved from life-threatening situations, they can also be a burden on emergency services. On Monday it was the UK's turn to complain about the automatic dialing function. 2023-07-31 10:59:50
Apple AppleInsider - Frontpage News Oregon driver saved by iPhone 14 crash detection https://appleinsider.com/articles/23/07/31/oregon-driver-saved-by-iphone-14-crash-detection?utm_medium=rss Oregon driver saved by iPhone 14 crash detection. A woman who drove off a road in Vernonia, Oregon, and was knocked unconscious was rescued by emergency services who were alerted by her iPhone. Apple's Crash Detection feature has helped before in cases where cars were driven off a cliff, but Vernonia emergency services say this accident was the first time they'd received such an automatic alert. According to Fox Oregon, a woman identified only as Ashley was driving on a Friday evening in July when the accident happened. 2023-07-31 10:31:43
Apple AppleInsider - Frontpage News What analysts expect from Apple's Q3 2023 earnings report https://appleinsider.com/articles/23/07/30/what-analysts-expect-from-apples-q3-2023-earnings-report?utm_medium=rss What analysts expect from Apple's Q3 2023 earnings report. Apple's third fiscal quarter results will be issued in August, accompanied by the usual call with analysts. Here's what to expect from the results, and what Wall Street thinks of the iPhone maker. Apple confirmed in July that its quarterly results will be released in August. As is typical for the event, it will be followed by a call hosted by CEO Tim Cook and CFO Luca Maestri, with the results released shortly before the call begins in the afternoon, Eastern time. Some Apple guidance follows. 2023-07-31 10:47:27
Medical CBnews (medical and nursing care) Provide "home medical management guidance" for children and adults with disabilities who have difficulty attending clinics: JMA calls for a new fee category at the fee-revision hearing https://www.cbnews.jp/news/entry/20230731191820 Japan Medical Association 2023-07-31 19:45:00
Finance News - 保険市場TIMES Daido Life's special exhibition "The origins of Daido Life: Kajimaya and Asako Hirooka" passes 110,000 visitors https://www.hokende.com/news/blog/entry/2023/07/31/200000 2023-07-31 20:00:00
News BBC News - Home British husband who killed dying wife in Cyprus released https://www.bbc.co.uk/news/uk-england-tyne-66322478?at_medium=RSS&at_campaign=KARANGA janice 2023-07-31 10:47:12
News BBC News - Home Etsy accused of 'destroying' sellers by withholding money https://www.bbc.co.uk/news/business-66201042?at_medium=RSS&at_campaign=KARANGA takings 2023-07-31 10:56:20
News BBC News - Home Wagner pauses fighter recruitment and focuses on Africa and Belarus - Prigozhin https://www.bbc.co.uk/news/world-europe-66358269?at_medium=RSS&at_campaign=KARANGA group 2023-07-31 10:18:11
News BBC News - Home Royal Marines veteran Fred Ames celebrates 100th birthday https://www.bbc.co.uk/news/uk-england-devon-66359373?at_medium=RSS&at_campaign=KARANGA italy 2023-07-31 10:33:17
News BBC News - Home The Ashes: Chris Woakes removes openers Warner & Khawaja https://www.bbc.co.uk/sport/av/cricket/66360850?at_medium=RSS&at_campaign=KARANGA The Ashes: Chris Woakes removes openers Warner & Khawaja. England bowler Chris Woakes removes Australia openers David Warner and Usman Khawaja within the opening minutes of play on day five of the fifth Ashes Test at The Oval. 2023-07-31 10:36:47
Business Fukeiki.com Panasonic Liquid Crystal Display heads for special liquidation with 580 billion yen in liabilities - fukeiki.com https://www.fukeiki.com/2023/07/panasonic-liquid-crystal-display.html liquid crystal display 2023-07-31 10:45:07
News Newsweek Chilling footage of a highly venomous cobra gulping down a thick python, filmed in India https://www.newsweekjapan.jp/stories/world/2023/07/post-102313.php According to the New York Post, the Burmese python that was eaten appears not to have been an especially large individual, but even so it was by no means a small opponent for the Indian cobra that swallowed it. 2023-07-31 19:50:00
IT Weekly ASCII A giveaway campaign offering the ROG Phone 7 series is being held in "Nobunaga's Ambition: Hadou"! https://weekly.ascii.jp/elem/000/004/147/4147611/ pcsteam 2023-07-31 19:25:00
