IT |
気になる、記になる… |
LINE ends support for watchOS 7 |
https://taisy0.com/2023/03/31/170186.html
|
watchos |
2023-03-30 15:14:49 |
IT |
InfoQ |
BBC’s Enablement Team Principles Focus On Openness, Collaboration, and Respect |
https://www.infoq.com/news/2023/03/bbc-enablement-principles/?utm_campaign=infoq_content&utm_source=infoq&utm_medium=feed&utm_term=global
|
BBC's Enablement Team Principles Focus On Openness, Collaboration, and Respect. At QCon London, BBC shared the enablement principles paving the road for their teams towards improved development and release processes. Egan shared techniques, challenges, and learnings from her team's journey, with the major takeaway being that the principles have almost nothing to do with the tools themselves. By Olimpiu Pop |
2023-03-30 15:15:00 |
AWS |
AWS Messaging and Targeting Blog |
How to create a WhatsApp custom channel with Amazon Pinpoint |
https://aws.amazon.com/blogs/messaging-and-targeting/whatsapp-with-amazon-pinpoint/
|
How to create a WhatsApp custom channel with Amazon Pinpoint. How to add WhatsApp as an Amazon Pinpoint Custom Channel: WhatsApp now reports over a billion users worldwide, making it a prime place for businesses to communicate with their customers. In addition to native channels like SMS, push notifications, and email, Amazon Pinpoint's custom channels enable you to extend the capabilities of Amazon … |
2023-03-30 15:14:37 |
Ruby |
New posts tagged Ruby - Qiita |
[Ruby on Rails] How to use number_to_currency |
https://qiita.com/m6mmsf/items/5eb1f352a5a00b608dac
|
number_to_currency cart
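For context, Rails' number_to_currency view helper (from ActionView::Helpers::NumberHelper) formats a number as a currency string; a minimal illustration, where the cart object is hypothetical:
number_to_currency(1234567.89)        # => "$1,234,567.89"
number_to_currency(cart.total_price)  # hypothetical cart attribute
The helper also accepts options such as :unit and :precision. |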
2023-03-31 00:37:34 |
Ruby |
New posts tagged Rails - Qiita |
[Ruby on Rails] How to use number_to_currency |
https://qiita.com/m6mmsf/items/5eb1f352a5a00b608dac
|
number_to_currency cart |
2023-03-31 00:37:34 |
Overseas TECH |
DEV Community |
Event-driven Kubernetes testing with Testkube and Tracetest |
https://dev.to/kubeshop/event-driven-kubernetes-testing-with-testkube-and-tracetest-471f
|
Event-driven Kubernetes testing with Testkube and Tracetest. We are pleased to announce that Tracetest now works with Testkube, the Kubernetes-native testing framework. By using a Testkube executor to build an integration with Tracetest, you can now run event-driven, trace-based tests in your Kubernetes cluster. Note: check out this hands-on demo example of how Tracetest works with Testkube. Configuring CI pipelines for running trace-based tests in Kubernetes is tedious work, especially if you need to trigger tests based on Kubernetes events. Look no further: once you're done reading, you'll learn how to set up event-driven, trace-based testing in Kubernetes.

What is Testkube? Testkube is an open-source project, part of the CNCF landscape, and a testing framework designed for testers and developers who use Kubernetes. It integrates test orchestration and execution into Kubernetes and your CI/CD/GitOps pipeline. You can automate the execution of your tests regardless of the testing framework by using Testkube's executors or creating your own. By adopting Kubernetes constructs and GitOps, you can perform K8s-native testing: you can use Kubernetes CRDs to manage and store tests, allowing you to validate your applications by executing tests from inside your cluster. Use any CI/CD framework for any testing scenario; by decoupling them from your CI/CD, you will spend less time integrating different testing tools. Analyze all your test results in a centralized place: after running your tests, you can view the results in an intuitive UI, regardless of which testing framework you used. Debug test failures with ease: with Testkube it's easy to see all the results, logs, and artifacts of your tests in one place. In addition, Testkube allows you to easily store and download files generated by your tests from your Kubernetes cluster; all files generated from your tests are saved.

What is Tracetest? Tracetest is an open-source project, part of the CNCF landscape. It allows you to quickly build integration and end-to-end tests powered by your distributed traces. Tracetest uses your existing distributed traces to power trace-based testing, with assertions against your trace data at every point of the request transaction. You only need to point Tracetest to your existing trace data source, or send traces to Tracetest directly. Tracetest makes it possible to: define tests and assertions against every single microservice that a trace goes through; work with your existing distributed tracing solution, allowing you to build tests based on your already-instrumented system; define multiple transaction triggers, such as a GET against an API endpoint, a gRPC request, etc.; define assertions against both the response and trace data, ensuring that both your response and the underlying processes worked correctly, quickly, and without errors; and save and run the tests manually or via CI build jobs with the Tracetest CLI.

Tracetest now works with Testkube. Tracetest now works with Testkube, allowing you to unlock Testkube's capacity with Tracetest and leverage OpenTelemetry instrumentation in your services to run end-to-end and integration testing. It works thanks to the Testkube Tracetest Executor, a test executor to run Tracetest tests with Testkube.

Why is the Tracetest integration with Testkube important? By integrating with Testkube, you can now add Tracetest to the native CI/CD/GitOps pipeline in your Kubernetes cluster. It allows you to run scheduled test runs on set intervals, as well as asynchronous tests triggered by Kubernetes events, all while following the trace-based testing principle and enabling full, in-depth assertions against trace data, not just the response. Combining the ability to create tests with Tracetest with a Kubernetes-native test runner like Testkube enables you to use the native events from the environment of your Kubernetes cluster as test triggers in your CI/CD/GitOps pipelines.
Why run trace-based tests? When running integration tests, you have no way of knowing precisely at which point an HTTP transaction goes wrong in a network of microservices. With tracing enabled, Tracetest can run tests with assertions against existing trace data throughout every service in the entire transaction. You can utilize these tests as part of your CI/CD process to ensure system functionality and to catch regressions.

Try Tracetest with Testkube. To run trace-based tests with Tracetest and Testkube, make sure you have these three things installed before starting: a running Kubernetes cluster (either locally or in the cloud), kubectl, and Helm.

Install Testkube. Testkube is open source and easy to install. Start by installing the Testkube CLI by following the instructions for your operating system. MacOS example: brew install testkube. Install Testkube in your Kubernetes cluster via the CLI; from here, follow the official documentation to install the Testkube cluster: testkube init. Confirm that Testkube is running: kubectl get all -n testkube. By default, Testkube is installed in the testkube namespace. To explore the Testkube dashboard, run the command: testkube dashboard.

Install Tracetest. Tracetest is open source and easy to install. Start by installing the Tracetest CLI by following the instructions for your operating system. MacOS example: brew install kubeshop/tracetest/tracetest. Note: check out the download page for more info. From here, follow the official documentation to install the Tracetest server: tracetest server install. Output: "How do you want to run TraceTest? (type to search) Using Docker Compose > Using Kubernetes". Select "Using Kubernetes". Output: "Do you have OpenTelemetry-based tracing already set up, or would you like us to install a demo tracing environment and app? (type to search) I have a tracing environment already. Just install Tracetest. > Just learning tracing. Install Tracetest, OpenTelemetry Collector, and the sample app." Select "Just learning tracing. Install Tracetest, OpenTelemetry Collector, and the sample app." Confirm that Tracetest is running: kubectl get all -n tracetest. By default, Tracetest is installed in the tracetest namespace. To explore the Tracetest Web UI, port-forward the tracetest service: kubectl --kubeconfig $HOME/.kube/config --context kind-kind --namespace tracetest port-forward svc/tracetest … Once the server is installed, open the Tracetest Web UI in the browser and follow the instructions for connecting the OpenTelemetry Collector with Tracetest, if it has not been connected already. If you followed the steps above, the Tracetest server will have been automatically provisioned to connect to the OpenTelemetry Collector instance running in the tracetest namespace. If you look closely, you'll see that the OpenTelemetry sample configuration from the settings page matches the collector config YAML generated by the Tracetest CLI when provisioning the Tracetest server: an otlp receiver (grpc and http protocols), a batch processor with a short timeout, an otlp exporter pointing at the tracetest service endpoint with tls.insecure set to true, and a traces pipeline wiring the otlp receiver, batch processor, and otlp exporter together.
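Read back into YAML, a collector configuration along those lines would look roughly like this sketch; the batch timeout and the Tracetest OTLP port are placeholder values, not numbers taken from the article:
receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
    timeout: 100ms            # placeholder timeout
exporters:
  otlp:
    endpoint: tracetest:4317  # placeholder port for the Tracetest ingest endpoint
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
The important part is the traces pipeline: whatever your services export over OTLP is batched and forwarded to the Tracetest server, which is what makes the trace data available for assertions.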
Create a test in Tracetest. Start by clicking Create > Create New Test > HTTP Request > Next > Choose Example dropdown > Pokeshop - List (generates a sample test from the Tracetest demo) > Next > the URL is prefilled > Create & Run. This will trigger the test and display a distributed trace in the Trace tab to run assertions against. Proceed to add a test spec to assert that all database queries return within a set number of milliseconds. Click the Test tab and proceed to click the Add Test Spec button. In the span selector, make sure to add this selector: span[tracetest.span.type="database"]. In the assertion field, add: attr:tracetest.span.duration < the chosen threshold in ms. Save the test spec and publish the test. The database spans that return more slowly than the threshold are labeled in red. This is an example of a trace-based test that asserts against every single part of an HTTP transaction, including all interactions with the database. However, Tracetest cannot run this test as part of your CI/CD without integrating it with another tool. Let's introduce how Testkube makes it possible.

Deploy the Tracetest Testkube Executor. Note: as of the latest Testkube release, the Tracetest Testkube executor has been added to Testkube's available executors out of the box. If you have an older version of Testkube running, proceed with deploying the Tracetest Testkube executor manually. Testkube works with the concept of Executors: an Executor is a wrapper around a testing framework (Tracetest, in this case) in the form of a Docker container, and it runs as a Kubernetes job. To start, you need to register and deploy the Tracetest executor in your cluster using the Testkube CLI. Run the command below in your terminal: kubectl testkube create executor --image kubeshop/testkube-executor-tracetest:latest --types tracetest/test --name tracetest-executor --icon-uri icon --content-type string --content-type file-uri. Output: "Executor created tracetest-executor".

Trigger a trace-based test in Tracetest with Testkube. In the Tracetest Web UI, click the button in the top right, then click Test Definition. This will open a YAML definition for the test run. Save this into a file called test.yaml. The definition describes a Test named "Pokeshop - List" ("Get a Pokemon"), with an HTTP GET trigger carrying a Content-Type: application/json header against the list endpoint, and a spec named "Database queries less than … ms" that selects span[tracetest.span.type="database"] and asserts attr:tracetest.span.duration < the threshold. Execute the following command to create the test object in Testkube. Do not forget to provide the path to your Tracetest definition file using the --file argument, and also the Tracetest server endpoint using the TRACETEST_ENDPOINT variable. Remember that your TRACETEST_ENDPOINT should be reachable from Testkube in your cluster; use your Tracetest service's CLUSTER-IP:PORT. E.g.: kubectl testkube create test --file test.yaml --type tracetest/test --name pokeshop-tracetest-test --variable TRACETEST_ENDPOINT=http://CLUSTER_IP:PORT. Output: "Test created testkube / pokeshop-tracetest-test". Opening the Testkube Dashboard will show the test was created successfully. Finally, to run the test, execute the following command, or run the test from the Testkube Dashboard: kubectl testkube run test --watch pokeshop-tracetest-test.
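Before looking at the run output, here is roughly the shape that the saved test.yaml takes; the id, URL, and millisecond threshold below are placeholders rather than the exact values from the article:
type: Test
spec:
  id: example-test-id                      # placeholder; Tracetest generates the real id
  name: Pokeshop - List
  description: Get a Pokemon
  trigger:
    type: http
    httpRequest:
      url: http://demo-pokemon-api.demo/pokemon?take=20&skip=0   # placeholder demo URL
      method: GET
      headers:
        - key: Content-Type
          value: application/json
  specs:
    - name: Database queries less than 500 ms                    # placeholder threshold
      selector: span[tracetest.span.type="database"]
      assertions:
        - attr:tracetest.span.duration < 500ms
Registering a file of this shape with the kubectl testkube create test command above is what turns the Tracetest definition into a runnable Testkube test.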
Here's what the Testkube CLI output looks like if the test fails: the run header shows the test type (tracetest/test), the test name (pokeshop-tracetest-test), the execution ID and name, the execution number, status, start and end times, duration, and the TRACETEST_ENDPOINT variable; Testkube then streams the logs from the test job, where the executor runs tracetest test run --server-url … --definition /tmp/test-content --wait-for-result --output pretty, and the "Pokeshop - List" run reports the "Database queries less than … ms" spec as failed, listing each offending database span with its attr:tracetest.span.duration assertion, the measured duration, and a deep link to the selected assertion and span in the Tracetest UI. The Testkube Dashboard shows the same failed execution. If the test passes, the terminal output has the same shape, but the "Database queries less than … ms" spec is reported as passing and the run ends with "Execution succeeded" and "Execution completed"; the Testkube Dashboard likewise shows the execution as passed.

Running scheduled trace-based tests. Integrating with Testkube enables you to add Tracetest to the native CI/CD/GitOps pipeline in your Kubernetes cluster. This allows for scheduled test runs on set intervals, also called synthetic tests. Now, with trace-based testing available, full in-depth assertions against trace data are available, not just against a response. By using Testkube's scheduling, you can trigger the same test you defined above every minute. It works by providing a CRON schedule; you'll add an additional --schedule flag: kubectl testkube create test --file test.yaml --type tracetest/test --name pokeshop-tracetest-scheduled-test --schedule "<cron expression>" --variable TRACETEST_ENDPOINT=http://CLUSTER_IP:PORT. Output: "Test created testkube / pokeshop-tracetest-scheduled-test". In your Testkube Dashboard, you'll see this test run continuously and get triggered every minute.

Running event-driven trace-based tests. Event-based testing in Kubernetes is a critical aspect of ensuring the reliability and performance of microservices in Kubernetes. This testing approach involves observing events that are emitted by various components and services in the system to trigger tests against the system's components under various conditions. The main benefit of event-based testing is that it provides a more comprehensive testing approach than traditional unit, integration, and functional testing. With event-based testing, testers can simulate real-world scenarios and test the system's response to different types of input, load, and failure, while also verifying the system's ability to recover from such events. To effectively perform event-based testing in Kubernetes, you'll use Testkube as an event monitoring and management system that can capture and analyze the events generated by the system. This sample will trigger a test when a deployment is scaled. You've configured the Tracetest assertions to make sure all database queries finish within the chosen threshold; now define a trigger that will run the trace-based test every time the deployment scales, to ensure each replica satisfies the defined assertions. Define a TestTrigger for the Deployment resource to run the trace-based test when a deployment scale-update event occurs: the testkube-trigger.yaml manifest declares a TestTrigger (apiVersion tests.testkube.io) named deployment-scale-update-trigger in the testkube namespace, whose spec watches the deployment resource, selects it by the app.kubernetes.io/instance=demo label, reacts to the deployment-scale-update event, and runs the pokeshop-tracetest-test test in the testkube namespace. Save the file, name it testkube-trigger.yaml, and apply it: kubectl apply -f testkube-trigger.yaml. This will configure a trigger to run the test every time the demo app deployment is scaled. Try it yourself by running: kubectl scale deployment demo-pokemon-api --replicas=… -n demo. Moving back to the Testkube dashboard, you'll see the test was triggered by the event.
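Reassembled as a manifest, the trigger described above would look roughly like this; the apiVersion and the matchLabels value follow standard Testkube TestTrigger examples and should be treated as assumptions rather than values quoted from the article:
apiVersion: tests.testkube.io/v1
kind: TestTrigger
metadata:
  name: deployment-scale-update-trigger
  namespace: testkube
spec:
  resource: deployment
  resourceSelector:
    labelSelector:
      matchLabels:
        app.kubernetes.io/instance: demo   # assumed label for the demo app
  event: deployment-scale-update
  action: run
  execution: test
  testSelector:
    name: pokeshop-tracetest-test
    namespace: testkube
The testSelector is what ties the Kubernetes event back to the pokeshop-tracetest-test created earlier.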
Running tests in a test suite or transaction. Running singular, isolated, event-driven tests has its own important use cases and value, but in the wild you'll more often rely on chaining multiple tests together into a transaction or test suite. Both Testkube and Tracetest support such logical constructs: in Tracetest they're called transactions, and in Testkube they're called test suites.

Tracetest transactions. Running end-to-end tests is not simple. It requires configuration before the actual test can be run, such as creating a new user or removing all items from a cart. Therefore, it's important to be able to execute multiple steps as part of your transaction. Tracetest introduces the concept of transactions to achieve this goal. A transaction is a group of steps executed in a defined order, where each step is a test that can access information exported by previous tests. The main benefit of using transactions is the ability to chain tests together and use values obtained in one test as input for a subsequent test. When a test is executed within a transaction, if it generates any outputs, the test outputs will be injected into the transaction context environment. After the outputs are injected, all subsequent tests to be run within the transaction will be able to reference those values with env:VARIABLE_NAME. Note: outputs generated by steps don't modify the selected environment; they only modify the transaction run context object. Tracetest allows tests to declare outputs. An output is a value that is extracted from a trace by providing a selector, to choose which spans to get the information from, and an expression, to get the value from the selected spans.

Run a Tracetest transaction. Start by adding a new test by clicking Create > Create New Test > HTTP Request > Next > Choose Example dropdown > Pokeshop - Add (generates a sample test from the Tracetest demo) > Next > the URL is prefilled > Create & Run. The request body will be populated with a JSON payload describing a Pokemon named "meowth" of type "normal", with an imageUrl and isFeatured set to true. Navigate to Test > Outputs and click Add Test Output. Select the "create pokeshop.pokemon" database span: span[tracetest.span.type="database" name="create pokeshop.pokemon" db.system="postgres" db.name="pokeshop" db.user="ashketchum" db.operation="create" db.sql.table="pokemon"]. The attribute to export the Pokemon's id as a value is attr:db.result, read via the JSON path id. Finally, give it a name: add_pokemon_db_result_id. Save the test output and publish the variables. The YAML definition of this test describes a Test named "Pokeshop - Add In Transaction" ("Add a Pokemon"), with an HTTP POST trigger carrying the Content-Type: application/json header and the JSON body above, plus an outputs entry named add_pokemon_db_result_id whose selector is the database span above and whose value reads the id from attr:db.result.
Now you can edit the List Pokemon test to use this variable in an assertion. Select the HTTP span: span[tracetest.span.type="http" name="GET /pokemon?take&skip" http.method="GET"]. Set this assertion: attr:http.response.body contains env:add_pokemon_db_result_id. Finally, give the assertion a name, save the test spec, and click publish. Now you can add a transaction and see how it works together. Click Create > Create New Transaction > give it a name > Create. Once created, add the Add and List tests to the transaction list. You'll see the defined variables on the right, below the execution steps. The List test is passing, as it is correctly asserting that the result of the List request contains the exported variable from the Add test. Copy the transaction definition file and save it as a file named transaction.yaml; it is a Transaction resource named "Add List Pokemon" whose steps are the IDs of the Add and List tests. Now you can create a test in Testkube to trigger the transaction: kubectl testkube create test --file transaction.yaml --type tracetest/test --name pokeshop-tracetest-transaction --variable TRACETEST_ENDPOINT=http://CLUSTER_IP:PORT. Output: "Test created testkube / pokeshop-tracetest-transaction". Trigger the transaction in the same way as you did the test: kubectl testkube run test --watch pokeshop-tracetest-transaction. The output shows the transaction run executing the "Pokeshop - Add In Transaction" and "Pokeshop - List In Transaction" tests in order, with the "Make sure pokemon id from the ADD is contained in the LIST" assertion passing, followed by "Execution succeeded" and "Execution completed", with links to the transaction and test runs in Tracetest.

Run a Testkube test suite. Test suites in Testkube are a way to orchestrate different test steps, and entirely different testing frameworks, to run in a suite. Your front-end team uses Cypress for browser tests, while the back-end team uses Tracetest; you may also have Postman collections testing various parts of your apps. With test suites, you can orchestrate different test steps and combine them to run in a sequence. Even if each team runs its tests on its own, they can ultimately be combined and triggered from one location by a test suite.

Learn more about Kubernetes testing. Combined, Testkube and Tracetest provide a comprehensive testing solution for Kubernetes applications. By utilizing Testkube triggers, you can automatically initiate trace-based tests with Tracetest, ensuring that your services are adhering to defined SLAs. Tracetest provides detailed distributed trace data, allowing you to gain insight into the behavior of your application at a granular level and ultimately create assertions against this data to write bullet-proof tests. Would you like to learn more about Tracetest and what it brings to the table? Check the docs and try it out by downloading it today. To explore more options Testkube gives you, check out the documentation on test triggers; they enable you to trigger tests based on Kubernetes events. Want to learn more about Testkube? Read more here. Also, please feel free to join our Discord community, give Tracetest a star on GitHub, or schedule a time to chat. |
2023-03-30 15:54:18 |
Overseas TECH |
DEV Community |
Simplify Python Dependency Management: Creating and Using Virtual Environments with Poetry |
https://dev.to/rainleander/simplify-python-dependency-management-creating-and-using-virtual-environments-with-poetry-22ee
|
Simplify Python Dependency Management: Creating and Using Virtual Environments with Poetry. As a Python developer, managing dependencies and libraries can become a bit of a hassle. It's important to keep track of different versions of packages and ensure that they work together seamlessly. Virtual environments and package managers can help to solve these issues. Virtual environments are isolated Python environments where you can install packages and libraries without affecting the system-wide installation. You can have multiple virtual environments with different package versions and dependencies to work on different projects simultaneously. One of the most popular package managers for Python is Poetry, which simplifies package management and streamlines dependency resolution. In this post, we will walk you through how to create and use virtual environments in Python with Poetry.

Step 1: Install Poetry. The first step is to install Poetry on your system. Poetry can be installed on any operating system that supports Python. To install Poetry, you can use the following command in your terminal: curl -sSL … | python

Step 2: Create a new project. Once you have installed Poetry, create a new directory for your project and navigate into it. Then run the following command to create a new project with Poetry: poetry init. This command will create a pyproject.toml file that contains information about your project and its dependencies.

Step 3: Create a virtual environment. To create a virtual environment with Poetry, run the following command: poetry env use python. This command will create a new virtual environment and activate it. You can also specify a particular version of Python to use in your virtual environment by running: poetry env use /path/to/python

Step 4: Add dependencies. To add dependencies to your project, you can use the following command: poetry add package-name. This command will install the package and its dependencies in your virtual environment and update your pyproject.toml file. You can also specify the version of the package that you want to install: poetry add package-name@version

Step 5: Install dependencies. To install the dependencies of your project, you can run the following command: poetry install. This command will install all the dependencies listed in your pyproject.toml file.

Step 6: Use the virtual environment. To use the virtual environment, you need to activate it first by sourcing the Poetry environment's activate script. This will activate the virtual environment, and you can start working on your project. To deactivate the virtual environment, simply run: deactivate. That's it! You can now create and use virtual environments in Python with Poetry. With this approach, you can keep your projects isolated and ensure that they work seamlessly without any dependency issues.
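As a rough illustration of what Step 2 produces, a minimal pyproject.toml generated by poetry init might look like the sketch below; the project name, author, Python constraint, and dependency are placeholders, not values from the article:
[tool.poetry]
name = "my-project"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.10"
requests = "^2.28"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
Running poetry add and poetry install against a file like this keeps the dependency list and Poetry's lock file in sync. |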
2023-03-30 15:41:45 |
Overseas TECH |
DEV Community |
Working with Microservices with NestJS |
https://dev.to/amplication/working-with-microservices-with-nestjs-3ekm
|
Working with Microservices with NestJS. Node.js is one of the best options out there when it comes to building microservices. But on its own, it lacks many features that enterprise-level systems require: it has neither the ability to easily generate a scalable architecture, nor any type of helper function to build APIs. That's where NestJS, often referred to simply as Nest, comes in. Nest allows us to easily create back-end systems using Node.js and TypeScript. It also provides a clear and efficient pattern for defining these systems, and through the use of specific OOP tools, which will be covered in this article, we can also define scalable and extensible microservices. In this article, we're going to build a set of microservices to power the back end of a bookstore. Through this process, we'll cover all the basic concepts we need to know to get started with Nest.

NestJS and Amplication. At Amplication, we help developers build production-ready GraphQL and REST API endpoints, all on top of NestJS. If you end up liking this article and want to dive right in to building your next service with NestJS, be sure to sign up for Amplication at app.amplication.com. Also, we're closing in on a star milestone on GitHub and would love it if you gave us a star too. Check out our repo; thank you!

What Is Nest? Nest is a framework that works on top of other frameworks, including Express (by default) or Fastify, which we'll call "base frameworks". As such, Nest provides a higher level of abstraction. While these base frameworks are powerful, they leave it up to us to determine how we structure our projects. They also let us decide what kind of abstractions we code into them to make them easier to maintain or to scale. Nest adds several tools to these base frameworks in the form of abstractions and functions that help to solve scaling problems preemptively. These tools include: Modules: Nest uses a modular architecture, allowing developers to organize their code into reusable modules that can be shared and imported across the application with ease. Controllers: controllers handle incoming HTTP requests and define the routes for the application. This separates the routing logic from the business logic of the application and makes the code easier to understand and maintain; it's also a major time saver if we need to maintain an application with hundreds of routes. Services: services encapsulate business logic and can be shared and injected across the application, thus making it easier to manage and test the application's logic. Guards and pipes: Nest includes built-in support for guards and pipes, which can be used to add middleware and validation logic to the application. Decorators: decorators can be used to add metadata to a class and its members, which can then configure the application and manage its dependencies. Support for TypeScript: Nest was built with TypeScript. As such, we can use TypeScript to build our own applications as well, which provides a more robust type system and improved tooling capabilities for building large-scale applications. CLI: Nest comes with a powerful CLI tool that can help with the development and maintenance of our application, such as generating boilerplate code and scaffolding new modules, controllers, and services. Dependency injection: this is a pattern that allows us to easily add dependencies to our otherwise generic code. Injecting services into our controllers allows us to swap services with minimal knock-on effect. Nest's documentation shows their offerings in full. Let's dive into a practical example to see how it performs in action.
Building Microservices with Nest. We're going to be building a simple yet scalable back end for a bookstore. To limit repeated code, let's focus on the main API for such a business: the book handler. Let's start at the beginning: installing Nest. Assuming we're running a relatively recent version of Node, simply type: npm install -g @nestjs/cli. With Nest installed, we now have to plan our projects. Since we're going to be building microservices, we're not going to build one single project; instead, we want to build one project per microservice and one project per client. Given we're going to build one single microservice here, let's build two projects in total: nest new bookstore, and nest new client. "Bookstore" and "client" will end up being the names of the folders in which our projects will be saved. When asked about what package manager we want to use, any option works for this tutorial; we'll use NPM, my favorite. Once the process is complete, we'll have two brand new folders on our system with all the code we need to get started.

In-Depth Walkthrough on Building Nest Microservices. With both our projects created, let's first focus on the microservice that will handle our books. There are already some files inside that project's src folder; the app.controller.ts, app.module.ts, app.service.ts, and main.ts files are going to be our main targets. First, though, we need to install a new dependency, because by default we're missing that building block for our microservice: npm install @nestjs/microservices. Now open the main.ts file and change it to call the createMicroservice method of the NestFactory:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
    transport: Transport.TCP,
    options: { port: 8080 }, // example port
  });
  await app.listen();
}
bootstrap();

We're setting up a microservice on that port, using TCP as the transport layer. Now let's define the structure of our books; since we're dealing with TypeScript, we have to define the types we're dealing with. We're going to define a DTO (data transfer object) using an interface:

export interface BookDTO {
  id: string;
  title: string;
  author: string;
  release_date: Date;
}

Our books will have a title, an author, and a release date. They will also have an ID, but we'll auto-generate that piece of information. Now that we have our DTO ready and the application is building an actual microservice, let's edit the controller file, where we'll handle the entry point for our messages. From there we'll connect with the service, which is where the actual business logic will reside. The controller class will have the Controller annotation, letting Nest know what it is and how it works. Inside this class we'll define one method per endpoint of our microservice. To specify that a method is meant to handle an incoming message, we'll use the MessagePattern annotation. The controller will look like this:

import { Controller } from '@nestjs/common';
import { AppService } from './app.service';
import { MessagePattern } from '@nestjs/microservices';
import { BookDTO } from './book';

// busy-wait helper used to simulate a slow operation
function delay(ms) {
  var start = new Date().getTime();
  var end = start;
  while (end < start + ms) {
    end = new Date().getTime();
  }
}

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @MessagePattern({ cmd: 'new_book' })
  newBook(book: BookDTO): string {
    delay(10000); // the fake ten-second delay discussed below
    const result = this.appService.newBook(book);
    if (!result) {
      return 'Book already exists';
    } else {
      return result;
    }
  }

  @MessagePattern({ cmd: 'get_book' })
  getBook(bookID: string): BookDTO {
    return this.appService.getBookByID(bookID);
  }

  @MessagePattern({ cmd: 'get_books' })
  getBooks(): BookDTO[] {
    return this.appService.getAllBooks();
  }
}
We've defined three methods (endpoints): newBook receives and saves the data for a new book; getBook returns the details of a single book, given its ID; getBooks returns all books in our store. With the annotation on each method, we're defining the command to which that method will respond. Our client application will not execute our method directly, but rather will send a message. Importantly, this separates the implementation from the client. It also gives us a chance to test two different ways of interacting with the method: messages and events. This is why we added a fake delay of ten seconds on the newBook method; we'll use it to test both communication methods. Let's now take a look at the service, where our business logic resides. Our service class is annotated as Injectable, because Nest will inject it into our controller without us having to do anything special. To keep things simple, let's skip database storage and validations. Instead, we'll save everything in memory with this array:

import { Injectable } from '@nestjs/common';
import { BookDTO } from './book';

let bookStore: BookDTO[] = [];

@Injectable()
export class AppService {
  getBookByID(bookID: string) {
    return bookStore.find((b: BookDTO) => b.id === bookID);
  }

  getAllBooks() {
    return bookStore;
  }

  newBook(book: BookDTO) {
    const exists = bookStore.find((b: BookDTO) => {
      return b.title === book.title && b.author === book.author && b.release_date === book.release_date;
    });
    if (exists) {
      return false;
    }
    book.id = 'Book-' + bookStore.length;
    bookStore.push(book);
    return book.id;
  }
}

These are the methods to which our controller will have access; they simply deal with an array of BookDTO objects. Now our microservice is ready. We can test it with npm run start:dev. Here's a brief review of where we are: we built a controller and a service, and never really had to connect the two. Our service is listening on the port we defined, and we didn't have to do anything else. There are no routes defined anywhere, but rather some message patterns that we define per method. Building microservices becomes a relatively simple process with NestJS.

Building the Client Application. Let's now take a look at the client application, which is going to be our interface into the microservice we just built. Inside this project we'll find the same files as in the previous project. Our main.ts file will remain largely the same; we can simply update the port:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000); // example port
}
bootstrap();

This time around we'll ignore the app.service.ts file, but we will change the app.module.ts file. Nest needs to know where the microservice is located, so that it can interface with it successfully without any manual processes:

import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { ClientProxyFactory, Transport } from '@nestjs/microservices';

@Module({
  imports: [ConfigModule.forRoot()],
  controllers: [AppController],
  providers: [
    {
      provide: 'BOOKS_SERVICE',
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => {
        return ClientProxyFactory.create({
          transport: Transport.TCP,
          options: {
            host: configService.get('BOOKSTORE_SERVICE_HOST'),
            port: configService.get('BOOKSTORE_SERVICE_PORT'),
          },
        });
      },
    },
  ],
})
export class AppModule {}

Here we're specifying the controller for this application and the providers we're using, which essentially means configuring the location of our microservice. Since the providers property is an array, we could potentially specify multiple microservices and access them all through the same client. Both BOOKSTORE_SERVICE_HOST and BOOKSTORE_SERVICE_PORT are environment variables, which have to be set.
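Since the article doesn't show where those two variables come from, a hypothetical .env file (picked up by ConfigModule.forRoot()) makes the wiring concrete; the values below are placeholders:
# .env for the client application (placeholder values)
BOOKSTORE_SERVICE_HOST=localhost
BOOKSTORE_SERVICE_PORT=8080
In a cluster you would point these at the bookstore service's DNS name and the port the microservice listens on.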
Now that we've set up our provider (i.e., the microservice) and named it BOOKS_SERVICE, we need to create the controller. This controller will handle all incoming HTTP requests and will redirect them to the microservice through the specified message protocol. For this example, our client is going to be an HTTP API, so we'll set up some endpoints by defining our controller class and annotating it with the Controller annotation. We'll also give that annotation a value, which will act as the root for all the endpoints we define here. With that done, we can start defining methods inside our controller class. Each method will correspond to an endpoint, specified with yet another annotation: Get, or Post in our case, since we have an endpoint for creating new books. Each endpoint will be straightforward: it will send a message through the client to our microservice. For each message we'll specify a different command, the one we defined in our message patterns back when we wrote the microservice. Here's the code:

import { Body, Controller, Get, Inject, Param, Post } from '@nestjs/common';
import { BookDTO } from './book';
import { ClientProxy } from '@nestjs/microservices';

@Controller('booksstore')
export class AppController {
  constructor(@Inject('BOOKS_SERVICE') private client: ClientProxy) {}

  @Get()
  getAllBooks() {
    return this.client.send({ cmd: 'get_books' }, {});
  }

  @Get(':id')
  getBookByID(@Param('id') id) {
    return this.client.send({ cmd: 'get_book' }, id);
  }

  @Post()
  createNewBook(@Body() book: BookDTO) {
    return this.client.send({ cmd: 'new_book' }, book);
  }
}

Notice how the constructor injects the service into our controller class. Just like before, we don't need to perform manual processes: for example, deciding where we import the service from, how we instantiate it, or into what property it should be inserted. The Controller annotation receives the string 'booksstore', which makes that string the root for all our requests. As such, to get the list of all books, for example, we need to query /booksstore, and to get the details of the book with a given ID, we query /booksstore/<id>. The key takeaway here is that we're using the send method from the client property. This method sends a message to the microservice, and it will wait until the microservice successfully completes the operation and sends back a response before returning. To wrap up this tutorial, let's take a quick look at both the message pattern and the event pattern.

Message Versus Events Pattern. Nest allows us to select one of these two ways to communicate with our microservice. Message pattern: pros and cons. The code in this tutorial has been produced using the message pattern, also known as the request-response pattern. This is the most common pattern for HTTP services. The upside of the message pattern is that it's simple to work with and, perhaps most importantly, easy to debug. However, the downside is that the microservice could take too long to respond, resulting in our connection being locked until the end, running the risk of ruining users' experience or even generating timeout errors. As such, this pattern is a great choice only when interacting with services with a low response latency. For other instances, fortunately, Nest allows us to use the events pattern. Events pattern to the rescue. The events pattern allows us to define microservices that are waiting for specific events, and then have other services trigger these events. You can even register multiple event handlers (i.e., methods) for the same event, and when triggered they will all fire in parallel.
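As a sketch of the event-driven variant described here (not code from the article), the client could emit an event instead of sending a message, and the microservice could pick it up with an EventPattern handler from @nestjs/microservices; the event name 'book_created' is a hypothetical choice:

// client controller: switching send() to emit() makes the call fire-and-forget
@Post()
createNewBook(@Body() book: BookDTO) {
  return this.client.emit('book_created', book);
}

// microservice controller: reacts whenever the event arrives
@EventPattern('book_created')
handleBookCreated(book: BookDTO) {
  this.appService.newBook(book);
}

With this shape, the ten-second delay inside newBook no longer holds the HTTP connection open, which is exactly the trade-off the next paragraphs walk through.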
Events are asynchronous by default. This means that no matter how long the microservice takes to perform the operation, the connection is closed immediately. The events pattern is a great choice when we want to send information or commands to the microservice without having to worry about response time. Usually, microservices emit other events as a result, thus triggering a network of communications that can result in a complex operation distributed across simple services. For example, in our tutorial earlier, if we tried to create a new book using the message pattern, it would have taken ten seconds due to the fake delay we added. On the other hand, if we go to the createNewBook method on the client's controller and change send to emit, the result is instantaneous, even though we no longer get the new book's ID as a response. Event-based patterns are usually more flexible, as they offer the opportunity to create complex architectures that scale more easily and are highly responsive. Yes, they might require a bit more debugging if things don't work perfectly the first time, but the superior end result warrants the extra upfront effort.

Conclusion. Building a microservice-based architecture in NestJS is a great experience, offering a wealth of tools. Remember that one project is required per microservice, and each provider needs configuring inside the app.module.ts file so that we're then able to build clients that communicate with one another. Finally, when communicating with another microservice, we can choose between using a classic request-response pattern or going with the more scalable event-based pattern; each has its pros and cons. To see the full code of the projects discussed here, use the following repositories: one for the microservice code and one for the client code. Intrigued by Nest and would like to try it on a future project? Consider using Amplication. Amplication auto-generates boilerplate code, leaving you free to focus on coding your business logic. |
2023-03-30 15:35:04 |
Apple |
AppleInsider - Frontpage News |
Unique 'Lucky you' sealed original iPhone is up for auction |
https://appleinsider.com/articles/23/03/30/unique-lucky-you-sealed-original-iphone-is-up-for-auction?utm_medium=rss
|
Unique 'Lucky you' sealed original iPhone is up for auction. A factory-sealed original iPhone with a unique sticker is up for auction, in a sale expected to close at a substantial price. The 'Lucky You' iPhone. Coming from Wright auction house, a factory-sealed first-generation iPhone has arrived from Donald Gajadhar of Fox White Art Antique Appraisals, but it has something that sets it apart from other boxed iPhones. Read more |
2023-03-30 15:19:34 |
Apple |
AppleInsider - Frontpage News |
Apple AR headset debut at WWDC in doubt |
https://appleinsider.com/articles/23/03/30/apple-ar-headset-debut-at-wwdc-in-doubt?utm_medium=rss
|
Apple AR headset debut at WWDC in doubt. Analyst Ming-Chi Kuo says Apple is not optimistic about launching its AR/VR headset and has pushed back mass production, meaning it may not get an announcement at WWDC. A render of a potential Apple headset (AppleInsider). Read more |
2023-03-30 15:08:13 |
Overseas TECH |
Engadget |
Midjourney ends free trials of its AI image generator due to 'extraordinary' abuse |
https://www.engadget.com/midjourney-ends-free-trials-of-its-ai-image-generator-due-to-extraordinary-abuse-153853905.html?src=rss
|
Midjourney ends free trials of its AI image generator due to 'extraordinary' abuse. Midjourney is putting an end to free use of its AI image generator after people created high-profile deepfakes using the tool. CEO David Holz says on Discord that the company is ending free trials due to "extraordinary demand and trial abuse." New safeguards haven't been "sufficient" to prevent misuse during trial periods, Holz says. For now, you'll have to pay for a monthly plan to use the technology. As The Washington Post explains, Midjourney has found itself at the heart of unwanted attention in recent weeks. Users relied on the company's AI to build deepfakes of Donald Trump being arrested and Pope Francis wearing a trendy coat. While the pictures were quickly identified as bogus, there's a concern bad actors might use Midjourney, OpenAI's DALL-E, and similar generators to spread misinformation. Midjourney has acknowledged trouble establishing policies on content. Holz previously justified a ban on images of Chinese leader Xi Jinping by telling Discord users that his team only wanted to "minimize drama" and that having any access in China was more important than allowing satirical content. In a Wednesday chat with users, Holz said he was having difficulty setting content policies as the AI enabled ever more realistic imagery. Midjourney is hoping to improve AI moderation that screens for abuse, the founder added. Some developers have resorted to strict rules to prevent incidents. OpenAI, for instance, bars any images of ongoing political events, conspiracy theories, and politicians; it also forbids hate, sexuality, and violence. However, others have relatively loose guidelines. Stability AI won't let Stable Diffusion users copy styles or make not-safe-for-work pictures, but it generally doesn't dictate what people can make. Misleading content isn't the only problem for AI image production. There are longstanding concerns that the pictures are stolen, as they frequently use existing images as reference points. While some companies are embracing AI art in their products, there's also plenty of hesitation from firms worried they'll get unwanted attention. This article originally appeared on Engadget. |
2023-03-30 15:38:53 |
Cisco |
Cisco Blog |
Cisco Partner Experience (PX) Cloud is Now Available Worldwide |
https://feedpress.me/link/23532/16050104/cisco-partner-experience-px-cloud-is-now-available-worldwide
|
Cisco Partner Experience (PX) Cloud is Now Available Worldwide. We have launched Cisco's Partner Experience (PX) Cloud in General Availability (GA) worldwide. Starting this week, Cisco partners that resell Success Tracks will now get automatic access to the PX Cloud platform, the Cisco CX single pane of glass to manage customers' lifecycles. |
2023-03-30 15:00:42 |
Overseas Science |
NYT > Science |
Ukraine Goes Dark: NASA Images Drive Home a Nation’s Anguish |
https://www.nytimes.com/2023/03/30/world/europe/ukraine-satellite-darkness.html
|
power |
2023-03-30 15:40:09 |
Overseas Science |
NYT > Science |
This Is What It Sounds Like When Plants Cry |
https://www.nytimes.com/2023/03/30/science/plant-sounds-stress.html
|
cry, scientists |
2023-03-30 15:20:47 |
Finance |
Financial Services Agency website |
Updated the status of changes to loan terms and conditions at financial institutions. |
https://www.fsa.go.jp/ordinary/coronavirus202001/kashitsuke/20200430.html
|
financial institutions |
2023-03-30 17:00:00 |
Finance |
Financial Services Agency website |
Updated the special page for COVID-19 (novel coronavirus) related information. |
https://www.fsa.go.jp/ordinary/coronavirus202001/press.html
|
spread of infection |
2023-03-30 17:00:00 |
Finance |
Financial Services Agency website |
Published the results of the public comments on the proposed Cabinet Office Ordinance to partially amend the Cabinet Office Ordinance on Administrative Monetary Penalties under the Provisions of Chapter VI-2 of the Financial Instruments and Exchange Act, etc. |
https://www.fsa.go.jp/news/r4/shouken/20230330-2/20230330-2.html
|
Cabinet Office Ordinance |
2023-03-30 17:00:00 |
Finance |
Financial Services Agency website |
Published the results of the public comments on the proposed notice establishing the standards by which a final designated parent company indicates the soundness status of large exposures, used as criteria for judging the soundness of management of the final designated parent company and its subsidiaries, and on the proposed Cabinet Office Ordinance to partially amend the Ordinance for Enforcement of the Banking Act, etc. |
https://www.fsa.go.jp/news/r4/shouken/20230330/20230330.html
|
credit extension |
2023-03-30 17:00:00 |
Finance |
Financial Services Agency website |
Published the results of the public comments on the proposed partial amendment to the Comprehensive Guidelines for Supervision of Financial Instruments Business Operators, etc. |
https://www.fsa.go.jp/news/r4/shouken/20230330-3/20230330-3.html
|
financial instruments business operators |
2023-03-30 17:00:00 |
News |
BBC News - Home |
Thomas Cashman guilty of Olivia Pratt-Korbel shooting murder |
https://www.bbc.co.uk/news/uk-england-merseyside-65088182?at_medium=RSS&at_campaign=KARANGA
|
korbel |
2023-03-30 15:53:56 |
News |
BBC News - Home |
King Charles celebrates UK-Germany ties in historic address |
https://www.bbc.co.uk/news/uk-65121371?at_medium=RSS&at_campaign=KARANGA
|
kraftwerk |
2023-03-30 15:20:53 |
News |
BBC News - Home |
Russia arrests US journalist Evan Gershkovich on spying charge |
https://www.bbc.co.uk/news/world-europe-65121885?at_medium=RSS&at_campaign=KARANGA
|
journal |
2023-03-30 15:55:42 |
News |
BBC News - Home |
PM pledges trans guidance for schools 'for summer term' |
https://www.bbc.co.uk/news/uk-65127170?at_medium=RSS&at_campaign=KARANGA
|
minefield |
2023-03-30 15:34:07 |
News |
BBC News - Home |
What is happening to house prices, and could there be a crash? |
https://www.bbc.co.uk/news/explainers-63147101?at_medium=RSS&at_campaign=KARANGA
|
house |
2023-03-30 15:03:27 |
News |
BBC News - Home |
Thomas Cashman: 'Brave' ex-partner helped convict Olivia's killer |
https://www.bbc.co.uk/news/uk-england-merseyside-65097495?at_medium=RSS&at_campaign=KARANGA
|
cashman |
2023-03-30 15:48:42 |
News |
BBC News - Home |
Women's Six Nations 2023: Delaney Burns to make debut in new-look England pack |
https://www.bbc.co.uk/sport/rugby-union/65129219?at_medium=RSS&at_campaign=KARANGA
|
Women's Six Nations: Delaney Burns to make debut in new-look England pack. Bristol lock Delaney Burns will make her England debut as injuries and a retirement force changes to the Red Roses pack for Sunday's Women's Six Nations match against Italy. |
2023-03-30 15:39:09 |