IT |
気になる、記になる… |
Pokémon Sleep, the smartphone app that collects Pokémon sleeping faces while you sleep, launches in late July |
https://taisy0.com/2023/07/06/173739.html
|
sleep |
2023-07-06 14:44:44 |
python |
New posts tagged Python - Qiita |
Day 2: A Python beginner tries building a dungeon game |
https://qiita.com/tsuyoron515/items/7de431db61382b213e78
|
import pygame; import random |
2023-07-06 23:39:41 |
js |
New posts tagged JavaScript - Qiita |
Trying out simple pattern repetition in p5.js with the CanvasRenderingContext2D method createPattern() (using drawingContext and createGraphics()) |
https://qiita.com/youtoy/items/d2bfec4c49eb4d1dc8e2
|
drawingcontext |
2023-07-06 23:00:51 |
AWS |
New posts tagged AWS - Qiita |
How to easily move AWS SES out of the sandbox without using the production access request form |
https://qiita.com/daji110728/items/42d9ba5b2adc5b518e5c
|
awsses |
2023-07-06 23:06:49 |
Azure |
New posts tagged Azure - Qiita |
How to run Azure Machine Learning's "Prompt flow", explained |
https://qiita.com/YutaroOgawa2/items/1e5166e25f0288184560
|
azuremachinelearning |
2023-07-06 23:38:56 |
Azure |
New posts tagged Azure - Qiita |
Setting up Azure Bot × Teams × Azure OpenAI, Part 1: getting it to echo back |
https://qiita.com/sakue_103/items/c88f6dc19e0dfb9cc842
|
azure |
2023-07-06 23:10:19 |
Tech Blog |
Developers.IO |
[Slides published] SaaS solutions for defending against ever-evolving cybersecurity threats – Showcase Security 2023 |
https://dev.classmethod.jp/articles/%e3%80%90%e8%b3%87%e6%96%99%e5%85%ac%e9%96%8b%e3%80%91%e9%80%b2%e5%8c%96%e3%81%97%e7%b6%9a%e3%81%91%e3%82%8b%e3%82%b5%e3%82%a4%e3%83%90%e3%83%bc%e3%82%bb%e3%82%ad%e3%83%a5%e3%83%aa%e3%83%86%e3%82%a3/
|
showcasesecurity |
2023-07-06 14:54:51 |
Overseas TECH |
DEV Community |
Celebrating 10 Years as a Microsoft MVP 🎉 |
https://dev.to/kasuken/celebrating-10-years-as-a-microsoft-mvp-2bed
|
Celebrating 10 Years as a Microsoft MVP. Hello! I'm excited to share that this month marks my 10th anniversary as a Microsoft Most Valuable Professional (MVP). It has been an incredible journey of learning, sharing and growing with the amazing Microsoft community. In this post I want to reflect on some of the highlights and milestones of my MVP journey, as well as share some fun facts and statistics from the past decade.
What is a Microsoft MVP? For those who are not familiar with the MVP program, it is a way for Microsoft to recognize and reward outstanding community leaders who passionately share their knowledge and expertise on Microsoft products and services. MVPs are always on the bleeding edge of technology and have an unstoppable urge to get their hands on new and exciting innovations. They have very deep knowledge of Microsoft technologies while also being able to bring together diverse platforms, products and solutions to solve real-world problems. MVPs make up a global community of technical experts and community leaders across many countries and regions. The MVP award is not based on a checklist of activities or achievements, but rather on the quality and impact of the contributions MVPs make to the community. These contributions can range from speaking engagements to blog posts, books, online forums, social media, GitHub and more. The key benefits of being an MVP include early access to Microsoft products, direct communication channels with product teams, an invitation to the Global MVP Summit, a close relationship with the local Microsoft teams, and various subscriptions and licenses.
How did I become an MVP? My MVP journey started while I was working as a software developer using Microsoft technologies such as C#, ASP.NET MVC, SQL Server, Azure and so on. I was always curious and eager to learn new things and improve my skills. I started reading blogs, watching videos, listening to podcasts, attending events and following experts on Twitter. I was amazed by how much valuable information and insight I could get from these sources. I also realized that there were many other developers like me who were looking for answers and guidance on various topics. I decided to start my own blog where I could share my learnings and experiences with Microsoft technologies. I also joined online communities such as Stack Overflow, MSDN Forums and CodeProject, where I could ask and answer questions, provide feedback and help others, and I started speaking at local user groups, meetups and conferences, where I could meet other developers face to face and exchange ideas. I enjoyed these activities immensely and found them very rewarding. One day a friend of mine nominated me as a Microsoft MVP. I filled out the nomination form with details of my activities and waited for the result. After a few months of review I got another email from Microsoft saying that I had been awarded MVP status for ASP.NET technologies. I was overjoyed and grateful for this opportunity.
What have I done as an MVP? Since then I have been actively involved in the Microsoft community as an MVP. Here are some of the things I have done over the past years: written blog posts on various topics related to Microsoft technologies (and not only); published a book on ASP.NET Core Minimal APIs; spoken at many events, online and around Europe; organized events, online and in person; created (and still create) LinkedIn Learning courses; contributed to open source projects on GitHub; participated in hackathons; mentored aspiring developers through online platforms; and received other awards and recognitions such as GitHub Star, Auth0 Ambassador and Codemotion Ambassador.
What have I learned as an MVP? Being an MVP has been a tremendous learning experience for me. Here are some of the key lessons I have learned along the way. Be passionate: passion is the fuel that drives me to learn new things, share my knowledge and help others; it is what makes me excited about technology and keeps me motivated to do more. Be curious: curiosity is the spark that ignites my creativity and innovation; it is what makes me explore new possibilities and challenge myself to find better solutions. Be humble: humility is the foundation that helps me grow as a professional and a person; it is what makes me appreciate feedback, acknowledge mistakes and respect others. Be generous: generosity is the spirit that guides me to give back to the community and make a positive impact; it is what makes me share my time, resources and expertise with others without expecting anything in return. Be grateful: gratitude is the attitude that shapes my perspective and happiness; it is what makes me thankful for the opportunities, support and recognition I have received from Microsoft and the community.
What are some fun facts about MVPs? To wrap up this post, I want to share some fun facts and statistics about MVPs that I have collected from various sources. The MVP program traces back to a developer named Calvin Hsia, who built a system to rank the most active users of the CompuServe technology support forums. His list of "Most Verbose People", as he dubbed it, was initially created for fun, but it soon caught the eye of Microsoft. The first MVPs were awarded for Microsoft Access, Visual Basic and FoxPro, and the first MVP Summit followed. The MVP award categories have changed over time to reflect the evolution of Microsoft technologies; the current categories include AI, Azure, Business Applications, Cloud and Datacenter Management, Data Platform, Developer Technologies, Enterprise Mobility, Office Apps and Services, Windows Development, and Windows and Devices for IT. The MVP award lasts for a year and is renewable based on continued community contributions, and the average tenure spans several years. The MVP community is very diverse and inclusive: there are MVPs of all ages, genders, backgrounds and professions, from many countries and regions, speaking a wide range of languages; there are MVPs who are students, teachers, doctors, lawyers, artists, musicians and more. The MVP community is also very collaborative and supportive. There are many ways for MVPs to connect and interact with each other, such as online forums, social media groups, podcasts and newsletters, as well as regional and global events where MVPs can meet in person and network with each other and with Microsoft teams. The most prominent event is the Global MVP Summit, held annually at the Microsoft headquarters in Redmond, Washington; it is an exclusive event where MVPs can learn about the latest Microsoft technologies, provide feedback to product teams and have fun with fellow MVPs. Finally, the MVP community is very influential and impactful. MVPs are recognized as thought leaders and trusted advisors in their respective fields, they have a strong voice and reach in the community through their various channels of communication, and they have a direct impact on the development and improvement of Microsoft products and services through their feedback and suggestions. According to Microsoft, MVPs have influenced millions of people worldwide through their community activities.
Conclusion. I hope you enjoyed reading this post as much as I enjoyed writing it. I also want to thank all of you who have supported me throughout my journey; you are the reason why I do what I do. I look forward to continuing my journey as an MVP for many more years to come. You can find more information about the MVP program on Microsoft's MVP site. Thank you, and see you around! Are you interested in learning GitHub but don't know where to start? Try my course on LinkedIn Learning, "Learning GitHub". Thanks for reading this post; I hope you found it interesting. Feel free to follow me to get notified when new articles are out. Emanuele Bartolesi, Microsoft MVP & GitHub Star, Auth0 Ambassador & Codemotion Ambassador, LinkedIn Learning Technical Instructor |
2023-07-06 14:46:44 |
Overseas TECH |
DEV Community |
"ChatGPT Creator OpenAI" : Forming New "Research Team" to Bring 'Superintelligent AI' |
https://dev.to/soumyadeepdey/openai-is-forming-a-team-to-rein-in-superintelligent-ai-1nhh
|
"ChatGPT Creator OpenAI": Forming New "Research Team" to Rein in 'Superintelligent AI'. OpenAI Forms Dedicated Team to Manage Risks of Superintelligent AI. OpenAI, a non-profit organization focused on artificial intelligence research and development, has announced the formation of a dedicated team to address the risks associated with superintelligent AI. The team will be co-led by Ilya Sutskever, OpenAI's Chief Scientist, and Jan Leike, the head of alignment at the research lab.
The Concept of Superintelligence. Superintelligence refers to a hypothetical AI model that surpasses the cognitive abilities of even the most intelligent humans and excels in multiple areas of expertise. OpenAI believes that such a superintelligent AI could emerge before the end of the current decade. While superintelligence has the potential to solve critical global problems, OpenAI acknowledges the significant risks associated with its development.
The Potential Impacts and Dangers of Superintelligence. OpenAI recognizes that superintelligence has the potential to be the most impactful technology ever invented and could offer solutions to some of the world's most pressing challenges. However, the immense power of superintelligence also poses substantial risks, including the disempowerment or even extinction of humanity. OpenAI aims to address these risks proactively.
Automated Alignment Researcher and Compute Power Allocation. As part of these efforts, OpenAI plans to dedicate 20 percent of its current compute power to the superintelligence risk-mitigation initiative. The organization aims to develop an automated alignment researcher, a system that would assist OpenAI in ensuring the safety and alignment of superintelligent AI with human values. OpenAI acknowledges the ambitious nature of this goal but remains optimistic about its potential success.
Promising Ideas and Empirical Study. OpenAI emphasizes that numerous ideas have shown promise in preliminary experiments related to the alignment and safety of superintelligent AI, and highlights the availability of increasingly useful metrics for measuring progress in this area. OpenAI intends to leverage existing AI models to empirically study and gain insights into the potential risks and challenges posed by superintelligence.
Transparency and Future Roadmap. OpenAI commits to sharing a roadmap for its research and initiatives regarding superintelligence risk mitigation in the future. This transparency aims to foster collaboration and ensure that the broader AI community can contribute to the development of robust safety measures.
Broader Context: AI Regulation and Immediate Concerns. OpenAI's announcement coincides with ongoing discussions worldwide on how to regulate the burgeoning AI industry. OpenAI's CEO, Sam Altman, has engaged with numerous federal lawmakers in the United States, expressing the organization's eagerness to collaborate with policymakers and highlighting the importance of AI regulation. However, the author of the article suggests skepticism toward initiatives like OpenAI's Superalignment team, arguing that focusing on hypothetical risks may deflect attention from more immediate issues surrounding AI, such as its impact on labor, the spread of misinformation and copyright concerns. The article posits that policymakers should prioritize addressing these pressing concerns rather than delaying regulation in anticipation of future risks.
Key Points: OpenAI forms a dedicated team to manage the risks associated with superintelligent AI. Superintelligence refers to AI models that surpass human cognitive abilities and excel in multiple domains. OpenAI acknowledges the potential benefits of superintelligence but also recognizes the significant dangers and risks involved. The organization plans to allocate 20 percent of its compute power and to develop an automated alignment researcher. OpenAI emphasizes the importance of empirical study and of leveraging existing AI models to address superintelligence risks. OpenAI commits to transparency by sharing a roadmap for its research in the future. The article raises concerns about the potential deflection of attention from immediate AI-related issues by focusing on hypothetical risks. The broader context includes ongoing discussions on AI regulation, with OpenAI expressing its willingness to collaborate with policymakers. Immediate concerns include AI's impact on labor, the spread of misinformation, and copyright issues that require attention from policymakers.
In conclusion, OpenAI's formation of a dedicated team to manage the risks of superintelligent AI highlights the organization's proactive approach to addressing the potential dangers associated with advanced artificial intelligence. While superintelligence holds promise for solving significant global challenges, OpenAI recognizes the need to ensure the alignment and safety of such AI systems with human values. The commitment to dedicating compute power and developing an automated alignment researcher demonstrates OpenAI's seriousness in tackling the complexities of superintelligence, and the emphasis on empirical study and leveraging existing AI models reflects a pragmatic approach to understanding and mitigating potential risks. However, the article raises valid concerns about the need for policymakers to address immediate AI-related issues such as labor impact, misinformation and copyright. Balancing the regulation of current AI technologies with the anticipation of future risks is crucial for a comprehensive and responsible approach to AI governance. As discussions on AI regulation continue, collaboration between organizations like OpenAI and policymakers becomes vital; by working together, it is possible to address both immediate concerns and future risks associated with AI, ultimately promoting the responsible development and deployment of advanced AI systems for the benefit of humanity. |
2023-07-06 14:34:36 |
Overseas TECH |
DEV Community |
Monitoring and Testing Cloud Native APIs with Grafana |
https://dev.to/kubeshop/monitoring-and-testing-cloud-native-apis-with-grafana-36c8
|
Monitoring and Testing Cloud Native APIs with Grafana. Grafana, when combined with distributed tracing, is widely used for troubleshooting and diagnosing problems. What if you could use the data captured in the distributed trace as part of your testing strategy, to prevent errors from reaching production in the first place? By combining Grafana Tempo with Tracetest you can create a robust solution for monitoring and testing APIs with distributed tracing. This tutorial guides you through setting up and using Docker Compose to run Grafana Tempo and Tracetest, enabling effective monitoring and testing of your APIs. See the full code for the example app you'll build in the GitHub repo.
Microservices are Hard to Monitor. I'll use a sample microservice app called Pokeshop to demo distributed tracing and how to forward traces to Grafana Tempo. It consists of these services: a Node.js API (HTTP and gRPC), a Node.js worker, a RabbitMQ queue, a Redis cache and Postgres. I've prepared a docker-compose.yaml file with the Pokeshop services. It defines a postgres service (with POSTGRES_USER/POSTGRES_PASSWORD and a pg_isready healthcheck), a demo-cache service running Redis (with a redis-cli ping healthcheck), a demo-queue service running the RabbitMQ management image (with a rabbitmq-diagnostics healthcheck), and three services built from the kubeshop/demo-pokemon-api image: demo-api, demo-worker and demo-rpc. Each of the app services is configured via environment variables such as REDIS_URL, DATABASE_URL, RABBITMQ_HOST, POKE_API_BASE_URL, COLLECTOR_ENDPOINT (pointing at the OpenTelemetry Collector) and NPM_RUN_COMMAND (api, worker or rpc), and each depends on postgres, demo-cache and demo-queue being healthy.
OpenTelemetry Instrumentation in the Pokeshop Microservice App. The Pokeshop is configured with OpenTelemetry code instrumentation using the official tracing libraries. These libraries capture and propagate distributed traces across the Pokeshop microservice app and are configured to send traces to the OpenTelemetry Collector, which then forwards them to Grafana Tempo, as explained in the following section. By opening tracing.ts you can see how to set up the OpenTelemetry SDKs to instrument your code; it contains all the required modules and helper functions. It imports the OpenTelemetry API, NodeSDK from @opentelemetry/sdk-node, OTLPTraceExporter from @opentelemetry/exporter-trace-otlp-grpc, Resource from @opentelemetry/resources, SemanticResourceAttributes from @opentelemetry/semantic-conventions and SpanStatusCode from @opentelemetry/api, and reads the collector endpoint and the service name ("pokeshop") from environment variables. I'm using an env var for the OpenTelemetry Collector endpoint; the .env file defines DATABASE_URL, REDIS_URL, RABBITMQ_HOST, POKE_API_BASE_URL (https://pokeapi.co/api), COLLECTOR_ENDPOINT, APP_PORT and RPC_PORT. The rest of tracing.ts contains helper methods for creating trace spans: createTracer builds a NodeSDK with the OTLPTraceExporter pointed at COLLECTOR_ENDPOINT, adds a Resource carrying the service name, starts the SDK, registers a SIGTERM handler that shuts the SDK down, and caches and returns a tracer; getTracer returns the cached tracer or creates one; getParentSpan returns the currently active span, if any; createSpan and createSpanFromContext start a new span, optionally as a child of a given parent span or context; and runWithSpan runs a function within a span's context, recording exceptions and setting the span status to ERROR before rethrowing. The file exports getTracer, getParentSpan, createSpan, createSpanFromContext and runWithSpan. (A minimal bootstrap sketch along these lines appears after this entry.)
Monitoring with Grafana Tempo and OpenTelemetry Collector. Grafana Tempo is a powerful solution for monitoring and testing APIs using distributed tracing. Tempo provides a highly scalable, cost-effective and easy-to-use trace data store, optimized for trace visualization with Grafana. With Tempo you can monitor and test your APIs in real time, which allows you to identify potential bottlenecks or performance issues and respond quickly to ensure the reliability and performance of your APIs. In this section you'll configure Grafana Tempo to receive and store traces from the Pokeshop app, with the OpenTelemetry Collector as the main trace receiver and forwarder: the Collector receives traces from the Pokeshop app and forwards them to Grafana Tempo. Lastly, I'll explain how to configure Grafana to read trace data from Tempo.
Adding OpenTelemetry Collector, Tempo and Grafana to Docker Compose. You need to add three more services to docker-compose.yaml: an otel-collector service using the otel/opentelemetry-collector-contrib image with a mounted collector config file; a tempo service using the grafana/tempo:latest image with a mounted tempo-config.yaml, a data volume and its OTLP gRPC/HTTP and query ports exposed; and a grafana service using the grafana/grafana image with a mounted datasource provisioning file and environment variables that enable anonymous admin access (GF_AUTH_ANONYMOUS_ENABLED, GF_AUTH_ANONYMOUS_ORG_ROLE, GF_AUTH_DISABLE_LOGIN_FORM) and the traceqlEditor feature toggle (GF_FEATURE_TOGGLES_ENABLE). For these three services you load three dedicated config files; keep them in the same directory as the docker-compose.yaml file. Let's move on to the configuration.
OpenTelemetry Collector Configuration. The OpenTelemetry Collector is configured via a config file. Configure it to ingest traces on the default HTTP and gRPC ports via the OTLP protocol by creating a file called collector-config.yaml: it declares an otlp receiver with grpc and http protocols, a batch processor, a logging exporter and an otlp/tempo exporter whose endpoint points at the tempo service with TLS set to insecure, plus a traces pipeline wiring the otlp receiver, the batch processor and the otlp/tempo exporter together. The exporter config defines where you'll send traces, in this case Tempo; the Tempo ingestion endpoint uses OTLP as well, on the same port as the OpenTelemetry Collector. Now let's configure Tempo to receive the traces.
Grafana Tempo Configuration. Tempo is configured with a config file. Create another file in the same directory as the docker-compose.yaml called tempo-config.yaml. It disables auth, sets the server's http_listen_port and grpc_listen_port, configures the distributor to receive OTLP over http and grpc, and sets ingester limits (trace idle period, max block bytes, max block duration), compactor settings (compaction window, max compaction objects, block retention, compacted block retention) and local storage paths for the WAL and blocks. The important configs to note are the server and distributor sections: the server defines how to access and query Tempo, and the distributor defines how to ingest traces into Tempo. Use the HTTP listen port to query for traces from Tempo in the Grafana dashboards, and use the gRPC listen port to query for traces from Tracetest when running integration tests. Let's set up Grafana and explore the trace data.
Configuring Grafana Data Sources. For Grafana, the data sources are defined in a config file. Create another file in the same directory as the docker-compose.yaml and name it grafana-config.yaml. It provisions a single data source named Tempo of type tempo, with proxy access, a URL pointing at the tempo service's HTTP port, basicAuth disabled, isDefault set to true, editable set to false and the uid tempo. You can see that the URL field matches the Tempo http_listen_port.
View Traces in Grafana. With Tempo, the OpenTelemetry Collector and Grafana added, restart your Docker Compose stack with docker compose down followed by docker compose up --build. Trigger a simple cURL request to generate a few traces: POST a JSON body containing a Pokemon id to the demo API's /pokemon/import endpoint with the Content-Type: application/json header. Open Grafana on localhost, choose Tempo and the TraceQL tab, and run a query on the span name "POST /pokemon/import". Choose a trace from the results and it will open in the panel on the right. With OpenTelemetry instrumentation and Grafana configuration you can elevate your trace debugging and validation, as well as build integration tests to validate API behavior.
Trace Validation and Integration Testing with Tracetest. Tracetest is an open source project, part of the CNCF landscape. It allows you to quickly build integration and end-to-end tests powered by your distributed traces. Tracetest uses your existing distributed traces to power trace-based testing, with assertions against your trace data at every point of the request transaction. You only need to point Tracetest at your Tempo instance or send traces to Tracetest directly. With Tracetest you can: define tests and assertions against every single microservice a trace goes through; work with your existing distributed tracing solution, building tests on top of your already instrumented system; define multiple transaction triggers, such as a GET against an API endpoint, a gRPC request, etc.; define assertions against both the response and the trace data, ensuring that both your response and the underlying processes worked correctly, quickly and without errors; and save and run the tests manually or via CI build jobs with the Tracetest CLI.
Install and Configure Tracetest. Tracetest runs as a container in your Docker Compose stack, just like Tempo or the OpenTelemetry Collector. Start by adding a tracetest service to the docker-compose.yaml: it uses the kubeshop/tracetest image, bind-mounts a tracetest-config.yaml and a tracetest-provision.yaml into the container, exposes the Tracetest port, passes the provisioning file via the provisioning-file command argument, and depends on postgres being healthy and otel-collector being started. To connect to a Postgres instance, where it stores its test data, Tracetest requires a configuration file; create a tracetest-config.yaml in the same directory as the docker-compose.yaml with the postgres connection settings (host, user, password, port, dbname and sslmode=disable). Connecting Tracetest to Grafana Tempo can be done in the Web UI, but it's just as easy with a provisioning file: create a tracetest-provision.yaml that defines a default PollingProfile (periodic strategy with a retry delay and timeout), a DataStore of type tempo using the gRPC endpoint of the tempo service with insecure TLS, and a Demo resource enabling the pokeshop preset with its HTTP and gRPC endpoints. Remember exposing the gRPC port for Tempo? You're using it here to query for traces with Tracetest when running integration tests. Restart Docker Compose (docker compose down, docker compose up --build), navigate to the Tracetest Web UI on localhost and open the settings: you'll see Tempo selected and the endpoint set to the tempo service. You can also configure this manually without the provision file. The Demo section in the provision file enables a preset of tests against the Pokeshop API for easier test definitions; omitting it has no impact. Let's jump into validating the traces generated by the Pokeshop API.
Validate API Traces Against OpenTelemetry Rules and Standards. The Tracetest Analyzer is the first tool of its kind for analyzing traces. It can validate traces, identify patterns and help fix issues with code instrumentation; it's the easy way to adhere to OpenTelemetry rules and standards and ensure high-quality telemetry data. Let's create a new test in Tracetest and run the Analyzer. To create a test, see the docs or follow these instructions: click Create, click Create New Test, select HTTP Request, add a name for your test, set the URL field to a POST against the demo API's /pokemon/import endpoint, add the Content-Type: application/json header and a JSON request body with the Pokemon id, then click Create and Run. This triggers the test, displays a distributed trace in the Trace tab and runs validations against it, allowing you to validate your OpenTelemetry instrumentation before committing code. All rules and standards you need to adhere to are displayed, so you can see exactly what to improve. Next, when you're happy with the traces, move on to creating test specifications.
Define Test Scenarios with Tracetest. This section covers adding four different test scenarios: validate that all HTTP spans return a 200 status code; validate that a span exists after the RabbitMQ queue, meaning a value has been picked up from it; validate that Redis is using the correct Pokemon id; and validate that Postgres is inserting the correct Pokemon. Opening the Test tab lets you create Test Specs.
Adding Common Test Specs from Snippets. Once you land on the Test tab, you're greeted with test snippets for common test cases. These assertions validate properties of the trace spans the Pokeshop API generates. By default, Tracetest offers snippets for common assertions such as "all HTTP spans return status code 200" and "all database spans return in under a given number of milliseconds". Start by adding a first test spec validating that all HTTP spans return status code 200: click the corresponding snippet, then Save Test Spec, then Save. This case is common and easy to test with traditional tools; running tests on message queues and caches, however, is not. Let's jump into that.
Adding Test Cases for RabbitMQ, Redis and Postgres. Create another test spec by clicking on the import pokemon span and the Add Test Spec button (to learn more about selectors and expressions, check the docs). The selector you need is span[tracetest.span.type="general" name="import pokemon"]. Validating that this span exists at all, via the attr:tracetest.selected_spans.count attribute, validates that the value has been picked up from the RabbitMQ queue. Save the test spec and move on to a spec for Redis. To validate that Redis is using the correct Pokemon ID, compare it to the value returned from Redis: select the Redis span, using a selector on the database span named "get pokemon" with db.system = redis, db.operation = get and the db.redis.database_index attribute, and assert that attr:db.payload contains the expected pokemon key. Lastly, select the Postgres span; here you validate that the value inserted into the Postgres database contains the correct Pokemon name. Create another test spec with the selector span[tracetest.span.type="database" name="create postgres pokemon" db.system="postgres" db.name="postgres" db.user="postgres" db.operation="create" db.sql.table="pokemon"] and the assertion attr:db.result contains "charizard". After all this work you end up with four test specs. This complex test scenario runs an API test with specs against trace data and gives you deep assertion capabilities for microservices and async processes that are incredibly difficult to test with legacy testing tools. With the test scenarios laid out, let's automate.
Run Automated Tests with Tracetest. Tracetest is designed to work with all CI/CD platforms and automation tools. To enable Tracetest to run in CI/CD environments, install the Tracetest CLI and configure it to access your Tracetest server. Installing the CLI is a single command: brew install kubeshop/tracetest/tracetest (check the download page for installing on Linux or Windows, and the official documentation for installing the Tracetest server in your existing Kubernetes or Docker infrastructure). Configuring the CLI is one more command: tracetest configure with the endpoint flag pointing at your Tracetest server on localhost. Make sure the Docker Compose stack is running before configuring the CLI. You're ready to run automated tests.
Create a Tracetest Test Definition. First you need a test definition. In the Tracetest Web UI, open the test you created, click the gear icon in the top right and then the Test Definition button. Download the file and give it a name; I'll call it test.yaml, because reasons. The definition has type Test and contains an id, the name "Pokeshop Import", the description "Import a Pokemon", an HTTP trigger (a POST to the demo API's /pokemon/import endpoint with the Content-Type: application/json header and a JSON body containing the Pokemon id), and the four test specs: "Import Pokemon Span Exists" (the general import pokemon selector with the selected-spans count assertion), "Matching db result with the Pokemon Name" (the Postgres create span selector asserting attr:db.result contains "charizard"), "Uses correct Pokemon ID" (the Redis get span selector with the db.payload assertion) and "All HTTP Spans: Status code is 200" (the HTTP span selector asserting on attr:http.status_code). This test definition contains the HTTP trigger and the test specs for the API test. If you wanted to, you could have written this entire test in YAML right away.
Run a Tracetest Test with the CLI. Once you've saved the file, trigger the test with the CLI by running tracetest test run with the definition flag pointing at test.yaml and the wait-for-result flag. The output shows the "Pokeshop Import" test, a URL to the run on your Tracetest server, and a passing check for each spec: Import Pokemon Span Exists, Matching db result with the Pokemon Name, Uses correct Pokemon ID, and All HTTP Spans: Status code is 200. You can access the test run by following the URL in the test response. To automate this behavior, specify a list of test definitions and run them with the CLI in your preferred CI/CD platform. Alternatively, you don't need to download the CLI in your CI/CD platform; instead, use the official Tracetest Docker image that comes with the CLI installed. Here's a list of guides we've compiled in the docs: GitHub Actions, Testkube, Tekton.
Analyze Test Results. You have successfully configured both Grafana and Tracetest. By enabling distributed tracing and trace-based testing, you can now monitor test executions and analyze the captured traces to gain insights into your API's performance and identify issues. Use Grafana Tempo's querying capabilities to filter and search for specific traces based on attributes like service name, operation name or tags; this helps you pinpoint the root cause of performance degradation or errors. Leverage Grafana's rich visualization capabilities to create dashboards and charts that track the performance and health of your APIs over time. Use Tracetest to turn existing distributed traces into trace-based tests, defining tests and assertions against every microservice a trace goes through; together with Grafana Tempo you can assert on both the response and the trace data, ensuring that both your response and the underlying processes work as expected. Finally, save and run the tests in CI pipelines with the Tracetest CLI to enable automation.
How Grafana Works with Tracetest to Enhance Observability. In conclusion, by combining Grafana Tempo with Tracetest you can effectively monitor and test your APIs using distributed tracing. This tutorial has provided a step-by-step guide to setting up and using these tools, enabling you to ensure the reliability, performance and scalability of your APIs in complex distributed systems. Do you want to learn more about Tracetest and what it brings to the table? Check the docs and try it out by downloading it today. Also, please feel free to join our Discord community, give Tracetest a star on GitHub, or schedule a time to chat. |
2023-07-06 14:01:51 |
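As a rough illustration of the OpenTelemetry setup described in the Grafana/Tracetest entry above, here is a minimal TypeScript bootstrap sketch. It assumes the standard @opentelemetry/sdk-node, @opentelemetry/exporter-trace-otlp-grpc, @opentelemetry/resources, @opentelemetry/semantic-conventions and @opentelemetry/api packages; the default endpoint, service name and span name are placeholders rather than the article's exact values, and option names can vary between SDK versions.

```typescript
// Minimal OpenTelemetry tracing bootstrap for a Node.js service (illustrative sketch).
// Whether sdk.start() returns a Promise differs between SDK versions.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';
import { trace } from '@opentelemetry/api';

// Placeholder default; the article reads COLLECTOR_ENDPOINT from a .env file.
const COLLECTOR_ENDPOINT = process.env.COLLECTOR_ENDPOINT ?? 'http://localhost:4317';
const SERVICE_NAME = process.env.SERVICE_NAME ?? 'pokeshop';

const sdk = new NodeSDK({
  // Export spans over OTLP/gRPC to the collector, which forwards them to Tempo.
  traceExporter: new OTLPTraceExporter({ url: COLLECTOR_ENDPOINT }),
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: SERVICE_NAME,
  }),
});

sdk.start();

// Wrap a unit of work in a span so it shows up in Tempo and in Tracetest assertions.
const tracer = trace.getTracer(SERVICE_NAME);
tracer.startActiveSpan('import pokemon', (span) => {
  try {
    // ... business logic, e.g. fetch a Pokemon and queue it for import ...
  } finally {
    span.end();
  }
});

// Flush and shut down cleanly when the container is stopped.
process.on('SIGTERM', () => {
  sdk.shutdown().finally(() => process.exit(0));
});
```

With the collector from the article's Docker Compose stack running, spans created this way should be queryable in Tempo from Grafana's TraceQL tab.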
Apple |
AppleInsider - Frontpage News |
Apple TV+ 'Strange Planet' animated series gets debut date |
https://appleinsider.com/articles/23/07/06/apple-tv-strange-planet-animated-series-gets-debut-date?utm_medium=rss
|
Apple TV+ 'Strange Planet' animated series gets debut date. The much-awaited Apple TV+ adaptation of the webcomic Strange Planet will begin streaming in August. The TV version, announced earlier, is co-created by Strange Planet comic author Nathan W. Pyle and Community creator Dan Harmon. "Based on the New York Times best-selling graphic novel and social media phenomenon of the same name, Strange Planet is a hilarious and perceptive look at a distant world not unlike our own," says Apple. "Set in a whimsical world of cotton-candy pinks and purples, relatable blue beings explore the absurdity of everyday human traditions." |
2023-07-06 14:31:01 |
Overseas Science |
NYT > Science |
Frank Field, Who Brought Expertise to TV Weathercasting, Dies at 100 |
https://www.nytimes.com/2023/07/02/business/media/frank-field-dead.html
|
Frank Field, Who Brought Expertise to TV Weathercasting, Dies at 100. The first meteorologist to forecast the weather on New York television, he later became known for, among other things, publicizing the Heimlich maneuver. |
2023-07-06 14:52:40 |
News |
BBC News - Home |
Elle Edwards: Connor Chapman guilty of Christmas Eve pub murder |
https://www.bbc.co.uk/news/uk-england-merseyside-66108449?at_medium=RSS&at_campaign=KARANGA
|
chapman |
2023-07-06 14:57:49 |
News |
BBC News - Home |
Stephen Lawrence case: Retired detectives will not face prosecution over inquiry |
https://www.bbc.co.uk/news/uk-66118651?at_medium=RSS&at_campaign=KARANGA
|
lawrence |
2023-07-06 14:56:36 |
News |
BBC News - Home |
Wagner boss Prigozhin is in Russia, Belarus ruler Lukashenko says |
https://www.bbc.co.uk/news/world-europe-66118007?at_medium=RSS&at_campaign=KARANGA
|
petersburg |
2023-07-06 14:47:57 |
News |
BBC News - Home |
Cardiff: Funeral held for Ely boys whose deaths sparked riot |
https://www.bbc.co.uk/news/uk-wales-66118703?at_medium=RSS&at_campaign=KARANGA
|
capacity |
2023-07-06 14:18:02 |
News |
BBC News - Home |
Mothers could have missed out on £1bn in state pension |
https://www.bbc.co.uk/news/business-66124840?at_medium=RSS&at_campaign=KARANGA
|
pension |
2023-07-06 14:13:08 |
News |
BBC News - Home |
Trans charity Mermaids loses challenge against LGB Alliance |
https://www.bbc.co.uk/news/uk-65340857?at_medium=RSS&at_campaign=KARANGA
|
charity |
2023-07-06 14:14:27 |
News |
BBC News - Home |
Elle Edwards: 'My daughter's murderer can rot in hell' |
https://www.bbc.co.uk/news/uk-england-merseyside-66088153?at_medium=RSS&at_campaign=KARANGA
|
edwards |
2023-07-06 14:34:44 |
News |
BBC News - Home |
Star Wars studios at risk from asbestos and ‘dangerous’ roofs |
https://www.bbc.co.uk/news/uk-england-beds-bucks-herts-66121521?at_medium=RSS&at_campaign=KARANGA
|
studios |
2023-07-06 14:32:31 |
News |
BBC News - Home |
The Ashes: Australia's Mitchell Marsh hits century - best shots |
https://www.bbc.co.uk/sport/av/cricket/66120900?at_medium=RSS&at_campaign=KARANGA
|
headingley |
2023-07-06 14:55:21 |
News |
BBC News - Home |
Netherlands beat Scotland to reach Cricket World Cup on net run rate |
https://www.bbc.co.uk/sport/cricket/66121958?at_medium=RSS&at_campaign=KARANGA
|
bulawayo |
2023-07-06 14:41:33 |