Posted: 2023-08-22 23:19:32 | RSS feed digest for 2023-08-22 23:00 (27 items)

Category | Site | Article title / trending words | Link URL | Frequent words / summary (search volume) | Registered
IT 気になる、記になる… "Pixel Watch 2" passes certification bodies in Singapore and India, following the FCC https://taisy0.com/2023/08/22/175645.html google 2023-08-22 13:36:15
IT 気になる、記になる… Deff announces the "Airput", a vacuum-suction smartphone holder with built-in wireless charging; on sale via Makuake from August 29 https://taisy0.com/2023/08/22/175642.html airput 2023-08-22 13:15:07
golang New posts tagged Go - Qiita [Go] Methods https://qiita.com/hasesiu/items/8c1feaffcb4ff86a0841 important 2023-08-22 22:47:01
Git New posts tagged Git - Qiita NOEOL? No Newline at End of File? https://qiita.com/misaki_soeda/items/cbef42d93087661bf25c noeolnonewlineatendoffile 2023-08-22 22:17:16
Overseas TECH MakeUseOf Hisense 65U8K: Flagship Specs for $1000 https://www.makeuseof.com/hisense-65u8k-review/ design 2023-08-22 13:36:27
Overseas TECH MakeUseOf 8 Security Measures Gmail Uses to Keep You Safe https://www.makeuseof.com/security-measures-gmail-uses-to-keep-you-safe/ email 2023-08-22 13:30:25
Overseas TECH MakeUseOf How to Fix a USB Device That Keeps Disconnecting & Reconnecting in Windows 10 https://www.makeuseof.com/how-to-fix-usb-device-disconnecting-reconnecting-windows-10/ windows 2023-08-22 13:00:26
Overseas TECH MakeUseOf How to Use GOOGLEFINANCE to Track Stocks in Google Sheets https://www.makeuseof.com/track-stocks-in-google-sheets/ sheets 2023-08-22 13:00:25
Overseas TECH DEV Community HubSpot's Chatbot versus AINIRO's Chatbot https://dev.to/polterguy/hubspots-chatbot-versus-ainiros-chatbot-4h85 HubSpot's Chatbot versus AINIRO's Chatbot

Do me a favour and go to HubSpot and ask their chatbot anything. Then try our alternative version below: AINIRO's HubSpot Chatbot.

Notice: I created the above chatbot in minutes by punching HubSpot's URL into our website scraper, which automatically created a custom chatbot using HubSpot's website as its single source of truth, and then created a copy of their website and embedded our chatbot on the copy. A real production-grade chatbot from AINIRO will obviously be many times higher quality. Still, I suspect most people will agree with me that our chatbot is already running circles around HubSpot's chatbot. Below are some example questions and answers our chatbot gave me. You are of course more than welcome to reproduce my test.

"What is HubSpot?" Notice how our chatbot is actively trying to sell HubSpot's services, using powerful words such as "leading", "grow better", and "designed to scale", in an attempt to make the recipient believe it's a great product. Our chatbot also displays images to strengthen its message. Below is the same answer as provided by HubSpot's chatbot. I'm obviously biased, but I find the HubSpot chatbot's answer dull, without any passion, and incapable of convincing me of their superiority as a CRM vendor.

"What advantages does HubSpot's CRM have?" Notice the numbered list. People are more likely to remember lists of advantages such as the above. In addition, the chatbot provides relevant emojis and selling words such as "Unified Platform", "growth", and "Easy to Use". If I scroll down, I will even find a relevant image further emphasizing the sales message. I want to emphasize that there exists research on the subject proving that a single product image increases sales by 2x. Implying that, ignoring everything else, our chatbot would purely scientifically already have sold twice as much product for HubSpot as their own chatbot. Below is the HubSpot chatbot's answer. As you can see, it completely gave up, probably interpreting my question as a potential lead, and tried to connect me with a sales executive. For the record, I waited for minutes and no sales rep showed up, probably because nobody in sales at HubSpot cares anymore after having realized how many of the leads the chatbot generates are false positives.

Stuff ONLY our chatbot gave me: Since HubSpot's chatbot completely gave up at this point, probably ashamed of its own lack of quality, I'll just show you some more screenshots of what our chatbot was able to tell me about HubSpot. Yet again, I want to emphasize that I created this chatbot in minutes, and I didn't apply any customization to it at ALL. I love the image, in addition to the "bringing together" part; it creates a lot of trust and makes me believe in the product. Let's check how it works in regards to tempting people to apply for a job at HubSpot. Wow, I'm ready to apply. I'm speechless. I particularly loved the little girl on her father's shoulders. I'm sure some of those reading this article are already applying for a job at HubSpot. And of course, as I ask it if there are any vacant positions, our chatbot literally leads me directly to their application pages, from where I can apply for jobs in specific regions of the world. Click the link and VOILA, you're applying for a job at HubSpot. Now imagine this was a product you were trying to sell on your website.

Conclusion: Tage will hate me for saying this. He believes we don't have to talk negatively about our competitors in order to succeed, but I simply have to. The lack of quality in HubSpot's chatbot, combined with the fact that they actually put it into production AND are offering it to their clients for free, forces my hand, you might argue. And my conclusion about HubSpot's Chatbot is as follows: HubSpot's Chatbot is junkware and a flaming hot pile of garbage, and I wouldn't use it myself if they PAID me to use it. A qualified guess is that our chatbot would automate orders of magnitude more customer service requests than HubSpot's chatbot, and probably sell equally much more product for you. But then again, HubSpot's chatbot has ONE advantage we don't have, which is as follows: it's free.

ROI: I happen to know the history of their chatbot. Apparently their CTO created it himself. I saw him working on it still in March, and he probably started on it in December, implying we're talking about man-months' worth of work here. Being the CTO of HubSpot, a class A SaaS provider with thousands of employees all over the world, he's probably earning at least some amount of € per year, probably more; divided by 12, that becomes a monthly cost in €. Our professional plan starts at a monthly price. A company like HubSpot would probably have been able to use our professional plan for a single chatbot, maybe the enterprise plan if they wanted customization. Dividing his cost by our price gives a number of months; divide further by 12 and you end up with years. That means that, ignoring all the lost sales, the frustrated users turning their backs on the product, etc., HubSpot will have a return on their investment (ROI) years from now, compared to simply buying our stuff. If HubSpot's CEO or CTO is interested in initiating talks with us to see how A REAL CHATBOT IS BUILT, they are more than welcome to contact us below. Contact us. 2023-08-22 13:53:34
Overseas TECH DEV Community Top 7 Featured DEV Posts from the Past Week https://dev.to/devteam/top-7-featured-dev-posts-from-the-past-week-10nm Top 7 Featured DEV Posts from the Past Week

Every Tuesday we round up the previous week's top posts based on traffic, engagement, and a hint of editorial curation. The typical week starts on Monday and ends on Sunday, but don't worry, we take into account posts published later in the week.

Getting Started in a New Codebase: Whether contributing to open source or starting a new job, the first step is familiarizing yourself with the codebase, which can be daunting. Here are some tips from abbeyperini to help you hit the ground running. (Getting Started in a New Codebase, Abbey Perini, Aug: beginners, programming, webdev, softwaredevelopment)

Getting Started With SCSS, The CSS Preprocessor With Superpowers: Have you ever felt there should be a way to make writing CSS easier and faster? This is where SCSS (Sassy CSS) comes in, and classicthedemigod will teach you all about it. (React Props: A Visual Guide, Reed Barger, Aug: react, webdev, beginners, javascript)

WebAssembly: Byte Code of the Future: JS, love it or hate it, it's here to stay. But it would be good if browsers supported more programming languages. joshnuss is here to share that this is the promise of WebAssembly: providing a generic runtime to which any programming language can compile. (WebAssembly: byte code of the future, Joshua Nussbaum, Aug: javascript, webassembly, webdev)

How to Dockerize a React Application: Docker keeps everything you need to run your app in one place, so that the containerized image file can be run in any environment. All you need is a React project and the desktop Docker app, and ayesh nipun will show you how to dockerize it in just a few simple steps. (How to Dockerize a React Application, Ayesh Nipun, Aug: react, docker, webdev, javascript)

I Was Tired of Langchain and Created My Own Wrapper: Every programmer who wants to build a production-ready LLM application inevitably stumbles upon certain libraries, such as LangChain. In this post, zakharsmirnoff shares the most recent development on their tiny wrapper for the OpenAI API. (I was tired of Langchain and created my own wrapper, Zakhar Smirnoff, Aug: ai, gpt, python, opensource)

How To Make an Impact as a Developer Advocate: Developer Advocates are influencers. Not in the sense that they're on Instagram taking pictures, but they influence the developer community and their companies. Here's blackgirlbytes with more on the role of developer advocates in the tech industry. (How to make an impact as a developer advocate, Rizèl Scarlett, Aug: career, devrel, leadership, discuss)

Why You Should Make a Game Engine: 5 Years as a Developer: Five years ago, lkatkus made the switch from being an architect to becoming a software developer. In this post, Laimonas shares their journey of creating a game engine and progressing as a developer. (Why You Should Make a Game Engine: 5 Years as a Developer, Laimonas K, Aug: javascript, webgl, frontend, webdev)

That's it for our weekly Top 7 for this Tuesday. Keep an eye on dev.to this week for daily content and discussions, and be sure to keep an eye on this series in the future. You might just be in it. 2023-08-22 13:51:02
Overseas TECH DEV Community Track AWS IAM changes in Git https://dev.to/castrapel/track-aws-iam-changes-in-git-1mn5 Track AWS IAM changes in Git

For DevOps and CloudSec professionals navigating the complexities of AWS IAM configurations, our recent blog post shines a light on how IAMbic keeps track of all IAM changes within version control, now with CloudTrail attribution. From Terraform to direct AWS Console adjustments, every change is documented in your Git history, regardless of how the change was made. Full details in our latest blog post. 2023-08-22 13:38:01
Overseas TECH DEV Community Day 31: Async Await https://dev.to/dhrn/day-31-async-await-2934 Day 31: Async Await

The introduction of async and await in ECMAScript 2017 (ES8) revolutionized how developers deal with asynchronous tasks, making code more readable and maintainable. Before async and await, developers relied on callbacks and promises to manage asynchronous operations. Callbacks led to the infamous "callback hell", while promises improved code readability. However, promises still required handling then and catch chains, which could get complex in more intricate scenarios.

Enter async/await. async and await are built on top of promises and offer a more elegant and intuitive way to work with asynchronous code. async is used to declare an asynchronous function, and within that function you can use await to pause execution until a promise is resolved:

```javascript
async function fetchData(url) {
    try {
        const response = await fetch(url)
        const data = await response.json()
        return data
    } catch (error) {
        console.error('Error fetching data:', error)
        throw error
    }
}

// Usage
fetchData(url)
    .then((result) => console.log(result))
    .catch((error) => console.error(error))
```

Benefits of async/await:
- Readability: asynchronous code reads almost like synchronous code, making it easier for developers to understand the flow of execution.
- Error handling: with try/catch blocks, error handling becomes more natural and localized within the function, improving debugging.
- Sequential code: await allows you to write sequential code even when dealing with asynchronous tasks, enhancing the logical structure of your program.
- Promise chaining reduction: async/await eliminates long chains of then and catch calls, leading to cleaner code.
- Concurrent asynchronous calls: you can use Promise.all with await to perform multiple asynchronous operations concurrently:

```javascript
async function fetchMultipleData(urls) {
    const promises = urls.map((url) => fetchData(url))
    const results = await Promise.all(promises)
    return results
}

// Usage
fetchMultipleData(urls).then((results) => console.log(results))
```

Potential pitfalls and best practices:
- Not always necessary: not every function needs to be async. Only use it when you need to pause the function's execution to wait for a promise to resolve.
- Avoid blocking: although await can block the function, it doesn't block the whole thread, ensuring other tasks can still be executed.
- Unhandled promise rejections: ensure you have appropriate error handling in place, using try/catch or .catch(), to prevent unhandled promise rejections.
- Sequential vs parallel: be mindful of whether tasks should be executed sequentially or in parallel, and structure your code accordingly (see the sketch below). 2023-08-22 13:30:00
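To illustrate that last point, here is a minimal sketch (not from the original article) contrasting sequential and parallel awaits; fetchData is the helper defined above, and the example URLs are placeholders:

```javascript
// Sequential: the second request only starts after the first finishes (~t1 + t2)
async function loadSequentially() {
    const users = await fetchData('/api/users')
    const posts = await fetchData('/api/posts')
    return { users, posts }
}

// Parallel: both requests start immediately and are awaited together (~max(t1, t2))
async function loadInParallel() {
    const [users, posts] = await Promise.all([
        fetchData('/api/users'),
        fetchData('/api/posts'),
    ])
    return { users, posts }
}
```

Use the sequential form only when one request depends on the result of the other.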
Overseas TECH DEV Community Amazon S3 - Web Based Upload Object with POST request and Presigned URL in Python Boto3 https://dev.to/timetxt/amazon-s3-web-based-upload-object-with-post-request-and-presigned-url-in-python-boto3-5be5 Amazon S3: Web-Based Upload Object with POST Request and Presigned URL in Python Boto3

In this article I will show you how to generate an S3 presigned URL for an HTTP POST request with the AWS SDK for Python (Boto3). The unique part of this article is that I will show you how to apply server-side encryption with a KMS key, tag objects, update object metadata, and more with an S3 presigned URL for HTTP POST.

When using S3, there is a scenario about browser-based uploads using HTTP POST, but it requires calculating an AWS SigV4 signature. Instead of calculating the signature with your own code, you can use the AWS Boto3 SDK method generate_presigned_post to generate the S3 presigned URL. This not only saves you time debugging "Signature Mismatch" errors in your own code; you also don't have to figure out which crypto modules your code needs to generate the right signature. It is all handled by the AWS SDK.

For example: you own an S3 bucket in your account. One of your customers runs a business that allows their users to upload images, and the images will be uploaded from the customer's website directly into your S3 bucket. The customer is not familiar with the Amazon S3 service and does not own an AWS account, so you need to provide the customer an easy method for uploading objects from their website directly into your S3 bucket. At the same time, you don't need to make your bucket public for uploading objects. This is where the S3 presigned URL is needed. You can generate the S3 presigned URL for HTTP POST from an AWS Lambda function, with the benefits above, and provide it to your customer to integrate into their website.

But you might ask: why not use an S3 presigned URL for the PutObject API call? An S3 presigned URL for HTTP POST (browser-based uploads) provides a unique feature: you can define "starts-with" conditions in the policy, so you and your customers both have some control over the requirements of the uploaded objects. For example, if you only want your customer to upload text files, you can use a starts-with condition to require that the value of Content-Type starts with "plain" in the upload request (the Content-Type request header is set when a file is uploaded from your customer's website using your S3 presigned URL):

["starts-with", "$Content-Type", "plain"]

The documentation of the AWS SDK for Boto3 does not share much about how to use the Fields and Conditions parameters of generate_presigned_post. It took me some time to figure it out, so I added my understanding to the code example; I hope it saves you time in your development. Here is the Python code example. Before you test it, you will need to update the constants to match your resources (some concrete values, such as the region suffix, the max-age value, and the expiration time, were elided in the feed text and are filled with placeholders here):

```python
import boto3
import requests
from botocore.config import Config

ACCESS_KEY = 'AKIAIOSFODNN7EXAMPLE'
SECRET_ACCESS_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
BUCKET_NAME = 'example-bucket-name'
OBJECT_NAME = 'example-key-name'
REGION_LOCATION = 'ap-southeast-1'  # region number elided in the source
KMS_KEY_ARN = 'arn:aws:kms:<region>:<account-id>:key/<key-id>'
EXPIRATION_TIME = 3600  # seconds; exact value elided in the source
TEST_FILE_NAME = '/absolute/path/to/local/file'

my_config = Config(
    region_name=REGION_LOCATION,
    signature_version='s3v4',
    retries={'max_attempts': 10, 'mode': 'standard'},
)

# Define the S3 client in the chosen region
s3 = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_ACCESS_KEY,
    config=my_config,
)

TAGGING_XML = (
    '<Tagging><TagSet><Tag><Key>type</Key><Value>test</Value></Tag></TagSet></Tagging>'
)

fields = {
    'tagging': TAGGING_XML,
    'x-amz-storage-class': 'STANDARD_IA',
    'Cache-Control': 'max-age=3600',
    'success_action_status': '201',
    'x-amz-server-side-encryption': 'aws:kms',
    'x-amz-server-side-encryption-aws-kms-key-id': KMS_KEY_ARN,
    'x-amz-server-side-encryption-bucket-key-enabled': 'True',
    'acl': 'public-read',
}

conditions = [
    {'x-amz-storage-class': 'STANDARD_IA'},
    ['starts-with', '$Content-Type', 'plain'],
    {'tagging': TAGGING_XML},
    {'Cache-Control': 'max-age=3600'},
    {'success_action_status': '201'},
    {'x-amz-server-side-encryption': 'aws:kms'},
    {'x-amz-server-side-encryption-aws-kms-key-id': KMS_KEY_ARN},
    {'acl': 'public-read'},
    {'x-amz-server-side-encryption-bucket-key-enabled': 'True'},
]

# Generate the S3 presigned URL for an HTTP POST request
response_presigned_url_post = s3.generate_presigned_post(
    BUCKET_NAME,
    OBJECT_NAME,
    Fields=fields,
    Conditions=conditions,
    ExpiresIn=EXPIRATION_TIME,
)
print(response_presigned_url_post)

# User requests: post a test upload with the URL
post_fields = response_presigned_url_post['fields']

# With the first line you will see an error (Content-Type does not start
# with 'plain'); comment it and uncomment the second line to succeed
post_fields['Content-Type'] = 'application/octet-stream'
# post_fields['Content-Type'] = 'plain/text'

# The 'file' key must be the last key in the files parameter form; this is
# defined in the AWS documentation for POST Object request form fields
post_fields['file'] = open(TEST_FILE_NAME, 'rb')
print(post_fields)

# Making the POST request
response_post_request = requests.post(
    response_presigned_url_post['url'], files=post_fields
)

# By default the success status code is 204; success_action_status changes it (here to 201)
print(f'Response Status of POST request with S3 Presigned URL: {response_post_request.status_code}')
print(f'Response Headers of POST request with S3 Presigned URL: {response_post_request.headers}')
print(f'Response Body of POST request with S3 Presigned URL: {response_post_request.text}')
```
2023-08-22 13:21:41
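As a sketch of the browser side (an assumption of ours, not part of the original article), the presigned POST response returned by the Boto3 code above (shape: { url, fields }) could be used from the customer's website with FormData; the signed fields must be appended before the file:

```javascript
// Hypothetical browser-side upload using the presigned POST data
async function uploadWithPresignedPost(presigned, file) {
    const formData = new FormData()
    // Append all signed fields first
    Object.entries(presigned.fields).forEach(([key, value]) => {
        formData.append(key, value)
    })
    // The policy above requires Content-Type to start with 'plain'
    formData.append('Content-Type', 'plain/text')
    // The file must be the last field in the form
    formData.append('file', file)

    const response = await fetch(presigned.url, { method: 'POST', body: formData })
    if (!response.ok) {
        throw new Error(`Upload failed with status ${response.status}`)
    }
    return response
}
```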
Overseas TECH DEV Community A deep-dive on a Progressive Web App implementation for a React-based App Platform (DHIS2) https://dev.to/kaivandivier/a-deep-dive-on-a-progressive-web-app-implementation-for-a-react-based-app-platform-dhis2-1bn6 A deep-dive on a Progressive Web App implementation for a React-based App Platform (DHIS2)

At DHIS2, we're a remote-first team of developers building the world's largest health information management system. DHIS2 is a free and open-source global public good developed at the University of Oslo. It is used in countries all around the world and serves as the national health information system in many of them. It is a general-purpose data collection and analytics platform used to manage routine health service delivery, as well as interventions targeting COVID-19, malaria, HIV/AIDS, tuberculosis, maternal and child health, and more. Our tech stack includes a Postgres database, a Java server (usually deployed on premise), a native Android app, and many React-based web applications.

To support the many web applications maintained by our team, as well as those developed by a growing community of developers around the world, we provide a suite of build tools and common application infrastructure we call the App Platform. We are excited about the recent release of Progressive Web App (PWA) features in our App Platform, which you can read about in the blog post introducing them, and we think we have some interesting stories to share about their development. We faced interesting design challenges as we sought to make these features easily generalizable to any app, and the ways we used available technologies to solve those challenges are quite unique. The purpose of this post is to share our novel approach to managing service worker lifecycles and other PWA functionality in a generic way.

Contents:
- DHIS2 App Platform: the App Platform at build time; the App Platform at run time; the App Platform orchestra
- Into Progressive Web Apps (PWA): adding installability; adding simple offline capability; creating a service worker script to perform offline caching; compiling the service worker and adding it to the app; using a config option to enable PWA features; managing the service worker's updates and lifecycle; designing a good user experience for updating PWA apps; implementation of the app update flow; registration of the service worker; automatically applying app updates when possible; providing the UI for manually applying updates; handling precached static assets between versions; adding a kill switch for a rogue service worker
- Conclusion

Let's start, then, with some necessary context about how our App Platform works.

DHIS2 App Platform

DHIS2 is used in many different countries and in many different contexts. Each DHIS2 instance has specific requirements, use cases, and user experience workflows. We wanted to make it as easy as possible for developers in other organizations to extend the core functionality of DHIS2 by creating their own web applications (among other types of extensions), and also to share those apps with other implementers on our App Hub. We also wanted to make our own lives easier when creating and maintaining the many web applications developed by our core developer team. Enter the App Platform.

The App Platform is a unified application architecture and build pipeline that simplifies and standardizes application development within the DHIS2 ecosystem. The platform provides many common services and functionalities, including authentication and authorization, translation infrastructure, common UI components, and a data access layer, all of which are required by all DHIS2 web applications, making it easier and faster to develop custom applications without reinventing the wheel. (Some features in this image are works in progress.)

The App Platform at build time

The App Platform consists of a number of build-time components and development tools that you can find in our app-platform repository:

- App Adapter: a wrapper for the app under development; it wraps the root component exported from the app's entry point (like <App>) and performs other jobs.
- App Shell: provides the HTML skeleton for the app and other assets, imports the root <App> component from the app-under-development's entry point, and wraps it with the App Adapter. It also provides some environment variables to the app.
- App Scripts CLI: provides development tools and performs build-time jobs, such as building the app itself and running a development server (also part of the d2 global CLI).

The App Platform at run time

At run time, our platform offers React components and hooks that provide services to the app under development. These are mainly two libraries.

The App Runtime library uses a universal <Provider> component to provide context and support several useful services (the App Adapter adds the provider to apps using the platform by default). The services include:
- Data Service: publishes a declarative API for sending and receiving data to and from the DHIS2 back end.
- Config Service: exposes several app configuration parameters.
- Alerts Service: provides a declarative API for showing and hiding in-app alerts; it coordinates with an Alerts Manager component in the App Adapter to show the UI.

A UI Library offers reusable interface components that implement the DHIS2 design system. See more at the UI documentation and the ui repository.

The App Platform orchestra

To illustrate how the App Adapter, App Shell, and App Scripts CLI work together, consider this series of events that takes place when you initialize and build an app:

1. Using the d2 global CLI, a new Platform app is bootstrapped using `d2 app scripts init new-app` in the terminal.
2. Inside the new-app directory that the above script just created, the `yarn build` command is run, which in turn runs `d2-app-scripts build`, initiating the following steps. (Any directory or file paths described below are relative to new-app; i18n jobs are also executed, but they're out of scope for this post.)
3. The build script bootstraps a new App Shell in the .d2/shell directory.
4. A web app manifest is generated.
5. The app code written in src is transpiled and copied into the .d2/shell/src/D2App directory.
6. Inside the Shell at this stage, the files are set up so that the root component exported from the entry point in the app under development (<App> from src/App.js by default, now copied into .d2/shell/src/D2App/App.js) is imported by a file in the App Shell that wraps it with the App Adapter, and then the wrapped app gets rendered into an anchor node in the DOM.
7. The shell-encapsulated app that's now set up in the .d2/shell directory is basically a Create React App app, and react-scripts can be used to compile a minified production build.
8. The react-scripts build script is run, and the build is output to the build/app directory in the app root. A zipped bundle of the app is also created and output to build/bundle, which can be uploaded to a DHIS2 instance.

This example will be useful to refer back to when reading about the build process later in this article. Some details of this process may change as we improve our build tooling, but this is the current design as of writing.
To contextualize and preview the sections to come, here are the extensions we make to this process to add PWA features to the App Platform:
- We add a service worker script to the App Shell that's bootstrapped in step 3.
- We generate a PWA manifest alongside the web app manifest in step 4.
- We extend the App Adapter in step 6 to support several client-side PWA features.
- The service worker script in the App Shell gets compiled and added to the built app during step 8.

Into Progressive Web Apps (PWA)

Now that you have some background on our app architecture and platform, let's talk about our implementation of Progressive Web App (PWA) technology and how it presented several design challenges as we developed it to be generalizable to any app. We wanted our App Platform-based web apps to support two defining features which are core to PWAs:
- Installability, which means the app can be downloaded to a device and run like a native app; and
- Offline capability, meaning the app can support most or all of its features while the device is offline. This works both when the app is opened in a browser and as an installed app.

Adding PWA features, especially offline capability, in the DHIS2 App Platform is a large task: implementing PWA features can be complex enough in a single app, with some aspects being famously tricky. On top of that, we have some other unique design criteria that add complexity to our project:
- The features should work in, and be easy to add to, any Platform app.
- They should provide tools that any app can use for managing caching of individual content sections. We call these tools Cacheable Sections and intend for them to support our Dashboard app's use case of saving individual dashboards for offline usage.
- They should not cause side effects for apps that don't use the PWA features.

For now, we'll cover installability and simple offline capability in this post. Cacheable Sections are introduced in our PWA intro blog, but since they are more complex and face numerous particular design challenges, they will be described in another deep-dive post. Stay tuned to the DHIS2 developer's blog.

Adding installability

This is the simplest PWA feature to add: all that's needed is a PWA web manifest file, which adds metadata about the web app so that it can be installed on a device, and then a link to it from the app's index.html file, like so:

```html
<link rel="manifest" crossorigin="use-credentials" href="%PUBLIC_URL%/manifest.json" />
```

In the App Platform, this is implemented by extending the manifest-generation step of the App Scripts CLI build script (step 4 in the example build sequence above). The script accesses the app's config from d2.config.js and generates a manifest.json file with the appropriate app metadata, including name, description, icons, and theme colors, then writes that manifest.json to the resulting app's public directory, which would be .d2/shell/public. You can take a peek at the manifest-generation source code in the App Scripts CLI. The App Shell package contains the index.html file that the app will use, so that's where the link to the manifest.json file is added.

All Platform apps generate a PWA web manifest, even if PWA is not enabled, but this alone will not make the app installable: a service worker with a fetch handler must be registered too, which is rather complex and described below.

Adding simple offline capability

Basic offline capability is added to the platform by adding a service worker to the app. A service worker is a script that installs and runs alongside the app and has access to the app's network traffic: it listens to fetch events from the app, then handles what to do with the requests and responses it receives. The service worker can maintain offline caches with data that the app uses. Then, when the user's device is offline and the app makes a fetch event to request data, the service worker can use the offline cache to respond to the request, instead of needing to fetch that data over the network. This allows the app to work offline. You can read more about the basics of service workers elsewhere; the following sections assume some knowledge of how they work.

Implementing the service worker in the App Platform takes several steps:
1. Creating a service worker script to perform offline caching.
2. Compiling the service worker and adding it to the app.
3. Registering the service worker from the app, if PWA is enabled in the app's config.
4. Managing the service worker's updates and lifecycle.

Creating a service worker script to perform offline caching

We use the Workbox library and its utilities as a foundation for our service worker. There are several different strategies available for caching data offline that balance performance, network usage, and data freshness. We settled on these strategies to provide basic offline functionality in Platform apps (a minimal sketch of what they look like with Workbox follows below):
- Static assets that are part of the built app (JavaScript, CSS, images, and more) are precached.
- Data that's requested during runtime always uses the network, with a combination of a stale-while-revalidate strategy for fetched image assets and a network-first strategy for other data.

If you want to read more about our decisions to use these strategies, they are explained in more depth in our first PWA blog post.
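To make those strategy choices concrete, here is a minimal, illustrative sketch of such caching rules using Workbox's registerRoute API; it is an assumption of roughly how these rules could look, not the actual DHIS2 service worker source, and the cache names and '/api' matcher are placeholders:

```javascript
import { precacheAndRoute } from 'workbox-precaching'
import { registerRoute } from 'workbox-routing'
import { StaleWhileRevalidate, NetworkFirst } from 'workbox-strategies'

// Precache the build's static assets; the manifest is injected at build time
precacheAndRoute(self.__WB_MANIFEST)

// Stale-while-revalidate for images fetched at runtime: serve from the cache
// immediately, then refresh the cache from the network in the background
registerRoute(
    ({ request }) => request.destination === 'image',
    new StaleWhileRevalidate({ cacheName: 'other-assets' })
)

// Network-first for other runtime data: try the network, fall back to the
// cache when offline
registerRoute(
    ({ url }) => url.pathname.startsWith('/api'),
    new NetworkFirst({ cacheName: 'app-data' })
)
```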
Compiling the service worker and adding it to the app

An implementation constraint for service workers is that they must be a single, self-contained file when they are registered by the app to get installed in a user's browser, which means all of the service worker code and its dependencies must be compiled into a single file at build time. Our service worker depends on several external packages and is split up among several files (to keep it in digestible chunks) before being imported in the App Shell, so we need some compilation tools in the Platform.

Workbox provides a Webpack plugin that can compile a service worker and output the production build to the built app. Our build process takes advantage of Create React App (CRA)'s build script for the main compilation step once the app under development has been injected into our App Shell, and CRA happens to be configured out of the box to use the Workbox Webpack plugin to compile a service worker: it compiles a service-worker.js file in the CRA app's src directory and outputs it into the built app's public directory. So most of our compilation needs are met by using CRA.

The Workbox Webpack plugin also injects a precache manifest into the compiled service worker, which is a list of the URLs that the service worker will fetch and cache upon installation. The plugin uses the list of minified static files that Webpack outputs from the build process to make this manifest, which covers the app's JavaScript and CSS chunks as well as the index.html file. This does not cover all of the static assets in the app's build directory, however; other files, like icons, web manifests, and JavaScript files from vendors (like jQuery), need to be handled separately. To add those remaining files to the precache manifest, we added another step to our CLI's build process: after executing the CRA build step, we use the injectManifest function from the workbox-build package to read all of the other static files in the app's build directory, generate a manifest of those URLs, and inject that list into the compiled service worker at a prepared placeholder. You can see the resulting injectManifest code in the repository. Handling these precache manifests correctly is also important for keeping the app up to date, which is described in the "Managing the service worker's updates and lifecycle" section below.

Using a config option to enable PWA features

To implement the opt-in nature of the PWA features, the service worker should only be registered if PWA is enabled in the app's configuration. We added an option to the d2.config.js app config file that can enable PWA, which looks like this:

```javascript
// d2.config.js
module.exports = {
    type: 'app',
    title: 'My App',

    // Add this line:
    pwa: { enabled: true },

    entryPoints: {
        app: './src/App.js',
    },
}
```

During the `d2-app-scripts start` or `build` processes, the config file is read, and a PWA_ENABLED value is added to the app's environment variables. Then, in the App Adapter's initialization logic, the service worker is registered or unregistered based on the PWA_ENABLED environment variable (a sketch of this branch follows below). The registration logic is described in more detail in the "Registration of the service worker" section.
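As a rough sketch (assumed, simplified from the real adapter logic), that initialization branch could look like the following; the environment variable name follows CRA's REACT_APP_ convention and is our guess, and unregister() is a hypothetical counterpart to the register method described below:

```javascript
// Inside the App Adapter's initialization (illustrative only)
const pwaEnabled = process.env.REACT_APP_DHIS2_APP_PWA_ENABLED === 'true'

if (pwaEnabled) {
    offlineInterface.register() // registers the compiled service-worker.js
} else {
    offlineInterface.unregister() // cleans up any previously installed worker
}
```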
Managing the service worker's updates and lifecycle

Managing the service worker's lifecycle is both complex and vitally important. Because the service worker is responsible for serving the app from cached files, it now has a role in what version of the app a user sees. Note that the service worker serves the app from a list of files that's set at the time it gets compiled; because of this, the service worker itself needs to be updated in a user's browser in order to serve an updated version of the app. If the service worker lifecycle and updates are managed poorly, the app can get stuck on an old version in a user's browser and never receive updates from the server. This can be hard to diagnose and harder to fix; the "Handling precached static assets between versions" section below explains why that happens. Managing PWA updates can be a famously tricky problem, and we think we've come across a robust system to handle it, which we'll describe below.

Designing a good user experience for updating PWA apps

Managing service worker updates is complex from a UX perspective: we want the user to use the most up-to-date version of the app possible, but updating the service worker to activate new app updates in production requires a page reload, for reasons described below. Reloading can cause loss of unsaved data on the page, so we don't want to do that without the user's consent. It therefore poses a UX design challenge to notify and persuade users to reload the app to use new updates as soon as possible, while at the same time avoiding any dangerous unplanned page reloads. What's more, we want to do so in the least invasive way possible, ideally without the user needing to think about anything technical: a notification like "An update is available" would be too invasive and would even look suspicious to some users.

To address these needs, the UX design we settled on is this: if a service worker has installed and is ready, we won't activate it right away. We'll wait and try to sneak in an update without the user needing to do anything, if possible. What happens next depends on a few conditions:

1. If this is the first time a service worker is installing for this app, then any page reload will take advantage of the installed service worker, and PWA features will be ready in that reloaded page. If multiple tabs are open, they will each need to be reloaded to use the service worker and PWA features.
2. If the newly installed service worker is an update to an existing one, however, reloading will not automatically activate the waiting service worker. If there is only one tab of this app open, then it's possible to safely sneak in the update the next time the user loads the page: before loading the main interactive part of the app, the app shell checks for a waiting service worker, activates it if there is one, and then reloads, so the service worker can be safely updated without interfering with the user's activity.
3. If the user has multiple tabs of the app open, however, we can't sneak in a quick update and reload. This is because the active service worker controls all the active tabs at the same time, so to activate the new service worker, all the tabs need to be reloaded simultaneously. Reloading all of the tabs without the user's permission might lose unsaved data in the other open tabs, so we don't want to do that. In this case, we rely on the next two options.
4. If a new service worker is installed and waiting to take over, a notification will be visible at the bottom of the user's profile menu. If they click it, the waiting service worker will be directed to take over and the page will reload. If there are multiple tabs open, a warning will be shown that all the tabs will reload, so the data in those tabs should be saved before proceeding. If possible, the number of tabs is shown in the modal to help the user account for forgotten tabs, as could happen if the user has many browser windows open or is on a mobile device.
5. If none of the above cases happen, then the app will rely on the native browser behavior: after all open tabs of the app in this browser are closed, the new service worker will be active the next time the app is opened.

There are also two improvements that we're working on to further improve this UX:
- When a new service worker is waiting, a badge will be shown on the user profile icon in the header bar to indicate that there's new information to check.
- Before any service worker is controlling the app, some UI element in the header bar will indicate that PWA features aren't available yet.

Implementation of the app update flow

Implementing this update flow in the App Platform requires several cooperating features and lots of logic behind the scenes, in the service worker code, the client-side service worker registration functions, and the React user interface. To simplify communicating with the service worker from the React environment and abstract away usage of the navigator.serviceWorker APIs, we made an Offline Interface object that handles event-based communication with the service worker and exposes easier-to-use methods for registration and update operations. It also provides some functions that serve Cacheable Sections and complex offline capability, which will be described in more detail in a follow-up PWA blog post.

Our service worker registration functions draw much from the Create React App PWA Template registration boilerplate, which includes some useful logic like checking for a valid service worker, handling development situations on localhost, and some basic update-checking procedures. These features were a useful starting place, but our use case required more complexity, which led to the elaborations described below.

Registration of the service worker

If PWA is enabled, a register function is called when an Offline Interface object is instantiated in the App Adapter while the app is loading. The register function listens for the load event on the window object before calling navigator.serviceWorker.register(), to improve page load performance: the browser checks for a new service worker upon registration, and if there is one, the service worker will download and install any app assets it needs to precache. These downloads can be resource-intensive, so they are delayed to avoid interfering with page responsiveness on first load. The Offline Interface also registers a listener to the controllerchange event on navigator.serviceWorker that will reload the page when a new service worker takes control (i.e., starts handling fetch events). This makes sure the app loads using the latest precached assets. A sketch of this registration pattern is shown below.
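Here is a minimal, generic sketch of that registration pattern (deferred registration plus a controllerchange reload); it is illustrative and not the Platform's actual source:

```javascript
export function register() {
    if (!('serviceWorker' in navigator)) {
        return
    }

    // Defer registration until after the page has loaded, so precaching
    // downloads don't compete with the first render
    window.addEventListener('load', () => {
        navigator.serviceWorker.register('/service-worker.js')
    })

    // Reload once a new service worker takes control, so the page is
    // served with the latest precached assets
    let reloading = false
    navigator.serviceWorker.addEventListener('controllerchange', () => {
        if (reloading) {
            return // guard against reload loops
        }
        reloading = true
        window.location.reload()
    })
}
```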
Unlike some implementations, our service worker is designed to wait patiently once it installs. After it installs and activates for the first time, it does not claim the open clients (i.e., take control of those pages and start handling fetch events by using the clients.claim() API); instead, it waits for the page to reload before taking control. This design ensures that a page is only ever controlled during its lifetime by one service worker or none: a reload is required for a service worker to take control of a page that was previously uncontrolled, or to take over from a previous one. This makes sure the app only uses the core scripts and assets from one version of the app.

The service worker also does not automatically skip waiting and take control of a page when a new update has installed; it will continue waiting for a signal from the app, or for the default condition described in part 5 of the UX flow above. What the service worker does do is listen for messages from the client instructing it to claim clients or skip waiting, which are sent when the user clicks the "Click to reload" option in the profile menu. The listeners look like this:

```javascript
self.addEventListener('message', (event) => {
    if (event.data.type === 'CLAIM_CLIENTS') {
        // Calls clients.claim() and reloads all tabs
        claimClients()
    }
    if (event.data.type === 'SKIP_WAITING') {
        self.skipWaiting()
    }
})
```

CLAIM_CLIENTS is used the first time a service worker has installed for this app, and SKIP_WAITING is used when an updated service worker is installed and ready to take over. Below you can see more details about these messages.

Automatically applying app updates when possible

The PWALoadingBoundary component enables the app to sneak in app updates upon page load, in most cases without the user needing to know or do anything. It's implemented in the App Adapter and is supported by the Offline Interface. It wraps the rest of the app, and before rendering the component tree below it, it checks if there is a new service worker waiting to take over. If there is one, and only one tab of the app is open, it can instruct the new service worker to take over before loading the rest of the app. This allows the app to update and reload safely and without interfering with the user's work.

```javascript
export const PWALoadingBoundary = ({ children }) => {
    const [pwaReady, setPWAReady] = useState(false)
    const offlineInterface = useOfflineInterface()

    useEffect(() => {
        const checkRegistration = async () => {
            const registrationState =
                await offlineInterface.getRegistrationState()
            const clientsInfo = await offlineInterface.getClientsInfo()
            if (
                (registrationState === REGISTRATION_STATE_WAITING ||
                    registrationState ===
                        REGISTRATION_STATE_FIRST_ACTIVATION) &&
                clientsInfo.clientsCount === 1
            ) {
                console.log(
                    'Reloading on startup to activate waiting service worker'
                )
                offlineInterface.useNewSW()
            } else {
                setPWAReady(true)
            }
        }
        checkRegistration().catch((err) => {
            console.error(err)
            setPWAReady(true)
        })
    }, [offlineInterface])

    return pwaReady ? children : null
}
```
Upon render, the loading boundary first checks for any new service workers by using the Offline Interface's getRegistrationState method, a convenience method for accessing the getRegistrationState registration function. getRegistrationState is a simplified check of the service worker's installation status, intended to determine whether there's a new service worker ready right now. It returns one of several values: UNREGISTERED; WAITING, if there is an updated service worker ready; FIRST_ACTIVATION, if this is the first time a service worker has installed; or ACTIVE, if there's already a service worker in control and none currently waiting.

Then, to check how many tabs of the app are open, the PWALoadingBoundary uses the Offline Interface's getClientsInfo method, which asks the ready service worker how many clients are associated with this service worker scope. To get this info accurately in every situation, the service worker needs to perform some special checks, as shown in the code below:

```javascript
// Get all clients, including uncontrolled, but only those within SW scope
export function getAllClientsInScope() {
    // Include uncontrolled clients: necessary to know if there are
    // multiple tabs open upon first SW installation
    return self.clients
        .matchAll({ includeUncontrolled: true })
        .then((clientsList) =>
            // Filter to just clients within this SW scope, because other
            // clients on this domain but outside of SW scope are returned
            // otherwise
            clientsList.filter((client) =>
                client.url.startsWith(self.registration.scope)
            )
        )
}
```

The service worker uses the self.clients.matchAll() API with the includeUncontrolled option, since some tabs may be uncontrolled the first time the service worker installs. Then, since that function returns every open client on this domain, even ones outside of the scope of the service worker's control, the resulting clients need to be filtered down to just the clients in scope. After the service worker gets the right clients list, it posts a message back to the client to report the clients info. The getClientsInfo method then returns a promise that either resolves to the clients info or rejects with a failure reason.

If there is a service worker waiting to take over (either the WAITING or FIRST_ACTIVATION conditions above) and there is only one tab of the app open, the PWALoadingBoundary applies the ready update by calling the useNewSW method on the Offline Interface. The method instructs the new service worker to take over: it detects whether this new service worker is the first one that has installed for this app or an update to an existing service worker, then sends either a CLAIM_CLIENTS message to a first-install service worker or a SKIP_WAITING message to an updated service worker. Skipping waiting or claiming clients by the service worker both result in a controllerchange event in open clients, which triggers the event listener that the Offline Interface set up on navigator.serviceWorker (recall the "Registration of the service worker" section). The listener will then call window.location.reload() to reload the page, so that the page can load under the control of the new service worker. If there isn't a new service worker, or if there are multiple tabs open, then the rest of the app will load as normal. By doing this check before loading the app, the app can apply PWA updates without the user needing to do anything in most cases, which is a nice win for the user experience.

Providing the UI for manually applying updates

The usePWAUpdateState hook provides the logic to support the UI for applying updates manually, and the ConnectedHeaderBar component connects the hook to the relevant UI components.
Like the PWALoadingBoundary component, the hook and the ConnectedHeaderBar component are implemented in the App Adapter and are supported by the Offline Interface. The code for both is shown below; look closely at the usePWAUpdateState hook's onConfirmUpdate function, the confirmReload function, and the useEffect hook:

```javascript
export const usePWAUpdateState = () => {
    const offlineInterface = useOfflineInterface()
    const [updateAvailable, setUpdateAvailable] = useState(false)
    const [clientsCount, setClientsCount] = useState(null)

    const onConfirmUpdate = () => {
        offlineInterface.useNewSW()
    }
    const onCancelUpdate = () => {
        setClientsCount(null)
    }

    const confirmReload = () => {
        offlineInterface
            .getClientsInfo()
            .then(({ clientsCount }) => {
                if (clientsCount === 1) {
                    // Just one client; go ahead and reload
                    onConfirmUpdate()
                } else {
                    // Multiple clients; warn about data loss before reloading
                    setClientsCount(clientsCount)
                }
            })
            .catch((reason) => {
                // Didn't get clients info; go ahead with the confirmation
                // modal, with 0 as clientsCount
                console.warn(reason)
                setClientsCount(0)
            })
    }

    useEffect(() => {
        offlineInterface.checkForNewSW({
            onNewSW: () => {
                setUpdateAvailable(true)
            },
        })
    }, [offlineInterface])

    const confirmationRequired = clientsCount !== null

    return {
        updateAvailable,
        confirmReload,
        confirmationRequired,
        clientsCount,
        onConfirmUpdate,
        onCancelUpdate,
    }
}

export function ConnectedHeaderBar() {
    const { appName } = useConfig()
    const {
        updateAvailable,
        confirmReload,
        confirmationRequired,
        clientsCount,
        onConfirmUpdate,
        onCancelUpdate,
    } = usePWAUpdateState()

    return (
        <>
            <HeaderBar
                appName={appName}
                updateAvailable={updateAvailable}
                onApplyAvailableUpdate={confirmReload}
            />
            {confirmationRequired ? (
                <ConfirmUpdateModal
                    clientsCount={clientsCount}
                    onConfirm={onConfirmUpdate}
                    onCancel={onCancelUpdate}
                />
            ) : null}
        </>
    )
}
```

By using the useEffect hook, upon first render the usePWAUpdateState hook checks for new service workers by calling the Offline Interface's checkForNewSW method, which basically just exposes the checkForUpdates registration function. Compared to the getRegistrationState function that the PWALoadingBoundary uses, checkForUpdates is more complex, since it checks for service workers that are installed and ready, listens for new ones becoming available, and checks for installing service workers between those states. We need to check a number of variables to handle all the possible installation conditions:
- Service workers can be in one of four steps of their lifecycle: installing, installed, activating, or activated.
- Multiple service workers can be simultaneously present in the service worker registration object, as either installing, waiting, or active.
- Sometimes the active service worker is not in control, because it's the first service worker installation for this app.

For the full control flow, take a look at the checkForUpdates source code.

If there is a new service worker ready, then the onNewSW callback function provided as an argument to checkForNewSW is called, which sets the updateAvailable boolean returned by the hook to true. The ConnectedHeaderBar component passes this value as a prop to the HeaderBar, which shows the "New app version available: Click to reload" notification in the user profile menu.

If the user opens the profile menu and clicks the "New version available" notification, the confirmReload function in usePWAUpdateState is called. It handles the next part of the update flow by checking how many tabs of this app are open, so that if multiple tabs are open, a warning can be shown that they will all be reloaded. Like the PWALoadingBoundary, it uses the Offline Interface's getClientsInfo method to get the number of clients associated with this service worker. Once the clients info is received, if there is one client open for this service worker scope, confirmReload uses the Offline Interface's useNewSW method to instruct the new service worker to take control, as the PWALoadingBoundary does.
If there are multiple clients open, or if the getClientsInfo request fails, then the confirmationRequired boolean returned by the usePWAUpdateState hook will resolve to true. In the ConnectedHeaderBar component, this results in rendering the ConfirmReloadModal, which warns about data loss when all open tabs are reloaded. If the user clicks "Reload" in the modal, the onConfirmUpdate function is called, which calls the offlineInterface.useNewSW function, and the update is triggered. If the user clicks "Cancel", the onCancelUpdate function is called, which resets the confirmationRequired boolean to false by setting clientsCount to null, closing the modal. All these steps under the hood are coordinated to create the robust user experience described above and to make sure service workers and apps update correctly.

Handling precached static assets between versions

As mentioned in the "Compiling the service worker" section above, when using precaching for app assets, there are several considerations that should be handled correctly with respect to app and service worker updates. Conveniently, these best practices are handled by the Workbox tools: the Webpack plugin and the workbox-build package introduced earlier.

When using a precaching strategy, it's possible for an app to get stuck on an old version in a user's client, even though there's a new version of the app on the server. Since precached assets are served directly from the cache without accessing the network, new app updates will never be accessed until the service worker itself updates, downloads the new assets, and serves them. To get the service worker to update, the script file on the server needs to be byte-different from the file of the same name that's running on the client (service-worker.js in our case). The browser checks for service worker updates upon navigation events in the service worker's scope, or when the navigator.serviceWorker.register() function is called.

To make sure updates to app files on the server end up in clients' browsers, revision info is added to filenames in the service worker's precache manifest if the filename doesn't already include it. When an app file is changed, its content hash changes in the precache manifest, and thus the contents of the service-worker.js file will be different. Now, when a user's browser checks the service-worker.js file on the server, it will be byte-different, and the client will download and install the new app assets. You can read more about precaching with Workbox in the Workbox documentation. The illustrative manifest below shows the idea.
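To illustrate (this is an assumed, simplified example, not output from the actual build), an injected precache manifest pairs each URL with revision info:

```javascript
// Injected by Workbox at build time at a prepared placeholder
self.__WB_MANIFEST = [
    // Hash already in the filename, so no extra revision is needed
    { url: '/static/js/main.8a3b51c7.chunk.js', revision: null },
    // No hash in the filename, so Workbox adds a content-hash revision
    { url: '/index.html', revision: '48b375f8e7d3a1c2' },
    { url: '/manifest.json', revision: '9e107d9d372bb682' },
]
```

Changing any file changes its revision entry, which makes service-worker.js byte-different and triggers the browser's update flow.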
Adding a kill switch for a rogue service worker

In some cases, a service worker lifecycle can get out of control, and an app can be stuck with a service worker serving old app assets. If the app doesn't detect a new service worker and doesn't offer the user the option to reload, the app in the user's browser will not be updated. This can be a difficult problem to debug and requires manual steps by the user to resolve. As described in this article, we have worked hard to build our application platform in such a way that apps don't need to do anything special to deal with service worker updates: it is all handled in the platform layer and the Offline Interface.

We sometimes encountered this problem when an old version of an app once registered a service worker and served the app assets via a precaching strategy, and then a new version of the app was deployed without a service worker: there was no way for the newly deployed app to take over from the previous version. It would seem like the app was stuck on an old version and missing new fixes, even though a new version had been deployed to the server.

To handle this rogue-service-worker case, we added a "kill switch" mode to the service worker in the Platform, which helps unstick apps with a service worker that's serving an old version of the app. This takes advantage of the browsers' service worker update design: in response to a registration event, or a navigation in scope of an active service worker, the browser will check the server for a new version of the service worker with the same filename, even if that service worker is cached. If there is a service worker on the server and it is byte-different from the active one, the browser will initiate the installation process of the new service worker downloaded from the server (this was relevant to the update process described above as well).

To take advantage of that process, every Platform app actually gets a compiled service worker called service-worker.js added to the built app, whether or not PWA is enabled. This helps a non-PWA app take over from, and uninstall, a PWA app that's installed in a user's browser. For non-PWA apps, the service worker runs this code if it does get installed, to take over from a PWA app:

```javascript
// Called if the pwaEnabled env var is not 'true'
export function setUpKillSwitchServiceWorker() {
    // A simple, no-op service worker that takes immediate control and
    // tears everything down. Has no fetch handler
    self.addEventListener('install', () => {
        self.skipWaiting()
    })

    self.addEventListener('activate', async () => {
        console.log('Removing previous service worker')

        // Unregister, in case app doesn't
        self.registration.unregister()

        // Delete all caches
        const keys = await self.caches.keys()
        await Promise.all(keys.map((key) => self.caches.delete(key)))

        // Delete DB
        await deleteSectionsDB()

        // Force refresh all windows
        const clients = await self.clients.matchAll({ type: 'window' })
        clients.forEach((client) => client.navigate(client.url))
    })
}
```

It skips waiting as soon as it's done installing, to claim all open clients, and upon taking control it unregisters itself, deletes all CacheStorage caches and a "sections" IndexedDB (which will be introduced in a follow-up post about Cacheable Sections), then reloads the page. After this reload, the service worker will be inactive, and the new app assets will be fetched from the server instead of being served by the offline cache, allowing the app to run normally.

Ultimately, by including this kill-switch mode, we prevent apps from getting stuck in the future, and we unstick apps that have been stuck in the past. Be aware, however, that this might cause some loss of data if your app is also using the CacheStorage or Cacheable Section tools. It's highly unusual for a kill-switch worker to activate, so running into such a problem is very unlikely, but we want to point it out for the few developers who may be using those tools.

Conclusion

We hope you enjoyed this introduction to the DHIS2 App Platform and to its PWA features. We covered installability, build tooling to read an app's config and compile a service worker, caching strategies, and service worker updates and lifecycle management. Many of the challenges and solutions we described are applicable to any PWA application developer. We hope that you have also come away with a deeper understanding of how these features work together to enable offline capability in DHIS2 apps.

If you found this post interesting or useful, please leave a comment below. In a follow-up
In a follow-up post, we'll describe design challenges and solutions for creating the Cacheable Sections and some other App Runtime features that were described in the PWA introduction blog post, so stay tuned to the DHIS2 developers' blog. Is there anything you'd like to know more about on this subject, or do you have any other questions or comments? Feel free to reach out to us via e-mail, Slack, Twitter, or our Community of Practice; we're always happy to hear from interested developers and community members. If you would like to join our team to tackle challenges like the PWA implementation, please check the careers section on our website. All of our software team roles are remote-friendly, and we encourage people of all identities and backgrounds to apply. 2023-08-22 13:13:42
Overseas TECH DEV Community AWS Advanced: Serverless Prometheus in Action https://dev.to/authress/aws-advanced-serverless-prometheus-in-action-j1h AWS Advanced: Serverless Prometheus in Action. Note: this article continues from Part 1, AWS Metrics: Advanced.

We can't use Prometheus

It turns out Prometheus can't support serverless. Prometheus works by polling your service endpoints, fetching data from your database, and storing it. For simple things, you would just expose the current CPU and memory percentages. That works for virtual machines; it does not work for ECS Fargate, and it definitely does not work for AWS Lambda. There is actually a blessed solution to this problem: Prometheus suggests what is known as a PushGateway. You deploy yet another service, which you run, and you can push metrics to it; then, later, Prometheus can come and pick up the metrics by polling the PushGateway. There is zero documentation for this. And that's because Prometheus was built to solve one problem: K8s. Prometheus exists because K8s is way too complicated to monitor and track usage yourself, so all the documentation you will find is a YAML file with some meaningless garbage in it. Also, we don't want to run another service that does actual things; that's both a security concern and a maintenance burden. But if PushGateway exists supposedly to solve the problems of serverless, it's confusing why Prometheus doesn't just support pushing events directly to it. And if you look closely enough at the AWS console for Prometheus, you might also notice this: what's a "Remote Write Endpoint"? You've got me, because there is no documentation on it.

Prometheus RemoteWrite documentation

So I'm going to write here the documentation for everything you need to know about remote writing, which Prometheus does support, although you won't find documentation anywhere on it, and I'll explain why. Throughout your furious searching on the internet for how to get Prometheus working with Lambdas and other serverless technology, you will no doubt find a large number of articles trying to explain how the different types of metrics work in Prometheus. But metric types are a lie: they don't exist, they are fake, ignore them. To explain how remote write works, I first need to explain what we've learned about Prometheus.

Prometheus data storage strategy

Prometheus stores time series; that's all it does. A time series has a number of labels and a list of values, each at a particular time. Prometheus then makes those time series easy to query. That's it; that's all there is. Metric types exist because the initial source of the metric data doesn't want to think about time series data, so the Prometheus SDKs offer a bunch of ways for you to create metrics; internally, the SDKs convert those metric types to different time series, and those time series are hosted on a metrics endpoint, available for Prometheus to come by and scrape. This is confusing, I know. It works for "N events of type A happened at time B", but it does not support "average response time for type T". I'll get to this later.
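As a concrete illustration of this model, a single time series is just a label set plus timestamped samples. This hypothetical example uses the shape expected by the prometheus-remote-write Node.js library introduced below:

    // One time series: labels that identify it, plus timestamped values
    const series = {
        labels: {
            __name__: 'response_status_code_total', // the metric name is itself just a label
            route: '/v1/users/{userId}',
            status_code: '200',
        },
        samples: [
            { value: 14, timestamp: Date.now() }, // "14 events of this type happened at time B"
        ],
    }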
Handling data transfer

Because you could have multiple Prometheus services running in your architecture, they need to communicate with each other. This is where RemoteWrite comes in: RemoteWrite is meant for you to run your own Prometheus service and use RemoteWrite to copy the data from one Prometheus to another. That's our ticket out of here. We can fake being a Prometheus service and publish our time series to the AWS Managed Prometheus service directly. If we fake being a Prometheus server, then as long as we fit the API used for this, we can actually push metrics to Prometheus. The problem here is that most libraries don't even support writing to the RemoteWrite URL. We need to figure out how to write to this URL now that we have it, and also how to secure it.

The Prometheus SDK

Luckily, in Node.js there is the prometheus-remote-write library. It supports AWS SigV4, which means that we can put this library in a Lambda + API Gateway service and proxy requests to Prometheus through it. It also (sort of) handles the messy bits of the custom protobuf format. (Remember, K8s was created by Google, so everything is more complicated than it needs to be.) With API Gateway we can authenticate our microservices to call the metrics microservice we need to build; the API takes the request and uses IAM to secure the push to Prometheus. It's worth noting that you can actually push from one AWS account to another account's Prometheus workspace, but trying to get that to work is bad for two reasons: one, you never want to expose one AWS account's infra services to another (that's just bad architecture); and two, trying to get a Lambda to assume a role and then pass the credentials correctly to the libraries that need them is a huge headache. And this is the whole code of the service:

    const { createSignedFetcher } = require('aws-sigv4-fetch')
    const { pushTimeseries } = require('prometheus-remote-write')
    const fetch = require('cross-fetch')

    const signedFetch = createSignedFetcher({ service: 'aps', region: process.env.AWS_REGION, fetch })
    const options = {
        url: '<your workspace remote write endpoint>',
        fetch: signedFetch,
        labels: { service: request.body.service },
    }

    const series = request.body.series
    await pushTimeseries(series, options)

And just like that, we now have data in Prometheus. But where is it?

So Prometheus has no viewer: unlike DynamoDB and others, AWS provides no way to look at the data in Prometheus directly, so we have no idea whether it is working. The API response tells us, but like good engineers we aren't willing to trust that. We've also turned on Prometheus logging, and that's not really enough help. How the hell do we look at the data? At this point we are praying there is some easy solution for displaying it. My personal thought is that this is how AWS gets you: AWS Prometheus is cheap, but you have to throw AWS Grafana on top in order to use it, and that lists a hefty per-user price. Wow, that's expensive just to look at some data; I don't even want to create graphs, just literally show me the data. What's really cool, though, is that Grafana Cloud offers a near-free tier for just data display, and that might work for us. At the very least, the free tier makes it possible to validate that our managed Prometheus service is getting our metrics. And after way too much pain and suffering, it turns out it is. However, we only sent three metric updates to Prometheus, so why are there so many data points? The problem is actually in the response from AWS Prometheus: if we dive down into the actual request that Grafana is making to Prometheus, we can see the results really do include all these data points, which means it isn't something weird in the configuration of the UI. I'm pretty sure it has to do with the query step size. It doesn't really matter that it does this, because all our graphs will be continuous anyway and expose the connected data. Also, since this is a sum over the timespan, we should absolutely treat the step as a minimum resolution, unless we actually do get summary metrics more frequently. No matter what anyone says, these graphs are beautiful. It was the first thing that hit me when I actually figured out what all the buttons were on the screen.
Quick summary
🗸 Metrics stored in a database
🗸 Cost-effective storage
🗸 Display of metrics
🗸 Secured with AWS or our corporate SSO
🗸 Low TCO to maintain metric population

The solution has been AWS Prometheus + a Lambda function + Grafana Cloud. Some lessons here:

The Grafana UX is absolutely atrocious. Most of the awesome things you want to do aren't enabled by default. For instance, if you want to connect Athena to Grafana so that your bucket can be queried, you first have to enable the Athena plugin in Grafana, and only then can you create a DataSource for Athena. It makes no sense why everything is hidden behind a plugin. The same is true for AWS Prometheus: it doesn't just work out of the box. Second, even after you do that, your plugin still won't work: the datasource can't be configured in a way that works, because the datasource configuration options need to be separately enabled by filing a support ticket with Grafana. I died a bit when they told me that. In our products, Authress and Standup & Prosper, we take great pride in having everything self-service; that also means our products' features are discoverable by every user. Users don't read documentation (that's a fact), and they certainly don't file support tickets. That's why every feature has a clear name and a description next to it explaining what it does, and you never have to jump to the docs (though they are there if you need them). We would never hide a feature behind a hidden flag that only our support has access to.

The documentation for Prometheus is equally bad. Since Prometheus was not designed to be useful but instead designed to be used with K8s, there is little to no documentation on using Prometheus to do anything useful. Everyone assumes you are using some antiquated technology like K8s, and that therefore the metrics creation and population is done for you. So welcome to pain; but at least now there is this guide, so you too can effectively use Prometheus in a serverless fashion.

Always check the AWS quotas. The default retention is 150 days, but this can be increased. One of the problems with AWS CloudWatch is that if you mess up, you have 15 months of charges; here we start with only about five months. That's a huge difference. We plan to increase this, and I'm sure it will be configurable by API later. Just remember: review the quotas for every new service you start using, so you don't get bitten later.

It lacks the polish of a secure solution. Grafana needs to be able to authenticate to our AWS account in order to pull the data it needs. It should only do this in one of two ways: use the currently logged-in Grafana user's OAuth token to exchange for a valid AWS IAM role, or generate an OIDC JWT that can be registered in AWS. And yet it does neither of these, nor does it support dynamic function code to better support this. Sad. We are forced to use an AWS IAM user with an access key and secret, which we all know you should never, ever use; and yet we have to do it here. I will say there is something called Private DataSource Connections (PDC), but it isn't well documented whether that actually solves the problem. Plus, if it did, that means we'd have to write some Go, and no one wants to do that.

Prometheus metric types are a lie. Earlier I mentioned that perhaps you want metrics that are something other than a time series. The problem is that Prometheus actually doesn't support that. That's confusing, because you can find guides like "Prometheus Metric Types" which list: Counter, Gauge, Histogram, Summary, etc. You'll also notice the pathetic lack of libraries in Node.js and Rust.
How can these metric types exist in a time-series world? The truth is they can't. And when Prometheus says you can have these, what it really means is that it will take your data and mash it into a time series, even if it doesn't work. A simple example is API response time. You could track request time in milliseconds and then count the number of requests that took that long; there would be a new time series for every possible request time. That feels really wrong: a separate counter of requests at each distinct millisecond value. But that's essentially how Prometheus works. We can do slightly better, and the solution here is to understand how the Histogram configuration works for Prometheus. What actually happens is that we need to decide, a priori, which buckets we care about. For instance, we might create buckets for "less than A ms", "less than B ms", and "less than C ms". Then, when we get a response time, we add 1 to each of the buckets it matches: a slow request lands only in the largest bucket, while a fast one lands in all three. Later, when querying this data, you filter on the buckets (not the raw response times) that you care about. In practice a tiered layout may make sense: a number of buckets in small steps at low latencies, more buckets in medium steps in the middle, and a few buckets in second-sized steps at the high end, which works out to a few dozen buckets to keep track of. Depending on your SLOs and SLAs, you might need SLIs at different granularities.
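Here's a minimal sketch of that cumulative-bucket bookkeeping; the bucket boundaries are illustrative assumptions, not the values we actually use:

    // Cumulative "less than or equal" histogram buckets, Prometheus-style
    const buckets = [100, 250, 500, 1000, Infinity] // upper bounds in ms (illustrative)
    const counts = new Map(buckets.map((le) => [le, 0]))

    function observeResponseTime(ms) {
        // A fast request increments every bucket; a slow one only the largest
        for (const le of buckets) {
            if (ms <= le) {
                counts.set(le, counts.get(le) + 1)
            }
        }
    }

    observeResponseTime(42)  // lands in all five buckets
    observeResponseTime(450) // lands only in <=500, <=1000, and <=Infinity

Each bucket then becomes its own time series, labeled with its upper bound (the "le" label in Prometheus).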
The finish line

We were almost done, and right before we were going to wrap up, we saw this error: "err: out of order sample". What the hell does that mean? Well, it turns out that Prometheus cannot handle messages whose timestamps arrive out of order. Let me say that again: Prometheus does not accept out-of-order requests. Well, that's a problem. It's a problem because we batch the metrics being sent. And we are batching them because we don't have all the data available, and we don't have it because CloudWatch doesn't send it to us all at once, nor in order. We could wait the requisite hours for CloudWatch to finalize the metrics, but there is no way we are going to wait that long. We want our metrics as live as possible; it isn't critical that they are live, but there is no good reason to wait. If the technology does not support it, then it is the wrong tech. (Does it feel wrong yet? Eh, maybe.) The second solution is to use the hack flag, the out-of-order time window, that Prometheus supports. Why it doesn't support this out of the box makes no sense; but then again, things like MySQL and PostgreSQL didn't support updating a table schema without a table lock for the longest time. The problem is that, at the time of writing, AWS does not let us set this tsdb out-of-order time window flag. That only leaves us with two solutions: (1) ignore out-of-order processing and drop these metrics on the floor, or (2) ignore the original timestamp of the message and just publish "now" as the timestamp on every message. Guess which one we decided to go with. That's right: we don't really care about the exact order of the metrics. It doesn't matter precisely which minute a spike landed in; most of the time no one will notice or care about the difference. When there is a problem, we'll trust our actual logs way more than the metrics anyway; metrics aren't the source of truth. And I bet you thought that is what I was going to say. It's actually true, we would be okay with that solution; but the problem is that this STILL DOES NOT FIX PROMETHEUS. You will still end up with out-of-order metrics, and so the solution for us was to add the CloudFront log's filename as a unique key, as a label in Prometheus, so our labels look like this:

    labels: {
        __name__: 'response_status_code_total',
        status_code: statusCode,
        method: method,
        route: route,
        account_id: accountId,
        unique_id: cloudFrontFileName,
    }

Remember, labels are just the Dimensions in CloudWatch; they are how we will query the data. And with that, we don't get any more errors, because out-of-order data points only matter within a single time series, and specifying the unique_id creates one time series per CloudFront log file. Is this an okay thing to do? Honestly, it's impossible to tell, because there is zero documentation on how exactly this impacts Prometheus at scale. Realistically, there are a couple of other options to improve upon this, like using a small random unique_id and retrying with a different value if a push fails, which would cap the number of unique time series. Further, since we group (or what Grafana calls "sum by") the other labels anyway, the extra unique_id label gets automatically ignored. And the result is in; and here's the total cost: wow. If you liked this article, come join our Community and discuss this and other security-related topics. 2023-08-22 13:09:09
Overseas TECH DEV Community AWS Metrics: Advanced https://dev.to/authress/aws-metrics-advanced-40f8 AWS Metrics: Advanced.

Normally I'm the last proponent of collecting metrics. The reason is that metrics don't tell you anything, and anything that tells you nothing is an absolute waste of time to set up. However, alerts tell you a lot: if you know that something bad is happening, then you can do something about it. The difference between alerts and metrics is knowing what's important.

Knowing what's important

If you aren't collecting metrics yet, the first thing to do is decide what is a problem and what isn't. Far too often I feel like I'm answering the question "How do I know if the compute utilization is above some threshold?" The answer is: it doesn't matter, because if you knew, what would you do with that information? Almost always the answer is "I don't know" or "my director told me it was important".

So what is important?

That's probably the hardest question to answer with a single point, so for the sake of this article, to make it concrete and relevant, let me share what's important for us.

Up-time requirements

We've been at an inflection point within Rhosys for a couple of years now. For Authress and Standup & Prosper, we run highly reliable services: these have to be up many nines, and we often contract out SLAs to match. But the raw up time isn't what has been relevant anymore, because, as most reliability experts know, your service can be up but still returning a 5XX here and there. This is what's known as partial degradation. You may think one 5XX isn't a problem; however, at millions of requests per day, this amounts to a non-trivial amount. Even if it is just one 5XX per day, it absolutely is important if you don't know why it happened. It's one thing to ignore an error because you know why; it's quite another to ignore it because you don't.

Further, even returning 4XXs is often a concern for us as well. Too many 404s could be a problem; they could tell us something is wrong. From a product standpoint, a 404 means that we did something wrong in our design, because one of our users should never get to the point where they get back a 404. If they get back a 404, that means they were confused about our API or the data they are looking at. This is critically important information. Further, a 4XX could mean we broke something in a subtle way: something that used to return a 2XX now unintentionally returning a 4XX means that we broke at least one of our users' implementations. This is very, very bad. So what we actually want to know is: are there more 404s now than there should be?

A simple example is when a customer of ours calls our API and forgets to URL-encode the path; that usually means they accidentally called the wrong endpoint. For instance, if the userId had a "/" in it:

    Route:              /users/{userId}/data
    Incorrect endpoint: /users/tenant/user/data
    Correct endpoint:   /users/tenant%2Fuser/data

When this happens, we could tell the caller 404, but that's actually the wrong thing to do. It's the right error code, but it conveys the wrong message: the caller will think there is no data for that user, even when there is, because the normal execution of the endpoint returns a 404 when there is no data associated with the user or the user doesn't exist. Instead, when we detect this, we return: "Hey, you called a weird endpoint; did you mean to call this other one?" A great Developer Experience (DX) means returning the right result even when the user asked for something in the wrong way. If we know there is a problem and already know the right answer, we don't need to complain about it. However, sometimes that's dangerous: sometimes the developer thought they were calling a different endpoint, so we have to know when to guess and when to return a 404.
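A hypothetical sketch of that detection heuristic, written as an Express-style middleware; the route shape, names, and message are illustrative assumptions, not Authress's actual implementation:

    // Detect an un-encoded "/" inside the userId segment of /users/{userId}/data:
    // such a request arrives with one path segment too many.
    function suggestEncodedEndpoint(req, res, next) {
        const segments = req.path.split('/').filter(Boolean)
        if (segments[0] === 'users' && segments.length === 4 && segments[3] === 'data') {
            const encodedId = encodeURIComponent(`${segments[1]}/${segments[2]}`)
            return res.status(404).json({
                title: 'Hey, you called a weird endpoint; did you mean to call this other one?',
                suggestedEndpoint: `/users/${encodedId}/data`,
            })
        }
        return next()
    }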
In reality, when we ask "Are there more 404s than there should be?", we are looking to do anomaly detection on some metrics. If there is an issue, we know to look at the recently released code and see when the problem started to happen. This is the epitome of anomaly detection: are the requests we are getting at this moment what we expect them to be, or is something unexpected and different happening? To answer this question, we finally know we need some metrics. So let's take a look at our possible options.

The metric service candidates

We use AWS heavily, and luckily AWS has some potential solutions:
- AWS DevOps Guru
- AWS Lookout for Metrics (anomaly detection)
- AWS CloudWatch Alarms with anomaly detection

So we decided to try some of these out and see if any one of them can support our needs.

🗸 The verdict: all terrible.

We had the most interest in using DevOps Guru; the problem is that it just never finds anything. It just can't review our logs to find problems. The one case where it is useful is for RDS queries, to determine whether you need an index. But what happens when you fix all your indexes? Then what? After turning on Guru for a few weeks, we found nothing. Okay, almost nothing: we found a smattering of warnings about not-quite-latest versions of some resources, and permissions that aren't what AWS thinks they should be. Other than that, it was useless. You can imagine, for an inexperienced team, that having DevOps Guru enabled will help you about as much as Dependabot does at discovering actual problems in your GitHub repos. The surprise, however, is that it is cheap. DevOps Guru: cheap but worthless.

Then we took a look at AWS Lookout for Metrics. It promises some sort of advanced anomaly detection on your metrics. (AWS Lookout is actually used for other things, not primarily metrics, so this was a surprise.) It seemed great, exactly what we are looking for, and when you look at the price, it appears reasonable per metric. We only plan on having a few metrics, right? So let's put this one in our back pocket while we investigate the CloudWatch anomaly detection alarms.

At the time of writing this article, we already knew something about anomaly detection using CloudWatch Alarms, because we have anomaly detection set on some of our AWS WAF Web Application Firewalls. In an example of that anomaly detection, we can see a little bit of red where the results were unexpected. This looks a bit cool, although it doesn't work out of the box: most of the time we ended up with the dreaded alarm flapping, sending out alerts at all hours of the day. To really make this alarm useful, we did three things:
1. A lot of trial and error with the period and the number of datapoints required to trigger the alarm.
2. Tuning the number of deviations outside the norm that counts as an issue: how large does a change have to be before it's a problem?
3. Using logarithm-based volume.

Now, while those numbers on the left aren't exactly log(total requests), they are something like that, and the resulting graph is far better behaved. See, logarithms are great here, because the anomaly detection will throw an error as soon as the value falls outside of a band, and that band is magic: you've got no control over the band for the most part. You can't choose how to make it think, but you can choose how thick it is. We didn't really care about a modest swing in requests per second within a time window, but we do care when the volume jumps or drops by an order of magnitude. So the logarithm really makes more sense here: magnitudes of difference are what matter.
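For reference, here's a minimal sketch of creating such an anomaly-detection alarm with the AWS SDK for JavaScript v3; the metric, dimensions, and band width are illustrative assumptions:

    const { CloudWatchClient, PutMetricAlarmCommand } = require('@aws-sdk/client-cloudwatch')

    const cloudwatch = new CloudWatchClient({})
    await cloudwatch.send(new PutMetricAlarmCommand({
        AlarmName: 'waf-request-volume-anomaly',
        ComparisonOperator: 'GreaterThanUpperThreshold',
        ThresholdMetricId: 'band',
        EvaluationPeriods: 3, // trial and error: how many datapoints trigger the alarm
        DatapointsToAlarm: 2,
        Metrics: [
            {
                Id: 'requests',
                MetricStat: {
                    Metric: {
                        Namespace: 'AWS/WAFV2',
                        MetricName: 'AllowedRequests',
                        Dimensions: [{ Name: 'WebACL', Value: 'my-web-acl' }],
                    },
                    Period: 300,
                    Stat: 'Sum',
                },
            },
            {
                // The 2 is the band width in standard deviations: "how thick it is"
                Id: 'band',
                Expression: 'ANOMALY_DETECTION_BAND(requests, 2)',
            },
        ],
    }))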
So now that we know the technology does what we want, we can start.

⏵ Time to start creating metrics

The WAF anomaly detection alarm looks great (though it isn't perfect), and hopefully AWS will teach the ML over time what is reasonable; let's pray that works. I don't have much confidence in that, but at least it is a pretty good starting point, and since we are going to be creating metrics, we can reevaluate afterwards and potentially switch to AWS Lookout for Metrics if everything looks good.

Now that's a huge bill. It turns out we must have done something wrong, because according to AWS CloudWatch billing and our own calculation, we'd probably end up paying thousands of dollars this month on metrics. Let's quickly review: we attempted to log APM (Application Performance Monitoring) metrics using CloudWatch metrics. That means for each endpoint we wanted to log:
- The response status code (200, etc.)
- The customer account ID
- The HTTP method (GET, PUT, POST)
- The route (/v1/users/{userId})

That's one metric with these four dimensions, and at roughly $0.30 per custom metric per month, we assumed the cost would be trivial. However, that is a lie. AWS CloudWatch Metrics charges you not by metric, but by each unique combination of dimension values: GET /v1/users/{userId} returning, say, a 200 for one customer, and DELETE /v1/users/{userId} returning a 404 for another customer, are two different metrics.
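To see why, here is roughly what publishing one of those data points looks like with the AWS SDK v3 (the namespace and values are illustrative); every distinct combination of the four dimension values below is billed as its own custom metric:

    const { CloudWatchClient, PutMetricDataCommand } = require('@aws-sdk/client-cloudwatch')

    const cloudwatch = new CloudWatchClient({})
    await cloudwatch.send(new PutMetricDataCommand({
        Namespace: 'ApiMetrics',
        MetricData: [{
            MetricName: 'RequestCount',
            Value: 1,
            // Each unique {StatusCode, AccountId, Method, Route} tuple is a separate
            // billable metric, so cardinality multiplies across all four dimensions
            Dimensions: [
                { Name: 'StatusCode', Value: '200' },
                { Name: 'AccountId', Value: 'acc-001' },
                { Name: 'Method', Value: 'GET' },
                { Name: 'Route', Value: '/v1/users/{userId}' },
            ],
        }],
    }))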
A quick calculation: multiply the response codes used per endpoint by the HTTP verbs we heavily use and by the number of endpoints. Since metrics are retained for 15 months, using even one of these status-code combinations in the last 15 months adds to your bill, and the math showed a per-customer cost high enough that even a modest number of SaaS customers would run to serious money every month; and we have a lot more customers than that. Thankfully, we did some testing before releasing this solution. I don't know how anyone uses this, but it isn't going to work for us; we aren't going to pay this ridiculous extortion to use the metrics service, we'll find something else. Worse still, that means AWS Lookout for Metrics is also not an option, because it would charge per metric on top of that cost every month. Now, while I don't mind shelling out real money per customer for a solution that actually helps us keep our SLAs, it's going to have to do a lot more than just keep a database of metrics. We are going to have to look somewhere else.

SaaS to the rescue?

I'm sure someone out there is saying "we use <insert favorite SaaS solution here>", but the truth is none of them support what we need. Datadog and New Relic were long ago eliminated from our allowed list because they resorted to malicious marketing phone calls directly to our personal phone numbers, multiple times. That's disgusting. Even if we did allow one of them to be picked, they are really expensive. What's worse, all the other SaaS solutions that provide APM fail in at least one of these ways:
- The UI/UX is not just bad, but terrible
- They don't work with serverless
- They are more expensive than Datadog (I don't even know how that is possible)

But wait, didn't AWS release some managed service for metrics?

Enter AWS Athena

AWS Athena isn't a solution; it's just a query engine on top of S3. So when we say "use AWS Athena", what we really mean is "stick your data in S3". But actually we mean: stick your data in S3, but do it in a very specific and painstaking way. It's so complicated and difficult that AWS wrote a whole second service to take the data in S3 and put it back into S3, differently. That service is called Glue. We don't want to do this; we don't want something crawling our data and attempting to reconfigure it; it just doesn't make sense. We already know the data at the time of consumption: we get a timespan with some data, and we write back that timespan. Since we already know the answer, using a second service dedicated to creating time series doesn't make sense. It would absolutely make sense if we had non-timeseries data and needed to convert it to time series for querying, but we do not. The real problem, however, is that every service we have would need to figure out how to write this special format to S3 so that it could be queried. While we could build out a "timeseries to S3" service, I'd rather not: the Total Cost of Ownership (TCO) of owning services is really, really high. We knew this already, before we built our own statistics platform and deprecated it; we don't want to do it again. So Athena was out, sorry.

Enter Prometheus

And no, I'm not talking about ElasticSearch; that doesn't scale, and it isn't really managed. It's a huge pain; it's like taking a time machine back to the days of on-prem DBAs, except these DBAs work at AWS and are less accessible. The solution is the managed Grafana + Prometheus combination AWS runs. The true SaaS is Grafana Cloud, but AWS has a managed version (and of course these are two different AWS services). Prometheus is a metrics solution: it keeps track of metrics, and its pricing is per sample ingested plus storage. We know we aren't going to have a lot of samples (a.k.a. API requests to Prometheus), so let's focus on the per-GB-month storage price. Looking at a metric storage line (status code + verb + path + customer ID), each sample is only a handful of bytes, so a small monthly storage spend affords us on the order of a billion API requests per month at the default retention time; call it a billion requests per month, which is a sustained rate in the hundreds of requests per second. While that is a fraction of where we are at, the pricing for this feels so much better: we pay only for what we use, and the amortized cost per customer also makes a lot more sense. So we are definitely going to use Prometheus, it seems. For how we did that exactly, check out Part 2: AWS Advanced: Serverless Prometheus in Action. If you liked this article, come join our Community and discuss this and other security-related topics. 2023-08-22 13:06:05
Overseas TECH DEV Community Way to High Confidence: The Ideal Testing Trophy https://dev.to/borysshulyak/high-confidence-testing-levels-1n1m Way to High Confidence: The Ideal Testing Trophy.

Table of Contents: Testing Trophy; How to define the cost of tests; Static Tests; Unit Tests; Visual Snapshots; Accessibility Tests; Integration Tests; E2E Tests; Performance Tests; Manual Tests; Conclusion. (Each test-type section covers what the tests are, why and when to use them, what to mock where relevant, and recommended tools.)

I'm glad to introduce you to a new article series named "Way to High Confidence", and this is its first article. We are going to talk about testing levels and the testing trophy. Your testing trophy could be specific to your project, but you should always define the ideal to which you will aspire. In this article I describe my own ideal testing trophy, the reasons why I have added certain test types to it, and the recommended tools. I should mention that all my work experience is front-end, so this article is written through the prism of that experience. For the unit, integration, and E2E tests I have provided a "tested/mocked" schema to give a clearer picture. The technologies are provided just as examples; you can easily swap in your own.

Testing Trophy: How to define the cost of tests?

To define the correct level of tests, you could also think about the "cost of tests". This is well explained in this article, and the basic idea is that the cost of a test includes:
- The time it takes to write the test
- The time it takes to run the test every time the suite runs
- The time it takes to understand the test
- The time it takes to fix the test if it breaks and the underlying code is OK
- Maybe, the time it takes to change the code to make it testable

Static Tests

What are static tests? Static testing is a software testing technique used to check for defects in a software application without executing the code. Static testing is done to avoid errors at an early stage of development, as it is easier to identify and fix mistakes then. It also helps to find errors that may not be found by dynamic testing.

Why should you use static tests? Static code analysis tools are a standard by now. A strong ESLint configuration can help you write cleaner new code and refactor the old. But you should be careful when choosing static test tools: a team that uses tests and other best practices can be far more efficient than a team that relies on TypeScript alone. With ESLint you can also create your own linting rules using the no-restricted-syntax rule; that's safer than conventions written down in your Confluence.

When to use static tests? Always.

Recommended static test tools: ESLint, Prettier, ReScript, PropTypes.
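For instance, here is a minimal sketch of a custom rule via no-restricted-syntax; the convention being enforced is a made-up example:

    // .eslintrc.js
    module.exports = {
        rules: {
            'no-restricted-syntax': [
                'error',
                {
                    // hypothetical team convention: no console.log in source code
                    selector: "CallExpression[callee.object.name='console'][callee.property.name='log']",
                    message: 'Use the project logger instead of console.log.',
                },
            ],
        },
    }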
Unit Tests

What are unit tests? Unit testing is a software testing technique used to verify that individual, isolated parts work as expected.

Why should you use unit tests? Writing unit tests provides you with strongly documented methods in your code; that documentation is more trustworthy and useful than TypeScript alone, and it is really helpful when onboarding new devs. Unit tests are also a pleasure to use in collaboration with integration tests: when your integration tests fail, the unit tests help you find the root cause. And unit tests help you write clean and healthy code: if it is difficult to cover some code with tests, it is bad code. In conclusion, you get documentation in the code, time saved on finding bugs, and a good code base.

When not to use unit tests?
- Non-exported functions, classes, and hooks: anything not exported from a module can be considered private, or an implementation detail, and doesn't need to be tested.
- Constants: testing the value of a constant means copying it, resulting in extra effort without additional confidence that the value is correct.
- React components: state, methods, and lifecycle hooks can be considered implementation details of components, are implicitly covered by integration tests, and don't need to be tested directly.

When to use unit tests?
- Exported functions, classes, and hooks: anything exported can be reused in various places in ways you have no control over, so you should document the expected behavior of the public interface with tests.
- Apollo Client queries: any Apollo Client query must work consistently, independent of the component it is triggered from.
- Apollo Client mutations: for complex Apollo Client mutations, you should write tests that cover all the post-mutation actions (e.g., onError, onComplete).

What not to mock in unit tests?
- Non-exported functions, classes, or hooks: everything that is not exported can be considered private to the module and is implicitly tested through the exported classes, functions, and hooks.
- Methods of the class under test: by mocking methods of the class under test, the mocks are tested instead of the real methods.
- Utility functions (pure functions, or those that only modify parameters): if a function has no side effects because it has no state, it is safe not to mock it.

What to mock in unit tests?
- State of the class under test: modifying the state directly, rather than through methods of the class, avoids side effects in the test setup.
- Other exported classes, functions, and hooks: every class, function, and hook must be tested in isolation, to prevent test scenarios from growing exponentially.
- All server requests: when running front-end unit tests, the backend may not be reachable, so all outgoing requests need to be mocked.
- Asynchronous background operations: background operations cannot be stopped or waited on, so they would keep running in the following tests and cause side effects.

Recommended unit test tools: Vitest, Jest, React Testing Library (renderHook).
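A minimal Vitest sketch of documenting an exported function's public interface; the module and its behavior are hypothetical:

    // formatUserName.test.js
    import { describe, expect, it } from 'vitest'
    import { formatUserName } from './formatUserName' // assumed exported utility

    describe('formatUserName', () => {
        it('documents the expected behavior of the public interface', () => {
            expect(formatUserName({ first: 'Ada', last: 'Lovelace' })).toBe('Ada Lovelace')
        })

        it('handles missing parts without throwing', () => {
            expect(formatUserName({ first: 'Ada' })).toBe('Ada')
        })
    })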
Visual Snapshots

What are visual snapshot tests? Visual regression testing tools take screenshots of web pages and compare the resulting images pixel by pixel.

Why should you use visual snapshots? You could cover the whole project with static, unit, integration, and other tests, and be sure all the logic is correct and everything works fine; but what about the visual part? Are you sure all the components have the same visual look after your refactoring? Are you going to check all the views after every MR to verify the styles of your components? Let's put that on the shoulders of the machines: automated visual regression verifies all the required components for you. Yes, sometimes a test will be a false negative and you will have to update the screenshots, but the benefits and time savings are more significant than the time it takes to update them.

When not to use visual snapshot tests?
- Visual elements that change from moment to moment (e.g., animations): the test result might be a false negative.
- Logic verification: that should be tested at other test levels.

When to use visual snapshot tests? For verifying what the user sees: layout, color, size, and contrast.

Recommended visual regression tools: Chromatic, puppeteer, jest-image-snapshot... there are many tools for visual testing.

Accessibility Tests

What are accessibility tests? Accessibility is the practice of making websites inclusive to all. That means supporting requirements such as keyboard navigation, screen reader support, touch-friendliness, usable color contrast, reduced motion, and zoom support. Accessibility tests audit the rendered DOM against a set of heuristics based on WCAG rules and other industry-accepted best practices; they act as the first line of QA to catch blatant accessibility violations.

Why should you use accessibility tests? Firstly, it is pretty simple to implement accessibility tests with Storybook; it's just one dev dependency. Secondly, all products and all software should be accessible to users with challenges or disabilities. All of us should care.

Recommended accessibility test tools: Storybook a11y addon.
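As an illustration of the "just one dev dependency" point, a minimal sketch of registering the addon, assuming a standard Storybook setup:

    // .storybook/main.js
    module.exports = {
        // runs automated, axe-based accessibility checks against each story
        addons: ['@storybook/addon-a11y'],
    }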
Integration Tests

What are integration tests? Integration testing is a software testing technique used to verify that several units work together in harmony.

Why should you use integration tests? Integration tests are the first step in testing how your system is used by the end users. "The biggest and most important reason that I write tests is CONFIDENCE. I want to be confident that the code I'm writing for the future won't break the app that I have running in production today. So whatever I do, I want to make sure that the kinds of tests I write bring me the most confidence possible, and I need to be cognizant of the trade-offs I'm making when testing." - Kent C. Dodds. Integration tests are the silver bullet: they are pretty quick, easy to maintain, and provide strong confidence. With a well-designed integration test you get a fully described user flow in the code, as your tests should simulate user behavior as much as possible. In conclusion, you get high confidence in the logic of your components and fully described user flows in the code.

When not to use integration tests?
- Full user flows across pages: trying to render a whole app page with non-E2E testing tools can be too expensive, and such tests would be duplicated at the E2E testing stage anyway.
- Framework-specific integration: if you rely on framework-specific integration, like Next.js with its getServerSideProps/getStaticProps, you should not (and often can't) test it at the integration level; it will be covered by E2E tests.

When to use integration tests? React components: verify that several units (functions, hooks, other components) work together in harmony.

What not to mock in integration tests?
- Child components: every component unit and its logic should be covered by the integration tests; we don't use shallow rendering.
- Hooks, functions, Apollo Client: all the logic and functionality of the component should be covered by the tests.
- The DOM: testing on the real DOM ensures your components work in the intended environment.

What to mock in integration tests?
- Side effects: anything that can change an external state (for example, a network request) should be mocked.
- 3rd-party libraries (but not all of them): for example, we should mock the next/router module to successfully write our tests, but it is not necessary to mock material-ui.

Recommended integration test tools: Vitest, Jest, React Testing Library, Storybook.
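A sketch of such a test, simulating user behavior while mocking only the side effect; the component and its props are hypothetical:

    // LoginForm.test.jsx
    import { render, screen } from '@testing-library/react'
    import userEvent from '@testing-library/user-event'
    import { expect, it, vi } from 'vitest'
    import { LoginForm } from './LoginForm' // assumed component under test

    it('submits the entered credentials', async () => {
        const onSubmit = vi.fn() // the side effect is mocked...
        render(<LoginForm onSubmit={onSubmit} />) // ...child components render for real

        await userEvent.type(screen.getByLabelText(/email/i), 'ada@example.com')
        await userEvent.type(screen.getByLabelText(/password/i), 'hunter2')
        await userEvent.click(screen.getByRole('button', { name: /log in/i }))

        expect(onSubmit).toHaveBeenCalledWith({ email: 'ada@example.com', password: 'hunter2' })
    })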
E2E Tests

What are E2E tests? A helper robot that behaves like a user, clicking around the app and verifying that it functions correctly.

Why should you use E2E tests? Sometimes (very often) you either join a new project or a project comes to you, and the code quality of that project can be questionable. If that's the case, then writing unit tests will take a lot of time, as you'll have to thoroughly understand the codebase and refactor the code. Covering it with integration tests will be easier, but you'll still have to do the same things on a smaller scale. E2E tests, by contrast, are very straightforward to write and take relatively little time. So if you want to gain confidence in a new project, or a project with a poor codebase, while still having time to deliver business features rather than just refactoring code, E2E tests are the best solution.

Recommended E2E test tools: Playwright, Cypress, Puppeteer, WebdriverIO.

Performance Tests

What are performance tests? Performance testing involves assessing how a system behaves, in terms of responsiveness and stability, when subjected to a specific workload. Such tests are commonly carried out to analyze attributes like speed, resilience, dependability, and the system's overall size.

Why should you use performance tests? Consider the case studies published on web.dev: Vodafone Italy improved LCP and achieved more sales; iCook improved CLS and earned more ad revenue; Tokopedia improved LCP and saw better average session duration; Nykaa found that an LCP improvement brought more organic traffic from tier-2 and tier-3 cities; NIKKEI STYLE's LCP improvement resulted in more pageviews per session; and Ameba Manga multiplied the number of comics read by improving its CLS score. Do you still think you don't need to care about the performance of your application? If you need showcases and proof for your business, you can use those web.dev articles.

Recommended performance test tools: Lighthouse, PageSpeed Insights, CrUX.

Manual Tests

Why should you use manual tests? Ohh, that's a good question. Covering the system with all the previous tests takes time, so while you don't yet have strong confidence, you could use manual QA, even on each merge request. If you care about safe CI/CD, you care about test coverage. So let's imagine you already have a good testing strategy: do you still need manual QA? I think yes, but not too often; just for a quick regression before a release, for writing the steps to reproduce issues, and so on. The main things to keep in mind are:
- Automation tools are not real humans: manual testing can give feedback and suggestions even where there are no bugs.
- Some bugs are totally unexpected.
- Manual testing is cheaper for small projects.
- Automated tests might not be written well.
- Automated tests can't evaluate the UX as well as real users can.

Recommended manual test tools: Chrome Recorder.

Conclusion

In conclusion, testing is an essential part of software development. Each type of test serves a specific purpose, from catching errors early on with static tests to verifying that the system behaves as expected with end-to-end tests. By understanding the cost of tests and the appropriate use cases for each type, developers can create a comprehensive testing strategy that ensures their software meets the required standards for quality and functionality. Thank you, my dear reader! I'm going to publish a lot of other articles in the Way to High Confidence series, and I believe you'll find plenty of useful information in them, so welcome to my followers. If you find this article useful or interesting, you could tap an emoji on it or write a comment and help grow our community. 2023-08-22 13:00:47
Overseas TECH Engadget The Apple Watch Ultra falls to a new low of $700 https://www.engadget.com/the-apple-watch-ultra-falls-to-a-new-low-of-700-133522809.html?src=rss The Apple Watch Ultra falls to a new low of $700. Now's a good moment to get a smartwatch that can easily handle your end-of-summer hikes. Amazon is selling the Apple Watch Ultra with a green Alpine Loop at a new all-time low price of $700 after a checkout voucher. That's the same price as a comparable stainless steel Series model, making it the obvious choice if you want more rugged Apple wristwear. The Apple Watch Ultra remains the company's most powerful smartwatch, and it's the clear pick if you're an outdoor adventurer. The large, extra-bright screen makes it easy to read even in direct sunlight, and the added water resistance is helpful for recreational dives. The action button also comes in handy for marking hike waypoints or starting the next leg of a run. And it's hard to ignore the extra battery life: this watch can last an entire weekend without a charge, depending on how you use it. You'll need an iPhone to even consider the Apple Watch Ultra, of course. Its size may also be off-putting if you have thin wrists or simply prefer sleeker timepieces. There's also the question of timing: Apple might introduce a refreshed Ultra at an event that could be just weeks away. If you're more interested in value than having the absolute latest model, though, this discount is hard to top. Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice. This article originally appeared on Engadget. 2023-08-22 13:35:22
Overseas TECH Engadget Meta's new multimodal translator uses a single model to speak 100 languages https://www.engadget.com/metas-new-multimodal-translator-uses-a-single-model-to-speak-100-languages-133040214.html?src=rss Meta's new multimodal translator uses a single model to speak 100 languages. Though it's not quite ready to usher in the Doolittle future we've all been waiting for, modern AI translation methods are proving more than sufficient at accurately transforming humanity's thousands of spoken and written communication systems between one another. The problem is that each of these models tends to only do one or two tasks really well (translate and convert text to speech, speech to text, or between either of the two sets), so you end up having to smash a bunch of models on top of each other to create the generalized performance seen in the likes of Google Translate or Facebook's myriad language services. That's a computationally intensive process, so Meta developed a single model that can do it all. SeamlessM4T is "a foundational multilingual and multitask model that seamlessly translates and transcribes across speech and text", Meta's blog from Tuesday reads. It can translate between any of nearly 100 languages for speech-to-text and text-to-text functions; speech-to-speech and text-to-speech support those same languages as inputs and output the results in any of dozens of other tongues, including English. In their blog post, Meta's research team notes that SeamlessM4T "significantly improve[s] performance for the low- and mid-resource languages we support" while maintaining "strong performance on high-resource languages, such as English, Spanish, and German". Meta built SeamlessM4T from its existing PyTorch-based multitask UnitY model architecture, which already natively performs the various modal translations as well as automatic speech recognition. It utilizes a BERT-based system for audio encoding, breaking down inputs into their component tokens for analysis, and a HiFi-GAN unit vocoder to generate spoken responses. Meta has also curated a massive open-source speech-to-speech and speech-to-text parallel corpus, dubbed SeamlessAlign: the company mined "tens of billions of sentences" and "four million hours" of speech from publicly available repositories to automatically align a huge volume of speech with texts and create a large set of speech-to-speech alignments, per the blog. When tested for robustness, SeamlessM4T reportedly outperformed its (current state-of-the-art) predecessor against background noises and speaker-style variations. As with most of its previous machine translation efforts (whether that's Llama, Massively Multilingual Speech (MMS), Universal Speech Translator (UST), or the ambitious No Language Left Behind (NLLB) project), SeamlessM4T is being open-sourced. "We believe SeamlessM4T is an important breakthrough in the AI community's quest toward creating universal multitask systems", the team wrote. "Keeping with our approach to open science, we are excited to share our model publicly to allow researchers and developers to build on this technology." If you're interested in working with SeamlessM4T for yourself, head over to GitHub to download the model, training data, and documentation. This article originally appeared on Engadget. 2023-08-22 13:30:40
Overseas TECH Engadget The best mobile microphones for 2023 https://www.engadget.com/best-mobile-microphones-for-recording-with-a-phone-154536629.html?src=rss The best mobile microphones for 2023. If you consider yourself a mobile creator and you're not using some sort of dedicated microphone, you might be holding yourself back. We're not judging, but your audience likely is. Audio, especially dialog, is often overlooked, but you need good sound quality if you want your content to stand out. There are many, many options for the home or office studio, but there's a surprising number of mobile-specific (or at least mobile-friendly) solutions out there to elevate your on-the-go recordings, be that for social media, a jam session, live streaming, making movies, podcasting, and beyond. The best mic for iOS or Android will vary depending on the task: recording a TikTok, a podcast, or a jam session all have slightly different needs, but the selection below covers most bases (and maybe even a few you didn't think of yet) for recording high-quality sound with little more than a mobile phone.

The gear

This guide is all about recording audio on the go, free from the constraints of a studio or office, but also far away from luxuries like power outlets, acoustically friendly rooms, and a full-size PC. As such, there are two styles of external mic that really shine here: lavalier (lapel) and shotgun. We'll be covering a few other types too, but between those, most tasks are covered. We'll also show you how you can use the USB mics you may already have with your phone, and even ways to connect heavy-duty studio classics (XLR) to your humble handset; but all that will be done through accessories. For now, let's start with the classic clip mics.

Lavalier mics

The obvious benefit of a lapel microphone is size. Their small profile makes them perfect for presenting to the camera, with the flexibility to move around while maintaining consistent audio quality. If you're a budding TikToker or vlogger, it's definitely worth having one of these mini microphones in your bag. The main trade-off, however, is that they're only good for recording the person they're attached to. If you have two people talking and only one is wearing the mic, you'll only get good audio for one half of the conversation; so for multi-person recordings you'll need a mic for each guest and a way to record them at the same time, and costs can go up quickly. Fortunately, lapel mics have become a very competitive market, with good, viable options costing very little. For an absolute bargain with a long cord and some connectivity accessories, the Boya BY-M1 is hard to argue with. But while these budget choices are great value, if you want something that will either last longer, is more versatile, or just sounds better, it's worth paying a little bit more.

Best 3.5mm mic: Rode Lavalier II

Rode's Lavalier II is a slick-looking, low-profile lavalier that sounds great, sitting somewhere in the sweet spot between budget and higher-end clip-on options. It's easy to recommend the Lavalier II on its sound alone, but it also comes with a rugged case and a good selection of accessories. For even more flexibility, you can pair it with Rode's AI-Micro interface, which provides easy connection to an iPhone or Android phone (or even PCs) and adds support for a second mic, perfect for recording podcasts or interviews.

Best USB-C mic: Sennheiser XS Lav USB-C

Sennheiser's XS USB-C lav mic is fairly affordable, sounds great, and plugs right into your phone or laptop without needing an adapter. This not only makes it convenient but reduces the overall cost, as you don't need a headphone adapter for your phone.
What's more, the XS has a generously long cable, which gives you plenty of scope for movement or framing.

A word on wireless mics

There has recently been an explosion in mobile-friendly wireless mic systems, and there are two we really like. The first is Rode's Wireless GO II. Arguably the original defined this category, but the second generation improves on it with two wireless transmitters, making it podcast- and interview-friendly. This wireless microphone system is also incredibly versatile, as it doubles as a standalone recorder, can be mounted in a camera cold shoe, and even has its own "reporter mic" adapter. Oh, and you can make any 3.5mm mic (including the lavaliers above) wireless by plugging it into one of the receivers. The second is the Mikme Pocket, an Austrian-designed, high-end wireless lavalier microphone system built to be particularly mobile-friendly. There's a comprehensive app for both video and audio recording, and internal storage, so you won't ever experience dropouts; it also means you can enjoy a practically infinite range. It's a higher spend, but if high-quality audio and near-infinite range are what you need, this is the one.

Adapters

So, we've already touched on this with the AI-Micro, which is an adapter of sorts. One of the first things you might bump up against when dealing with mobile audio accessories is TRRS vs TRS connectors. Simply put, 3.5mm TRS is what you might know as the age-old classic headphone jack, while TRRS became common for its support for headsets and inline mics. You can easily tell them apart, as TRS connectors have two black bands on them while a TRRS has three. For you, the budding creator, it can be a bit of an annoyance, as many 3.5mm lavaliers are going to be TRS and won't work when plugged into your phone's headphone adapter. Sometimes your lavalier might include what you need in the box, but otherwise you'll want to pick up a TRS-to-TRRS adapter. Of course, some smartphone-specific mics have TRRS connectors already; for those, you'll want a cable that goes the other way should you want to use them with other devices, like a DSLR.

Shotgun mics

You may be more familiar with shotgun microphones when it comes to video recording: it's the style of microphone most often found atop a DSLR or mirrorless camera, but they make great companions for other portable devices too, your cell phone included. The benefit of a shotgun is that they tend to be highly directional, which makes them perfect for podcasts, recording instruments, foley sounds, and much, much more. For us mobile recordists, another benefit is that they tend to be light and portable, perfect for slipping into a backpack or even a laptop bag. Even better, there are some great mobile-specific options.

Best shotgun mic for video and music: Sennheiser MKE 400 (2nd gen)

You shouldn't buy a mic just because of how it looks, but the MKE 400 from Sennheiser definitely makes its rivals look wimpy. More important than aesthetics, though, is how it sounds, and the MKE 400 records very cleanly, without obvious coloration of the audio. What's more, the battery-powered mic won't steal power from your phone or camera, and with three gain levels to choose from, you can boost things when needed or avoid clipping on louder subjects. The MKE 400 also comes with both TRS and TRRS cables for compatibility with a variety of devices. Its physical gain controls and high-pass filter (unlike the other two below, which are updated via an app) take the stress out of worrying whether your audio source moves or changes volume, as you can adjust on the fly. If you're a musician looking to record loud drums and then softer vocals on the move, for example, these tactile gain settings are a massive plus.
Best budget shotgun mic: Rode VideoMic GO II

When we tested the VideoMic GO II, we were surprised at just how good it sounded right out of the box; it rivals many desktop microphones that cost several times the price. You'll need a companion app to change settings, but otherwise it performs well across the board.

Best shotgun mic for portability: Shure MV88+

Not to be confused with the older MV88 that plugged directly into a Lightning port, the MV88+ is a mini shotgun mic made with the smartphone in mind. Often sold as a vlogging kit with a tripod and phone grip, the MV88+ has modular cables for connecting directly to Android phones and iPhones.

Desktop and USB mics go mobile

Mobile-specific mics are great, but there's nothing stopping you from using another mic you might already have, if it's somewhat portable. You'll definitely need to do a little dance with some adapters, but that's half the fun. Below are a couple of recommendations for "regular" microphones that pair well with a phone, and then the cables and adapters you'll need to get set up.

Apogee HypeMic

Arguably, few microphones could be described as more "mobile-friendly" than the HypeMic from Apogee. While it looks like a regular handheld mic, it's deceivingly small, making it very light and portable. It also comes with cables to connect it directly to iPhones and Android handsets; no adapters needed. Don't let the small size deceive you, though: the HypeMic has a big trick up its sleeve, a built-in analog compressor for professional-sounding vocals. Whether you record podcasts, vocals, or instruments, there's a setting on the HypeMic just for you. It's a little on the spendy side, but you get a very versatile device that's just as useful on the desktop, too.

Samson Q2U

This dynamic mic is a favorite with podcasters, with many production companies using it as their standard mic to send out to remote guests, thanks to its excellent quality-to-value ratio. The Q2U features both USB and XLR connectivity, making it versatile for both desktop and mobile applications, but it's the former we are interested in here, as that's what allows you to connect it to your phone with nothing more than a USB cable and an adapter (see below). What's more, the Q2U is solid enough to endure a little rough and tumble, so it will happily live in the bottom of your backpack, ready for when you need it. Meanwhile, the handheld design is versatile enough to turn its hand to singing, instruments, podcasts, interviews, and more.

Tula

You may not be familiar with the name, but Tula snuck into our hearts with its versatile, vintage-inspired debut microphone. From a mobile perspective, the Tula connects to Androids directly over USB-C, or to iPhones with the right USB-C-to-Lightning cable (more on this below) or a USB "camera kit" adapter. What makes the Tula special is that it's also a desktop mic and a portable recorder, with a lavalier input and built-in storage, and it even features noise cancellation, perfect for cutting down on background sounds. With the Tula, you could theoretically have one mic for home, mobile, and standalone recording.

IK Multimedia iRig Pre

If you already have a stash of XLR mics, or really do need a studio condenser microphone with phantom power, then the iRig Pre is a portable interface that will feed any XLR mic into your phone.
It runs off two AA batteries, which it uses to supply phantom power when needed, and it won't drain your phone. There's also a headphone jack for monitoring, gain controls, and LEDs to help prevent clipping.

A word on cables

Connecting USB microphones directly to phones is rarely as simple as just one cable, although that's starting to become more common. In general, Android makes this simpler; but also, thanks to the wide range of manufacturers and software versions, you can't always guarantee things will work smoothly. The iPhone is a whole other situation: USB microphones have a good chance of working via the USB camera kit we mentioned earlier, but that's still inelegant sometimes. Frustratingly, some USB-C-to-Lightning cables will play nice with microphones, but sadly most will not, including Apple's own. One confirmed option is this cable from Fiio, or this generic alternative. These are inexpensive enough that it's worth having a couple around if you work with audio a lot; they can, of course, also be used to charge your phone as a bonus. This article originally appeared on Engadget. 2023-08-22 13:30:03
Cisco Cisco Blog Cisco CX Helps Trident Technical College Deliver Secure Educational Services https://feedpress.me/link/23532/16308397/cisco-cx-helps-trident-technical-college-deliver-secure-educational-services Cisco CX Helps Trident Technical College Deliver Secure Educational Services. Join us in celebrating one of the latest CX customer stories: Trident Technical College was able to provide secure, accessible, and innovative educational services by working with Cisco Services, creating a seamless experience for students to access data and learn more effectively. 2023-08-22 13:26:17
Finance Financial Services Agency (FSA) website: Published the status of appointment of employees with disabilities. https://www.fsa.go.jp/common/about/sonota/shougai_joukyou.html persons with disabilities 2023-08-22 14:00:00
Finance Financial Services Agency (FSA) website: Posted a summary of the post-Cabinet-meeting press conference (August 15, 2023) by Finance Minister Suzuki, who also serves as Minister of State for Special Missions in the Cabinet Office. https://www.fsa.go.jp/common/conference/minister/2023b/20230815-1.html Minister of State for Special Missions 2023-08-22 14:00:00
News BBC News - Home Greece wildfires: Eighteen bodies found in Greek forest https://www.bbc.co.uk/news/world-europe-66579193?at_medium=RSS&at_campaign=KARANGA greece 2023-08-22 13:11:02
News BBC News - Home Threads: Meta to launch web version of flagging Threads app https://www.bbc.co.uk/news/technology-66574762?at_medium=RSS&at_campaign=KARANGA early 2023-08-22 13:43:28
News BBC News - Home Four murder arrests after delivery driver dies in Shrewsbury https://www.bbc.co.uk/news/uk-england-shropshire-66576970?at_medium=RSS&at_campaign=KARANGA shrewsbury 2023-08-22 13:14:42
News BBC News - Home Mason Greenwood: Man Utd forward moving to Saudi Arabia would be 'very surprising' https://www.bbc.co.uk/sport/football/66574759?at_medium=RSS&at_campaign=KARANGA Mason Greenwood: Man Utd forward moving to Saudi Arabia would be 'very surprising'. Mason Greenwood moving from Manchester United to a Saudi Arabian club would be "very surprising", a senior league source has told BBC Sport. 2023-08-22 13:28:09