AWS |
AWS Management Tools Blog |
Build a multi-account access notification system with Amazon EventBridge |
https://aws.amazon.com/blogs/mt/build-a-multi-account-access-notification-system-with-amazon-eventbridge/
|
Build a multi-account access notification system with Amazon EventBridge. While working with many of our customers, a recurring question has been: "How can we be notified when users log in to key accounts so we can take action if needed?" This post shows how to implement a flexible, simple, and serverless solution that creates notifications when sensitive accounts are logged in to. Alerting on high … |
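The excerpt above describes the core pattern: match console sign-in events with an EventBridge rule and notify on them. A minimal single-account sketch in CDK (TypeScript) follows; the topic, stack names, and wiring are illustrative assumptions, and the post's actual multi-account design presumably forwards events from member accounts to a central bus rather than alerting locally.

import { Stack, StackProps } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as sns from 'aws-cdk-lib/aws-sns';
import { Construct } from 'constructs';

export class LoginAlertStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Hypothetical notification target; the post may use a different channel.
    const topic = new sns.Topic(this, 'LoginAlerts');

    // Console sign-ins surface on the default bus as CloudTrail-backed events.
    new events.Rule(this, 'ConsoleLoginRule', {
      eventPattern: {
        source: ['aws.signin'],
        detailType: ['AWS Console Sign In via CloudTrail'],
      },
    }).addTarget(new targets.SnsTopic(topic));
  }
}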
2023-08-15 16:38:08 |
AWS |
New posts tagged AWS - Qiita |
Creating a cluster parameter group for RDS for PostgreSQL |
https://qiita.com/takumats/items/c2f7dde42d653f176c1f
|
rdsforpostgresql |
2023-08-16 01:57:50 |
AWS |
New posts tagged AWS - Qiita |
AWS Well-Architected Labs Operational Excellence 100 Labs INVENTORY AND PATCH MANAGEMENT notes |
https://qiita.com/kenryo/items/a706146072343486086d
|
architected |
2023-08-16 01:14:54 |
Overseas TECH |
Ars Technica |
Amazon exec responsible for money-losers like Alexa and Fire Phone is departing |
https://arstechnica.com/?p=1960864
|
alexa |
2023-08-15 16:33:56 |
Overseas TECH |
Ars Technica |
Astra’s new rocket won’t launch until 2024—if it ever flies |
https://arstechnica.com/?p=1960853
|
launch |
2023-08-15 16:22:24 |
Overseas TECH |
MakeUseOf |
7 Tips to Stay Productive When Taking an Online Course |
https://www.makeuseof.com/tips-to-stay-productive-taking-online-course/
|
courses |
2023-08-15 16:15:01 |
Overseas TECH |
DEV Community |
I “Promise” that u will understand promises in JS forever 😜 |
https://dev.to/dev_en/i-promise-that-u-will-understand-promises-in-js-forever-1i32
|
I "Promise" that you will understand promises in JS forever. We all know that JavaScript is a synchronous programming language, but callback functions help to make it an asynchronous programming language. No matter how helpful callback functions were in JS, they came with their own problems and pain points. And trust me, "callback hell" is not the biggest pain point of them. So you wonder what is… it is none other than "inversion of control". Let's not waste time understanding all of their disadvantages; you are smart enough to Google (or ask ChatGPT or Bard about) the terms and understand. But because callback functions are so troublesome, we came up with Promises in JS. Let us understand promises in a very easy way from the example below.

Promises are like a waiter. They promise to bring you food, but it takes time. While you wait, you can do other things. If the food is ready, the waiter brings it to you. If the food is not ready, the waiter tells you. In other words, promises are a way to handle asynchronous operations in JavaScript. Asynchronous operations are operations that do not block the main thread of execution. This means that you can perform multiple asynchronous operations at the same time, and your page will not become unresponsive. Promises are just an easier way to manage async operations in JS. The Promise object represents the eventual completion (or failure) of an asynchronous operation and its resulting value. From the above example, we can see that promises in JS are similar to the commitments we make in our daily lives. Promises are mainly used to increase the readability of the code: they make the code cleaner and more manageable, and it is considered good practice to use them.

A Promise has three states:
Pending: The promise is not yet completed.
Fulfilled: The promise has been completed successfully.
Rejected: The promise has completed with an error.

Now let us see how to create promises. We can create a promise using the new Promise constructor. The constructor takes an executor function, which itself receives two arguments: a resolve function and a reject function. The resolve function is called when the promise is fulfilled, and the reject function is called when the promise is rejected. Here is an example of how to create a promise that resolves with a number:

const promise = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve(100); // placeholder value; the original number was lost in the excerpt
  }, 1000);
});

How to use promises: We can use the then method to handle the fulfillment or rejection of a promise. The then method takes two arguments: a callback function that is called when the promise is fulfilled, and a callback function that is called when the promise is rejected. Here is an example of how to handle the promise we created above, using catch for errors:

promise
  .then((value) => {
    console.log(value);
  })
  .catch((error) => {
    console.log(error); // e.g. Error: The request timed out
  });

In this example, the then method will call the first callback function when the promise is fulfilled; that callback logs the value of the promise. If the promise is rejected, the catch handler is called instead and logs the error message.

Helper functions related to promises:

Promise.all: The Promise.all(iterable) method returns a single Promise that resolves when all of the promises in the iterable argument have resolved, or when the iterable argument contains no promises. It rejects with the reason of the first promise that rejects.

Promise.race: The Promise.race(iterable) method returns a promise that resolves or rejects as soon as one of the promises in the iterable resolves or rejects, with the value or reason from that promise.

Key points to remember:
- Use promises whenever you are using async or blocking code.
- A promise is an object that returns a value in the future.
- A promise starts in the pending state and ends in either a fulfilled state or a rejected state.
- resolve maps to then, and reject maps to catch.
- If something needs to be done in both cases, use finally.

Conclusion: Promises are a way to chain asynchronous operations cleanly, without deeply nested callbacks.
Promise.all: waits for all promises to resolve and returns an array of their results. If any of the given promises rejects, that rejection becomes the error of Promise.all, and all other results are ignored.
Promise.allSettled: waits for all promises to settle and returns their results as an array of objects, each of which can individually be either fulfilled or rejected.
Promise.race: returns the result of whichever promise instance is first resolved or rejected.

That's a wrap for this one. Stay tuned for further blogs on more advanced JavaScript concepts. |
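To make the three helpers above concrete, here is a small self-contained sketch (the values and delays are arbitrary placeholders, not from the article):

const fast = new Promise((resolve) => setTimeout(() => resolve('fast'), 100));
const slow = new Promise((resolve) => setTimeout(() => resolve('slow'), 500));
const flaky = new Promise((_, reject) => setTimeout(() => reject(new Error('boom')), 300));

// all: resolves with every result, in input order, once all have fulfilled.
Promise.all([fast, slow]).then((results) => console.log(results)); // ['fast', 'slow']

// race: settles as soon as the first promise settles.
Promise.race([fast, slow]).then((winner) => console.log(winner)); // 'fast'

// allSettled: never short-circuits; reports each outcome individually.
Promise.allSettled([fast, flaky]).then((outcomes) =>
  outcomes.forEach((o) => console.log(o.status)) // 'fulfilled', then 'rejected'
);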
2023-08-15 16:41:31 |
Overseas TECH |
DEV Community |
The Union of GraphQL and Large Language Models |
https://dev.to/blazestudios23/the-union-of-graphql-and-large-language-models-23dl
|
The Union of GraphQL and Large Language Models. The Perfect Union: Exploring the Synergy between GraphQL and Large Language Models.

Introduction: In the dynamic world of software engineering, two technologies have emerged as transformative: GraphQL and Large Language Models (LLMs), such as those from OpenAI. While each has revolutionized how we handle data and build applications, their combined potential is truly remarkable. This blog post will delve into the perfect union between GraphQL and LLMs, and how this synergy can benefit software engineers.

Understanding the Basics: Before we delve into the heart of the matter, let's demystify the basics. LLMs, such as OpenAI's models, are powerful tools that can understand and generate human-like text. They can answer questions, write essays, summarize texts, and even generate code. On the other hand, GraphQL is a query language for APIs and a runtime for executing those queries. It allows clients to request exactly what they need, making it easier to evolve APIs over time. Two tools that exemplify the integration of these technologies are LangChain and Wundergraph: LangChain offers a GraphQL plugin, while Wundergraph provides an OpenAI integration. Both tools showcase how GraphQL and LLMs can be combined to create powerful solutions.

The Synergy between GraphQL and LLMs: GraphQL and LLMs complement each other in several ways. Both technologies use graphs, which are structures that model the relationships between entities. In the context of LLMs, graphs can represent the connections between different concepts in a text. In GraphQL, graphs represent the relationships between different types of data. One of the key synergies between GraphQL and LLMs is the ability to feed data from GraphQL APIs into LLMs. This allows LLMs to generate responses based on precise, up-to-date data. Conversely, LLMs can be added to federated GraphQL APIs, enriching the data graph with AI-generated content.

LangChain: A Case Study. LangChain provides a shining example of how GraphQL can be integrated with other technologies. Its GraphQL plugin allows users to consume GraphQL APIs with ease. This means that you can request exactly what you need from an API, reducing over-fetching and under-fetching of data. The LangChain GraphQL plugin is easy to use: with just a few lines of code, you can connect to a GraphQL API and start making queries. This simplicity, combined with the power of GraphQL, makes LangChain a valuable tool for any software engineer. For more information and examples, check out the LangChain documentation.

Code Example:

from langchain import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.utilities import GraphQLAPIWrapper

llm = OpenAI(temperature=0)

tools = load_tools(
    ["graphql"],
    graphql_endpoint="https://example.com/graphql",  # endpoint URL elided in the original excerpt
)

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

graphql_fields = """allFilms {
    films {
      title
      director
      releaseDate
      speciesConnection {
        species {
          name
          classification
          homeworld {
            name
          }
        }
      }
    }
  }
"""

suffix = "Search for the titles of all the star wars films stored in the graphql database that has this schema "

agent.run(suffix + graphql_fields)

Wundergraph: A Case Study. Wundergraph, on the other hand, showcases how OpenAI can be integrated into a GraphQL API. With Wundergraph's OpenAI integration, you can include AI-generated responses in your data graph. This opens up a world of possibilities, from AI-powered chatbots to dynamic content generation. While Wundergraph requires you to use their library and architecture, it serves as a good example of how OpenAI can be implemented into a GraphQL API. For more details and examples, visit the Wundergraph documentation.

Wundergraph Code Example:

// .wundergraph/operations/openai/weather.ts
import { createOperation, z } from '../../generated/wundergraph.factory';

export default createOperation.query({
  input: z.object({
    country: z.string(),
  }),
  description: 'This operation returns the weather of the capital of the given country',
  handler: async ({ input, openAI, log }) => {
    const parsed = await openAI.parseUserInput({
      userInput: input.country,
      schema: z.object({
        country: z.string().nonempty(),
      }),
    });
    const agent = openAI.createAgent({
      functions: [{ name: 'CountryByCode' }, { name: 'weather/GetCityByName' }],
      structuredOutputSchema: z.object({
        city: z.string(),
        country: z.string(),
        temperature: z.number(),
      }),
    });
    const out = await agent.execWithPrompt({
      prompt: `What's the weather like in the capital of ${parsed.country}?`,
      debug: true,
    });
    return out;
  },
});

The Impact on Software Engineering: The combination of GraphQL and LLMs has significant implications for software engineering. By integrating these technologies, developers can create more dynamic, intelligent, and efficient applications. Whether you're already using GraphQL or OpenAI in your projects, or you're planning to do so, understanding the synergy between these technologies can give you a competitive edge.

Conclusion: In conclusion, the marriage between GraphQL and Large Language Models is indeed a match made in heaven. The synergy between these technologies unlocks new possibilities, from smarter APIs to more dynamic applications. As software engineers, it's our job to stay on top of these trends and leverage them to build better solutions. So why not explore the union of GraphQL and LLMs today? You might just discover a new way to revolutionize your projects.

Further Reading: For those interested in delving deeper into the practical applications of GraphQL and Large Language Models, here are some resources that might be helpful: WunderGraph's website, LangChain's website, GraphQL.org, How To GraphQL, Apollo GraphQL, Practical GraphQL: Become a GraphQL Ninja, GitHub GraphQL API, The Guild Blog. Remember, the implementation of GraphQL with Large Language Models will depend on your specific use case and the programming language you are using. These resources should provide a good starting point. Happy exploring! |
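As a rough, generic illustration of the "feed GraphQL data into an LLM" pattern the article describes (the endpoint, query, and model name below are placeholder assumptions, not from the article; the OpenAI client usage follows the v4 Node SDK):

import OpenAI from 'openai';

// Hypothetical endpoint and query; swap in your own GraphQL API.
const GRAPHQL_ENDPOINT = 'https://example.com/graphql';
const QUERY = '{ products { name price } }';

async function answerFromGraph(question: string): Promise<string> {
  // 1. Fetch precise, up-to-date data from the GraphQL API.
  const res = await fetch(GRAPHQL_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: QUERY }),
  });
  const { data } = await res.json();

  // 2. Ground the LLM's answer in that data via the prompt.
  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini', // assumed model name for the sketch
    messages: [
      { role: 'system', content: `Answer using only this data: ${JSON.stringify(data)}` },
      { role: 'user', content: question },
    ],
  });
  return completion.choices[0].message.content ?? '';
}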
2023-08-15 16:20:13 |
Overseas TECH |
DEV Community |
We moved our Cloud operations to a Kubernetes Operator |
https://dev.to/sklarsa/we-moved-our-cloud-operations-to-a-kubernetes-operator-392n
|
We moved our Cloud operations to a Kubernetes Operator. Kubernetes operators seem to be everywhere these days. There are hundreds of them available to install in your cluster. Operators manage everything from X.509 certificates to enterprise-y database deployments, service meshes, and monitoring stacks. But why add even more complexity to a system that many say is already convoluted? With an operator controlling cluster resources, if you have a bug, you now have two problems: figuring out which of the myriad Kubernetes primitives is causing the issue, as well as understanding how that dang operator is contributing to the problem. So when I proposed the idea of introducing a custom-built operator into our Cloud infrastructure stack, I knew that I had to really sell it as a true benefit to the company, so people wouldn't think that I was spending my time on a piece of RDD (resume-driven development) vaporware. After the go-ahead to begin the project, I spent my days learning about Kubernetes internals, scouring the internet far and wide to find content about operator development, and writing and rewriting the base of what was to become our QuestDB Cloud operator. After many months of hard work, we've finally deployed this Kubernetes operator into our production Cloud. Building on top of our existing one-shot-with-rollback provisioning model, the operator adds new functionality that allows us to perform complex operations like automatically recovering from node failure, and it will also orchestrate upcoming QuestDB Enterprise features such as High Availability and Cold Storage. If we've done our job correctly, you, as a current or potential end user, will continue to enjoy the stability and easy management of your growing QuestDB deployments without noticing a single difference. This is the story of how we accomplished this engineering feat, and why we even did it in the first place.

Original System Architecture: To manage customer instances in our cloud, we use a queue-based system that is able to provision databases across different AWS accounts and regions using a centralized control plane, known simply as the provisioner. When a user creates, modifies, or deletes a database in the QuestDB Cloud, an application backend server asynchronously sends a message that describes the change to a backend worker, which then appends that message to a Kafka log. Each message contains information about the target database (its unique identifier, resident cluster, and information about its config, runtime, and storage) as well as the specific set of operations to be performed. These instructional messages get picked up by one of many provisioner workers, each of which subscribes to a particular Kafka log partition. Once a provisioner worker receives a message, it is decoded and its instructions are carried out inside the same process. Instructions vary based on the specific operation that needs to be performed, ranging from simple tasks like an application-level restart to more complex ones, such as tearing down an underlying database's node if the user wants to pause an active instance to save costs on an unused resource. Each high-level instruction is broken down into smaller, composable, granular tasks. Thus, the provisioner coordinates resources at a relatively low level, explicitly managing k8s primitives like StatefulSets, Deployments, Ingresses, Services, and PersistentVolumes, as well as AWS resources like EC2 instances, Security Groups, Route53 records, and EBS volumes. Provisioning progress is reported back to the application through a separate Kafka queue (we call it the gossip queue). Backend workers listen to this queue for new messages and update a customer database's status in our backend database when new information has been received. Each granular operation also has a corresponding rollback procedure, in case the provisioner experiences an error while running a set of commands. These rollbacks are designed to leave the target database's infrastructure in a consistent state, although it's important to note that this is not the desired state as intended by the end user's actions. Once an error has been encountered and the rollback executed, the backend will be notified about the original error through the gossip Kafka queue, in order to surface any relevant information back to the end user. Our internal monitoring system will also trigger alerts on these errors, so an on-call engineer can investigate the issue and kick off the appropriate remediation steps. While this system has proven to be reliable over the past several months since we first launched our public cloud offering, the framework still leaves room for improvement. Much of our recent database development work around High Availability and Cold Storage requires adding more infrastructure components to our existing system, and adding new components to any distributed system tends to increase the system's overall error rate. As we continue to add complexity to our infrastructure, there is a risk that the above rollback strategy won't be sufficient to recover from new classes of errors. We need a more responsive and dynamic provisioning system that can automatically respond to faults and take proactive remediation action, instead of triggering alerts and potentially waking up one of our globally distributed engineers. This is where a Kubernetes operator can pick up the slack and improve our existing infrastructure automation.

Introducing a Kubernetes Operator: Now that stateful workloads have started to mature in the Kubernetes ecosystem, practitioners are writing operators to automate the management of these workloads. Operators that handle full-lifecycle tasks like provisioning, snapshotting, migrating, and upgrading clustered database systems have started to become mainstream across the industry. These operators include Custom Resource Definitions (CRDs) that define new types of Kubernetes API resources, which describe how to manage the intricacies of each particular database system. Custom Resources (CRs) are then created from these CRDs, effectively defining the expected state of a particular database resource. CRs are managed by operator controller deployments that orchestrate various Kubernetes primitives to migrate the cluster state to its desired state, based on the CR manifests. By writing our own Kubernetes operator, I believed we could take advantage of this model by allowing the provisioner to focus only on higher-level tasks while an operator handles the nitty-gritty details of managing each individual customer database. Since an operator is directly hooked in to the Kubernetes API server through Resource Watches, it can immediately react to changes in cluster state and constantly reconcile cluster resources to achieve the desired state described by a Custom Resource spec. This allows us to automate even more of our infrastructure management, since the control plane is effectively running all of the time, instead of only during explicit provisioning actions. With an operator deployed in one of our clusters, we can now interact with a customer database by executing API calls and CRUD operations against a single resource: a new Custom Resource Definition that represents an entire QuestDB Cloud deployment. This CRD allows us to maintain a central Kubernetes object that represents the canonical version of the entire state of a customer's QuestDB database deployment in our cluster. It includes fields like the database version, instance type, volume size, server config, and the underlying node image, as well as key pieces of status like a node's IP address and DNS record information. Based on these spec field values, and changes to them, the operator performs a continuous reconciliation of all the relevant Kubernetes and AWS primitives to match the expected state of the higher-level QuestDB construct.

New Architecture: This leads us to our new architecture. While this diagram resembles the old one, there is a key difference: a QuestDB operator instance is now running inside each cluster. Instead of the provisioner being responsible for all of the low-level Kubernetes and AWS building blocks, it is now only responsible for modifying a single object: a QuestDB Custom Resource. Any modifications to the state of a QuestDB deployment, large or small, can be made by changing a single manifest that describes the desired state of the entire deployment. This solution still uses the provisioner, but in a much more limited capacity than before. The provisioner continues to act as the communication link between the application and each tenant cluster, and it is still responsible for coordinating high-level operations. But it can now delegate low-level database management responsibilities to the operator's constantly running reconciliation loops.

Operator Benefits:

Auto-Healing: Since the operator model is eventually consistent, we no longer need to rely on rollbacks in the face of errors. Instead, if the operator encounters an error in its reconciliation loop, it re-attempts that reconciliation at a later time (using an exponential backoff by default). The operator is registered to Resource Watches for all QuestDBs in the cluster, as well as for their subcomponents, so a change in any of these components, or in the QuestDB spec itself, will trigger a reconciliation loop inside the operator. If a component is accidentally mutated or experiences some state drift, the operator will be immediately notified of this change and automatically attempt to bring the rest of the infrastructure to its desired state. This property is incredibly useful to us, since we provision a dedicated node for each database and use a combination of taints and NodeSelectors to ensure that all of a deployment's Pods are running on this single node. Due to these design choices, we are exposed to the risk of a node hardware failure that renders it inaccessible, and typical Kubernetes recovery patterns cannot save us. So instead of using an alerting-first remediation paradigm, where on-call engineers would follow a runbook after receiving an alert that a database is down, the operator allows us to write custom node-failover logic that is triggered as soon as the cluster is aware of an unresponsive node. Now our engineers can sleep soundly through the night while the operator automatically performs the remediation steps that they would previously have been responsible for.

Simpler Provisioning: Instead of having to coordinate tens of individual objects in a typical operation, the provisioner now only has to modify a single Kubernetes object while executing higher-level instructions. This dramatically reduces the code complexity inside the provisioner codebase and makes it far easier for us to write unit and integration tests. It is also easier for on-call engineers to quickly diagnose and solve issues with customer databases, since all of the relevant information about a specific database is contained inside a single Kubernetes spec manifest, instead of spread across various resource types inside each namespace. Although, as expressed in the previous point, hopefully there will be fewer problems for our on-call engineers to solve in the first place.

Easier Upgrades: The rapid Kubernetes release cadence can be a burden for smaller infrastructure teams to keep on top of. This schedule, combined with the short lifetime of each release, means that it is critical for organisations to keep their clusters up to date. The importance of this is magnified if you're not running your own control plane, since vendors tend to drop support for old k8s versions fairly quickly. Since we're running each database on its own node, and there are only two of us to manage the entire QuestDB Cloud infrastructure, we could easily spend most of our time just upgrading customer database nodes to new Kubernetes versions. Instead, our operator now allows us to automatically perform a near-zero-downtime node upgrade with a single field change on our CRD. Once the operator detects a change in a database's underlying node image, it will handle all of the orchestration around the upgrade, including keeping the customer database online for as long as possible and cleaning up stale resources once the upgrade is complete. With this automation in place, we can now perform partial upgrades against subsets of our infrastructure to test new versions before rolling them out to the entire cluster, and also seamlessly roll back to an older node version if required.

Control Plane Locality: By deploying an operator per cluster, we can improve the time it takes for provisioning operations to complete by avoiding most of the cross-region traffic that takes place from the provisioner to remote tenant clusters. We only need to make a few cross-region API calls to kickstart the provisioning process (by performing a CRUD operation on a QuestDB spec), and the local cluster operator will handle the rest. This improves our API call network latency and reliability, reducing the number of retries for any given operation as well as the overall database provisioning time.

Resource Coordination: Now that we have a single Kubernetes resource to represent an entire QuestDB deployment, we can use this resource as a primitive to build even larger abstractions. For example, we can now easily create a massive (but temporary) instance to crunch large amounts of data, save that data to a PersistentVolume in the cluster, then spin up another, smaller database to act as the query interface. Due to the inherent composability of Kubernetes resources, the possibilities with a QuestDB Custom Resource are endless.

The DevOps Stuff:

Testing the Operator: Before deploying such a significant piece of software in our stack, I wanted to be as confident as possible in its behavior and runtime characteristics before letting it run wild in our clusters. So we wrote and ran a large series of unit and integration tests before we felt confident enough to deploy the new infrastructure to development, and eventually to production. Unit tests were written against an in-memory Kubernetes API server, using the controller-runtime pkg/envtest library. Envtest allowed us to iterate quickly, since we could run tests against a fresh API cluster that started up in a matter of seconds, instead of having to spin up a new cluster every time we wanted to run a test suite. Even existing micro-cluster tools like Kind could not get us that level of performance. Since envtest is also not packaged with any controllers, we could set our test cluster to a specific state and be sure that this state would not be modified unless we explicitly did so in our test code. This allowed us to fully test specific edge cases without having to worry about control-plane-level controllers modifying various objects out from underneath us. But unit testing is not enough, especially since many of the primitives that we are manipulating are fairly high-level abstractions in and of themselves. So we also spun up a test EKS cluster to run live tests against, built with the same manifests as our development and production environments. This cluster allowed us to run tests using live AWS and k8s APIs, and it was integral when testing features like node recovery and upgrades. We were also able to leverage Ginkgo's parallel testing runtime to run our integration tests on multiple concurrent processes. This provided multiple benefits: we could run our entire integration test suite in a matter of minutes, and also reuse the same suite to load-test the operator in a production-like environment. Using these tests, we were able to identify hot spots in the code that needed further optimization, and we experimented with ways to save API calls, to ease the load on our own Kubernetes API server while also staying under various AWS rate limits. It was only after running these tests over and over again that I felt confident enough to deploy the operator to our dev and prod clusters.

Monitoring: Since we built our operator using the Kubebuilder framework, most standard monitoring tasks were handled for us out of the box. Our operator automatically exposes a rich set of Prometheus metrics that measure reconciliation performance, the number of k8s API calls, workqueue statistics, and memory-related metrics. We were able to ingest these metrics into pre-built dashboards by leveraging Kubebuilder's grafana/v1-alpha plugin, which scaffolds two Grafana dashboards to monitor operator resource usage and performance. All we had to do was add these to our existing Grafana manifests, and we were good to go.

Migrating to the New Paradigm: No article about a large infrastructure change would be complete without a discussion of the migration strategy from the old to the new. In our case, we designed the operator to be a drop-in replacement for the database instances managed by the provisioner. This involved many tedious rounds of diffing yaml manifests to ensure that the resources coordinated by the operator would match the ones that were already in our cluster. Once this resource parity was reached, we utilized Kubernetes owner-dependent relationships to create an Owner Reference on each sub-resource that pointed to the new QuestDB Custom Resource. Part of the reconciliation loop involves ensuring that these ownership references are in place and, if not, creating them automatically. This way, the operator can seamlessly inherit legacy resources through its main reconciliation codepath. We were able to roll this out incrementally due to the guarantee that the operator cannot make any changes to resources that are not owned by a QuestDB CR. Because of this property, we could create a single QuestDB CR and monitor the operator's behavior on only the cluster resources associated with that database. Once everything behaved as expected, we started to roll out additional Custom Resources. This could be easily controlled on the provisioner side by checking for the existence of a QuestDB CR for a given database: if the CR existed, then we could be sure that we were dealing with an operator-managed instance; otherwise, the provisioner would behave as it did before the operator existed.

Conclusion: I'm extremely proud of the hard work that we've put into this project. Starting from an empty git repo, it's been a long journey to get here. There have been several times over the past few months when I questioned whether all of this work was even worth the trouble, especially since we already had a provisioning system that was battle-tested in production. But with the support of my colleagues, I powered through to the end. And I can definitely say that it's been worth it, as we've already seen immediate benefits from moving to this new paradigm. Using our QuestDB CR as a building block has paved the way for new features like a large-format bulk CSV upload with our SQL COPY command. The new node upgrade process will save us countless hours with every new k8s release. And even though we haven't experienced a hardware failure yet, I'm confident that our new operator will seamlessly handle that situation as well, without paging an on-call engineer. Especially since we are a small team, it is critical for us to use automation as a force multiplier that assists us in managing our Cloud infrastructure, and a Kubernetes operator provides us with next-level automation that was previously unattainable. The best way to determine the best fit is to make it real with your own data. QuestDB Cloud offers free credit to get started. Or, if you would rather play around with sample data first, check out our live Web Console. |
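The article's operator is built in Go with Kubebuilder, but the watch-and-reconcile loop it describes can be sketched with any Kubernetes client. A minimal, hypothetical TypeScript illustration using @kubernetes/client-node follows; the CRD group, version, and plural are made up for the example, and a real controller would enqueue and retry rather than reconcile inline:

import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

// Watch every QuestDB-style custom resource and re-run reconciliation on any
// change, mirroring the "Resource Watch -> reconcile" loop described above.
const watch = new k8s.Watch(kc);
watch.watch(
  '/apis/example.questdb.io/v1beta1/questdbs', // hypothetical group/version/plural
  {},
  (phase, obj: any) => {
    // phase is ADDED, MODIFIED, or DELETED; drive cluster state toward obj.spec here.
    console.log(`reconcile ${obj?.metadata?.name} after ${phase}`);
  },
  (err) => console.error('watch closed; a real controller would restart it', err),
);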
2023-08-15 16:16:22 |
Overseas TECH |
DEV Community |
Why are WebSockets so hard? |
https://dev.to/momentohq/why-are-websockets-so-hard-59i4
|
Why are WebSockets so hard? A couple of years ago, I worked on a project to bring real-time notifications into my web application. I was excited at the idea of "real time" and immediately knew I was going to get a chance to implement WebSockets. I knew what WebSockets did, but I didn't really know what they were. Meaning: I knew you could send messages from a server to a browser, but I had no idea how. I didn't know much more than the fact that there were "connections" you could use to push data both to and from the back end. I set off to build what I thought was going to be a two-day task. What could possibly go wrong? I then plunged into a downward spiral of complexity that made me rethink being a software engineer. Let's talk about it.

WebSocket API structure: I come from a REST background. Endpoints have resource-based paths, with intent shown by which HTTP method you're using: GET (load data), POST (create data), PUT (update data), etc. The first thing I saw in the AWS API Gateway documentation were these weird $connect and $disconnect routes. By naming convention I assumed what these routes did, but I didn't know what to do with them. It wasn't intuitive to me how to uniquely identify a user who was trying to connect. I didn't know if data would freely pass back and forth across this connection once it was established. I also had no idea how to keep track of the connection, or if I even needed to keep track. It was just one rabbit hole after another. Eventually I discovered that with AWS API Gateway, the connections are managed by the service itself, but you, the developer, are responsible for keeping track of who is connected and what information they receive. I also learned that data does not just freely flow back and forth. For interactions going from the client to the server, you have to define your own routes and point them to backing compute resources. Each route required an API Gateway V2 Route, an API Gateway V2 Integration, a Lambda function, and a Lambda function permission resource defined in my Infrastructure as Code, which added up to a hefty chunk of code per route. For data going from the server to the client, you can send anything you want; you need to develop a convention for identifying different types of messages so you can handle them appropriately. The disparity between client-to-server and server-to-client threw me for a loop. One was very rigid and structured, while the other was loosey-goosey. It doesn't quite feel like a way to build scalable, maintainable software.

Connection management: As I said earlier, API Gateway manages maintaining connections for you, but you're responsible for figuring out what data to send to which connection. Let's take an example. Imagine our user Mallory wants to be notified when tickets for Taylor Swift, Adele, or Ed Sheeran become available. When she connects to our ticket-vendor site, we save records into our database: one record that identifies the connection and user metadata, and one record for each artist she wants to be notified for. For the artist records, the pk is her connection id and the sk indicates it's a subscription record. We add the artist name as a GSI, so when we get an event indicating that Ed Sheeran tickets are on sale, we can immediately notify all the connections subscribed to him. To notify the subscribers with an AWS serverless back end, we'd trigger a Lambda function on an EventBridge event saying which artist had tickets available. The function would query the artist GSI in DynamoDB to find all the connections subscribed to the incoming artist. Then we'd iterate over each record, publishing the ticket information to the connected users (a sketch of this fan-out appears below). That's a lot of work. When the user disconnects, we can query the database for all records with the pk containing the connection id and delete them. In case we miss the disconnect event from API Gateway, we set a time to live (TTL) on the connection records, for however long fits your use case, to delete them automatically. This is a lot of infrastructure for something with "technically" no business value. This is simply a microservice that alerts users. This is code that you have to maintain over time, that could get stale, slow, or deprecated. Code is a liability, after all.

Security: I come from a GovTech background; an app isn't secure until it's overly secure. So when I found out that the only route on a WebSocket API that supports auth is $connect, I was a little taken aback. Once a connection is established, it has free rein to call any route it wants, without passing in an auth header or any other form of credentials. I've had a while to stew on this, and it makes sense in theory. Since WebSocket connections are stateful, you shouldn't need to reauthenticate every time you make a call. That would be like knocking on someone's door, saying your name, then, after you're inside, restating who you are every time you do something. Doesn't really make sense. Passing in an auth header to a WebSocket isn't as easy as you'd think, either. Popular clients like SocketIO don't really support auth headers well unless you use it for both the client and server. The best way I found to pass a bearer token through to a WebSocket hosted in AWS was to use a query string parameter. You could also repurpose the Sec-WebSocket-Protocol header to accept both a subprotocol and the auth token, but that is against the grain, and one of those "just because you could doesn't mean you should" moments.

Client-side SDKs: People seem to love SocketIO. It has millions of weekly downloads on npm and is arguably one of the better ways to connect to a WebSocket. But just because it's popular doesn't mean it's easy. For whatever reason, I struggled big time to get it working with API Gateway. Something with the WebSocket protocol (wss instead of https) and the way AWS set up the API just didn't get along well. Through much trial and error, shifting auth around, and a few rage quits, I've been able to get WebSockets hooked up to my user interfaces once or twice. But every time I do it, I have to relearn the tricks of getting it just right. Sometimes when things do everything, like SocketIO, they lose a bit of their intuitiveness and developer experience.

An easier way: With Momento Topics, all the hard parts of WebSockets are abstracted away. There is no API structure to build. Subscribers can connect and register for updates to specific channels with a single API call:

await topicClient.subscribe('websocket', 'mychannel', {
  onItem: (data) => handleItem(data.valueString()),
  onError: (err) => console.error(err),
});

To publish to a channel, the call is even simpler:

await topicClient.publish('websocket', 'mychannel', JSON.stringify(detail));

You can connect service to service, service to browser, even browser to browser with Topics. Since the service uses Momento's servers for connection management, you have options available that haven't been possible before, like having two browser sessions communicate without getting a server involved. This leaves you with two responsibilities: publishing data when it's ready, and subscribing for updates. As with other serverless services, Momento Topics comes with security at top of mind, but also leaves you with flexible options to restrict access. With fine-grained access control, you can configure your API tokens to be scoped as narrowly as possible. An example access policy might be:

const tokenScope = {
  permissions: [
    {
      role: 'subscribeonly',
      cache: 'websocket',
      topic: 'mychannel',
    },
  ],
};

An API token created with this set of permissions would only be allowed to subscribe to the mychannel topic in the websocket cache. If someone intercepted the token and attempted to publish data, or to subscribe to a different topic, they would receive an authorization error. Momento has a plethora of SDKs for you to integrate with. For browsers, you can use the Web SDK. For server-side development, the Topics service is available for TypeScript/JavaScript, Python, and Go, with support for .NET, Java, Elixir, PHP, Ruby, and Rust coming soon.

What's the catch? Hopefully that sounds too good to be true. It did to me at first. Heck, it still does. But there is no catch. Momento's mission is to provide a best-in-class developer experience for their serverless services and take as much of the burden off of developers as possible. You don't need to spend weeks building notification services that handle complex connection management and event routing. Let SaaS providers like Momento take the operational overhead from you, so you can focus on what really matters. Pricing is simple: you pay per GB of data transfer in and out, with a perpetual free tier. There's no reason not to try it. Looking for examples? Check out this fully functional chat application built with Topics in Next.js. You can also try our work-in-progress game, Acorn Hunt, built on both Momento Cache and Topics. If you have any questions, feedback, or feature requests, please feel free to let the team know via Discord or through the website. These services are for all of us, and we want to build the best possible product to get you to production safely and quickly. Happy coding! |
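For the DynamoDB fan-out sketched in the excerpt above, here is a hedged Lambda example in TypeScript. The table name, index name, attribute names, and message shape are illustrative assumptions (the article doesn't specify them); the AWS SDK v3 clients and commands are real:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';
import {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} from '@aws-sdk/client-apigatewaymanagementapi';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
// The management endpoint is your WebSocket API's stage URL (supplied via env here).
const mgmt = new ApiGatewayManagementApiClient({ endpoint: process.env.WS_ENDPOINT });

// Triggered by an EventBridge event announcing which artist has tickets.
export const handler = async (event: { detail: { artist: string } }) => {
  // Find every connection subscribed to this artist via the (assumed) artist GSI.
  const subs = await ddb.send(new QueryCommand({
    TableName: 'subscriptions',        // hypothetical table name
    IndexName: 'artist-index',         // hypothetical GSI name
    KeyConditionExpression: 'artist = :a',
    ExpressionAttributeValues: { ':a': event.detail.artist },
  }));

  // Push the notification to each connection; stale connections simply fail here
  // (a fuller version would catch GoneException and delete those records).
  await Promise.allSettled(
    (subs.Items ?? []).map((item) =>
      mgmt.send(new PostToConnectionCommand({
        ConnectionId: item.pk, // pk holds the connection id, per the excerpt
        Data: Buffer.from(JSON.stringify({
          type: 'TICKETS_AVAILABLE',
          artist: event.detail.artist,
        })),
      })),
    ),
  );
};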
2023-08-15 16:13:26 |
Overseas TECH |
DEV Community |
🔥Unveiling Open Source: The Real Reasons Top Companies Share Their Code!🔓 |
https://dev.to/quine/unveiling-open-source-the-real-reasons-top-companies-share-their-code-4mh3
|
Unveiling Open Source: The Real Reasons Top Companies Share Their Code! tl;dr: Greater Innovation; Increased Adoption (aka more users); Modularity (aka you never start from scratch); Better Code Quality; Community Engagement 🫶; Positive Brand Perception.

Hey peeps! A question I thought of instantly when hearing about open source was: "If some companies make their project open source, why would they spend their money and resources to give it away for free?" This is a valid question, and the nuances of open source often lie in a couple of key factors. For a real-world example, consider HashiCorp. They recently transitioned Terraform, their tool for defining and provisioning infrastructure as code, from an open source license to a business source license. This move has proved to be highly controversial (read more on Hacker News). So this boils down to the question: "Why do companies create software in open source?" Let's look at this together. Quick note: if you want to start contributing to the right projects in open source, you can filter by your language and topic preferences using the completely free tool Quine.

At a high level, there are six main reasons why both big and small companies want to develop software in open source.

Greater Innovation: Open source projects have, by definition, various people worldwide working together. They volunteer their time and expertise to work on the same project. This means that the diversity of this "workforce" fosters innovation, enabling companies to leverage external expertise and insights to drive the development of new products and services.

Increased Adoption (aka more users): Open source projects help companies gain widespread adoption of their technology, attracting users who might not otherwise have tried their products or services. This is because a lot of software is paid, whereas open source projects are "free of use". Note: most open source projects are free to use by individuals. However, it can get quite complex when other organisations use open source software. We won't get into it here, but there is a bit of a rabbit hole around what can be used or not, based on the type of license the open source project has.

Modularity (aka stand on the shoulders of giants): Open source software (OSS) is built on the philosophy of creating programs that do one thing well and can work together. Closed source programs, on the other hand, tend to lock users into their ecosystem and keep adding features instead of collaborating. OSS thrives because it allows developers to build upon existing work. In other words, organisations that use open source stand on the shoulders of giants, rather than having to start from scratch. This collaborative approach leads to more integrated and versatile software solutions.

Better Code Quality: By making code open source, companies can receive feedback and contributions from a much bigger community of developers than they could with only their own engineers. Ultimately, this improves the quality of their software products and services.

Community Engagement 🫶: Open source projects foster engagement with the wider developer community, allowing companies to establish relationships with key influencers and thought leaders.

Positive Brand Perception: Supporting open source enhances a company's reputation by showcasing its commitment to community, collaboration, and innovation. It also re-affirms your position in the market, making sure your name is always there. We will see how Google leveraged this fantastically and continued to position its brand and reputation in the AI realm.

The Google example: TensorFlow. One example of an open source project is TensorFlow, released in November 2015 by Google. TensorFlow is an open source machine learning framework that allows developers to build and deploy machine learning models. As of the time of writing, the TensorFlow project has been a massive success, and here are the positive outcomes Google experienced. TensorFlow was extensible by virtue of being OSS: if there were new layers, functions, or gradient calculations that a researcher wanted, they could build them themselves. This became a playground for innovation and helped Google win against its competitors. By making TensorFlow open source, Google continued to attract top talent in machine learning and AI, strengthening its internal teams. The open source release of TensorFlow spurred the development of a rich ecosystem of libraries and tools, making it more powerful and user-friendly. Feedback from the community helped Google identify and fix issues, ensuring TensorFlow met the needs of diverse users and accelerating its growth. As a result of the above, TensorFlow's popularity soared as it became a go-to choice for researchers, developers, and businesses, boosting Google's reputation in AI.

All in all, this article showed you the various factors behind why companies are interested in creating open source software. While the reasons above tend to be the primary rationale for building in open source, some companies can still find ways to benefit financially from open source, and this was looked at more in depth here. I hope you enjoyed this article and that it brought you some value. Keep slaying! Your Dev.to buddy. If you're looking for a platform to discover open source projects aligned with your interests and your programming language preference, and to do it whilst not spending a penny, visit quine.sh. 🫶 If you haven't yet, you could join our Discord server if you need any help or have any questions. |
2023-08-15 16:07:12 |
Overseas TECH |
DEV Community |
Supercharging Your Flutter Development with VS Code: 9 Must-Know Tips! 🚀 |
https://dev.to/yatendra2001/supercharging-your-flutter-development-with-vs-code-9-must-know-tips-ddi
|
Supercharging Your Flutter Development with VS Code: 9 Must-Know Tips! Yo, wassup fellow Flutter developer! If you're among the many Flutter enthusiasts using VS Code as their go-to IDE, you're in for a treat. Today we're diving deep into some seriously awesome tips and tricks that'll make you wonder how you ever coded without them. Buckle up and get ready to level up your Flutter game!

1. Discover the Zen within: Zen Mode. Ever wished for a clutter-free space to code in peace? Say hello to Zen Mode. It gifts you a clean, distraction-free UI: just you and your beautiful code. Getting into the zone is easy: Ctrl+K Z, or head over to View > Appearance > Zen Mode.

2. Double (or triple) the fun: Multi-Cursor Editing. Because why edit one line when you can edit many? Easily edit multiple lines at once: just hold Alt and click wherever you fancy a cursor. Rename variables or add code without breaking a sweat.

3. The Flutter edge: Flutter-specific extensions. Supercharge your Flutter game with extensions; think syntax highlighting, widget wrapping, and much more. Some of our faves include the Flutter & Dart extensions. (Psst, stay tuned: we're working on a comprehensive list just for you.)

4. Swift and smooth: Keyboard Shortcuts Reference. Remembering shortcuts means coding at light speed. Press Ctrl+K Ctrl+R to see all available shortcuts. Don't like the defaults? No problem: customize them to your heart's content.

5. Terminal, meet VS Code: Integrated Terminal. Run commands without leaving your cozy coding nest. Simply hit Ctrl+` (backtick) or hop over to View > Terminal. Run your Flutter app, manage packages: it's all here.

6. Teamwork makes the dream work: Live Share Collaboration. Code's more fun with friends. Share your VS Code session and collaborate in real time: just grab the Live Share extension and you're ready for some paired-programming fun.

7. Reusable magic: Code Snippets for Flutter. Create once, use endlessly. Dive into Preferences > User Snippets > Dart to create or access snippets, or use extensions like Flutter Snippets for a quick fix.

8. Crack the code: Debugger and Breakpoints. Get to the heart of any hiccups. Set breakpoints to inspect your app on the fly, and use F5, F10, F11, and Shift+F11 to seamlessly navigate through debugging.

9. Seamless workflow: Git Integration. Stay on top of your Git game without ever leaving VS Code. Access the Source Control panel using Ctrl+Shift+G. Commit, push, pull, and even resolve those pesky conflicts with ease.

A little note before you go: hey, thanks for sticking around! If this post was your jam, imagine what's coming up next. I'm launching a YouTube channel, and trust me, you don't want to miss out. Give it a look and maybe even hit that subscribe button; videos will start dropping soon. About me: I am a coder with a keen interest in fixing real-world problems through shipping tech products. I love to read books; I have read multiple books on start-ups and productivity. Some of my favourite reads are Zero to One, Steve Jobs, The Almanack of Naval Ravikant, and Hooked. Nothing excites me more than exchanging opinions through productive conversations. youtube.com. Until we meet again, code on and stay curious! Got any doubts, or wanna chat? Reach out to me on Twitter or LinkedIn. |
2023-08-15 16:01:05 |
Apple |
AppleInsider - Frontpage News |
Netherlands bike thieves foiled by AirTag |
https://appleinsider.com/articles/23/08/15/netherlands-bike-thieves-foiled-by-airtag?utm_medium=rss
|
Netherlands bike thieves foiled by AirTag. A woman in Utrecht in the Netherlands was able to recover her stolen bike within minutes, because she had hidden an AirTag in it. AirTags are only two years old, but in that time the only thing to become as ubiquitous is the ceaseless reports of them finding things, like stolen cars. Or tracking things such as luggage that has a better holiday than its owners. Or, true, there are too many reports of stalking. And the police have learned to warn people of the dangers of tracking down a thief. Read more |
2023-08-15 16:35:08 |
Overseas TECH |
Engadget |
Logitech's Litra Glow streaming light drops back down to $50 |
https://www.engadget.com/logitechs-litra-glow-streaming-light-drops-back-down-to-50-164650143.html?src=rss
|
Logitech's Litra Glow streaming light drops back down to $50. Live streamers, or anyone who just wants to look a little better on camera, can now grab our recommended light for a discount: Logitech's Litra Glow is back down to its all-time low price of $50, a solid cut off its list price. We've seen it drop to this price a few times before, and when it does, it's a good time to buy. This is the light we recommend in our guide to game streaming gear, in which Engadget's Jessica Conditt calls good lighting one of the best things you can do for your live streaming setup. The Litra Glow is USB-powered and grips on your monitor right next to your webcam with an extendable, three-way adjustable mount, so you can dial in the right position. Once set, the full-spectrum LED lights deliver a soft glow that gets rid of harsh shadows and hard edges, which makes you look better and more natural on camera. You can adjust the light temperature and brightness using the on-board manual controls or through Logitech's free companion app. In the same guide, Jessica also recommends Elgato's Stream Deck, which lets you quickly program effects, lights, audio, and app control in one cute, retro space-age package. The 15-key MK.2 edition is currently on sale at Amazon for less than usual. That's not an all-time low, but it's still a decent deal for anyone who wants to upgrade their streaming setup right now. Follow EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice. |
2023-08-15 16:46:50 |
Overseas TECH |
Engadget |
Google Photos update improves Memories view with generative AI |
https://www.engadget.com/google-photos-update-improves-memories-view-with-generative-ai-161749404.html?src=rss
|
Google Photos update improves Memories view with generative AI. Google Photos just got a major update that adds generative AI to its popular Memories view. This toolset already creates scrapbook montages using your photos and videos, but now these montages will be even more personalized, with collections that make sense according to your life. AI-enhanced algorithms will collect the images into relevant categories (a recent vacation, as an example) and create a catchy title to accompany the montage. The app already does this, more or less, but the update should be something of a radical improvement. Of course, this is AI, so it won't always get things right. In other words, you can rename collections or edit montages if necessary. All of these scrapbook montages are now collected in a dedicated view called Memories, so you only interact with them when you want to. Before this, Google Photos users received a push notification every time a new scrapbook was available for perusal. Just click on the Memories tab and get going. The new tab also provides access to previously released features, like adding music to scrapbook montages and sharing memories via the app. The update even allows these scrapbook entries to be co-created by friends and family. Invite anyone to collaborate, and they can contribute their own photos and videos. Everyone involved can delete any photos that don't match the theme or make simple edits, and the system itself will recommend photos based on geotagging and the like. As for more robust sharing options, Google says you'll soon be able to save these collections as popular video formats to send via messaging and social media apps. The new Google Photos update begins rolling out today in the United States, but the company says it'll be a few months before a true global launch. This isn't the first time this year the company has squeezed generative AI into Google Photos. Back in May, Google used the technology to improve its Magic Editor toolset, which leverages AI to remove unwanted artifacts from photos. |
2023-08-15 16:17:49 |
Overseas TECH |
Engadget |
The best gaming mouse in 2023 |
https://www.engadget.com/best-gaming-mouse-140004638.html?src=rss
|
The best gaming mouse in If you regularly play games on a PC a good mouse will give you greater control over your cursor add a few more buttons you can customize to your liking and generally make your downtime more comfortable In competitive games the best gaming mouse won t magically make you unstoppable but the faster response time and extra inputs should make for a more pleasurable and responsive experience The best gaming mouse for you will ultimately be a matter of preference how well its shape fits your hand and how well its feature set suits your particular gaming needs Over the past several months though we set out to find a few options for PC gaming that might fit the bill be they for FPSes MMOs or anything in between After researching dozens of mice testing around and playing countless hours of Overwatch CS GO Halo Infinite and Final Fantasy XIV among others here s what we ve found with some general buying advice on top What to look for in a gaming mouseWired vs wirelessBuying a wireless gaming mouse used to mean sacrificing a certain level of responsiveness but thankfully that s no longer the case Over the last few years wireless connectivity has improved to the point where the difference in latency between a good wireless model and a tried and true wired gaming mouse is barely perceptible Note however that we re strictly talking about mice that use a GHz connection over a USB dongle not Bluetooth Many wireless models support both connection options which is great for travel but Bluetooth s latency is generally too high to be reliable for gaming Going wireless still has other trade offs too Battery life is improving all the time but with their higher performance demands and RGB lighting most wireless gaming mice usually don t last as long as normal wireless models You shouldn t expect more than a few days of power from a rechargeable gaming mouse you use regularly Good wireless gaming mice usually come at a much higher cost than their wired counterparts too That s not to say the premium is never worth it Who wants yet another cable on their desk You may need a wireless model if you hate the feel of “cable drag or if your gaming PC is located in an awkward spot Many wireless gaming mice come with a cable you can hook up in a pinch as well But if price is any sort of concern a good wired mouse is usually a better value Comfort and grip typesEveryone s hands are different so at the end of the day calling one mouse “more comfortable than another is mostly subjective Ensuring your comfort is the most essential step when buying any mouse though so we ve done our best to consider how each device we tested works with small average sized and big hands alike We also considered how each device accommodates the three grip styles most people use while holding a mouse palm fingertip and claw As a refresher a palm grip is when your whole hand rests on the mouse with your fingers resting flat on the main buttons A fingertip grip is when you steer the mouse solely with the tips of your fingers with your palm not in contact with the device at all A claw grip is when your palm only touches the back edge of the mouse with your fingers arched in a claw like shape toward the buttons In general most people use a palm grip which tends to offer the greatest sense of control though depending on the size of your hand you may need your mouse to be a specific length to use it comfortably A fingertip grip can allow for more rapid movements while a claw grip is something of a balance between the two Switch 
Switch and Click has a good breakdown if you'd like a bit more detail, but we'll note below if a mouse isn't well suited for a particular grip style. For what it's worth, yours truly is a claw-gripper most of the time.

Build quality and design

A good gaming mouse feels sturdy and won't flex or creak when used strenuously. We valued mice without any overly sharp angles or grooves that could be awkward for most people to hold. And while most gaming mice have plastic exteriors, not all plastic is created equal, so we looked for finishes that were smooth, not too slick and capable of withstanding the sweaty palms that often come with competitive gaming sessions.

The gaming mouse market is mostly split between two design styles: ergonomic and ambidextrous. Ergonomic gaming mice are almost always made with right-handed users in mind and often feature dedicated thumb rests. Ambidextrous mice are more symmetrical and designed to be used with either hand, though they may not have extra buttons on both sides. Which shape works best for you is largely a matter of personal preference.

A gaming mouse's feet, meanwhile, should provide a consistent glide and reduce the friction between your mouse and the surface beneath it as much as possible. For the best performance, look for feet made from PTFE (aka Teflon). All feet will eventually wear down, but many mice come with spares, and most manufacturers sell replacements if needed.

As for flashy RGB lighting, it's a nice bonus, but little more than that. Still, if you've already kitted out your setup with RGB, having a mouse with adjustable lighting effects can add to the gaming experience (and more consumer tech could stand to do things for pleasure's sake). More practically, some mice let you assign custom lighting settings to separate profiles, which can make it easier to see which one you're currently using.

Weight

Gaming mice have gotten lighter and lighter in recent years, with some models we tested weighing as little as grams. Your mouse doesn't need to be that light; anything under g is still fairly low, and it's not like a g mouse feels like an anchor. Regardless, a low weight makes it easier to pull off repeated fast movements with less inertia. That said, some players still enjoy a little bit of bulk in their gaming mouse, relatively speaking, especially with games that aren't as reliant on twitchy reactions.

To reach those lower weights, some manufacturers have released gaming mice with "honeycomb"-style designs, which come with several cutouts in the outer shell. These mice can still perform great, but having a bunch of holes that expose the internal circuit board to possible sweat, dust and detritus isn't the best for long-term durability. We generally avoid recommending models with this design as a result.

Switches, buttons and scroll wheel

A growing number of gaming mice use optical switches instead of mechanical ones. Since these involve fewer bits making physical contact, they should generally be more durable and less prone to unwanted "double clicks" over time. Mice with mechanical switches still have plenty of merit, but they carry a little more long-term risk, in a vacuum.

Since most people will use their gaming mouse as their everyday mouse, we valued models whose main buttons have a softer feel when pressed, with enough travel to make inadvertent actuations less frequent. But even this is a matter of preference: you may want lighter buttons if you play games that call for constant clicking. Also, we looked to testing from sites like Rtings to ensure each mouse we recommend has a sufficiently low click latency, meaning your clicks will register with minimal lag.
Beyond the standard click panels, a good gaming mouse should also have programmable buttons for quick macros or shortcuts. For most games, shoot for at least two extra buttons on the thumb side that are easy to reach and difficult to press by accident. Lots of mice have more buttons, which can be a plus, but not if they force you to contort your fingers to avoid hitting them. For MMO mice, having at least side buttons is preferable in order to access as many hotbar commands as possible.

As for the scroll wheel, it should have distinct, ratcheted "steps" that aren't too resistant but make it clear when you've actually scrolled. Its texture should be grippy, and it shouldn't make a distracting amount of noise when used. The wheel should also be clickable, giving you another input to customize for certain games (e.g., to control the zoom on a sniper rifle).

Sensors and performance

Some are more proficient than others, but generally speaking, the optical sensors built into most modern gaming mice are more than fast and accurate enough for most people's needs.

While shopping for gaming mice, you'll see a number of terms related to sensor performance. To be clear, a gaming mouse's responsiveness doesn't come down to just one spec. But for clarity's sake, here's a rundown of the more noteworthy jargon:

DPI, or dots per inch, is a measure of a mouse's sensitivity. The higher the DPI setting, the more your cursor will move with every inch you move the mouse itself. Many of the best gaming mice advertise extremely high DPIs that top out above or , but that's largely marketing fluff; few people play above , with a common sweet spot. This concept is also referred to as CPI (counts per inch), which is probably the more accurate term, though DPI is used more often.

IPS, or inches per second, refers to the maximum velocity a mouse sensor supports. The higher the IPS, the faster you can move the mouse before it becomes incapable of tracking motions correctly.

Acceleration goes with IPS. In this context, it refers to how many Gs a mouse can withstand before it starts to track inaccurately.

Polling rate is a measure of how often a mouse tells a computer where it is. In general, the more frequently your mouse reports information to your PC, the more predictable its response time should be. Anything at Hz or above is fine for gaming. The current standard, and likely the sweet spot for most, is Hz. Many newer mice can go well beyond that, but the returns start to diminish unless you play on a monitor with a particularly high refresh rate.

Lift-off distance is the height at which a mouse's sensor stops tracking the surface below it. Many competitive players like this to be as low as possible in order to avoid unintended cursor movements while repositioning their mouse.
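To make those specs a bit more concrete, here is a minimal, illustrative sketch (ours, not the article's; the function names and sample values are invented purely for demonstration) that turns a DPI setting and a polling rate into everyday numbers: how far the cursor travels for a given hand movement, and how long the gap is between position reports.

// Cursor movement, in counts (aka "dots"), for a given hand movement.
function cursorCounts(dpi, inchesMoved) {
  return dpi * inchesMoved;
}

// Gap between position reports, in milliseconds, for a given polling rate.
function reportIntervalMs(pollingRateHz) {
  return 1000 / pollingRateHz;
}

console.log(cursorCounts(800, 2));   // 1600 counts for a two-inch swipe at a sample 800 DPI
console.log(reportIntervalMs(1000)); // 1 ms between reports at 1,000Hz
console.log(reportIntervalMs(8000)); // 0.125 ms at 8,000Hz

Two things fall out of the arithmetic: doubling DPI doubles cursor travel for the same hand movement, which is why in-game sensitivity settings usually matter more than a sky-high maximum DPI; and once reports already arrive roughly every millisecond, raising the polling rate further only shaves off fractions of a millisecond, which helps explain the diminishing returns noted above.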
Software and onboard memory

It doesn't take long to find horror stories about bugs and other niggling issues caused by gaming mouse software, so the ideal app is one that doesn't force you to use it all the time. It should let you adjust as many of the aspects above as possible, ideally to several distinct profiles. Preferably, you can save your settings directly to the mouse itself, letting you pick your customizations back up on another device without having to redownload any software. All this is most important on Windows, but Mac compatibility is always good to have, too.

Warranty and customer support

Most major gaming mice brands offer warranties between one and three years. The longer and more extensive a manufacturer's program is, the better. This is the case with most consumer tech, but we note it here because the gaming mouse market is particularly flush with products from less-than-household names, many of which you may see hyped up on YouTube, Reddit or elsewhere around the web. A bunch of these more obscure mice are genuinely great, but if you ever buy from a more niche brand, it's worth checking that some level of customer support is in place. We've made sure our picks for the best gaming mice aren't riddled with an abnormal amount of poor user reviews.

Best for most: Razer Basilisk V

Of the gaming mice we tested, the Razer Basilisk V offers the most complete blend of price, performance, build quality and wide-ranging comfort. It's typically available between and , and for that price, it provides a sturdy body with a pleasingly textured matte finish and a shape that should be comfortable for each grip type and all but the smallest of hands. It uses durable optical switches, and its main buttons are large, relatively quiet and not fatiguing to press repeatedly.

The Basilisk V has a total of customizable buttons, including two side buttons that are easy to reach but difficult to press by accident. There's a dedicated "sensitivity clutch" on the side as well, which lets you temporarily switch to a lower DPI for more precise aiming, though it's the one button that may be harder for smaller hands to reach without effort. Beneath those buttons is a well-sized thumb rest. The scroll wheel on top is loud and a bit clunky, but it can tilt left and right, and a built-in toggle lets it switch from ratcheted scrolling to a free-spin mode. That's great for navigating unwieldy documents.

At roughly grams, the Basilisk V is on the heavier side for twitch shooters, but its PTFE feet let it glide with ease, and Razer's Focus sensor helps it track accurately. The weight shouldn't be a major hindrance unless you really take competitive FPS play seriously (and if that's the case, see our premium recommendations below). Either way, the included cable is impressively flexible, and the mouse's RGB lighting is fun without being garish.

Razer's Synapse software is Windows-only and can be naggy with updates, but it makes it easy enough to set profiles and adjust DPI, polling rate, macros and RGB effects. You can also save up to five profiles to the mouse itself, though your lighting customizations won't carry over.

The Basilisk V is an ergonomic mouse designed for right-handed use. If you want an ambidextrous model with similar performance in the same price range, try the Razer Viper KHz. It ditches the multi-mode scroll wheel, and its ludicrously high max polling rate of Hz has little real-world benefit for most, but it's much lighter at g, and it has two customizable buttons on both its left and right sides. We'll also note the Razer DeathAdder V, the wired version of our top wireless pick below, which is even lighter than the Viper KHz at g and should be better suited for righties solely focused on competitive gaming.

Best premium: Razer DeathAdder V Pro

If money is no object, the best gaming mouse we tested is the Razer DeathAdder V Pro. It's pricey at , but its superlight g wireless design and top-notch sensor make it exceptionally responsive. While smaller-handed folks may find it a bit too tall, most should find its gently curved shape to be comfortable over long gaming sessions, regardless of their grip type. Its two side buttons are easy to reach, and its body doesn't creak or flex. The scroll wheel is soft and quiet, while the main buttons feel satisfying but not overly sensitive. It also uses optical switches.
Battery life is rated at a decent hours per charge, and you can connect an included (and highly flexible) USB-C cable in a pinch. Razer also sells a "HyperPolling" dongle that increases the mouse's max polling rate to Hz, but few need that, and the company says using it can drop the mouse's battery life down to just hours.

Despite its higher cost, the DeathAdder V Pro does forgo some of the Basilisk V's extras: there's no RGB lighting, no Bluetooth, support for just one onboard profile and no free-spinning or side-tilting on the scroll wheel. The DPI switcher is inconveniently located on the bottom of the mouse, and there's no built-in storage compartment for the USB dongle. Much of that helps the mouse trim the weight, however, and the whole point of the DeathAdder V Pro is to excel at the essentials, which it does. Razer's Focus Pro K sensor is complete overkill in terms of its maximum specs, but combined with the mouse's PTFE feet, low click latency and easy-to-flick design, it makes fast movements feel as "one-to-one" as any mouse we tested. If you're a competitive player who spends most of their time in twitchy FPS games, the DeathAdder V Pro should feel tailor-made to your priorities. That's really the main market here, though; most people don't need to drop on this kind of device.

While its contours aren't as pronounced as the Basilisk V's, the DeathAdder V Pro is still designed for righties. For an ambidextrous model, Razer's Viper V Pro is really the "B" option here, providing the same excellent performance in a flatter design that should play nicer with small hands and lefties. The Basilisk V Ultimate, meanwhile, is essentially a wireless version of our "best for most" pick with the DeathAdder V Pro's upgraded sensor, though it's the heaviest option of this bunch at g. If comfort and features are more important to you than esports-style performance, it's a better buy, but since most people looking to buy a high-end wireless gaming mouse are especially fixated on competitive play, we gave the nod to the DeathAdder V Pro instead.

Best budget: Logitech G Lightsync

If you just want a competent gaming mouse for as little money as possible, go with the Logitech G Lightsync. Its design is likely too small and flat for palm-grippers with large hands, its scroll wheel feels somewhat mushy and its rubbery cable isn't ideal. It uses mechanical switches, too. But the rest of it is smooth, reasonably light (g) and sturdily built for the money, plus its shape plays well with fingertip or claw grips. It's also available in snazzy lilac and blue finishes alongside the usual black or white.

There are two customizable buttons on the left side, plus a DPI cycle button on top, but the G's design is otherwise ambidextrous. The RGB lighting around the bottom of the device is tasteful, and Logitech's G Hub software makes it simple enough to tweak settings on both Windows and macOS. There's no onboard memory, however.

While the Logitech Mercury sensor within the G is a few years old and technically lacking compared to most newer alternatives, it's consistent and responsive enough to yield few complaints. The set of PTFE feet help, too. You wouldn't go out of your way to get the G to win competitive games of Counter-Strike, but it's perfectly fine for most games.

If you'd prefer a cheap wireless gaming mouse, Logitech's G Lightspeed has more or less the same shape and build quality as the G but adds a more advanced sensor. Logitech says it can get up to hours of battery life, but it requires a AA battery to work, which in turn pushes its weight to just over g.

Best for MMOs: Logitech G
If you want a mouse specifically designed for MMO games, get the Logitech G. It's ancient, having launched way back in , and as such it uses mechanical switches and a laser sensor (the Avago S) that can be less precise than a more modern optical sensor. It's hefty at g, and it has a wide body that's not ideal for small hands or fingertip grips. Plus, its cable isn't particularly flexible, and its scroll wheel and main buttons are just OK.

Hear us out, though: the G is far from the only mouse in this style to be on the larger side, and any performance shortcomings it may have will be difficult to notice in an MMO. Outside of faster action games, it tracks fine. For large and average hands, particularly those that use a palm grip, the G's sloped shape should be comfortable. Plus, the scroll wheel can tilt left and right.

The most important thing an MMO mouse can do is let you access several in-game commands with minimal effort. The G does that, supplying customizable side buttons that are angled in a way that distinguishes them without constantly forcing you to look down. Few MMO mice make these buttons "easy" to reach, but the G does about as well as one can. The mouse's killer feature, however, is a third click button, which sits under your ring finger and brings up an entire second set of commands when pressed. This means you can access up to different inputs with just one hand, which is a godsend in MMOs that ask you to juggle multiple hotbars' worth of commands. Being able to get through your "rotations" in a game like Final Fantasy XIV without having to contort your fingers around the keyboard is hugely convenient. This feature isn't exclusive to the G, but it's not commonplace, either. Best of all, this mouse is affordable, typically retailing around . There are certainly nicer MMO mice available, but the G's functionality is enough to make it the best value in its market.

Other honorable mentions

If you don't like the Razer aesthetic, Logitech's G Pro X Superlight is a close runner-up to the DeathAdder V Pro, whose praises we've sung in the past. If you see it for less than the Razer models or just want a high-performing mouse for Mac, it's great, but note that it has a lower battery life rating (hrs) and charges over micro-USB instead of USB-C. It's also a few years old, so we wouldn't be surprised to see an updated model in the coming months.

The Corsair Scimitar RGB Elite is a better-built alternative to the G with a more modern optical sensor. It lacks the G's third main button, but it's a good buy if you don't need that and see it on sale. The Razer Naga Left-Handed Edition isn't nearly as good of a value as the G or Scimitar RGB Elite, but it's one of the few MMO mice that's actually built for lefties.

The Ninjutso Sora comes from a lesser-known brand and is harder to actually purchase as of this writing, but it looks and performs like a G Pro X Superlight for smaller hands. Its main buttons are fairly stiff, but it's incredibly light at g, so it plays great for FPS games.

It's a close call between the HyperX Pulsefire Haste and the Razer Viper KHz for those who prefer an ambidextrous, mostly flat shape. Both perform well for competitive play, but the Pulsefire Haste is significantly lighter at g, thanks to a clever design with cutouts on the bottom that are covered by the mouse's label. That said, it has two fewer side buttons, and it doesn't support more than one onboard storage profile. As we write this, you can also find the Viper KHz for less. If that ever changes, though, or if the lower weight is just more important to you, the Pulsefire Haste is a great wired option.
The Lamzu Atlantis OG V is another fine choice for FPS games, with snappy performance and a symmetrical, ultralight (g) build that's particularly well suited to claw grips. Its bottom plate has a semi-open design, however, so it's at least somewhat more susceptible to damage from dust and debris than our picks above. As of this writing, it's also difficult to find in stock.

The Razer Cobra has a similarly compact shape to the Logitech G, with a much lower weight (g), a more flexible cable and optical switches. It's priced at , putting it in something of a no-man's-land between the G's and Basilisk V's typical street prices, but it should be an upgrade over the G if it's still within your budget and you need something small.

There's plenty to like about the Glorious Model I Wireless: an ergonomically friendly shape that's reminiscent of the Basilisk V and Logitech G X but lighter at g, four customizable side buttons, the ability to connect over a USB receiver or Bluetooth, a smooth scroll wheel and tasteful RGB lighting, all for . However, its honeycomb-style design and mechanical switches both raise concerns about its long-term durability.

The Razer Basilisk V X HyperSpeed is a more affordable wireless version of the Basilisk V with the same comfortable shape and layout, plus a quieter scroll wheel. Because it requires a AA battery for power, though, it weighs around g, which isn't great for fast-paced games. The scroll wheel can't tilt left or right, either, nor can it switch between a ratcheted and free-spin mode. It also uses less durable mechanical switches and only supports one onboard profile. Still, it's a decent value at .

The Asus ROG Gladius III doesn't stand out from our main recommendations in terms of design or performance, and its software can be buggy, but it's unusually easy to repair. That's admirable, and it should make the mouse a good long-term investment for DIY types.

This article originally appeared on Engadget at |
2023-08-15 16:10:04 |
News |
BBC News - Home |
Man sought after Clapham homophobic attack |
https://www.bbc.co.uk/news/uk-england-london-66515319?at_medium=RSS&at_campaign=KARANGA
|
london |
2023-08-15 16:54:22 |
News |
BBC News - Home |
Rail fares in England to rise below inflation again in 2024 |
https://www.bbc.co.uk/news/business-66514022?at_medium=RSS&at_campaign=KARANGA
|
england |
2023-08-15 16:16:10 |
News |
BBC News - Home |
Watch: Wind turbine in North Sea catches fire |
https://www.bbc.co.uk/news/uk-england-norfolk-66510193?at_medium=RSS&at_campaign=KARANGA
|
norfolk |
2023-08-15 16:06:13 |
News |
BBC News - Home |
US tourists stay in Eiffel Tower overnight while drunk - prosecutors |
https://www.bbc.co.uk/news/world-europe-66515138?at_medium=RSS&at_campaign=KARANGA
|
early |
2023-08-15 16:29:29 |
News |
BBC News - Home |
Omagh bomb: Families mark 25th anniversary with private service |
https://www.bbc.co.uk/news/uk-northern-ireland-66503382?at_medium=RSS&at_campaign=KARANGA
|
anniversary |
2023-08-15 16:04:42 |
News |
BBC News - Home |
Norfolk and Suffolk police: Victims and witnesses hit by data breach |
https://www.bbc.co.uk/news/uk-66510136?at_medium=RSS&at_campaign=KARANGA
|
crimes |
2023-08-15 16:41:38 |
News |
BBC News - Home |
Matildas mania sweeps Australia ahead of England semi-final |
https://www.bbc.co.uk/news/world-australia-66506541?at_medium=RSS&at_campaign=KARANGA
|
story |
2023-08-15 16:15:43 |
Azure |
Azure updates |
Public Preview: Azure NetApp Files Cloud Backup for Virtual Machines |
https://azure.microsoft.com/ja-jp/updates/public-preview-azure-netapp-files-cloud-backup-for-virtual-machines/
|
machines |
2023-08-15 16:00:51 |
Azure |
Azure updates |
Generally available: Azure Load Testing in Japan East and Brazil South |
https://azure.microsoft.com/ja-jp/updates/generally-available-azure-load-testing-in-japan-east-and-brazil-south/
|
brazil |
2023-08-15 16:00:50 |