IT |
ITmedia All Articles |
[ITmedia Mobile] About 80% of people in their teens through 50s use LINE; teens use TikTok more than Facebook |
https://www.itmedia.co.jp/mobile/articles/2205/16/news141.html
|
facebook |
2022-05-16 19:07:00 |
IT |
ITmedia All Articles |
[ITmedia News] Yahoo! Auctions and PayPay Flea Market set new rules for listings children could swallow, such as magnets and water-expanding balls |
https://www.itmedia.co.jp/news/articles/2205/16/news163.html
|
itmedia |
2022-05-16 19:05:00 |
AWS |
AWS Japan Blog |
Create and manage Amazon EMR clusters from Amazon SageMaker to run interactive Spark and ML workloads – Part 2 |
https://aws.amazon.com/jp/blogs/news/part-2-create-and-manage-amazon-emr-clusters-from-sagemaker-studio-to-run-interactive-spark-and-ml-workloads/
|
This article is a translation of "Create and manage Amazon EMR Clusters from SageMaker Studio to run interactive Spark and ML workloads – Part 2". |
2022-05-16 10:11:24 |
AWS |
AWS Japan Blog |
Create and manage Amazon EMR clusters from Amazon SageMaker to run interactive Spark and ML workloads – Part 1 |
https://aws.amazon.com/jp/blogs/news/part-1-create-and-manage-amazon-emr-clusters-from-sagemaker-studio-to-run-interactive-spark-and-ml-workloads/
|
We recently introduced the ability to visually browse and connect to Amazon EMR clusters from Studio notebooks. |
2022-05-16 10:01:45 |
python |
New posts tagged Python - Qiita |
Where I got stuck trying to use noble (BLE) on macOS Monterey #iotlt |
https://qiita.com/n0bisuke/items/d4113fb03837b3c05310
|
iotlt, mac, nodejs |
2022-05-16 19:29:28 |
Ruby |
New posts tagged Ruby - Qiita |
Ruby basic syntax, part 1 |
https://qiita.com/kazu_code/items/7bba27cbfc3170c88f19
|
ruby, qiita, rb, puts, helloworld |
2022-05-16 19:03:00 |
Azure |
New posts tagged Azure - Qiita |
Trying out Terraform in an Azure environment (1/2) |
https://qiita.com/nosecape/items/52b3333cc91e429b91de
|
infrastract |
2022-05-16 19:36:36 |
Tech Blog |
Developers.IO |
Tried local training for AWS DeepRacer (DeepRacer For Cloud) on Windows (WSL) |
https://dev.classmethod.jp/articles/awsdeepracer-local-training-on-windows-wsl/
|
window |
2022-05-16 10:19:01 |
Overseas TECH |
MakeUseOf |
A Complete History of the iPod: From 2001 to 2022 |
https://www.makeuseof.com/history-of-the-ipod/
|
relationship |
2022-05-16 10:56:27 |
Overseas TECH |
MakeUseOf |
3 Signs Your Hard Drive Is Failing (And What to Do) |
https://www.makeuseof.com/tag/5-signs-hard-drive-lifetime/
|
Is your hard drive going bad? Here are some simple ways to check whether your hard drive is failing, and how to save or recover your data when it is. |
2022-05-16 10:55:13 |
Overseas TECH |
MakeUseOf |
Android Auto Gets Major Update: What Does It Look Like and When Will You Get It? |
https://www.makeuseof.com/android-auto-redesign-split-screen/
|
Android Auto has been redesigned to work better on larger displays. Here's what you need to know about this major update. |
2022-05-16 10:16:13 |
Overseas TECH |
DEV Community |
Optimizing Your Web App for Maximum Runtime Performance and Premature Optimization 🦄 |
https://dev.to/tkvishal/optimizing-your-web-app-for-maximum-runtime-performance-and-premature-optimization-3loj
|
Optimizing Your Web App for Maximum Runtime Performance and Premature Optimization 🦄 — This blog was originally posted at Hashnode for the writethon. Websites today often fail to perform well on user inputs and actions: poorly optimized frontend code can easily break the user experience and hurt adoption. Your web application may be delivered through a CDN, load blazingly fast with lazy loading and code splitting, and sit on resilient, well-performing backends, yet still have poorly performing runtime frontend code that breaks the experience for end users in the long run. If your application is highly dynamic, real-time, and driven mostly by user actions, it is likely client-side rendered (CSR) with React, Angular, or Vue, so optimizing the frontend becomes crucial. A well-performing frontend gives instant feedback for every action; users now expect a native feel from web apps on any form factor as Progressive Web Apps blur the line with native apps, and runtime performance has a measurable impact on conversion and click-through rates.
Caring about performance too early or too late: "move fast, break things" ships working products quickly, but it is easy to forget about writing manageable, performant code. Performance tech debt piles up, and hacky, patchy fixes applied to critical parts of the application at the very end of a project often cause unknown side effects elsewhere. As Donald Knuth put it, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." A performance fix should only be implemented when it can be measured; unmeasured fixes lead to unexpected bugs, and optimizing non-critical parts of the application wastes time and resources. Good early optimization includes restructuring files and folders, breaking code into functions and components, enforcing types in dynamically typed languages, and optimizing the flow of data between parent and child components. Bad early optimization includes profiling and fixing minor issues without any feedback from users, and reaching for complex data structures and algorithms where a simple array and the built-in sort would do. When starting out, think big: less "should I use a for or forEach loop?" and more "should I break this huge component into sub-components to reduce unnecessary re-renders?"
Measuring your frontend performance: runtime performance is tricky to measure, and the trickier part is sniffing out the heavy components. Identify the heaviest components and pages manually by clicking around, then use stress testing with DevTools CPU throttling, the Chrome DevTools performance panel, code-level measurement with console.time/console.timeEnd and performance.measure, and the React DevTools or Angular DevTools profilers.
Ways to optimize, in increasing order of complexity: caching and memoization; layout reflow and thrashing; virtualization; delaying and debouncing rendering; and thinking outside the box by offloading work to web workers, to a canvas, or (experimentally) to the GPU (GPGPU).
Caching and memoization: caching stores a copy of a given resource and serves it back when requested; memoization is a form of caching where the results of expensive calculations are stored so they are not recomputed for the same inputs. Choosing the right data type is where data structures and algorithms knowledge pays off. If the server returns a list of users as an array of objects with a unique userId, every lookup costs O(n); grouping the users by userId once into a key-value map makes lookups O(1), trading some heap memory for far cheaper repeated access (lodash's keyBy does this in one call, as sketched below). What's the catch? Hashmaps should not automatically be your first choice: JavaScript engines (V8 in Chrome, SpiderMonkey in Firefox, Nitro in Safari) manage heap memory differently, so object lookups can be faster in one browser and slower in another, and the outcome also depends on the size and shape of the data and your access patterns. Always measure whether the change actually improves the final performance.
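A minimal sketch of that indexing pattern, assuming the article's sample user objects and lodash's keyBy helper:

```typescript
import { keyBy } from "lodash";

type User = { userId: string; fullName: string; job: string };

const usersArray: User[] = [
  { userId: "ted", fullName: "Ted Mosby", job: "architect" },
  { userId: "barney", fullName: "Barney Stinson", job: "unknown" },
  { userId: "robin", fullName: "Robin Scherbatsky", job: "news anchor" },
];

// O(n) lookup: scans the array on every call
const tedSlow = usersArray.find((user) => user.userId === "ted");

// Index once into a key-value map, then look up in O(1)
const usersMap = keyBy(usersArray, "userId");
const tedFast = usersMap["ted"];
```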
Function-level memoization is a technique familiar from dynamic programming: a memoized function remembers its inputs and outputs, so calling it again with the same inputs returns the cached result instead of re-running the work. A memoized function in JavaScript has three parts: a higher-order wrapper that closes over the expensive function; an expensive pure function that returns the same outputs for the same inputs and has no side effects or dependencies outside its own scope; and a cache (hashmap) that stores input/output pairs. The article's memo higher-order function, written in TypeScript, takes the expensive function plus a transformKey callback that turns the argument list into a primitive cache key (for example JSON.stringify of the arguments), and it stays fully type-safe via Parameters and ReturnType. Memoization is well suited to recursive operations, where it prunes whole chunks of redundant work from the recursion tree, and to functions that are frequently called with repeated inputs. Rather than reinventing the wheel, use battle-tested wrappers such as useMemo in React, memoize in lodash, or memoize decorators. What's the catch? Do not opt into memoization when the expensive function depends on or affects values outside its own scope, or when the inputs are highly fluctuating and irregular: since you are trading heap memory for CPU cycles, erratic inputs just grow a huge cache object for nothing.
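A sketch of that memo higher-order function, reconstructed from the excerpt's flattened TypeScript; the expensiveFunction stand-in is illustrative:

```typescript
type AnyFn = (...args: any[]) => any;

// Higher-order wrapper: caches results of a pure, expensive function
function memo<Fn extends AnyFn>(
  fn: Fn,
  transformKey: (...args: Parameters<Fn>) => string
) {
  const cache: Record<string, ReturnType<Fn>> = {};
  return (...args: Parameters<Fn>): ReturnType<Fn> => {
    const key = transformKey(...args); // turn arguments into a primitive key
    if (key in cache) return cache[key]; // cache hit
    const result = fn(...args); // cache miss: run the expensive function
    cache[key] = result;
    return result;
  };
}

// Illustrative stand-in for an expensive pure function
const expensiveFunction = (n: number): number => {
  let sum = 0;
  for (let i = 0; i < n * 1_000_000; i++) sum += i; // simulate heavy work
  return sum;
};

// Key the cache on the stringified arguments
const memoizedExpensiveFunction = memo(expensiveFunction, (...args) =>
  JSON.stringify(args)
);
```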
Component-level memoization and preventing unnecessary re-renders: in React, a component re-renders when its props or state change, and when a parent re-renders, all of its children re-render too, so this is a natural place to apply memoization. Before memoizing a component, optimize its state: a common mistake is using useState for constantly mutating values that are never reflected in the UI; prefer useRef or useMemo for those. When passing functions from a parent to a child, wrap them in useCallback, because a freshly created function on every render gives the child a new reference and triggers a re-render even when nothing meaningful changed. Once that is done, wrap the child in React.memo: React then does a shallow comparison of the old and new props (complex objects are compared by reference), and you can supply your own equality function as the second argument to control exactly when the component re-renders based on old and new props. What's the catch? React.memo is useless if the component depends on useContext, since it will re-render regardless; use useCallback only where the benefit is quantifiable, because misplaced useCallback can itself become a performance problem; and use a custom comparison only when the component often re-renders with the same props, since comparing props can sometimes cost more CPU than just re-rendering. Measure first, then apply the fix, as in the sketch below.
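A small sketch of the useCallback plus React.memo combination described above; the component names and the comparison rule are illustrative:

```tsx
import React, { useCallback, useState } from "react";

type Props = { state: number; onToggle: () => void };

const ExpensiveChild = ({ state, onToggle }: Props) => (
  <div onClick={onToggle}>{state}</div>
);

// Re-render only when the custom comparison says the props really changed
const MemoizedExpensiveChild = React.memo(
  ExpensiveChild,
  (oldProps, newProps) => oldProps.state === newProps.state
);

export const ParentComponent = () => {
  const [someState, setSomeState] = useState(0);

  // Stable function reference, so it does not defeat React.memo above
  const handleToggle = useCallback(() => setSomeState((s) => s + 1), []);

  return <MemoizedExpensiveChild state={someState} onToggle={handleToggle} />;
};
```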
Layout reflow and thrashing: a reflow is the browser recalculating the dimensions, positions, and depth of elements on the page. Reflows are triggered by reading or setting layout metrics (offsetHeight, scrollWidth, getComputedStyle, and other DOM APIs), by adding, inserting, or removing elements from the DOM tree, by changing CSS styles, by resizing the browser window or an iframe — essentially by any operation that forces the browser to recompute the presented UI. When a reflow happens, the browser synchronously (blocking) recalculates element geometry, which is expensive for the render pipeline, so it tries to queue and batch updates and reflow the whole UI at once rather than blocking the main thread with frequent reflows; the "Recalculate Style" entries in the performance panel are reflows, and their cost grows with the size and depth of the affected DOM tree. The classic mistake is reading and writing layout inside a loop, for example setting each list item's height from listContainer.clientHeight on every iteration: the alternating reads and writes defeat the browser's batching, every iteration forces layout → reflow → repaint synchronously, the page freezes, and you get layout thrashing — a minor hiccup on desktops, a potential crash on low-end mobiles. The fixes are simple. First, cache reflow-triggering reads outside the loop, so the measurement is taken once and the browser can optimize the writes on its own. Second, follow a read-read-read / write-write-write pattern instead of read-write-read-write, so the browser batches the reads and then batches the writes into a single reflow. Third, use window.requestAnimationFrame: rAF tells the browser you are about to perform animation work and runs your callback before the next repaint, so batching all DOM writes (reflow-triggering code) inside rAF guarantees they run together on the next frame.
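A minimal sketch of the read-then-write batching with requestAnimationFrame; the .list-item selector is a placeholder:

```typescript
const listItems = Array.from(
  document.querySelectorAll<HTMLElement>(".list-item")
);

// Read phase: batch all layout reads first (no writes in between)
const heights = listItems.map((item) => item.clientHeight);

// Write phase: batch all style writes inside one animation frame,
// so the browser reflows once instead of thrashing on every iteration
requestAnimationFrame(() => {
  listItems.forEach((item, i) => {
    item.style.height = `${heights[i]}px`;
  });
});
```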
Virtualization: games render highly detailed 3D models, huge textures, and open-world maps at a steady frame rate on a limited GPU budget by using frustum culling — removing every object that lies completely outside the player's viewing frustum and spending all the compute on what the player can actually see. The web version of this old idea is called virtualization, and it applies to large lists, infinite pannable and zoomable canvases, and huge horizontally and vertically scrollable grids. For React, the react-window library handles the virtualization logic for you. It rests on a few core ideas: a viewport container element acts as the scroll container, a smaller element holds the currently visible items, and the list items are absolutely positioned based on the current scroll position and the scroll container's width and height. Because the browser only renders what the user is currently looking at, the performance gain is large and cheap to obtain. react-window wraps your list item in a parent component that handles the virtualization under the hood and expects a fixed height for the scroll container plus a precalculated height for each item: use FixedSizeList when every item has the same known height, or VariableSizeList with an itemSize function when heights depend on content, and overscanCount to render a few extra items outside the scroll area for prefetching image assets or catching focus. It also supports grid-based UIs with both horizontal and vertical scrolling (think large e-commerce sites or a spreadsheet with variable row and column sizes), and the react-window-infinite-loader package adds infinite loading and lazy loading of content outside the scroll area. What's the catch? Do not virtualize lists with few items — measure first, since the virtualization algorithm has its own CPU overhead; items with highly dynamic content that constantly resize are painful to virtualize; and lists whose items are added dynamically with sorting, filtering, and similar operations can produce odd bugs and blank items.
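A sketch of a virtualized list with react-window's VariableSizeList, based on the excerpt's flattened example; the row count and heights are made up:

```tsx
import React from "react";
import { VariableSizeList } from "react-window";

const rowHeights = new Array(1000)
  .fill(true)
  .map(() => 25 + Math.round(Math.random() * 50));

const getItemSize = (index: number) => rowHeights[index];

// Only the rows inside (and slightly around) the viewport are mounted
const Row = ({ index, style }: { index: number; style: React.CSSProperties }) => (
  <div style={style}>Row {index}</div>
);

export const HugeList = () => (
  <VariableSizeList
    height={400}
    width={300}
    itemCount={rowHeights.length}
    itemSize={getItemSize}
  >
    {Row}
  </VariableSizeList>
);
```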
Delay and debounce rendering: delaying and debouncing renders is a common way to cut unnecessary re-renders when data changes very frequently. Some web apps process and render complex data arriving at high speed over WebSockets or HTTP long polling — think of an analytics platform streaming real-time updates to the frontend many times per second. Libraries like React and Angular are not built to re-render a complex DOM tree at that rate, and humans cannot perceive changes at such rapid intervals anyway. Debouncing is familiar from search inputs, where instead of firing an API request on every keystroke you wait until the user pauses typing; the same idea works for rendering. For a burst of WebSocket messages feeding a line graph, debounced rendering has three main parts: a local buffer (for example a useRef) that holds the frequently changing data outside of the React or Angular state; a WebSocket listener that parses and transforms incoming messages and pushes them into the buffer without triggering a render; and a debounced function that, once the messages go quiet for a short interval, flushes the buffer into component state and triggers a single re-render (see the sketch after this paragraph). The buffering and flushing logic can be adapted to the shape of your data, and ready-made debounce functions exist in RxJS, lodash, and custom React hooks such as useDebounce. What's the catch? Only debounce components that re-render or do expensive work frequently; buffering real-time data means maintaining two sources of truth, which can introduce race-condition bugs if done carelessly; and only debounce when you know there will be breathing time to flush the buffer, otherwise the component never re-renders and the buffer grows indefinitely.
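A high-level sketch of that buffer-and-flush pattern, assuming a plain WebSocket plus lodash's debounce; the endpoint, message shape, and 300 ms window are placeholders:

```tsx
import React, { useCallback, useEffect, useRef, useState } from "react";
import debounce from "lodash/debounce";

type DataPoint = { t: number; value: number };

// Illustrative placeholder endpoint
const ENDPOINT = "wss://example.com/stream";

export const FrequentlyUpdatingGraph = () => {
  const [dataPoints, setDataPoints] = useState<DataPoint[]>([]);
  const buffer = useRef<DataPoint[]>([]);

  // Flush the buffer into state only after messages go quiet for 300 ms
  const flushBuffer = useCallback(
    debounce(() => {
      setDataPoints((points) => points.concat(buffer.current));
      buffer.current = [];
    }, 300),
    []
  );

  useEffect(() => {
    const ws = new WebSocket(ENDPOINT);
    ws.onmessage = (event) => {
      buffer.current.push(JSON.parse(event.data)); // no re-render here
      flushBuffer(); // re-render only when the burst pauses
    };
    return () => ws.close();
  }, [flushBuffer]);

  return <pre>{dataPoints.length} points rendered</pre>;
};
```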
Thinking outside the box: sometimes no amount of in-codebase optimization is enough, and the performance problem stops being a UX nuisance and becomes a bottleneck for the solution your web app provides — apps like Figma and Google Docs are not just DOM elements; they step outside the native approach to serve users better. At that point it is less about fixing a performance bug and more about adding an innovative feature. Offloading to web workers: JavaScript is single-threaded and schedules work through the event loop, so it can only run one task at a time, while the OS and the browser have access to as many threads as the CPU provides (which is how multiple tabs run in parallel). Web workers give you access to another thread: when a huge image-processing or mathematical operation would otherwise fill the main thread and block event listeners, rendering, and painting, you offload it to a worker and get the result back asynchronously. The worker API is simple — postMessage data to the worker, do the heavy work in the worker's onmessage handler, and postMessage the results back to listeners on the main thread — and Google's comlink library makes the message passing even easier. Note that workers run in a separate context: globals and local variables from your main codebase are not available inside the worker file, so you need appropriate bundling (for example webpack's worker-loader) to share code between the worker and the main bundle. To integrate workers with React's useReducer, the use-workerized-reducer package lets you run heavy state processing in a worker and drive the component lifecycle from the worker's results. What's the catch? Workers need bundler support because of the separate context, and they cannot manipulate the DOM or apply styles to the page.
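A minimal main-thread/worker sketch of that postMessage round trip; the file names, bundler-style worker URL, and message shape are illustrative:

```typescript
// main.ts — offload a heavy computation to a separate thread
const worker = new Worker(new URL("./worker.ts", import.meta.url));

worker.postMessage({ numbers: [1, 2, 3, 4, 5] });
worker.onmessage = (event: MessageEvent<number>) => {
  console.log("sum from worker:", event.data);
};

// worker.ts — runs off the main thread; cannot touch the DOM
self.onmessage = (event: MessageEvent<{ numbers: number[] }>) => {
  const sum = event.data.numbers.reduce((acc, n) => acc + n, 0);
  self.postMessage(sum); // reply to the main thread
};
```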
Offloading to canvas: this is essentially a hack for cases where WebSocket messages arrive so fast that there is no breathing time and debouncing cannot help — trading and crypto platforms are the typical example. Coinbase solves it elegantly by placing a canvas in the middle of a reactive DOM UI: the rapidly updating table is really one long canvas that handles all the heavy lifting of text rendering and alignment, while a DOM element overlaid on top provides the hover highlight, so it performs well under rapid data changes yet looks seamless next to the native UI. Offloading to canvas is common for highly dynamic content such as rich text editing, infinite dynamic grids, and rapidly updating data; Google adopted canvas as the main rendering pipeline in Google Docs and Sheets precisely to get more control over primitive APIs and, above all, over performance. What's the catch? Using a canvas inside a reactive DOM tree means reinventing the wheel for basics like text alignment and fonts, and native DOM accessibility is lost for canvas-rendered text. Offloading to the GPU (GPGPU, experimental): this is where the write-up gets experimental, and there is little chance you would use this on a real project. Training a neural network, batch-processing hundreds of images in parallel, or crunching a stream of numbers could run in a web worker, but a CPU has few threads and cores — great for low-latency sequential work, poor at massively parallel work. GPUs have thousands of cores built for exactly that, which is why games and video encoding/decoding run per-pixel work on the GPU at high frame rates; doing the same on the CPU would be slow and would hog it, blocking other OS jobs. The hard part is interfacing GPU shaders (GLSL) with the JavaScript environment: GPUs expect data shaped like textures and images, so even trivial calculations require somewhat hacky techniques to upload inputs to and download results from the GPU. Using the GPU for these non-specialized, CPU-style calculations is called GPGPU (general-purpose GPU) computing, and libraries such as GPU.js wrap the plumbing: you write a kernel function in which each GPU thread computes one output cell, addressed via this.thread.x and this.thread.y.
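A sketch of such a GPU.js matrix-multiplication kernel, reconstructed from the excerpt's flattened example; the 512×512 size is illustrative since the original numbers were stripped:

```typescript
import { GPU, IKernelFunctionThis } from "gpu.js";

const SIZE = 512; // illustrative size

// Generate a SIZE x SIZE matrix of random numbers
const generateMatrix = () =>
  Array.from({ length: SIZE }, () =>
    Array.from({ length: SIZE }, () => Math.random())
  );

const gpu = new GPU();

// Each GPU thread computes one output cell (this.thread.x / this.thread.y)
const multiplyMatrix = gpu
  .createKernel(function (this: IKernelFunctionThis, a: number[][], b: number[][]) {
    let sum = 0;
    for (let i = 0; i < 512; i++) {
      sum += a[this.thread.y][i] * b[i][this.thread.x];
    }
    return sum;
  })
  .setOutput([SIZE, SIZE]);

const result = multiplyMatrix(generateMatrix(), generateMatrix());
```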
In real-world measurements with GPU.js, CPU and GPU timings look similar for small matrices, but past a certain matrix size the CPU's compute time grows steeply while the GPU keeps up comfortably. What's the catch? GPU-based general computation has to be done with care: GPUs have configurable floating-point precision and low-level arithmetic quirks; do not reach for GPGPU as a fallback for every heavy calculation, since it only pays off when the work is highly parallel; and GPUs remain best suited to image-based processing and rendering. Why this very long write-up? It is without doubt the longest blog I have written, a culmination of raw experience and learnings from previous projects. Developers tend to work fast on features, push working code, and call it a day, which looks good from a deliverable and management perspective, but it is absolutely necessary to think about the end user's situation while building a feature — the kind of device they use and how frequently they interact. I learned most of my web development on a low-RAM laptop with a Pentium processor, so I know the pain. There is no single right way to measure performance, attach a deadline to a performance fix, or quantify everything beforehand; it is a continuous process that requires reconnaissance. Although it is hard to put a performance budget on every feature in a fast-moving project, think about how each addition will affect your application in the long run and document it; it is the individual developer's responsibility to think big and try to write performant code from the ground up. Ciao — if you want to get in touch for a chat, you can follow me on Twitter. |
2022-05-16 10:51:12 |
Overseas TECH |
DEV Community |
How to fetch NFT collection using JavaScript and OpenSea API |
https://dev.to/codegino/how-to-fetch-nft-collection-using-javascript-and-opensea-api-2ij8
|
How to fetch an NFT collection using JavaScript and the OpenSea API. TL;DR: the code and the resulting list of NFTs are linked from the author's site. Non-fungible tokens (NFTs) are a way to represent anything unique as an Ethereum-based asset, powered by smart contracts on the Ethereum blockchain and giving content creators more power than ever; OpenSea is the world's first and largest NFT marketplace, used to discover, collect, and sell extraordinary NFTs. This article shows how to fetch all of your collections through OpenSea's API and display them on a website. Prerequisite: request an OpenSea API key. The flow is: create a reusable request-options object that carries the API key (and keep the key out of the browser); optionally store the wallet address in a variable; fetch all collections owned by that address; map the response to the structure your site needs (name, slug, contract address, details, owned assets); then, for each collection, fetch the assets owned by the address and keep only the items that have both a name and an image (see the sketch after this entry). Rendering the collection depends on your frontend framework of choice — the author used Next.js — and since the API requests can be slow when made from the browser, show an appropriate content loader; caching the response helps as well. What's next: add functionality for visitors to make offers on and buy the NFT collection. |
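A sketch of that fetch flow in TypeScript, pieced together from the excerpt; the v1 endpoint paths and the [0] index on primary_asset_contracts are assumptions, and the wallet address is a placeholder:

```typescript
// Placeholder wallet address (the original was elided)
const WALLET_ADDRESS = "0x...";

const options: RequestInit = {
  method: "GET",
  headers: {
    Accept: "application/json",
    // Never expose the API key in the browser
    "X-API-KEY": process.env.OPENSEA_API as string,
  },
};

type NftCollection = {
  name: string;
  slug: string;
  contractAddress: string;
  details: string;
  owned: { id: number; img: string; name: string }[];
};

export const fetchOwnCollection = async (): Promise<NftCollection[]> => {
  // 1. Fetch all collections owned by the wallet (assumed v1 endpoint)
  const collectionResponse = await fetch(
    `https://api.opensea.io/api/v1/collections?asset_owner=${WALLET_ADDRESS}`,
    options
  ).then((res) => res.json());

  // 2. Map the raw response to the shape the site needs
  const collections: NftCollection[] = collectionResponse.map((item: any) => ({
    details: item.description,
    slug: item.slug,
    name: item.name,
    contractAddress: item.primary_asset_contracts?.[0]?.address,
    owned: [],
  }));

  // 3. For each collection, fetch the assets owned by the wallet
  for (const collection of collections) {
    const assetsResponse = await fetch(
      `https://api.opensea.io/api/v1/assets?owner=${WALLET_ADDRESS}` +
        `&asset_contract_address=${collection.contractAddress}&include_orders=false`,
      options
    ).then((res) => res.json());

    collection.owned = assetsResponse.assets
      .map((item: any) => ({
        name: item.name,
        img: item.image_url,
        id: item.token_id,
      }))
      .filter((item: any) => item.name && item.img);
  }

  return collections;
};
```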
2022-05-16 10:49:14 |
Apple |
AppleInsider - Frontpage News |
Daily deals May 15: $800 iPad Pro, $100 Beats Fit Pro, $800 75-inch Toshiba Smart TV, more |
https://appleinsider.com/articles/22/05/15/daily-deals-may-15-800-ipad-pro-100-beats-fit-pro-800-75-inch-toshiba-smart-tv-more?utm_medium=rss
|
Sunday's best deals include discounts on the iPad Pro, M.2 storage from XPG, a SanDisk wireless charger, and much more, alongside a 75-inch Toshiba smart TV and Beats Fit Pro. Every day, AppleInsider searches online stores for deals and offers on Apple products, smartphones, smart TVs, appliances, and many other devices and accessories, compiling the best into a daily roundup. If an item is out of stock, you may still be able to order it for later delivery. Amazon discounts are generally available for a limited time, so act fast if you want something. |
2022-05-16 10:12:33 |
Overseas TECH |
CodeProject Latest Articles |
Morestachio. More Mustachio, Less Mustache - Extending the Mustachio Syntax |
https://www.codeproject.com/Articles/1261021/Morestachio-More-Mustachio-Less-Mustache-Extending
|
formatter |
2022-05-16 10:50:00 |
Overseas TECH |
CodeProject Latest Articles |
Integrating TinyMCE Editor (WYSIWYG) to an ASP.NET MVC Application |
https://www.codeproject.com/Tips/5331999/Integrating-TinyMCE-Editor-WYSIWYG-to-an-ASP-NET-M
|
application, integrating |
2022-05-16 10:37:00 |
Medical |
Medical & Nursing Care CBnews |
Set KPIs to realize a drug-discovery innovation ecosystem, PhRMA proposes, voicing strong concern over a return of the drug lag |
https://www.cbnews.jp/news/entry/20220516193739
|
phrma |
2022-05-16 19:55:00 |
Overseas News |
Japan Times latest articles |
Hideki Matsuyama finishes third at AT&T Byron Nelson |
https://www.japantimes.co.jp/sports/2022/05/16/more-sports/golf/matsuyama-finishes-third/
|
Matsuyama, who finished alongside Colombian Sebastian Munoz, has struggled with a shoulder injury since March and returned after four weeks of rest. |
2022-05-16 19:28:59 |
Overseas News |
Japan Times latest articles |
Terunofuji beats Tobizaru to remain one back |
https://www.japantimes.co.jp/sports/2022/05/16/sumo/basho-reports/terunofuji-keeps-pace/
|
Sumo's lone yokozuna, Terunofuji, kept pace with the lead pack at the Summer Grand Sumo Tournament by overpowering rank-and-file opponent Tobizaru on Monday. |
2022-05-16 19:23:57 |
Overseas News |
Japan Times latest articles |
Drama and exhibition portray the struggles and successes of Japanese women who moved to the U.K. |
https://www.japantimes.co.jp/culture/2022/05/16/stage/drama-exhibition-portray-struggles-successes-japanese-women-moved-u-k/
|
The stories document the interviewees' changing relationships with both Britain and Japan, and unveil the shifting views of Japan in Britain over many decades. |
2022-05-16 19:10:10 |
News |
BBC News - Home |
McDonald's to leave Russia for good after 30 years |
https://www.bbc.co.uk/news/business-61463876?at_medium=RSS&at_campaign=KARANGA
|
values |
2022-05-16 10:41:36 |
News |
BBC News - Home |
Northern Ireland: PM poised for protocol change ahead of crisis talks |
https://www.bbc.co.uk/news/uk-northern-ireland-61456677?at_medium=RSS&at_campaign=KARANGA
|
crisis |
2022-05-16 10:45:13 |
News |
BBC News - Home |
Diesel prices reach record of over £1.80 a litre |
https://www.bbc.co.uk/news/business-61463280?at_medium=RSS&at_campaign=KARANGA
|
march |
2022-05-16 10:30:15 |
News |
BBC News - Home |
Buffalo shooting: Supermarket attack victims' names released |
https://www.bbc.co.uk/news/world-us-canada-61462939?at_medium=RSS&at_campaign=KARANGA
|
police |
2022-05-16 10:27:42 |
News |
BBC News - Home |
Joseph Wright of Derby painting saved for nation |
https://www.bbc.co.uk/news/uk-england-derbyshire-61437488?at_medium=RSS&at_campaign=KARANGA
|
display |
2022-05-16 10:30:55 |
News |
BBC News - Home |
Boris Johnson faces knotty diplomatic challenge in Brexit deal talks |
https://www.bbc.co.uk/news/uk-politics-61463207?at_medium=RSS&at_campaign=KARANGA
|
ireland |
2022-05-16 10:16:57 |
News |
BBC News - Home |
'I had to apply to five banks before I got a loan' |
https://www.bbc.co.uk/news/business-61460002?at_medium=RSS&at_campaign=KARANGA
|
businesses |
2022-05-16 10:02:24 |
News |
BBC News - Home |
Tiger Woods at US PGA: American feeling 'stronger' than at Masters |
https://www.bbc.co.uk/sport/golf/61464757?at_medium=RSS&at_campaign=KARANGA
|
Tiger Woods returns to Southern Hills, where he won the US PGA Championship, feeling stronger than during last month's Masters. |
2022-05-16 10:02:52 |
Subculture |
Ra-blo (ramen blog) |
Kamo LABO Sumiyoshi branch @ Sumiyoshi |
http://ra-blog.net/modules/rssc/single_feed.php?fid=199187
|
Tokyo expansion |
2022-05-16 11:00:59 |
Hokkaido |
Hokkaido Shimbun |
LDP drafts governance code outline: compliance training for its lawmakers, careful explanations required of lawmakers involved in problems |
https://www.hokkaido-np.co.jp/article/681441/
|
funds management |
2022-05-16 19:18:00 |
Hokkaido |
Hokkaido Shimbun |
Shiretoko tour-boat accident: water rescue society revises drift forecast, expanding the area from southern Etorofu to near Shibetsu |
https://www.hokkaido-np.co.jp/article/681412/
|
Shiretoko Peninsula |
2022-05-16 19:14:02 |
Hokkaido |
Hokkaido Shimbun |
108 arrivals from overseas found infected; 29 have symptoms |
https://www.hokkaido-np.co.jp/article/681437/
|
Ministry of Health, Labour and Welfare |
2022-05-16 19:12:00 |
Hokkaido |
Hokkaido Shimbun |
Prime minister all smiles over gifted Ibaraki melons: "Juicy — they perked me up" |
https://www.hokkaido-np.co.jp/article/681435/
|
Hokota, Ibaraki Prefecture |
2022-05-16 19:04:00 |
Hokkaido |
Hokkaido Shimbun |
Eurozone 2022 GDP growth forecast cut by 1.3 points to 2.7% |
https://www.hokkaido-np.co.jp/article/681434/
|
downward revision |
2022-05-16 19:04:00 |
Hokkaido |
Hokkaido Shimbun |
1,640 new infections and one death in Kanagawa |
https://www.hokkaido-np.co.jp/article/681433/
|
novel coronavirus |
2022-05-16 19:04:00 |
Hokkaido |
Hokkaido Shimbun |
14 films of air raids just before the Battle of Okinawa released; Oita group obtained them from the U.S. National Archives |
https://www.hokkaido-np.co.jp/article/681432/
|
National Archives |
2022-05-16 19:04:00 |
Hokkaido |
Hokkaido Shimbun |
Arcs establishes a new Sustainability Promotion Office |
https://www.hokkaido-np.co.jp/article/681431/
|
food supermarket |
2022-05-16 19:02:00 |
IT |
Weekly ASCII |
Kendrick Lamar's new Spatial Audio-compatible album "Mr. Morale & The Big Steppers" sets a new record for first-day streams on Apple Music |
https://weekly.ascii.jp/elem/000/004/091/4091611/
|
Apple Music announced that Kendrick Lamar's new album "Mr. Morale & The Big Steppers", available in Spatial Audio with Dolby Atmos, set a new record on Apple Music for the most streams on an album's first day of release. |
2022-05-16 19:30:00 |
IT |
Weekly ASCII |
Bikkuri Donkey now selling "Melty", an over-the-top take on its "Cheese Burg Dish" |
https://weekly.ascii.jp/elem/000/004/091/4091614/
|
welovecheeeese |
2022-05-16 19:30:00 |
IT |
Weekly ASCII |
Seiya Suzuki, who moved overseas, is included too! Player data updates coming to "Pawapuro 2022" and "ProSpi 2021" on May 19 |
https://weekly.ascii.jp/elem/000/004/091/4091612/
|
ebase |
2022-05-16 19:20:10 |
Marketing |
AdverTimes |
Advertising industry posts ¥1.6 trillion for January-March 2022, still short of pre-pandemic levels |
https://www.advertimes.com/20220516/article383933/
|
advertising industry |
2022-05-16 10:40:31 |
Overseas TECH |
reddit |
HEIZOU |
https://www.reddit.com/r/Genshin_Impact_Leaks/comments/uqsdfo/heizou/
|
Submitted by u/momercek to r/Genshin_Impact_Leaks |
2022-05-16 10:01:46 |